- **Description:** Changing the key from `response` to
`structured_response` for the middleware agent, to keep it in sync with
the agent without middleware. This is a breaking change.
- **Issue:** #33154
Porting the [planning
middleware](39c0138d0f/src/deepagents/middleware.py (L21))
over from deepagents.
Also adding the ability to configure:
* System prompt
* Tool description
```py
from langchain.agents.middleware.planning import PlanningMiddleware
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage

agent = create_agent("openai:gpt-4o", middleware=[PlanningMiddleware()])
result = await agent.ainvoke({"messages": [HumanMessage("Help me refactor my codebase")]})
print(result["todos"])  # list of todo items with status tracking
```
Multiple improvements to HITL flow:
* On a `response` type resume, we should still append the tool call to
the last AIMessage (otherwise we have a tool result without a
corresponding tool call); a sketch of this resume follows below
* When all interrupts have `response` types (so there's no pending tool
calls), we should jump back to the first node (instead of end) as we
enforced in the previous `post_model_hook_router`
* Added comments to the `model_to_tools` router to clarify all of the
potential exit conditions
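A hedged sketch of what a `response`-type resume looks like in practice (the resume payload shape is assumed from the `HumanInterrupt` design and may differ):
```py
from langgraph.types import Command

# Resume the interrupted run with a text response instead of executing the tool.
# With this fix, the middleware keeps the original tool call on the last
# AIMessage, so the ToolMessage carrying this response has a matching ToolCall.
agent.invoke(
    Command(resume=[{"type": "response", "args": "Don't run it; use staging instead."}]),
    config={"configurable": {"thread_id": "1"}},
)
```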
Additionally:
* Lockfile update to use latest LG alpha release
* Added a test for `jump_to` behaving ephemerally; this was fixed in LG
but had surfaced as a bug w/ `jump_to`.
* Bump version to v1.0.0a10 to prep for alpha release
---------
Co-authored-by: Sydney Runkle <sydneymarierunkle@gmail.com>
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
Remove redundant/outdated `@pytest.mark.requires("jinja2")` decorator
Pytest marks (like `@pytest.mark.requires(...)`) applied directly to
fixtures have no effect and are deprecated.
Excluded pydantic_v1 module from import testing
Acceptable since pydantic_v1 is explicitly deprecated. Testing its
importability at this stage serves little purpose, since users should
migrate away from it.
## Summary
Adds test coverage for the `stringify_value` utility function to handle
complex nested data structures that weren't previously tested.
## Changes
- Added `test_stringify_value_nested_structures()` to `test_strings.py`
- Tests nested dictionaries within lists
- Tests mixed-type lists with various data types
- Verifies proper stringification of complex nested structures
## Why This Matters
- Fills a gap in test coverage for edge cases
- Ensures `stringify_value` handles complex data structures correctly
- Improves confidence in string utility functions used throughout the
codebase
- Low risk addition that strengthens existing test suite
## Testing
```bash
uv run --group test pytest libs/core/tests/unit_tests/utils/test_strings.py::test_stringify_value_nested_structures -v
```
This test addition follows the project's testing patterns and adds
meaningful coverage without introducing any breaking changes.
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Enhance the pull request workflows by updating the `pull_request_target`
types and ensuring safety by avoiding checkout of the PR's head. Update
the action to use a specific commit from the archived repository.
**Description:** Right now, we interrupt even if the provided ToolConfig
has all false values. We should ignore ToolConfigs that do not have at
least one value marked as true (just as we would if `tool_name: False`
was passed into the dict). An illustration follows below.
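For illustration, a hedged sketch of the intended equivalence (the per-tool config keys `allow_accept` / `allow_edit` / `allow_respond` are assumptions; the exact schema may differ):
```py
# A config with all-false values...
HumanInTheLoopMiddleware(
    tool_configs={
        "my_tool": {"allow_accept": False, "allow_edit": False, "allow_respond": False}
    }
)
# ...is now treated the same as explicitly disabling interrupts for the tool:
HumanInTheLoopMiddleware(tool_configs={"my_tool": False})
```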
# Main Changes
1. Adding decorator utilities for dynamically defining middleware with
single hook functions (see an example below for dynamic system prompt)
2. Adding better conditional edge drawing with jump configuration
attached to middleware. Can be registered w/ the new decorator!
## Decorator Utilities
```py
from langchain.agents.middleware_agent import create_agent, AgentState, ModelRequest
from langchain.agents.middleware.types import modify_model_request
from langgraph.checkpoint.memory import InMemorySaver
@modify_model_request
def modify_system_prompt(request: ModelRequest, state: AgentState) -> ModelRequest:
request.system_prompt = (
"You are a helpful assistant."
f"Please record the number of previous messages in your response: {len(state['messages'])}"
)
return request
agent = create_agent(
model="openai:gpt-4o-mini",
middleware=[modify_system_prompt]
).compile(checkpointer=InMemorySaver())
```
## Visualization and Routing improvements
We now require that middlewares define the valid jumps for each hook.
If using the new decorator syntax, this can be done with:
```py
@before_model(jump_to=["__end__"])
@after_model(jump_to=["tools", "__end__"])
```
If using the subclassing syntax, you can use these two class vars:
```py
class MyMiddleware(AgentMiddleware):
before_model_jump_to = ["__end__"]
after_model_jump_to = ["tools", "__end__"]
```
Open for debate whether we want to bundle these in a single jump map /
config for a middleware. Easy to migrate later if we decide to add more hooks.
We will need to **really clearly document** that these must be
explicitly set in order to enable conditional edges.
Notice for the below case, `Middleware2` does actually enable jumps.
<table>
<thead>
<tr>
<th>Before (broken), adding conditional edges unconditionally</th>
<th>After (fixed), adding conditional edges sparingly</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<img width="619" height="508" alt="Screenshot 2025-09-23 at 10 23 23 AM"
src="https://github.com/user-attachments/assets/bba2d098-a839-4335-8e8c-b50dd8090959"
/>
</td>
<td>
<img width="469" height="490" alt="Screenshot 2025-09-23 at 10 23 13 AM"
src="https://github.com/user-attachments/assets/717abf0b-fc73-4d5f-9313-b81247d8fe26"
/>
</td>
</tr>
</tbody>
</table>
<details>
<summary>Snippet for the above</summary>
```py
from langgraph.runtime import Runtime
from langchain.agents.middleware.types import AgentMiddleware, AgentState
from langchain.agents.middleware_agent import create_agent
from langchain_core.tools import tool
@tool
def simple_tool(input: str) -> str:
"""A simple tool."""
return "successful tool call"
class Middleware1(AgentMiddleware):
"""Custom middleware that adds a simple tool."""
tools = [simple_tool]
def before_model(self, state: AgentState, runtime: Runtime) -> None:
return None
def after_model(self, state: AgentState, runtime: Runtime) -> None:
return None
class Middleware2(AgentMiddleware):
before_model_jump_to = ["tools", "__end__"]
def before_model(self, state: AgentState, runtime: Runtime) -> None:
return None
def after_model(self, state: AgentState, runtime: Runtime) -> None:
return None
class Middleware3(AgentMiddleware):
def before_model(self, state: AgentState, runtime: Runtime) -> None:
return None
def after_model(self, state: AgentState, runtime: Runtime) -> None:
return None
builder = create_agent(
model="openai:gpt-4o-mini",
middleware=[Middleware1(), Middleware2(), Middleware3()],
system_prompt="You are a helpful assistant.",
)
agent = builder.compile()
```
</details>
## More Examples
### Guardrails `after_model`
<img width="379" height="335" alt="Screenshot 2025-09-23 at 10 40 09 AM"
src="https://github.com/user-attachments/assets/45bac7dd-398e-45d1-ae58-6ecfa27dfc87"
/>
<details>
<summary>Code</summary>
```py
from langchain.agents.middleware_agent import create_agent, AgentState
from langchain.agents.middleware.types import after_model
from langchain_core.messages import HumanMessage, AIMessage
from typing import cast, Any
@after_model(jump_to=["model", "__end__"])
def after_model_hook(state: AgentState) -> dict[str, Any]:
"""Check the last AI message for safety violations."""
last_message_content = cast(AIMessage, state["messages"][-1]).content.lower()
print(last_message_content)
unsafe_keywords = ["pineapple"]
if any(keyword in last_message_content for keyword in unsafe_keywords):
# Jump back to model to regenerate response
return {"jump_to": "model", "messages": [HumanMessage("Please regenerate your response, and don't talk about pineapples. You can talk about apples instead.")]}
return {"jump_to": "__end__"}
# Create agent with guardrails middleware
agent = create_agent(
model="openai:gpt-4o-mini",
middleware=[after_model_hook],
system_prompt="Keep your responses to one sentence please!"
).compile()
# Test with potentially unsafe input
result = agent.invoke(
{"messages": [HumanMessage("Tell me something about pineapples")]},
)
for msg in result["messages"]:
    msg.pretty_print()
"""
================================ Human Message =================================
Tell me something about pineapples
================================== Ai Message ==================================
Pineapples are tropical fruits known for their sweet, tangy flavor and distinctive spiky exterior.
================================ Human Message =================================
Please regenerate your response, and don't talk about pineapples. You can talk about apples instead.
================================== Ai Message ==================================
Apples are popular fruits that come in various varieties, known for their crisp texture and sweetness, and are often used in cooking and baking.
"""
```
</details>
Mostly adding descriptive frontmatter to workflow files. Also addresses
some formatting issues and outdated artifacts.
No functional changes outside of
[d5457c3](d5457c39ee),
[90708a0](90708a0d99),
and
[338c82d](338c82d21e)
The file-based and title-based labeler workflows were conflicting,
causing the bot to add and remove identical labels in the same
operation. Hopefully this fixes that.
- Removes Codespell from deps, docs, and `Makefile`s
- Python version requirements in all `pyproject.toml` files now use the
`~=` (compatible release) specifier
- All dependency groups and main dependencies now use explicit lower and
upper bounds, reducing potential for breaking changes
We want the state schema as the input schema to middleware nodes because
the conditional edges after these nodes need access to the full state.
Also, we generally want all state passed to middleware nodes, so we
should specify this explicitly. If we don't, the state annotations
used by users in their node signatures are used (so they might be
missing fields).
# Changes
## Adds support for `DynamicSystemPromptMiddleware`
```py
from langchain.agents.middleware import DynamicSystemPromptMiddleware
from langchain.agents.middleware.types import AgentState
from langgraph.runtime import Runtime
from typing_extensions import TypedDict
class Context(TypedDict):
user_name: str
def system_prompt(state: AgentState, runtime: Runtime[Context]) -> str:
user_name = runtime.context.get("user_name", "n/a")
return f"You are a helpful assistant. Always address the user by their name: {user_name}"
middleware = DynamicSystemPromptMiddleware(system_prompt)
```
## Adds support for `runtime` in middleware hooks
```py
class AgentMiddleware(Generic[StateT, ContextT]):
    def modify_model_request(
        self,
        request: ModelRequest,
        state: StateT,
        runtime: Runtime[ContextT],  # optional runtime parameter
    ) -> ModelRequest:
        # e.g. upgrade the model if runtime.context.subscription is "top-tier"
        ...
```
## Adds support for omitting state attributes from input / output
schemas
```py
from typing import Annotated, NotRequired
from langchain.agents.middleware.types import (
    AgentState,
    PrivateStateAttr,
    OmitFromInput,
    OmitFromOutput,
)

class CustomState(AgentState):
    # Private field - not in input or output schemas
    internal_counter: NotRequired[Annotated[int, PrivateStateAttr]]

    # Input-only field - not in output schema
    user_input: NotRequired[Annotated[str, OmitFromOutput]]

    # Output-only field - not in input schema
    computed_result: NotRequired[Annotated[str, OmitFromInput]]
```
## Additionally
* Removes filtering of state before passing into middleware hooks
Typing is not foolproof here; still need to figure out some of the
generics stuff w/ state and context schema extensions for middleware.
TODO:
* More docs for middleware, should hold off on this until other prios
like MCP and deepagents are met
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
## Summary
This PR fixes several bugs and improves the example code in
`BaseChatMessageHistory` docstring that would prevent it from working
correctly.
### Bugs Fixed
- **Critical bug**: Fixed `json.dump(messages, f)` →
`json.dump(serialized, f)` - was using wrong variable
- **NameError**: Fixed bare variable references to use
`self.storage_path` and `self.session_id`
- **Missing imports**: Added required imports (`json`, `os`, message
converters) to make example runnable
### Improvements
- Added missing type hints following project standards (`messages() ->
list[BaseMessage]`, `clear() -> None`)
- Added robust error handling with `FileNotFoundError` exception
handling
- Added directory creation with `os.makedirs(exist_ok=True)` to prevent
path errors
- Improved performance: `json.load(f)` instead of `json.loads(f.read())`
- Added explicit UTF-8 encoding to all file operations
- Updated stores.py to use modern union syntax (`int | None` vs
`Optional[int]`)
### Test Plan
- [x] Code passes linting (`ruff check`)
- [x] Example code now has all required imports and proper syntax
- [x] Fixed variable references prevent runtime errors
- [x] Follows project's type annotation standards
The example code in the docstring is now fully functional and follows
LangChain's coding standards.
---------
Co-authored-by: sadiqkhzn <sadiqkhzn@users.noreply.github.com>
- **Description:** Updated the dead/unreachable links to Docling from
the additional resources section of the langchain-docling docs
- **Issue:** Fixes langchain-ai/docs#574
- **Dependencies:** None
# Main changes / new features
## Better support for parallel tool calls
1. Support for multiple tool calls requiring human input
2. Support for combination of tool calls requiring human input + those
that are auto-approved
3. Support structured output w/ tool calls requiring human input
4. Support structured output w/ standard tool calls
## Shortcut for allowed actions
Adds a shortcut where tool config can be specified as a `bool`, meaning
"all actions allowed"
```py
HumanInTheLoopMiddleware(tool_configs={"expensive_tool": True})
```
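A hedged example mixing the shortcut with an explicit per-tool config (the `allow_*` keys are assumptions based on the interrupt config; exact names may differ):
```py
HumanInTheLoopMiddleware(
    tool_configs={
        "expensive_tool": True,  # shortcut: all actions allowed
        "dangerous_tool": {"allow_accept": True, "allow_respond": True},
    }
)
```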
## A few design decisions here
* We only raise one interrupt w/ all `HumanInterrupt`s; currently we
won't be able to execute any tools until all of these are resolved. This
isn't super blocking bc we can't re-invoke the model until all tools
have finished execution. That being said, if you have a long-running
auto-approved tool, this could slow things down.
## TODOs
* Ideally, we would rename `accept` -> `approve`
* Ideally, we would rename `respond` -> `reject`
* Docs update (@sydney-runkle to own)
* In another PR I'd like to refactor testing to have one file for each
prebuilt middleware :)
Fast follow to https://github.com/langchain-ai/langchain/pull/32962,
which was deemed too breaking.
Adds documentation for the langchain-scraperapi integration, which
contains 3 tools using the ScraperAPI service.
The tools give AI agents the ability to:
- Scrape the web and return HTML/text/markdown
- Perform a Google search and return JSON output
- Perform an Amazon search and return JSON output
For reference, here is the official repo for langchain_scraperapi:
https://github.com/scraperapi/langchain-scraperapi
Replaced the `input_message` parameter with a directly passed tuple, e.g.
`{"messages": [("user", "What is my name?")]}`.
Before, memory wasn't working with the agent when using the
`input_message` parameter format.
Specifically, on the page [Build an
Agent#adding-in-memory](https://python.langchain.com/docs/tutorials/agents/#adding-in-memory),
the query "What's my name?" wasn't working in the previous code, as the
agent could not recall memory correctly. A sketch of the updated
invocation follows the screenshot below.
<img width="860" height="679" alt="image"
src="https://github.com/user-attachments/assets/dfbca21e-ffe9-4645-a810-3be7a46d81d5"
/>
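A hedged sketch of the updated invocation (the `agent_executor` and `config` names are assumed from the tutorial's surrounding cells):
```python
config = {"configurable": {"thread_id": "abc123"}}

for step in agent_executor.stream(
    {"messages": [("user", "What is my name?")]}, config, stream_mode="values"
):
    step["messages"][-1].pretty_print()
```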
This PR improves navigation in the summarization how-to section by
adding
cross-links from the single-call guide to the related map-reduce and
refine
guides. This mirrors the docs style guide’s emphasis on clear
cross-references
and should help readers discover the appropriate pattern for longer
texts.
- Source edited: docs/docs/how_to/summarize_stuff.ipynb
- Links added:
- /docs/how_to/summarize_map_reduce/
- /docs/how_to/summarize_refine/
Type: docs-only (no code changes)
Description:
Add a docstring to _load_map_reduce_chain in chains/summarize/ to
explain the purpose of the prompt argument and document function
parameters. This addresses an existing TODO in the codebase.
Issue:
N/A (documentation improvement only)
Dependencies:
None
**Description:**
Add a docstring to `_load_stuff_chain` in `chains/summarize/` to explain
the purpose of the `prompt` argument and document function parameters.
This addresses an existing TODO in the codebase.
**Issue:**
N/A (documentation improvement only)
**Dependencies:**
None
Bumps [CodSpeedHQ/action](https://github.com/codspeedhq/action) from 3
to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/codspeedhq/action/releases">CodSpeedHQ/action's
releases</a>.</em></p>
<blockquote>
<h2>v4.0.0</h2>
<h2>💥 BREAKING</h2>
<p>It's now required to explicitly set the runner mode to
<code>instrumentation</code> or <code>walltime</code> using either:</p>
<ul>
<li>the <code>mode</code> argument</li>
<li>or the <code>CODSPEED_RUNNER_MODE</code> environment variable</li>
</ul>
<blockquote>
<p>[!TIP]
Before, this variable was automatically set to
<code>instrumentation</code> on every runner except for <a
href="https://codspeed.io/docs/instruments/walltime">CodSpeed macro
runners</a> where it was set to <code>walltime</code> by default.</p>
</blockquote>
<p>Find more details in <a
href="https://codspeed.io/docs/instruments">the instruments
documentation</a>.</p>
<h2>Details</h2>
<h3>🚀 Features</h3>
<ul>
<li>Make perf profiling enabled by default by <a
href="https://github.com/GuillaumeLagrange"><code>@GuillaumeLagrange</code></a>
in <a
href="https://redirect.github.com/CodSpeedHQ/runner/pull/110">#110</a></li>
<li>Make the runner mode argument required by <a
href="https://github.com/GuillaumeLagrange"><code>@GuillaumeLagrange</code></a></li>
<li>Use introspected node in walltime mode by <a
href="https://github.com/GuillaumeLagrange"><code>@GuillaumeLagrange</code></a>
in <a
href="https://redirect.github.com/CodSpeedHQ/runner/pull/108">#108</a></li>
<li>Add instrumented go shell script by <a
href="https://github.com/not-matthias"><code>@not-matthias</code></a>
in <a
href="https://redirect.github.com/CodSpeedHQ/runner/pull/102">#102</a></li>
</ul>
<h3>🐛 Bug Fixes</h3>
<ul>
<li>Compute proper load bias by <a
href="https://github.com/not-matthias"><code>@not-matthias</code></a>
in <a
href="https://redirect.github.com/CodSpeedHQ/runner/pull/107">#107</a></li>
<li>Increase timeout for first perf ping by <a
href="https://github.com/GuillaumeLagrange"><code>@GuillaumeLagrange</code></a></li>
<li>Prevent running with valgrind by <a
href="https://github.com/not-matthias"><code>@not-matthias</code></a>
in <a
href="https://redirect.github.com/CodSpeedHQ/runner/pull/106">#106</a></li>
</ul>
<h3>🏗️ Refactor</h3>
<ul>
<li>Change go-runner binary name by <a
href="https://github.com/not-matthias"><code>@not-matthias</code></a>
in <a
href="https://redirect.github.com/CodSpeedHQ/runner/pull/111">#111</a></li>
</ul>
<p><strong>Full Runner Changelog</strong>: <a
href="https://github.com/CodSpeedHQ/runner/blob/main/CHANGELOG.md">https://github.com/CodSpeedHQ/runner/blob/main/CHANGELOG.md</a></p>
<h2>v3.8.1</h2>
<h2>What's Changed</h2>
<h3>🐛 Bug Fixes</h3>
<ul>
<li>Don't show error when libpython is not found by <a
href="https://github.com/not-matthias"><code>@not-matthias</code></a></li>
</ul>
<h3>🏗️ Refactor</h3>
<ul>
<li>Improve conditional compilation in
<code>get_pipe_open_options</code> by <a
href="https://github.com/art049"><code>@art049</code></a> in <a
href="https://redirect.github.com/CodSpeedHQ/runner/pull/100">#100</a></li>
</ul>
<h3>⚙️ Internals</h3>
<ul>
<li>Change log level to warn for venv_compat error by <a
href="https://github.com/not-matthias"><code>@not-matthias</code></a>
in <a
href="https://redirect.github.com/CodSpeedHQ/runner/pull/104">#104</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/CodSpeedHQ/action/compare/v3.8.0...v3.8.1">https://github.com/CodSpeedHQ/action/compare/v3.8.0...v3.8.1</a>
<strong>Full Runner Changelog</strong>: <a
href="https://github.com/CodSpeedHQ/runner/blob/main/CHANGELOG.md">https://github.com/CodSpeedHQ/runner/blob/main/CHANGELOG.md</a></p>
<h2>v3.8.0</h2>
<h2>What's Changed</h2>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="653fdc30e6"><code>653fdc3</code></a>
Release v4.0.1 🚀</li>
<li><a
href="4da7be1bda"><code>4da7be1</code></a>
chore: bump runner version to 4.0.1</li>
<li><a
href="172d6c5630"><code>172d6c5</code></a>
chore: make the comment about input validation more discrete</li>
<li><a
href="d15e1ce813"><code>d15e1ce</code></a>
chore: improve the release script</li>
<li><a
href="6eeb021fd0"><code>6eeb021</code></a>
Release v4.0.0 🚀</li>
<li><a
href="74312dabbe"><code>74312da</code></a>
chore: improve the release script</li>
<li><a
href="8a17a350a8"><code>8a17a35</code></a>
ci: add modes to the matrix</li>
<li><a
href="8e3f02a649"><code>8e3f02a</code></a>
feat: make the mode argument required</li>
<li><a
href="97c7a6f5fc"><code>97c7a6f</code></a>
chore: bump runner version to 4.0.0</li>
<li><a
href="8a4cadd026"><code>8a4cadd</code></a>
chore: point the changelog to the runner</li>
<li>See full diff in <a
href="https://github.com/codspeedhq/action/compare/v3...v4">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
## Description
This PR adds documentation for the new ZeusDB vector store integration
with LangChain.
## Motivation
ZeusDB is a high-performance vector database (Python/Rust backend)
designed for AI applications that need fast similarity search and
real-time vector ops. This integration brings ZeusDB's capabilities to
the LangChain ecosystem, giving developers another production-oriented
option for vector storage and retrieval.
**Key Features:**
- **User-Friendly Python API**: Intuitive interface that integrates
seamlessly with Python ML workflows
- **High Performance**: Powered by a robust Rust backend for
lightning-fast vector operations
- **Enterprise Logging**: Comprehensive logging capabilities for
monitoring and debugging production systems
- **Advanced Features**: Includes product quantization and persistence
capabilities
- **AI-Optimized**: Purpose-built for modern AI applications and RAG
pipelines
## Changes
- Added provider documentation:
`docs/docs/integrations/providers/zeusdb.mdx` (installation, setup).
- Added vector store documentation:
`docs/docs/integrations/vectorstores/zeusdb.ipynb` (quickstart for
creating/querying a ZeusDBVectorStore).
- Registered langchain-zeusdb in `libs/packages.yml` for discovery.
## Target users
- AI/ML engineers building RAG pipelines
- Data scientists working with large document collections
- Developers needing high-throughput vector search
- Teams requiring near real-time vector operations
## Testing
- Followed LangChain's "How to add standard tests to an integration"
guidance.
- Code passes format, lint, and test checks locally.
- Tested with LangChain Core 0.3.74
- Works with Python 3.10 to 3.13
## Package Information
**PyPI:** https://pypi.org/project/langchain-zeusdb
**Github:** https://github.com/ZeusDB/langchain-zeusdb
## Summary
- Add comprehensive type hints to the `MyInMemoryStore` example code in
the `BaseStore` docstring
- Improve documentation quality and educational value for developers
- Align with LangChain's coding standards requiring type hints on all
Python code
## Changes Made
- Added return type annotations to all methods (`__init__`, `mget`, `mset`,
`mdelete`, `yield_keys`)
- Added parameter type annotations using proper generic types (`Sequence`,
`Iterator`)
- Added instance variable type annotation for the store attribute
- Used modern Python union syntax (`str | None`) for optional types; a
condensed sketch follows below
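A condensed, hedged sketch of the annotated example (the real docstring example may differ in detail):
```python
from collections.abc import Iterator, Sequence
from langchain_core.stores import BaseStore

class MyInMemoryStore(BaseStore[str, int]):
    def __init__(self) -> None:
        self.store: dict[str, int] = {}

    def mget(self, keys: Sequence[str]) -> list[int | None]:
        return [self.store.get(key) for key in keys]

    def mset(self, key_value_pairs: Sequence[tuple[str, int]]) -> None:
        for key, value in key_value_pairs:
            self.store[key] = value

    def mdelete(self, keys: Sequence[str]) -> None:
        for key in keys:
            self.store.pop(key, None)

    def yield_keys(self, prefix: str | None = None) -> Iterator[str]:
        for key in self.store:
            if prefix is None or key.startswith(prefix):
                yield key
```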
## Test Plan
- Verified Python syntax validity with ast.parse()
- No functional changes to actual code, only documentation improvements
- Example code now follows best practices and coding standards
This change improves the educational value of the example code and
ensures consistency with LangChain's requirement that "All Python code
MUST include type hints and return types" as specified in the
development guidelines.
---------
Co-authored-by: sadiqkhzn <sadiqkhzn@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
**Description:**
Introduces documentation notebooks for AI/ML API integration covering
the following use cases:
- Chat models (`ChatAimlapi`)
- Text completion models (`AimlapiLLM`)
- Provider usage examples
- Text embedding models (`AimlapiEmbeddings`)
Additionally, adds the `langchain-aimlapi` package entry to
`libs/packages.yml` for package management.
This PR aims to provide a comprehensive starting point for developers
integrating AI/ML API models with LangChain via the new
`langchain-aimlapi` package.
**Issue:** N/A
**Dependencies:** None
**Twitter handle:** @aimlapi
---
### **To-Do Before Submitting PR:**
* [x] Run `make format`
* [x] Run `make lint`
* [x] Confirm all documentation notebooks are in
`docs/docs/integrations/`
* [x] Double-check `libs/packages.yml` has the correct repo path
* [x] Confirm no `pyproject.toml` modifications were made unnecessarily
Co-authored-by: Mason Daugherty <mason@langchain.dev>
**Description:**
This PR updates the free searches per month from **100** to **250** and
renames SerpAPI to [SerpApi](https://serpapi.com/) to prevent confusion.
Add import API keys and enhance usage instructions in the Jupyter
notebook
**Issue:** N/A
**Dependencies:** N/A
**Description:**
This PR updated links to the latest Anthropic documentation. Changes
include revised links for model overview, tool usage, web search tool,
text editor tool, and more.
**Issue:**
N/A
**Dependencies:**
None
**Twitter handle:**
N/A
- **Description:** The `langchain-yugabytedb` package provides
implementations of core LangChain abstractions using the `YugabyteDB`
Distributed SQL Database.
YugabyteDB is a cloud-native distributed PostgreSQL-compatible database
that combines strong consistency with ultra-resilience, seamless
scalability, geo-distribution, and highly flexible data locality to
deliver business-critical, transactional applications.
[YugabyteDB](https://www.yugabyte.com/ai/) combines the power of the
`pgvector` PostgreSQL extension with an inherently distributed
architecture. This future-proofed foundation helps you build GenAI
applications using RAG retrieval that demands high-performance vector
search.
- [ ] **tests and docs**:
1. `langchain-yugabytedb`
[github](https://github.com/yugabyte/langchain-yugabytedb) repo.
2. YugabyteDB VectorStore example notebook showing its use. It lives at
`langchain/docs/docs/integrations/vectorstores/yugabytedb.ipynb`.
3. Running `langchain-yugabytedb` unit tests:
- Setting up a Development Environment
This document details how to set up a local development environment that
will
allow you to contribute changes to the project.
Acquire sources and create virtualenv.
```shell
git clone https://github.com/yugabyte/langchain-yugabytedb
cd langchain-yugabytedb
uv venv --python=3.13
source .venv/bin/activate
```
Install the package and its test dependencies.
```shell
uv pip install pipx
pipx install poetry
poetry install
uv pip install pytest pytest_asyncio pytest-timeout langchain-core langchain_tests sqlalchemy psycopg psycopg-binary numpy pgvector
```
Start YugabyteDB RF-1 Universe.
```shell
docker run -d --name yugabyte_node01 --hostname yugabyte01 \
-p 7000:7000 -p 9000:9000 -p 15433:15433 -p 5433:5433 -p 9042:9042 \
yugabytedb/yugabyte:2.25.2.0-b359 bin/yugabyted start --background=false \
--master_flags="allowed_preview_flags_csv=ysql_yb_enable_advisory_locks,ysql_yb_enable_advisory_locks=true" \
--tserver_flags="allowed_preview_flags_csv=ysql_yb_enable_advisory_locks,ysql_yb_enable_advisory_locks=true"
docker exec -it yugabyte_node01 bin/ysqlsh -h yugabyte01 -c "CREATE extension vector;"
```
Invoke test cases.
```shell
pytest -vvv tests/unit_tests/yugabytedb_tests
```
- [x] **feat(docs)**: add Bigtable Key-value store doc
- [X] **feat(docs)**: add Bigtable Vector store doc
This PR adds a doc for Bigtable and LangChain Key-value store
integration. It contains guides on how to add, delete, get, and yield
key-value pairs from Bigtable Key-value Store for LangChain.
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
# feat(integrations): Add Timbr tools integration
## DESCRIPTION
This PR adds comprehensive documentation and integration support for
Timbr's semantic layer tools in LangChain.
[Timbr](https://timbr.ai/) provides an ontology-driven semantic layer
that enables natural language querying of databases through
business-friendly concepts. It connects raw data to governed business
measures for consistent access across BI, APIs, and AI applications.
[`langchain-timbr`](https://pypi.org/project/langchain-timbr/) is a
Python SDK that extends
[LangChain](https://github.com/WPSemantix/Timbr-GenAI/tree/main/LangChain)
and
[LangGraph](https://github.com/WPSemantix/Timbr-GenAI/tree/main/LangGraph)
with custom agents, chains, and nodes for seamless integration with the
Timbr semantic layer. It enables converting natural language prompts
into optimized semantic-SQL queries and executing them directly against
your data.
**What's Added:**
- Complete integration documentation for `langchain-timbr` package
- Tool documentation page with usage examples and API reference
**Integration Components:**
- `IdentifyTimbrConceptChain` - Identify relevant concepts from user
prompts
- `GenerateTimbrSqlChain` - Generate SQL queries from natural language
- `ValidateTimbrSqlChain` - Validate queries against knowledge graph
schemas
- `ExecuteTimbrQueryChain` - Execute queries against semantic databases
- `GenerateAnswerChain` - Generate human-readable answers from results
## Documentation Added
- `/docs/integrations/providers/timbr.mdx` - Provider overview and
configuration
- `/docs/integrations/tools/timbr.ipynb` - Comprehensive tool usage
examples
## Links
- [PyPI Package](https://pypi.org/project/langchain-timbr/)
- [GitHub Repository](https://github.com/WPSemantix/langchain-timbr)
- [Official
Documentation](https://docs.timbr.ai/doc/docs/integration/langchain-sdk/)
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
**Description:**
Add documentation for Qwen integration in LangChain, including setup
instructions, usage examples, and configuration details. Update related
qwq documentation to reflect current best practices and improve clarity
for users.
This PR enhances the documentation ecosystem by:
- Adding a new guide for integrating Qwen models
- Updating outdated or incomplete qwq documentation
- Improving structure and readability of relevant sections
**Issue:** N/A
**Dependencies:** None
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
**Description:** Adds documentation for ZenRows integration with
LangChain, including provider overview and detailed tool documentation.
ZenRows is an enterprise-grade web scraping solution that enables
LangChain agents to extract web content at scale with advanced features
like JavaScript rendering, anti-bot bypass, geo-targeting, and multiple
output formats.
This PR includes:
- Provider documentation
(`docs/docs/integrations/providers/zenrows.ipynb`)
- Tool documentation
(`docs/docs/integrations/tools/zenrows_universal_scraper.ipynb`)
- Complete usage examples and API reference links
**Issue:** N/A
**Dependencies:**
- [langchain-zenrows](https://github.com/ZenRows-Hub/langchain-zenrows)
package (external, available on
[PyPI](https://pypi.org/project/langchain-zenrows/))
- No changes to core LangChain dependencies
**LinkedIn handle:** https://www.linkedin.com/company/zenrows/
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Adding Oracle Generative AI as one of the providers for LangChain.
Updated the old examples in the documentation with new working
examples.
---------
Co-authored-by: Vishal Karwande <vishalkarwande@Vishals-MacBook-Pro.local>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
**Description:** Fixes a small typo in `_get_document_with_hash` inside
`libs/core/langchain_core/indexing/api.py`.
**Issue:** N/A (no related issue)
**Dependencies:** None
Especially helpful for the text splitters tests where we're installing
pytorch (expensive and slow slow slow). Should speed up CI by 5-10 mins.
w/o caches, CI taking 20 minutes 😨
w/ caches, CI taking 3 minutes
Taking advantage of [partial
runs](https://codspeed.io/docs/features/partial-runs)!
This should save us minutes on every CI job: we only run CodSpeed for
libs w/ changes, and this doesn't affect benchmarking drops.
Oversight when moving back to a basic function call for
`modify_model_request` rather than implementing it as its own node.
A basic test is currently failing on main and passing on this branch.
This revealed a gap in testing; will write up a more robust test suite
for basic middleware features.
### Description
* Replace the Mermaid graph node label escaping logic
(`_escape_node_label`) with `_to_safe_id`, which converts a string into
a unique, Mermaid-compatible node id. Ensures nodes with special
characters always render correctly.
**Before**
* Invalid characters (e.g. `开`) replaced with `_`. Causes collisions
between nodes with names that are the same length and contain all
non-safe characters:
```python
_escape_node_label("开") # '_'
_escape_node_label("始") # '_' same as above, but different character passed in. not a unique mapping.
```
**After**
```python
_to_safe_id("开") # \5f00
_to_safe_id("始") # \59cb unique!
```
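A minimal sketch of the idea (not the actual implementation): escape each unsafe character with its Unicode codepoint, so the mapping stays injective.
```python
def to_safe_id_sketch(label: str) -> str:
    # Safe ASCII characters pass through; everything else becomes \<codepoint>,
    # so distinct characters can no longer collapse into the same "_".
    return "".join(
        ch if ch.isascii() and (ch.isalnum() or ch in "-_") else f"\\{ord(ch):04x}"
        for ch in label
    )

assert to_safe_id_sketch("开") == "\\5f00"
assert to_safe_id_sketch("开") != to_safe_id_sketch("始")  # no more collisions
```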
### Tests
* Rename `test_graph_mermaid_escape_node_label()` to
`test_graph_mermaid_to_safe_id()` and update function logic to use
`_to_safe_id`
* Add `test_graph_mermaid_special_chars()`
### Issue
Fixes langchain-ai/langgraph#6036
Reusable workflows are not currently supported by PyPI's Trusted
Publishing
functionality, and are subject to breakage. Users are strongly
encouraged
to avoid using reusable workflows for Trusted Publishing until support
becomes official. Please, do not report bugs if this breaks.
Description: Fixes a bug in RunnableRetry where .batch / .abatch could
return misordered outputs (e.g. inputs [0,1,2] yielding [1,1,2]) when
some items succeeded on an earlier attempt and others were retried. Root
cause: successful results were stored keyed by the index within the
shrinking “pending” subset rather than the original input index, causing
collisions and reordered/duplicated outputs after retries. Fix updates
_batch and _abatch to:
- Track remaining original indices explicitly.
- Call underlying batch/abatch only on remaining inputs.
- Map results back to original indices.
- Preserve final ordering by reconstructing outputs in original
positional order (sketched below).
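A minimal sketch of the index-mapping idea (not the actual `RunnableRetry` code; `run_batch` stands in for the underlying batch call, returning exceptions in place of failures):
```python
def batch_with_retry(inputs, run_batch, max_attempts=3):
    results = [None] * len(inputs)
    pending = list(range(len(inputs)))  # ORIGINAL indices still unresolved
    for _ in range(max_attempts):
        if not pending:
            break
        outputs = run_batch([inputs[i] for i in pending])
        still_pending = []
        for i, out in zip(pending, outputs):
            if isinstance(out, Exception):
                still_pending.append(i)  # retried under its original index
            else:
                results[i] = out  # lands in the right slot, no collisions
        pending = still_pending
    return results
```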
Issue: Fixes #21326
Tests:
- Added regression tests: test_retry_batch_preserves_order and
test_async_retry_batch_preserves_order asserting correct ordering after
a single controlled failure + retry.
- Existing retry tests still pass.
Dependencies:
- None added or changed.
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
This removes langchain-experimental from the API reference.
We do not recommend it to users for production use cases, so let's also
deprecate it in the documentation.
**Description:** Fixes infinite recursion issue in JSON schema
dereferencing when objects contain both $ref and other properties (e.g.,
nullable, description, additionalProperties). This was causing Apollo
MCP server schemas to hang indefinitely during tool binding.
**Problem:**
- Commit fb5da8384 changed the condition from `set(obj.keys()) ==
{"$ref"}` to `"$ref" in set(obj.keys())`
- This caused objects with $ref + other properties to be treated as pure
$ref nodes
- Result: other properties were lost and infinite recursion occurred
with complex schemas
**Solution:**
- Restore pure $ref detection for objects with only $ref key
- Add proper handling for mixed $ref objects that preserves all
properties
- Merge resolved reference content with other properties
- Maintain cycle detection to prevent infinite recursion (see the sketch
below)
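A rough sketch of the pure-vs-mixed `$ref` handling described above (hypothetical helper; not the actual langchain_core code):
```python
def deref(obj, defs, seen=frozenset()):
    if isinstance(obj, dict) and "$ref" in obj:
        ref = obj["$ref"]
        if ref in seen:  # cycle detection prevents infinite recursion
            return {"$ref": ref}
        target = deref(defs[ref], defs, seen | {ref})
        if set(obj) == {"$ref"}:  # pure $ref node: substitute wholesale
            return target
        # Mixed node: resolve sibling properties too, then merge them over the target
        extras = {k: deref(v, defs, seen) for k, v in obj.items() if k != "$ref"}
        return {**target, **extras}
    if isinstance(obj, dict):
        return {k: deref(v, defs, seen) for k, v in obj.items()}
    if isinstance(obj, list):
        return [deref(v, defs, seen) for v in obj]
    return obj
```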
**Impact:**
- Fixes Apollo MCP server schema integration
- Resolves tool binding infinite recursion with complex GraphQL schemas
- Preserves backward compatibility with existing functionality
- No performance impact - actually improves handling of complex schemas
**Issue:** Fixes #32511
**Dependencies:** None
**Testing:**
- Added comprehensive unit tests covering mixed $ref scenarios
- All existing tests pass (1326 passed, 0 failed)
- Tested with realistic Apollo GraphQL schemas
- Stress tested with 100 iterations of complex schemas
**Verification:**
- ✅ `make format` - All files properly formatted
- ✅ `make lint` - All linting checks pass
- ✅ `make test` - All 1326 unit tests pass
- ✅ No breaking changes - full backwards compatibility maintained
---------
Co-authored-by: Marcus <marcus@Marcus-M4-MAX.local>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
On Friday, October 10th, the moonshotai/kimi-k2-instruct model will be
decommissioned in favor of the latest version,
moonshotai/kimi-k2-instruct-0905.
Until then, requests to moonshotai/kimi-k2-instruct will automatically
be routed to moonshotai/kimi-k2-instruct-0905.
# Description
This PR fixes a bug in `_recursive_set_additional_properties_false`,
used in `function_calling.convert_to_openai_function`.
Previously, schemas with `additionalProperties=True` were not correctly
overridden when strict validation was expected, which could lead to
invalid OpenAI function schemas.
The updated implementation ensures that:
- Any schema with "additionalProperties" already set will now be forced
to False under strict mode.
- Recursive traversal of properties, items, and anyOf is preserved (a
sketch follows below).
- Function signature remains unchanged for backward compatibility.
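A minimal sketch of that traversal (not the exact LangChain implementation):
```python
def force_no_additional_properties(schema: dict) -> None:
    if not isinstance(schema, dict):
        return
    if schema.get("type") == "object" or "additionalProperties" in schema:
        # Force False even when Pydantic >= 2.11 already emitted
        # `additionalProperties: True` for dict / Any fields.
        schema["additionalProperties"] = False
    for sub in schema.get("properties", {}).values():
        force_no_additional_properties(sub)
    if isinstance(schema.get("items"), dict):
        force_no_additional_properties(schema["items"])
    for sub in schema.get("anyOf", []):
        force_no_additional_properties(sub)
```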
# Issue
When using tool calling in OpenAI structured output strict mode
(`strict=True`), a 400 error ("Invalid schema for response_format XXXXX:
'additionalProperties' is required to be supplied and to be false") is
raised for any parameter that contains a dict type. OpenAI requires
`additionalProperties` to be set to `False`.
Some PRs have tried to resolve the issue.
- PR #25169 introduced `_recursive_set_additional_properties_false` to
recursively set `additionalProperties=False`.
- PR #26287 fixed handling of empty parameter tools for OpenAI function
generation.
- PR #30971 added support for Union type arguments in strict mode of
OpenAI function calling / structured output.
Despite these improvements, since Pydantic 2.11, it will always add
`additionalProperties: True` for arbitrary dictionary schemas (dict or
Any; see https://pydantic.dev/articles/pydantic-v2-11-release#changes).
Schemas that already had `additionalProperties=True` in such cases were
not being overridden, which this PR addresses to ensure strict mode
behaves correctly in all cases.
# Dependencies
No Changes
---------
Co-authored-by: Zhong, Yu <yzhong@freewheel.com>
This PR adds a new cookbook demonstrating how to build a RAG pipeline
with LangChain and track + evaluate it using MLflow.
There currently isn't much documentation on the LangChain MLflow
integration; hopefully this can help folks trying to monitor and
evaluate their LangChain applications. It covers:
- ArXiv document loader
- In-memory vector store
- LCEL RAG pipeline
- MLflow tracing
- MLflow evaluation
Issue:
N/A
Dependencies:
N/A
**Description:**
Updates the Confident AI integration documentation to use modern
patterns and improve code quality. This change:
- Replaces deprecated `DeepEvalCallbackHandler` with the new
`CallbackHandler` from `deepeval.integrations.langchain`
- Updates installation and authentication instructions to match current
best practices
- Adds modern integration examples using LangChain's latest patterns
- Removes deprecated metrics and outdated code examples
- Updates code samples to follow current best practices
The changes make the documentation more maintainable and ensure users
follow the recommended integration patterns.
**Issue:** Fixes#32444
**Dependencies:**
- deepeval
- langchain
- langchain-openai
**Twitter handle:** @Muwinuddin
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Description:
Added "Method Two: Quick Setup (Linux)" section to prerequisites,
providing a curl-based installation method for deploying JaguarDB
without Docker. Retained original Docker setup instructions for
flexibility.
- **Description:** Aerospike Vector Store has been retired. It is no
longer supported, so it should no longer be documented on the LangChain
site.
- **Add tests and docs:** Removes docs for the retired Aerospike vector
store.
- **Lint and test:** N/A
### Description
Added a short section to the Weaviate integration docs showing how to
connect to an existing collection (reuse an index) with
`WeaviateVectorStore`. This helps clarify required parameters
(`index_name`, `text_key`) when loading a pre-existing store, which was
previously missing.
### Issue
Fixes langchain-ai/langchain-weaviate#197
### Dependencies
None
**Description:** Fixing the import path for `WatsonxToolkit` in
examples after releasing `langchain-ibm==0.3.17`.
### Description
This PR is primarily aimed at updating some usage methods in the
`modelscope.mdx` file. Specifically, it changes `ModelScopeLLM` to
`ModelScopeEndpoint`.
### Relevant PR
The relevant PR link is:
https://github.com/langchain-ai/langchain/pull/28941
**Description:**
Raise a more descriptive OutputParserException when JSON parsing results
in a non-dict type. This improves debugging and aligns behavior with
expectations when using expected_keys.
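A hedged illustration (assuming the relevant entry point is `parse_and_check_json_markdown`; the exact call site may differ):
```python
from langchain_core.utils.json import parse_and_check_json_markdown

# Valid JSON, but a list rather than an object: with this change the parser
# raises a descriptive OutputParserException saying the result is not a dict,
# instead of failing obscurely when expected_keys are checked.
parse_and_check_json_markdown("[1, 2, 3]", expected_keys=["answer"])
```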
**Issue:**
Fixes #32233
**Twitter handle:**
@yashvtobre
**Testing:**
- Ran make format and make lint from the root directory; both passed
cleanly.
- Attempted make test but no such target exists in the root Makefile.
- Executed tests directly via pytest targeting the relevant test file,
confirming all tests pass except for unrelated async test failures
outside the scope of this change.
**Notes:**
- No additional dependencies introduced.
- Changes are backward compatible and isolated within the output parser
module.
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
- **Description:** Currently,
`langchain_core.runnables.graph_mermaid.py` is hardcoded to use
mermaid.ink to render graph diagrams. It would be nice to allow users to
specify a custom URL, e.g. for self-hosted instances of the Mermaid
server.
- **Issue:** [Langchain Forum: allow custom mermaid API
URL](https://forum.langchain.com/t/feature-request-allow-custom-mermaid-api-url/1472)
- **Dependencies:** None
- [X] **Add tests and docs**: Added unit tests using mock requests.
- [X] **Lint and test**: Run `make format`, `make lint` and `make test`.
Minimal example using the feature:
```python
import operator
from pathlib import Path
from typing import Any, Annotated, TypedDict
from langgraph.graph import StateGraph
class State(TypedDict):
messages: Annotated[list[dict[str, Any]], operator.add]
def hello_node(state: State) -> State:
return {"messages": [{"role": "assistant", "content": "pong!"}]}
builder = StateGraph(State)
builder.add_node("hello_node", hello_node)
builder.add_edge("__start__", "hello_node")
builder.add_edge("hello_node", "__end__")
graph = builder.compile()
# Run graph
output = graph.invoke({"messages": [{"role": "user", "content": "ping?"}]})
# Draw graph
Path("graph.png").write_bytes(graph.get_graph().draw_mermaid_png(base_url="https://custom-mermaid.ink"))
```
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
- Beta isn't needed for search result tests anymore
- Add TODO for other tests to come back when generally available
- Regenerate remote MCP snapshot after some testing (now the same, but
fresher)
- Bump deps
This pull request introduces a failing unit test to reproduce the bug
reported in issue #32028.
The test asserts the expected behavior: `BaseCallbackManager.merge()`
should combine `handlers` and `inheritable_handlers` independently,
without mixing them. This test will fail on the current codebase and is
intended to guide the fix and prevent future regressions.
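A hedged sketch of the asserted behavior (handler setup simplified):
```python
from langchain_core.callbacks.base import BaseCallbackHandler, BaseCallbackManager

h1, h2 = BaseCallbackHandler(), BaseCallbackHandler()
m1 = BaseCallbackManager(handlers=[h1])
m2 = BaseCallbackManager(handlers=[h2], inheritable_handlers=[h2])

merged = m1.merge(m2)
# Each list should be the union of the two managers' respective lists,
# with no cross-contamination between the two.
assert merged.handlers == m1.handlers + m2.handlers
assert merged.inheritable_handlers == m1.inheritable_handlers + m2.inheritable_handlers
```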
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
The Ollama chat model adapter does not support all of the possible
message content formats. That leads to the Ollama model adapter crashing
on some messages from different models (e.g. Gemini 2.5 Flash).
These changes should fix one known scenario: when `content` is a list
containing a string.
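A sketch of the kind of normalization this implies (hypothetical helper; the adapter previously assumed `content` was a plain string):
```python
def content_to_text(content) -> str:
    if isinstance(content, str):
        return content
    # e.g. Gemini 2.5 Flash can produce a list mixing plain strings and blocks
    parts = []
    for block in content:
        if isinstance(block, str):
            parts.append(block)
        elif isinstance(block, dict) and block.get("type") == "text":
            parts.append(block.get("text", ""))
    return "".join(parts)
```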
This allows using PEP 604 union syntax for `ToolNode` error handlers:
```python
from langchain_core.tools import ToolException

def error_handler(e: ValueError | ToolException) -> str:
    return "error"

ToolNode(my_tool, handle_tool_errors=error_handler).invoke(...)
```
Without this change, this fails with `AttributeError: 'types.UnionType'
object has no attribute '__mro__'`
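The fix presumably normalizes both union spellings before inspecting the exception classes; a hedged sketch of that normalization (not the actual `ToolNode` code):
```python
import types
import typing

from langchain_core.tools import ToolException

def handled_exception_types(handler) -> tuple[type, ...]:
    # Normalize the first parameter's annotation, which may be typing.Union[...]
    # or a PEP 604 `X | Y` (types.UnionType), into a tuple of exception classes.
    hints = typing.get_type_hints(handler)
    hints.pop("return", None)
    ann = next(iter(hints.values()))
    if isinstance(ann, types.UnionType) or typing.get_origin(ann) is typing.Union:
        return typing.get_args(ann)
    return (ann,)

def error_handler(e: ValueError | ToolException) -> str:
    return "error"

handled_exception_types(error_handler)  # (ValueError, ToolException)
```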
This is better than using a subclass, as returning a `property` works
with `ClassWithBetaMethods.beta_property.__doc__`.
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Added an id field to the Document passed to filter for
InMemoryVectorStore similarity search. This allows filtering by Document
id and brings the input to the filter in line with the result returned
by the vector similarity search.
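A hedged example of the behavior this enables (fake embeddings used just to make it runnable):
```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore(DeterministicFakeEmbedding(size=8))
store.add_documents([
    Document(id="1", page_content="alpha"),
    Document(id="2", page_content="beta"),
])
# The Document handed to `filter` now carries its `id`, matching what the
# similarity search returns.
hits = store.similarity_search("alpha", k=2, filter=lambda doc: doc.id == "1")
```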
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
- stars badge redundant (look at the top of the page)
- remove version badge since we have many pkgs (and it was only showing
core) -- also, just look at the releases tab to the right of the readme
- **Description:** The vectorstore standard-test mistakenly assumes that
the store's `get_by_ids` respects the order of the provided `ids`. This
is not the case (as the base class docstring states). This PR fixes
those tests that would fail otherwise (see issue #32820 for details,
repro and all).
- **Issue:** Fixes #32820
- **Dependencies:** none
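A hedged sketch of the order-agnostic assertion style the tests move to (hypothetical `store` fixture):
```python
docs = store.get_by_ids(["id-1", "id-2"])
# get_by_ids does not guarantee results in the order of the provided ids,
# so compare without assuming order.
assert sorted(doc.id for doc in docs) == ["id-1", "id-2"]
```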
Co-authored-by: Stefano Lottini <stefano.lottini@ibm.com>
## Overview
Adding new `AgentMiddleware` primitive that supports `before_model`,
`after_model`, and `prepare_model_request` hooks.
This is very exciting! It makes our `create_agent` prebuilt much more
extensible + capable. Still in alpha and subject to change.
This is different than the initial
[implementation](https://github.com/langchain-ai/langgraph/tree/nc/25aug/agent)
in that it:
* Fills in gaps w/ missing features, for ex -- new structured output,
optionality of tools + system prompt, sync and async model requests,
provider builtin tools
* Exposes private state extensions for middleware, enabling things like
model call tracking, etc
* Middleware can register tools
* Uses a `TypedDict` for `AgentState` -- dataclass subclassing is tricky
w/ required values + required decorators
* Addition of `model_settings` to `ModelRequest` so that we can pass
through things to bind (like cache kwargs for anthropic middleware)
## TODOs
### top prio
- [x] add middleware support to existing agent
- [x] top prio middlewares
- [x] summarization node
- [x] HITL
- [x] prompt caching
other ones
- [x] model call limits
- [x] tool calling limits
- [ ] usage (requires output state)
### secondary prio
- [x] improve typing for state updates from middleware (not working
right now w/ simple `AgentUpdate` and `AgentJump`, at least in Python)
- [ ] add support for public state (input / output modifications via
pregel channel mods) -- to be tackled in another PR
- [x] testing!
### docs
See https://github.com/langchain-ai/docs/pull/390
- [x] high level docs about middleware
- [x] summarization node
- [x] HITL
- [x] prompt caching
## open questions
Lots of open questions right now, many of them inlined as comments for
the short term, will catalog some more significant ones here.
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
**Description:**
Remove a character in tool_calling.ipynb that causes a grammatical error
Verification: Local docs build passed after fix
**Issue:**
None (direct hotfix for rendering issue identified during documentation
review)
**Dependencies:**
None
**Description:** This PR fixes the broken Anthropic model example in the
documentation introduction page and adds a comment field to display
model version warnings in code blocks. The changes ensure that users can
successfully run the example code and are reminded to check for the
latest model versions.
**Issue:** https://github.com/langchain-ai/langchain/issues/32806
**Changes made:**
- Update Anthropic model from broken "claude-3-5-sonnet-latest" to
working "claude-3-7-sonnet-20250219"
- Add comment field to display model version warnings in code blocks
- Improve user experience by providing working examples and version
guidance
**Dependencies:** None required
Fixes #32747
The SpaCy integration test fixture was trying to use pip to download the
SpaCy language model (`en_core_web_sm`), but uv-managed environments
don't include pip by default. The test now fails if the model is not
installed, as opposed to downloading it.
Removed a period in bulleted list for consistency
Completed the sentence by adding a period ("."), in sync with the other points, changing
>> Click "Propose changes"
to
>> Click "Propose changes".
Update `langchain-core` dependency min from `>=0.3.63` to `>=0.3.75`.
### Motivation
- The `langchain-core` package lives locally in the monorepo, and
`langchain-tests` needs to align with its new minimum version.
### Overview
Preparing the `1.0.0a1` release of `langchain-tests` to align with
`langchain-core` version `1.0.0a1`.
### Changes
- Bump package version to `1.0.0a1`
- Relax `langchain-core` requirement from `<1.0.0,>=0.3.63` to
`<2.0.0,>=0.3.63`
### Motivation
All main LangChain packages are now publishing `1.0.0a` prereleases.
`langchain-tests` needs a matching prerelease so downstreams can install
tests alongside the 1.0 series without conflicts.
### Tests
- Verified installation and tests against both `0.3.75` and `1.0.0a1`.
Description:
Added the content= keyword when creating SystemMessage and HumanMessage
in the messages list, making it consistent with the API reference.
### Summary
This PR updates the sentence on the "How-to guides" landing page to
replace smart (curly) quotes with straight quotes in the phrase:
> "How do I...?"
### Why This Change?
- Ensures formatting consistency across documentation
- Avoids encoding or rendering issues with smart quotes
- Matches standard Markdown and inline code formatting
This is a small change, but improves clarity and polish on a key landing
page.
Change "Linkedin" to "LinkedIn" to be consistent with LinkedIn's
spelling.
Adding `create_react_agent` and introducing `langchain.agents`!
## Enhanced Structured Output
`create_react_agent` supports coercion of outputs to structured data
types like `pydantic` models, dataclasses, typed dicts, or JSON schema
specifications.
### Structural Changes
In langgraph < 1.0, `create_react_agent` implemented support for
structured output via an additional LLM call to the model after the
standard model / tool calling loop finished. This introduced extra
expense and was unnecessary.
This new version implements structured output support in the main loop,
allowing a model to choose between calling tools or generating
structured output (or both).
The same basic pattern for structured output generation works:
```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from pydantic import BaseModel

class Weather(BaseModel):
    temperature: float
    condition: str

def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"

agent = create_react_agent("openai:gpt-4o-mini", tools=[weather_tool], response_format=Weather)
result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```
### Advanced Configuration
The new API exposes two ways to configure how structured output is
generated. Under the hood, LangChain will attempt to pick the best
approach if not explicitly specified. That is, if provider native
support is available for a given model, that takes priority over
artificial tool calling.
1. Artificial tool calling (the default for most models)
LangChain generates a tool (or tools) under the hood that match the
schema of your response format. When the model calls those tools,
LangChain coerces the args to the desired format. Note that LangChain
does not validate that outputs adhere to JSON schema specifications.
<details>
<summary>Extended example</summary>
```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from langchain.agents.structured_output import ToolStrategy
from pydantic import BaseModel

class Weather(BaseModel):
    temperature: float
    condition: str

def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"

agent = create_react_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ToolStrategy(
        schema=Weather, tool_message_content="Final Weather result generated"
    ),
)
result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
for message in result["messages"]:
    message.pretty_print()
"""
================================ Human Message =================================
What's the weather in Tokyo?
================================== Ai Message ==================================
Tool Calls:
  weather_tool (call_Gg933BMHMwck50Q39dtBjXm7)
 Call ID: call_Gg933BMHMwck50Q39dtBjXm7
  Args:
    city: Tokyo
================================= Tool Message =================================
Name: weather_tool
it's sunny and 70 degrees in Tokyo
================================== Ai Message ==================================
Tool Calls:
  Weather (call_9xOkYUM7PuEXl9DQq9sWGv5l)
 Call ID: call_9xOkYUM7PuEXl9DQq9sWGv5l
  Args:
    temperature: 70
    condition: sunny
================================= Tool Message =================================
Name: Weather
Final Weather result generated
"""
print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```
</details>
2. Provider implementations (limited to OpenAI, Groq)
Some providers support generating structured output natively. For those
cases, we offer the `ProviderStrategy` hint:
<details>
<summary>Extended example</summary>
```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from langchain.agents.structured_output import ProviderStrategy
from pydantic import BaseModel

class Weather(BaseModel):
    temperature: float
    condition: str

def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"

agent = create_react_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ProviderStrategy(Weather),
)
result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
for message in result["messages"]:
    message.pretty_print()
"""
================================ Human Message =================================
What's the weather in Tokyo?
================================== Ai Message ==================================
Tool Calls:
  weather_tool (call_OFJq1FngIXS6cvjWv5nfSFZp)
 Call ID: call_OFJq1FngIXS6cvjWv5nfSFZp
  Args:
    city: Tokyo
================================= Tool Message =================================
Name: weather_tool
it's sunny and 70 degrees in Tokyo
================================== Ai Message ==================================
{"temperature":70,"condition":"sunny"}
Weather(temperature=70.0, condition='sunny')
"""
print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```
Note: in the `ToolStrategy` example, the final tool message has the custom content provided by the dev.
</details>
Prompted output, previously supported via the `response_format` argument
to `create_react_agent`, is no longer supported. If there's significant
demand for it, we'd be happy to engineer a solution.
## Error Handling
`create_react_agent` now exposes an API for managing errors associated
with structured output generation. There are two common problems with
structured output generation (w/ artificial tool calling):
1. **Parsing error** -- the model generates data that doesn't match the
desired structure for the output
2. **Multiple tool calls error** -- the model generates 2 or more tool
calls associated with structured output schemas
A developer can control the desired behavior for this via the
`handle_errors` arg to `ToolStrategy`.
<details>
<summary>Extended example</summary>
```py
from langchain_core.messages import HumanMessage
from pydantic import BaseModel

from langchain.agents import create_react_agent
from langchain.agents.structured_output import StructuredOutputValidationError, ToolStrategy

class Weather(BaseModel):
    temperature: float
    condition: str

def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"

def handle_validation_error(error: Exception) -> str:
    if isinstance(error, StructuredOutputValidationError):
        return (
            f"Please call the {error.tool_name} call again with the correct arguments. "
            f"Your mistake was: {error.source}"
        )
    raise error

agent = create_react_agent(
    "openai:gpt-5",
    tools=[weather_tool],
    response_format=ToolStrategy(
        schema=Weather,
        handle_errors=handle_validation_error,
    ),
)
```
</details>
## Error Handling for Tool Calling
Tools fail for two main reasons:
1. **Invocation failure** -- the args generated by the model for the
tool are incorrect (missing, incompatible data types, etc)
2. **Execution failure** -- the tool execution itself fails due to a
developer error, network error, or some other exception.
By default, when tool **invocation** fails, the react agent will return
an artificial `ToolMessage` to the model asking it to correct its
mistakes and retry.
Now, when tool **execution** fails, the react agent raises the
`ToolException` by default instead of asking the model to retry. This
avoids retry loops on errors the model cannot fix.
Developers can configure their desired behavior for retries / error
handling via the `handle_tool_errors` arg to `ToolNode`.
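A minimal sketch of configuring this, assuming `ToolNode` is importable from `langchain.agents` per the migration notes below and that a `ToolNode` can be passed as `tools`:
```py
from langchain.agents import ToolNode, create_react_agent

def flaky_search(query: str) -> str:
    """Search the web."""
    raise TimeoutError("search backend unavailable")

# Instead of raising on execution failure, send the model a retry prompt.
tool_node = ToolNode(
    [flaky_search],
    handle_tool_errors="Tool temporarily failed; please retry the call.",
)
agent = create_react_agent("openai:gpt-4o-mini", tools=tool_node)
```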
## Pre-Bound Models
`create_react_agent` no longer supports inputs to `model` that have been
pre-bound w/ tools or other configuration. To properly support
structured output generation, the agent itself needs the power to bind
tools + structured output kwargs.
This also makes the devx cleaner - it's always expected that `model` is
an instance of `BaseChatModel` (or `str` that we coerce into a chat
model instance).
Dynamic model functions can return a pre-bound model **only if**
structured output is not also used. Otherwise, the dynamic model
function itself is responsible for binding tools / structured output
logic.
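A rough sketch of the allowed pattern (the dynamic-function signature here is an assumption for illustration):
```py
from langchain.agents import create_react_agent
from langchain_openai import ChatOpenAI

def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"

def select_model(state, runtime):
    # Returning a pre-bound model is OK here only because no
    # response_format (structured output) is configured on the agent.
    return ChatOpenAI(model="gpt-4o-mini").bind_tools([weather_tool])

agent = create_react_agent(select_model, tools=[weather_tool])
```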
## Import Changes
Users should now use `create_react_agent` from `langchain.agents`
instead of `langgraph.prebuilt`.
Other imports have a similar migration path, `ToolNode` and `AgentState`
for example.
* `chat_agent_executor.py` -> `react_agent.py`
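For example, the migration is typically a one-line import change:
```py
# Before (langgraph < 1.0)
# from langgraph.prebuilt import create_react_agent, ToolNode

# After
from langchain.agents import create_react_agent, ToolNode
```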
Some notes:
1. Disabled blockbuster + some linting in `langchain/agents` -- less
than ideal, but necessary to get this across the line for the alpha. We
should re-enable before the official release.
- **Description:** Updated Docker command to use ClickHouse 25.7 (has
`vector_similarity` index support). Added `CLICKHOUSE_SKIP_USER_SETUP=1`
env param to [bypass default user
setup](https://clickhouse.com/docs/install/docker#managing-default-user)
and allow external network access. There was also a bug where results
accessed via `similarity_search_with_relevance_scores` need to be
unpacked first.
- **Issue:** Fixes #32094 for anyone following the tutorial with the
default ClickHouse configuration.
# Description
Updated documentation to reflect Microsoft’s rebranding of Azure AI
Studio to Azure AI Foundry. This ensures consistency with current Azure
terminology across the docs.
# Issue
N/A
# Dependencies
None
The async version of the test should use the `ayield_keys` method
instead of `yield_keys`.
Otherwise tools such as `blockbuster` may trigger on a blocking call.
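As a sketch, the async variant should iterate the async API (assuming a `BaseStore`-style `store` fixture):
```python
async def test_async_yield_keys(store) -> None:
    # Use the async iterator so no blocking I/O happens in the event loop.
    keys = [key async for key in store.ayield_keys()]
    assert isinstance(keys, list)
```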
**Description:**
Fixed corrupted text in the code cell output of the documentation
notebook. The code cell itself was correct, but the saved output
contained garbage text.
**Issue:**
The saved output in the documentation notebook contained garbage/typo
text in the table name.
**Dependencies:**
None
Having Vercel attempt to deploy on each commit (even if unrelated to
docs) was getting annoying. Options:
- `[skip-preview]`
- `[no-preview]`
- `[skip-deploy]`
Full example: `fix(core): resolve memory leak [no-preview]`
* Create usage metadata on
[`message_delta`](https://docs.anthropic.com/en/docs/build-with-claude/streaming#event-types)
instead of at the beginning. Consequently, token counts are not included
during streaming but instead at the end. This allows for accurate
reporting of server-side tool usage (important for billing)
* Add some clarifying comments
* Fix some outstanding Pylance warnings
* Remove unnecessary `text` popping in thinking blocks
* Also now correctly reports `input_cache_read`/`input_cache_creation`
as a result
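In practice this means usage metadata arrives on the final chunk of a stream; a sketch:
```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-haiku-latest")

usage = None
for chunk in llm.stream("Hello!"):
    if chunk.usage_metadata:
        # Populated on the final (message_delta) chunk rather than at the start.
        usage = chunk.usage_metadata

print(usage)  # includes input/output tokens and cache read/creation counts
```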
When citations are returned from streaming, they include a `file_id:
null` field in their `content_block_location` structure.
When these citations are passed back to the API in subsequent messages,
the API rejects them with "Extra inputs are not permitted" for the
`file_id` field.
**Description:**
Corrected LangGraph documentation link (changed to “guides”), and added
a link to LangGraph JS how-to guides for clarity.
**Issue:**
N/A
**Dependencies:**
None
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
The appropriate `ToolNode` attribute for error handling is called
`handle_tool_errors` instead of `handle_tool_error`.
For further info see [ToolNode source code in
LangGraph](https://github.com/langchain-ai/langgraph/blob/main/libs/prebuilt/langgraph/prebuilt/tool_node.py#L255)
**Twitter handle:** gitaroktato
## Description
This PR adds support for custom header patterns in
`MarkdownHeaderTextSplitter`, allowing users to define non-standard
Markdown header formats (like `**Header**`) and specify their hierarchy
levels.
**Issue:** Fixes #22738
**Dependencies:** None - this change has no new dependencies
**Key Changes:**
- Added optional `custom_header_patterns` parameter to support
non-standard header formats
- Enable splitting on patterns like `**Header**` and `***Header***`
- Maintain full backward compatibility with existing usage
- Added comprehensive tests for custom and mixed header scenarios
## Example Usage
```python
from langchain_text_splitters import MarkdownHeaderTextSplitter
headers_to_split_on = [
("**", "Chapter"),
("***", "Section"),
]
custom_header_patterns = {
"**": 1, # Level 1 headers
"***": 2, # Level 2 headers
}
splitter = MarkdownHeaderTextSplitter(
headers_to_split_on=headers_to_split_on,
custom_header_patterns=custom_header_patterns,
)
# Now **Chapter 1** is treated as a level 1 header
# And ***Section 1.1*** is treated as a level 2 header
```
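Continuing the example above, splitting hypothetical sample text might look like:
```python
markdown_text = """**Chapter 1**
Some intro text.
***Section 1.1***
Some section text.
"""

docs = splitter.split_text(markdown_text)
for doc in docs:
    # Each chunk carries its header metadata, e.g. {"Chapter": "Chapter 1"}
    print(doc.metadata, doc.page_content)
```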
## Testing
- ✅ Added unit tests for custom header patterns
- ✅ Added tests for mixed standard and custom headers
- ✅ All existing tests pass (backward compatibility maintained)
- ✅ Linting and formatting checks pass
---
The implementation provides a flexible solution while maintaining the
simplicity of the existing API. Users can continue using the splitter
exactly as before, with the new functionality being entirely opt-in
through the `custom_header_patterns` parameter.
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Claude <noreply@anthropic.com>
Supersedes #32461
Fixed incorrect input token reporting during streaming when tools are
used. Previously, input tokens were counted at `message_start` before
tool execution, leading to inaccurate counts. Now input tokens are
properly deferred until `message_delta` (completion), aligning with
Anthropic's billing model and SDK expectations.
**Before Fix:**
- Streaming with tools: Input tokens = 0 ❌
- Non-streaming with tools: Input tokens = 472 ✅
**After Fix:**
- Streaming with tools: Input tokens = 472 ✅
- Non-streaming with tools: Input tokens = 472 ✅
Aligns with Anthropic's SDK expectations. The SDK handles input token
updates in `message_delta` events:
```python
# https://github.com/anthropics/anthropic-sdk-python/blob/main/src/anthropic/lib/streaming/_messages.py
if event.usage.input_tokens is not None:
current_snapshot.usage.input_tokens = event.usage.input_tokens
```
Supersedes #32544
Changes to the `trimmer` behavior caused the call `"What math problem
was asked?"` to no longer see the relevant query due to the queries'
token count. Adjusted so trimming does not drop the relevant part of the
message history. Also, added a print to the trimmer to increase
observability into what is leaving the context window.
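A sketch of a trimmer with such a print added (parameter values are illustrative, not the tutorial's exact settings):
```python
from langchain_core.messages import trim_messages

def trim_and_log(messages, token_counter):
    trimmed = trim_messages(
        messages,
        max_tokens=65,
        strategy="last",
        token_counter=token_counter,
        include_system=True,
        start_on="human",
    )
    # Show what survives so it's clear what left the context window.
    print("Kept after trimming:", [m.content for m in trimmed])
    return trimmed
```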
Add note to trimming tut & format links as inline
---------
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Enhance the integrations table by adding the `js:
'@langchain/community'` reference for several packages and updating the
titles of specific integrations to avoid improper capitalization.
Supersedes #32408
Description:
This PR ensures that tool calls without explicitly provided `args` will
default to an empty dictionary (`{}`), allowing tools with no parameters
(e.g. `def foo() -> str`) to be registered and invoked without
validation errors. This change improves compatibility with agent
frameworks that may omit the `args` field when generating tool calls.
Issue:
See [langgraph#5722](https://github.com/langchain-ai/langgraph/issues/5722) –
LangGraph currently emits tool calls without `args`, which leads to
validation errors when tools with no parameters are invoked. This PR
ensures compatibility by defaulting `args` to `{}` when missing.
Dependencies:
None
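A sketch of invoking a no-parameter tool with the empty `args` dict that this change now guarantees as the default when `args` is omitted:
```python
from langchain_core.tools import tool

@tool
def ping() -> str:
    """A tool that takes no parameters."""
    return "pong"

# Frameworks that omit "args" now effectively produce args={} here.
result = ping.invoke({"type": "tool_call", "name": "ping", "args": {}, "id": "call_1"})
print(result.content)  # -> pong
```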
---------
---------
Signed-off-by: jitokim <pigberger70@gmail.com>
Co-authored-by: jito <pigberger70@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
**Description**
Corrected a typo in the Ollama chatbot example output in
`docs/docs/integrations/chat/ollama.ipynb` where `"got-oss"` was
mistakenly used instead of `"gpt-oss"`.
No functional changes to code; documentation-only update.
All notebook outputs were cleared to keep the diff minimal.
**Issue**
N/A
**Dependencies**
None
**Twitter handle**
N/A
Fixes #30146.
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import AIMessage, HumanMessage

llm = ChatAnthropic(model="claude-3-5-haiku-latest")
caching_llm = llm.bind(cache_control={"type": "ephemeral"})
caching_llm.invoke(
    [
        HumanMessage("..."),
        AIMessage("..."),
        HumanMessage("..."),  # <-- final message / content block gets cache annotation
    ]
)
```
Potentially useful given Anthropic's [incremental
caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#continuing-a-multi-turn-conversation)
capabilities:
> During each turn, we mark the final block of the final message with
cache_control so the conversation can be incrementally cached. The
system will automatically lookup and use the longest previously cached
prefix for follow-up messages.
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
This commit removes redundant integration info from the details page.
Additionally, it changes the reference from "DigitalOcean GradientAI" to
"DigitalOcean Gradient™ AI" and updates the setup instructions
accordingly.
**Description:**
Two broken links were reported by another LangChain employee. This PR
fixes those links.
Fixed and tested locally.
**Dependencies:**
None
This PR adds documentation for integrating [TrueFoundry's AI
Gateway](https://www.truefoundry.com/ai-gateway) with Langfuse using the
LangGraph OpenAI SDK.
The integration sends requests through TrueFoundry's AI Gateway for
unified governance, observability, and routing, while LangGraph runs on
the client side to capture execution traces and telemetry.
- Issue: N/A
- Dependencies: None
- Twitter - https://x.com/truefoundry
tests - Not applicable
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
- **Description:** Integrated the Scrapeless package to enable Langchain
users to seamlessly incorporate Scrapeless into their agents.
- **Dependencies:** None
- **Twitter handle:** [Scrapelessteam](https://x.com/Scrapelessteam)
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
# Description
This PR updates the docs for the
[langchain-anchorbrowser](https://pypi.org/project/langchain-anchorbrowser/)
package. It adds a few tools.
[Anchor Browser](https://anchorbrowser.io/?utm=langchain) is the
platform for AI Agentic browser automation, which solves the challenge
of automating workflows for web applications that lack APIs or have
limited API coverage. It simplifies the creation, deployment, and
management of browser-based automations, transforming complex web
interactions into simple API endpoints.
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
This PR introduces a new Google partner guide for MCP Toolbox. The
primary goal of this new documentation is to enhance the discoverability
of MCP Toolbox for developers working within the Google ecosystem,
providing them with a clear and direct path to using our tools.
> [!IMPORTANT]
> This PR contains link to a page which is added in #32344. This will
cause deployment failure until that PR is merged.
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
This PR introduces a new integration guide for MCP Toolbox. The primary
goal of this new documentation is to enhance the discoverability of MCP
Toolbox for developers working within the LangChain ecosystem, providing
them with a clear and direct path to using our tools.
This approach was chosen to provide users with a practical, hands-on
example that they can easily follow.
> [!NOTE]
> The page added in this PR is linked to from a section in Google
partners page added in #32356.
---------
Co-authored-by: Lauren Hirata Singh <lauren@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
In the [RAG Part 1
Tutorial](https://python.langchain.com/docs/tutorials/rag/), when the
Qdrant vector store is selected, the sample code does not work: it fails
with `ValueError: Collection test not found`.
This fix creates that collection and ensures its dimension size matches
the embedding size of the selected embeddings model.
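A sketch of the collection setup needed, assuming 1536-dimension embeddings (e.g. OpenAI `text-embedding-3-small`):
```python
from qdrant_client import QdrantClient
from qdrant_client.http.models import Distance, VectorParams

client = QdrantClient(":memory:")
# Create the "test" collection up front; the vector size must match
# the embedding size of the selected embeddings model.
client.create_collection(
    collection_name="test",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)
```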
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
```python
from langchain_core.messages import AIMessage, HumanMessage

# assumes `prompt_template` is a ChatPromptTemplate with a "msgs" placeholder
messages_to_pass = [
    HumanMessage(content="What's the capital of France?"),
    AIMessage(content="The capital of France is Paris."),
    HumanMessage(content="And what about Germany?"),
]
formatted_prompt = prompt_template.invoke({"msgs": messages_to_pass})
print(formatted_prompt)
```
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
**Description:**
I've added a small clarification to the chatbot tutorial. The tutorial
mentions setting the `LANGSMITH_API_KEY`, but doesn't explain how a new
user can get the key from the website. This change adds a brief note to
guide them to the Settings page.
P.S. This is my first pull request, so I'm excited to learn and
contribute!
**Issue:**
N/A
**Dependencies:**
N/A
**Twitter handle:**
@sohamactive
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Closes #32320
This PR updates the `langgraph_agentic_rag.ipynb` notebook to clarify
that LangGraph does not automatically prepend a `SystemMessage`. A
markdown note and an inline Python comment have been added to guide
users to explicitly include a `SystemMessage` when needed.
This improves documentation for developers working with LangGraph-based
agents and avoids confusion about system-level behavior not being
applied.
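For example, the note encourages explicitly including the system turn (a sketch):
```python
from langchain_core.messages import HumanMessage, SystemMessage

# LangGraph does not prepend a SystemMessage automatically; add it yourself.
inputs = {
    "messages": [
        SystemMessage("You are a helpful research assistant."),
        HumanMessage("Summarize the retrieved documents."),
    ]
}
```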
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Bumps
[actions/download-artifact](https://github.com/actions/download-artifact)
from 4 to 5.
<details>
<summary>Release notes</summary>

Sourced from [actions/download-artifact's releases](https://github.com/actions/download-artifact/releases).

## v5.0.0

What's Changed:
- Update README.md by [@nebuk89](https://github.com/nebuk89) in [actions/download-artifact#407](https://redirect.github.com/actions/download-artifact/pull/407)
- BREAKING fix: inconsistent path behavior for single artifact downloads by ID by [@GrantBirki](https://github.com/GrantBirki) in [actions/download-artifact#416](https://redirect.github.com/actions/download-artifact/pull/416)

### 🚨 Breaking Change

This release fixes an inconsistency in path behavior for single artifact downloads by ID. **If you're downloading single artifacts by ID, the output path may change.**

Previously, **single artifact downloads** behaved differently depending on how you specified the artifact:
- **By name**: `name: my-artifact` → extracted to `path/` (direct)
- **By ID**: `artifact-ids: 12345` → extracted to `path/my-artifact/` (nested)

Now both methods are consistent:
- **By name**: `name: my-artifact` → extracted to `path/` (unchanged)
- **By ID**: `artifact-ids: 12345` → extracted to `path/` (fixed - now direct)

Migration guide:
- ✅ No action needed if you download artifacts by **name**, download **multiple** artifacts by ID, or already use `merge-multiple: true` as a workaround.
- ⚠️ Action required if you download **single artifacts by ID** and your workflows expect the nested directory structure (where `my-artifact` is the name of the artifact you previously uploaded).

Before v5 (nested structure):

```yaml
- uses: actions/download-artifact@v4
  with:
    artifact-ids: 12345
    path: dist
# Files were in: dist/my-artifact/
```

To maintain old behavior (if needed): ... (truncated)

See the [full diff](https://github.com/actions/download-artifact/compare/v4...v5).
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
**Description:**
In the `docs/docs/how_to/structured_output.ipynb` notebook, an
`AIMessage` within the tool-calling few-shot example was missing the
`name="example_assistant"` parameter. This was inconsistent with the
other `AIMessage` instances in the same list.
This change adds the missing `name` parameter to ensure all examples in
the section are consistent, improving the clarity and correctness of the
documentation.
**Issue:** N/A
**Dependencies:** N/A
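For reference, the consistent form looks like (a sketch):
```python
from langchain_core.messages import AIMessage

# Every assistant turn in the few-shot list carries the same name.
example_turn = AIMessage(content="...", name="example_assistant")
```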
While trying the line `People.schema()`, a deprecation warning appeared
("The `schema` method is deprecated; use `model_json_schema` instead").
Made the change, and the file now works.
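A minimal sketch of the change:
```python
from pydantic import BaseModel

class People(BaseModel):
    name: str

# Deprecated in Pydantic v2 (emits the warning above):
# People.schema()

# Preferred:
print(People.model_json_schema())
```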
Description:
Corrected the guide title from "How deal with high cardinality
categoricals" to "How to deal with high-cardinality categoricals".
- Added missing "to" for grammatical correctness.
- Hyphenated "high-cardinality" for standard compound adjective usage.
Issue:
N/A
Dependencies:
None
Twitter handle:
https://x.com/mishraravibhush
**Description**
Updated the quick setup instructions for JaguarDB in the documentation.
Replaced the outdated Docker image `jaguardb/jaguardb_with_http` with
the current recommended image `jaguardb/jaguardb` for pulling and
running the server.
Not all retrievers use `k` as the param name for setting the number of
results to return, even within LangChain itself. E.g.:
bc4251b9e0/libs/core/langchain_core/indexing/in_memory.py (L31)
So it's helpful to be able to change it for a given retriever.
The change also adds hints to disable the tests if the retriever doesn't
support setting the param in the constructor or in the invoke method
(for instance, the `InMemoryDocumentIndex` in the link supports in the
constructor but not in the invoke method).
This change is backward compatible.
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
**Description:** fix an issue I discovered when attempting to merge
messages in which one message has an `index` key in its content
dictionary and another does not.
**Description:** This PR improves the contribution setup guide by adding
comprehensive Windows-specific instructions. The changes address a
common pain point for Windows contributors who don't have `make`
installed by default, making the LangChain contribution process more
accessible across different operating systems.
The main improvements include:
- Added a dedicated "Windows Users" section with multiple installation
options for `make` (Chocolatey, Scoop, WSL)
- Provided direct `uv` commands as alternatives to all `make` commands
throughout the setup guide
- Included Windows-specific instructions for testing, formatting,
linting, and spellchecking
- Enhanced the documentation to be more inclusive for Windows developers
This change makes it easier for Windows users to contribute to LangChain
without requiring additional tool installation, while maintaining the
existing workflow for users who already have `make` available.
**Issue:** This addresses the common barrier Windows users face when
trying to contribute to LangChain due to missing `make` commands.
**Dependencies:** None required - this is purely a documentation
improvement.
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
## **Description:**
Updated incorrect package names across multiple integration docs by
replacing underscores with hyphens to reflect their actual names on
PyPI. This aligns with the actual PyPI package names and prevents
potential confusion or installation issues.
## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
langchain-gradientai is DigitalOcean's integration with LangChain. It
helps users build LangChain applications using DigitalOcean's
GradientAI platform.
---------
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Description:
Fixed minor typos in the `google_imagen.ipynb` integration notebook
related to image generation prompt formatting. No functional changes
were made — just a documentation correction to improve clarity.
## **Description:**
Updated incorrect package names in `FeatureTables.js` by replacing
underscores with hyphens to reflect their actual names on PyPI. This
aligns with the actual PyPI package names and prevents potential
confusion or installation issues.
The following package names were corrected:
- `langchain_aws` ➝ `langchain-aws`
- `langchain_community` ➝ `langchain-community`
- `langchain_elasticsearch` ➝ `langchain-elasticsearch`
- `langchain_google_community` ➝ `langchain-google-community`
## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
Description: Documentation is inconsistent with API docs.
Current documentation implies that to use the integration you must have
credentials configured AND store the path to a service account JSON
file.
API docs explain that you must only complete EITHER of the steps
regarding credentials.
I have updated the docs to make them consistent with the API wording.
## **Description:**
Refactored multiple entries in `kv_store_feat_table.py` to ensure that
all vector store metadata is accurate, consistent, and aligned with
LangChain's latest documentation structure and PyPI naming standards.
**Key improvements across all updated entries:**
- Updated `class` links to point to their respective **docs-based
integration pages** (e.g., `/docs/integrations/stores/...`) instead of
raw API reference URLs.
- Corrected `package` display names to use **hyphenated PyPI-compliant
names** (e.g., `langchain-astradb` instead of `langchain_astradb`).
- Updated `package` links to point to the **specific class-level API
references** (e.g., `/api_reference/.../storage/...ClassName.html`) for
precision.
These improvements enhance:
- Navigation experience for users
- Alignment with PyPI and docs naming conventions
- Clarity across LangChain’s integrations documentation
## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
docs(alpha_vantage): add link for ALPHAVANTAGE_API_KEY generation in
integration notebook
**Description:**
This PR updates the `docs/docs/integrations/tools/alpha_vantage.ipynb`
integration notebook to help users locate the API key registration page
for Alpha Vantage. The following markdown line was added:
---------
Co-authored-by: Mason Daugherty <mason@langchain.dev>
You may use the button above, or follow these steps to open this repo in a Codespace:
1. Click **Create codespace on master**.
For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).
## VS Code Dev Containers
[<img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square" alt="Open in Dev Containers">](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
> [!NOTE]
> If you click the link above you will open the main repo (`langchain-ai/langchain`) and *not* your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
To learn how to contribute to LangChain, please follow the [contribution guide here](https://docs.langchain.com/oss/python/contributing).
## New features
For new features, please start a new [discussion](https://forum.langchain.com/), where the maintainers will help with scoping out the necessary changes.
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the LangChain forum.
labels: ["bug"]
type: bug
body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to file a bug report.

        Use this to report BUGS in LangChain. For usage questions, feature requests and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
description: Request a new feature or enhancement for LangChain. For questions, please use the LangChain forum.
labels: ["feature request"]
type: feature
body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to request a new feature.

        Use this to request NEW FEATURES or ENHANCEMENTS in LangChain. For bug reports, please use the bug report template. For usage questions and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).

        Relevant links to check before filing a feature request to see if your request has already been made or

        If you are not a LangChain maintainer, employee, or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.

        You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository or are a regular contributor to LangChain with previous merged pull requests.
description: Create a task for project management and tracking by LangChain maintainers. If you are not a maintainer, please use other templates or the forum.
labels: ["task"]
type: task
body:
  - type: markdown
    attributes:
      value: |
        Thanks for creating a task to help organize LangChain development.

        This template is for **maintainer tasks** such as project management, development planning, refactoring, documentation updates, and other organizational work.

        If you are not a LangChain maintainer or were not asked directly by a maintainer to create a task, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead or use the appropriate bug report or feature request templates on the previous page.
  - type: checkboxes
    id: maintainer
    attributes:
      label: Maintainer task
      description: Confirm that you are allowed to create a task here.
      options:
        - label: I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create a task here.
          required: true
  - type: textarea
    id: task-description
    attributes:
      label: Task Description
      description: |
        Provide a clear and detailed description of the task.
        What needs to be done? Be specific about the scope and requirements.
      placeholder: |
        This task involves...
        The goal is to...
        Specific requirements:
        - ...
        - ...
    validations:
      required: true
  - type: textarea
    id: acceptance-criteria
    attributes:
      label: Acceptance Criteria
      description: |
        Define the criteria that must be met for this task to be considered complete.
        What are the specific deliverables or outcomes expected?
      placeholder: |
        This task will be complete when:
        - [ ] ...
        - [ ] ...
        - [ ] ...
    validations:
      required: true
  - type: textarea
    id: context
    attributes:
      label: Context and Background
      description: |
        Provide any relevant context, background information, or links to related issues/PRs.
        Why is this task needed? What problem does it solve?
      placeholder: |
        Background:
        - ...
        Related issues/PRs:
        - #...
        Additional context:
        - ...
    validations:
      required: false
  - type: textarea
    id: dependencies
    attributes:
      label: Dependencies
      description: |
        List any dependencies or blockers for this task.
        Are there other tasks, issues, or external factors that need to be completed first?
[<img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square" alt="Open in Dev Containers">](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
LangChain is a framework for building LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.
```bash
pip install -U langchain
```
---
**Documentation**: To learn more about LangChain, check out [the docs](https://python.langchain.com/docs/introduction/).
If you're looking for more advanced customization or agent orchestration, check out [LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building controllable agent workflows.
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
## Why use LangChain?
LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
Use LangChain for:
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain’s vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application’s needs. As the industry frontier evolves, adapt quickly — LangChain’s abstractions keep you moving without losing momentum.
## LangChain’s ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
To improve your LLM application development, pair LangChain with:
- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows — and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- [LangGraph Platform](https://docs.langchain.com/langgraph-platform) - Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
## Additional resources
- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key concepts behind the LangChain framework.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on navigating base packages and integrations for LangChain.
- [Chat LangChain](https://chat.langchain.com/): Ask questions & chat with our documentation.
## Best practices
When building such applications, developers should remember to follow good security practices:
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc., as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Example scenarios with mitigation strategies:
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.
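To make the read-only mitigation concrete, here is a minimal sketch; the connection string, role, and table names are hypothetical, not from this repo:

```python
# Hypothetical sketch: hand the agent a connection that cannot mutate data.
# Assumes a Postgres role `readonly_user` that has been granted SELECT only.
from langchain_community.utilities import SQLDatabase
from sqlalchemy import create_engine

engine = create_engine("postgresql://readonly_user:secret@localhost:5432/appdb")

# Limit the tables the agent can even see, as an extra layer of defense.
db = SQLDatabase(engine, include_tables=["orders", "customers"])
```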
If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
## Reporting OSS Vulnerabilities
Before reporting a vulnerability, please review:
1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://docs.langchain.com/oss/python/contributing/code#supporting-packages) monorepo structure.
3) The [Best Practices](#best-practices) above to understand what we consider to be a security vulnerability vs. developer responsibility.
### In-Scope Targets
All out of scope targets defined by huntr as well as:
* **tools**: Please review the [Best Practices](#best-practices)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
for the security of their tools.
* Code documented with security notices. This will be decided on a case-by-case basis, but likely will not be eligible for a bounty as the code is already
documented with guidelines for developers that should be followed for making their
application secure.
* Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform Retrieval-Augmented-Generation (RAG) on locally downloaded open-source models using LangChain and open-source tools, executed on an Intel Xeon CPU. Shows how to apply RAG to the Llama 2 model, enabling it to answer queries related to Intel's Q1 2024 earnings release.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
[contextual_rag.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/contextual_rag.ipynb) | Performs contextual retrieval-augmented generation (RAG) prepending chunk-specific explanatory context to each chunk before embedding.
[rag-agents-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/local_rag_agents_intel_cpu.ipynb) | Build a RAG agent locally with open source models that routes questions through one of two paths to find answers. The agent generates answers based on documents retrieved from either the vector database or retrieved from web search. If the vector database lacks relevant information, the agent opts for web search. Open-source models for LLM and embeddings are used locally on an Intel Xeon CPU to execute this pipeline.
[rag_mlflow_tracking_evaluation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_mlflow_tracking_evaluation.ipynb) | Guide on how to create a RAG pipeline and track + evaluate it with MLflow.
"# RAG Pipeline with MLflow Tracking, Tracing & Evaluation\n",
"\n",
"This notebook demonstrates how to build a complete Retrieval-Augmented Generation (RAG) pipeline using LangChain and integrate it with MLflow for experiment tracking, tracing, and evaluation.\n",
"\n",
"\n",
"- **RAG Pipeline Construction**: Build a complete RAG system using LangChain components\n",
"- **MLflow Integration**: Track experiments, parameters, and artifacts\n",
" \"system_prompt\": \"You are a helpful assistant. Use the following context to answer the question. Use three sentences maximum and keep the answer concise.\",\n",
" \"llm\": \"gpt-5-nano\",\n",
" \"temperature\": 0,\n",
"}"
]
},
{
"cell_type": "markdown",
"id": "8a2985f1",
"metadata": {},
"source": [
"#### ArXiv Dcoument Loading and Processing"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1f32aa36",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'Published': '2023-08-02', 'Title': 'Attention Is All You Need', 'Authors': 'Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin', 'Summary': 'The dominant sequence transduction models are based on complex recurrent or\\nconvolutional neural networks in an encoder-decoder configuration. The best\\nperforming models also connect the encoder and decoder through an attention\\nmechanism. We propose a new simple network architecture, the Transformer, based\\nsolely on attention mechanisms, dispensing with recurrence and convolutions\\nentirely. Experiments on two machine translation tasks show these models to be\\nsuperior in quality while being more parallelizable and requiring significantly\\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014\\nEnglish-to-German translation task, improving over the existing best results,\\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\\ntranslation task, our model establishes a new single-model state-of-the-art\\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction\\nof the training costs of the best models from the literature. We show that the\\nTransformer generalizes well to other tasks by applying it successfully to\\nEnglish constituency parsing both with large and limited training data.'}\n"
]
}
],
"source": [
"# Load documents from ArXiv\n",
"loader = ArxivLoader(\n",
" query=\"1706.03762\",\n",
" load_max_docs=1,\n",
")\n",
"docs = loader.load()\n",
"print(docs[0].metadata)\n",
"\n",
"# Split documents into chunks\n",
"splitter = RecursiveCharacterTextSplitter(\n",
" chunk_size=CONFIG[\"chunk_size\"],\n",
" chunk_overlap=CONFIG[\"chunk_overlap\"],\n",
")\n",
"chunks = splitter.split_documents(docs)\n",
"\n",
"\n",
"# Join chunks into a single string\n",
"def join_chunks(chunks):\n",
" return \"\\n\\n\".join([chunk.page_content for chunk in chunks])"
"Create a prediction function decorated with `@mlflow.trace` to automatically log:\n",
"- Input queries\n",
"- Retrieved documents\n",
"- Generated responses\n",
"- Execution time\n",
"- Chain intermediate steps"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "7b45fc04",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Question: What is the main idea of the paper?\n",
"Response: The main idea is to replace recurrent/convolutional sequence models with a pure attention-based architecture called the Transformer. It uses self-attention to model dependencies between all positions in the input and output, enabling full parallelization and better handling of long-range relations. This approach achieves strong results on translation and can extend to other modalities.\n"
]
}
],
"source": [
"@mlflow.trace\n",
"def predict_fn(question: str) -> str:\n",
" return rag_chain.invoke(question)\n",
"\n",
"\n",
"# Test the prediction function\n",
"sample_question = \"What is the main idea of the paper?\"\n",
"response = predict_fn(sample_question)\n",
"print(f\"Question: {sample_question}\")\n",
"print(f\"Response: {response}\")"
]
},
{
"cell_type": "markdown",
"id": "421469de",
"metadata": {},
"source": [
"#### Evaluation Dataset and Scoring\n",
"\n",
"Define an evaluation dataset and run systematic evaluation using [MLflow's built-in scorers](https://mlflow.org/docs/latest/genai/eval-monitor/scorers/llm-judge/predefined/#available-scorers):\n",
"\n",
"<u>Evaluation Components:</u>\n",
"- **Dataset**: Questions with expected concepts and facts\n",
"- **Scorers**: \n",
" - `RelevanceToQuery`: Measures how relevant the response is to the question\n",
" - `Correctness`: Evaluates factual accuracy of the response\n",
" - `ExpectationsGuidelines`: Checks that output matches expectation guidelines\n",
"\n",
"<u>Best Practices:</u>\n",
"- Create diverse test cases covering different query types\n",
"- Include expected concepts to guide evaluation\n",
"- Use multiple scoring metrics for comprehensive assessment"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5c1dc4f2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2025/08/23 20:14:39 INFO mlflow.models.evaluation.utils.trace: Auto tracing is temporarily enabled during the model evaluation for computing some metrics and debugging. To disable tracing, call `mlflow.autolog(disable=True)`.\n",
"2025/08/23 20:14:39 INFO mlflow.genai.utils.data_validation: Testing model prediction with the first sample in the dataset.\n"
For more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/how_to/documentation).
## Structure
The primary documentation is located in the `docs/` directory. This directory contains
both the source files for the main documentation as well as the API reference doc
build process.
### API Reference
API reference documentation is located in `docs/api_reference/` and is generated from
the codebase using Sphinx.
The API reference has additional build steps that differ from the main documentation.
#### Deployment Process
Currently, the build process roughly follows these steps:
1. Using the `api_doc_build.yml` GitHub workflow, the API reference docs are
[built](#build-technical-details) and copied to the `langchain-api-docs-html`
repository. This workflow is triggered either (1) on a routine cron schedule or (2)
manually.
In short, the workflow extracts all `langchain-ai`-org-owned repos defined in
`langchain/libs/packages.yml`, clones them locally (in the workflow runner's file
system), and then builds the API reference RST files (using `create_api_rst.py`).
Following post-processing, the HTML files are pushed to the
`langchain-api-docs-html` repository.
2. After the HTML files are in the `langchain-api-docs-html` repository, they are **not**
automatically published to the [live docs site](https://python.langchain.com/api_reference/).
The docs site is served by Vercel. The Vercel deployment process copies the HTML
files from the `langchain-api-docs-html` repository and deploys them to the live
site. Deployments are triggered on each new commit pushed to `master`.
#### Build Technical Details
The build process creates a virtual monorepo by syncing multiple repositories, then generates comprehensive API documentation:
1. **Repository Sync Phase:**
    - `.github/scripts/prep_api_docs_build.py` - Clones external partner repos and organizes them into the `libs/partners/` structure to create a virtual monorepo for documentation building
2. **RST Generation Phase:**
    - `docs/api_reference/create_api_rst.py` - Main script that **generates RST files** from Python source code
    - Scans `libs/` directories and extracts classes/functions from each module (using `inspect`)
    - Creates `.rst` files using specialized templates for different object types
    - Templates in `docs/api_reference/templates/` (`pydantic.rst`, `runnable_pydantic.rst`, etc.)
3. **HTML Build Phase:**
    - Sphinx-based, uses `sphinx.ext.autodoc` (auto-extracts docstrings from the codebase)
    - `docs/api_reference/conf.py` (Sphinx config) configures `autodoc` and other extensions
    - `sphinx-build` processes the generated `.rst` files into HTML using autodoc
    - `docs/api_reference/scripts/custom_formatter.py` - Post-processes the generated HTML
    - Copies `reference.html` to `index.html` to create the default landing page (artifact? might not need to do this - just put everything in `index.html` directly?)
4. **Deployment:**
    - `.github/workflows/api_doc_build.yml` - Workflow responsible for orchestrating the entire build and deployment process
    - Built HTML files are committed and pushed to the `langchain-api-docs-html` repository
#### Local Build
For local development and testing of API documentation, use the Makefile targets in the repository root:
```bash
# Full build
make api_docs_build
```
Like the CI process, this target:
- Installs the CLI package in editable mode
- Generates RST files for all packages using `create_api_rst.py`
- Builds HTML documentation with Sphinx
- Post-processes the HTML with `custom_formatter.py`
- Opens the built documentation (`reference.html`) in your browser
**Quick Preview:**
```bash
make api_docs_quick_preview API_PKG=openai
```
- Generates RST files for only the specified package (default: `text-splitters`)
- Builds and post-processes HTML documentation
- Opens the preview in your browser
Both targets automatically clean previous builds and handle the complete build pipeline locally, mirroring the CI process but for faster iteration during development.
#### Documentation Standards
**Docstring Format:**
The API reference uses **Google-style docstrings** with reStructuredText markup. Sphinx processes these through the `sphinx.ext.napoleon` extension to generate documentation.
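For example, a minimal Google-style docstring (for a hypothetical helper, not taken from the codebase) looks like:

```python
def add(a: int, b: int) -> int:
    """Add two integers.

    Args:
        a: The first integer.
        b: The second integer.

    Returns:
        The sum of ``a`` and ``b``.

    Raises:
        TypeError: If either argument is not an integer.
    """
    if not isinstance(a, int) or not isinstance(b, int):
        raise TypeError("Both arguments must be integers.")
    return a + b
```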
This guide specifically focuses on what you need to know to work with LangChain in an asynchronous context, assuming that you are already familiar with asynchronous programming.
:::
## LangChain asynchronous APIs
Many LangChain APIs are designed to be asynchronous, allowing you to build efficient and responsive applications.
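For example, chat models expose async counterparts of their sync methods, such as `ainvoke`. A minimal sketch (the model name is illustrative):

```python
import asyncio

from langchain_openai import ChatOpenAI


async def main() -> None:
    model = ChatOpenAI(model="gpt-4o-mini")
    # `ainvoke` is the asynchronous counterpart of `invoke`
    result = await model.ainvoke("Write a haiku about the ocean.")
    print(result.content)


asyncio.run(main())
```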
- **[Vector stores](/docs/concepts/vectorstores)**: Storage of and efficient search over vectors and associated metadata.
- **[Retriever](/docs/concepts/retrievers)**: A component that returns relevant documents from a knowledge base in response to a query.
- **[Retrieval Augmented Generation (RAG)](/docs/concepts/rag)**: A technique that enhances language models by combining them with external knowledge bases.
- **[Agents](/docs/concepts/agents)**: Use a [language model](/docs/concepts/chat_models) to choose a sequence of actions to take. Agents can interact with external resources via [tools](/docs/concepts/tools).
- **[Prompt templates](/docs/concepts/prompt_templates)**: Component for factoring out the static parts of a model "prompt" (usually a sequence of messages). Useful for serializing, versioning, and reusing these static parts.
- **[Output parsers](/docs/concepts/output_parsers)**: Responsible for taking the output of a model and transforming it into a more suitable format for downstream tasks. Output parsers were primarily useful prior to the general availability of [tool calling](/docs/concepts/tool_calling) and [structured outputs](/docs/concepts/structured_outputs).
- **[Few-shot prompting](/docs/concepts/few_shot_prompting)**: A technique for improving model performance by providing a few examples of the task to perform in the prompt.
- **[AIMessage](/docs/concepts/messages#aimessage)**: Represents a complete response from an AI model.
- **[astream_events](/docs/concepts/chat_models#key-methods)**: Stream granular information from [LCEL](/docs/concepts/lcel) chains.
- **[BaseTool](/docs/concepts/tools/#tool-interface)**: The base class for all tools in LangChain.
- **[batch](/docs/concepts/runnables)**: Used to execute a runnable with batch inputs.
- **[bind_tools](/docs/concepts/tool_calling/#tool-binding)**: Allows models to interact with tools.
- **[Caching](/docs/concepts/chat_models#caching)**: Storing results to avoid redundant calls to a chat model.
- **[Chat models](/docs/concepts/multimodality/#multimodality-in-chat-models)**: Chat models that handle multiple data modalities.
An `AIMessage` has the following attributes. The attributes which are **standardized** are marked as such below:

| Attribute | Standardized/Raw | Description |
| --- | --- | --- |
| `tool_calls` | Standardized | Tool calls associated with the message. See [tool calling](/docs/concepts/tool_calling) for details. |
| `invalid_tool_calls` | Standardized | Tool calls with parsing errors associated with the message. See [tool calling](/docs/concepts/tool_calling) for details. |
| `usage_metadata` | Standardized | Usage metadata for a message, such as [token counts](/docs/concepts/tokens). See [Usage Metadata API Reference](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.UsageMetadata.html). |
| `id` | Standardized | An optional unique identifier for the message, ideally provided by the provider/model that created the message. See [Message IDs](#message-ids) for details. |
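These attributes can be read directly off an `AIMessage` returned by a chat model. A minimal sketch (the model name is illustrative):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")
msg = model.invoke("Hello!")

print(msg.id)              # unique identifier, ideally assigned by the provider
print(msg.usage_metadata)  # standardized token counts, e.g. input/output tokens
print(msg.tool_calls)      # empty list unless tools were bound and called
```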
At the moment, the output of the model will be in terms of LangChain messages, so you will
need to convert it if you need OpenAI format for the output as well.
The [convert_to_openai_messages](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.convert_to_openai_messages.html) utility function can be used to convert from LangChain messages to OpenAI format.
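A quick sketch of the conversion (output shown as a comment):

```python
from langchain_core.messages import AIMessage, HumanMessage, convert_to_openai_messages

messages = [HumanMessage("Hello"), AIMessage("Hi there!")]
print(convert_to_openai_messages(messages))
# [{'role': 'user', 'content': 'Hello'}, {'role': 'assistant', 'content': 'Hi there!'}]
```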
## Message IDs
LangChain messages include an optional `id` field that serves as a unique identifier. Understanding when and how these IDs are assigned can be helpful for debugging, tracing, and working with message history.
### When Messages Get IDs
Messages receive IDs in the following scenarios:
**Automatically assigned by LangChain:**
- When generated through chat model invocation (`.invoke()`, `.stream()`, `.astream()`) with an active run manager/tracing context
The key methods to execute the function associated with the **tool**:
- **invoke**: Invokes the tool with the given arguments.
- **ainvoke**: Invokes the tool with the given arguments, asynchronously. Used for [async programming with LangChain](/docs/concepts/async).
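For instance, with a tool defined via the `@tool` decorator (a minimal sketch):

```python
import asyncio

from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b


print(multiply.invoke({"a": 2, "b": 3}))                # 6, synchronous
print(asyncio.run(multiply.ainvoke({"a": 2, "b": 3})))  # 6, asynchronous
```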
## Create tools using the `@tool` decorator
## Tool artifacts
**Tools** are utilities that can be called by a model, and whose outputs are designed to be fed back to a model. Sometimes, however, there are artifacts of a tool's execution that we want to make accessible to downstream components in our chain or agent, but that we don't want to expose to the model itself. For example if a tool returns a custom object, a dataframe or an image, we may want to pass some metadata about this output to the model without passing the actual output. At the same time, we may want to be able to access this full output elsewhere, for example in downstream tools.
```python
@tool(response_format="content_and_artifact")
def some_tool(...) -> Tuple[str, Any]:
    """Tool that does something."""
    ...
    return "Message for chat model", some_artifact
```

Please see the [InjectedState](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.InjectedState) documentation for more details.
Please see the [InjectedStore](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.InjectedStore) documentation for more details.
## Tool Artifacts vs. Injected State
Although similar conceptually, tool artifacts in LangChain and [injected state in LangGraph](https://langchain-ai.github.io/langgraph/reference/agents/#langgraph.prebuilt.tool_node.InjectedState) serve different purposes and operate at different levels of abstraction.
**Tool Artifacts**
- **Purpose:** Store and pass data between tool executions within a single chain/workflow
- **Scope:** Limited to tool-to-tool communication
- **Lifecycle:** Tied to individual tool calls and their immediate context
- **Usage:** Temporary storage for intermediate results that tools need to share
**Injected State (LangGraph)**
- **Purpose:** Maintain persistent state across the entire graph execution
- **Scope:** Global to the entire graph workflow
- **Lifecycle:** Persists throughout the entire graph execution and can be saved/restored
- **Usage:** Long-term state management, conversation memory, user context, workflow checkpointing
Tool artifacts are ephemeral data passed between tools, while injected state is persistent workflow-level state that survives across multiple steps, tool calls, and even execution sessions in LangGraph.
## Best practices
When designing tools to be used by models, keep the following in mind:
This project utilizes [uv](https://docs.astral.sh/uv/) v0.5+ as a dependency manager.
Install `uv`: **[documentation on how to install it](https://docs.astral.sh/uv/getting-started/installation/)**.
### Windows Users
If you're on Windows and don't have `make` installed, you can install it via:
- **Option 1**: Install via [Chocolatey](https://chocolatey.org/): `choco install make`
- **Option 2**: Install via [Scoop](https://scoop.sh/): `scoop install make`
- **Option 3**: Use [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/)
- **Option 4**: Use the direct `uv` commands shown in the sections below
## Different packages
This repository contains multiple packages:
Run `uv sync` to install dependencies, then verify the installation:
```bash
# If you have `make` installed:
make test
# If you don't have `make` (Windows alternative):
uv run --group test pytest -n auto --disable-socket --allow-unix-socket tests/unit_tests
```
## Testing
If you add new logic, please add a unit test.
To run unit tests:
```bash
# If you have `make` installed:
make test
# If you don't have make (Windows alternative):
uv run --group test pytest -n auto --disable-socket --allow-unix-socket tests/unit_tests
```
There are also [integration tests and code-coverage](../testing.mdx) available.
If you are only developing `langchain_core`, you can simply install the dependencies and run tests:
```bash
cd libs/core
# If you have `make` installed:
make test
# If you don't have `make` (Windows alternative):
uv run --group test pytest -n auto --disable-socket --allow-unix-socket tests/unit_tests
```
## Formatting and linting
Formatting for this project is done via [ruff](https://docs.astral.sh/ruff/rules/).
To run formatting for docs, cookbook and templates:
```bash
# If you have `make` installed:
make format
# If you don't have make (Windows alternative):
uv run --all-groups ruff format .
uv run --all-groups ruff check --fix .
```
To run formatting for a library, run the same command from the relevant library directory:
```bash
cd libs/{LIBRARY}
# If you have `make` installed:
make format
# If you don't have make (Windows alternative):
uv run --all-groups ruff format .
uv run --all-groups ruff check --fix .
```
Additionally, you can run the formatter only on the files that have been modified in your current branch as compared to the master branch using the format_diff command:
```bash
# If you have `make` installed:
make format_diff
# If you don't have `make` (Windows alternative):
# Get the list of modified files and format them:
git diff --relative=libs/langchain --name-only --diff-filter=d master | grep -E '\.py$|\.ipynb$' | xargs uv run --all-groups ruff format
```

This is especially useful when you have made changes to a subset of the project and want to ensure your changes are properly formatted without affecting the rest of the codebase.
Linting for this project is done via a combination of [ruff](https://docs.astral.sh/ruff/rules/) and [mypy](https://mypy.readthedocs.io/en/stable/).
To run linting for docs, cookbook and templates:
```bash
# If you have `make` installed:
make lint
# If you don't have `make` (Windows alternative):
uv run --all-groups ruff check .
uv run --all-groups ruff format . --diff
uv run --all-groups mypy . --cache-dir .mypy_cache
```
To run linting for a library, run the same command from the relevant library directory:
```bash
cd libs/{LIBRARY}
# If you have `make` installed:
make lint
# If you don't have `make` (Windows alternative):
uv run --all-groups ruff check .
uv run --all-groups ruff format . --diff
uv run --all-groups mypy . --cache-dir .mypy_cache
```
In addition, you can run the linter only on the files that have been modified in your current branch as compared to the master branch using the lint_diff command:
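```bash
# If you have `make` installed:
make lint_diff
```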
This can be very helpful when you've made changes to only certain parts of the project and want to ensure your changes meet the linting standards without having to check the entire codebase.
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Spellcheck

Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
Note that `codespell` finds common typos, so it can produce false positives (correctly spelled but rarely used words) and false negatives (misspelled words it does not find).

To check spelling for this project:

```bash
make spell_check
```

To fix spelling in place:

```bash
make spell_fix
```

If codespell is incorrectly flagging a word, you can skip spellcheck for that word by adding it to the codespell config in the `pyproject.toml` file.

### Pre-commit

We use [pre-commit](https://pre-commit.com/) to ensure commits are formatted/linted.

#### Installing Pre-commit

First, install pre-commit:

```bash
# Option 1: Using uv (recommended)
uv tool install pre-commit

# Option 2: Using Homebrew (globally for macOS/Linux)
brew install pre-commit

# Option 3: Using pip
pip install pre-commit
```

Then install the git hook scripts:

```bash
pre-commit install
```
```bash
start "" htmlcov/index.html || open htmlcov/index.html
```
## Snapshot Testing
Some tests use [syrupy](https://github.com/tophat/syrupy) for snapshot testing, which captures the output of functions and compares them to stored snapshots. This is particularly useful for testing JSON schema generation and other structured outputs.
### Updating Snapshots
To update snapshots when the expected output has legitimately changed:
```bash
uv run --group test pytest path/to/test.py --snapshot-update
```
### Pydantic Version Compatibility Issues
Pydantic generates different JSON schemas across versions, which can cause snapshot test failures in CI when tests run with different Pydantic versions than what was used to generate the snapshots.
**Symptoms:**
- CI fails with snapshot mismatches showing differences like missing or extra fields.
- Tests pass locally but fail in CI with different Pydantic versions
**Solution:**
Locally update snapshots using the same Pydantic version that CI uses:
1. **Identify the failing Pydantic version** from CI logs (e.g., `2.7.0`, `2.8.0`, `2.9.0`)
2. **Update snapshots with that version:**
```bash
uv run --with "pydantic==2.9.0" --group test pytest tests/unit_tests/path/to/test.py::test_name --snapshot-update
```
3. **Verify compatibility across supported versions:**
```bash
# Test with the version you used to update
uv run --with "pydantic==2.9.0" --group test pytest tests/unit_tests/path/to/test.py::test_name
# Test with other supported versions
uv run --with "pydantic==2.8.0" --group test pytest tests/unit_tests/path/to/test.py::test_name
```
**Note:** Some tests use `@pytest.mark.skipif` decorators to only run with specific Pydantic version ranges (e.g., `PYDANTIC_VERSION_AT_LEAST_210`). Make sure to understand these constraints when updating snapshots.
## Coverage
Code coverage (i.e. the amount of code that is covered by unit tests) helps identify areas of the code that are potentially more or less brittle.
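One possible invocation, assuming `pytest-cov` is available in the test dependency group (the package path here is illustrative):

```bash
uv run --group test pytest --cov=langchain_core --cov-report=html tests/unit_tests
```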
"The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the the model can be swapped in for any other model as it supports the same standard interface.\n",
"The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the model can be swapped in for any other model as it supports the same standard interface.\n",
"If you're using tools with agents, you will likely need an error handling strategy, so the agent can recover from the error and continue execution.\n",
"\n",
"A simple strategy is to throw a `ToolException` from inside the tool and specify an error handler using `handle_tool_error`. \n",
"A simple strategy is to throw a `ToolException` from inside the tool and specify an error handler using `handle_tool_errors`. \n",
"\n",
"When the error handler is specified, the exception will be caught and the error handler will decide which output to return from the tool.\n",
"\n",
"You can set `handle_tool_error` to `True`, a string value, or a function. If it's a function, the function should take a `ToolException` as a parameter and return a value.\n",
"You can set `handle_tool_errors` to `True`, a string value, or a function. If it's a function, the function should take a `ToolException` as a parameter and return a value.\n",
"\n",
"Please note that only raising a `ToolException` won't be effective. You need to first set the `handle_tool_error` of the tool because its default value is `False`."
"Please note that only raising a `ToolException` won't be effective. You need to first set the `handle_tool_errors` of the tool because its default value is `False`."
]
},
{
"id": "9d93b217-1d44-4d31-8956-db9ea680ff4f",
"metadata": {},
"source": [
"Here's an example with the default `handle_tool_error=True` behavior."
"Here's an example with the default `handle_tool_errors=True` behavior."
"<table><thead><tr><th colspan=\"3\">able 1. LUllclll 1ayoul actCCLloll 1110AdCs 111 L1C LayoOulralsel 1110U4cl 200</th></tr><tr><th>Dataset</th><th>| Base Model\\'|</th><th>Notes</th></tr></thead><tbody><tr><td>PubLayNet [38]</td><td>F/M</td><td>Layouts of modern scientific documents</td></tr><tr><td>PRImA</td><td>M</td><td>Layouts of scanned modern magazines and scientific reports</td></tr><tr><td>Newspaper</td><td>F</td><td>Layouts of scanned US newspapers from the 20th century</td></tr><tr><td>TableBank [18]</td><td>F</td><td>Table region on modern scientific and business document</td></tr><tr><td>HJDataset</td><td>F/M</td><td>Layouts of history Japanese documents</td></tr></tbody></table>"
"<table><thead><tr><th colspan=\"3\">Table 1: Current layout detection models in the LayoutParser model zoo</th></tr><tr><th>Dataset</th><th>Base Model1</th><th>Large Model Notes</th></tr></thead><tbody><tr><td>PubLayNet [38]</td><td>F/M</td><td>Layouts of modern scientific documents</td></tr><tr><td>PRImA</td><td>M</td><td>Layouts of scanned modern magazines and scientific reports</td></tr><tr><td>Newspaper</td><td>F</td><td>Layouts of scanned US newspapers from the 20th century</td></tr><tr><td>TableBank [18]</td><td>F</td><td>Table region on modern scientific and business document</td></tr><tr><td>HJDataset</td><td>F/M</td><td>Layouts of history Japanese documents</td></tr></tbody></table>"
"1. [`llama.cpp`](https://github.com/ggerganov/llama.cpp): C++ implementation of llama inference code with [weight optimization / quantization](https://finbarr.ca/how-is-llama-cpp-possible/)\n",
"2. [`gpt4all`](https://docs.gpt4all.io/index.html): Optimized C backend for inference\n",
"3. [`Ollama`](https://ollama.ai/): Bundles model weights and environment into an app that runs on device and serves the LLM\n",
"3. [`ollama`](https://github.com/ollama/ollama): Bundles model weights and environment into an app that runs on device and serves the LLM\n",
"4. [`llamafile`](https://github.com/Mozilla-Ocho/llamafile): Bundles model weights and everything needed to run the model in a single file, allowing you to run the LLM locally from this file without any additional installation steps\n",
"\n",
"In general, these frameworks will do a few things:\n",
"\n",
"## Quickstart\n",
"\n",
"[`Ollama`](https://ollama.ai/) is one way to easily run inference on macOS.\n",
"[Ollama](https://ollama.com/) is one way to easily run inference on macOS.\n",
" \n",
"The instructions [here](https://github.com/jmorganca/ollama?tab=readme-ov-file#ollama) provide details, which we summarize:\n",
"The instructions [here](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) provide details, which we summarize:\n",
" \n",
"* [Download and run](https://ollama.ai/download) the app\n",
"* From command line, fetch a model from this [list of options](https://github.com/jmorganca/ollama): e.g., `ollama pull llama3.1:8b`\n",
"* From command line, fetch a model from this [list of options](https://ollama.com/search): e.g., `ollama pull gpt-oss:20b`\n",
"* When the app is running, all models are automatically served on `localhost:11434`\n"
"llm.invoke(\"The first man on the moon was ...\")"
"llm.invoke(\"The first man on the moon was ...\").content"
]
},
{
"\n",
"### Running Apple silicon GPU\n",
"\n",
"`Ollama` and [`llamafile`](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#gpu-support) will automatically utilize the GPU on Apple devices.\n",
"`ollama` and [`llamafile`](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#gpu-support) will automatically utilize the GPU on Apple devices.\n",
" \n",
"Other frameworks require the user to set up the environment to utilize the Apple GPU.\n",
"\n",
"\n",
"In particular, ensure that conda is using the correct virtual environment that you created (`miniforge3`).\n",
"1. [`HuggingFace`](https://huggingface.co/TheBloke) - Many quantized model are available for download and can be run with framework such as [`llama.cpp`](https://github.com/ggerganov/llama.cpp). You can also download models in [`llamafile` format](https://huggingface.co/models?other=llamafile) from HuggingFace.\n",
"2. [`gpt4all`](https://gpt4all.io/index.html) - The model explorer offers a leaderboard of metrics and associated quantized models available for download \n",
"3. [`Ollama`](https://github.com/jmorganca/ollama) - Several models can be accessed directly via `pull`\n",
"3. [`ollama`](https://github.com/jmorganca/ollama) - Several models can be accessed directly via `pull`\n",
"\n",
"### Ollama\n",
"\n",
"With [Ollama](https://github.com/jmorganca/ollama), fetch a model via `ollama pull <model family>:<tag>`:\n",
"\n",
"* E.g., for Llama 2 7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n",
"* We can also specify a particular version from the [model list](https://github.com/jmorganca/ollama?tab=readme-ov-file#model-library), e.g., `ollama pull llama2:13b`\n",
"* See the full set of parameters on the [API reference page](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.ollama.Ollama.html)"
"With [Ollama](https://github.com/ollama/ollama), fetch a model via `ollama pull <model family>:<tag>`."
]
},
{
"cell_type": "code",
"execution_count": 42,
"execution_count": null,
"id": "8ecd2f78",
"metadata": {},
"outputs": [
}
],
"source": [
"llm = OllamaLLM(model=\"llama2:13b\")\n",
"llm = ChatOllama(model=\"gpt-oss:20b\")\n",
"llm.invoke(\"The first man on the moon was ... think step by step\")"
"# How deal with highcardinality categoricals when doing query analysis\n",
"# How to deal with high-cardinality categoricals when doing query analysis\n",
"\n",
"You may want to do query analysis to create a filter on a categorical column. One of the difficulties here is that you usually need to specify the EXACT categorical value. The issue is you need to make sure the LLM generates that categorical value exactly. This can be done relatively easy with prompting when there are only a few values that are valid. When there are a high number of valid values then it becomes more difficult, as those values may not fit in the LLM context, or (if they do) there may be too many for the LLM to properly attend to.\n",
"`with_structured_output()` internally uses tool calling to enforce the schema. When you bind additional tools afterward, it creates a conflict in the tool resolution system."
"For a model to be able to call tools, we need to pass in tool schemas that describe what the tool does and what it's arguments are. Chat models that support tool calling features implement a `.bind_tools()` method for passing tool schemas to the model. Tool schemas can be passed in as Python functions (with typehints and docstrings), Pydantic models, TypedDict classes, or LangChain [Tool objects](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#basetool). Subsequent invocations of the model will pass in these tool schemas along with the prompt.\n",
"For a model to be able to call tools, we need to pass in tool schemas that describe what the tool does and what its arguments are. Chat models that support tool calling features implement a `.bind_tools()` method for passing tool schemas to the model. Tool schemas can be passed in as Python functions (with typehints and docstrings), Pydantic models, TypedDict classes, or LangChain [Tool objects](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#basetool). Subsequent invocations of the model will pass in these tool schemas along with the prompt.\n",
"To keep the most recent messages, we set `strategy=\"last\"`. We'll also set `include_system=True` to include the `SystemMessage`, and `start_on=\"human\"` to make sure the resulting chat history is valid. \n",
"\n",
"This is a good default configuration when using `trim_messages` based on token count. Remember to adjust `token_counter` and `max_tokens` for your use case.\n",
"This is a good default configuration when using `trim_messages` based on token count. Remember to adjust `token_counter` and `max_tokens` for your use case. Keep in mind that new queries added to the chat history will be included in the token count unless you trim prior to adding the new query.\n",
"\n",
"Notice that for our `token_counter` we can pass in a function (more on that below) or a language model (since language models have a message token counting method). It makes sense to pass in a model when you're trimming your messages to fit into the context window of that specific model:"
]
"id": "4d91d390-e7f7-467b-ad87-d100411d7a21",
"metadata": {},
"source": [
"Looking at the LangSmith trace we can see that before the messages are passed to the model they are first trimmed: https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r\n",
"Looking at [the LangSmith trace](https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r) we can see that before the messages are passed to the model they are first trimmed.\n",
"\n",
"Looking at just the trimmer, we can see that it's a Runnable object that can be invoked like all Runnables:"
]
"id": "556b7b4c-43cb-41de-94fc-1a41f4ec4d2e",
"metadata": {},
"source": [
"Looking at the LangSmith trace we can see that we retrieve all of our messages but before the messages are passed to the model they are trimmed to be just the system message and last human message: https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r"
"Looking at [the LangSmith trace](https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r) we can see that we retrieve all of our messages but before the messages are passed to the model they are trimmed to be just the system message and last human message."
]
},
{
"source": [
"## API reference\n",
"\n",
"For a complete description of all arguments head to the API reference: https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.trim_messages.html"
"For a complete description of all arguments head to the [API reference](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.trim_messages.html)."
"You can, by default, use the `DeepEvalCallbackHandler` to set up the metrics you want to track. However, this has limited support for metrics at the moment (more to be added soon). It currently supports:\n",
"LLMResult(generations=[[Generation(text='\\n\\nQ: What did the fish say when he hit the wall? \\nA: Dam.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nThe Moon \\n\\nThe moon is high in the midnight sky,\\nSparkling like a star above.\\nThe night so peaceful, so serene,\\nFilling up the air with love.\\n\\nEver changing and renewing,\\nA never-ending light of grace.\\nThe moon remains a constant view,\\nA reminder of life’s gentle pace.\\n\\nThrough time and space it guides us on,\\nA never-fading beacon of hope.\\nThe moon shines down on us all,\\nAs it continues to rise and elope.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nQ. What did one magnet say to the other magnet?\\nA. \"I find you very attractive!\"', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=\"\\n\\nThe world is charged with the grandeur of God.\\nIt will flame out, like shining from shook foil;\\nIt gathers to a greatness, like the ooze of oil\\nCrushed. Why do men then now not reck his rod?\\n\\nGenerations have trod, have trod, have trod;\\nAnd all is seared with trade; bleared, smeared with toil;\\nAnd wears man's smudge and shares man's smell: the soil\\nIs bare now, nor can foot feel, being shod.\\n\\nAnd for all this, nature is never spent;\\nThere lives the dearest freshness deep down things;\\nAnd though the last lights off the black West went\\nOh, morning, at the brown brink eastward, springs —\\n\\nBecause the Holy Ghost over the bent\\nWorld broods with warm breast and with ah! bright wings.\\n\\n~Gerard Manley Hopkins\", generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nQ: What did one ocean say to the other ocean?\\nA: Nothing, they just waved.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=\"\\n\\nA poem for you\\n\\nOn a field of green\\n\\nThe sky so blue\\n\\nA gentle breeze, the sun above\\n\\nA beautiful world, for us to love\\n\\nLife is a journey, full of surprise\\n\\nFull of joy and full of surprise\\n\\nBe brave and take small steps\\n\\nThe future will be revealed with depth\\n\\nIn the morning, when dawn arrives\\n\\nA fresh start, no reason to hide\\n\\nSomewhere down the road, there's a heart that beats\\n\\nBelieve in yourself, you'll always succeed.\", generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'completion_tokens': 504, 'total_tokens': 528, 'prompt_tokens': 24}, 'model_name': 'text-davinci-003'})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(\n",
" temperature=0,\n",
" callbacks=[deepeval_callback],\n",
" verbose=True,\n",
" openai_api_key=\"<YOUR_API_KEY>\",\n",
")\n",
"output = llm.generate(\n",
" [\n",
" \"What is the best evaluation tool out there? (no bias at all)\",\n",
" ]\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then check the metric if it was successful by calling the `is_successful()` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"answer_relevancy_metric.is_successful()\n",
"# returns True/False"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you have ran that, you should be able to see our dashboard below. \n",
"You can create your own custom metrics [here](https://docs.confident-ai.com/docs/quickstart/custom-metrics). \n",
"\n",
"DeepEval also offers other features such as being able to [automatically create unit tests](https://docs.confident-ai.com/docs/quickstart/synthetic-data-creation), [tests for hallucination](https://docs.confident-ai.com/docs/measuring_llm_performance/factual_consistency).\n",
"\n",
"If you are interested, check out our Github repository here [https://github.com/confident-ai/deepeval](https://github.com/confident-ai/deepeval). We welcome any PRs and discussions on how to improve LLM performance."
"This page will help you get started with AI/ML API [chat models](/docs/concepts/chat_models.mdx). For detailed documentation of all ChatAimlapi features and configurations, head to the [API reference](https://docs.aimlapi.com/?utm_source=langchain&utm_medium=github&utm_campaign=integration).\n",
"\n",
"AI/ML API provides access to **300+ models** (Deepseek, Gemini, ChatGPT, etc.) via high-uptime and high-rate API."
]
},
{
"cell_type": "markdown",
"id": "512f94fa4bea2628",
"metadata": {
"collapsed": false
},
"source": [
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"To access AI/ML API models, sign up at [aimlapi.com](https://aimlapi.com/app/?utm_source=langchain&utm_medium=github&utm_campaign=integration), generate an API key, and set the `AIMLAPI_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b26280519672f194",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:16:58.837623Z",
"start_time": "2025-08-07T07:16:55.346214Z"
}
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"AIMLAPI_API_KEY\" not in os.environ:\n",
" os.environ[\"AIMLAPI_API_KEY\"] = getpass.getpass(\"Enter your AI/ML API key: \")"
]
},
{
"cell_type": "markdown",
"id": "fa131229e62dfd47",
"metadata": {
"collapsed": false
},
"source": [
"### Installation\n",
"Install the `langchain-aimlapi` package:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3777dc00d768299e",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:17:11.195741Z",
"start_time": "2025-08-07T07:17:02.288142Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-aimlapi"
]
},
{
"cell_type": "markdown",
"id": "d168108b0c4f9d7",
"metadata": {
"collapsed": false
},
"source": [
"## Instantiation\n",
"Now we can instantiate the `ChatAimlapi` model and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f29131e65e47bd16",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:17:23.499746Z",
"start_time": "2025-08-07T07:17:11.196747Z"
}
},
"outputs": [],
"source": [
"from langchain_aimlapi import ChatAimlapi\n",
"\n",
"llm = ChatAimlapi(\n",
" model=\"meta-llama/Llama-3-70b-chat-hf\",\n",
" temperature=0.7,\n",
" max_tokens=512,\n",
" timeout=30,\n",
" max_retries=3,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "861b87289f8e146d",
"metadata": {
"collapsed": false
},
"source": [
"## Invocation\n",
"You can invoke the model with a list of messages:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "430b1cff2e6d77b4",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:17:30.586261Z",
"start_time": "2025-08-07T07:17:29.074409Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"messages = [\n",
" (\"system\", \"You are a helpful assistant that translates English to French.\"),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"\n",
"ai_msg = llm.invoke(messages)\n",
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "5463797524a19b2e",
"metadata": {
"collapsed": false
},
"source": [
"## Chaining\n",
"We can chain the model with a prompt template as follows:"
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"response = chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "fcf0bf10a872355c",
"metadata": {
"collapsed": false
},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatAimlapi features and configurations, visit the [API Reference](https://docs.aimlapi.com/?utm_source=langchain&utm_medium=github&utm_campaign=integration)."
"This notebook provides a quick overview for getting started with Anthropic [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatAnthropic features and configurations head to the [API reference](https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html).\n",
"\n",
"Anthropic has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Anthropic docs](https://docs.anthropic.com/en/docs/models-overview).\n",
"Anthropic has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Anthropic docs](https://docs.anthropic.com/en/docs/about-claude/models/overview).\n",
"Anthropic supports a (beta) [token-efficient tool use](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/token-efficient-tool-use) feature. To use it, specify the relevant beta-headers when instantiating the model."
"Anthropic supports a (beta) [token-efficient tool use](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/token-efficient-tool-use) feature. To use it, specify the relevant beta-headers when instantiating the model."
"Claude supports a [search_result](https://docs.anthropic.com/en/docs/build-with-claude/search-results) content block representing citable results from queries against a knowledge base or other custom source. These content blocks can be passed to claude both top-line (as in the above example) and within a tool result. This allows Claude to cite elements of its response using the result of a tool call.\n",
@@ -998,8 +998,6 @@
" ]\n",
"```\n",
"\n",
"We also need to specify the `search-results-2025-06-09` beta when instantiating ChatAnthropic. You can see an end-to-end example below.\n",
"\n",
"<details>\n",
"<summary>End to end example with LangGraph</summary>\n",
"\n",
@@ -1193,6 +1191,40 @@
"response.content"
]
},
{
"cell_type": "markdown",
"id": "74247a07-b153-444f-9c56-77659aeefc88",
"metadata": {},
"source": [
"## Context management\n",
"\n",
"Anthropic supports a context editing feature that will automatically manage the model's context window (e.g., by clearing tool results).\n",
"\n",
"See [Anthropic documentation](https://docs.claude.com/en/docs/build-with-claude/context-editing) for details and configuration options.\n",
"response = llm_with_tools.invoke(\"Search for recent developments in AI\")"
]
},
{
"cell_type": "markdown",
"id": "cbfec7a9-d9df-4d12-844e-d922456dd9bf",
@@ -1200,7 +1232,7 @@
"source": [
"## Built-in tools\n",
"\n",
"Anthropic supports a variety of [built-in tools](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/text-editor-tool), which can be bound to the model in the [usual way](/docs/how_to/tool_calling/). Claude will generate tool calls adhering to its internal schema for the tool:"
"Anthropic supports a variety of [built-in tools](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/text-editor-tool), which can be bound to the model in the [usual way](/docs/how_to/tool_calling/). Claude will generate tool calls adhering to its internal schema for the tool:"
]
},
{
@@ -1210,7 +1242,7 @@
"source": [
"### Web search\n",
"\n",
"Claude can use a [web search tool](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/web-search-tool) to run searches and ground its responses with citations."
"Claude can use a [web search tool](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-search-tool) to run searches and ground its responses with citations."
]
},
{
@@ -1240,6 +1272,110 @@
"response = llm_with_tools.invoke(\"How do I update a web app to TypeScript 5.5?\")"
]
},
{
"cell_type": "markdown",
"id": "kloc4rvd1w",
"metadata": {},
"source": [
"#### Web search + structured output\n",
"\n",
"When combining web search tools with structured output, it's important to **bind the tools first and then apply structured output**:"
"# Now you can use both web search and get structured output\n",
"result = research_llm.invoke(\"Research the latest developments in quantum computing\")\n",
"print(f\"Topic: {result.topic}\")\n",
"print(f\"Summary: {result.summary}\")\n",
"print(f\"Key Points: {result.key_points}\")"
]
},
{
"cell_type": "markdown",
"id": "c580c20a",
"metadata": {},
"source": [
"### Web fetching\n",
"\n",
"Claude can use a [web fetching tool](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-fetch-tool) to run searches and ground its responses with citations."
]
},
{
"cell_type": "markdown",
"id": "5cf6ad08",
"metadata": {},
"source": [
":::info\n",
"Web search tool is supported since ``langchain-anthropic>=0.3.20``\n",
" \"Please analyze the content at https://example.com/article\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "088c41d0",
"metadata": {},
"source": [
":::warning\n",
"Note: you must add the `'web-fetch-2025-09-10'` beta header to use this tool.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "1478cdc6-2e52-4870-80f9-b4ddf88f2db2",
@@ -1249,14 +1385,14 @@
"\n",
"Claude can use a [code execution tool](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/code-execution-tool) to execute Python code in a sandboxed environment.\n",
"\n",
":::info Code execution is supported since ``langchain-anthropic>=0.3.14``\n",
"\n",
":::info\n",
"Code execution is supported since ``langchain-anthropic>=0.3.14``\n",
"Note: you must add the `'code_execution_20250522'` beta header to use this tool.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "24076f91-3a3d-4e53-9618-429888197061",
@@ -1345,6 +1491,38 @@
"</details>"
]
},
{
"cell_type": "markdown",
"id": "29405da2-d2ef-415c-b674-6e29073cd05e",
"metadata": {},
"source": [
"### Memory tool\n",
"\n",
"Claude supports a memory tool for client-side storage and retrieval of context across conversational threads. See docs [here](https://docs.claude.com/en/docs/agents-and-tools/tool-use/memory-tool) for details.\n",
"response = llm_with_tools.invoke(\"What are my interests?\")"
]
},
{
"cell_type": "markdown",
"id": "040f381a-1768-479a-9a5e-aa2d7d77e0d5",
@@ -1354,14 +1532,14 @@
"\n",
"Claude can use a [MCP connector tool](https://docs.anthropic.com/en/docs/agents-and-tools/mcp-connector) for model-generated calls to remote MCP servers.\n",
"\n",
":::info Remote MCP is supported since ``langchain-anthropic>=0.3.14``\n",
"\n",
":::info\n",
"Remote MCP is supported since ``langchain-anthropic>=0.3.14``\n",
"Note: you must add the `'mcp-client-2025-04-04'` beta header to use this tool.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "2fd5d545-a40d-42b1-ad0c-0a79e2536c9b",
@@ -1400,7 +1588,7 @@
"source": [
"### Text editor\n",
"\n",
"The text editor tool can be used to view and modify text files. See docs [here](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/text-editor-tool) for details."
"The text editor tool can be used to view and modify text files. See docs [here](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/text-editor-tool) for details."
"# Invoke the model with a query asking for structured information\n",
"result = structured_llm.invoke(\n",
"result = structured_llm_json.invoke(\n",
" \"Who was the 16th president of the USA, and how tall was he in meters?\"\n",
")\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"id": "g9w06ld1ggq",
"metadata": {},
"source": [
"### Structured Output Methods\n",
"\n",
"Two methods are supported for structured output:\n",
"\n",
"- **`method=\"function_calling\"` (default)**: Uses tool calling to extract structured data. Compatible with all Gemini models.\n",
"- **`method=\"json_mode\"`**: Uses Gemini's native structured output with `responseSchema`. More reliable but requires Gemini 1.5+ models.\n",
"\n",
"The `json_mode` method is **recommended for better reliability** as it constrains the model's generation process directly rather than relying on post-processing tool calls."
"Create an account on DigitalOcean, acquire a `DIGITALOCEAN_INFERENCE_KEY` API key from the Gradient Platform, and install the `langchain-gradient` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [DigitalOcean Login](https://cloud.digitalocean.com/login) \n",
"\n",
"1. Sign up/Login to DigitalOcean Cloud Console\n",
"2. Go to the Gradient Platform and navigate to Serverless Inference.\n",
"3. Click on Create model access key, enter a name, and create the key.\n",
"\n",
"Once you've done this set the `DIGITALOCEAN_INFERENCE_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"DIGITALOCEAN_INFERENCE_KEY\"):\n",
" \"Enter your DIGITALOCEAN_INFERENCE_KEY API key: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The DigitalOcean Gradient integration lives in the `langchain-gradient` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m25.1.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip3.12 install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-gradient"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_gradient import ChatGradient\n",
"\n",
"llm = ChatGradient(\n",
" model=\"llama3.3-70b-instruct\",\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"...that had been hidden away for centuries, nestled amongst the twisted roots of an ancient tree. As soon as Mira's fingers made contact with the stone, she felt an sudden surge of energy course through her veins, like a river bursting its banks. The stone, which had been dull and lifeless just moments before, now pulsed with a soft, ethereal light, as if it had been awakened by Mira's touch.\\n\\nIntrigued, Mira turned the stone over in her hand, studying it from every angle. The light emanating from it cast eerie shadows on the trees around her, making her feel as though she was standing at the threshold of a secret world. As she gazed deeper into the stone, she began to notice that the glow was not just a random color, but a deep, rich blue that seemed to be calling to her.\\n\\nWithout thinking, Mira felt an overwhelming urge to follow the stone's gentle glow, which seemed to be leading her deeper into the mysterious forest. The trees loomed above her, their branches creaking and swaying in the wind, as if they too were urging her onward. The air was filled with the sweet scent of wildflowers and the soft hooting of owls, creating a sense of enchantment that was both exhilarating and unsettling.\\n\\nAs Mira wandered deeper into the forest, the stone's light grew brighter, illuminating a winding path that was all but invisible in the fading light of day. The trees grew taller and closer together here, forming a tunnel of foliage that seemed to be guiding her towards a hidden destination. Mira's heart pounded with excitement and a hint of fear, as she realized that she was being drawn into a world that was both magical and unknown.\\n\\nSuddenly, the trees parted, and Mira found herself standing at the edge of a clearing, surrounded by a ring of towering mushrooms that glowed with a soft, luminescent light. The air was filled with a faint humming noise, like the buzzing of a thousand bees, and the stone in her hand pulsed with an otherworldly energy. In the center of the clearing stood an enormous tree, its trunk twisted and gnarled with age, its branches reaching up towards the stars like a Nature's own cathedral.\\n\\nMira felt a sense of awe wash over her, as she approached the tree, the stone still clutched in her hand. She could feel the magic of the forest pulsing through her, calling to her, drawing her closer to the heart of the mystery. And as she reached out to touch the trunk of the tree, the stone's glow surged to a brilliant intensity, illuminating a doorway that had been hidden in the trunk all along...\", additional_kwargs={}, response_metadata={'finish_reason': 'stop'}, id='run--593a6940-4c76-413b-bed9-1fd94f91c6c1-0', usage_metadata={'input_tokens': 82, 'output_tokens': 555, 'total_tokens': 637})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a creative storyteller. Continue any story prompt you receive in an engaging and imaginative way.\",\n",
" ),\n",
" (\n",
" \"human\",\n",
" \"Once upon a time, in a village at the edge of a mysterious forest, a young girl named Mira found a glowing stone...\",\n",
" ),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"...that had been hidden away for centuries, nestled amongst the twisted roots of an ancient tree. As soon as Mira's fingers made contact with the stone, she felt an sudden surge of energy course through her veins, like a river bursting its banks. The stone, which had been dull and lifeless just moments before, now pulsed with a soft, ethereal light, as if it had been awakened by Mira's touch.\n",
"\n",
"Intrigued, Mira turned the stone over in her hand, studying it from every angle. The light emanating from it cast eerie shadows on the trees around her, making her feel as though she was standing at the threshold of a secret world. As she gazed deeper into the stone, she began to notice that the glow was not just a random color, but a deep, rich blue that seemed to be calling to her.\n",
"\n",
"Without thinking, Mira felt an overwhelming urge to follow the stone's gentle glow, which seemed to be leading her deeper into the mysterious forest. The trees loomed above her, their branches creaking and swaying in the wind, as if they too were urging her onward. The air was filled with the sweet scent of wildflowers and the soft hooting of owls, creating a sense of enchantment that was both exhilarating and unsettling.\n",
"\n",
"As Mira wandered deeper into the forest, the stone's light grew brighter, illuminating a winding path that was all but invisible in the fading light of day. The trees grew taller and closer together here, forming a tunnel of foliage that seemed to be guiding her towards a hidden destination. Mira's heart pounded with excitement and a hint of fear, as she realized that she was being drawn into a world that was both magical and unknown.\n",
"\n",
"Suddenly, the trees parted, and Mira found herself standing at the edge of a clearing, surrounded by a ring of towering mushrooms that glowed with a soft, luminescent light. The air was filled with a faint humming noise, like the buzzing of a thousand bees, and the stone in her hand pulsed with an otherworldly energy. In the center of the clearing stood an enormous tree, its trunk twisted and gnarled with age, its branches reaching up towards the stars like a Nature's own cathedral.\n",
"\n",
"Mira felt a sense of awe wash over her, as she approached the tree, the stone still clutched in her hand. She could feel the magic of the forest pulsing through her, calling to her, drawing her closer to the heart of the mystery. And as she reached out to touch the trunk of the tree, the stone's glow surged to a brilliant intensity, illuminating a doorway that had been hidden in the trunk all along...\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can chain our model with a prompt template like so:\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The Eiffel Tower was designed by Gustave Eiffel\\'s engineering company and was completed in 1889. (Sentence: \"It was designed by Gustave Eiffel\\'s engineering company. The tower is one of the most recognizable structures in the world. ... The Eiffel Tower is located in Paris and was completed in 1889.\")', additional_kwargs={}, response_metadata={'finish_reason': 'stop'}, id='run--c23ffab6-06ae-4130-87b1-d5b2e7744906-0', usage_metadata={'input_tokens': 153, 'output_tokens': 74, 'total_tokens': 227})"
" 'You are a knowledgeable assistant. Carefully read the provided context and answer the user\\'s question. If the answer is present in the context, cite the relevant sentence. If not, reply with \"Not found in context.\"',\n",
"To access `langchain_huggingface` models you'll need to create a/an `Hugging Face` account, get an API key, and install the `langchain_huggingface` integration package.\n",
"To access `langchain_huggingface` models you'll need to create a `Hugging Face` account, get an API key, and install the `langchain-huggingface` integration package.\n",
"To access `ChatMistralAI` models you'll need to create a Mistral account, get an API key, and install the `langchain_mistralai` integration package.\n",
"To access `ChatMistralAI` models you'll need to create a Mistral account, get an API key, and install the `langchain-mistralai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
@@ -80,7 +80,7 @@
"source": [
"### Installation\n",
"\n",
"The LangChain Mistral integration lives in the `langchain_mistralai` package:"
"The LangChain Mistral integration lives in the `langchain-mistralai` package:"
]
},
{
@@ -90,7 +90,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_mistralai"
"%pip install -qU langchain-mistralai"
]
},
{