Compare commits


352 Commits

Author SHA1 Message Date
ccurme
653dc77c7e release(standard-tests): 1.0.0a1 (#32978) 2025-09-16 13:32:53 -04:00
Chester Curme
f6ab75ba8b Merge branch 'master' into wip-v1.0
# Conflicts:
#	libs/langchain/pyproject.toml
#	libs/langchain/uv.lock
2025-09-16 10:46:43 -04:00
ccurme
e63c1d7171 chore(langchain): drop cap on python version (#32974) 2025-09-16 10:44:21 -04:00
Mason Daugherty
b6af0d228c residual lint 2025-09-16 10:31:42 -04:00
Mason Daugherty
5800721fd4 fix lint 2025-09-16 10:28:23 -04:00
Chester Curme
ebfb938a68 Merge branch 'master' into wip-v1.0 2025-09-16 10:20:24 -04:00
ccurme
1a3e9e06ee chore(langchain): (v1) drop 3.9 support (#32970)
Will need to add back optional deps when we have releases of them that
are compatible with langchain-core 1.0 alphas.
2025-09-16 10:14:23 -04:00
Mason Daugherty
8180020b93 chore: restore commented out optional deps (#32971)
langchain & langchain_v1
2025-09-16 10:10:49 -04:00
Username46786
435194acf6 docs: add cross-links between summarization how-to pages (#32968)
This PR improves navigation in the summarization how-to section by adding
cross-links from the single-call guide to the related map-reduce and refine
guides. This mirrors the docs style guide's emphasis on clear
cross-references and should help readers discover the appropriate pattern
for longer texts.

- Source edited: docs/docs/how_to/summarize_stuff.ipynb
- Links added:
  - /docs/how_to/summarize_map_reduce/
  - /docs/how_to/summarize_refine/

Type: docs-only (no code changes)
2025-09-16 09:59:03 -04:00
Mason Daugherty
c7ebbe5c8a Merge branch 'master' into wip-v1.0 2025-09-15 19:26:36 -04:00
Mason Daugherty
244c699551 refactor(cli): drop Python 3.9 (#32964) 2025-09-15 19:22:53 -04:00
Mason Daugherty
369858de19 chore(infra): fix codspeed (#32963)
Related to #32950

CodSpeed v4 runs pytest inside its own runner process, which does not
automatically inherit environment variables from the job.
2025-09-15 21:52:46 +00:00
Mason Daugherty
76dfb7fbe8 fix(cli): lint 2025-09-15 17:32:11 -04:00
Mason Daugherty
f25133c523 fix: version equality 2025-09-15 17:31:16 -04:00
Mason Daugherty
fc7a07d6f1 . 2025-09-15 17:27:20 -04:00
Mason Daugherty
e8ff6f4db6 Merge branch 'master' into wip-v1.0 2025-09-15 17:27:08 -04:00
Ali Ismail
4ebce80fbb docs(langchain): add docstring for _load_map_reduce_chain (#32961)
Description:
Add a docstring to _load_map_reduce_chain in chains/summarize/ to
explain the purpose of the prompt argument and document function
parameters. This addresses an existing TODO in the codebase.

Issue:
N/A (documentation improvement only)

Dependencies:
None
2025-09-15 17:19:20 -04:00
Mason Daugherty
8670b24c8e test(groq): xfail tool integration test (#32960)
Groq models have known issues with tool calling consistency,
[particularly with complex tools derived from
runnables](https://github.com/langchain-ai/langchain/discussions/19990).
[(more)](https://github.com/langchain-ai/langchain/discussions/24309)

xfail until we can dedicate time to wrangling their API/model handling
2025-09-15 14:23:22 -04:00
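A minimal sketch of the xfail pattern described above (test name and body are hypothetical, not the actual integration test):

```python
import pytest


@pytest.mark.xfail(
    reason="Groq models call complex runnable-derived tools inconsistently"
)
def test_tool_calling_with_runnable_tool() -> None:
    # Placeholder: the real test binds a tool to the model, invokes it,
    # and asserts on the resulting tool calls.
    ...
```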
Ademílson Tonato
8d60ddba3a docs: update installation command for firecrawl-py package (#32958) 2025-09-15 14:10:08 -04:00
Mason Daugherty
9f6431924f feat(openai): add max_tokens to AzureChatOpenAI (#32959)
Fixes #32949

This pattern is [present in
`ChatOpenAI`](https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py#L2821)
but wasn't carried over to Azure.


[CI](https://github.com/langchain-ai/langchain/actions/runs/17741751797/job/50417180998)
2025-09-15 14:09:20 -04:00
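A minimal usage sketch (deployment name and API version are placeholder values; assumes Azure credentials are set in the environment), showing that `max_tokens` is now accepted directly, mirroring `ChatOpenAI`:

```python
from langchain_openai import AzureChatOpenAI

# Assumes AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY are exported.
llm = AzureChatOpenAI(
    azure_deployment="my-deployment",  # placeholder deployment name
    api_version="2024-06-01",  # placeholder API version
    max_tokens=256,  # caps completion tokens per request
)
```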
Ali Ismail
569a3d9602 docs(langchain): add docstring for _load_stuff_chain (#32932)
**Description:**  
Add a docstring to `_load_stuff_chain` in `chains/summarize/` to explain
the purpose of the `prompt` argument and document function parameters.
This addresses an existing TODO in the codebase.

**Issue:**  
N/A (documentation improvement only)

**Dependencies:**  
None
2025-09-15 10:02:50 -04:00
dependabot[bot]
8ef4df903f chore(infra): bump CodSpeedHQ/action from 3 to 4 (#32950)
Bumps [CodSpeedHQ/action](https://github.com/codspeedhq/action) from 3
to 4.
Release notes, sourced from [CodSpeedHQ/action's
releases](https://github.com/codspeedhq/action/releases):

**v4.0.0 (BREAKING):** it is now required to explicitly set the runner mode
to `instrumentation` or `walltime`, using either the `mode` argument or the
`CODSPEED_RUNNER_MODE` environment variable. Before, this variable was
automatically set to `instrumentation` on every runner except [CodSpeed
macro runners](https://codspeed.io/docs/instruments/walltime), where it was
set to `walltime` by default. Find more details in [the instruments
documentation](https://codspeed.io/docs/instruments).

- 🚀 Features: make perf profiling enabled by default
(CodSpeedHQ/runner#110), make the runner mode argument required, use
introspected node in walltime mode (CodSpeedHQ/runner#108), add
instrumented go shell script (CodSpeedHQ/runner#102).
- 🐛 Bug fixes: compute proper load bias (CodSpeedHQ/runner#107), increase
timeout for first perf ping, prevent running with valgrind
(CodSpeedHQ/runner#106).
- 🏗️ Refactor: change go-runner binary name (CodSpeedHQ/runner#111).

**v3.8.1:** don't show error when libpython is not found; improve
conditional compilation in `get_pipe_open_options`
(CodSpeedHQ/runner#100); change log level to warn for venv_compat error
(CodSpeedHQ/runner#104). Full changelog:
https://github.com/CodSpeedHQ/action/compare/v3.8.0...v3.8.1

**v3.8.0:** release notes truncated. Full runner changelog:
https://github.com/CodSpeedHQ/runner/blob/main/CHANGELOG.md

Commits: 653fdc3 Release v4.0.1; 4da7be1 bump runner version to 4.0.1;
172d6c5 make the comment about input validation more discrete; d15e1ce
improve the release script; 6eeb021 Release v4.0.0; 74312da improve the
release script; 8a17a35 ci: add modes to the matrix; 8e3f02a feat: make the
mode argument required; 97c7a6f bump runner version to 4.0.0; 8a4cadd point
the changelog to the runner. See the full diff in [compare
view](https://github.com/codspeedhq/action/compare/v3...v4).

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 13:56:46 +00:00
doubleinfinity
b944bbc766 docs: add ZeusDB vector store integration (#32822)
## Description

This PR adds documentation for the new ZeusDB vector store integration
with LangChain.

## Motivation

ZeusDB is a high-performance vector database (Python/Rust backend)
designed for AI applications that need fast similarity search and
real-time vector ops. This integration brings ZeusDB's capabilities to
the LangChain ecosystem, giving developers another production-oriented
option for vector storage and retrieval.

**Key Features:**
- **User-Friendly Python API**: Intuitive interface that integrates
seamlessly with Python ML workflows
- **High Performance**: Powered by a robust Rust backend for
lightning-fast vector operations
- **Enterprise Logging**: Comprehensive logging capabilities for
monitoring and debugging production systems
- **Advanced Features**: Includes product quantization and persistence
capabilities
- **AI-Optimized**: Purpose-built for modern AI applications and RAG
pipelines

## Changes

- Added provider documentation:
`docs/docs/integrations/providers/zeusdb.mdx` (installation, setup).

- Added vector store documentation:
`docs/docs/integrations/vectorstores/zeusdb.ipynb` (quickstart for
creating/querying a ZeusDBVectorStore).

- Registered langchain-zeusdb in `libs/packages.yml` for discovery.

## Target users

- AI/ML engineers building RAG pipelines

- Data scientists working with large document collections

- Developers needing high-throughput vector search

- Teams requiring near real-time vector operations

## Testing

- Followed LangChain's "How to add standard tests to an integration"
guidance.
- Code passes format, lint, and test checks locally.
- Tested with LangChain Core 0.3.74
- Works with Python 3.10 to 3.13

## Package Information
**PyPI:** https://pypi.org/project/langchain-zeusdb
**Github:** https://github.com/ZeusDB/langchain-zeusdb
2025-09-15 09:55:14 -04:00
Filip Makraduli
0be7515abc docs: add superlinked retriever integration (#32433)
# feat(superlinked): add superlinked retriever integration

**Description:** 
Add Superlinked as a custom retriever with full LangChain compatibility.
This integration enables users to leverage Superlinked's multi-modal
vector search capabilities including text similarity, categorical
similarity, recency, and numerical spaces with flexible weighting
strategies. The implementation provides a `SuperlinkedRetriever` class
that extends LangChain's `BaseRetriever` with comprehensive error
handling, parameter validation, and support for various vector databases
(in-memory, Qdrant, Redis, MongoDB).

**Key Features:**
- Full LangChain `BaseRetriever` compatibility with `k` parameter
support
- Multi-modal search spaces (text, categorical, numerical, recency)
- Flexible weighting strategies for complex search scenarios
- Vector database agnostic implementation
- Comprehensive validation and error handling
- Complete test coverage (unit tests, integration tests)
- Detailed documentation with 6 practical usage examples

**Issue:** N/A (new integration)

**Dependencies:** 
- `superlinked==33.5.1` (peer dependency, imported within functions)
- `pandas^2.2.0` (required by superlinked)

**Linkedin handle:** https://www.linkedin.com/in/filipmakraduli/

## Implementation Details

### Files Added/Modified:
- `libs/partners/superlinked/` - Complete package structure
- `libs/partners/superlinked/langchain_superlinked/retrievers.py` - Main
retriever implementation
- `libs/partners/superlinked/tests/unit_tests/test_retrievers.py` - unit
tests
- `libs/partners/superlinked/tests/integration_tests/test_retrievers.py`
- Integration tests with mocking
- `docs/docs/integrations/retrievers/superlinked.ipynb` - Documentation with
a few usage examples

### Testing:
- `make format` - passing
- `make lint` - passing 
- `make test` - passing (16 unit tests, integration tests)
- Comprehensive test coverage including error handling, validation, and
edge cases

### Documentation:
- Example notebook with 6 practical scenarios:
  1. Simple text search
  2. Multi-space blog search (content + category + recency)
  3. E-commerce product search (price + brand + ratings)
  4. News article search (sentiment + topics + recency)
  5. LangChain RAG integration example
  6. Qdrant vector database integration

### Code Quality:
- Follows LangChain contribution guidelines
- Backwards compatible
- Optional dependencies imported within functions
- Comprehensive error handling and validation
- Type hints and docstrings throughout

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 13:54:04 +00:00
Sadiq Khan
cc9a97a477 docs(core): add type hints to BaseStore example code (#32946)
## Summary
- Add comprehensive type hints to the MyInMemoryStore example code in
BaseStore docstring
- Improve documentation quality and educational value for developers
- Align with LangChain's coding standards requiring type hints on all
Python code

## Changes Made
- Added return type annotations to all methods (__init__, mget, mset,
mdelete, yield_keys)
- Added parameter type annotations using proper generic types (Sequence,
Iterator)
- Added instance variable type annotation for the store attribute
- Used modern Python union syntax (str | None) for optional types

## Test Plan
- Verified Python syntax validity with ast.parse()
- No functional changes to actual code, only documentation improvements
- Example code now follows best practices and coding standards

This change improves the educational value of the example code and
ensures consistency with LangChain's requirement that "All Python code
MUST include type hints and return types" as specified in the
development guidelines.

---------

Co-authored-by: sadiqkhzn <sadiqkhzn@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 13:45:34 +00:00
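A minimal sketch of the annotation style described above (a simplified in-memory store, not the exact docstring example):

```python
from collections.abc import Iterator, Sequence


class MyInMemoryStore:
    def __init__(self) -> None:
        self.store: dict[str, str] = {}

    def mget(self, keys: Sequence[str]) -> list[str | None]:
        return [self.store.get(key) for key in keys]

    def mset(self, key_value_pairs: Sequence[tuple[str, str]]) -> None:
        for key, value in key_value_pairs:
            self.store[key] = value

    def mdelete(self, keys: Sequence[str]) -> None:
        for key in keys:
            self.store.pop(key, None)

    def yield_keys(self, prefix: str | None = None) -> Iterator[str]:
        # Yield all keys, optionally restricted to a prefix.
        yield from (k for k in self.store if prefix is None or k.startswith(prefix))
```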
Dmitry
ee17adb022 docs: add AI/ML API integration (#32430)
**Description:**
Introduces documentation notebooks for AI/ML API integration covering
the following use cases:
- Chat models (`ChatAimlapi`)
- Text completion models (`AimlapiLLM`)
- Provider usage examples
- Text embedding models (`AimlapiEmbeddings`)

Additionally, adds the `langchain-aimlapi` package entry to
`libs/packages.yml` for package management.

This PR aims to provide a comprehensive starting point for developers
integrating AI/ML API models with LangChain via the new
`langchain-aimlapi` package.

**Issue:** N/A

**Dependencies:** None

**Twitter handle:** @aimlapi


Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 09:41:40 -04:00
Noraina
6a43f140bc docs: update SerpApi free searches amount in tool feature table (#32945)
**Description:**
This PR updates the free searches per month from **100** to **250** and
renames SerpAPI to [SerpApi](https://serpapi.com/) to prevent confusion. It
also adds API key imports and enhances usage instructions in the Jupyter
notebook.

**Issue:** N/A

**Dependencies:** N/A

2025-09-14 21:42:59 -04:00
Youngho Kim
4619a2727f docs(anthropic): update documentation links (#32938)
**Description:**
This PR updates links to the latest Anthropic documentation. Changes
include revised links for the model overview, tool usage, the web search
tool, the text editor tool, and more.

**Issue:**
N/A

**Dependencies:**
None

**Twitter handle:**
N/A
2025-09-14 21:38:51 -04:00
湛露先生
6487a7e2e5 chore(langchain): remove duplicate .pdf listing (#32929) 2025-09-14 21:33:40 -04:00
湛露先生
406ebc9141 chore(langchain): Fix typos in core docstrings (#32928)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-09-14 21:33:06 -04:00
Nikhil Chandrappa
e6b5ff213a docs: add YugabyteDB Distributed SQL database (#32571)
- **Description:** The `langchain-yugabytedb` package provides
implementations of core LangChain abstractions using the `YugabyteDB`
Distributed SQL Database.
  
YugabyteDB is a cloud-native distributed PostgreSQL-compatible database
that combines strong consistency with ultra-resilience, seamless
scalability, geo-distribution, and highly flexible data locality to
deliver business-critical, transactional applications.

[YugabyteDB](https://www.yugabyte.com/ai/) combines the power of the
`pgvector` PostgreSQL extension with an inherently distributed
architecture. This future-proofed foundation helps you build GenAI
applications using RAG retrieval that demands high-performance vector
search.

- **Tests and docs**:
1. `langchain-yugabytedb`
[github](https://github.com/yugabyte/langchain-yugabytedb) repo.
2. YugabyteDB VectorStore example notebook showing its use. It lives in
`langchain/docs/docs/integrations/vectorstores/yugabytedb.ipynb`
directory.
  3. Running `langchain-yugabytedb` unit tests 
  
- Setting up a development environment

The following details how to set up a local development environment for
contributing changes to the project.

Acquire sources and create virtualenv.
```shell
git clone https://github.com/yugabyte/langchain-yugabytedb
cd langchain-yugabytedb
uv venv --python=3.13
source .venv/bin/activate
```

Install package in editable mode.
```shell
uv pip install pipx  
pipx install poetry
poetry install
uv pip install pytest pytest_asyncio pytest-timeout langchain-core langchain_tests sqlalchemy psycopg psycopg-binary numpy pgvector
```

Start YugabyteDB RF-1 Universe.
```shell
docker run -d --name yugabyte_node01 --hostname yugabyte01 \
  -p 7000:7000 -p 9000:9000 -p 15433:15433 -p 5433:5433 -p 9042:9042 \
  yugabytedb/yugabyte:2.25.2.0-b359 bin/yugabyted start --background=false \
  --master_flags="allowed_preview_flags_csv=ysql_yb_enable_advisory_locks,ysql_yb_enable_advisory_locks=true" \
  --tserver_flags="allowed_preview_flags_csv=ysql_yb_enable_advisory_locks,ysql_yb_enable_advisory_locks=true"

docker exec -it yugabyte_node01 bin/ysqlsh -h yugabyte01 -c "CREATE extension vector;"
```

Invoke test cases.
```shell
pytest -vvv tests/unit_tests/yugabytedb_tests
```
2025-09-12 16:55:09 -04:00
Michael Yilma
03f0ebd93e docs: add Bigtable Key-value Store and Vector Store Docs (#32598)
- [x] **feat(docs)**: add Bigtable Key-value store doc
- [X] **feat(docs)**: add Bigtable Vector store doc 

This PR adds a doc for Bigtable and LangChain Key-value store
integration. It contains guides on how to add, delete, get, and yield
key-value pairs from Bigtable Key-value Store for LangChain.



---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:53:59 -04:00
Bar Cohen
c9eed530ce docs: add Timbr tools integration (#32862)
# feat(integrations): Add Timbr tools integration

## DESCRIPTION

This PR adds comprehensive documentation and integration support for
Timbr's semantic layer tools in LangChain.

[Timbr](https://timbr.ai/) provides an ontology-driven semantic layer
that enables natural language querying of databases through
business-friendly concepts. It connects raw data to governed business
measures for consistent access across BI, APIs, and AI applications.

[`langchain-timbr`](https://pypi.org/project/langchain-timbr/) is a
Python SDK that extends
[LangChain](https://github.com/WPSemantix/Timbr-GenAI/tree/main/LangChain)
and
[LangGraph](https://github.com/WPSemantix/Timbr-GenAI/tree/main/LangGraph)
with custom agents, chains, and nodes for seamless integration with the
Timbr semantic layer. It enables converting natural language prompts
into optimized semantic-SQL queries and executing them directly against
your data.

**What's Added:**
- Complete integration documentation for `langchain-timbr` package
- Tool documentation page with usage examples and API reference

**Integration Components:**
- `IdentifyTimbrConceptChain` - Identify relevant concepts from user
prompts
- `GenerateTimbrSqlChain` - Generate SQL queries from natural language
- `ValidateTimbrSqlChain` - Validate queries against knowledge graph
schemas
- `ExecuteTimbrQueryChain` - Execute queries against semantic databases
- `GenerateAnswerChain` - Generate human-readable answers from results

## Documentation Added

- `/docs/integrations/providers/timbr.mdx` - Provider overview and
configuration
- `/docs/integrations/tools/timbr.ipynb` - Comprehensive tool usage
examples

## Links

- [PyPI Package](https://pypi.org/project/langchain-timbr/)
- [GitHub Repository](https://github.com/WPSemantix/langchain-timbr)
- [Official
Documentation](https://docs.timbr.ai/doc/docs/integration/langchain-sdk/)

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:51:42 -04:00
tbice
e6c38a043f docs: add Qwen integration guide and update qwq documentation (#32817)
**Description:**  
Add documentation for Qwen integration in LangChain, including setup
instructions, usage examples, and configuration details. Update related
qwq documentation to reflect current best practices and improve clarity
for users.

This PR enhances the documentation ecosystem by:
- Adding a new guide for integrating Qwen models
- Updating outdated or incomplete qwq documentation
- Improving structure and readability of relevant sections

**Issue:** N/A  
**Dependencies:** None

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:49:20 -04:00
Elif Sema Balcioglu
dc47c2c598 docs: update langchain-oracledb documentation (#32805)
`Oracle AI Vector Search` integrations for LangChain have been moved to
a dedicated package, [langchain-oracledb
](https://pypi.org/project/langchain-oracledb/), and a new repository,
[langchain-oracle
](https://github.com/oracle/langchain-oracle/tree/main/libs/oracledb).
This PR updates the corresponding documentation, including installation
instructions and import statements, to reflect these changes.

This PR is complemented with:
https://github.com/langchain-ai/langchain-community/pull/283

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:47:10 -04:00
Yuvraj Chandra
3420ca1da2 docs: add ZenRows provider and tool integration docs (#31742)
**Description:** Adds documentation for ZenRows integration with
LangChain, including provider overview and detailed tool documentation.
ZenRows is an enterprise-grade web scraping solution that enables
LangChain agents to extract web content at scale with advanced features
like JavaScript rendering, anti-bot bypass, geo-targeting, and multiple
output formats.

This PR includes:
- Provider documentation
(`docs/docs/integrations/providers/zenrows.ipynb`)
- Tool documentation
(`docs/docs/integrations/tools/zenrows_universal_scraper.ipynb`)
- Complete usage examples and API reference links

**Issue:** N/A

**Dependencies:** 
- [langchain-zenrows](https://github.com/ZenRows-Hub/langchain-zenrows)
package (external, available on
[PyPI](https://pypi.org/project/langchain-zenrows/))
- No changes to core LangChain dependencies

**LinkedIn handle:** https://www.linkedin.com/company/zenrows/

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:37:49 -04:00
Vishal Karwande
f11dd177e9 docs: update oci documentation and examples. (#32749)
Adding Oracle Generative AI as one of the providers for langchain.
Updated the old examples in the documentation with the new working
examples.

---------

Co-authored-by: Vishal Karwande <vishalkarwande@Vishals-MacBook-Pro.local>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:28:03 -04:00
Ali Ismail
d5a4abf960 docs(core): remove duplicate 'the' in indexing/api.py (#32924)
**Description:** Fixes a small typo in `_get_document_with_hash` inside 
`libs/core/langchain_core/indexing/api.py`.

**Issue:** N/A (no related issue)

**Dependencies:** None
2025-09-12 15:49:54 -04:00
Eugene Yurtsev
b1497bcea1 chore(core): test that default values in tool calls are preserved in json schema representation (#32921)
Add unit test coverage for this issue:
https://github.com/langchain-ai/langchain/issues/32232
2025-09-12 12:50:54 -04:00
ccurme
b88115f6fc feat(openai): (v1) support pdfs passed via url in standard format (#32876) 2025-09-12 10:44:00 -04:00
Sydney Runkle
84f9824cc9 chore: use uv caches (#32919)
Especially helpful for the text splitters tests where we're installing
pytorch (expensive and slow slow slow). Should speed up CI by 5-10 mins.

w/o caches, CI taking 20 minutes 😨 
w/ caches, CI taking 3 minutes
2025-09-12 10:29:35 -04:00
Sydney Runkle
0814bfe5ed ci: use partial runs w/ codspeed (#32920)
Taking advantage of [partial
runs](https://codspeed.io/docs/features/partial-runs)!

This should save us minutes on every CI job, we only run codspeed for
libs w/ changes and this doesn't affect benchmarking drops
2025-09-12 09:46:01 -04:00
Christophe Bornet
cbaf97ada4 chore: bump mypy version to 1.18 (#32914) 2025-09-12 09:19:23 -04:00
Sydney Runkle
dc2da95ac0 release(langchain): v1.0.0a5 (#32917) 2025-09-12 08:36:44 -04:00
Sydney Runkle
9e78ff19ab fix(langchain): use messages from model request (#32908)
Oversight when moving back to basic function call for
`modify_model_request` rather than implementation as its own node.

Basic test right now failing on main, passing on this branch

Revealed a gap in testing. Will write up a more robust test suite for
basic middleware features.
2025-09-12 08:18:02 -04:00
Mason Daugherty
67aa37b144 Merge branch 'master' into wip-v1.0 2025-09-11 23:21:37 -04:00
Mason Daugherty
649d8a8223 test(anthropic): enable VCR for web fetch test (#32913)
The API issues have been resolved; no longer xfailing
2025-09-12 03:19:55 +00:00
Mason Daugherty
9f14714367 fix: anthropic tests (stale cassette from max dynamic tokens) 2025-09-11 22:44:16 -04:00
Mason Daugherty
338d3d2795 chore: remove infra tag from task issue template (#32912) 2025-09-11 22:02:14 -04:00
Mason Daugherty
31f641a11f chore(infra): issue template updates (#32911) 2025-09-11 22:00:44 -04:00
open-swe[bot]
91286b0b27 chore(infra): issue template updates (#32910)
Fixes: #32909

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-11 21:53:35 -04:00
dishaprakash
bea72bac3e docs: add hybrid search documentation to PGVectorStore (#32549)
Adding documentation for Hybrid Search in the PGVectorStore Notebook

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-11 21:12:58 -04:00
Mason Daugherty
7cc9312979 fix: anthropic test since new dynamic max tokens 2025-09-11 17:53:30 -04:00
Mason Daugherty
387d0f4edf fix: lint 2025-09-11 17:52:46 -04:00
Mason Daugherty
207ea46813 Merge branch 'master' into wip-v1.0 2025-09-11 17:24:45 -04:00
Caspar Broekhuizen
15d558ff16 fix(core): resolve mermaid node id collisions when special chars are used (#32857)
### Description

* Replace the Mermaid graph node label escaping logic
(`_escape_node_label`) with `_to_safe_id`, which converts a string into
a unique, Mermaid-compatible node id. Ensures nodes with special
characters always render correctly.

**Before**
* Invalid characters (e.g. `开`) replaced with `_`. Causes collisions
between nodes with names that are the same length and contain all
non-safe characters:
```python
_escape_node_label("开") # '_'
_escape_node_label("始") # '_'  same as above, but different character passed in. not a unique mapping.
```

**After**
```python
_to_safe_id("开") # \5f00
_to_safe_id("始") # \59cb  unique!
```

### Tests
* Rename `test_graph_mermaid_escape_node_label()` to
`test_graph_mermaid_to_safe_id()` and update function logic to use
`_to_safe_id`
* Add `test_graph_mermaid_special_chars()`

### Issue

Fixes langchain-ai/langgraph#6036
2025-09-11 14:15:17 -07:00
Hyunjoon Jeong
9cc85387d1 fix(text-splitters): add validation to prevent infinite loop and prevent empty token splitter (#32205)
### Description
1) Add validation requiring `tokenizer.tokens_per_chunk >
tokenizer.chunk_overlap`, preventing an infinite-loop condition
2) Avoid empty decoded chunk when splitter appends tokens

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 16:55:32 -04:00
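A minimal sketch of the first validation described above (attribute names taken from the commit message; not the actual splitter code):

```python
def validate_tokenizer(tokenizer) -> None:
    # Each chunk advances the window by (tokens_per_chunk - chunk_overlap)
    # tokens; a non-positive step would mean the loop never makes progress.
    if tokenizer.tokens_per_chunk <= tokenizer.chunk_overlap:
        raise ValueError(
            "tokens_per_chunk must be greater than chunk_overlap; "
            "otherwise splitting cannot advance and loops forever."
        )
```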
Mason Daugherty
7e5180e2fa refactor: inline test release (#32903)
Reusable workflows are not currently supported by PyPI's Trusted Publishing
functionality and are subject to breakage. Users are strongly encouraged to
avoid using reusable workflows for Trusted Publishing until support becomes
official. Please do not report bugs if this breaks.
2025-09-11 16:20:07 -04:00
Mason Daugherty
bbb1b9085d release(prompty): 0.1.2 (#32907) 2025-09-11 16:19:07 -04:00
Vincent Min
ff9f17bc66 fix(core): preserve ordering in RunnableRetry batch/abatch results (#32526)
Description: Fixes a bug in RunnableRetry where .batch / .abatch could
return misordered outputs (e.g. inputs [0,1,2] yielding [1,1,2]) when
some items succeeded on an earlier attempt and others were retried. Root
cause: successful results were stored keyed by the index within the
shrinking “pending” subset rather than the original input index, causing
collisions and reordered/duplicated outputs after retries. Fix updates
_batch and _abatch to:

- Track remaining original indices explicitly.
- Call underlying batch/abatch only on remaining inputs.
- Map results back to original indices.
- Preserve final ordering by reconstructing outputs in original
positional order.

Issue: Fixes #21326

Tests:

- Added regression tests: test_retry_batch_preserves_order and
test_async_retry_batch_preserves_order asserting correct ordering after
a single controlled failure + retry.
- Existing retry tests still pass.

Dependencies:

- None added or changed.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 16:18:25 -04:00
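A minimal standalone sketch of the ordering-preserving pattern the fix describes (not the actual `RunnableRetry` code); `run_batch` is assumed to behave like `batch(..., return_exceptions=True)`:

```python
def retry_batch(inputs, run_batch, max_attempts=3):
    results: dict[int, object] = {}
    pending = list(range(len(inputs)))  # original indices still unresolved
    for _ in range(max_attempts):
        if not pending:
            break
        outputs = run_batch([inputs[i] for i in pending])
        still_pending = []
        for original_index, output in zip(pending, outputs):
            if isinstance(output, Exception):
                still_pending.append(original_index)  # retry next attempt
            else:
                results[original_index] = output  # keyed by original index
        pending = still_pending
    # Reconstruct outputs in original positional order.
    return [results.get(i) for i in range(len(inputs))]
```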
Matthew Lapointe
b1f08467cd feat(core): allow overriding ls_model_name from kwargs (#32541) 2025-09-11 16:18:06 -04:00
Eugene Yurtsev
2903e08311 chore(docs): remove langchain_experimental from api reference (#32904)
This removes langchain-experimental from api reference.

We do not recommend it to users for production use cases, so let's also
deprecate it from documentation
2025-09-11 16:13:58 -04:00
Mason Daugherty
115e20a0bc release(ollama): 0.3.8 (#32906) 2025-09-11 16:00:41 -04:00
Mason Daugherty
0ea945d291 release(nomic): 0.1.5 (#32905) 2025-09-11 15:54:19 -04:00
Mason Daugherty
5795ec3c4d release(exa): 0.3.1 (#32902) 2025-09-11 15:53:13 -04:00
Mason Daugherty
bd765753ca release(chroma): 0.2.6 (#32901) 2025-09-11 15:52:19 -04:00
Christophe Bornet
5fd7962a78 fix(core): fix support of Pydantic v1 models in BaseTool.args (#32487)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 15:44:51 -04:00
Marcus Chia
c68796579e fix(core): resolve infinite recursion in _dereference_refs_helper with mixed $ref objects (#32578)
**Description:** Fixes infinite recursion issue in JSON schema
dereferencing when objects contain both $ref and other properties (e.g.,
nullable, description, additionalProperties). This was causing Apollo
MCP server schemas to hang indefinitely during tool binding.

**Problem:**
- Commit fb5da8384 changed the condition from `set(obj.keys()) ==
{"$ref"}` to `"$ref" in set(obj.keys())`
- This caused objects with $ref + other properties to be treated as pure
$ref nodes
- Result: other properties were lost and infinite recursion occurred
with complex schemas

**Solution:**
- Restore pure $ref detection for objects with only $ref key  
- Add proper handling for mixed $ref objects that preserves all
properties
- Merge resolved reference content with other properties
- Maintain cycle detection to prevent infinite recursion

**Impact:**
- Fixes Apollo MCP server schema integration
- Resolves tool binding infinite recursion with complex GraphQL schemas
- Preserves backward compatibility with existing functionality
- No performance impact - actually improves handling of complex schemas

**Issue:** Fixes #32511

**Dependencies:** None

**Testing:**
- Added comprehensive unit tests covering mixed $ref scenarios
- All existing tests pass (1326 passed, 0 failed)
- Tested with realistic Apollo GraphQL schemas
- Stress tested with 100 iterations of complex schemas

**Verification:**
- `make format` - All files properly formatted
- `make lint` - All linting checks pass
- `make test` - All 1326 unit tests pass
- No breaking changes - full backwards compatibility maintained

---------

Co-authored-by: Marcus <marcus@Marcus-M4-MAX.local>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 15:21:31 -04:00
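A minimal standalone sketch of the mixed-`$ref` handling described above (simplified; not the actual `_dereference_refs_helper`): a node containing only `$ref` is replaced wholesale, a node with `$ref` plus siblings merges the resolved target with its other properties, and a seen-set breaks cycles:

```python
def deref(node: dict, defs: dict, seen: frozenset = frozenset()) -> dict:
    if "$ref" in node:
        ref = node["$ref"]
        if ref in seen:
            return {}  # cycle detected: stop instead of recursing forever
        target = deref(defs[ref], defs, seen | {ref})
        if set(node) == {"$ref"}:
            return target  # pure $ref node: replace wholesale
        # Mixed node: keep sibling properties (nullable, description, ...)
        # and merge them over the resolved reference content.
        merged = dict(target)
        merged.update({k: v for k, v in node.items() if k != "$ref"})
        return merged
    return {
        k: deref(v, defs, seen) if isinstance(v, dict) else v
        for k, v in node.items()
    }
```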
Mason Daugherty
255ad31955 release(anthropic): 0.3.20 (#32900) 2025-09-11 15:18:43 -04:00
Mason Daugherty
00e992a780 feat(anthropic): web fetch beta (#32894)
Note: citations are broken until Anthropic fixes their API
2025-09-11 15:14:06 -04:00
Mason Daugherty
83d938593b lint 2025-09-11 15:12:38 -04:00
Mason Daugherty
38afeddcb6 fix(groq): update docs due to model deprecation (#32899)
On Friday, October 10th, the moonshotai/kimi-k2-instruct model will be
decommissioned in favor of the latest version,
moonshotai/kimi-k2-instruct-0905.
 
Until then, requests to moonshotai/kimi-k2-instruct will automatically
be routed to moonshotai/kimi-k2-instruct-0905.
2025-09-11 15:00:24 -04:00
Yu Zhong
fca1aaa9b5 fix(core): force overwrite additionalProperties to False in strict mode (#32879)
# Description
This PR fixes a bug in _recursive_set_additional_properties_false used
in function_calling.convert_to_openai_function.

Previously, schemas with "additionalProperties=True" were not correctly
overridden when strict validation was expected, which could lead to
invalid OpenAI function schemas.

The updated implementation ensures that:
- Any schema with "additionalProperties" already set will now be forced
to False under strict mode.
- Recursive traversal of properties, items, and anyOf is preserved.
- Function signature remains unchanged for backward compatibility.

# Issue
When using tool calling in OpenAI structured output strict mode
(strict=True), a 400 error ("Invalid schema for response_format XXXXX:
'additionalProperties' is required to be supplied and to be false") is
raised for parameters that contain a dict type. OpenAI requires
additionalProperties to be set to False.
Several PRs have tried to resolve the issue.
- PR #25169 introduced _recursive_set_additional_properties_false to
recursively set additionalProperties=False.
- PR #26287 fixed handling of empty parameter tools for OpenAI function
generation.
- PR #30971 added support for Union type arguments in strict mode of
OpenAI function calling / structured output.

Despite these improvements, Pydantic 2.11+ always adds
`additionalProperties: True` for arbitrary dictionary schemas (dict or Any)
(https://pydantic.dev/articles/pydantic-v2-11-release#changes).
Schemas that already had additionalProperties=True in such cases were
not being overridden, which this PR addresses to ensure strict mode
behaves correctly in all cases.

# Dependencies
No Changes

---------

Co-authored-by: Zhong, Yu <yzhong@freewheel.com>
2025-09-11 11:02:12 -04:00
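A minimal standalone sketch of the strict-mode normalization described above (simplified; not the actual helper): `additionalProperties` is forced to `False` even when already present as `True`, recursing through `properties`, `items`, and `anyOf`:

```python
def force_additional_properties_false(schema: dict) -> None:
    if schema.get("type") == "object" or "properties" in schema:
        # Overwrite unconditionally, including the True values that
        # Pydantic 2.11+ emits for dict/Any fields.
        schema["additionalProperties"] = False
    for prop in schema.get("properties", {}).values():
        if isinstance(prop, dict):
            force_additional_properties_false(prop)
    items = schema.get("items")
    if isinstance(items, dict):
        force_additional_properties_false(items)
    for variant in schema.get("anyOf", []):
        if isinstance(variant, dict):
            force_additional_properties_false(variant)
```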
Jonathan Paserman
af17774186 docs: add MLflow tracking and evaluation cookbook (#32667)
This PR adds a new cookbook demonstrating how to build a RAG pipeline
with LangChain and track + evaluate it using MLflow.
There is currently not much documentation on the LangChain MLflow
integration; hopefully this can help folks trying to monitor and evaluate
their LangChain applications.

- ArXiv document loader 
- In Memory vector store
- LCEL rag pipeline
- MLflow tracing
- MLflow evaluation

Issue:
N/A

Dependencies:
N/A
2025-09-10 22:55:28 -04:00
chen-assert
d72da29c0b docs: Fix classification notebook small mistake (#32636)
Fixes some minor issues in the classification notebook, where some code
still uses a hardcoded OpenAI model instead of the selected chat model.

Specifically, on page [Classify Text into
Labels](https://python.langchain.com/docs/tutorials/classification/)

We select a chat model earlier and initialize it with `init_chat_model`
using our chosen model.
![Screenshot](https://github.com/user-attachments/assets/14eb436b-d2ef-4074-96d8-71640a13c0f7)

But the following sample code still uses the hardcoded OpenAI model, which
in my case is unrunnable (no OpenAI API key).
![Screenshot](https://github.com/user-attachments/assets/d13846aa-1c4b-4dee-b9c1-c66570ba3461)
2025-09-10 22:43:44 -04:00
Amit Biswas
653b0908af docs: update Confident callback integration and examples (#32458)
**Description:**
Updates the Confident AI integration documentation to use modern
patterns and improve code quality. This change:
- Replaces deprecated `DeepEvalCallbackHandler` with the new
`CallbackHandler` from `deepeval.integrations.langchain`
- Updates installation and authentication instructions to match current
best practices
- Adds modern integration examples using LangChain's latest patterns
- Removes deprecated metrics and outdated code examples
- Updates code samples to follow current best practices

The changes make the documentation more maintainable and ensure users
follow the recommended integration patterns.

**Issue:** Fixes #32444

**Dependencies:**
- deepeval
- langchain
- langchain-openai

**Twitter handle:** @Muwinuddin

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-10 22:43:31 -04:00
GDanksAnchor
eb77da7de5 docs: add name title for Anchor Browser (#32512)
# Description
Change the sidebar name to "Anchor Browser" from "anchor_browser".

# Issue
The "anchor_browser" sidebar name looks unattractive.
2025-09-10 22:40:37 -04:00
Tianyu Chen
9c93439a01 docs: add Linux quick setup method for JaguarDB (#32520)
Description:
Added "Method Two: Quick Setup (Linux)" section to prerequisites,
providing a curl-based installation method for deploying JaguarDB
without Docker. Retained original Docker setup instructions for
flexibility.
2025-09-10 22:36:01 -04:00
Marco Vinciguerra
64fe1e9a80 docs: update scrapegraph.ipynb (#32617)
Updated scrapegraph.ipynb to cover the new ScrapeGraphAI tool.
2025-09-10 22:33:57 -04:00
chen-assert
e4a90490c3 docs: Fix agents tutorials parameter missing (#32639)
Fixes a minor issue in the agents tutorial notebook, where a `config`
parameter is missing.

Specifically, on page [Build an Agent#Streaming
tokens](https://python.langchain.com/docs/tutorials/agents/#streaming-tokens)

These pieces of code cannot run without the `config` parameter, which seems
to have been omitted by the author.
![Screenshot](https://github.com/user-attachments/assets/54ce2833-9499-41bb-9de0-d5f9beba9ef9)
2025-09-10 22:27:24 -04:00
dwelch-spike
80776b80f0 docs: remove aerospike vector store (#32726)
- **Description:** Aerospike Vector Store has been retired. It is no longer
supported, so it should no longer be documented on the LangChain site.

- **Add tests and docs**: Removes docs for retired Aerospike vector
store.

- **Lint and test**: NA
2025-09-10 22:19:43 -04:00
may
2c2bab93fc docs: add example for reusing an existing collection (#32774)
Added a short section to the Weaviate integration docs showing how to
connect to an existing collection (reuse an index) with
`WeaviateVectorStore`. This helps clarify required parameters
(`index_name`, `text_key`) when loading a pre-existing store, which was
previously missing.

Thank you for contributing to LangChain! Follow these steps to mark your
pull request as ready for review. **If any of these steps are not
completed, your PR will not be considered for review.**

### Description
Added a short section to the Weaviate integration docs showing how to
connect to an existing collection (reuse an index) with
`WeaviateVectorStore`. This helps clarify required parameters
(`index_name`, `text_key`) when loading a pre-existing store, which was
previously missing.

### Issue
Fixes langchain-ai/langchain-weaviate#197

### Dependencies
None
2025-09-10 22:16:46 -04:00
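A minimal sketch of the pattern the new section documents (collection name is hypothetical; assumes a local Weaviate instance, the v4 client, and an OpenAI key for embeddings):

```python
import weaviate
from langchain_openai import OpenAIEmbeddings
from langchain_weaviate.vectorstores import WeaviateVectorStore

client = weaviate.connect_to_local()
store = WeaviateVectorStore(
    client=client,
    index_name="MyExistingCollection",  # pre-existing collection to reuse
    text_key="text",  # property the document text was stored under
    embedding=OpenAIEmbeddings(),
)
docs = store.similarity_search("query", k=4)
```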
Mateusz Świtała
221c96e7b4 docs: fix import path in WatsonxToolkit after releasing langchain-ibm 0.3.17 (#32746)
**Description:** Fixes the import path for `WatsonxToolkit` in examples
after releasing `langchain-ibm==0.3.17`.

2025-09-10 22:14:43 -04:00
Mason Daugherty
3ef1165c0c Merge branch 'master' into wip-v1.0 2025-09-10 22:07:40 -04:00
yrk111222
364465bd11 docs: update modelscope.mdx (#32823)
### Description
This PR is primarily aimed at updating some usage methods in the
`modelscope.mdx` file.
Specifically, it changes from `ModelScopeLLM` to `ModelScopeEndpoint`.
### Relevant PR
The relevant PR link is:
https://github.com/langchain-ai/langchain/pull/28941
2025-09-10 22:07:19 -04:00
Mason Daugherty
ffee5155b4 fix lint 2025-09-10 22:03:45 -04:00
Mason Daugherty
5ef7d42bf6 refactor(core): remove example attribute from AIMessage and HumanMessage (#32565) 2025-09-10 22:01:25 -04:00
Mason Daugherty
750a3ffda6 Merge branch 'master' into wip-v1.0 2025-09-10 21:51:38 -04:00
Mason Daugherty
7b874da9b2 fix(docs): text-embedding-004 -> gemini-embedding-001 (#32596)
`text-embedding-004` will be discontinued
2025-09-10 21:47:45 -04:00
Vadym Barda
8b1e25461b fix(core): add on_tool_error to _AstreamEventsCallbackHandler (#30709)
Fixes https://github.com/langchain-ai/langchain/issues/30708

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-10 21:34:05 -04:00
Mason Daugherty
ced9fc270f Merge branch 'master' into wip-v1.0 2025-09-10 21:32:03 -04:00
Mason Daugherty
8e213c9f1a fix(core): AsyncCallbackHandler docstring cleanup (#32897)
plus IDE warning fixes
2025-09-10 21:31:45 -04:00
Mason Daugherty
311aa94d69 Merge branch 'master' into wip-v1.0 2025-09-10 20:57:17 -04:00
Yash Vishwanath Tobre
a8828b1bda fix(core): raise OutputParserException for non-dict JSON outputs (#32236)
**Description:**
Raise a more descriptive OutputParserException when JSON parsing results
in a non-dict type. This improves debugging and aligns behavior with
expectations when using expected_keys.

**Issue:**
Fixes #32233

**Twitter handle:**
@yashvtobre

**Testing:**

- Ran make format and make lint from the root directory; both passed
cleanly.
- Attempted make test but no such target exists in the root Makefile.
- Executed tests directly via pytest targeting the relevant test file,
confirming all tests pass except for unrelated async test failures
outside the scope of this change.

**Notes:**

- No additional dependencies introduced.
- Changes are backward compatible and isolated within the output parser
module.

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-09-10 20:57:09 -04:00
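A minimal standalone sketch of the check described above (simplified; not the actual parser code):

```python
import json

from langchain_core.exceptions import OutputParserException


def parse_json_object(text: str) -> dict:
    parsed = json.loads(text)
    if not isinstance(parsed, dict):
        # Valid JSON but not an object: fail with a descriptive message
        # instead of erroring later on key access.
        raise OutputParserException(
            f"Expected a JSON object but got {type(parsed).__name__}: {parsed!r}"
        )
    return parsed
```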
Mason Daugherty
7a158c7f1c revert: "chore: remove ruff target-version" (#32895)
Reverts langchain-ai/langchain#32880

Not needed at the moment, will do when finishing v1
2025-09-10 20:56:48 -04:00
Mason Daugherty
cb8598b828 Merge branch 'master' into wip-v1.0 2025-09-10 20:51:00 -04:00
Daniel Barker
25c34bd9b2 feat(core): allow custom Mermaid URL (#32831)
- **Description:** Currently,
`langchain_core.runnables.graph_mermaid.py` is hardcoded to use
mermaid.ink to render graph diagrams. It would be nice to allow users to
specify a custom URL, e.g. for self-hosted instances of the Mermaid
server.
- **Issue:** [Langchain Forum: allow custom mermaid API
URL](https://forum.langchain.com/t/feature-request-allow-custom-mermaid-api-url/1472)
  - **Dependencies:** None

- [X] **Add tests and docs**: Added unit tests using mock requests.
- [X] **Lint and test**: Run `make format`, `make lint` and `make test`.

Minimal example using the feature:

```python
import operator
from pathlib import Path
from typing import Any, Annotated, TypedDict

from langgraph.graph import StateGraph

class State(TypedDict):
    messages: Annotated[list[dict[str, Any]], operator.add]

def hello_node(state: State) -> State:
    return {"messages": [{"role": "assistant", "content": "pong!"}]}

builder = StateGraph(State)
builder.add_node("hello_node", hello_node)
builder.add_edge("__start__", "hello_node")
builder.add_edge("hello_node", "__end__")

graph = builder.compile()

# Run graph
output = graph.invoke({"messages": [{"role": "user", "content": "ping?"}]})

# Draw graph
Path("graph.png").write_bytes(graph.get_graph().draw_mermaid_png(base_url="https://custom-mermaid.ink"))
```

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-10 17:14:50 -04:00
Mason Daugherty
38001699d5 fix(anthropic): remove unneeded beta flags (#32893)
- Beta isn't needed for search result tests anymore
- Add TODO for other tests to come back when generally available
- Regenerate remote MCP snapshot after some testing (now the same, but
fresher)
- Bump deps
2025-09-10 20:47:13 +00:00
Mason Daugherty
3da0377c02 fix(anthropic): update ChatAnthropic model in tests/doc (#32892)
from `'claude-3-5-sonnet-latest'` to `'claude-3-5-haiku-latest'` since
sonnet is deprecated
2025-09-10 16:44:04 -04:00
JADAVA VINEETH KUMAR RAO
0abf82a45a fix(openai): ainvoke uses async _aget_response; add async tests (#32459) 2025-09-10 15:52:15 -04:00
Jonathan Hill
2fed177d0b fix(core): preserve ToolMessage.status field in convert_to_messages (#32840) 2025-09-10 15:49:39 -04:00
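A minimal sketch of the behavior the fix preserves (assuming the dict message form accepted by `convert_to_messages`):

```python
from langchain_core.messages import convert_to_messages

messages = convert_to_messages([
    {
        "role": "tool",
        "content": "lookup failed",
        "tool_call_id": "call_1",
        "status": "error",  # previously dropped during conversion
    }
])
assert messages[0].status == "error"
```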
Eugene Yurtsev
fded6c6b13 chore(core): remove beta namespace and context api (#32850)
* Remove the `beta` namespace from langchain_core
* Remove the context API (never documented and makes it easier to create
memory leaks). This API was a beta API.

Co-authored-by: Sadra Barikbin <sadraqazvin1@yahoo.com>
2025-09-10 15:04:29 -04:00
Aasish
9c7d262ff4 fix(openai): update AzureOpenAIEmbeddings validation logic for openai_api_base (#31782) 2025-09-10 14:53:30 -04:00
ccurme
67e651b592 fix(infra): fix min version check (#32891)
Should no longer require `langchain-core>=(version in monorepo)`
2025-09-10 14:04:26 -04:00
Shibayan003
f08dfb6f49 test: Add failing test for BaseCallbackManager.merge (#32040)
This pull request introduces a failing unit test to reproduce the bug
reported in issue #32028.
The test asserts the expected behavior: `BaseCallbackManager.merge()`
should combine `handlers` and `inheritable_handlers` independently,
without mixing them. This test will fail on the current codebase and is
intended to guide the fix and prevent future regressions.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-10 13:56:18 -04:00
ccurme
450870c9ac release(qdrant): 0.2.1 (#32889) 2025-09-10 13:21:16 -04:00
Zhou Jing
10dfeea110 fix(qdrant): allow as_retriever to work without embeddings in SPARSE mode (#32757) 2025-09-10 13:08:50 -04:00
ccurme
34ecb92178 release(openai): 0.3.33 (#32887) 2025-09-10 11:53:26 -04:00
ccurme
49b3918c26 fix(infra): update scheduled test workflow following uv migration in langchain-google (#32886) 2025-09-10 11:30:55 -04:00
Christophe Bornet
12921a94c5 test(core): reactivate commented tests in test_indexing (#32882)
* These tests now pass
* Commenting them is a [ruff
ERA](https://docs.astral.sh/ruff/rules/commented-out-code/) violation
2025-09-10 11:14:14 -04:00
Alexey Bondarenko
181bb91ce0 fix(ollama): Fix handling message content lists (#32881)
The Ollama chat model adapter does not support all of the possible
message content formats. That leads to the Ollama model adapter crashing on
some messages from different models (e.g. Gemini 2.5 Flash).

These changes should fix one known scenario - when `content` is a list
containing a string.
2025-09-10 11:13:28 -04:00
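A minimal standalone sketch of the content normalization the fix covers (simplified; not the actual adapter code): tolerate both plain strings and lists mixing strings with typed text blocks:

```python
def content_to_text(content) -> str:
    if isinstance(content, str):
        return content
    parts: list[str] = []
    for block in content:
        if isinstance(block, str):
            parts.append(block)  # the list-of-strings case the fix handles
        elif isinstance(block, dict) and block.get("type") == "text":
            parts.append(block.get("text", ""))
    return "".join(parts)
```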
Christophe Bornet
b274416441 chore: remove ruff target-version (#32880)
This is not needed anymore since `requires-python` was added when moving
to `uv`.
2025-09-10 11:12:30 -04:00
Mason Daugherty
544b08d610 Merge branch 'master' into wip-v1.0 2025-09-10 11:10:48 -04:00
ccurme
389a781aa0 fix(infra): exclude pre-releases from latest version checks in core release workflow (#32883) 2025-09-10 10:35:24 -04:00
William FH
443f0ccb0e release(core): 0.3.76 (#32877) 2025-09-10 14:10:44 +00:00
Lauren Hirata Singh
00e547c311 docs: update banner with docs deprecation notice (#32871) 2025-09-10 00:35:43 +00:00
Mason Daugherty
a48ace52ad fix: lint 2025-09-09 18:59:38 -04:00
Sydney Runkle
d464d3089b chore: redirect docs template -> docs repo (#32872) 2025-09-09 18:24:22 -04:00
William FH
f1d44d0f9d fix(core): honor enabled=false in nested tracing (#31986)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-09-09 13:12:17 -07:00
Mason Daugherty
20979d525c Merge branch 'master' into wip-v1.0 2025-09-09 15:01:51 -04:00
Christophe Bornet
a35ee49f37 chore(langchain): enable ruff docstring-code-format in langchain (#32858) 2025-09-09 15:00:38 -04:00
Christophe Bornet
352ff363ca chore(cli): remove ruff exclusion of templates (#32864) 2025-09-09 14:56:47 -04:00
Christophe Bornet
256a0b5f2f chore(langchain): add ruff rule BLE (#32868)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-09 18:52:53 +00:00
ccurme
937087a29c release(groq): 0.3.8 (#32870) 2025-09-09 14:39:02 -04:00
Jan Z
08bf4c321f feat(groq): add support for json_schema (#32396) 2025-09-09 18:30:07 +00:00
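A minimal usage sketch, assuming the new option is exposed through the standard structured-output API as `method="json_schema"` (model name illustrative; requires a Groq API key):

```python
from langchain_groq import ChatGroq
from pydantic import BaseModel


class Movie(BaseModel):
    title: str
    year: int


llm = ChatGroq(model="llama-3.1-8b-instant")
structured = llm.with_structured_output(Movie, method="json_schema")
result = structured.invoke("Name a classic sci-fi film and its release year.")
```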
Mason Daugherty
4c6af2d1b2 fix(openai): structured output (#32551) 2025-09-09 11:37:50 -04:00
Christophe Bornet
ee268db1c5 feat(standard-tests): add a property to skip relevant tests if the vector store doesn't support get_by_ids() (#32633) 2025-09-09 11:37:23 -04:00
Zhou Jing
dcc517b187 fix(core): ensure InjectedToolCallId always overrides LLM-generated values (#32766) 2025-09-09 11:25:52 -04:00
Mason Daugherty
c124e67325 chore(docs): update package READMEs (#32869)
- Fix badges
- Focus on agents
- Cut down fluff
2025-09-09 14:50:32 +00:00
Christophe Bornet
699a5d06d1 chore(langchain): add ruff rule ERA (#32867) 2025-09-09 10:13:18 -04:00
Christophe Bornet
00f699c60d chore(core): cleanup pyproject.toml (#32865) 2025-09-09 10:12:18 -04:00
Christophe Bornet
e36e25fe2f feat(langchain): support PEP604 ( | union) in tool node error handlers (#32861)
This allows using PEP604 syntax for `ToolNode` error handlers
```python
def error_handler(e: ValueError | ToolException) -> str:
    return "error"

ToolNode(my_tool, handle_tool_errors=error_handler).invoke(...)
```
Without this change, this fails with `AttributeError: 'types.UnionType'
object has no attribute '__mro__'`
2025-09-09 10:11:12 -04:00
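A minimal sketch of the idea behind the fix (not the actual implementation): unpack PEP 604 unions with `typing.get_args` instead of walking `__mro__` directly.

```python
import types
import typing


def handled_exception_types(annotation) -> tuple[type, ...]:
    # PEP 604 unions (X | Y) are types.UnionType instances and have no
    # __mro__, so unpack them into their member types first.
    if isinstance(annotation, types.UnionType):
        return typing.get_args(annotation)
    return (annotation,)


assert handled_exception_types(ValueError | TypeError) == (ValueError, TypeError)
assert handled_exception_types(ValueError) == (ValueError,)
```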
Christophe Bornet
cc3b5afe52 fix(huggingface): fix typing in test_standard (#32863) 2025-09-09 10:05:41 -04:00
Gal Bloch
428c2ee6c5 fix(langchain): preserve supplied llm in FlareChain.from_llm (#32847) 2025-09-09 13:41:23 +00:00
Christophe Bornet
714f74a847 refactor(core): improve beta decorator (#32505)
This is better than using a subclass: returning a `property` means
`ClassWithBetaMethods.beta_property.__doc__` works as expected

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 18:06:48 -04:00
Christophe Bornet
c3b28c769a chore(langchain): add ruff rules D (except D100 and D104) (#31994)
See https://docs.astral.sh/ruff/rules/#pydocstyle-d

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 21:47:22 +00:00
Christophe Bornet
017348b27c chore(langchain): add ruff rule E501 in langchain_v1 (#32812)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 17:28:14 -04:00
Christophe Bornet
1e101ae9a2 chore(langchain): add ruff rules N (#32098)
See https://docs.astral.sh/ruff/rules/#pep8-naming-n

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 17:27:43 -04:00
Christophe Bornet
fe6c415c9f chore(langchain): add ruff rule UP007 in langchain_v1 (#32811)
Done by autofix
2025-09-08 17:26:00 -04:00
Mason Daugherty
188c0154b3 Merge branch 'master' into wip-v1.0 2025-09-08 17:08:57 -04:00
Christophe Bornet
3c189f0393 chore(langchain): fix deprecation warnings (#32379)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 17:03:06 -04:00
Christophe Bornet
54c2419a4e chore(langchain): enable ruff docstring-code-format in langchain_v1 (#32855) 2025-09-08 16:51:18 -04:00
Mason Daugherty
35e9d36b0e fix(standard-tests): ensure non-negative token counts in usage metadata assertions (#32593) 2025-09-08 16:49:26 -04:00
Christophe Bornet
8b90eae455 chore(text-splitters): enable ruff docstring-code-format (#32854) 2025-09-08 16:40:11 -04:00
Christophe Bornet
05d14775f2 chore(standard-tests): enable ruff docstring-code-format (#32852) 2025-09-08 16:39:53 -04:00
PieterKok-jaam
33c7f230e0 feat(core): add id field to Document passed to filter for InMemoryVectorStore similarity search (#32688)
Added an `id` field to the `Document` passed to `filter` for
InMemoryVectorStore similarity search. This allows filtering by Document
id and brings the filter's input in line with the results returned by
the vector similarity search.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-08 20:39:18 +00:00
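A short sketch of what this enables, using the in-memory store with a fake embedding model (both from `langchain_core`); the document ids are illustrative.

```python
from langchain_core.documents import Document
from langchain_core.embeddings import FakeEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore(embedding=FakeEmbeddings(size=8))
store.add_documents(
    [
        Document(id="doc-1", page_content="alpha"),
        Document(id="doc-2", page_content="beta"),
    ]
)

# The filter callable now receives Documents carrying their `id`, so
# results can be restricted by identifier as well as by content.
hits = store.similarity_search("alpha", k=2, filter=lambda doc: doc.id != "doc-2")
```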
Mason Daugherty
97dd7628d2 chore: update badges (#32851)
- Stars badge is redundant (it already appears at the top of the page)
- Remove the version badge since we have many packages (it was only
showing core); the releases tab to the right of the README covers this
2025-09-08 20:06:59 +00:00
Adithya1617
f5bd00d1f1 feat(core): support AWS Bedrock document content blocks in msg_content_output (#32799) 2025-09-08 19:40:28 +00:00
Sadra Barikbin
3486d6c74d feat(core): support for adding PromptTemplates with formats other than f-string (#32253)
Allow adding `PromptTemplate`s with formats other than `f-string`. Fixes
#32151

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-08 19:16:54 +00:00
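A minimal sketch, assuming both operands share the same non-f-string format (mustache here).

```python
from langchain_core.prompts import PromptTemplate

# Previously `+` only worked for f-string templates; templates sharing
# another format (e.g. mustache) can now be combined as well.
greeting = PromptTemplate.from_template(
    "Hello {{name}}. ", template_format="mustache"
)
question = PromptTemplate.from_template(
    "How is {{topic}}?", template_format="mustache"
)
prompt = greeting + question
print(prompt.format(name="Ada", topic="the weather"))
```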
Mason Daugherty
8509efa6ad chore: remove erroneous pyversion specifiers 2025-09-08 15:03:08 -04:00
Mason Daugherty
b1a105f85f fix: huggingface lint 2025-09-08 14:59:38 -04:00
Mason Daugherty
9e54c5fa7f Merge branch 'master' into wip-v1.0 2025-09-08 14:44:28 -04:00
Mason Daugherty
0b8817c900 Merge branch 'master' into wip-v1.0 2025-09-08 14:43:58 -04:00
Stefano Lottini
390606c155 fix(standard-tests): standard vectorstore tests accept out-of-order get_by_ids (#32821)
- **Description:** The vector store standard tests mistakenly assume that
the store's `get_by_ids` respects the order of the provided `ids`. This
is not the case (as the base class docstring states). This PR fixes the
tests that would otherwise fail (see issue #32820 for details and a
repro). Fixes #32820
- **Issue:** Fixes #32820
- **Dependencies:** none

Co-authored-by: Stefano Lottini <stefano.lottini@ibm.com>
2025-09-08 14:22:14 -04:00
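A sketch of the order-insensitive assertion style the fixed tests rely on; the store setup is illustrative.

```python
from langchain_core.documents import Document
from langchain_core.embeddings import FakeEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore(embedding=FakeEmbeddings(size=4))
store.add_documents(
    [Document(id="id-1", page_content="a"), Document(id="id-2", page_content="b")]
)

docs = store.get_by_ids(["id-2", "id-1"])
# `get_by_ids` makes no ordering guarantee, so compare sorted ids
# rather than positions.
assert sorted(doc.id for doc in docs) == ["id-1", "id-2"]
```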
Christophe Bornet
cc98fb9bee chore(core): add ruff rule PLC0415 (#32351)
See https://docs.astral.sh/ruff/rules/import-outside-top-level/

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 14:15:04 -04:00
Christophe Bornet
16420cad71 chore(core): fix some pydocs to use google-style (#32764)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 17:52:17 +00:00
Christophe Bornet
01fdeede50 chore(core): fix some ruff preview rules (#32785)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:55:20 +00:00
Christophe Bornet
f4e83e0ad8 chore(core): fix some docstrings (from DOC preview rule) (#32833)
* Add `Raises` sections
* Add `Returns` sections
* Add `Yields` sections

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:44:15 +00:00
dependabot[bot]
4024d47412 chore(infra): bump actions/setup-python from 5 to 6 (#32842)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5 to 6.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 11:27:54 -04:00
Christophe Bornet
083fbfb0d1 chore(core): add utf-8 encoding to Path read_text/write_text (#32784)
## Summary

This PR standardizes all text file I/O to use `UTF-8`. This eliminates
OS-specific defaults (e.g. Windows `cp1252`) and ensures consistent,
Unicode-safe behavior across platforms.

## Breaking changes

Users on systems whose default encoding is not UTF-8 may see decoding
errors from the following code paths:

* `langchain_core.vectorstores.in_memory.InMemoryVectorStore.load`
* `langchain_core.prompts.loading.load_prompt`
* `from_template` in `AIMessagePromptTemplate`,
`HumanMessagePromptTemplate`, and `SystemMessagePromptTemplate`

## Migration

Re-encode any files that use a non-UTF-8 encoding to UTF-8.
2025-09-08 11:27:15 -04:00
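The pattern applied throughout, in a minimal sketch (`prompt.json` is a hypothetical file).

```python
from pathlib import Path

path = Path("prompt.json")  # hypothetical file

# Before: platform-dependent default encoding (e.g. cp1252 on Windows).
# text = path.read_text()

# After: explicit UTF-8, consistent across platforms.
text = path.read_text(encoding="utf-8")
path.write_text(text, encoding="utf-8")
```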
Christophe Bornet
f589168411 refactor(core): use pytest style in TestGetBufferString (#32786)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:16:13 +00:00
Christophe Bornet
5840dad40b chore(core): enable ruff docstring-code-format (#32834)
See https://docs.astral.sh/ruff/settings/#format_docstring-code-format

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:13:50 +00:00
Christophe Bornet
e3b6c9bb66 chore(core): fix some mypy warn_unreachable issues (#32560)
Found by setting `warn_unreachable: true` in mypy.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:02:08 +00:00
Christophe Bornet
c672590f42 chore(standard-tests): select ALL rules with exclusions (#31937)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 14:57:47 +00:00
Christophe Bornet
323729915a chore(standard-tests): add mypy strict checking (#32384)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 10:50:38 -04:00
Christophe Bornet
0c3e8ccd0e chore(text-splitters): select ALL rules with exclusions (#32325)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 14:46:09 +00:00
Christophe Bornet
20401df25d chore(cli): fix some DOC rules (preview) (#32839)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 14:36:22 +00:00
dependabot[bot]
e0aaaccb61 chore(infra): bump aws-actions/configure-aws-credentials from 4 to 5 (#32841)
Bumps [aws-actions/configure-aws-credentials](https://github.com/aws-actions/configure-aws-credentials) from 4 to 5.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 10:18:01 -04:00
dependabot[bot]
9368ce6b07 chore(infra): bump google-github-actions/auth from 2 to 3 (#32777)
Bumps [google-github-actions/auth](https://github.com/google-github-actions/auth) from 2 to 3.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 10:16:59 -04:00
dependabot[bot]
f8bcc98362 chore(infra): bump amannn/action-semantic-pull-request from 5 to 6 (#32585)
Bumps [amannn/action-semantic-pull-request](https://github.com/amannn/action-semantic-pull-request) from 5 to 6.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 10:16:19 -04:00
dependabot[bot]
d8d93882f9 chore(infra): bump actions/checkout from 4 to 5 (#32584)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to
5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/releases">actions/checkout's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
<li>Prepare v5.0.0 release by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2238">actions/checkout#2238</a></li>
</ul>
<h2>⚠️ Minimum Compatible Runner Version</h2>
<p><strong>v2.327.1</strong><br />
<a
href="https://github.com/actions/runner/releases/tag/v2.327.1">Release
Notes</a></p>
<p>Make sure your runner is updated to this version or newer to use this
release.</p>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4...v5.0.0">https://github.com/actions/checkout/compare/v4...v5.0.0</a></p>
<h2>v4.3.0</h2>
<h2>What's Changed</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li>Adjust positioning of user email note and permissions heading by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2044">actions/checkout#2044</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li>Update CODEOWNERS for actions by <a
href="https://github.com/TingluoHuang"><code>@​TingluoHuang</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2224">actions/checkout#2224</a></li>
<li>Update package dependencies by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
<li>Prepare release v4.3.0 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2237">actions/checkout#2237</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/motss"><code>@​motss</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li><a href="https://github.com/mouismail"><code>@​mouismail</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li><a href="https://github.com/benwells"><code>@​benwells</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li><a href="https://github.com/nebuk89"><code>@​nebuk89</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li><a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4...v4.3.0">https://github.com/actions/checkout/compare/v4...v4.3.0</a></p>
<h2>v4.2.2</h2>
<h2>What's Changed</h2>
<ul>
<li><code>url-helper.ts</code> now leverages well-known environment
variables by <a href="https://github.com/jww3"><code>@​jww3</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/1941">actions/checkout#1941</a></li>
<li>Expand unit test coverage for <code>isGhes</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1946">actions/checkout#1946</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4.2.1...v4.2.2">https://github.com/actions/checkout/compare/v4.2.1...v4.2.2</a></p>
<h2>v4.2.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Check out other refs/* by commit if provided, fall back to ref by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1924">actions/checkout#1924</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/Jcambass"><code>@​Jcambass</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/1919">actions/checkout#1919</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4.2.0...v4.2.1">https://github.com/actions/checkout/compare/v4.2.0...v4.2.1</a></p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2>V5.0.0</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
</ul>
<h2>V4.3.0</h2>
<ul>
- docs: update README.md by [@motss](https://github.com/motss) in [actions/checkout#1971](https://redirect.github.com/actions/checkout/pull/1971)
- Add internal repos for checking out multiple repositories by [@mouismail](https://github.com/mouismail) in [actions/checkout#1977](https://redirect.github.com/actions/checkout/pull/1977)
- Documentation update - add recommended permissions to Readme by [@benwells](https://github.com/benwells) in [actions/checkout#2043](https://redirect.github.com/actions/checkout/pull/2043)
- Adjust positioning of user email note and permissions heading by [@joshmgross](https://github.com/joshmgross) in [actions/checkout#2044](https://redirect.github.com/actions/checkout/pull/2044)
- Update README.md by [@nebuk89](https://github.com/nebuk89) in [actions/checkout#2194](https://redirect.github.com/actions/checkout/pull/2194)
- Update CODEOWNERS for actions by [@TingluoHuang](https://github.com/TingluoHuang) in [actions/checkout#2224](https://redirect.github.com/actions/checkout/pull/2224)
- Update package dependencies by [@salmanmkc](https://github.com/salmanmkc) in [actions/checkout#2236](https://redirect.github.com/actions/checkout/pull/2236)

## v4.2.2

- `url-helper.ts` now leverages well-known environment variables by [@jww3](https://github.com/jww3) in [actions/checkout#1941](https://redirect.github.com/actions/checkout/pull/1941)
- Expand unit test coverage for `isGhes` by [@jww3](https://github.com/jww3) in [actions/checkout#1946](https://redirect.github.com/actions/checkout/pull/1946)

## v4.2.1

- Check out other refs/* by commit if provided, fall back to ref by [@orhantoy](https://github.com/orhantoy) in [actions/checkout#1924](https://redirect.github.com/actions/checkout/pull/1924)

## v4.2.0

- Add Ref and Commit outputs by [@lucacome](https://github.com/lucacome) in [actions/checkout#1180](https://redirect.github.com/actions/checkout/pull/1180)
- Dependency updates by [@dependabot](https://github.com/dependabot): [actions/checkout#1777](https://redirect.github.com/actions/checkout/pull/1777), [actions/checkout#1872](https://redirect.github.com/actions/checkout/pull/1872)

## v4.1.7

- Bump the minor-npm-dependencies group across 1 directory with 4 updates by [@dependabot](https://github.com/dependabot) in [actions/checkout#1739](https://redirect.github.com/actions/checkout/pull/1739)
- Bump actions/checkout from 3 to 4 by [@dependabot](https://github.com/dependabot) in [actions/checkout#1697](https://redirect.github.com/actions/checkout/pull/1697)
- Check out other refs/* by commit by [@orhantoy](https://github.com/orhantoy) in [actions/checkout#1774](https://redirect.github.com/actions/checkout/pull/1774)
- Pin actions/checkout's own workflows to a known, good, stable version by [@jww3](https://github.com/jww3) in [actions/checkout#1776](https://redirect.github.com/actions/checkout/pull/1776)

## v4.1.6

- Check platform to set archive extension appropriately by [@cory-miller](https://github.com/cory-miller) in [actions/checkout#1732](https://redirect.github.com/actions/checkout/pull/1732)

## v4.1.5

- Update NPM dependencies by [@cory-miller](https://github.com/cory-miller) in [actions/checkout#1703](https://redirect.github.com/actions/checkout/pull/1703)
- Bump github/codeql-action from 2 to 3 by [@dependabot](https://github.com/dependabot) in [actions/checkout#1694](https://redirect.github.com/actions/checkout/pull/1694)
- Bump actions/setup-node from 1 to 4 by [@dependabot](https://github.com/dependabot) in [actions/checkout#1696](https://redirect.github.com/actions/checkout/pull/1696)
- Bump actions/upload-artifact from 2 to 4 by [@dependabot](https://github.com/dependabot) in [actions/checkout#1695](https://redirect.github.com/actions/checkout/pull/1695)
- README: Suggest `user.email` to be `41898282+github-actions[bot]@users.noreply.github.com` by [@cory-miller](https://github.com/cory-miller) in [actions/checkout#1707](https://redirect.github.com/actions/checkout/pull/1707)

## v4.1.4

- Disable `extensions.worktreeConfig` when disabling `sparse-checkout` by [@jww3](https://github.com/jww3) in [actions/checkout#1692](https://redirect.github.com/actions/checkout/pull/1692)
- Add dependabot config by [@cory-miller](https://github.com/cory-miller) in [actions/checkout#1688](https://redirect.github.com/actions/checkout/pull/1688)
- Bump the minor-actions-dependencies group with 2 updates by [@dependabot](https://github.com/dependabot) in [actions/checkout#1693](https://redirect.github.com/actions/checkout/pull/1693)
- Bump word-wrap from 1.2.3 to 1.2.5 by [@dependabot](https://github.com/dependabot) in [actions/checkout#1643](https://redirect.github.com/actions/checkout/pull/1643)

## v4.1.3

... (truncated)

**Commits**

- `08c6903cd8` Prepare v5.0.0 release ([#2238](https://redirect.github.com/actions/checkout/issues/2238))
- `9f265659d3` Update actions checkout to use node 24 ([#2226](https://redirect.github.com/actions/checkout/issues/2226))
- See full diff in [compare view](https://github.com/actions/checkout/compare/v4...v5)


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=4&new-version=5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

**Dependabot commands and options**

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 10:14:09 -04:00
Sadiq Khan
228fbac3a6 fix(openai): handle AIMessages without response_id in _get_last_messages (#32824) 2025-09-08 10:12:50 -04:00
JunHyungKang
6ea06ca972 fix(openai): Fix Azure OpenAI Responses API model field issue (#32649) 2025-09-08 10:08:35 -04:00
ccurme
5b0a55ad35 chore(openai): apply formatting changes to AzureChatOpenAI (#32848) 2025-09-08 09:54:20 -04:00
Sydney Runkle
6e2f46d04c feat(langchain): middleware support in create_agent (#32828)
## Overview

Adding new `AgentMiddleware` primitive that supports `before_model`,
`after_model`, and `prepare_model_request` hooks.

This is very exciting! It makes our `create_agent` prebuilt much more
extensible + capable. Still in alpha and subject to change.
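To make the hook surface concrete, here's a minimal sketch of a custom middleware. The import path and hook signatures are assumptions based on the description in this PR, not the final alpha API:

```py
# Hypothetical sketch -- hook names follow the description above;
# exact signatures in the alpha may differ.
from langchain.agents.middleware import AgentMiddleware, ModelRequest  # assumed path


class CallCountingMiddleware(AgentMiddleware):
    """Track model calls via a private state extension."""

    def before_model(self, state: dict) -> dict | None:
        # Return a partial state update (or None) before each model call.
        return {"model_call_count": state.get("model_call_count", 0) + 1}

    def prepare_model_request(self, request: ModelRequest, state: dict) -> ModelRequest:
        # Adjust the outgoing request, e.g. via request.model_settings.
        return request

    def after_model(self, state: dict) -> dict | None:
        # Inspect or patch state after the model responds.
        return None
```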

This differs from the initial
[implementation](https://github.com/langchain-ai/langgraph/tree/nc/25aug/agent)
in that it:
* Fills in gaps with missing features, e.g. new structured output,
optionality of tools + system prompt, sync and async model requests,
provider builtin tools
* Exposes private state extensions for middleware, enabling things like
model call tracking
* Allows middleware to register tools
* Uses a `TypedDict` for `AgentState` -- dataclass subclassing is tricky
with required values + required decorators
* Adds `model_settings` to `ModelRequest` so that we can pass
through things to bind (like cache kwargs for anthropic middleware)

## TODOs

### top prio
- [x] add middleware support to existing agent
- [x] top prio middlewares
  - [x] summarization node
  - [x] HITL
  - [x] prompt caching
 
other ones
- [x] model call limits
- [x] tool calling limits
- [ ] usage (requires output state)

### secondary prio
- [x] improve typing for state updates from middleware (not working
right now w/ simple `AgentUpdate` and `AgentJump`, at least in Python)
- [ ] add support for public state (input / output modifications via
pregel channel mods) -- to be tackled in another PR
- [x] testing!

### docs
See https://github.com/langchain-ai/docs/pull/390
- [x] high level docs about middleware
- [x] summarization node
- [x] HITL
- [x] prompt caching

## open questions

Lots of open questions right now, many of them inlined as comments for
the short term, will catalog some more significant ones here.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2025-09-08 01:10:57 +00:00
Christophe Bornet
5bf0b218c8 chore(cli): fix some ruff preview rules (#32803) 2025-09-07 16:53:19 -04:00
Mason Daugherty
4e39c164bb fix(anthropic): remove beta header warning for TTL (#32832)
No longer beta as of Aug 13
2025-09-05 14:28:58 -04:00
ScarletMercy
0b3af47335 fix(docs): resolve malformed character in tool_calling.ipynb (#32825)
**Description:**  
Remove a character in tool_calling.ipynb that causes a grammatical error.
Verification: local docs build passed after the fix.
 
**Issue:**  
None (direct hotfix for rendering issue identified during documentation
review)
 
**Dependencies:**  
None
2025-09-05 11:28:56 -04:00
Mason Daugherty
bc91a4811c chore: update PR template (#32819) 2025-09-04 19:53:54 +00:00
Christophe Bornet
05a61f9508 fix(langchain): fix mypy versions in langchain_v1 (#32816) 2025-09-04 11:51:08 -04:00
Christophe Bornet
f98f7359d3 refactor(core): refactors for python 3.10+ (#32787)
* Remove `sys.version_info` checks no longer needed
* Use `typing` instead of `typing_extensions` where applicable (NB: keep
using `TypedDict` from `typing_extensions` as [Pydantic requires
it](https://docs.pydantic.dev/2.3/usage/types/dicts_mapping/#typeddict)); see the sketch below
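For reference, a minimal sketch of the pattern this keeps:

```python
from typing import Optional  # stdlib typing is fine on 3.10+
from typing_extensions import TypedDict  # Pydantic requires this TypedDict on Python < 3.12


class Movie(TypedDict):
    title: str
    year: Optional[int]
```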
2025-09-03 15:32:06 -04:00
Christophe Bornet
aa63de9366 chore(langchain): cleanup langchain_v1 mypy config (#32809)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-03 19:28:06 +00:00
Christophe Bornet
86fa34f3eb chore(langchain): add ruff rules D for langchain_v1 (#32808) 2025-09-03 15:26:17 -04:00
JING
36037c9251 fix(docs): update Anthropic model name and add version warnings (#32807)
**Description:** This PR fixes the broken Anthropic model example in the
documentation introduction page and adds a comment field to display
model version warnings in code blocks. The changes ensure that users can
successfully run the example code and are reminded to check for the
latest model versions.

**Issue:** https://github.com/langchain-ai/langchain/issues/32806

**Changes made:**
- Update Anthropic model from broken "claude-3-5-sonnet-latest" to
working "claude-3-7-sonnet-20250219"
- Add comment field to display model version warnings in code blocks
- Improve user experience by providing working examples and version
guidance

**Dependencies:** None required
2025-09-03 15:25:13 -04:00
Martin Meier-Zavodsky
ad26c892ea docs(langchain): update evaluation tutorial link (#32796)
**Description**
This PR updates the evaluation tutorial link for LangSmith to the new
official docs location.

**Issue**
N/A

**Dependencies**
None
2025-09-03 15:22:46 -04:00
Shahroz Ahmad
4828a85ab0 feat(core): add web_search in OpenAI tools list (#32738) 2025-09-02 21:57:25 +00:00
ccurme
50b48fa1ff chore(openai): bump minimum core version (#32795) 2025-09-02 14:06:49 -04:00
Chester Curme
a54f4385f8 Merge branch 'master' into wip-v1.0
# Conflicts:
#	libs/langchain_v1/langchain/__init__.py
2025-09-02 13:17:00 -04:00
ccurme
b999f356e8 fix(langchain): update __init__ version (#32793) 2025-09-02 13:14:42 -04:00
ccurme
98e4e7d043 Merge branch 'master' into wip-v1.0 2025-09-02 13:06:35 -04:00
ccurme
2cf5c52c13 release(core): 1.0.0a2 (#32792) 2025-09-02 12:55:52 -04:00
Sydney Runkle
062196a7b3 release(langchain): v1.0.0a3 (#32791) 2025-09-02 12:29:14 -04:00
ccurme
bf41a75073 release(openai): 1.0.0a2 (#32790) 2025-09-02 12:22:57 -04:00
Sydney Runkle
dc9f941326 chore(langchain): rename create_react_agent -> create_agent (#32789) 2025-09-02 12:13:12 -04:00
ccurme
e15c41233d feat(openai): (v1) update default output_version (#32674) 2025-09-02 12:12:41 -04:00
Mason Daugherty
25d5db88d5 fix: ci 2025-09-01 23:36:38 -05:00
Mason Daugherty
1237f94633 Merge branch 'wip-v1.0' of github.com:langchain-ai/langchain into wip-v1.0 2025-09-01 23:27:40 -05:00
Mason Daugherty
5c8837ea5a fix some imports 2025-09-01 23:27:37 -05:00
Mason Daugherty
820e355f53 Merge branch 'master' into wip-v1.0 2025-09-01 23:27:29 -05:00
Mason Daugherty
9a3ba71636 fix: version equality CI check 2025-09-01 23:24:04 -05:00
Mason Daugherty
00def6da72 rfc: remove unused TypeGuards 2025-09-01 23:13:18 -05:00
Mason Daugherty
4f8cced3b6 chore: move convert_to_openai_data_block and convert_to_openai_image_block from content.py to openai block translators 2025-09-01 23:08:47 -05:00
Mason Daugherty
365d7c414b nit: OpenAI docstrings 2025-09-01 20:30:56 -05:00
Mason Daugherty
a4874123a0 chore: move _convert_openai_format_to_data_block from langchain_v0 to openai 2025-09-01 19:39:15 -05:00
Mason Daugherty
a5f92fdd9a fix: update some docstrings and typing 2025-09-01 19:25:08 -05:00
Adithya1617
238ecd09e0 docs(langchain): update redirect url of "this langsmith conceptual guide" in tracing.mdx (#32776)
…ge (issue : #32775)

- **Description:** updated the redirect url of "this langsmith conceptual
guide" in tracing.mdx
- **Issue:** fixes #32775

---------

Co-authored-by: Adithya <adithya.vardhan1617@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-01 19:02:21 +00:00
Mason Daugherty
431e6d6211 chore(standard-tests): drop python 3.9 (#32772) 2025-08-31 18:23:10 -05:00
Mason Daugherty
0f1afa178e chore(text-splitters): drop python 3.9 support (#32771) 2025-08-31 18:13:35 -05:00
Mason Daugherty
830d1a207c Merge branch 'master' into wip-v1.0 2025-08-31 18:01:24 -05:00
Mason Daugherty
6b5fdfb804 release(text-splitters): 0.3.11 (#32770)
Fixes #32747

The SpaCy integration test fixture was trying to use pip to download the
SpaCy language model (`en_core_web_sm`), but uv-managed environments
don't include pip by default. The fixture now fails the test if the
model is not installed instead of downloading it (sketch below).
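A minimal sketch of that fixture pattern (the fixture name is assumed for illustration):

```python
import pytest


@pytest.fixture()
def spacy_pipeline():
    spacy = pytest.importorskip("spacy")
    try:
        return spacy.load("en_core_web_sm")
    except OSError:
        # Previously this shelled out to pip to fetch the model; uv-managed
        # environments ship without pip, so fail loudly instead.
        pytest.fail("spaCy model `en_core_web_sm` is not installed")
```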
2025-08-31 23:00:05 +00:00
Ravirajsingh Sodha
b42dac5fe6 docs: standardize OllamaLLM and BaseOpenAI docstrings (#32758)
- Add comprehensive docstring following LangChain standards
- Include Setup, Key init args, Instantiate, Invoke, Stream, and Async
sections
- Provide detailed parameter descriptions and code examples
- Fix linting issues for code formatting compliance

Contributes to #24803

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-31 17:45:56 -05:00
Christophe Bornet
e0a4af8d8b docs(text-splitters): fix some docstrings (#32767) 2025-08-31 13:46:11 -05:00
Rémy HUBSCHER
fcf7175392 chore(langchain): improve PostgreSQL Manager upsert SQLAlchemy API calls. (#32748)
- Pass the `constraint` parameter by name to avoid mixing it up with
`index_elements` (see the sketch below)
[[Documentation](https://docs.sqlalchemy.org/en/20/dialects/postgresql.html#sqlalchemy.dialects.postgresql.Insert.on_conflict_do_update)]
- ~Fallback on the existing `group_id` row value, to avoid setting it to
`None`.~
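For illustration, the SQLAlchemy pattern being made explicit looks roughly like this; the table, columns, and constraint name here are hypothetical stand-ins, not the actual record-manager schema:

```python
from sqlalchemy import Column, MetaData, String, Table
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()
# Hypothetical stand-in for the record manager's upsertion table.
records = Table(
    "upsertion_record",
    metadata,
    Column("key", String, primary_key=True),
    Column("namespace", String, primary_key=True),
    Column("group_id", String, nullable=True),
)

stmt = insert(records).values(key="k1", namespace="ns", group_id="g1")
stmt = stmt.on_conflict_do_update(
    # Name the constraint via the `constraint=` keyword so it is never
    # confused with `index_elements`.
    constraint="upsertion_record_pkey",
    set_={"group_id": stmt.excluded.group_id},
)
```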
2025-08-30 14:13:24 -05:00
Kush Goswami
1f2ab17dff docs: fix typo and grammar in Conceptual guide (#32754)
fixed a small typo and grammatical inconsistency in the Conceptual guide
2025-08-30 13:48:55 -05:00
Mason Daugherty
b494a3c57b chore(cli): drop python 3.9 support (#32761) 2025-08-30 13:25:33 -05:00
Mason Daugherty
f088fac492 Merge branch 'master' into wip-v1.0 2025-08-30 14:21:37 -04:00
Mason Daugherty
2dc89a2ae7 release(cli): 0.0.37 (#32760)
It's been a minute. Final release prior to dropping Python 3.9 support.
2025-08-30 13:07:55 -05:00
Christophe Bornet
e3c4aeaea1 chore(cli): add mypy strict checking (#32386)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-30 13:02:45 -05:00
Vikas Shivpuriya
444939945a docs: fix punctuation in style guide (#32756)
Removed a period in bulleted list for consistency

2025-08-30 12:56:17 -05:00
Vikas Shivpuriya
ae8db86486 docs: fixed typo in contributing guide (#32755)
Completed the sentence by adding a period ".", in sync with other points

>> Click "Propose changes"

to 

>> Click "Propose changes".

2025-08-30 12:55:25 -05:00
Christophe Bornet
8a1419dad1 chore(cli): add ruff rules ANN401 and D1 (#32576) 2025-08-30 12:41:16 -05:00
Kush Goswami
840e4c8e9f docs: fix grammar and typo in Documentation style guide (#32741)
fixed grammar and one typo in the Documentation style guide
2025-08-29 14:22:54 -04:00
Caspar Broekhuizen
37aff0a153 chore: bump langchain-core minimum to 0.3.75 (#32753)
Update `langchain-core` dependency min from `>=0.3.63` to `>=0.3.75`.

### Motivation
- We located the `langchain-core` package locally in the monorepo and
need to align `langchain-tests` with the new minimum version.
2025-08-29 14:11:28 -04:00
Caspar Broekhuizen
a163d59988 chore(standard-tests): relax langchain-core bounds for langchain-tests 1.0.0a1 (#32752)
### Overview
Preparing the `1.0.0a1` release of `langchain-tests` to align with
`langchain-core` version `1.0.0a1`.

### Changes
- Bump package version to `1.0.0a1`
- Relax `langchain-core` requirement from `<1.0.0,>=0.3.63` to
`<2.0.0,>=0.3.63`

### Motivation
All main LangChain packages are now publishing `1.0.0a` prereleases.  
`langchain-tests` needs a matching prerelease so downstreams can install
tests alongside the 1.0 series without conflicts.

### Tests
- Verified installation and tests against both `0.3.75` and `1.0.0a1`.
2025-08-29 13:46:48 -04:00
Mason Daugherty
925ad65df9 fix(core): typo in content.py 2025-08-28 15:17:51 -04:00
Mason Daugherty
e09d90b627 Merge branch 'master' into wip-v1.0 2025-08-28 14:11:24 -04:00
Sydney Runkle
b26e52aa4d chore(text-splitters): bump version of core (#32740) 2025-08-28 13:14:57 -04:00
Sydney Runkle
38cdd7a2ec chore(text-splitters): relax max bound for langchain-core (#32739) 2025-08-28 13:05:47 -04:00
Sydney Runkle
26e5d1302b chore(langchain): remove upper bound at v1 for core (#32737) 2025-08-28 12:14:42 -04:00
Christopher Jones
107425c68d docs: fix basic Oracle example issues such as capitalization (#32730)
**Description:** fix capitalization and basic issues in
https://python.langchain.com/docs/integrations/document_loaders/oracleadb_loader/

Signed-off-by: Christopher Jones <christopher.jones@oracle.com>
2025-08-28 10:32:45 -04:00
Tik1993
009cc3bf50 docs(docs): added content= keyword when creating SystemMessage and HumanMessage (#32734)
Description: 
Added the content= keyword when creating SystemMessage and HumanMessage
in the messages list, making it consistent with the API reference.
2025-08-28 10:31:46 -04:00
NOOR UL HUDA
6185558449 docs: replace smart quotes with straight quotes on How-to guides landing page (#32725)
### Summary

This PR updates the sentence on the "How-to guides" landing page to
replace smart (curly) quotes with straight quotes in the phrase:

> "How do I...?"

### Why This Change?

- Ensures formatting consistency across documentation
- Avoids encoding or rendering issues with smart quotes
- Matches standard Markdown and inline code formatting

This is a small change, but improves clarity and polish on a key landing
page.
2025-08-28 10:30:12 -04:00
Kush Goswami
0928ff5b12 docs: fix typo in LangGraph section of Introduction (#32728)
Change "Linkedin" to "LinkedIn" to be consistent with LinkedIn's
spelling.

2025-08-28 10:29:35 -04:00
ccurme
ddde1eff68 fix: openai, anthropic (v1) fix core lower bound (#32724) 2025-08-27 14:36:10 -04:00
ccurme
9b576440ed release: anthropic, openai 1.0.0a1 (#32723) 2025-08-27 14:12:28 -04:00
Sydney Runkle
7f9b0772fc chore(langchain): also bump text splitters (#32722) 2025-08-27 18:09:57 +00:00
Sydney Runkle
d6e618258f chore(langchain): use latest core (#32720) 2025-08-27 14:06:07 -04:00
Sydney Runkle
806bc593ab chore(langchain): revert back to static versioning for now (#32719) 2025-08-27 13:54:41 -04:00
Sydney Runkle
047bcbaa13 release(langchain): v1.0.0a1 (#32718)
Also removing globals usage + static version
2025-08-27 13:46:20 -04:00
Sydney Runkle
18db07c292 feat(langchain): revamped create_react_agent (#32705)
Adding `create_react_agent` and introducing `langchain.agents`!

## Enhanced Structured Output

`create_react_agent` supports coercion of outputs to structured data
types like `pydantic` models, dataclasses, typed dicts, or JSON schema
specifications.

### Structural Changes

In langgraph < 1.0, `create_react_agent` implemented support for
structured output via an additional LLM call to the model after the
standard model / tool calling loop finished. This introduced extra
expense and was unnecessary.

This new version implements structured output support in the main loop,
allowing a model to choose between calling tools or generating
structured output (or both).

The same basic pattern for structured output generation works:

```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""

    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent("openai:gpt-4o-mini", tools=[weather_tool], response_format=Weather)
print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```

### Advanced Configuration

The new API exposes two ways to configure how structured output is
generated. Under the hood, LangChain will attempt to pick the best
approach if not explicitly specified. That is, if provider native
support is available for a given model, that takes priority over
artificial tool calling.

1. Artificial tool calling (the default for most models)

LangChain generates a tool (or tools) under the hood that match the
schema of your response format. When the model calls those tools,
LangChain coerces the args to the desired format. Note that LangChain
does not validate outputs against JSON schema specifications.

<details>
<summary>Extended example</summary>

```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from langchain.agents.structured_output import ToolStrategy
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""

    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ToolStrategy(
        schema=Weather, tool_message_content="Final Weather result generated"
    ),
)

result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
for message in result["messages"]:
    message.pretty_print()

"""
================================ Human Message =================================

What's the weather in Tokyo?
================================== Ai Message ==================================
Tool Calls:
  weather_tool (call_Gg933BMHMwck50Q39dtBjXm7)
 Call ID: call_Gg933BMHMwck50Q39dtBjXm7
  Args:
    city: Tokyo
================================= Tool Message =================================
Name: weather_tool

it's sunny and 70 degrees in Tokyo
================================== Ai Message ==================================
Tool Calls:
  Weather (call_9xOkYUM7PuEXl9DQq9sWGv5l)
 Call ID: call_9xOkYUM7PuEXl9DQq9sWGv5l
  Args:
    temperature: 70
    condition: sunny
================================= Tool Message =================================
Name: Weather

Final Weather result generated
"""

print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```

</details>

2. Provider implementations (limited to OpenAI, Groq)

Some providers support generating structured output directly. For those
cases, we offer the `ProviderStrategy` hint:

<details>
<summary>Extended example</summary>

```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from langchain.agents.structured_output import ProviderStrategy
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""

    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ProviderStrategy(Weather),
)

result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
for message in result["messages"]:
    message.pretty_print()

"""
================================ Human Message =================================

What's the weather in Tokyo?
================================== Ai Message ==================================
Tool Calls:
  weather_tool (call_OFJq1FngIXS6cvjWv5nfSFZp)
 Call ID: call_OFJq1FngIXS6cvjWv5nfSFZp
  Args:
    city: Tokyo
================================= Tool Message =================================
Name: weather_tool

it's sunny and 70 degrees in Tokyo
================================== Ai Message ==================================

{"temperature":70,"condition":"sunny"}
Weather(temperature=70.0, condition='sunny')
"""

print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```

Note! The final tool message has the custom content provided by the dev.

</details>

Prompted output was previously supported but is no longer available via
the `response_format` argument to `create_react_agent`. If there's
significant demand for this, we'd be happy to engineer a solution.

## Error Handling

`create_react_agent` now exposes an API for managing errors associated
with structured output generation. There are two common problems with
structured output generation (w/ artificial tool calling):

1. **Parsing error** -- the model generates data that doesn't match the
desired structure for the output
2. **Multiple tool calls error** -- the model generates 2 or more tool
calls associated with structured output schemas

A developer can control the desired behavior for this via the
`handle_errors` arg to `ToolStrategy`.

<details>
<summary>Extended example</summary>

```py
from langchain_core.messages import HumanMessage
from pydantic import BaseModel

from langchain.agents import create_react_agent
from langchain.agents.structured_output import StructuredOutputValidationError, ToolStrategy


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"


def handle_validation_error(error: Exception) -> str:
    if isinstance(error, StructuredOutputValidationError):
        return (
            f"Please call the {error.tool_name} call again with the correct arguments. "
            f"Your mistake was: {error.source}"
        )
    raise error


agent = create_react_agent(
    "openai:gpt-5",
    tools=[weather_tool],
    response_format=ToolStrategy(
        schema=Weather,
        handle_errors=handle_validation_error,
    ),
)
```

</details>

## Error Handling for Tool Calling

Tools fail for two main reasons:

1. **Invocation failure** -- the args generated by the model for the
tool are incorrect (missing, incompatible data types, etc)
2. **Execution failure** -- the tool execution itself fails due to a
developer error, network error, or some other exception.

By default, when tool **invocation** fails, the react agent will return
an artificial `ToolMessage` to the model asking it to correct its
mistakes and retry.

Now, when tool **execution** fails, the react agent raises the
`ToolException` by default instead of asking the model to retry. This
avoids retry loops that are unlikely to succeed, such as those caused by
developer or network errors.

Developers can configure their desired behavior for retries / error
handling via the `handle_tool_errors` arg to `ToolNode`.
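For illustration, a minimal sketch of configuring this; the accepted value types for `handle_tool_errors` are an assumption here, so treat the `ToolNode` API as authoritative:

```py
from langchain.agents import ToolNode  # import path per the migration notes below


def fetch_data(url: str) -> str:
    """Fetch data from a URL; may raise on network errors."""
    raise TimeoutError("upstream timed out")


# Instead of raising, surface a ToolMessage so the model can retry.
tool_node = ToolNode(
    [fetch_data],
    handle_tool_errors="Tool failed; check your arguments and try again.",
)
```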

## Pre-Bound Models

`create_react_agent` no longer supports inputs to `model` that have been
pre-bound w/ tools or other configuration. To properly support
structured output generation, the agent itself needs the power to bind
tools + structured output kwargs.

This also makes the devx cleaner - it's always expected that `model` is
an instance of `BaseChatModel` (or `str` that we coerce into a chat
model instance).

Dynamic model functions can return a pre-bound model **IF** structured
output is not also used. Dynamic model functions can then bind tools /
structured output logic.

## Import Changes

Users should now use `create_react_agent` from `langchain.agents`
instead of `langgraph.prebuilt`.
Other imports have a similar migration path, `ToolNode` and `AgentState`
for example.

* `chat_agent_executor.py` -> `react_agent.py`

Some notes:
1. Disabled blockbuster + some linting in `langchain/agents` -- far from
ideal, but necessary to get this across the line for the alpha. We
should re-enable before the official release.
2025-08-27 17:32:21 +00:00
ccurme
a80fa1b25f chore(infra): drop anthropic from core test matrix (#32717) 2025-08-27 13:11:44 -04:00
ccurme
72b66fcca5 release(core): 1.0.0a1 (#32715) 2025-08-27 11:57:07 -04:00
ccurme
a47d993ddd release(core): 1.0.0dev0 (#32713) 2025-08-27 11:05:39 -04:00
ccurme
cb4705dfc0 chore: (v1) drop support for python 3.9 (#32712)
EOL in October

Will update ruff / formatting closer to 1.0 release to minimize merge
conflicts on branch
2025-08-27 10:42:49 -04:00
ccurme
9a9263a2dd fix(langchain): (v1) delete unused chains (#32711)
Merge conflict was not resolved correctly
2025-08-27 10:17:14 -04:00
Chester Curme
e4b69db4cf Merge branch 'master' into wip-v1.0
# Conflicts:
#	libs/langchain_v1/langchain/chains/documents/map_reduce.py
#	libs/langchain_v1/langchain/chains/documents/stuff.py
2025-08-27 09:37:21 -04:00
Mason Daugherty
242881562b feat: standard content, IDs, translators, & normalization (#32569) 2025-08-27 09:31:12 -04:00
Sydney Runkle
1fe2c4084b chore(langchain): remove untested chains for first alpha (#32710)
Also removing globals.py file
2025-08-27 08:24:43 -04:00
Sydney Runkle
c6c7fce6c9 chore(langchain): drop Python 3.9 to prep for v1 (#32704)
Python 3.9 EOL is October 2025, so we're going to drop it for the v1
alpha release.
2025-08-26 23:16:42 +00:00
Mason Daugherty
a2322f68ba Merge branch 'master' into wip-v1.0 2025-08-26 15:51:57 -04:00
Mason Daugherty
3d08b6bd11 chore: address pytest-asyncio deprecation warnings + other nits (#32696)
along with some incompatible linting rules
2025-08-26 15:51:38 -04:00
Matthew Farrellee
f2dcdae467 fix(standard-tests): update function_args to match my_adder_tool param types (#32689)
**Description:**

https://api.llama.com implements strong type checking, which results in
a false negative.

with type mismatch (expected integer, received string) -

```
$ curl -X POST "https://api.llama.com/compat/v1/chat/completions" \
  -H "Authorization: Bearer API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
 "model": "Llama-3.3-70B-Instruct",
 "messages": [
     {"role": "user", "content": "What is 1 + 2"},
     {"role": "assistant", "content": "", "tool_calls": [{"id": "abc123", "type": "function", "function": {"name": "my_adder_tool", "arguments": "{\"a\": \"1\", \"b\": \"2\"}"}}]},
     {"role": "tool", "tool_call_id": "abc123", "content": "{\"result\": 3}"}
 ],
 "tools": [{"type": "function", "function": {"name": "my_adder_tool", "description": "Sum two integers", "parameters": {"properties": {"a": {"type": "integer"}, "b": {"type": "integer"}}, "required": ["a", "b"], "type": "object"}}}]
}'

{"title":"Bad request","detail":"Unexpected param value `a`: \"1\"","status":400}
```

with correct type -

```
$ curl -X POST "https://api.llama.com/compat/v1/chat/completions" \
  -H "Authorization: Bearer API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
 "model": "Llama-3.3-70B-Instruct",
 "messages": [
     {"role": "user", "content": "What is 1 + 2"},
     {"role": "assistant", "content": "", "tool_calls": [{"id": "abc123", "type": "function", "function": {"name": "my_adder_tool", "arguments": "{\"a\": 1, \"b\": 2}"}}]},
     {"role": "tool", "tool_call_id": "abc123", "content": "{\"result\": 3}"}
 ],
 "tools": [{"type": "function", "function": {"name": "my_adder_tool", "description": "Sum two integers", "parameters": {"properties": {"a": {"type": "integer"}, "b": {"type": "integer"}}, "required": ["a", "b"], "type": "object"}}}]
}'

{"id":"AhMwBbuaa5payFr_xsOHzxX","model":"Llama-3.3-70B-Instruct","choices":[{"finish_reason":"stop","index":0,"message":{"refusal":"","role":"assistant","content":"The result of 1 + 2 is 3.","id":"AhMwBbuaa5payFr_xsOHzxX"},"logprobs":null}],"created":1756167668,"object":"chat.completions","usage":{"prompt_tokens":248,"completion_tokens":17,"total_tokens":265}}
```
2025-08-26 15:50:47 -04:00
ccurme
dbebe2ca97 release(core): 0.3.75 (#32693) 2025-08-26 11:12:03 -04:00
ccurme
008043977d release(openai): 0.3.32 (#32691) 2025-08-26 14:05:40 +00:00
Jacob Lee
1459d4f4ce fix(openai): Always add raw response object to OpenAI client errors for invoke (#32655) 2025-08-26 09:59:25 -04:00
ccurme
f33480c2cf feat(core): trace response body on error (#32653) 2025-08-25 14:28:19 -04:00
ccurme
7e9ae5df60 feat(openai): (v1) delete bind_functions and remove tool_calls from additional_kwargs (#32669) 2025-08-25 14:22:31 -04:00
Mason Daugherty
1c55536ec1 chore(core): add note about backward compatibility for tool_calls in additional_kwargs in JsonOutputKeyToolsParser 2025-08-25 10:30:41 -04:00
Maitrey Talware
622337a297 docs(docs): fixed typos in documentation (#32661)
Minor typo fixes. (Not linked to current open issues)
2025-08-25 10:02:53 -04:00
Shahroz Ahmad
1819c73d10 docs(docs): update Docker to ClickHouse 25.7 with vector_similarity support (#32659)
- **Description:** Updated Docker command to use ClickHouse 25.7 (has
`vector_similarity` index support). Added `CLICKHOUSE_SKIP_USER_SETUP=1`
env param to [bypass default user
setup](https://clickhouse.com/docs/install/docker#managing-default-user)
and allow external network access. There was also a bug where results
from `similarity_search_with_relevance_scores` need to be unpacked
before use.

- **Issue:** Fixes #32094 for anyone following the tutorial with default
ClickHouse configuration.
2025-08-25 09:59:28 -04:00
Kim
8171403b4a docs(docs): rebranding of Azure AI Studio to Azure AI Foundry (#32658)
# Description
Updated documentation to reflect Microsoft’s rebranding of Azure AI
Studio to Azure AI Foundry. This ensures consistency with current Azure
terminology across the docs.

# Issue
N/A

# Dependencies
None
2025-08-25 09:58:31 -04:00
Chester Curme
7a108618ae Merge branch 'master' into wip-v1.0 2025-08-25 09:39:39 -04:00
Mason Daugherty
2d0713c2fc fix(infra): ollama CI 2025-08-22 16:40:03 -04:00
Mason Daugherty
8060b371bb fix(infra): ollama CI 2025-08-22 16:37:05 -04:00
Mason Daugherty
7851f66503 release(ollama): 0.3.7 (#32651) 2025-08-22 15:18:40 -04:00
Mason Daugherty
af3b88f58d feat(ollama): update reasoning type to support string values for custom intensity levels (e.g. gpt-oss) (#32650) 2025-08-22 15:11:32 -04:00
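Presumably enabling usage along these lines (a sketch; the model tag and accepted intensity levels here are assumptions):

```py
from langchain_ollama import ChatOllama

# `reasoning` previously took booleans; string intensity levels are now
# accepted as well, e.g. for gpt-oss models.
llm = ChatOllama(model="gpt-oss:20b", reasoning="high")
print(llm.invoke("Why is the sky blue?").content)
```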
itaismith
1eb45d17fb feat(chroma): Add support for collection forking (#32627) 2025-08-21 17:57:55 -04:00
ccurme
8545d4731e release(openai): 0.3.31 (#32646) 2025-08-21 16:50:27 -04:00
Alex Naidis
21f7a9a9e5 fix(openai): allow temperature parameter for gpt-5-chat models (#32624) 2025-08-21 16:40:10 -04:00
sa411022
61bc1bf9cc fix(openai): construct responses api input (#32557) 2025-08-21 15:56:29 -04:00
Shahrukh Shaik
4ba222148d fix(openai): Chat Message Annotations defaults to [ ] if not list or None (#32614) 2025-08-21 15:30:12 -04:00
ccurme
6f058e7b9b fix(core): (v1) update BaseChatModel return type to AIMessage (#32626) 2025-08-21 14:02:24 -04:00
Christophe Bornet
b825f85bf2 fix(standard-tests): fix BaseStoreAsyncTests.test_set_values_is_idempotent (#32638)
The async version of the test should use the `ayield_keys` method
instead of `yield_keys`.
Otherwise tools such as `blockbuster` may trigger on a blocking call.
2025-08-21 10:07:46 -04:00
Mohammed Mohtasim .M.S
b5c44406eb docs(docs): fix typos in table in "How to load PDFs" documentation (#32635)
**Description:**
Fixed corrupted text in the code cell output of the documentation
notebook. The code cell itself was correct, but the saved output
contained garbage text.

**Issue:**
The saved output in the documentation notebook contained garbage/typo
text in the table name.

**Dependencies:**
None
2025-08-21 10:06:45 -04:00
Emmanuel Leroy
2ec63ca7da docs: migration to langchain_oci (#32619)
Doc update. I missed a couple mentions of the old package.
2025-08-21 10:03:44 -04:00
Christophe Bornet
f896bcdb1d chore(langchain): add mypy pydantic plugin (#32610) 2025-08-19 16:59:59 -04:00
Christophe Bornet
73a7de63aa chore(text-splitters): add mypy pydantic plugin (#32611) 2025-08-19 16:58:12 -04:00
Emmanuel Leroy
cd5f3ee364 docs: migrate from community package to langchain-oci (#32608)
Migrate package from langchain_community to langchain_oci
2025-08-19 16:57:37 -04:00
ccurme
dbc5a3b718 fix(anthropic): update cassette for streaming benchmark (#32609) 2025-08-19 11:18:36 -04:00
Christophe Bornet
02d6b9106b chore(core): add mypy pydantic plugin (#32604)
This helps to remove a bunch of mypy false positives.
2025-08-19 09:39:53 -04:00
William FH
b470c79f1d refactor(core): Use duck typing for _StreamingCallbackHandler (#32535)
It's used in langgraph and maybe elsewhere, so it would be preferable if
it could just be duck-typed.
2025-08-19 05:41:07 -07:00
Mason Daugherty
f0f1e28473 Merge branch 'master' of github.com:langchain-ai/langchain into wip-v1.0 2025-08-18 23:30:10 -04:00
Mason Daugherty
d204f0dd55 feat(infra): add skip-preview tag check in Vercel deployment script (#32600)
Having vercel attempt to deploy on each commit (even if unrelated to
docs) was getting annoying. Options:

- `[skip-preview]`
- `[no-preview]`
- `[skip-deploy]`

Full example: `fix(core): resolve memory leak [no-preview]`
2025-08-18 17:33:27 -04:00
Mohammad Mohtashim
00259b0061 fix(deepseek): Deep Seek Model for LS Tracing (#32575)
- **Description:** Fix the provider reported in LangSmith (LS) tracing for DeepSeek.
  - **Issue:** #32484

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-18 18:48:30 +00:00
Mohammad Mohtashim
4fb1132e30 docs: Classification Notebook Update (#32357)
- **Description:** Updating the Classification notebook which was raised
[here](https://github.com/langchain-ai/langchain/issues/32354)
- **Issue:** Fixes #32354

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-18 18:45:03 +00:00
Mason Daugherty
a6690eb9fd release(anthropic): 0.3.19 (#32595) 2025-08-18 14:25:03 -04:00
Mason Daugherty
f69f9598f5 chore: update references to use the latest version of Claude-3.5 Sonnet (#32594) 2025-08-18 14:11:15 -04:00
Mason Daugherty
8d0fb2d04b fix(anthropic): correct input_token count for streaming (#32591)
* Create usage metadata on
[`message_delta`](https://docs.anthropic.com/en/docs/build-with-claude/streaming#event-types)
instead of at the beginning. Consequently, token counts are not included
during streaming but instead at the end. This allows for accurate
reporting of server-side tool usage (important for billing); see the
sketch after this list.
* Add some clarifying comments
* Fix some outstanding Pylance warnings
* Remove unnecessary `text` popping in thinking blocks
* Also now correctly reports `input_cache_read`/`input_cache_creation`
as a result
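A sketch of the observable behavior (model name illustrative):

```py
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")

aggregate = None
for chunk in llm.stream("Hello"):
    # AIMessageChunk supports `+`, accumulating content and usage.
    aggregate = chunk if aggregate is None else aggregate + chunk

# Token counts now arrive with the final message_delta event, so the
# aggregated message reports accurate usage (including cache reads).
print(aggregate.usage_metadata)
```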
2025-08-18 17:51:47 +00:00
Mason Daugherty
8042b04da6 fix(anthropic): clean up null file_id fields in citations during message formatting (#32592)
When citations are returned from streaming, they include a `file_id:
null` field in their `content_block_location` structure.

When these citations are passed back to the API in subsequent messages,
the API rejects them with "Extra inputs are not permitted" for the
`file_id` field.
2025-08-18 13:01:52 -04:00
Daehwi Kim
fb74265175 fix(docs): update LangGraph guides link and add JS how-to link (#32583)
**Description:**  
Corrected LangGraph documentation link (changed to "guides"), and added
a link to LangGraph JS how-to guides for clarity.

**Issue:**  
N/A  

**Dependencies:**  
None

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-18 14:27:37 +00:00
Oresztesz Margaritisz
21b61aaf9a fix(docs): Using appropriate argument name in ToolNode for error handling (#32586)
The appropriate `ToolNode` attribute for error handling is called
`handle_tool_errors` instead of `handle_tool_error`.

For further info see [ToolNode source code in
LangGraph](https://github.com/langchain-ai/langgraph/blob/main/libs/prebuilt/langgraph/prebuilt/tool_node.py#L255)

**Twitter handle:** gitaroktato

2025-08-18 10:12:10 -04:00
Keyu Chen
03138f41a0 feat(text-splitters): add optional custom header pattern support (#31887)
## Description

This PR adds support for custom header patterns in
`MarkdownHeaderTextSplitter`, allowing users to define non-standard
Markdown header formats (like `**Header**`) and specify their hierarchy
levels.

**Issue:** Fixes #22738

**Dependencies:** None - this change has no new dependencies

**Key Changes:**
- Added optional `custom_header_patterns` parameter to support
non-standard header formats
- Enable splitting on patterns like `**Header**` and `***Header***`
- Maintain full backward compatibility with existing usage
- Added comprehensive tests for custom and mixed header scenarios

## Example Usage

```python
from langchain_text_splitters import MarkdownHeaderTextSplitter

headers_to_split_on = [
    ("**", "Chapter"),
    ("***", "Section"),
]

custom_header_patterns = {
    "**": 1,   # Level 1 headers
    "***": 2,  # Level 2 headers
}

splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=headers_to_split_on,
    custom_header_patterns=custom_header_patterns,
)

# Now **Chapter 1** is treated as a level 1 header
# And ***Section 1.1*** is treated as a level 2 header
```

## Testing

-  Added unit tests for custom header patterns
-  Added tests for mixed standard and custom headers
-  All existing tests pass (backward compatibility maintained)
-  Linting and formatting checks pass

---

The implementation provides a flexible solution while maintaining the
simplicity of the existing API. Users can continue using the splitter
exactly as before, with the new functionality being entirely opt-in
through the `custom_header_patterns` parameter.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-18 10:10:49 -04:00
Mason Daugherty
fd891ee3d4 revert(anthropic): streaming token counting to defer input tokens until completion (#32587)
Reverts langchain-ai/langchain#32518
2025-08-18 09:48:33 -04:00
ccurme
b8cdbc4eca fix(anthropic): sanitize tool use block when taking directly from content (#32574) 2025-08-18 09:06:57 -04:00
Christophe Bornet
791d309c06 chore(langchain): add mypy warn_unreachable setting (#32529)
See
https://mypy.readthedocs.io/en/stable/config_file.html#confval-warn_unreachable

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-15 23:03:53 +00:00
Mason Daugherty
d3d23e2372 fix(anthropic): streaming token counting to defer input tokens until completion (#32518)
Supersedes #32461

Fixed incorrect input token reporting during streaming when tools are
used. Previously, input tokens were counted at `message_start` before
tool execution, leading to inaccurate counts. Now input tokens are
properly deferred until `message_delta` (completion), aligning with
Anthropic's billing model and SDK expectations.

**Before Fix:**
- Streaming with tools: Input tokens = 0 
- Non-streaming with tools: Input tokens = 472 

**After Fix:**
- Streaming with tools: Input tokens = 472 
- Non-streaming with tools: Input tokens = 472 

Aligns with Anthropic's SDK expectations. The SDK handles input token
updates in `message_delta` events:

```python
# https://github.com/anthropics/anthropic-sdk-python/blob/main/src/anthropic/lib/streaming/_messages.py
if event.usage.input_tokens is not None:
      current_snapshot.usage.input_tokens = event.usage.input_tokens
```
2025-08-15 17:49:46 -04:00
Mason Daugherty
8bd2403518 fix: increase max_tokens limit to 64000 re: Anthropic dynamic tokens 2025-08-15 15:34:54 -04:00
Mason Daugherty
4dd9110424 Merge branch 'master' into wip-v1.0 2025-08-15 15:32:21 -04:00
Mohammad Mohtashim
174e685139 feat(anthropic): dynamic mapping of Max Tokens for Anthropic (#31946)
- **Description:** Dynamic mapping of `max_tokens` as per the chosen
anthropic model (illustrative sketch below).
- **Issue:** Fixes #31605
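Illustrative shape of such a mapping; the values and fallback here are assumptions for the sketch, not the shipped table:

```py
# Hypothetical defaults keyed by model family; the real table lives in
# langchain_anthropic and may differ.
_MODEL_DEFAULT_MAX_TOKENS = {
    "claude-3-7-sonnet": 64000,
    "claude-3-5-sonnet": 8192,
    "claude-3-opus": 4096,
}


def default_max_tokens(model: str) -> int:
    for prefix, limit in _MODEL_DEFAULT_MAX_TOKENS.items():
        if model.startswith(prefix):
            return limit
    return 1024  # conservative fallback
```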

@ccurme

---------

Co-authored-by: Caspar Broekhuizen <caspar@langchain.dev>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-15 11:33:51 -07:00
Mason Daugherty
2f32c444b8 docs: add details on message IDs and their assignment process (#32534) 2025-08-15 18:22:28 +00:00
Mason Daugherty
9721684501 Merge branch 'master' into wip-v1.0 2025-08-15 14:06:34 -04:00
Mason Daugherty
a4e135b508 fix: use .get() on image URL in ImagePromptValue.to_string() 2025-08-15 13:57:50 -04:00
Mason Daugherty
fe740a9397 fix(docs): chatbot.ipynb trimming regression (#32561)
Supersedes #32544

Changes to the `trimmer` behavior resulted in the call `"What math
problem was asked?"` to no longer see the relevant query due to the
number of the queries' tokens. Adjusted to not trigger trimming the
relevant part of the message history. Also, add print to the trimmer to
increase observability on what is leaving the context window.

Add note to trimming tut & format links as inline
2025-08-15 14:47:22 +00:00
Rostyslav Borovyk
b2b835cb36 docs(docs): add Oxylabs document loader (#32429)

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-15 10:46:26 -04:00
Christophe Bornet
4656f727da chore(text-splitters): add mypy warn_unreachable (#32558) 2025-08-15 09:45:20 -04:00
Mason Daugherty
34800332bf chore: update integrations table (#32556)
Enhance the integrations table by adding the `js:
'@langchain/community'` reference for several packages and updating the
titles of specific integrations to avoid improper capitalization
2025-08-14 22:37:36 -04:00
Mason Daugherty
06ba80ff68 docs: formatting Tavily (#32555) 2025-08-14 23:41:37 +00:00
Mason Daugherty
2bd8096faa docs: add pre-commit setup instructions to the dev setup guide (#32553) 2025-08-14 20:35:57 +00:00
Mason Daugherty
a0331285d7 fix(core): Support no-args tools by defaulting args to empty dict (#32530)
Supersedes #32408

Description:  
This PR ensures that tool calls without explicitly provided `args` will
default to an empty dictionary (`{}`), allowing tools with no parameters
(e.g. `def foo() -> str`) to be registered and invoked without
validation errors. This change improves compatibility with agent
frameworks that may omit the `args` field when generating tool calls.
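A minimal sketch of what this enables (the tool name is hypothetical):

```py
from langchain_core.tools import tool


@tool
def ping() -> str:
    """A tool that takes no parameters."""
    return "pong"


# An agent may emit a tool call whose args are empty (or omitted
# entirely -- post-fix, a missing `args` is treated as `{}`):
result = ping.invoke({"name": "ping", "id": "call_1", "type": "tool_call", "args": {}})
print(result)  # ToolMessage with content "pong"
```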

Issue:  
See
[langgraph#5722](https://github.com/langchain-ai/langgraph/issues/5722)
–
LangGraph currently emits tool calls without `args`, which leads to
validation errors
when tools with no parameters are invoked. This PR ensures compatibility
by defaulting
`args` to `{}` when missing.

Dependencies:  
None


---------

Signed-off-by: jitokim <pigberger70@gmail.com>
Co-authored-by: jito <pigberger70@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-14 20:28:36 +00:00
Mason Daugherty
8f68a08528 chore: add Chat LangChain to README.md (#32545) 2025-08-14 16:15:27 -04:00
Lauren Hirata Singh
71651c4a11 docs: update banner (#32552) 2025-08-14 10:54:29 -07:00
Lauren Hirata Singh
44ec1f32b2 docs: banner for academy course (#32550)
Publish at 10AM PT
2025-08-14 10:05:00 -07:00
Yoon
0c81499243 docs(ollama): update API usage examples (#32547)
**Description**  
Corrected a typo in the Ollama chatbot example output in  
`docs/docs/integrations/chat/ollama.ipynb` where `"got-oss"` was  
mistakenly used instead of `"gpt-oss"`.

No functional changes to code; documentation-only update.  
All notebook outputs were cleared to keep the diff minimal.

**Issue**  
N/A

**Dependencies**  
None

**Twitter handle**  
N/A
2025-08-14 12:57:38 -04:00
Mason Daugherty
397cd89988 docs: update outdated README.md content (#32540) 2025-08-13 22:19:38 +00:00
mishraravibhushan
db438d8dcc docs(docs): fixed additional grammar and style issues in how-to index (#32533)
- Fix 'few shot' → 'few-shot' (add hyphen for consistency)
- Fix 'over the database' → 'over a database' (add missing article)
- Fix 'run time' → 'runtime' (more consistent terminology)
- Fix 'in-sync' → 'in sync' (remove unnecessary hyphen)
2025-08-13 14:10:58 -04:00
RecallIO
4f71c35eb0 docs(docs): Add RecallIO.AI as a memory provider (#32331)
Add requested files to add RecallIO as a memory provider.

---------

Co-authored-by: Frey <gfreyburger@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-13 15:09:56 +00:00
Mason Daugherty
156ae2e69b fix(docs): resolve langchain-azure-ai conflict with langchain-core (#32528) 2025-08-13 14:47:23 +00:00
Shenghang Tsai
f4f919768e docs(langchain): create SiliconFlow provider entry (#32342)
SiliconFlow's provider integration will be maintained at
https://github.com/siliconflow/langchain-siliconflow
This PR introduces basic instructions for using the pip package.
2025-08-13 10:41:23 -04:00
Mason Daugherty
7932e1edd1 feat(docs): clarify structured output with tools ordering (#32527) 2025-08-13 10:40:48 -04:00
Mason Daugherty
024422e9b0 chore: update to use new LGP docs url (#32522) 2025-08-13 03:38:39 +00:00
Mason Daugherty
d52036accc chore: update README.md to use pepy downloads badge (#32521) 2025-08-13 03:23:11 +00:00
Mason Daugherty
5b701b5189 fix(tests): add anthropic_proxy to configurable test parameters (for v1) 2025-08-12 18:33:21 -04:00
Mason Daugherty
8848b3e018 fix(tests): add anthropic_proxy to configurable test parameters 2025-08-12 18:27:35 -04:00
Mason Daugherty
80068432ed chore(core): bump lock 2025-08-12 17:32:24 -04:00
Jack
b9dcce95be fix(anthropic): Add proxy (#32409)
fix #30146

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-12 21:21:26 +00:00
ccurme
be83ce74a7 feat(anthropic): support cache_control as a kwarg (#31523)
```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-haiku-latest")
caching_llm = llm.bind(cache_control={"type": "ephemeral"})

caching_llm.invoke(
    [
        HumanMessage("..."),
        AIMessage("..."),
        HumanMessage("..."),  # <-- final message / content block gets cache annotation
    ]
)
```
Potentially useful given Anthropic's [incremental
caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#continuing-a-multi-turn-conversation)
capabilities:
> During each turn, we mark the final block of the final message with
cache_control so the conversation can be incrementally cached. The
system will automatically lookup and use the longest previously cached
prefix for follow-up messages.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-12 16:18:24 -04:00
Mason Daugherty
1167e7458e fix(anthropic): update test model names and adjust token count assertions in integration tests (#32422) 2025-08-12 19:39:35 +00:00
Mason Daugherty
d5fd0bca35 docs(anthropic): add documentation for extended context windows in Claude Sonnet 4 (#32517) 2025-08-12 19:16:26 +00:00
Narasimha Badrinath
30d646b576 docs(docs): remove redundant integration details from ChatGradient page. (#32514)
This commit removes redundant integration info from the details page,
changes the reference from "DigitalOcean GradientAI" to
"DigitalOcean Gradient™ AI", and updates the setup instructions
accordingly.
2025-08-12 16:14:18 +00:00
Mason Daugherty
262c83763f release(openai): 0.3.30 (#32515) 2025-08-12 16:06:17 +00:00
Mason Daugherty
0024dffa68 feat(openai): officially support verbosity (#32470) 2025-08-12 16:00:30 +00:00
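A minimal usage sketch, assuming the newly supported top-level `verbosity`
parameter (model name is illustrative):

```python
from langchain_openai import ChatOpenAI

# "verbosity" hints how long responses should be: "low", "medium", or "high"
llm = ChatOpenAI(model="gpt-5", verbosity="low")
print(llm.invoke("Explain vector stores in one paragraph.").content)
```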
Brody
98797f367a docs: fix broken links (#32513)
**Description:**

Two broken links were reported by another LangChain employee. This PR
fixes those links.

Fixed and tested locally.
  
**Dependencies:**

None
2025-08-12 15:55:37 +00:00
Christophe Bornet
1563099f3f chore(langchain): select ALL rules with exclusions (#31930)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-12 11:51:31 -04:00
rishiraj
7f259863e1 feat(docs): add truefoundry ai gateway (#32362)
This PR adds documentation for integrating [TrueFoundry’s AI
Gateway](https://www.truefoundry.com/ai-gateway) with Langfuse using the
LangGraph OpenAI SDK.
The integration sends requests through TrueFoundry's AI Gateway for
unified governance, observability, and routing, while LangGraph runs on
the client side to capture execution traces and telemetry.
- Issue: N/A
- Dependencies: None
- Twitter - https://x.com/truefoundry


tests - Not applicable

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-12 02:26:45 +00:00
Mason Daugherty
c8df6c7ec9 chore: update CONTRIBUTING.md to more clearly mention forum (#32509) 2025-08-11 23:02:21 +00:00
Christophe Bornet
cf2b4bbe09 chore(cli): select ALL rules with exclusions (#31936)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 22:43:11 +00:00
Christophe Bornet
09a616fe85 chore(standard-tests): add ruff rules D (#32347)
See https://docs.astral.sh/ruff/rules/#pydocstyle-d

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 22:26:11 +00:00
Christophe Bornet
46bbd52e81 chore(cli): add ruff rules D1 (#32350)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 22:25:30 +00:00
Christophe Bornet
8b663ed6c6 chore(text-splitters): bump mypy version to 1.17 (#32387)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 22:24:49 +00:00
Anderson
166c027434 docs: add scrapeless integration documentation (#32081)
- **Description:** Integrated the Scrapeless package to enable LangChain
users to seamlessly incorporate Scrapeless into their agents.
- **Dependencies:** None
- **Twitter handle:** [Scrapelessteam](https://x.com/Scrapelessteam)

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-11 22:16:15 +00:00
GDanksAnchor
4a2a3fcd43 docs: add anchorbrowser (#32494)
# Description

This PR updates the docs for the
[langchain-anchorbrowser](https://pypi.org/project/langchain-anchorbrowser/)
package. It adds a few tools.

[Anchor Browser](https://anchorbrowser.io/?utm=langchain) is the
platform for AI Agentic browser automation, which solves the challenge
of automating workflows for web applications that lack APIs or have
limited API coverage. It simplifies the creation, deployment, and
management of browser-based automations, transforming complex web
interactions into simple API endpoints.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-11 21:48:10 +00:00
Anubhav Dhawan
d46dcf4a60 docs: add Google partner guide for MCP Toolbox (#32356)
This PR introduces a new Google partner guide for MCP Toolbox. The
primary goal of this new documentation is to enhance the discoverability
of MCP Toolbox for developers working within the Google ecosystem,
providing them with a clear and direct path to using our tools.

> [!IMPORTANT]
> This PR contains link to a page which is added in #32344. This will
cause deployment failure until that PR is merged.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-11 21:34:12 +00:00
William Espegren
d2ac3b375c fix(docs): add Spider as a webpage loader (#32453)
[Spider](https://spider.cloud/) is a webpage loader and should be listed
under the
["Webpages"](https://python.langchain.com/docs/integrations/document_loaders/#webpages)
table on the Document loaders page.

Twitter: https://x.com/WilliamEspegren

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 21:23:03 +00:00
Anubhav Dhawan
1e38fd2ce3 docs: add integration guide for MCP Toolbox (#32344)
This PR introduces a new integration guide for MCP Toolbox. The primary
goal of this new documentation is to enhance the discoverability of MCP
Toolbox for developers working within the LangChain ecosystem, providing
them with a clear and direct path to using our tools.

This approach was chosen to provide users with a practical, hands-on
example that they can easily follow.

> [!NOTE]
> The page added in this PR is linked to from a section in Google
partners page added in #32356.

---------

Co-authored-by: Lauren Hirata Singh <lauren@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 21:03:38 +00:00
Yasien Dwieb
155e3740bc fix(docs): handle collection not found error on RAG tutorial when qdrant is selected as vectorStore (#32099)
In the [RAG Part 1
tutorial](https://python.langchain.com/docs/tutorials/rag/), the sample code
does not work when Qdrant is selected as the vector store; it fails with
`ValueError: Collection test not found`.

This fix creates that collection and ensures its dimension size matches the
embedding size of the selected embedding model.
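
A sketch of the fix's setup step, assuming OpenAI's `text-embedding-3-large`
(3072 dimensions) as the embedding model:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

client = QdrantClient(":memory:")  # or point at a running Qdrant instance

# Create the "test" collection before building the vector store; the vector
# size must match the embedding model's output dimension (3072 here is an
# assumption for illustration).
if not client.collection_exists("test"):
    client.create_collection(
        collection_name="test",
        vectors_config=VectorParams(size=3072, distance=Distance.COSINE),
    )
```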

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-11 20:31:24 +00:00
Deepesh Dhakal
f9b4e501a8 fix(docs): update llamacpp.ipynb for installation options on Mac (#32341)
The previous code generated a "data invalid" error.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-11 20:25:35 +00:00
prem-sagar123
5a50802c9a docs: update prompt_templates.mdx (#32405)
```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Hypothetical template with a "msgs" placeholder so the snippet runs standalone
prompt_template = ChatPromptTemplate.from_messages([MessagesPlaceholder("msgs")])

messages_to_pass = [
    HumanMessage(content="What's the capital of France?"),
    AIMessage(content="The capital of France is Paris."),
    HumanMessage(content="And what about Germany?"),
]
formatted_prompt = prompt_template.invoke({"msgs": messages_to_pass})
print(formatted_prompt)
```

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-11 20:16:30 +00:00
Mohammad Mohtashim
9a7e66be60 docs: put standard-tests before other packages (#32424)
- **Description:** Moving `standard-tests` to the main ordered section
- **Issue:** #32395

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 20:05:24 +00:00
Mason Daugherty
5597b277c5 feat(docs): add subsection on Tool Artifacts vs. Injected State (#32468)
Clarify the differences between tool artifacts and injected state in
LangChain and LangGraph
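
For context, a minimal sketch of a tool that returns an artifact alongside its
model-facing content (names are illustrative; `response_format="content_and_artifact"`
is the existing langchain-core mechanism):

```python
from langchain_core.tools import tool


@tool(response_format="content_and_artifact")
def sample_rows(n: int) -> tuple[str, list[dict]]:
    """Return a short summary for the model plus the raw rows as an artifact."""
    rows = [{"id": i} for i in range(n)]
    return f"Sampled {n} rows.", rows
```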
2025-08-11 19:53:33 +00:00
Soham Sharma
a1da5697c6 docs: clarify how to get LangSmith API key (#32402)
**Description:**
I've added a small clarification to the chatbot tutorial. The tutorial
mentions setting the `LANGSMITH_API_KEY`, but doesn't explain how a new
user can get the key from the website. This change adds a brief note to
guide them to the Settings page.

P.S. This is my first pull request, so I'm excited to learn and
contribute!

**Issue:**
N/A

**Dependencies:**
N/A

**Twitter handle:**
@sohamactive

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 19:52:05 +00:00
Divyanshu Gupta
11a54b1f1a docs: clarify SystemMessage usage in LangGraph agent notebook (#32320) (#32346)
Closes #32320

This PR updates the `langgraph_agentic_rag.ipynb` notebook to clarify
that LangGraph does not automatically prepend a `SystemMessage`. A
markdown note and an inline Python comment have been added to guide
users to explicitly include a `SystemMessage` when needed.

This improves documentation for developers working with LangGraph-based
agents and avoids confusion about system-level behavior not being
applied.
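
A minimal sketch of the documented pattern (message contents are illustrative;
agent construction is elided):

```python
from langchain_core.messages import HumanMessage, SystemMessage

# LangGraph does not prepend a SystemMessage automatically, so include one
# explicitly in the input messages.
inputs = {
    "messages": [
        SystemMessage(content="You are a concise research assistant."),
        HumanMessage(content="Summarize the retrieved context."),
    ]
}
```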

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 19:49:42 +00:00
Mason Daugherty
5ccdcd7b7b feat(ollama): docs updates (#32507) 2025-08-11 15:39:44 -04:00
854 changed files with 59678 additions and 52098 deletions

View File

@@ -7,4 +7,4 @@ To learn how to contribute to LangChain, please follow the [contribution guide h
## New features
For new features, please start a new [discussion](https://forum.langchain.com/), where the maintainers will help with scoping out the necessary changes.
For new features, please start a new [discussion on our forum](https://forum.langchain.com/), where the maintainers will help with scoping out the necessary changes.

View File

@@ -1,6 +1,7 @@
name: "\U0001F41B Bug Report"
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the LangChain forum.
labels: ["bug"]
type: bug
body:
- type: markdown
attributes:
@@ -13,9 +14,7 @@ body:
if there's another way to solve your problem:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
* [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
* [LangChain how-to guides](https://python.langchain.com/docs/how_to/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://python.langchain.com/api_reference/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
@@ -25,7 +24,7 @@ body:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
- label: This is a bug, not a usage question.
required: true
- label: I added a clear and descriptive title that summarizes this issue.
required: true
@@ -35,6 +34,8 @@ body:
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- label: This is not related to the langchain-community package.
required: true
- label: I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
required: true
- label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
@@ -118,3 +119,7 @@ body:
python -m langchain_core.sys_info
validations:
required: true

View File

@@ -1,6 +1,9 @@
blank_issues_enabled: false
version: 2.1
contact_links:
- name: LangChain Forum
url: https://forum.langchain.com/
about: General community discussions, support, and feature requests
- name: 📚 Documentation
url: https://github.com/langchain-ai/docs/issues/new?template=langchain.yml
about: Report an issue related to the LangChain documentation
- name: 💬 LangChain Forum
url: https://forum.langchain.com/
about: General community discussions and support

View File

@@ -1,59 +0,0 @@
name: Documentation
description: Report an issue related to the LangChain documentation.
title: "docs: <Please write a comprehensive title after the 'docs: ' prefix>"
labels: [documentation]
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to report an issue in the documentation.
Only report issues with documentation here, explain if there are
any missing topics or if you found a mistake in the documentation.
Do **NOT** use this to ask usage questions or reporting issues with your code.
If you have usage questions or need help solving some problem,
please use the [LangChain Forum](https://forum.langchain.com/).
If you're in the wrong place, here are some helpful links to find a better
place to ask your question:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
* [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
* [LangChain how-to guides](https://python.langchain.com/docs/how_to/),
* [API Reference](https://python.langchain.com/api_reference/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
- type: input
id: url
attributes:
label: URL
description: URL to documentation
validations:
required: false
- type: checkboxes
id: checks
attributes:
label: Checklist
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I included a link to the documentation page I am referring to (if applicable).
required: true
- type: textarea
attributes:
label: "Issue with current documentation:"
description: >
Please make sure to leave a reference to the document/code you're
referring to. Feel free to include names of classes, functions, methods
or concepts you'd like to see documented more.
- type: textarea
attributes:
label: "Idea or request for content:"
description: >
Please describe as clearly as possible what topics you think are missing
from the current documentation.

View File

@@ -0,0 +1,118 @@
name: "✨ Feature Request"
description: Request a new feature or enhancement for LangChain. For questions, please use the LangChain forum.
labels: ["feature request"]
type: feature
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to request a new feature.
Use this to request NEW FEATURES or ENHANCEMENTS in LangChain. For bug reports, please use the bug report template. For usage questions and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Relevant links to check before filing a feature request to see if your request has already been made or
if there's another way to achieve what you want:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://python.langchain.com/api_reference/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: This is a feature request, not a bug report or usage question.
required: true
- label: I added a clear and descriptive title that summarizes the feature request.
required: true
- label: I used the GitHub search to find a similar feature request and didn't find it.
required: true
- label: I checked the LangChain documentation and API reference to see if this feature already exists.
required: true
- label: This is not related to the langchain-community package.
required: true
- type: textarea
id: feature-description
validations:
required: true
attributes:
label: Feature Description
description: |
Please provide a clear and concise description of the feature you would like to see added to LangChain.
What specific functionality are you requesting? Be as detailed as possible.
placeholder: |
I would like LangChain to support...
This feature would allow users to...
- type: textarea
id: use-case
validations:
required: true
attributes:
label: Use Case
description: |
Describe the specific use case or problem this feature would solve.
Why do you need this feature? What problem does it solve for you or other users?
placeholder: |
I'm trying to build an application that...
Currently, I have to work around this by...
This feature would help me/users to...
- type: textarea
id: proposed-solution
validations:
required: false
attributes:
label: Proposed Solution
description: |
If you have ideas about how this feature could be implemented, please describe them here.
This is optional but can be helpful for maintainers to understand your vision.
placeholder: |
I think this could be implemented by...
The API could look like...
```python
# Example of how the feature might work
```
- type: textarea
id: alternatives
validations:
required: false
attributes:
label: Alternatives Considered
description: |
Have you considered any alternative solutions or workarounds?
What other approaches have you tried or considered?
placeholder: |
I've tried using...
Alternative approaches I considered:
1. ...
2. ...
But these don't work because...
- type: textarea
id: additional-context
validations:
required: false
attributes:
label: Additional Context
description: |
Add any other context, screenshots, examples, or references that would help explain your feature request.
placeholder: |
Related issues: #...
Similar features in other libraries:
- ...
Additional context or examples:
- ...

View File

@@ -4,12 +4,7 @@ body:
- type: markdown
attributes:
value: |
Thanks for your interest in LangChain! 🚀
If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.
You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
or are a regular contributor to LangChain with previous merged pull requests.
If you are not a LangChain maintainer, employee, or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.
- type: checkboxes
id: privileged
attributes:

.github/ISSUE_TEMPLATE/task.yml vendored Normal file (91 lines)
View File

@@ -0,0 +1,91 @@
name: "📋 Task"
description: Create a task for project management and tracking by LangChain maintainers. If you are not a maintainer, please use other templates or the forum.
labels: ["task"]
type: task
body:
- type: markdown
attributes:
value: |
Thanks for creating a task to help organize LangChain development.
This template is for **maintainer tasks** such as project management, development planning, refactoring, documentation updates, and other organizational work.
If you are not a LangChain maintainer or were not asked directly by a maintainer to create a task, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead or use the appropriate bug report or feature request templates on the previous page.
- type: checkboxes
id: maintainer
attributes:
label: Maintainer task
description: Confirm that you are allowed to create a task here.
options:
- label: I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create a task here.
required: true
- type: textarea
id: task-description
attributes:
label: Task Description
description: |
Provide a clear and detailed description of the task.
What needs to be done? Be specific about the scope and requirements.
placeholder: |
This task involves...
The goal is to...
Specific requirements:
- ...
- ...
validations:
required: true
- type: textarea
id: acceptance-criteria
attributes:
label: Acceptance Criteria
description: |
Define the criteria that must be met for this task to be considered complete.
What are the specific deliverables or outcomes expected?
placeholder: |
This task will be complete when:
- [ ] ...
- [ ] ...
- [ ] ...
validations:
required: true
- type: textarea
id: context
attributes:
label: Context and Background
description: |
Provide any relevant context, background information, or links to related issues/PRs.
Why is this task needed? What problem does it solve?
placeholder: |
Background:
- ...
Related issues/PRs:
- #...
Additional context:
- ...
validations:
required: false
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: |
List any dependencies or blockers for this task.
Are there other tasks, issues, or external factors that need to be completed first?
placeholder: |
This task depends on:
- [ ] Issue #...
- [ ] PR #...
- [ ] External dependency: ...
Blocked by:
- ...
validations:
required: false

View File

@@ -1,3 +1,5 @@
(Replace this entire block of text)
Thank you for contributing to LangChain! Follow these steps to mark your pull request as ready for review. **If any of these steps are not completed, your PR will not be considered for review.**
- [ ] **PR title**: Follows the format: {TYPE}({SCOPE}): {DESCRIPTION}
@@ -9,14 +11,13 @@ Thank you for contributing to LangChain! Follow these steps to mark your pull re
- feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert, release
- Allowed `{SCOPE}` values (optional):
- core, cli, langchain, standard-tests, docs, anthropic, chroma, deepseek, exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant, xai
- Note: the `{DESCRIPTION}` must not start with an uppercase letter.
- *Note:* the `{DESCRIPTION}` must not start with an uppercase letter.
- Once you've written the title, please delete this checklist item; do not include it in the PR.
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
- **Description:** a description of the change. Include a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) if applicable to a relevant issue.
- **Issue:** the issue # it fixes, if applicable (e.g. Fixes #123)
- **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!
- [ ] **Add tests and docs**: If you're adding a new integration, you must include:
1. A test for the integration, preferably unit tests that do not rely on network access,
@@ -26,7 +27,7 @@ Thank you for contributing to LangChain! Follow these steps to mark your pull re
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to `pyproject.toml` files (even optional ones) unless they are **required** for unit tests.
- Most PRs should not touch more than one package.
- Please do not add dependencies to `pyproject.toml` files (even optional ones) unless they are **required** for unit tests.
- Changes should be backwards compatible.
- Make sure optional dependencies are imported within a function.

View File

@@ -1,12 +1,22 @@
# TODO: https://docs.astral.sh/uv/guides/integration/github/#caching
name: uv-install
description: Set up Python and uv
description: Set up Python and uv with caching
inputs:
python-version:
description: Python version, supporting MAJOR.MINOR only
required: true
enable-cache:
description: Enable caching for uv dependencies
required: false
default: 'true'
cache-suffix:
description: Custom cache key suffix for cache invalidation
required: false
default: ''
working-directory:
description: Working directory for cache glob scoping
required: false
default: '**'
env:
UV_VERSION: "0.5.25"
@@ -15,7 +25,13 @@ runs:
using: composite
steps:
- name: Install uv and set the python version
uses: astral-sh/setup-uv@v5
uses: astral-sh/setup-uv@v6
with:
version: ${{ env.UV_VERSION }}
python-version: ${{ inputs.python-version }}
enable-cache: ${{ inputs.enable-cache }}
cache-dependency-glob: |
${{ inputs.working-directory }}/pyproject.toml
${{ inputs.working-directory }}/uv.lock
${{ inputs.working-directory }}/requirements*.txt
cache-suffix: ${{ inputs.cache-suffix }}

View File

@@ -121,23 +121,27 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
if job == "codspeed":
py_versions = ["3.12"] # 3.13 is not yet supported
elif dir_ == "libs/core":
py_versions = ["3.9", "3.10", "3.11", "3.12", "3.13"]
py_versions = ["3.10", "3.11", "3.12", "3.13"]
# custom logic for specific directories
elif dir_ == "libs/partners/milvus":
# milvus doesn't allow 3.12 because they declare deps in funny way
py_versions = ["3.9", "3.11"]
py_versions = ["3.10", "3.11"]
elif dir_ in PY_312_MAX_PACKAGES:
py_versions = ["3.9", "3.12"]
py_versions = ["3.10", "3.12"]
elif dir_ == "libs/langchain" and job == "extended-tests":
py_versions = ["3.9", "3.13"]
py_versions = ["3.10", "3.13"]
elif dir_ == "libs/langchain_v1":
py_versions = ["3.10", "3.13"]
elif dir_ in {"libs/cli"}:
py_versions = ["3.10", "3.13"]
elif dir_ == ".":
# unable to install with 3.13 because tokenizers doesn't support 3.13 yet
py_versions = ["3.9", "3.12"]
py_versions = ["3.10", "3.12"]
else:
py_versions = ["3.9", "3.13"]
py_versions = ["3.10", "3.13"]
return [{"working-directory": dir_, "python-version": py_v} for py_v in py_versions]

View File

@@ -27,12 +27,14 @@ jobs:
timeout-minutes: 20
name: 'Python ${{ inputs.python-version }}'
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: compile-integration-tests-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Integration Dependencies'
shell: bash

View File

@@ -28,12 +28,14 @@ jobs:
runs-on: ubuntu-latest
name: 'Python ${{ inputs.python-version }}'
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: integration-tests-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Integration Dependencies'
shell: bash

View File

@@ -33,12 +33,14 @@ jobs:
timeout-minutes: 20
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: lint-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Lint & Typing Dependencies'
# Also installs dev/lint/test/typing dependencies, to ensure we have

View File

@@ -43,7 +43,7 @@ jobs:
version: ${{ steps.check-version.outputs.version }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
@@ -92,7 +92,7 @@ jobs:
outputs:
release-body: ${{ steps.generate-release-body.outputs.release-body }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain
path: langchain
@@ -183,13 +183,36 @@ jobs:
needs:
- build
- release-notes
uses:
./.github/workflows/_test_release.yml
permissions: write-all
with:
working-directory: ${{ inputs.working-directory }}
dangerous-nonmaster-release: ${{ inputs.dangerous-nonmaster-release }}
secrets: inherit
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
#
# Trusted publishing has to also be configured on PyPI for each package:
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
id-token: write
steps:
- uses: actions/checkout@v5
- uses: actions/download-artifact@v5
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Publish to test PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
repository-url: https://test.pypi.org/legacy/
# We overwrite any existing distributions with the same name and version.
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false
pre-release-checks:
needs:
@@ -199,7 +222,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
# We explicitly *don't* set up caching here. This ensures our tests are
# maximally sensitive to catching breakage.
@@ -289,7 +312,8 @@ jobs:
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS --editable .
VIRTUAL_ENV=.venv uv pip install --force-reinstall --editable .
VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
@@ -347,7 +371,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
partner: [openai, anthropic]
partner: [openai]
fail-fast: false # Continue testing other partners if one fails
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
@@ -362,7 +386,7 @@ jobs:
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
# We implement this conditional as Github Actions does not have good support
# for conditionally needing steps. https://github.com/actions/runner/issues/491
@@ -393,7 +417,7 @@ jobs:
git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
| awk '{print $2}' \
| sed 's|refs/tags/||' \
| grep -Ev '==[^=]*(\.?dev[0-9]*|\.?rc[0-9]*)$' \
| grep -E '[0-9]+\.[0-9]+\.[0-9]+$' \
| sort -Vr \
| head -n 1
)"
@@ -440,7 +464,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
@@ -479,7 +503,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"

View File

@@ -32,13 +32,16 @@ jobs:
name: 'Python ${{ inputs.python-version }}'
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
id: setup-python
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test --dev

View File

@@ -21,12 +21,14 @@ jobs:
name: '🔍 Check Doc Imports (Python ${{ inputs.python-version }})'
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-doc-imports-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Test Dependencies'
shell: bash

View File

@@ -34,12 +34,14 @@ jobs:
name: 'Pydantic ~=${{ inputs.pydantic-version }}'
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-pydantic-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Test Dependencies'
shell: bash

View File

@@ -1,106 +0,0 @@
name: '🧪 Test Release Package'
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
env:
PYTHON_VERSION: "3.11"
UV_FROZEN: "true"
jobs:
build:
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
runs-on: ubuntu-latest
outputs:
pkg-name: ${{ steps.check-version.outputs.pkg-name }}
version: ${{ steps.check-version.outputs.version }}
steps:
- uses: actions/checkout@v4
- name: '🐍 Set up Python + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
# We want to keep this build stage *separate* from the release stage,
# so that there's no sharing of permissions between them.
# The release stage has trusted publishing and GitHub repo contents write access,
# and we want to keep the scope of that access limited just to the release job.
# Otherwise, a malicious `build` step (e.g. via a compromised dependency)
# could get access to our GitHub or PyPI credentials.
#
# Per the trusted publishing GitHub Action:
# > It is strongly advised to separate jobs for building [...]
# > from the publish job.
# https://github.com/pypa/gh-action-pypi-publish#non-goals
- name: '📦 Build Project for Distribution'
run: uv build
working-directory: ${{ inputs.working-directory }}
- name: '⬆️ Upload Build Artifacts'
uses: actions/upload-artifact@v4
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
- name: '🔍 Extract Version Information'
id: check-version
shell: python
working-directory: ${{ inputs.working-directory }}
run: |
import os
import tomllib
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
pkg_name = data["project"]["name"]
version = data["project"]["version"]
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
f.write(f"pkg-name={pkg_name}\n")
f.write(f"version={version}\n")
publish:
needs:
- build
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
#
# Trusted publishing has to also be configured on PyPI for each package:
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
id-token: write
steps:
- uses: actions/checkout@v4
- uses: actions/download-artifact@v5
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
- name: Publish to test PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
repository-url: https://test.pypi.org/legacy/
# We overwrite any existing distributions with the same name and version.
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false

View File

@@ -17,10 +17,10 @@ jobs:
permissions:
contents: read
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
path: langchain
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain-api-docs-html
path: langchain-api-docs-html
@@ -72,7 +72,7 @@ jobs:
done
- name: '🐍 Setup Python ${{ env.PYTHON_VERSION }}'
uses: actions/setup-python@v5
uses: actions/setup-python@v6
id: setup-python
with:
python-version: ${{ env.PYTHON_VERSION }}

View File

@@ -13,7 +13,7 @@ jobs:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: '🟢 Setup Node.js 18.x'
uses: actions/setup-node@v4
with:

View File

@@ -16,19 +16,34 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: '✅ Verify pyproject.toml & version.py Match'
run: |
PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
# Check core versions
CORE_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
CORE_VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
# Compare the two versions
if [ "$PYPROJECT_VERSION" != "$VERSION_PY_VERSION" ]; then
# Compare core versions
if [ "$CORE_PYPROJECT_VERSION" != "$CORE_VERSION_PY_VERSION" ]; then
echo "langchain-core versions in pyproject.toml and version.py do not match!"
echo "pyproject.toml version: $PYPROJECT_VERSION"
echo "version.py version: $VERSION_PY_VERSION"
echo "pyproject.toml version: $CORE_PYPROJECT_VERSION"
echo "version.py version: $CORE_VERSION_PY_VERSION"
exit 1
else
echo "Versions match: $PYPROJECT_VERSION"
echo "Core versions match: $CORE_PYPROJECT_VERSION"
fi
# Check langchain_v1 versions
LANGCHAIN_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/langchain_v1/pyproject.toml)
LANGCHAIN_INIT_PY_VERSION=$(grep -Po '(?<=^__version__ = ")[^"]*' libs/langchain_v1/langchain/__init__.py)
# Compare langchain_v1 versions
if [ "$LANGCHAIN_PYPROJECT_VERSION" != "$LANGCHAIN_INIT_PY_VERSION" ]; then
echo "langchain_v1 versions in pyproject.toml and __init__.py do not match!"
echo "pyproject.toml version: $LANGCHAIN_PYPROJECT_VERSION"
echo "version.py version: $LANGCHAIN_INIT_PY_VERSION"
exit 1
else
echo "Langchain v1 versions match: $LANGCHAIN_PYPROJECT_VERSION"
fi

View File

@@ -33,9 +33,9 @@ jobs:
if: ${{ !contains(github.event.pull_request.labels.*.name, 'ci-ignore') }}
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: '🐍 Setup Python 3.11'
uses: actions/setup-python@v5
uses: actions/setup-python@v6
with:
python-version: '3.11'
- name: '📂 Get Changed Files'
@@ -54,6 +54,7 @@ jobs:
dependencies: ${{ steps.set-matrix.outputs.dependencies }}
test-doc-imports: ${{ steps.set-matrix.outputs.test-doc-imports }}
test-pydantic: ${{ steps.set-matrix.outputs.test-pydantic }}
codspeed: ${{ steps.set-matrix.outputs.codspeed }}
# Run linting only on packages that have changed files
lint:
needs: [ build ]
@@ -138,12 +139,14 @@ jobs:
run:
working-directory: ${{ matrix.job-configs.working-directory }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ matrix.job-configs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ matrix.job-configs.python-version }}
cache-suffix: extended-tests-${{ matrix.job-configs.working-directory }}
working-directory: ${{ matrix.job-configs.working-directory }}
- name: '📦 Install Dependencies & Run Extended Tests'
shell: bash
@@ -166,10 +169,72 @@ jobs:
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
# Run codspeed benchmarks only on packages that have changed files
codspeed:
name: '⚡ CodSpeed Benchmarks'
needs: [ build ]
if: ${{ needs.build.outputs.codspeed != '[]' && !contains(github.event.pull_request.labels.*.name, 'codspeed-ignore') }}
runs-on: ubuntu-latest
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.codspeed) }}
fail-fast: false
steps:
- uses: actions/checkout@v5
# We have to use 3.12 as 3.13 is not yet supported
- name: '📦 Install UV Package Manager'
uses: astral-sh/setup-uv@v6
with:
python-version: "3.12"
- uses: actions/setup-python@v6
with:
python-version: "3.12"
- name: '📦 Install Test Dependencies'
run: uv sync --group test
working-directory: ${{ matrix.job-configs.working-directory }}
- name: '⚡ Run Benchmarks: ${{ matrix.job-configs.working-directory }}'
uses: CodSpeedHQ/action@v4
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd ${{ matrix.job-configs.working-directory }}
if [ "${{ matrix.job-configs.working-directory }}" = "libs/core" ]; then
uv run --no-sync pytest ./tests/benchmarks --codspeed
else
uv run --no-sync pytest ./tests/ --codspeed
fi
mode: ${{ matrix.job-configs.working-directory == 'libs/core' && 'walltime' || 'instrumentation' }}
# Final status check - ensures all required jobs passed before allowing merge
ci_success:
name: '✅ CI Success'
needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic]
needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic, codspeed]
if: |
always()
runs-on: ubuntu-latest

View File

@@ -22,8 +22,8 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: actions/checkout@v5
- uses: actions/setup-python@v6
with:
python-version: '3.10'
- id: files

View File

@@ -1,66 +0,0 @@
name: '⚡ CodSpeed'
on:
push:
branches:
- master
pull_request:
workflow_dispatch:
permissions:
contents: read
env:
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: foo
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: foo
DEEPSEEK_API_KEY: foo
FIREWORKS_API_KEY: foo
jobs:
codspeed:
name: 'Benchmark'
runs-on: ubuntu-latest
if: ${{ !contains(github.event.pull_request.labels.*.name, 'codspeed-ignore') }}
strategy:
matrix:
include:
- working-directory: libs/core
mode: walltime
- working-directory: libs/partners/openai
- working-directory: libs/partners/anthropic
- working-directory: libs/partners/deepseek
- working-directory: libs/partners/fireworks
- working-directory: libs/partners/xai
- working-directory: libs/partners/mistralai
- working-directory: libs/partners/groq
fail-fast: false
steps:
- uses: actions/checkout@v4
# We have to use 3.12 as 3.13 is not yet supported
- name: '📦 Install UV Package Manager'
uses: astral-sh/setup-uv@v6
with:
python-version: "3.12"
- uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: '📦 Install Test Dependencies'
run: uv sync --group test
working-directory: ${{ matrix.working-directory }}
- name: '⚡ Run Benchmarks: ${{ matrix.working-directory }}'
uses: CodSpeedHQ/action@v3
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd ${{ matrix.working-directory }}
if [ "${{ matrix.working-directory }}" = "libs/core" ]; then
uv run --no-sync pytest ./tests/benchmarks --codspeed
else
uv run --no-sync pytest ./tests/ --codspeed
fi
mode: ${{ matrix.mode || 'instrumentation' }}

View File

@@ -19,7 +19,7 @@ jobs:
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- uses: actions/checkout@v4
- uses: actions/checkout@v5
# Ref: https://github.com/actions/runner/issues/2033
- name: '🔧 Fix Git Safe Directory in Container'
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig

View File

@@ -62,7 +62,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: '✅ Validate Conventional Commits Format'
uses: amannn/action-semantic-pull-request@v5
uses: amannn/action-semantic-pull-request@v6
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:

View File

@@ -26,21 +26,23 @@ jobs:
if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
name: '📑 Test Documentation Notebooks'
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: '🐍 Set up Python + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ github.event.inputs.python_version || '3.11' }}
cache-suffix: run-notebooks-${{ github.event.inputs.working-directory || 'all' }}
working-directory: ${{ github.event.inputs.working-directory || '**' }}
- name: '🔐 Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
uses: google-github-actions/auth@v3
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: '🔐 Configure AWS Credentials'
uses: aws-actions/configure-aws-credentials@v4
uses: aws-actions/configure-aws-credentials@v5
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

View File

@@ -1,5 +1,5 @@
name: '⏰ Scheduled Integration Tests'
run-name: "Run Integration Tests - ${{ inputs.working-directory-force || 'all libs' }} (Python ${{ inputs.python-version-force || '3.9, 3.11' }})"
run-name: "Run Integration Tests - ${{ inputs.working-directory-force || 'all libs' }} (Python ${{ inputs.python-version-force || '3.10, 3.13' }})"
on:
workflow_dispatch: # Allows maintainers to trigger the workflow manually in GitHub UI
@@ -9,7 +9,7 @@ on:
description: "From which folder this pipeline executes - defaults to all in matrix - example value: libs/partners/anthropic"
python-version-force:
type: string
description: "Python version to use - defaults to 3.9 and 3.11 in matrix - example value: 3.9"
description: "Python version to use - defaults to 3.10 and 3.13 in matrix - example value: 3.11"
schedule:
- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
@@ -20,7 +20,7 @@ env:
POETRY_VERSION: "1.8.4"
UV_FROZEN: "true"
DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/xai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
POETRY_LIBS: ("libs/partners/google-vertexai" "libs/partners/google-genai" "libs/partners/aws")
POETRY_LIBS: ("libs/partners/aws")
jobs:
# Generate dynamic test matrix based on input parameters or defaults
@@ -40,9 +40,9 @@ jobs:
PYTHON_VERSION_FORCE: ${{ github.event.inputs.python-version-force || '' }}
run: |
# echo "matrix=..." where matrix is a json formatted str with keys python-version and working-directory
# python-version should default to 3.9 and 3.11, but is overridden to [PYTHON_VERSION_FORCE] if set
# python-version should default to 3.10 and 3.13, but is overridden to [PYTHON_VERSION_FORCE] if set
# working-directory should default to DEFAULT_LIBS, but is overridden to [WORKING_DIRECTORY_FORCE] if set
python_version='["3.9", "3.11"]'
python_version='["3.10", "3.13"]'
working_directory="$DEFAULT_LIBS"
if [ -n "$PYTHON_VERSION_FORCE" ]; then
python_version="[\"$PYTHON_VERSION_FORCE\"]"
@@ -68,14 +68,14 @@ jobs:
working-directory: ${{ fromJSON(needs.compute-matrix.outputs.matrix).working-directory }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
path: langchain
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain-google
path: langchain-google
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain-aws
path: langchain-aws
@@ -106,12 +106,12 @@ jobs:
- name: '🔐 Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
uses: google-github-actions/auth@v3
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: '🔐 Configure AWS Credentials'
uses: aws-actions/configure-aws-credentials@v4
uses: aws-actions/configure-aws-credentials@v5
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

View File

@@ -8,16 +8,12 @@
<br>
</div>
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-core?style=flat-square)](https://pypistats.org/packages/langchain-core)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain)](https://pypistats.org/packages/langchain-core)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![CodSpeed Badge](https://img.shields.io/endpoint?url=https://codspeed.io/badge.json)](https://codspeed.io/langchain-ai/langchain)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
@@ -45,7 +41,7 @@ interface for models, embeddings, vector stores, and more.
Use LangChain for:
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and
external / internal systems, drawing from LangChain's vast library of integrations with
external/internal systems, drawing from LangChain's vast library of integrations with
model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team
experiments to find the best choice for your application's needs. As the industry
@@ -60,7 +56,7 @@ applications.
To improve your LLM application development, pair LangChain with:
- [LangSmith](http://www.langchain.com/langsmith) - Helpful for agent evals and
- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and
observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain
visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can
@@ -68,9 +64,8 @@ reliably handle complex tasks with LangGraph, our low-level agent orchestration
framework. LangGraph offers customizable architecture, long-term memory, and
human-in-the-loop workflows — and is trusted in production by companies like LinkedIn,
Uber, Klarna, and GitLab.
- [LangGraph Platform](https://langchain-ai.github.io/langgraph/concepts/langgraph_platform/) - Deploy
and scale agents effortlessly with a purpose-built deployment platform for long
running, stateful workflows. Discover, reuse, configure, and share agents across
- [LangGraph Platform](https://docs.langchain.com/langgraph-platform) - Deploy
and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across
teams — and iterate quickly with visual prototyping in
[LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
@@ -85,3 +80,4 @@ concepts behind the LangChain framework.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on
navigating base packages and integrations for LangChain.
- [Chat LangChain](https://chat.langchain.com/): Ask questions & chat with our documentation.

View File

@@ -4,9 +4,9 @@ LangChain has a large ecosystem of integrations with various external resources
## Best practices
When building such applications developers should remember to follow good security practices:
When building such applications, developers should remember to follow good security practices:
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc. as appropriate for your application.
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc., as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
@@ -67,8 +67,7 @@ All out of scope targets defined by huntr as well as:
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
for the security of their tools.
* Code documented with security notices. This will be decided on a case by
case basis, but likely will not be eligible for a bounty as the code is already
* Code documented with security notices. This will be decided on a case-by-case basis, but likely will not be eligible for a bounty as the code is already
documented with guidelines for developers that should be followed for making their
application secure.
* Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).

View File

@@ -64,3 +64,4 @@ Notebook | Description
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
[contextual_rag.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/contextual_rag.ipynb) | Performs contextual retrieval-augmented generation (RAG) prepending chunk-specific explanatory context to each chunk before embedding.
[rag-agents-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/local_rag_agents_intel_cpu.ipynb) | Build a RAG agent locally with open source models that routes questions through one of two paths to find answers. The agent generates answers based on documents retrieved from either the vector database or retrieved from web search. If the vector database lacks relevant information, the agent opts for web search. Open-source models for LLM and embeddings are used locally on an Intel Xeon CPU to execute this pipeline.
[rag_mlflow_tracking_evaluation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_mlflow_tracking_evaluation.ipynb) | Guide on how to create a RAG pipeline and track + evaluate it with MLflow.

View File

@@ -79,6 +79,17 @@
"tool_executor = ToolExecutor(tools)"
]
},
{
"cell_type": "markdown",
"id": "168152fc",
"metadata": {},
"source": [
"📘 **Note on `SystemMessage` usage with LangGraph-based agents**\n",
"\n",
"When constructing the `messages` list for an agent, you *must* manually include any `SystemMessage`s.\n",
"Unlike some agent executors in LangChain that set a default, LangGraph requires explicit inclusion."
]
},
{
"cell_type": "markdown",
"id": "fe6e8f78-1ef7-42ad-b2bf-835ed5850553",

View File

@@ -47,10 +47,12 @@
"source": [
"### Prerequisites\n",
"\n",
"Please install the Oracle Database [python-oracledb driver](https://pypi.org/project/oracledb/) to use LangChain with Oracle AI Vector Search:\n",
"You'll need to install `langchain-oracledb` with `python -m pip install -U langchain-oracledb` to use this integration.\n",
"\n",
"The `python-oracledb` driver is installed automatically as a dependency of langchain-oracledb.\n",
"\n",
"```\n",
"$ python -m pip install --upgrade oracledb\n",
"$ python -m pip install -U langchain-oracledb\n",
"```"
]
},
@@ -217,7 +219,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
"\n",
"# please update with your related information\n",
"# make sure that you have onnx file in the system\n",
@@ -296,7 +298,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_oracledb.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_core.documents import Document\n",
"\n",
"# loading from Oracle Database table\n",
@@ -354,7 +356,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_oracledb.utilities.oracleai import OracleSummary\n",
"from langchain_core.documents import Document\n",
"\n",
"# using 'database' provider\n",
@@ -395,7 +397,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_oracledb.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_core.documents import Document\n",
"\n",
"# split by default parameters\n",
@@ -452,7 +454,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_core.documents import Document\n",
"\n",
"# using ONNX model loaded to Oracle Database\n",
@@ -498,14 +500,14 @@
"import sys\n",
"\n",
"import oracledb\n",
"from langchain_community.document_loaders.oracleai import (\n",
"from langchain_oracledb.document_loaders.oracleai import (\n",
" OracleDocLoader,\n",
" OracleTextSplitter,\n",
")\n",
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_community.vectorstores import oraclevs\n",
"from langchain_community.vectorstores.oraclevs import OracleVS\n",
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_oracledb.utilities.oracleai import OracleSummary\n",
"from langchain_oracledb.vectorstores import oraclevs\n",
"from langchain_oracledb.vectorstores.oraclevs import OracleVS\n",
"from langchain_community.vectorstores.utils import DistanceStrategy\n",
"from langchain_core.documents import Document"
]
@@ -677,19 +679,19 @@
"outputs": [],
"source": [
"query = \"What is Oracle AI Vector Store?\"\n",
"filter = {\"document_id\": [\"1\"]}\n",
"db_filter = {\"document_id\": \"1\"}\n",
"\n",
"# Similarity search without a filter\n",
"print(vectorstore.similarity_search(query, 1))\n",
"\n",
"# Similarity search with a filter\n",
"print(vectorstore.similarity_search(query, 1, filter=filter))\n",
"print(vectorstore.similarity_search(query, 1, filter=db_filter))\n",
"\n",
"# Similarity search with relevance score\n",
"print(vectorstore.similarity_search_with_score(query, 1))\n",
"\n",
"# Similarity search with relevance score with filter\n",
"print(vectorstore.similarity_search_with_score(query, 1, filter=filter))\n",
"print(vectorstore.similarity_search_with_score(query, 1, filter=db_filter))\n",
"\n",
"# Max marginal relevance search\n",
"print(vectorstore.max_marginal_relevance_search(query, 1, fetch_k=20, lambda_mult=0.5))\n",
@@ -697,7 +699,7 @@
"# Max marginal relevance search with filter\n",
"print(\n",
" vectorstore.max_marginal_relevance_search(\n",
" query, 1, fetch_k=20, lambda_mult=0.5, filter=filter\n",
" query, 1, fetch_k=20, lambda_mult=0.5, filter=db_filter\n",
" )\n",
")"
]

View File

@@ -0,0 +1,455 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3716230e",
"metadata": {},
"source": [
"# RAG Pipeline with MLflow Tracking, Tracing & Evaluation\n",
"\n",
"This notebook demonstrates how to build a complete Retrieval-Augmented Generation (RAG) pipeline using LangChain and integrate it with MLflow for experiment tracking, tracing, and evaluation.\n",
"\n",
"\n",
"- **RAG Pipeline Construction**: Build a complete RAG system using LangChain components\n",
"- **MLflow Integration**: Track experiments, parameters, and artifacts\n",
"- **Tracing**: Monitor inputs, outputs, retrieved documents, scores, prompts, and timings\n",
"- **Evaluation**: Use MLflow's built-in scorers to assess RAG performance\n",
"- **Best Practices**: Implement proper configuration management and reproducible experiments\n",
"\n",
"We'll build a RAG system that can answer questions about academic papers by:\n",
"1. Loading and chunking documents from ArXiv\n",
"2. Creating embeddings and a vector store\n",
"3. Setting up a retrieval-augmented generation chain\n",
"4. Tracking all experiments with MLflow\n",
"5. Evaluating the system's performance\n",
"\n",
"![System Diagram](https://miro.medium.com/v2/resize:fit:720/format:webp/1*eiw86PP4hrBBxhjTjP0JUQ.png)"
]
},
{
"cell_type": "markdown",
"id": "2f7561c4",
"metadata": {},
"source": [
"#### Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0814ebe9",
"metadata": {},
"outputs": [],
"source": [
"%pip install -U langchain mlflow langchain-community arxiv pymupdf langchain-text-splitters langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "747399b6",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import mlflow\n",
"from mlflow.genai.scorers import RelevanceToQuery, Correctness, ExpectationsGuidelines\n",
"from langchain_community.document_loaders import ArxivLoader\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_core.output_parsers import StrOutputParser"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4141ee05",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = \"<YOUR OPENAI API KEY>\"\n",
"\n",
"mlflow.set_experiment(\"LangChain-RAG-MLflow\")\n",
"mlflow.langchain.autolog()"
]
},
{
"cell_type": "markdown",
"id": "dd5eb41b",
"metadata": {},
"source": [
"Define all hyperparameters and configuration in a centralized dictionary. This makes it easy to:\n",
"- Track different experiment configurations\n",
"- Reproduce results\n",
"- Perform hyperparameter tuning\n",
"\n",
"**Key Parameters**:\n",
"- `chunk_size`: Size of text chunks for document splitting\n",
"- `chunk_overlap`: Overlap between consecutive chunks\n",
"- `retriever_k`: Number of documents to retrieve\n",
"- `embeddings_model`: OpenAI embedding model\n",
"- `llm`: Language model for generation\n",
"- `temperature`: Sampling temperature for the LLM"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6dcdc5d8",
"metadata": {},
"outputs": [],
"source": [
"CONFIG = {\n",
" \"chunk_size\": 400,\n",
" \"chunk_overlap\": 80,\n",
" \"retriever_k\": 3,\n",
" \"embeddings_model\": \"text-embedding-3-small\",\n",
" \"system_prompt\": \"You are a helpful assistant. Use the following context to answer the question. Use three sentences maximum and keep the answer concise.\",\n",
" \"llm\": \"gpt-5-nano\",\n",
" \"temperature\": 0,\n",
"}"
]
},
{
"cell_type": "markdown",
"id": "8a2985f1",
"metadata": {},
"source": [
"#### ArXiv Dcoument Loading and Processing"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1f32aa36",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'Published': '2023-08-02', 'Title': 'Attention Is All You Need', 'Authors': 'Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin', 'Summary': 'The dominant sequence transduction models are based on complex recurrent or\\nconvolutional neural networks in an encoder-decoder configuration. The best\\nperforming models also connect the encoder and decoder through an attention\\nmechanism. We propose a new simple network architecture, the Transformer, based\\nsolely on attention mechanisms, dispensing with recurrence and convolutions\\nentirely. Experiments on two machine translation tasks show these models to be\\nsuperior in quality while being more parallelizable and requiring significantly\\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014\\nEnglish-to-German translation task, improving over the existing best results,\\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\\ntranslation task, our model establishes a new single-model state-of-the-art\\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction\\nof the training costs of the best models from the literature. We show that the\\nTransformer generalizes well to other tasks by applying it successfully to\\nEnglish constituency parsing both with large and limited training data.'}\n"
]
}
],
"source": [
"# Load documents from ArXiv\n",
"loader = ArxivLoader(\n",
" query=\"1706.03762\",\n",
" load_max_docs=1,\n",
")\n",
"docs = loader.load()\n",
"print(docs[0].metadata)\n",
"\n",
"# Split documents into chunks\n",
"splitter = RecursiveCharacterTextSplitter(\n",
" chunk_size=CONFIG[\"chunk_size\"],\n",
" chunk_overlap=CONFIG[\"chunk_overlap\"],\n",
")\n",
"chunks = splitter.split_documents(docs)\n",
"\n",
"\n",
"# Join chunks into a single string\n",
"def join_chunks(chunks):\n",
" return \"\\n\\n\".join([chunk.page_content for chunk in chunks])"
]
},
{
"cell_type": "markdown",
"id": "6e194ab4",
"metadata": {},
"source": [
"#### Vector Store and Retriever Setup"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "26dfbeaa",
"metadata": {},
"outputs": [],
"source": [
"# Create embeddings\n",
"embeddings = OpenAIEmbeddings(model=CONFIG[\"embeddings_model\"])\n",
"\n",
"# Create vector store from documents\n",
"vectorstore = InMemoryVectorStore.from_documents(\n",
" chunks,\n",
" embedding=embeddings,\n",
")\n",
"\n",
"# Create retriever\n",
"retriever = vectorstore.as_retriever(search_kwargs={\"k\": CONFIG[\"retriever_k\"]})"
]
},
{
"cell_type": "markdown",
"id": "bc1f181b",
"metadata": {},
"source": [
"#### RAG Chain Construction using [LCEL](https://python.langchain.com/docs/concepts/lcel/)\n",
"\n",
"Flow:\n",
"1. Query → Retriever (finds relevant chunks)\n",
"2. Chunks → join_chunks (creates context)\n",
"3. Context + Query → Prompt Template\n",
"4. Prompt → Language Model → Response\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "6a810dc3",
"metadata": {},
"outputs": [],
"source": [
"# Initialize the language model\n",
"llm = ChatOpenAI(model=CONFIG[\"llm\"], temperature=CONFIG[\"temperature\"])\n",
"\n",
"# Create the prompt template\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", CONFIG[\"system_prompt\"] + \"\\n\\nContext:\\n{context}\\n\\n\"),\n",
" (\"human\", \"\\n{question}\\n\"),\n",
" ]\n",
")\n",
"\n",
"# Construct the RAG chain\n",
"rag_chain = (\n",
" {\n",
" \"context\": retriever | RunnableLambda(join_chunks),\n",
" \"question\": RunnablePassthrough(),\n",
" }\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c04bd019",
"metadata": {},
"source": [
"#### Prediction Function with MLflow Tracing\n",
"\n",
"Create a prediction function decorated with `@mlflow.trace` to automatically log:\n",
"- Input queries\n",
"- Retrieved documents\n",
"- Generated responses\n",
"- Execution time\n",
"- Chain intermediate steps"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "7b45fc04",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Question: What is the main idea of the paper?\n",
"Response: The main idea is to replace recurrent/convolutional sequence models with a pure attention-based architecture called the Transformer. It uses self-attention to model dependencies between all positions in the input and output, enabling full parallelization and better handling of long-range relations. This approach achieves strong results on translation and can extend to other modalities.\n"
]
}
],
"source": [
"@mlflow.trace\n",
"def predict_fn(question: str) -> str:\n",
" return rag_chain.invoke(question)\n",
"\n",
"\n",
"# Test the prediction function\n",
"sample_question = \"What is the main idea of the paper?\"\n",
"response = predict_fn(sample_question)\n",
"print(f\"Question: {sample_question}\")\n",
"print(f\"Response: {response}\")"
]
},
{
"cell_type": "markdown",
"id": "421469de",
"metadata": {},
"source": [
"#### Evaluation Dataset and Scoring\n",
"\n",
"Define an evaluation dataset and run systematic evaluation using [MLflow's built-in scorers](https://mlflow.org/docs/latest/genai/eval-monitor/scorers/llm-judge/predefined/#available-scorers):\n",
"\n",
"<u>Evaluation Components:</u>\n",
"- **Dataset**: Questions with expected concepts and facts\n",
"- **Scorers**: \n",
" - `RelevanceToQuery`: Measures how relevant the response is to the question\n",
" - `Correctness`: Evaluates factual accuracy of the response\n",
" - `ExpectationsGuidelines`: Checks that output matches expectation guidelines\n",
"\n",
"<u>Best Practices:</u>\n",
"- Create diverse test cases covering different query types\n",
"- Include expected concepts to guide evaluation\n",
"- Use multiple scoring metrics for comprehensive assessment"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5c1dc4f2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2025/08/23 20:14:39 INFO mlflow.models.evaluation.utils.trace: Auto tracing is temporarily enabled during the model evaluation for computing some metrics and debugging. To disable tracing, call `mlflow.autolog(disable=True)`.\n",
"2025/08/23 20:14:39 INFO mlflow.genai.utils.data_validation: Testing model prediction with the first sample in the dataset.\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "2b6c6687efa24796b39c7951d589d481",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Evaluating: 0%| | 0/3 [Elapsed: 00:00, Remaining: ?] "
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"✨ Evaluation completed.\n",
"\n",
"Metrics and evaluation results are logged to the MLflow run:\n",
" Run name: \u001b[94mbaseline_eval\u001b[0m\n",
" Run ID: \u001b[94ma2218d9f24c9415f8040d3b77af103a9\u001b[0m\n",
"\n",
"To view the detailed evaluation results with sample-wise scores,\n",
"open the \u001b[93m\u001b[1mTraces\u001b[0m tab in the Run page in the MLflow UI.\n",
"\n"
]
}
],
"source": [
"# Define evaluation dataset\n",
"eval_dataset = [\n",
" {\n",
" \"inputs\": {\"question\": \"What is the main idea of the paper?\"},\n",
" \"expectations\": {\n",
" \"key_concepts\": [\"attention mechanism\", \"transformer\", \"neural network\"],\n",
" \"expected_facts\": [\n",
" \"attention mechanism is a key component of the transformer model\"\n",
" ],\n",
" \"guidelines\": [\"The response must be factual and concise\"],\n",
" },\n",
" },\n",
" {\n",
" \"inputs\": {\n",
" \"question\": \"What's the difference between a transformer and a recurrent neural network?\"\n",
" },\n",
" \"expectations\": {\n",
" \"key_concepts\": [\"sequential\", \"attention mechanism\", \"hidden state\"],\n",
" \"expected_facts\": [\n",
" \"transformer processes data in parallel while RNN processes data sequentially\"\n",
" ],\n",
" \"guidelines\": [\n",
" \"The response must be factual and focus on the difference between the two models\"\n",
" ],\n",
" },\n",
" },\n",
" {\n",
" \"inputs\": {\"question\": \"What does the attention mechanism do?\"},\n",
" \"expectations\": {\n",
" \"key_concepts\": [\"query\", \"key\", \"value\", \"relationship\", \"similarity\"],\n",
" \"expected_facts\": [\n",
" \"attention allows the model to weigh the importance of different parts of the input sequence when processing it\"\n",
" ],\n",
" \"guidelines\": [\n",
" \"The response must be factual and explain the concept of attention\"\n",
" ],\n",
" },\n",
" },\n",
"]\n",
"\n",
"# Run evaluation with MLflow\n",
"with mlflow.start_run(run_name=\"baseline_eval\") as run:\n",
" # Log configuration parameters\n",
" mlflow.log_params(CONFIG)\n",
"\n",
" # Run evaluation\n",
" results = mlflow.genai.evaluate(\n",
" data=eval_dataset,\n",
" predict_fn=predict_fn,\n",
" scorers=[RelevanceToQuery(), Correctness(), ExpectationsGuidelines()],\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "52b137c7",
"metadata": {},
"source": [
"#### Launch MLflow UI to check out the results\n",
"\n",
"<u>What you'll see in the UI:</u>\n",
"- **Experiments**: Compare different RAG configurations\n",
"- **Runs**: Individual experiment runs with metrics and parameters\n",
"- **Traces**: Detailed execution traces showing retrieval and generation steps\n",
"- **Evaluation Results**: Scoring metrics and detailed comparisons\n",
"- **Artifacts**: Saved models, datasets, and other files\n",
"\n",
"Navigate to `http://localhost:5000` after running the command below."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "817c3799",
"metadata": {},
"outputs": [],
"source": [
"!mlflow ui"
]
},
{
"cell_type": "markdown",
"id": "c75861e3",
"metadata": {},
"source": [
"You should see something like this\n",
"\n",
"![MLflow UI image](https://miro.medium.com/v2/resize:fit:720/format:webp/1*Cx7MMy53pAP7150x_hvztA.png)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -97,7 +97,7 @@ def skip_private_members(app, what, name, obj, skip, options):
if hasattr(obj, "__doc__") and obj.__doc__ and ":private:" in obj.__doc__:
return True
if name == "__init__" and obj.__objclass__ is object:
# dont document default init
# don't document default init
return True
return None

View File

@@ -217,11 +217,7 @@ def _load_package_modules(
# Get the full namespace of the module
namespace = str(relative_module_name).replace(".py", "").replace("/", ".")
# Keep only the top level namespace
# (but make special exception for content_blocks and v1.messages)
if namespace == "messages.content_blocks" or namespace == "v1.messages":
top_namespace = namespace # Keep full namespace for content_blocks
else:
top_namespace = namespace.split(".")[0]
top_namespace = namespace.split(".")[0]
try:
# If submodule is present, we need to construct the paths in a slightly
@@ -472,7 +468,7 @@ def _build_rst_file(package_name: str = "langchain") -> None:
"""Create a rst file for building of documentation.
Args:
package_name: Can be either "langchain" or "core" or "experimental".
package_name: Can be either "langchain" or "core"
"""
package_dir = _package_dir(package_name)
package_members = _load_package_modules(package_dir)
@@ -491,7 +487,7 @@ def _package_namespace(package_name: str) -> str:
"""Returns the package name used.
Args:
package_name: Can be either "langchain" or "core" or "experimental".
package_name: Can be either "langchain" or "core"
Returns:
modified package_name: Can be either "langchain" or "langchain_{package_name}"
@@ -549,7 +545,13 @@ def _build_index(dirs: List[str]) -> None:
"ai21": "AI21",
"ibm": "IBM",
}
ordered = ["core", "langchain", "text-splitters", "community", "experimental"]
ordered = [
"core",
"langchain",
"text-splitters",
"community",
"standard-tests",
]
main_ = [dir_ for dir_ in ordered if dir_ in dirs]
integrations = sorted(dir_ for dir_ in dirs if dir_ not in main_)
doc = """# LangChain Python API Reference

File diff suppressed because one or more lines are too long

View File

@@ -31,7 +31,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[Vector stores](/docs/concepts/vectorstores)**: Storage of and efficient search over vectors and associated metadata.
- **[Retriever](/docs/concepts/retrievers)**: A component that returns relevant documents from a knowledge base in response to a query.
- **[Retrieval Augmented Generation (RAG)](/docs/concepts/rag)**: A technique that enhances language models by combining them with external knowledge bases.
- **[Agents](/docs/concepts/agents)**: Use a [language model](/docs/concepts/chat_models) to choose a sequence of actions to take. Agents can interact with external resources via [tool](/docs/concepts/tools).
- **[Agents](/docs/concepts/agents)**: Use a [language model](/docs/concepts/chat_models) to choose a sequence of actions to take. Agents can interact with external resources via [tools](/docs/concepts/tools).
- **[Prompt templates](/docs/concepts/prompt_templates)**: Component for factoring out the static parts of a model "prompt" (usually a sequence of messages). Useful for serializing, versioning, and reusing these static parts.
- **[Output parsers](/docs/concepts/output_parsers)**: Responsible for taking the output of a model and transforming it into a more suitable format for downstream tasks. Output parsers were primarily useful prior to the general availability of [tool calling](/docs/concepts/tool_calling) and [structured outputs](/docs/concepts/structured_outputs).
- **[Few-shot prompting](/docs/concepts/few_shot_prompting)**: A technique for improving model performance by providing a few examples of the task to perform in the prompt.
@@ -48,7 +48,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[AIMessage](/docs/concepts/messages#aimessage)**: Represents a complete response from an AI model.
- **[astream_events](/docs/concepts/chat_models#key-methods)**: Stream granular information from [LCEL](/docs/concepts/lcel) chains.
- **[BaseTool](/docs/concepts/tools/#tool-interface)**: The base class for all tools in LangChain.
- **[batch](/docs/concepts/runnables)**: Use to execute a runnable with batch inputs.
- **[batch](/docs/concepts/runnables)**: Used to execute a runnable with batch inputs.
- **[bind_tools](/docs/concepts/tool_calling/#tool-binding)**: Allows models to interact with tools.
- **[Caching](/docs/concepts/chat_models#caching)**: Storing results to avoid redundant calls to a chat model.
- **[Chat models](/docs/concepts/multimodality/#multimodality-in-chat-models)**: Chat models that handle multiple data modalities.

View File

@@ -147,7 +147,7 @@ An `AIMessage` has the following attributes. The attributes which are **standard
| `tool_calls` | Standardized | Tool calls associated with the message. See [tool calling](/docs/concepts/tool_calling) for details. |
| `invalid_tool_calls` | Standardized | Tool calls with parsing errors associated with the message. See [tool calling](/docs/concepts/tool_calling) for details. |
| `usage_metadata` | Standardized | Usage metadata for a message, such as [token counts](/docs/concepts/tokens). See [Usage Metadata API Reference](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.UsageMetadata.html). |
| `id` | Standardized | An optional unique identifier for the message, ideally provided by the provider/model that created the message. |
| `id` | Standardized | An optional unique identifier for the message, ideally provided by the provider/model that created the message. See [Message IDs](#message-ids) for details. |
| `response_metadata` | Raw | Response metadata, e.g., response headers, logprobs, token counts. |
#### content
@@ -243,3 +243,37 @@ At the moment, the output of the model will be in terms of LangChain messages, s
need OpenAI format for the output as well.
The [convert_to_openai_messages](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.convert_to_openai_messages.html) utility function can be used to convert from LangChain messages to OpenAI format.
## Message IDs
LangChain messages include an optional `id` field that serves as a unique identifier. Understanding when and how these IDs are assigned can be helpful for debugging, tracing, and working with message history.
### When Messages Get IDs
Messages receive IDs in the following scenarios:
**Automatically assigned by LangChain:**
- When generated through chat model invocation (`.invoke()`, `.stream()`, `.astream()`) with an active run manager/tracing context
- IDs follow the format:
- `run-$RUN_ID` (e.g., `run-ba48f958-6402-41a5-b461-5e250a4ebd36-0`)
- `run-$RUN_ID-$IDX` (e.g., `run-ba48f958-6402-41a5-b461-5e250a4ebd36-1`) when there are multiple generations from a single chat model invocation.
**Provider-assigned IDs (highest priority):**
- When the model provider assigns its own ID to the message
- These take precedence over LangChain-generated run IDs
- Format varies by provider
### When Messages Don't Get IDs
Messages will **not** receive IDs in these situations:
- **Manual message creation**: Messages created directly (e.g., `AIMessage(content="hello")`) without going through chat models
- **No run manager context**: When there's no active callback/tracing infrastructure
### ID Priority System
LangChain follows a clear precedence system for message IDs:
1. **Provider-assigned IDs** (highest priority): IDs from the model provider
2. **LangChain run IDs** (medium priority): IDs starting with `run-`
3. **Manual IDs** (lowest priority): IDs explicitly set by users
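
For illustration, here's a minimal sketch of this behavior (assuming only `langchain_core` is installed; the custom ID value is a stand-in):

```python
from langchain_core.messages import AIMessage

# Manually created messages do not receive an ID automatically
msg = AIMessage(content="hello")
print(msg.id)  # None

# A manual ID (lowest priority in the scheme above) can be set explicitly
msg = AIMessage(content="hello", id="my-custom-id")
print(msg.id)  # my-custom-id
```

Provider- and run-assigned IDs only appear on messages produced by an actual chat model invocation, as described above.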

View File

@@ -53,17 +53,29 @@ This is how you use MessagesPlaceholder.
```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage
from langchain_core.messages import HumanMessage, AIMessage
prompt_template = ChatPromptTemplate([
("system", "You are a helpful assistant"),
MessagesPlaceholder("msgs")
])
# Simple example with one message
prompt_template.invoke({"msgs": [HumanMessage(content="hi!")]})
# More complex example with conversation history
messages_to_pass = [
HumanMessage(content="What's the capital of France?"),
AIMessage(content="The capital of France is Paris."),
HumanMessage(content="And what about Germany?")
]
formatted_prompt = prompt_template.invoke({"msgs": messages_to_pass})
print(formatted_prompt)
```
This will produce a list of two messages, the first one being a system message, and the second one being the HumanMessage we passed in.
This will produce a list of four messages total: the system message plus the three messages we passed in (two HumanMessages and one AIMessage).
If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in).
This is useful for letting a list of messages be slotted into a particular spot.

View File

@@ -29,6 +29,22 @@ model_with_structure = model.with_structured_output(schema)
structured_output = model_with_structure.invoke(user_input)
```
:::warning[Tool Order Matters]
When combining structured output with additional tools, bind tools **first**, then apply structured output:
```python
# Correct
model_with_tools = model.bind_tools([tool1, tool2])
structured_model = model_with_tools.with_structured_output(schema)
# Incorrect - will cause tool resolution errors
structured_model = model.with_structured_output(schema)
broken_model = structured_model.bind_tools([tool1, tool2])
```
:::
## Schema definition
The central concept is that the output structure of model responses needs to be represented in some way.
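
For example, one common representation is a Pydantic model passed to `with_structured_output`. The sketch below is illustrative only: the chat model, model name, and `Movie` schema are stand-ins, not part of this guide.

```python
from pydantic import BaseModel, Field

from langchain_openai import ChatOpenAI  # any chat model supporting structured output


class Movie(BaseModel):
    """Basic facts about a movie."""

    title: str = Field(description="The title of the movie")
    year: int = Field(description="The year the movie was released")


model = ChatOpenAI(model="gpt-4o-mini")
model_with_structure = model.with_structured_output(Movie)

result = model_with_structure.invoke("Tell me about the movie Inception")
print(result.title, result.year)  # e.g. Inception 2010
```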

View File

@@ -171,6 +171,26 @@ Please see the [InjectedState](https://langchain-ai.github.io/langgraph/referenc
Please see the [InjectedStore](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.InjectedStore) documentation for more details.
## Tool Artifacts vs. Injected State
Although similar conceptually, tool artifacts in LangChain and [injected state in LangGraph](https://langchain-ai.github.io/langgraph/reference/agents/#langgraph.prebuilt.tool_node.InjectedState) serve different purposes and operate at different levels of abstraction.
**Tool Artifacts**
- **Purpose:** Store and pass data between tool executions within a single chain/workflow
- **Scope:** Limited to tool-to-tool communication
- **Lifecycle:** Tied to individual tool calls and their immediate context
- **Usage:** Temporary storage for intermediate results that tools need to share
**Injected State (LangGraph)**
- **Purpose:** Maintain persistent state across the entire graph execution
- **Scope:** Global to the entire graph workflow
- **Lifecycle:** Persists throughout the entire graph execution and can be saved/restored
- **Usage:** Long-term state management, conversation memory, user context, workflow checkpointing
Tool artifacts are ephemeral data passed between tools, while injected state is persistent workflow-level state that survives across multiple steps, tool calls, and even execution sessions in LangGraph.
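
As a minimal sketch of the tool-artifact side (assuming `langchain_core` is installed; the tool and values here are illustrative):

```python
import random

from langchain_core.tools import tool


@tool(response_format="content_and_artifact")
def sample_numbers(n: int) -> tuple[str, list[int]]:
    """Draw n random digits."""
    numbers = [random.randint(0, 9) for _ in range(n)]
    # The first element (content) is what the model sees; the second (artifact)
    # rides along on the resulting ToolMessage for downstream steps.
    return f"Drew {n} numbers", numbers


# Invoking with a ToolCall-shaped input yields a ToolMessage carrying the artifact
msg = sample_numbers.invoke(
    {"name": "sample_numbers", "args": {"n": 3}, "id": "call_1", "type": "tool_call"}
)
print(msg.content)   # Drew 3 numbers
print(msg.artifact)  # e.g. [4, 7, 1]
```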
## Best practices
When designing tools to be used by models, keep the following in mind:

View File

@@ -7,4 +7,4 @@ Traces contain individual steps called `runs`. These can be individual calls fro
tool, or sub-chains.
Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues.
For a deeper dive, check out [this LangSmith conceptual guide](https://docs.smith.langchain.com/concepts/tracing).
For a deeper dive, check out [this LangSmith conceptual guide](https://docs.langchain.com/langsmith/observability-quickstart).

View File

@@ -3,9 +3,9 @@
Here are some things to keep in mind for all types of contributions:
- Follow the ["fork and pull request"](https://docs.github.com/en/get-started/exploring-projects-on-github/contributing-to-a-project) workflow.
- Fill out the checked-in pull request template when opening pull requests. Note related issues and tag relevant maintainers.
- Fill out the checked-in pull request template when opening pull requests. Note related issues.
- Ensure your PR passes formatting, linting, and testing checks before requesting a review.
- If you would like comments or feedback on your current progress, please open an issue or discussion and tag a maintainer.
- If you would like comments or feedback on your current progress, please open an issue or discussion.
- See the sections on [Testing](setup.mdx#testing) and [Formatting and Linting](setup.mdx#formatting-and-linting) for how to run these checks locally.
- Backwards compatibility is key. Your changes must not be breaking, except in case of critical bug and security fixes.
- Look for duplicate PRs or issues that have already been opened before opening a new one.

View File

@@ -223,6 +223,49 @@ If codespell is incorrectly flagging a word, you can skip spellcheck for that wo
ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogyny,unsecure'
```
### Pre-commit
We use [pre-commit](https://pre-commit.com/) to ensure commits are formatted/linted.
#### Installing Pre-commit
First, install pre-commit:
```bash
# Option 1: Using uv (recommended)
uv tool install pre-commit
# Option 2: Using Homebrew (globally for macOS/Linux)
brew install pre-commit
# Option 3: Using pip
pip install pre-commit
```
Then install the git hook scripts:
```bash
pre-commit install
```
#### How Pre-commit Works
Once installed, pre-commit will automatically run on every `git commit`. Hooks are specified in `.pre-commit-config.yaml` and will:
- Format code using `ruff` for the specific library/package you're modifying
- Only run on files that have changed
- Prevent commits if formatting fails
#### Skipping Pre-commit
In exceptional cases, you can skip pre-commit hooks with:
```bash
git commit --no-verify
```
However, this is discouraged as the CI system will still enforce the same formatting rules.
## Working with optional dependencies
`langchain`, `langchain-community`, and `langchain-experimental` rely on optional dependencies to keep these packages lightweight.

View File

@@ -79,7 +79,7 @@ Here are some high-level tips on writing a good how-to guide:
### Conceptual guide
LangChain's conceptual guide falls under the **Explanation** quadrant of Diataxis. These guides should cover LangChain terms and concepts
LangChain's conceptual guides fall under the **Explanation** quadrant of Diataxis. These guides should cover LangChain terms and concepts
in a more abstract way than how-to guides or tutorials, targeting curious users interested in
gaining a deeper understanding of, and insights into, the framework. Try to avoid excessively large code examples as the primary goal is to
provide perspective to the user rather than to finish a practical project. These guides should cover **why** things work the way they do.
@@ -105,7 +105,7 @@ Here are some high-level tips on writing a good conceptual guide:
### References
References contain detailed, low-level information that describes exactly what functionality exists and how to use it.
In LangChain, this is mainly our API reference pages, which are populated from docstrings within code.
In LangChain, these are mainly our API reference pages, which are populated from docstrings within code.
References pages are generally not read end-to-end, but are consulted as necessary when a user needs to know
how to use something specific.
@@ -119,7 +119,7 @@ but here are some high-level tips on writing a good docstring:
- Be concise
- Discuss special cases and deviations from a user's expectations
- Go into detail on required inputs and outputs
- Light details on when one might use the feature are fine, but in-depth details belong in other sections.
- Light details on when one might use the feature are fine, but in-depth details belong in other sections
Each category serves a distinct purpose and requires a specific approach to writing and structuring the content.
@@ -127,17 +127,17 @@ Each category serves a distinct purpose and requires a specific approach to writ
Here are some other guidelines you should think about when writing and organizing documentation.
We generally do not merge new tutorials from outside contributors without an actue need.
We generally do not merge new tutorials from outside contributors without an acute need.
We welcome updates as well as new integration docs, how-tos, and references.
### Avoid duplication
Multiple pages that cover the same material in depth are difficult to maintain and cause confusion. There should
be only one (very rarely two), canonical page for a given concept or feature. Instead, you should link to other guides.
be only one (very rarely two) canonical page for a given concept or feature. Instead, you should link to other guides.
### Link to other sections
Because sections of the docs do not exist in a vacuum, it is important to link to other sections frequently,
Because sections of the docs do not exist in a vacuum, it is important to link to other sections frequently
to allow a developer to learn more about an unfamiliar topic within the flow of reading.
This includes linking to the API references and conceptual sections!

View File

@@ -33,7 +33,7 @@ Sometimes you want to make a small change, like fixing a typo, and the easiest w
- Click the "Commit changes..." button at the top-right corner of the page.
- Give your commit a title like "Fix typo in X section."
- Optionally, write an extended commit description.
- Click "Propose changes"
- Click "Propose changes".
5. **Submit a pull request (PR):**
- GitHub will redirect you to a page where you can create a pull request.

View File

@@ -159,7 +159,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"id": "321e3036-abd2-4e1f-bcc6-606efd036954",
"metadata": {
"execution": {
@@ -183,7 +183,7 @@
],
"source": [
"configurable_model.invoke(\n",
" \"what's your name\", config={\"configurable\": {\"model\": \"claude-3-5-sonnet-20240620\"}}\n",
" \"what's your name\", config={\"configurable\": {\"model\": \"claude-3-5-sonnet-latest\"}}\n",
")"
]
},
@@ -234,7 +234,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "6c8755ba-c001-4f5a-a497-be3f1db83244",
"metadata": {
"execution": {
@@ -261,7 +261,7 @@
" \"what's your name\",\n",
" config={\n",
" \"configurable\": {\n",
" \"first_model\": \"claude-3-5-sonnet-20240620\",\n",
" \"first_model\": \"claude-3-5-sonnet-latest\",\n",
" \"first_temperature\": 0.5,\n",
" \"first_max_tokens\": 100,\n",
" }\n",
@@ -336,7 +336,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"id": "e57dfe9f-cd24-4e37-9ce9-ccf8daf78f89",
"metadata": {
"execution": {
@@ -368,14 +368,14 @@
"source": [
"llm_with_tools.invoke(\n",
" \"what's bigger in 2024 LA or NYC\",\n",
" config={\"configurable\": {\"model\": \"claude-3-5-sonnet-20240620\"}},\n",
" config={\"configurable\": {\"model\": \"claude-3-5-sonnet-latest\"}},\n",
").tool_calls"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"display_name": "langchain-monorepo",
"language": "python",
"name": "python3"
},
@@ -389,7 +389,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
"version": "3.12.11"
}
},
"nbformat": 4,

View File

@@ -741,13 +741,13 @@
"\n",
"If you're using tools with agents, you will likely need an error handling strategy, so the agent can recover from the error and continue execution.\n",
"\n",
"A simple strategy is to throw a `ToolException` from inside the tool and specify an error handler using `handle_tool_error`. \n",
"A simple strategy is to throw a `ToolException` from inside the tool and specify an error handler using `handle_tool_errors`. \n",
"\n",
"When the error handler is specified, the exception will be caught and the error handler will decide which output to return from the tool.\n",
"\n",
"You can set `handle_tool_error` to `True`, a string value, or a function. If it's a function, the function should take a `ToolException` as a parameter and return a value.\n",
"You can set `handle_tool_errors` to `True`, a string value, or a function. If it's a function, the function should take a `ToolException` as a parameter and return a value.\n",
"\n",
"Please note that only raising a `ToolException` won't be effective. You need to first set the `handle_tool_error` of the tool because its default value is `False`."
"Please note that only raising a `ToolException` won't be effective. You need to first set the `handle_tool_errors` of the tool because its default value is `False`."
]
},
{
@@ -777,7 +777,7 @@
"id": "9d93b217-1d44-4d31-8956-db9ea680ff4f",
"metadata": {},
"source": [
"Here's an example with the default `handle_tool_error=True` behavior."
"Here's an example with the default `handle_tool_errors=True` behavior."
]
},
{
@@ -807,7 +807,7 @@
"source": [
"get_weather_tool = StructuredTool.from_function(\n",
" func=get_weather,\n",
" handle_tool_error=True,\n",
" handle_tool_errors=True,\n",
")\n",
"\n",
"get_weather_tool.invoke({\"city\": \"foobar\"})"
@@ -818,7 +818,7 @@
"id": "f91d6dc0-3271-4adc-a155-21f2e62ffa56",
"metadata": {},
"source": [
"We can set `handle_tool_error` to a string that will always be returned."
"We can set `handle_tool_errors` to a string that will always be returned."
]
},
{
@@ -848,7 +848,7 @@
"source": [
"get_weather_tool = StructuredTool.from_function(\n",
" func=get_weather,\n",
" handle_tool_error=\"There is no such city, but it's probably above 0K there!\",\n",
" handle_tool_errors=\"There is no such city, but it's probably above 0K there!\",\n",
")\n",
"\n",
"get_weather_tool.invoke({\"city\": \"foobar\"})"
@@ -893,7 +893,7 @@
"\n",
"get_weather_tool = StructuredTool.from_function(\n",
" func=get_weather,\n",
" handle_tool_error=_handle_error,\n",
" handle_tool_errors=_handle_error,\n",
")\n",
"\n",
"get_weather_tool.invoke({\"city\": \"foobar\"})"

View File

@@ -565,7 +565,7 @@
"id": "3ac2c37a-06a1-40d3-a192-9078eb83994b",
"metadata": {},
"source": [
"<table><thead><tr><th colspan=\"3\">able 1. LUllclll 1ayoul actCCLloll 1110AdCs 111 L1C LayoOulralsel 1110U4cl 200</th></tr><tr><th>Dataset</th><th>| Base Model\\'|</th><th>Notes</th></tr></thead><tbody><tr><td>PubLayNet [38]</td><td>F/M</td><td>Layouts of modern scientific documents</td></tr><tr><td>PRImA</td><td>M</td><td>Layouts of scanned modern magazines and scientific reports</td></tr><tr><td>Newspaper</td><td>F</td><td>Layouts of scanned US newspapers from the 20th century</td></tr><tr><td>TableBank [18]</td><td>F</td><td>Table region on modern scientific and business document</td></tr><tr><td>HJDataset</td><td>F/M</td><td>Layouts of history Japanese documents</td></tr></tbody></table>"
"<table><thead><tr><th colspan=\"3\">Table 1: Current layout detection models in the LayoutParser model zoo</th></tr><tr><th>Dataset</th><th>Base Model1</th><th>Large Model Notes</th></tr></thead><tbody><tr><td>PubLayNet [38]</td><td>F/M</td><td>Layouts of modern scientific documents</td></tr><tr><td>PRImA</td><td>M</td><td>Layouts of scanned modern magazines and scientific reports</td></tr><tr><td>Newspaper</td><td>F</td><td>Layouts of scanned US newspapers from the 20th century</td></tr><tr><td>TableBank [18]</td><td>F</td><td>Table region on modern scientific and business document</td></tr><tr><td>HJDataset</td><td>F/M</td><td>Layouts of history Japanese documents</td></tr></tbody></table>"
]
},
{

View File

@@ -122,13 +122,13 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
"from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-4-turbo\")\n",

View File

@@ -5,7 +5,7 @@ sidebar_class_name: hidden
# How-to guides
Here you'll find answers to How do I….? types of questions.
Here you'll find answers to "How do I….?" types of questions.
These guides are *goal-oriented* and *concrete*; they're meant to help you complete a specific task.
For conceptual explanations see the [Conceptual guide](/docs/concepts/).
For end-to-end walkthroughs see [Tutorials](/docs/tutorials).
@@ -47,7 +47,7 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
- [How to: use chat model to call tools](/docs/how_to/tool_calling)
- [How to: stream tool calls](/docs/how_to/tool_streaming)
- [How to: handle rate limits](/docs/how_to/chat_model_rate_limiting)
- [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
- [How to: few-shot prompt tool behavior](/docs/how_to/tools_few_shot)
- [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific)
- [How to: force a specific tool call](/docs/how_to/tool_choice)
- [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
@@ -64,8 +64,8 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
[Prompt Templates](/docs/concepts/prompt_templates) are responsible for formatting user input into a format that can be passed to a language model.
- [How to: use few shot examples](/docs/how_to/few_shot_examples)
- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
- [How to: use few-shot examples](/docs/how_to/few_shot_examples)
- [How to: use few-shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
- [How to: partially format prompt templates](/docs/how_to/prompts_partial)
- [How to: compose prompts together](/docs/how_to/prompts_composition)
- [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/)
@@ -168,7 +168,7 @@ See [supported integrations](/docs/integrations/vectorstores/) for details on ge
Indexing is the process of keeping your vectorstore in-sync with the underlying data source.
- [How to: reindex data to keep your vectorstore in-sync with the underlying data source](/docs/how_to/indexing)
- [How to: reindex data to keep your vectorstore in sync with the underlying data source](/docs/how_to/indexing)
### Tools
@@ -178,7 +178,7 @@ LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pa
- [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin)
- [How to: use chat models to call tools](/docs/how_to/tool_calling)
- [How to: pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model)
- [How to: pass run time values to tools](/docs/how_to/tool_runtime)
- [How to: pass runtime values to tools](/docs/how_to/tool_runtime)
- [How to: add a human-in-the-loop for tools](/docs/how_to/tools_human)
- [How to: handle tool errors](/docs/how_to/tools_error)
- [How to: force models to call a tool](/docs/how_to/tool_choice)
@@ -297,7 +297,7 @@ For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/).
You can use an LLM to do question answering over graph databases.
For a high-level tutorial, check out [this guide](/docs/tutorials/graph/).
- [How to: add a semantic layer over the database](/docs/how_to/graph_semantic)
- [How to: add a semantic layer over a database](/docs/how_to/graph_semantic)
- [How to: construct knowledge graphs](/docs/how_to/graph_constructing)
### Summarization
@@ -345,7 +345,7 @@ LangGraph is an extension of LangChain aimed at
building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph documentation is currently hosted on a separate site.
You can peruse [LangGraph how-to guides here](https://langchain-ai.github.io/langgraph/how-tos/).
You can find the [LangGraph guides here](https://langchain-ai.github.io/langgraph/guides/).
## [LangSmith](https://docs.smith.langchain.com/)

View File

@@ -61,7 +61,7 @@
" * document addition by id (`add_documents` method with `ids` argument)\n",
" * delete by id (`delete` method with `ids` argument)\n",
"\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `AzureCosmosDBNoSqlVectorSearch`, `AzureCosmosDBVectorSearch`, `AzureSearch`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MongoDBAtlasVectorSearch`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SingleStoreDB`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
"Compatible Vectorstores: `AnalyticDB`, `AstraDB`, `AwaDB`, `AzureCosmosDBNoSqlVectorSearch`, `AzureCosmosDBVectorSearch`, `AzureSearch`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MongoDBAtlasVectorSearch`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SingleStoreDB`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
" \n",
"## Caution\n",
"\n",

View File

@@ -45,7 +45,7 @@
"A few frameworks for this have emerged to support inference of open-source LLMs on various devices:\n",
"\n",
"1. [`llama.cpp`](https://github.com/ggerganov/llama.cpp): C++ implementation of llama inference code with [weight optimization / quantization](https://finbarr.ca/how-is-llama-cpp-possible/)\n",
"2. [`gpt4all`](https://github.com/nomic-ai/gpt4all): Optimized C backend for inference\n",
"2. [`gpt4all`](https://docs.gpt4all.io/index.html): Optimized C backend for inference\n",
"3. [`ollama`](https://github.com/ollama/ollama): Bundles model weights and environment into an app that runs on device and serves the LLM\n",
"4. [`llamafile`](https://github.com/Mozilla-Ocho/llamafile): Bundles model weights and everything needed to run the model in a single file, allowing you to run the LLM locally from this file without any additional installation steps\n",
"\n",
@@ -74,12 +74,12 @@
"\n",
"## Quickstart\n",
"\n",
"[Ollama](https://ollama.ai/) is one way to easily run inference on macOS.\n",
"[Ollama](https://ollama.com/) is one way to easily run inference on macOS.\n",
" \n",
"The instructions [here](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) provide details, which we summarize:\n",
" \n",
"* [Download and run](https://ollama.ai/download) the app\n",
"* From command line, fetch a model from this [list of options](https://ollama.com/search): e.g., `ollama pull llama3.1:8b`\n",
"* From command line, fetch a model from this [list of options](https://ollama.com/search): e.g., `ollama pull gpt-oss:20b`\n",
"* When the app is running, all models are automatically served on `localhost:11434`\n"
]
},
@@ -95,7 +95,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "86178adb",
"metadata": {},
"outputs": [
@@ -113,7 +113,7 @@
"source": [
"from langchain_ollama import ChatOllama\n",
"\n",
"llm = ChatOllama(model=\"gpt-oss:20b\")\n",
"llm = ChatOllama(model=\"gpt-oss:20b\", validate_model_on_init=True)\n",
"\n",
"llm.invoke(\"The first man on the moon was ...\").content"
]
@@ -149,7 +149,40 @@
],
"source": [
"for chunk in llm.stream(\"The first man on the moon was ...\"):\n",
" print(chunk.text(), end=\"|\", flush=True)"
" print(chunk, end=\"|\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "e5731060",
"metadata": {},
"source": [
"Ollama also includes a chat model wrapper that handles formatting conversation turns:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f14a778a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The answer is a historic one!\\n\\nThe first man to walk on the Moon was Neil Armstrong, an American astronaut and commander of the Apollo 11 mission. On July 20, 1969, Armstrong stepped out of the lunar module Eagle onto the surface of the Moon, famously declaring:\\n\\n\"That\\'s one small step for man, one giant leap for mankind.\"\\n\\nArmstrong was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the Moon during the mission. Michael Collins remained in orbit around the Moon in the command module Columbia.\\n\\nNeil Armstrong passed away on August 25, 2012, but his legacy as a pioneering astronaut and engineer continues to inspire people around the world!', response_metadata={'model': 'llama3.1:8b', 'created_at': '2024-08-01T00:38:29.176717Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 10681861417, 'load_duration': 34270292, 'prompt_eval_count': 19, 'prompt_eval_duration': 6209448000, 'eval_count': 141, 'eval_duration': 4432022000}, id='run-7bed57c5-7f54-4092-912c-ae49073dcd48-0', usage_metadata={'input_tokens': 19, 'output_tokens': 141, 'total_tokens': 160})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_ollama import ChatOllama\n",
"\n",
"chat_model = ChatOllama(model=\"llama3.1:8b\")\n",
"\n",
"chat_model.invoke(\"Who was the first man on the moon?\")"
]
},
{
@@ -179,7 +212,7 @@
"\n",
"In particular, ensure that conda is using the correct virtual environment that you created (`miniforge3`).\n",
"\n",
"e.g.,\n",
"e.g., for me:\n",
"\n",
"```shell\n",
"conda activate /Users/rlm/miniforge3/envs/llama\n",
@@ -201,18 +234,18 @@
"\n",
"There are various ways to gain access to quantized model weights.\n",
"\n",
"1. [HuggingFace](https://huggingface.co/TheBloke) - Many quantized model are available for download and can be run with framework such as [`llama.cpp`](https://github.com/ggerganov/llama.cpp). You can also download models in [`llamafile` format](https://huggingface.co/models?other=llamafile) from HuggingFace.\n",
"2. [gpt4all](https://gpt4all.io/index.html) - The model explorer offers a leaderboard of metrics and associated quantized models available for download \n",
"3. [ollama](https://github.com/ollama/ollama) - Several models can be accessed directly via `pull`\n",
"1. [`HuggingFace`](https://huggingface.co/TheBloke) - Many quantized model are available for download and can be run with framework such as [`llama.cpp`](https://github.com/ggerganov/llama.cpp). You can also download models in [`llamafile` format](https://huggingface.co/models?other=llamafile) from HuggingFace.\n",
"2. [`gpt4all`](https://gpt4all.io/index.html) - The model explorer offers a leaderboard of metrics and associated quantized models available for download \n",
"3. [`ollama`](https://github.com/jmorganca/ollama) - Several models can be accessed directly via `pull`\n",
"\n",
"### Ollama\n",
"\n",
"With [Ollama](https://github.com/ollama/ollama), fetch a model via `ollama pull <model family>:<tag>`:"
"With [Ollama](https://github.com/ollama/ollama), fetch a model via `ollama pull <model family>:<tag>`."
]
},
{
"cell_type": "code",
"execution_count": 42,
"execution_count": null,
"id": "8ecd2f78",
"metadata": {},
"outputs": [
@@ -647,11 +680,17 @@
"\n",
"In addition, [here](https://blog.langchain.dev/using-langsmith-to-support-fine-tuning-of-open-source-llms/) is an overview on fine-tuning, which can utilize open-source LLMs."
]
},
{
"cell_type": "markdown",
"id": "14c2c170",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "langchain",
"language": "python",
"name": "python3"
},
@@ -665,7 +704,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.12.11"
}
},
"nbformat": 4,

View File

@@ -58,7 +58,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"id": "1fcf7b27-1cc3-420a-b920-0420b5892e20",
"metadata": {},
"outputs": [
@@ -102,7 +102,7 @@
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
"print(response.text)"
]
},
{
@@ -133,7 +133,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "99d27f8f-ae78-48bc-9bf2-3cef35213ec7",
"metadata": {},
"outputs": [
@@ -163,7 +163,7 @@
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
"print(response.text)"
]
},
{
@@ -176,7 +176,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "325fb4ca",
"metadata": {},
"outputs": [
@@ -198,7 +198,7 @@
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
"print(response.text)"
]
},
{
@@ -234,7 +234,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "6c1455a9-699a-4702-a7e0-7f6eaec76a21",
"metadata": {},
"outputs": [
@@ -284,7 +284,7 @@
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
"print(response.text)"
]
},
{
@@ -312,7 +312,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "55e1d937-3b22-4deb-b9f0-9e688f0609dc",
"metadata": {},
"outputs": [
@@ -342,7 +342,7 @@
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
"print(response.text)"
]
},
{
@@ -417,7 +417,7 @@
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
"print(response.text)"
]
},
{
@@ -443,7 +443,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "83593b9d-a8d3-4c99-9dac-64e0a9d397cb",
"metadata": {},
"outputs": [
@@ -488,13 +488,13 @@
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())\n",
"print(response.text)\n",
"response.usage_metadata"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "9bbf578e-794a-4dc0-a469-78c876ccd4a3",
"metadata": {},
"outputs": [
@@ -530,7 +530,7 @@
" ],\n",
"}\n",
"response = llm.invoke([message, response, next_message])\n",
"print(response.text())\n",
"print(response.text)\n",
"response.usage_metadata"
]
},
@@ -600,7 +600,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "ae076c9b-ff8f-461d-9349-250f396c9a25",
"metadata": {},
"outputs": [
@@ -641,7 +641,7 @@
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
"print(response.text)"
]
},
{

View File

@@ -54,7 +54,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "5df2e558-321d-4cf7-994e-2815ac37e704",
"metadata": {},
"outputs": [
@@ -75,7 +75,7 @@
"\n",
"chain = prompt | llm\n",
"response = chain.invoke({\"image_url\": url})\n",
"print(response.text())"
"print(response.text)"
]
},
{
@@ -117,7 +117,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "25e4829e-0073-49a8-9669-9f43e5778383",
"metadata": {},
"outputs": [
@@ -144,7 +144,7 @@
" \"cache_type\": \"ephemeral\",\n",
" }\n",
")\n",
"print(response.text())"
"print(response.text)"
]
},
{

View File

@@ -74,12 +74,12 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"id": "a88ff70c",
"metadata": {},
"outputs": [],
"source": [
"# from langchain_experimental.text_splitter import SemanticChunker\n",
"from langchain_experimental.text_splitter import SemanticChunker\n",
"from langchain_openai.embeddings import OpenAIEmbeddings\n",
"\n",
"text_splitter = SemanticChunker(OpenAIEmbeddings())"

View File

@@ -612,11 +612,56 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 18,
"id": "35ea904e-795f-411b-bef8-6484dbb6e35c",
"metadata": {},
"outputs": [],
"source": "from langchain_experimental.agents import create_pandas_dataframe_agent\n\nagent = create_pandas_dataframe_agent(\n llm, df, agent_type=\"openai-tools\", verbose=True, allow_dangerous_code=True\n)\nagent.invoke(\n {\n \"input\": \"What's the correlation between age and fare? is that greater than the correlation between fare and survival?\"\n }\n)"
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `python_repl_ast` with `{'query': \"df[['Age', 'Fare']].corr().iloc[0,1]\"}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m0.11232863699941621\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `python_repl_ast` with `{'query': \"df[['Fare', 'Survived']].corr().iloc[0,1]\"}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m0.2561785496289603\u001b[0m\u001b[32;1m\u001b[1;3mThe correlation between Age and Fare is approximately 0.112, and the correlation between Fare and Survival is approximately 0.256.\n",
"\n",
"Therefore, the correlation between Fare and Survival (0.256) is greater than the correlation between Age and Fare (0.112).\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': \"What's the correlation between age and fare? is that greater than the correlation between fare and survival?\",\n",
" 'output': 'The correlation between Age and Fare is approximately 0.112, and the correlation between Fare and Survival is approximately 0.256.\\n\\nTherefore, the correlation between Fare and Survival (0.256) is greater than the correlation between Age and Fare (0.112).'}"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_experimental.agents import create_pandas_dataframe_agent\n",
"\n",
"agent = create_pandas_dataframe_agent(\n",
" llm, df, agent_type=\"openai-tools\", verbose=True, allow_dangerous_code=True\n",
")\n",
"agent.invoke(\n",
" {\n",
" \"input\": \"What's the correlation between age and fare? is that greater than the correlation between fare and survival?\"\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
@@ -741,4 +786,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}

View File

@@ -998,6 +998,91 @@
"\n",
"chain.invoke({\"query\": query})"
]
},
{
"cell_type": "markdown",
"id": "xfejabhtn2",
"metadata": {},
"source": [
"## Combining with Additional Tools\n",
"\n",
"When you need to use both structured output and additional tools (like web search), note the order of operations:\n",
"\n",
"**Correct Order**:\n",
"```python\n",
"# 1. Bind tools first\n",
"llm_with_tools = llm.bind_tools([web_search_tool, calculator_tool])\n",
"\n",
"# 2. Apply structured output\n",
"structured_llm = llm_with_tools.with_structured_output(MySchema)\n",
"```\n",
"\n",
"**Incorrect Order**:\n",
"\n",
"```python\n",
"# This will fail with \"Tool 'MySchema' not found\" error\n",
"structured_llm = llm.with_structured_output(MySchema)\n",
"broken_llm = structured_llm.bind_tools([web_search_tool])\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "653798ca",
"metadata": {},
"source": [
"**Why Order Matters:**\n",
"`with_structured_output()` internally uses tool calling to enforce the schema. When you bind additional tools afterward, it creates a conflict in the tool resolution system."
]
},
{
"cell_type": "markdown",
"id": "1345f4a4",
"metadata": {},
"source": [
"**Complete Example:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0835637b",
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"class SearchResult(BaseModel):\n",
" \"\"\"Structured search result.\"\"\"\n",
"\n",
" query: str = Field(description=\"The search query\")\n",
" findings: str = Field(description=\"Summary of findings\")\n",
"\n",
"\n",
"# Define tools\n",
"search_tool = {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"web_search\",\n",
" \"description\": \"Search the web for information\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\"query\": {\"type\": \"string\", \"description\": \"Search query\"}},\n",
" \"required\": [\"query\"],\n",
" },\n",
" },\n",
"}\n",
"\n",
"# Correct approach\n",
"llm = ChatOpenAI()\n",
"llm_with_search = llm.bind_tools([search_tool])\n",
"structured_search_llm = llm_with_search.with_structured_output(SearchResult)\n",
"\n",
"# Now you can use both search and get structured output\n",
"result = structured_search_llm.invoke(\"Search for latest AI research and summarize\")"
]
}
],
"metadata": {

View File

@@ -39,6 +39,16 @@
"/>\n"
]
},
{
"cell_type": "markdown",
"id": "ecc06359",
"metadata": {},
"source": [
"See also: [How to summarize through parallelization](/docs/how_to/summarize_map_reduce/) and\n",
"[How to summarize through iterative refinement](/docs/how_to/summarize_refine/).\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 1,

View File

@@ -147,7 +147,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "74de0286-b003-4b48-9cdd-ecab435515ca",
"metadata": {},
"outputs": [],
@@ -157,7 +157,7 @@
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\", temperature=0)"
"llm = ChatAnthropic(model=\"claude-3-5-sonnet-latest\", temperature=0)"
]
},
{

View File

@@ -55,7 +55,7 @@
"source": [
"## Defining tool schemas\n",
"\n",
"For a model to be able to call tools, we need to pass in tool schemas that describe what the tool does and what it's arguments are. Chat models that support tool calling features implement a `.bind_tools()` method for passing tool schemas to the model. Tool schemas can be passed in as Python functions (with typehints and docstrings), Pydantic models, TypedDict classes, or LangChain [Tool objects](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#basetool). Subsequent invocations of the model will pass in these tool schemas along with the prompt.\n",
"For a model to be able to call tools, we need to pass in tool schemas that describe what the tool does and what its arguments are. Chat models that support tool calling features implement a `.bind_tools()` method for passing tool schemas to the model. Tool schemas can be passed in as Python functions (with typehints and docstrings), Pydantic models, TypedDict classes, or LangChain [Tool objects](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#basetool). Subsequent invocations of the model will pass in these tool schemas along with the prompt.\n",
"\n",
"### Python functions\n",
"Our tool schemas can be Python functions:"

View File

@@ -38,7 +38,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
"if \"ANTHROPIC_API_KEY\" not in os.environ:\n",
" os.environ[\"ANTHROPIC_API_KEY\"] = getpass()\n",
"\n",
"model = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\", temperature=0)"
"model = ChatAnthropic(model=\"claude-3-5-sonnet-latest\", temperature=0)"
]
},
{

View File

@@ -53,7 +53,7 @@
"\n",
"To keep the most recent messages, we set `strategy=\"last\"`. We'll also set `include_system=True` to include the `SystemMessage`, and `start_on=\"human\"` to make sure the resulting chat history is valid. \n",
"\n",
"This is a good default configuration when using `trim_messages` based on token count. Remember to adjust `token_counter` and `max_tokens` for your use case.\n",
"This is a good default configuration when using `trim_messages` based on token count. Remember to adjust `token_counter` and `max_tokens` for your use case. Keep in mind that new queries added to the chat history will be included in the token count unless you trim prior to adding the new query.\n",
"\n",
"Notice that for our `token_counter` we can pass in a function (more on that below) or a language model (since language models have a message token counting method). It makes sense to pass in a model when you're trimming your messages to fit into the context window of that specific model:"
]
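A minimal sketch of that default configuration (the toy `token_counter=len` counts messages rather than tokens, so `max_tokens` here means "keep the last N messages"; swap in a model or a real counter in practice):

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)

messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("Hi, I'm Bob."),
    AIMessage("Hello Bob!"),
    HumanMessage("What's my name?"),
]

trimmed = trim_messages(
    messages,
    strategy="last",      # keep the most recent messages
    include_system=True,  # always keep the SystemMessage
    start_on="human",     # resulting history starts on a human turn
    max_tokens=2,         # with token_counter=len: keep the last 2 messages
    token_counter=len,
)
# -> [SystemMessage(...), HumanMessage("What's my name?")]
```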
@@ -525,7 +525,7 @@
"id": "4d91d390-e7f7-467b-ad87-d100411d7a21",
"metadata": {},
"source": [
"Looking at the LangSmith trace we can see that before the messages are passed to the model they are first trimmed: https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r\n",
"Looking at [the LangSmith trace](https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r) we can see that before the messages are passed to the model they are first trimmed.\n",
"\n",
"Looking at just the trimmer, we can see that it's a Runnable object that can be invoked like all Runnables:"
]
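For example, calling `trim_messages` without a message list returns such a runnable trimmer (a sketch, reusing the parameters above):

```python
trimmer = trim_messages(
    strategy="last",
    include_system=True,
    start_on="human",
    max_tokens=2,
    token_counter=len,
)

trimmer.invoke(messages)  # same result as passing messages directly
```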
@@ -620,7 +620,7 @@
"id": "556b7b4c-43cb-41de-94fc-1a41f4ec4d2e",
"metadata": {},
"source": [
"Looking at the LangSmith trace we can see that we retrieve all of our messages but before the messages are passed to the model they are trimmed to be just the system message and last human message: https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r"
"Looking at [the LangSmith trace](https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r) we can see that we retrieve all of our messages but before the messages are passed to the model they are trimmed to be just the system message and last human message."
]
},
{
@@ -630,7 +630,7 @@
"source": [
"## API reference\n",
"\n",
"For a complete description of all arguments head to the API reference: https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.trim_messages.html"
"For a complete description of all arguments head to the [API reference](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.trim_messages.html)."
]
}
],

View File

@@ -7,10 +7,7 @@
"source": [
"# Confident\n",
"\n",
">[DeepEval](https://confident-ai.com) package for unit testing LLMs.\n",
"> Using Confident, everyone can build robust language models through faster iterations\n",
"> using both unit testing and integration testing. We provide support for each step in the iteration\n",
"> from synthetic data creation to testing.\n"
">[DeepEval](https://confident-ai.com) package for unit testing LLMs."
]
},
{
@@ -42,7 +39,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai langchain-community deepeval langchain-chroma"
"!pip install deepeval langchain langchain-openai"
]
},
{
@@ -64,11 +61,29 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": null,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">🎉🥳 Congratulations! You've successfully logged in! 🙌 \n",
"</pre>\n"
],
"text/plain": [
"🎉🥳 Congratulations! You've successfully logged in! 🙌 \n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"!deepeval login"
"import os\n",
"import deepeval\n",
"\n",
"api_key = os.getenv(\"DEEPEVAL_API_KEY\")\n",
"deepeval.login(api_key)"
]
},
{
@@ -76,12 +91,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setup DeepEval\n",
"### Setup Confident AI Callback (Modern)\n",
"\n",
"You can, by default, use the `DeepEvalCallbackHandler` to set up the metrics you want to track. However, this has limited support for metrics at the moment (more to be added soon). It currently supports:\n",
"- [Answer Relevancy](https://docs.confident-ai.com/docs/measuring_llm_performance/answer_relevancy)\n",
"- [Bias](https://docs.confident-ai.com/docs/measuring_llm_performance/debias)\n",
"- [Toxicness](https://docs.confident-ai.com/docs/measuring_llm_performance/non_toxic)"
"The previous DeepEvalCallbackHandler and metric tracking are deprecated. Please use the new integration below."
]
},
{
@@ -90,10 +102,15 @@
"metadata": {},
"outputs": [],
"source": [
"from deepeval.metrics.answer_relevancy import AnswerRelevancy\n",
"from deepeval.integrations.langchain import CallbackHandler\n",
"\n",
"# Here we want to make sure the answer is minimally relevant\n",
"answer_relevancy_metric = AnswerRelevancy(minimum_score=0.5)"
"handler = CallbackHandler(\n",
" name=\"My Trace\",\n",
" tags=[\"production\", \"v1\"],\n",
" metadata={\"experiment\": \"A/B\"},\n",
" thread_id=\"thread-123\",\n",
" user_id=\"user-456\",\n",
")"
]
},
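Once constructed, the handler is passed like any other LangChain callback. A sketch (the model name is an assumption; any chat model works):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# Attach the handler so this invocation is traced in Confident AI
response = llm.invoke(
    "What is LangChain?",
    config={"callbacks": [handler]},
)
```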
{
@@ -103,186 +120,11 @@
"source": [
"## Get Started"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"To use the `DeepEvalCallbackHandler`, we need the `implementation_name`. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.callbacks.confident_callback import DeepEvalCallbackHandler\n",
"\n",
"deepeval_callback = DeepEvalCallbackHandler(\n",
" implementation_name=\"langchainQuickstart\", metrics=[answer_relevancy_metric]\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Scenario 1: Feeding into LLM\n",
"\n",
"You can then feed it into your LLM with OpenAI."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[Generation(text='\\n\\nQ: What did the fish say when he hit the wall? \\nA: Dam.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nThe Moon \\n\\nThe moon is high in the midnight sky,\\nSparkling like a star above.\\nThe night so peaceful, so serene,\\nFilling up the air with love.\\n\\nEver changing and renewing,\\nA never-ending light of grace.\\nThe moon remains a constant view,\\nA reminder of lifes gentle pace.\\n\\nThrough time and space it guides us on,\\nA never-fading beacon of hope.\\nThe moon shines down on us all,\\nAs it continues to rise and elope.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nQ. What did one magnet say to the other magnet?\\nA. \"I find you very attractive!\"', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=\"\\n\\nThe world is charged with the grandeur of God.\\nIt will flame out, like shining from shook foil;\\nIt gathers to a greatness, like the ooze of oil\\nCrushed. Why do men then now not reck his rod?\\n\\nGenerations have trod, have trod, have trod;\\nAnd all is seared with trade; bleared, smeared with toil;\\nAnd wears man's smudge and shares man's smell: the soil\\nIs bare now, nor can foot feel, being shod.\\n\\nAnd for all this, nature is never spent;\\nThere lives the dearest freshness deep down things;\\nAnd though the last lights off the black West went\\nOh, morning, at the brown brink eastward, springs —\\n\\nBecause the Holy Ghost over the bent\\nWorld broods with warm breast and with ah! bright wings.\\n\\n~Gerard Manley Hopkins\", generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nQ: What did one ocean say to the other ocean?\\nA: Nothing, they just waved.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=\"\\n\\nA poem for you\\n\\nOn a field of green\\n\\nThe sky so blue\\n\\nA gentle breeze, the sun above\\n\\nA beautiful world, for us to love\\n\\nLife is a journey, full of surprise\\n\\nFull of joy and full of surprise\\n\\nBe brave and take small steps\\n\\nThe future will be revealed with depth\\n\\nIn the morning, when dawn arrives\\n\\nA fresh start, no reason to hide\\n\\nSomewhere down the road, there's a heart that beats\\n\\nBelieve in yourself, you'll always succeed.\", generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'completion_tokens': 504, 'total_tokens': 528, 'prompt_tokens': 24}, 'model_name': 'text-davinci-003'})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(\n",
" temperature=0,\n",
" callbacks=[deepeval_callback],\n",
" verbose=True,\n",
" openai_api_key=\"<YOUR_API_KEY>\",\n",
")\n",
"output = llm.generate(\n",
" [\n",
" \"What is the best evaluation tool out there? (no bias at all)\",\n",
" ]\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then check the metric if it was successful by calling the `is_successful()` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"answer_relevancy_metric.is_successful()\n",
"# returns True/False"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you have ran that, you should be able to see our dashboard below. \n",
"\n",
"![Dashboard](https://docs.confident-ai.com/assets/images/dashboard-screenshot-b02db73008213a211b1158ff052d969e.png)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Scenario 2: Tracking an LLM in a chain without callbacks\n",
"\n",
"To track an LLM in a chain without callbacks, you can plug into it at the end.\n",
"\n",
"We can start by defining a simple chain as shown below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"from langchain.chains import RetrievalQA\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"text_file_url = \"https://raw.githubusercontent.com/hwchase17/chat-your-data/master/state_of_the_union.txt\"\n",
"\n",
"openai_api_key = \"sk-XXX\"\n",
"\n",
"with open(\"state_of_the_union.txt\", \"w\") as f:\n",
" response = requests.get(text_file_url)\n",
" f.write(response.text)\n",
"\n",
"loader = TextLoader(\"state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)\n",
"docsearch = Chroma.from_documents(texts, embeddings)\n",
"\n",
"qa = RetrievalQA.from_chain_type(\n",
" llm=OpenAI(openai_api_key=openai_api_key),\n",
" chain_type=\"stuff\",\n",
" retriever=docsearch.as_retriever(),\n",
")\n",
"\n",
"# Providing a new question-answering pipeline\n",
"query = \"Who is the president?\"\n",
"result = qa.run(query)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"After defining a chain, you can then manually check for answer similarity."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"answer_relevancy_metric.measure(result, query)\n",
"answer_relevancy_metric.is_successful()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### What's next?\n",
"\n",
"You can create your own custom metrics [here](https://docs.confident-ai.com/docs/quickstart/custom-metrics). \n",
"\n",
"DeepEval also offers other features such as being able to [automatically create unit tests](https://docs.confident-ai.com/docs/quickstart/synthetic-data-creation), [tests for hallucination](https://docs.confident-ai.com/docs/measuring_llm_performance/factual_consistency).\n",
"\n",
"If you are interested, check out our Github repository here [https://github.com/confident-ai/deepeval](https://github.com/confident-ai/deepeval). We welcome any PRs and discussions on how to improve LLM performance."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "langchain",
"language": "python",
"name": "python3"
},
@@ -296,12 +138,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
},
"vscode": {
"interpreter": {
"hash": "a53ebf4a859167383b364e7e7521d0add3c2dbbdecce4edf676e8c4634ff3fbb"
}
"version": "3.12.11"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,288 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fbeb3f1eb129d115",
"metadata": {
"collapsed": false
},
"source": [
"---\n",
"sidebar_label: AI/ML API\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "6051ba9cfc65a60a",
"metadata": {
"collapsed": false
},
"source": [
"# ChatAimlapi\n",
"\n",
"This page will help you get started with AI/ML API [chat models](/docs/concepts/chat_models.mdx). For detailed documentation of all ChatAimlapi features and configurations, head to the [API reference](https://docs.aimlapi.com/?utm_source=langchain&utm_medium=github&utm_campaign=integration).\n",
"\n",
"AI/ML API provides access to **300+ models** (Deepseek, Gemini, ChatGPT, etc.) via high-uptime and high-rate API."
]
},
{
"cell_type": "markdown",
"id": "512f94fa4bea2628",
"metadata": {
"collapsed": false
},
"source": [
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| ChatAimlapi | langchain-aimlapi | ✅ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-aimlapi?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-aimlapi?style=flat-square&label=%20) |"
]
},
{
"cell_type": "markdown",
"id": "7163684608502d37",
"metadata": {
"collapsed": false
},
"source": [
"### Model features\n",
"| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |\n",
"|:------------:|:-----------------:|:---------:|:-----------:|:-----------:|:-----------:|:---------------------:|:------------:|:-----------:|:--------:|\n",
"| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |\n"
]
},
{
"cell_type": "markdown",
"id": "bb9345d5b24a7741",
"metadata": {
"collapsed": false
},
"source": [
"## Setup\n",
"To access AI/ML API models, sign up at [aimlapi.com](https://aimlapi.com/app/?utm_source=langchain&utm_medium=github&utm_campaign=integration), generate an API key, and set the `AIMLAPI_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b26280519672f194",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:16:58.837623Z",
"start_time": "2025-08-07T07:16:55.346214Z"
}
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"AIMLAPI_API_KEY\" not in os.environ:\n",
" os.environ[\"AIMLAPI_API_KEY\"] = getpass.getpass(\"Enter your AI/ML API key: \")"
]
},
{
"cell_type": "markdown",
"id": "fa131229e62dfd47",
"metadata": {
"collapsed": false
},
"source": [
"### Installation\n",
"Install the `langchain-aimlapi` package:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3777dc00d768299e",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:17:11.195741Z",
"start_time": "2025-08-07T07:17:02.288142Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-aimlapi"
]
},
{
"cell_type": "markdown",
"id": "d168108b0c4f9d7",
"metadata": {
"collapsed": false
},
"source": [
"## Instantiation\n",
"Now we can instantiate the `ChatAimlapi` model and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f29131e65e47bd16",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:17:23.499746Z",
"start_time": "2025-08-07T07:17:11.196747Z"
}
},
"outputs": [],
"source": [
"from langchain_aimlapi import ChatAimlapi\n",
"\n",
"llm = ChatAimlapi(\n",
" model=\"meta-llama/Llama-3-70b-chat-hf\",\n",
" temperature=0.7,\n",
" max_tokens=512,\n",
" timeout=30,\n",
" max_retries=3,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "861b87289f8e146d",
"metadata": {
"collapsed": false
},
"source": [
"## Invocation\n",
"You can invoke the model with a list of messages:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "430b1cff2e6d77b4",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:17:30.586261Z",
"start_time": "2025-08-07T07:17:29.074409Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"messages = [\n",
" (\"system\", \"You are a helpful assistant that translates English to French.\"),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"\n",
"ai_msg = llm.invoke(messages)\n",
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "5463797524a19b2e",
"metadata": {
"collapsed": false
},
"source": [
"## Chaining\n",
"We can chain the model with a prompt template as follows:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "bf6defc12a0c5d78",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:17:36.368436Z",
"start_time": "2025-08-07T07:17:34.770581Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ich liebe das Programmieren.\n"
]
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"response = chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "fcf0bf10a872355c",
"metadata": {
"collapsed": false
},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatAimlapi features and configurations, visit the [API Reference](https://docs.aimlapi.com/?utm_source=langchain&utm_medium=github&utm_campaign=integration)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -19,7 +19,7 @@
"\n",
"This notebook provides a quick overview for getting started with Anthropic [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatAnthropic features and configurations head to the [API reference](https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html).\n",
"\n",
"Anthropic has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Anthropic docs](https://docs.anthropic.com/en/docs/models-overview).\n",
"Anthropic has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Anthropic docs](https://docs.anthropic.com/en/docs/about-claude/models/overview).\n",
"\n",
"\n",
":::info AWS Bedrock and Google VertexAI\n",
@@ -124,7 +124,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
@@ -132,7 +132,7 @@
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(\n",
" model=\"claude-3-5-sonnet-20240620\",\n",
" model=\"claude-3-5-sonnet-latest\",\n",
" temperature=0,\n",
" max_tokens=1024,\n",
" timeout=None,\n",
@@ -840,7 +840,7 @@
"source": [
"## Token-efficient tool use\n",
"\n",
"Anthropic supports a (beta) [token-efficient tool use](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/token-efficient-tool-use) feature. To use it, specify the relevant beta-headers when instantiating the model."
"Anthropic supports a (beta) [token-efficient tool use](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/token-efficient-tool-use) feature. To use it, specify the relevant beta-headers when instantiating the model."
]
},
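Concretely, that looks like the following sketch. The beta name `token-efficient-tools-2025-02-19` and the model choice are taken from Anthropic's docs and should be treated as assumptions to verify against the current API:

```python
from langchain_anthropic import ChatAnthropic


def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return "Sunny."


llm = ChatAnthropic(
    model="claude-3-7-sonnet-20250219",
    betas=["token-efficient-tools-2025-02-19"],  # enable token-efficient tool use
)
llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What's the weather in Paris?")
```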
{
@@ -970,8 +970,8 @@
"source": [
"### In tool results (agentic RAG)\n",
"\n",
":::info Requires ``langchain-anthropic>=0.3.17``\n",
"\n",
":::info\n",
"Requires ``langchain-anthropic>=0.3.17``\n",
":::\n",
"\n",
"Claude supports a [search_result](https://docs.anthropic.com/en/docs/build-with-claude/search-results) content block representing citable results from queries against a knowledge base or other custom source. These content blocks can be passed to claude both top-line (as in the above example) and within a tool result. This allows Claude to cite elements of its response using the result of a tool call.\n",
@@ -998,8 +998,6 @@
" ]\n",
"```\n",
"\n",
"We also need to specify the `search-results-2025-06-09` beta when instantiating ChatAnthropic. You can see an end-to-end example below.\n",
"\n",
"<details>\n",
"<summary>End to end example with LangGraph</summary>\n",
"\n",
@@ -1200,7 +1198,7 @@
"source": [
"## Built-in tools\n",
"\n",
"Anthropic supports a variety of [built-in tools](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/text-editor-tool), which can be bound to the model in the [usual way](/docs/how_to/tool_calling/). Claude will generate tool calls adhering to its internal schema for the tool:"
"Anthropic supports a variety of [built-in tools](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/text-editor-tool), which can be bound to the model in the [usual way](/docs/how_to/tool_calling/). Claude will generate tool calls adhering to its internal schema for the tool:"
]
},
{
@@ -1210,7 +1208,7 @@
"source": [
"### Web search\n",
"\n",
"Claude can use a [web search tool](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/web-search-tool) to run searches and ground its responses with citations."
"Claude can use a [web search tool](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-search-tool) to run searches and ground its responses with citations."
]
},
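The binding step elided in the snippet below looks roughly like this (a sketch; `max_uses` is optional):

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")

# Server-side web search tool, specified by type and name
tool = {"type": "web_search_20250305", "name": "web_search", "max_uses": 3}
llm_with_tools = llm.bind_tools([tool])
```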
{
@@ -1240,6 +1238,110 @@
"response = llm_with_tools.invoke(\"How do I update a web app to TypeScript 5.5?\")"
]
},
{
"cell_type": "markdown",
"id": "kloc4rvd1w",
"metadata": {},
"source": [
"#### Web search + structured output\n",
"\n",
"When combining web search tools with structured output, it's important to **bind the tools first and then apply structured output**:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "rjjergy6ef",
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"\n",
"# Define structured output schema\n",
"class ResearchResult(BaseModel):\n",
" \"\"\"Structured research result from web search.\"\"\"\n",
"\n",
" topic: str = Field(description=\"The research topic\")\n",
" summary: str = Field(description=\"Summary of key findings\")\n",
" key_points: list[str] = Field(description=\"List of important points discovered\")\n",
"\n",
"\n",
"# Configure web search tool\n",
"websearch_tools = [\n",
" {\n",
" \"type\": \"web_search_20250305\",\n",
" \"name\": \"web_search\",\n",
" \"max_uses\": 10,\n",
" }\n",
"]\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-5-sonnet-20241022\")\n",
"\n",
"# Correct order: bind tools first, then structured output\n",
"llm_with_search = llm.bind_tools(websearch_tools)\n",
"research_llm = llm_with_search.with_structured_output(ResearchResult)\n",
"\n",
"# Now you can use both web search and get structured output\n",
"result = research_llm.invoke(\"Research the latest developments in quantum computing\")\n",
"print(f\"Topic: {result.topic}\")\n",
"print(f\"Summary: {result.summary}\")\n",
"print(f\"Key Points: {result.key_points}\")"
]
},
{
"cell_type": "markdown",
"id": "c580c20a",
"metadata": {},
"source": [
"### Web fetching\n",
"\n",
"Claude can use a [web fetching tool](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-fetch-tool) to run searches and ground its responses with citations."
]
},
{
"cell_type": "markdown",
"id": "5cf6ad08",
"metadata": {},
"source": [
":::info\n",
"Web search tool is supported since ``langchain-anthropic>=0.3.20``\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4804be1",
"metadata": {},
"outputs": [],
"source": [
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(\n",
" model=\"claude-3-5-haiku-latest\",\n",
" betas=[\"web-fetch-2025-09-10\"], # Enable web fetch beta\n",
")\n",
"\n",
"tool = {\"type\": \"web_fetch_20250910\", \"name\": \"web_fetch\", \"max_uses\": 3}\n",
"llm_with_tools = llm.bind_tools([tool])\n",
"\n",
"response = llm_with_tools.invoke(\n",
" \"Please analyze the content at https://example.com/article\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "088c41d0",
"metadata": {},
"source": [
":::warning\n",
"Note: you must add the `'web-fetch-2025-09-10'` beta header to use this tool.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "1478cdc6-2e52-4870-80f9-b4ddf88f2db2",
@@ -1249,14 +1351,14 @@
"\n",
"Claude can use a [code execution tool](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/code-execution-tool) to execute Python code in a sandboxed environment.\n",
"\n",
":::info Code execution is supported since ``langchain-anthropic>=0.3.14``\n",
"\n",
":::info\n",
"Code execution is supported since ``langchain-anthropic>=0.3.14``\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "2ce13632-a2da-439f-a429-f66481501630",
"metadata": {},
"outputs": [],
@@ -1265,7 +1367,7 @@
"\n",
"llm = ChatAnthropic(\n",
" model=\"claude-sonnet-4-20250514\",\n",
" betas=[\"code-execution-2025-05-22\"],\n",
" betas=[\"code-execution-2025-05-22\"], # Enable code execution beta\n",
")\n",
"\n",
"tool = {\"type\": \"code_execution_20250522\", \"name\": \"code_execution\"}\n",
@@ -1276,6 +1378,16 @@
")"
]
},
{
"cell_type": "markdown",
"id": "a6b5e15a",
"metadata": {},
"source": [
":::warning\n",
"Note: you must add the `'code_execution_20250522'` beta header to use this tool.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "24076f91-3a3d-4e53-9618-429888197061",
@@ -1354,14 +1466,14 @@
"\n",
"Claude can use a [MCP connector tool](https://docs.anthropic.com/en/docs/agents-and-tools/mcp-connector) for model-generated calls to remote MCP servers.\n",
"\n",
":::info Remote MCP is supported since ``langchain-anthropic>=0.3.14``\n",
"\n",
":::info\n",
"Remote MCP is supported since ``langchain-anthropic>=0.3.14``\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "22fc4a89-e6d8-4615-96cb-2e117349aebf",
"metadata": {},
"outputs": [],
@@ -1373,17 +1485,17 @@
" \"type\": \"url\",\n",
" \"url\": \"https://mcp.deepwiki.com/mcp\",\n",
" \"name\": \"deepwiki\",\n",
" \"tool_configuration\": { # optional configuration\n",
" \"tool_configuration\": { # Optional configuration\n",
" \"enabled\": True,\n",
" \"allowed_tools\": [\"ask_question\"],\n",
" },\n",
" \"authorization_token\": \"PLACEHOLDER\", # optional authorization\n",
" \"authorization_token\": \"PLACEHOLDER\", # Optional authorization\n",
" }\n",
"]\n",
"\n",
"llm = ChatAnthropic(\n",
" model=\"claude-sonnet-4-20250514\",\n",
" betas=[\"mcp-client-2025-04-04\"],\n",
" betas=[\"mcp-client-2025-04-04\"], # Enable MCP beta\n",
" mcp_servers=mcp_servers,\n",
")\n",
"\n",
@@ -1393,6 +1505,16 @@
")"
]
},
{
"cell_type": "markdown",
"id": "0d6d7197",
"metadata": {},
"source": [
":::warning\n",
"Note: you must add the `'mcp-client-2025-04-04'` beta header to use this tool.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "2fd5d545-a40d-42b1-ad0c-0a79e2536c9b",
@@ -1400,12 +1522,12 @@
"source": [
"### Text editor\n",
"\n",
"The text editor tool can be used to view and modify text files. See docs [here](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/text-editor-tool) for details."
"The text editor tool can be used to view and modify text files. See docs [here](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/text-editor-tool) for details."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "30a0af36-2327-4b1d-9ba5-e47cb72db0be",
"metadata": {},
"outputs": [
@@ -1441,7 +1563,7 @@
"response = llm_with_tools.invoke(\n",
" \"There's a syntax error in my primes.py file. Can you help me fix it?\"\n",
")\n",
"print(response.text())\n",
"print(response.text)\n",
"response.tool_calls"
]
},

View File

@@ -129,7 +129,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
@@ -137,7 +137,7 @@
"from langchain_aws import ChatBedrockConverse\n",
"\n",
"llm = ChatBedrockConverse(\n",
" model_id=\"anthropic.claude-3-5-sonnet-20240620-v1:0\",\n",
" model_id=\"anthropic.claude-3-5-sonnet-latest-v1:0\",\n",
" # region_name=...,\n",
" # aws_access_key_id=...,\n",
" # aws_secret_access_key=...,\n",
@@ -243,12 +243,12 @@
"id": "0ef05abb-9c04-4dc3-995e-f857779644d5",
"metadata": {},
"source": [
"You can filter to text using the [.text()](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.text) method on the output:"
"You can filter to text using the [.text](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.text) property on the output:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "2a4e743f-ea7d-4e5a-9b12-f9992362de8b",
"metadata": {},
"outputs": [
@@ -262,7 +262,7 @@
],
"source": [
"for chunk in llm.stream(messages):\n",
" print(chunk.text(), end=\"|\")"
" print(chunk.text, end=\"|\")"
]
},
{

View File

@@ -261,7 +261,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"colab": {
@@ -286,7 +286,7 @@
],
"source": [
"async for token in llm.astream(\"Hello, please explain how antibiotics work\"):\n",
" print(token.text(), end=\"\")"
" print(token.text, end=\"\")"
]
},
{

View File

@@ -17,7 +17,7 @@
"source": [
"# ChatOCIGenAI\n",
"\n",
"This notebook provides a quick overview for getting started with OCIGenAI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatOCIGenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html).\n",
"This notebook provides a quick overview for getting started with OCIGenAI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatOCIGenAI features and configurations head to the [API reference](https://pypi.org/project/langchain-oci/).\n",
"\n",
"Oracle Cloud Infrastructure (OCI) Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) that cover a wide range of use cases, and which is available through a single API.\n",
"Using the OCI Generative AI service you can access ready-to-use pretrained models, or create and host your own fine-tuned custom models based on your own data on dedicated AI clusters. Detailed documentation of the service and API is available __[here](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm)__ and __[here](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai/20231130/)__.\n",
@@ -26,9 +26,9 @@
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/oci_generative_ai) |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [ChatOCIGenAI](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ |\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/oci_generative_ai) |\n",
"| :--- |:---------------------------------------------------------------------------------| :---: | :---: | :---: |\n",
"| [ChatOCIGenAI](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html) | [langchain-oci](https://github.com/oracle/langchain-oracle) | ❌ | ❌ | ❌ |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | [JSON mode](/docs/how_to/structured_output/#advanced-specifying-the-method-for-structuring-outputs) | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
@@ -37,7 +37,7 @@
"\n",
"## Setup\n",
"\n",
"To access OCIGenAI models you'll need to install the `oci` and `langchain-community` packages.\n",
"To access OCIGenAI models you'll need to install the `oci` and `langchain-oci` packages.\n",
"\n",
"### Credentials\n",
"\n",
@@ -53,7 +53,7 @@
"source": [
"### Installation\n",
"\n",
"The LangChain OCIGenAI integration lives in the `langchain-community` package and you will also need to install the `oci` package:"
"The LangChain OCIGenAI integration lives in the `langchain-oci` package and you will also need to install the `oci` package:"
]
},
{
@@ -63,7 +63,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community oci"
"%pip install -qU langchain-oci"
]
},
{
@@ -83,14 +83,16 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI\n",
"from langchain_core.messages import AIMessage, HumanMessage, SystemMessage\n",
"from langchain_oci.chat_models import ChatOCIGenAI\n",
"\n",
"chat = ChatOCIGenAI(\n",
" model_id=\"cohere.command-r-16k\",\n",
" model_id=\"cohere.command-r-plus-08-2024\",\n",
" service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
" compartment_id=\"MY_OCID\",\n",
" model_kwargs={\"temperature\": 0.7, \"max_tokens\": 500},\n",
" compartment_id=\"compartment_id\",\n",
" model_kwargs={\"temperature\": 0, \"max_tokens\": 500},\n",
" auth_type=\"SECURITY_TOKEN\",\n",
" auth_profile=\"auth_profile_name\",\n",
" auth_file_location=\"auth_file_location\",\n",
")"
]
},
@@ -110,14 +112,7 @@
"tags": []
},
"outputs": [],
"source": [
"messages = [\n",
" SystemMessage(content=\"your are an AI assistant.\"),\n",
" AIMessage(content=\"Hi there human!\"),\n",
" HumanMessage(content=\"tell me a joke.\"),\n",
"]\n",
"response = chat.invoke(messages)"
]
"source": "response = chat.invoke(\"Tell me one fact about Earth\")"
},
{
"cell_type": "code",
@@ -146,13 +141,22 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_oci.chat_models import ChatOCIGenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = prompt | chat\n",
"\n",
"response = chain.invoke({\"topic\": \"dogs\"})\n",
"print(response.content)"
"llm = ChatOCIGenAI(\n",
" model_id=\"cohere.command-r-plus-08-2024\",\n",
" service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
" compartment_id=\"compartment_id\",\n",
" model_kwargs={\"temperature\": 0, \"max_tokens\": 500},\n",
" auth_type=\"SECURITY_TOKEN\",\n",
" auth_profile=\"auth_profile_name\",\n",
" auth_file_location=\"auth_file_location\",\n",
")\n",
"prompt = PromptTemplate(input_variables=[\"query\"], template=\"{query}\")\n",
"llm_chain = prompt | llm\n",
"response = llm_chain.invoke(\"what is the capital of france?\")\n",
"print(response)"
]
},
{
@@ -162,7 +166,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOCIGenAI features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html"
"For detailed documentation of all ChatOCIGenAI features and configurations head to the API reference: https://pypi.org/project/langchain-oci/"
]
}
],

View File

@@ -23,16 +23,12 @@
"\n",
"It optimizes setup and configuration details, including GPU usage.\n",
"\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.com/search).\n",
"\n",
":::warning\n",
"This page is for the new v1 `ChatOllama` class with standard content block output. If you are looking for the legacy v0 `Ollama` class, see the [v0.3 documentation](https://python.langchain.com/v0.3/docs/integrations/chat/ollama/).\n",
":::\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/jmorganca/ollama#model-library).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/ollama/) | Package downloads | Package latest |\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/ollama) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOllama](https://python.langchain.com/api_reference/ollama/chat_models/langchain_ollama.chat_models.ChatOllama.html#chatollama) | [langchain-ollama](https://python.langchain.com/api_reference/ollama/index.html) | ✅ | ❌ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-ollama?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-ollama?style=flat-square&label=%20) |\n",
"\n",
@@ -56,7 +52,7 @@
">\n",
"> On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`\n",
"\n",
"* Specify the exact version of the model of interest as such `ollama pull gpt-oss:20b`\n",
"* Specify the exact version of the model of interest as such `ollama pull gpt-oss:20b` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
"* To view all pulled models, use `ollama list`\n",
"* To chat directly with a model from the command line, use `ollama run <name-of-model>`\n",
"* View the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/README.md) for more commands. You can run `ollama help` in the terminal to see available commands.\n"
@@ -107,7 +103,7 @@
"metadata": {},
"source": [
":::warning\n",
"Make sure you're using the latest Ollama client version!\n",
"Make sure you're using the latest Ollama version!\n",
":::\n",
"\n",
"Update by running:"
@@ -135,16 +131,15 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 9,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_ollama.v1 import ChatOllama\n",
"from langchain_ollama import ChatOllama\n",
"\n",
"llm = ChatOllama(\n",
" model=\"gpt-oss:20b\",\n",
" validate_model_on_init=True,\n",
" model=\"llama3.1\",\n",
" temperature=0,\n",
" # other params...\n",
")"
@@ -167,56 +162,46 @@
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"AIMessage(type='ai', name=None, id='lc_run--5521db11-a5eb-4e46-956c-1455151cdaa3-0', lc_version='v1', content=[{'type': 'text', 'text': 'The translation of \"I love programming\" to French is:\\n\\n\"Je aime le programmation\"\\n\\nHowever, a more common and idiomatic way to express this in French would be:\\n\\n\"J\\'aime programmer\"\\n\\nThis phrase uses the verb \"aimer\" (to love) in the present tense, which is more suitable for expressing a general feeling or preference.'}], usage_metadata={'input_tokens': 34, 'output_tokens': 73, 'total_tokens': 107}, response_metadata={'model_name': 'llama3.2', 'created_at': '2025-08-08T23:07:44.439483Z', 'done': True, 'done_reason': 'stop', 'total_duration': 1410566833, 'load_duration': 28419542, 'prompt_eval_count': 34, 'prompt_eval_duration': 141642125, 'eval_count': 73, 'eval_duration': 1240075000}, parsed=None)\n",
"\n",
"Content:\n",
"The translation of \"I love programming\" to French is:\n",
"\n",
"\"Je aime le programmation\"\n",
"\n",
"However, a more common and idiomatic way to express this in French would be:\n",
"\n",
"\"J'aime programmer\"\n",
"\n",
"This phrase uses the verb \"aimer\" (to love) in the present tense, which is more suitable for expressing a general feeling or preference.\n"
]
"data": {
"text/plain": [
"AIMessage(content='The translation of \"I love programming\" in French is:\\n\\n\"J\\'adore le programmation.\"', additional_kwargs={}, response_metadata={'model': 'llama3.1', 'created_at': '2025-06-25T18:43:00.483666Z', 'done': True, 'done_reason': 'stop', 'total_duration': 619971208, 'load_duration': 27793125, 'prompt_eval_count': 35, 'prompt_eval_duration': 36354583, 'eval_count': 22, 'eval_duration': 555182667, 'model_name': 'llama3.1'}, id='run--348bb5ef-9dd9-4271-bc7e-a9ddb54c28c1-0', usage_metadata={'input_tokens': 35, 'output_tokens': 22, 'total_tokens': 57})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ai_msg = llm.invoke(\"Translate 'I love programming' to French.\")\n",
"print(f\"{ai_msg}\\n\")\n",
"print(f\"Content:\\n{ai_msg.text}\")"
]
},
{
"cell_type": "markdown",
"id": "ede35e47",
"metadata": {},
"source": [
"## Streaming"
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "77474829",
"execution_count": 11,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hi| there|!| I|'m| just| a| chat|bot|,| so| I| don|'t| have| feelings|,| but| I|'m| here| and| ready| to| help| you| with| anything| you| need|!| How| can| I| assist| you| today|?| 😊|"
"The translation of \"I love programming\" in French is:\n",
"\n",
"\"J'adore le programmation.\"\n"
]
}
],
"source": [
"for chunk in llm.stream(\"How are you doing?\"):\n",
" if chunk.text:\n",
" print(chunk.text, end=\"|\", flush=True)"
"print(ai_msg.content)"
]
},
{
@@ -238,10 +223,10 @@
{
"data": {
"text/plain": [
"'Ich liebe Programmierung.'"
"AIMessage(content='\"Programmieren ist meine Leidenschaft.\"\\n\\n(I translated \"programming\" to the German word \"Programmieren\", and added \"ist meine Leidenschaft\" which means \"is my passion\")', additional_kwargs={}, response_metadata={'model': 'llama3.1', 'created_at': '2025-06-25T18:43:29.350032Z', 'done': True, 'done_reason': 'stop', 'total_duration': 1194744459, 'load_duration': 26982500, 'prompt_eval_count': 30, 'prompt_eval_duration': 117043458, 'eval_count': 41, 'eval_duration': 1049892167, 'model_name': 'llama3.1'}, id='run--efc6436e-2346-43d9-8118-3c20b3cdf0d0-0', usage_metadata={'input_tokens': 30, 'output_tokens': 41, 'total_tokens': 71})"
]
},
"execution_count": 19,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -260,15 +245,13 @@
")\n",
"\n",
"chain = prompt | llm\n",
"result = chain.invoke(\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")\n",
"\n",
"result.text"
")"
]
},
{
@@ -289,7 +272,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": null,
"id": "f767015f",
"metadata": {},
"outputs": [
@@ -297,16 +280,16 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[{'type': 'tool_call', 'id': 'f365489e-1dc4-4d60-aaff-e56290ae4f99', 'name': 'validate_user', 'args': {'addresses': ['123 Fake St in Boston MA', '234 Pretend Boulevard in Houston TX'], 'user_id': 123}}]\n"
"[{'name': 'validate_user', 'args': {'addresses': ['123 Fake St, Boston, MA', '234 Pretend Boulevard, Houston, TX'], 'user_id': '123'}, 'id': 'aef33a32-a34b-4b37-b054-e0d85584772f', 'type': 'tool_call'}]\n"
]
}
],
"source": [
"from typing import List\n",
"\n",
"from langchain_core.v1.messages import AIMessage\n",
"from langchain_core.messages import AIMessage\n",
"from langchain_core.tools import tool\n",
"from langchain_ollama.v1 import ChatOllama\n",
"from langchain_ollama import ChatOllama\n",
"\n",
"\n",
"@tool\n",
@@ -336,50 +319,6 @@
" print(result.tool_calls)"
]
},
{
"cell_type": "markdown",
"id": "4321b6a8",
"metadata": {},
"source": [
"## Structured output"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "20f8ae70",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Name: Alice, Age: 28, Job: Software Engineer\n"
]
}
],
"source": [
"from langchain_ollama.v1 import ChatOllama\n",
"from pydantic import BaseModel, Field\n",
"\n",
"llm = ChatOllama(model=\"llama3.2\", validate_model_on_init=True, temperature=0)\n",
"\n",
"\n",
"class Person(BaseModel):\n",
" \"\"\"Information about a person.\"\"\"\n",
"\n",
" name: str = Field(description=\"The person's full name\")\n",
" age: int = Field(description=\"The person's age in years\")\n",
" occupation: str = Field(description=\"The person's job or profession\")\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Person)\n",
"response: Person = structured_llm.invoke(\n",
" \"Tell me about a fictional software engineer named Alice who is 28 years old.\"\n",
")\n",
"print(f\"Name: {response.name}, Age: {response.age}, Job: {response.occupation}\")"
]
},
{
"cell_type": "markdown",
"id": "4c5e0197",
@@ -387,9 +326,9 @@
"source": [
"## Multi-modal\n",
"\n",
"Ollama has limited support for multi-modal LLMs, such as [gemma3](https://ollama.com/library/gemma3).\n",
"Ollama has limited support for multi-modal LLMs, such as [gemma3](https://ollama.com/library/gemma3)\n",
"\n",
"### Image input"
"Be sure to update Ollama so that you have the most recent version to support multi-modal."
]
},
{
@@ -472,15 +411,15 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Based on the image, the dollar-based gross retention rate is **90%**.\n"
"90%\n"
]
}
],
"source": [
"from langchain_core.v1.messages import HumanMessage\n",
"from langchain_ollama.v1 import ChatOllama\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_ollama import ChatOllama\n",
"\n",
"llm = ChatOllama(model=\"gemma3:4b\", validate_model_on_init=True, temperature=0)\n",
"llm = ChatOllama(model=\"bakllava\", temperature=0)\n",
"\n",
"\n",
"def prompt_func(data):\n",
@@ -488,9 +427,8 @@
" image = data[\"image\"]\n",
"\n",
" image_part = {\n",
" \"type\": \"image\",\n",
" \"base64\": f\"data:image/jpeg;base64,{image}\",\n",
" \"mime_type\": \"image/jpeg\",\n",
" \"type\": \"image_url\",\n",
" \"image_url\": f\"data:image/jpeg;base64,{image}\",\n",
" }\n",
"\n",
" content_parts = []\n",
@@ -500,7 +438,7 @@
" content_parts.append(image_part)\n",
" content_parts.append(text_part)\n",
"\n",
" return [HumanMessage(content_parts)]\n",
" return [HumanMessage(content=content_parts)]\n",
"\n",
"\n",
"from langchain_core.output_parsers import StrOutputParser\n",
@@ -519,9 +457,11 @@
"id": "fb6a331f-1507-411f-89e5-c4d598154f3c",
"metadata": {},
"source": [
"## Reasoning models\n",
"## Reasoning models and custom message roles\n",
"\n",
"Many models support outputting their reasoning process in addition to the final answer. This is useful for debugging and understanding how the model arrived at its conclusion. This train of thought reasoning is available in models such as `gpt-oss`, `qwen3:8b`, and `deepseek-r1`. To enable reasoning output, set the `reasoning` parameter to `True` either when instantiating the model or during invocation."
"Some models, such as IBM's [Granite 3.2](https://ollama.com/library/granite3.2), support custom message roles to enable thinking processes.\n",
"\n",
"To access Granite 3.2's thinking features, pass a message with a `\"control\"` role with content set to `\"thinking\"`. Because `\"control\"` is a non-standard message role, we can use a [ChatMessage](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.chat.ChatMessage.html) object to implement it:"
]
},
{
@@ -534,25 +474,30 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Response including reasoning: [{'type': 'reasoning', 'reasoning': \"Okay, so I need to figure out what 3^3 is. Let me start by recalling what exponents mean. From what I remember, when you have a number raised to a power, like a^b, it means you multiply the number by itself b times. So, for example, 2^3 would be 2 multiplied by itself three times: 2 × 2 × 2. Let me check if that's right. Yeah, I think that's correct. So applying that to 3^3, it should be 3 multiplied by itself three times.\\n\\nWait, let me make sure I'm not confusing the base and the exponent. The base is the number being multiplied, and the exponent is how many times it's multiplied. So in 3^3, the base is 3 and the exponent is 3. That means I need to multiply 3 by itself three times. Let me write that out step by step.\\n\\nFirst, multiply the first two 3s: 3 × 3. What's 3 times 3? That's 9. Okay, so the first multiplication gives me 9. Now, I need to multiply that result by the third 3. So 9 × 3. Let me calculate that. 9 times 3 is... 27. So putting it all together, 3 × 3 × 3 equals 27. \\n\\nWait, let me verify that again. Maybe I should do it in a different way to make sure I didn't make a mistake. Let's break it down. 3^3 is the same as 3 × 3 × 3. Let me compute 3 × 3 first, which is 9, and then multiply that by 3. 9 × 3 is indeed 27. Hmm, that seems right. \\n\\nAlternatively, I can think of exponents as repeated multiplication. So 3^1 is 3, 3^2 is 3 × 3 = 9, and 3^3 is 3 × 3 × 3 = 27. Yeah, that progression makes sense. Each time the exponent increases by 1, you multiply by the base again. So starting from 3^1 = 3, then 3^2 is 3 × 3 = 9, then 3^3 is 9 × 3 = 27. \\n\\nIs there another way to check this? Maybe using exponent rules. For example, if I know that 3^2 is 9, then multiplying by another 3 would give me 3^3. Since 9 × 3 is 27, that confirms it again. \\n\\nAlternatively, maybe I can use logarithms or something else, but that might be overcomplicating. Since exponents are straightforward multiplication, I think my initial calculation is correct. \\n\\nWait, just to be thorough, maybe I can use a calculator to verify. Let me imagine pressing 3, then the exponent key, then 3. If I do that, it should give me 27. Yeah, that's what I remember. So all methods point to 27. \\n\\nI think I've checked it multiple ways: breaking down the multiplication step by step, using the exponent progression, and even considering a calculator verification. All of them lead to the same answer. Therefore, I'm confident that 3^3 equals 27.\\n\"}, {'type': 'text', 'text': 'To determine the value of $3^3$, we start by understanding what an exponent represents. The expression $a^b$ means multiplying the base $a$ by itself $b$ times. \\n\\n### Step-by-Step Calculation:\\n1. **Identify the base and exponent**: \\n In $3^3$, the base is **3**, and the exponent is **3**. This means we multiply 3 by itself three times.\\n\\n2. **Perform the multiplication**: \\n - First, multiply the first two 3s: \\n $3 \\\\times 3 = 9$ \\n - Next, multiply the result by the third 3: \\n $9 \\\\times 3 = 27$\\n\\n3. **Verify the result**: \\n - $3^1 = 3$ \\n - $3^2 = 3 \\\\times 3 = 9$ \\n - $3^3 = 3 \\\\times 3 \\\\times 3 = 27$ \\n This progression confirms the calculation.\\n\\n### Final Answer:\\n$$\\n3^3 = \\\\boxed{27}\\n$$'}]\n",
"Response without reasoning: [{'type': 'text', 'text': \"Sure! Let's break down what **3³** means and how to calculate it step by step.\\n\\n---\\n\\n### Step 1: Understand the notation\\nThe expression **3³** means **3 multiplied by itself three times**. The small number (3) is called the **exponent**, and it tells us how many times the base number (3) is used as a factor.\\n\\nSo:\\n$$\\n3^3 = 3 \\\\times 3 \\\\times 3\\n$$\\n\\n---\\n\\n### Step 2: Perform the multiplication step by step\\n\\n1. Multiply the first two 3s:\\n $$\\n 3 \\\\times 3 = 9\\n $$\\n\\n2. Now multiply the result by the third 3:\\n $$\\n 9 \\\\times 3 = 27\\n $$\\n\\n---\\n\\n### Step 3: Final Answer\\n\\n$$\\n3^3 = 27\\n$$\\n\\n---\\n\\n### Summary\\n- **3³** means **3 × 3 × 3**\\n- **3 × 3 = 9**\\n- **9 × 3 = 27**\\n- So, **3³ = 27**\\n\\nLet me know if you'd like to explore exponents further!\"}]\n"
"Here is my thought process:\n",
"The user is asking for the value of 3 raised to the power of 3, which is a basic exponentiation operation.\n",
"\n",
"Here is my response:\n",
"\n",
"3^3 (read as \"3 to the power of 3\") equals 27. \n",
"\n",
"This calculation is performed by multiplying 3 by itself three times: 3*3*3 = 27.\n"
]
}
],
"source": [
"from langchain_ollama.v1 import ChatOllama\n",
"from langchain_core.messages import ChatMessage, HumanMessage\n",
"from langchain_ollama import ChatOllama\n",
"\n",
"# All outputs from `llm` will include reasoning unless overridden during invocation\n",
"llm = ChatOllama(model=\"qwen3:8b\", validate_model_on_init=True, reasoning=True)\n",
"llm = ChatOllama(model=\"granite3.2:8b\")\n",
"\n",
"response_a = llm.invoke(\"What is 3^3? Explain your reasoning step by step.\")\n",
"print(f\"Response including reasoning: {response_a.content}\")\n",
"messages = [\n",
" ChatMessage(role=\"control\", content=\"thinking\"),\n",
" HumanMessage(\"What is 3^3?\"),\n",
"]\n",
"\n",
"# Test override; note no ReasoningContentBlock in the response\n",
"response_b = llm.invoke(\n",
" \"What is 3^3? Explain your reasoning step by step.\", reasoning=False\n",
")\n",
"print(f\"Response without reasoning: {response_b.content}\")"
"response = llm.invoke(messages)\n",
"print(response.content)"
]
},
{
@@ -560,7 +505,7 @@
"id": "6271d032-da40-44d4-9b52-58370e164be3",
"metadata": {},
"source": [
"Note that the model exposes its thought process as a `ReasoningContentBlock` addition to its final response."
"Note that the model exposes its thought process in addition to its final response."
]
},
{
@@ -576,7 +521,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "langchain",
"language": "python",
"name": "python3"
},
@@ -590,7 +535,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.12.11"
}
},
"nbformat": 4,


@@ -814,7 +814,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "1f758726-33ef-4c04-8a54-49adb783bbb3",
"metadata": {},
"outputs": [
@@ -860,7 +860,7 @@
"llm_with_tools = llm.bind_tools([tool])\n",
"\n",
"response = llm_with_tools.invoke(\"What is deep research by OpenAI?\")\n",
"print(response.text())"
"print(response.text)"
]
},
{
@@ -1151,7 +1151,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"id": "073f6010-6b0e-4db6-b2d3-7427c8dec95b",
"metadata": {},
"outputs": [
@@ -1167,7 +1167,7 @@
}
],
"source": [
"response_2.text()"
"response_2.text"
]
},
{
@@ -1198,7 +1198,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"id": "b6da5bd6-a44a-4c64-970b-30da26b003d6",
"metadata": {},
"outputs": [
@@ -1214,7 +1214,7 @@
}
],
"source": [
"response_2.text()"
"response_2.text"
]
},
{
@@ -1404,7 +1404,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "51d3e4d3-ea78-426c-9205-aecb0937fca7",
"metadata": {},
"outputs": [
@@ -1428,13 +1428,13 @@
"messages = [{\"role\": \"user\", \"content\": first_query}]\n",
"\n",
"response = llm_with_tools.invoke(messages)\n",
"response_text = response.text()\n",
"response_text = response.text\n",
"print(f\"{response_text[:100]}... {response_text[-100:]}\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "b248bedf-2050-4c17-a90e-3a26eeb1b055",
"metadata": {},
"outputs": [
@@ -1460,7 +1460,7 @@
" ]\n",
")\n",
"second_response = llm_with_tools.invoke(messages)\n",
"print(second_response.text())"
"print(second_response.text)"
]
},
{
@@ -1482,7 +1482,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "009e541a-b372-410e-b9dd-608a8052ce09",
"metadata": {},
"outputs": [
@@ -1502,12 +1502,12 @@
" output_version=\"responses/v1\",\n",
")\n",
"response = llm.invoke(\"Hi, I'm Bob.\")\n",
"print(response.text())"
"print(response.text)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "393a443a-4c5f-4a07-bc0e-c76e529b35e3",
"metadata": {},
"outputs": [
@@ -1524,7 +1524,7 @@
" \"What is my name?\",\n",
" previous_response_id=response.response_metadata[\"id\"],\n",
")\n",
"print(second_response.text())"
"print(second_response.text)"
]
},
{
@@ -1589,7 +1589,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "8d322f3a-0732-45ab-ac95-dfd4596e0d85",
"metadata": {},
"outputs": [
@@ -1616,7 +1616,7 @@
"response = llm.invoke(\"What is 3^3?\")\n",
"\n",
"# Output\n",
"response.text()"
"response.text"
]
},
{


@@ -0,0 +1,408 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Qwen\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatQwen\n",
"\n",
"This will help you get started with Qwen [chat models](../../concepts/chat_models.mdx). For detailed documentation of all ChatQwen features and configurations head to the [API reference](https://pypi.org/project/langchain-qwq/).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [ChatQwen](https://pypi.org/project/langchain-qwq/) | [langchain-qwq](https://pypi.org/project/langchain-qwq/) | ❌ | beta | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-qwq?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-qwq?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ |✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access Qwen models you'll need to create an Alibaba Cloud account, get an API key, and install the `langchain-qwq` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [Alibaba's API Key page](https://account.alibabacloud.com/login/login.htm?oauth_callback=https%3A%2F%2Fbailian.console.alibabacloud.com%2F%3FapiKey%3D1&lang=en#/api-key) to sign up to Alibaba Cloud and generate an API key. Once you've done this set the `DASHSCOPE_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"DASHSCOPE_API_KEY\"):\n",
" os.environ[\"DASHSCOPE_API_KEY\"] = getpass.getpass(\"Enter your Dashscope API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain QwQ integration lives in the `langchain-qwq` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-qwq"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello! How can I assist you today? 😊', additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-flash'}, id='run--62798a20-d425-48ab-91fc-8e62e37c6084-0', usage_metadata={'input_tokens': 9, 'output_tokens': 11, 'total_tokens': 20, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_qwq import ChatQwen\n",
"\n",
"llm = ChatQwen(model=\"qwen-flash\")\n",
"response = llm.invoke(\"Hello\")\n",
"\n",
"response"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-flash'}, id='run--33f905e0-880a-4a67-ab83-313fd7a06369-0', usage_metadata={'input_tokens': 32, 'output_tokens': 8, 'total_tokens': 40, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French.\"\n",
" \"Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
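{
"cell_type": "markdown",
"id": "qwen-streaming-note",
"metadata": {},
"source": [
"`ChatQwen` also supports token-level streaming through LangChain's standard `.stream` interface (listed in the feature table above). A minimal sketch, reusing the `llm` and `messages` defined above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "qwen-streaming-example",
"metadata": {},
"outputs": [],
"source": [
"# Each chunk is a message chunk; print its text as it arrives\n",
"for chunk in llm.stream(messages):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},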
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](../../how_to/sequence.ipynb) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmierung.', additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-flash'}, id='run--9d8bab6d-d6fe-4b9f-95f2-c30c3ff0a50e-0', usage_metadata={'input_tokens': 28, 'output_tokens': 5, 'total_tokens': 33, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates\"\n",
" \"{input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8d1b3ef3",
"metadata": {},
"source": [
"## Tool Calling\n",
"ChatQwen supports tool calling API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool."
]
},
{
"cell_type": "markdown",
"id": "6db1a355",
"metadata": {},
"source": [
"### Use with `bind_tools`"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "15fb6a6d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='' additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_f0c2cc49307f480db78a45', 'function': {'arguments': '{\"first_int\": 5, \"second_int\": 42}', 'name': 'multiply'}, 'type': 'function'}]} response_metadata={'finish_reason': 'tool_calls', 'model_name': 'qwen-flash'} id='run--27c5aafb-9710-42f5-ab78-5a2ad1d9050e-0' tool_calls=[{'name': 'multiply', 'args': {'first_int': 5, 'second_int': 42}, 'id': 'call_f0c2cc49307f480db78a45', 'type': 'tool_call'}] usage_metadata={'input_tokens': 166, 'output_tokens': 27, 'total_tokens': 193, 'input_token_details': {}, 'output_token_details': {}}\n"
]
}
],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"from langchain_qwq import ChatQwen\n",
"\n",
"\n",
"@tool\n",
"def multiply(first_int: int, second_int: int) -> int:\n",
" \"\"\"Multiply two integers together.\"\"\"\n",
" return first_int * second_int\n",
"\n",
"\n",
"llm = ChatQwen(model=\"qwen-flash\")\n",
"\n",
"llm_with_tools = llm.bind_tools([multiply])\n",
"\n",
"msg = llm_with_tools.invoke(\"What's 5 times forty two\")\n",
"\n",
"print(msg)"
]
},
{
"cell_type": "markdown",
"id": "cc8ffd89-c474-45a7-a123-e0b1d362f54f",
"metadata": {},
"source": [
"### vision Support"
]
},
{
"cell_type": "markdown",
"id": "3e8a7d46-d1f6-4ae8-835a-266ca47e4daf",
"metadata": {},
"source": [
"#### Image"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "54f69db3-fa51-4b9a-885c-1353968066e3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This image depicts a cozy, rustic Christmas scene set against a wooden backdrop. The arrangement features a variety of festive decorations that evoke a warm, holiday atmosphere:\n",
"\n",
"- **Centerpiece**: A decorative reindeer figurine with large antlers stands prominently in the background.\n",
"- **Miniature Trees**: Two small, snow-dusted artificial Christmas trees flank the reindeer, adding to the wintry feel.\n",
"- **Candles**: Three log-shaped candle holders made from birch bark are lit, casting a soft, warm glow. Two are in the foreground, and one is slightly behind them.\n",
"- **\"Merry Christmas\" Sign**: A wooden cutout sign spelling \"MERRY CHRISTMAS\" is placed on the left, decorated with a tiny golden gift box and a small reindeer silhouette.\n",
"- **Holiday Elements**: Pinecones, red berries, greenery, and fairy lights are scattered throughout, enhancing the natural, festive theme.\n",
"- **Other Details**: A white sack with \"SANTA\" written on it is partially visible on the left, along with a large glass ornament and twinkling string lights.\n",
"\n",
"The overall aesthetic is warm, inviting, and traditional, emphasizing natural materials like wood, pine, and birch bark. It captures the essence of a rustic, homemade Christmas celebration.\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"model = ChatQwen(model=\"qwen-vl-max-latest\")\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=[\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"https://example.com/image/image.png\"},\n",
" },\n",
" {\"type\": \"text\", \"text\": \"What do you see in this image?\"},\n",
" ]\n",
" )\n",
"]\n",
"\n",
"response = model.invoke(messages)\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "b1faea19-932f-4dc8-b0af-60e3507eee08",
"metadata": {},
"source": [
"#### Video"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "59355c38-d3e2-4051-811a-2b99286ea01b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This video features a young woman with a warm and cheerful expression, standing outdoors in a well-lit environment. She has short, neatly styled brown hair with bangs and is wearing a soft pink knitted cardigan over a white top. A delicate necklace adorns her neck, adding a subtle touch of elegance to her outfit.\n",
"\n",
"Throughout the video, she maintains eye contact with the camera, smiling gently and occasionally opening her mouth as if speaking or laughing. Her facial expressions are natural and engaging, suggesting a friendly and approachable demeanor. The background is softly blurred, indicating a shallow depth of field, which keeps the focus on her. It appears to be an urban setting with modern buildings, possibly a residential or commercial area.\n",
"\n",
"The lighting is bright and natural, likely from sunlight, casting a soft glow on her face and highlighting her features. The overall tone of the video is pleasant and inviting, evoking a sense of warmth and positivity.\n",
"\n",
"In the top right corner of the frames, there is a watermark that reads \"通义·AI合成,\" which indicates that this video was generated using AI technology by Tongyi Lab, a company known for its advancements in artificial intelligence and digital content creation. This suggests that the video may be a demonstration of AI-generated human-like avatars or synthetic media.\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"model = ChatQwen(model=\"qwen-vl-max-latest\")\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=[\n",
" {\n",
" \"type\": \"video_url\",\n",
" \"video_url\": {\"url\": \"https://example.com/video/1.mp4\"},\n",
" },\n",
" {\"type\": \"text\", \"text\": \"Can you tell me about this video?\"},\n",
" ]\n",
" )\n",
"]\n",
"\n",
"response = model.invoke(messages)\n",
"print(response.content)"
]
},
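{
"cell_type": "markdown",
"id": "qwen-structured-output",
"metadata": {},
"source": [
"## Structured output\n",
"\n",
"The feature table above lists structured output as supported. A minimal sketch using LangChain's standard `with_structured_output` interface, with an illustrative `Joke` schema (the schema is an assumption for demonstration, not from the upstream docs):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "qwen-structured-output-example",
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
"from langchain_qwq import ChatQwen\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell the user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"The setup of the joke\")\n",
" punchline: str = Field(description=\"The punchline of the joke\")\n",
"\n",
"\n",
"llm = ChatQwen(model=\"qwen-flash\")\n",
"\n",
"# Wrap the model so invoke() returns a validated Joke instance\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"structured_llm.invoke(\"Tell me a joke about programming\")"
]
},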
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatQwen features and configurations head to the [API reference](https://pypi.org/project/langchain-qwq/)"
]
},
{
"cell_type": "markdown",
"id": "ce1026e3",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -34,7 +34,7 @@
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | | ❌ | | ✅ | ✅ | ✅ | ❌ | \n",
"| ✅ | ✅ | ✅ | | ❌ | | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
@@ -47,7 +47,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
@@ -91,7 +91,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
@@ -117,7 +117,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "62e0dbc3",
"metadata": {
"tags": []
@@ -126,10 +126,10 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"J'aime la programmation.\", additional_kwargs={'reasoning_content': 'Okay, the user wants me to translate \"I love programming.\" into French. Let\\'s start by breaking down the sentence. The subject is \"I\", which in French is \"Je\". The verb is \"love\", which in this context is present tense, so \"aime\". The object is \"programming\". Now, \"programming\" in French can be \"la programmation\". \\n\\nWait, should it be \"programmation\" or \"programmation\"? Let me confirm the spelling. Yes, \"programmation\" is correct. Now, putting it all together: \"Je aime la programmation.\" Hmm, but in French, there\\'s a tendency to contract \"je\" and \"aime\". Wait, actually, \"je\" followed by a vowel sound usually takes \"j\\'\". So it should be \"J\\'aime la programmation.\" \\n\\nLet me double-check. \"J\\'aime\" is the correct contraction for \"I love\". The definite article \"la\" is needed because \"programmation\" is a feminine noun. Yes, \"programmation\" is a feminine noun, so \"la\" is correct. \\n\\nIs there any other way to say it? Maybe \"J\\'adore la programmation\" for \"I love\" in a stronger sense, but the user didn\\'t specify the intensity. Since the original is straightforward, \"J\\'aime la programmation.\" is the direct translation. \\n\\nI think that\\'s it. No mistakes there. So the final translation should be \"J\\'aime la programmation.\"'}, response_metadata={'model_name': 'qwq-plus'}, id='run-5045cd6a-edbd-4b2f-bf24-b7bdf3777fb9-0', usage_metadata={'input_tokens': 32, 'output_tokens': 326, 'total_tokens': 358, 'input_token_details': {}, 'output_token_details': {}})"
"AIMessage(content=\"J'aime la programmation.\", additional_kwargs={'reasoning_content': 'Okay, the user wants me to translate \"I love programming.\" into French. Let me start by recalling the basic translation. The verb \"love\" in French is \"aimer\", and \"programming\" is \"la programmation\". So the literal translation would be \"J\\'aime la programmation.\" But wait, I should check if there\\'s any context or nuances I need to consider. The user mentioned they\\'re a helpful assistant, so maybe they want a more natural or commonly used phrase. Sometimes in French, people might use \"adorer\" instead of \"aimer\" for stronger emphasis, but \"aimer\" is more standard here. Also, the structure \"J\\'aime\" is correct for \"I love\". No need for any articles if it\\'s a general statement, but \"la programmation\" is a feminine noun, so the article is necessary. Let me confirm the gender of \"programmation\"—yes, it\\'s feminine. So \"la\" is correct. I think that\\'s it. The translation should be \"J\\'aime la programmation.\"'}, response_metadata={'model_name': 'qwq-plus'}, id='run--396edf0f-ab92-4317-99be-cc9f5377c312-0', usage_metadata={'input_tokens': 32, 'output_tokens': 229, 'total_tokens': 261, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -159,17 +159,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={'reasoning_content': 'Okay, the user wants me to translate \"I love programming.\" into German. Let me think. The verb \"love\" is \"lieben\" or \"mögen\" in German, but \"lieben\" is more like love, while \"mögen\" is prefer. Since it\\'s about programming, which is a strong affection, \"lieben\" is better. The subject is \"I\", which is \"ich\". Then \"programming\" is \"Programmierung\" or \"Coding\". But \"Programmierung\" is more formal. Alternatively, sometimes people say \"ich liebe es zu programmieren\" which is \"I love to program\". Hmm, maybe the direct translation would be \"Ich liebe die Programmierung.\" But maybe the more natural way is \"Ich liebe es zu programmieren.\" Let me check. Both are correct, but the second one might sound more natural in everyday speech. The user might prefer the concise version. Alternatively, maybe \"Ich liebe die Programmierung.\" is better. Wait, the original is \"programming\" as a noun. So using the noun form would be appropriate. So \"Ich liebe die Programmierung.\" But sometimes people also use \"Coding\" in German, like \"Ich liebe das Coding.\" But that\\'s more anglicism. Probably better to stick with \"Programmierung\". Alternatively, \"Programmieren\" as a noun. Oh right! \"Programmieren\" can be a noun when used in the accusative case. So \"Ich liebe das Programmieren.\" That\\'s correct and natural. Yes, that\\'s the best translation. So the answer is \"Ich liebe das Programmieren.\"'}, response_metadata={'model_name': 'qwq-plus'}, id='run-2c418451-51d8-4319-8269-2ce129363a1a-0', usage_metadata={'input_tokens': 28, 'output_tokens': 341, 'total_tokens': 369, 'input_token_details': {}, 'output_token_details': {}})"
"AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={'reasoning_content': 'Okay, the user wants to translate \"I love programming.\" into German. Let\\'s start by breaking down the sentence. The subject is \"I,\" which translates to \"Ich\" in German. The verb \"love\" is \"liebe\" in present tense for the first person singular. Then \"programming\" is a noun. Now, in German, the word for programming, especially in the context of computer programming, is \"Programmierung.\" However, sometimes people might use \"Programmieren\" as well. Wait, but \"Programmierung\" is the noun form, so \"die Programmierung.\" The structure in German would be \"Ich liebe die Programmierung.\" Alternatively, could it be \"Programmieren\" as the verb in a nominalized form? Let me think. If you say \"Ich liebe das Programmieren,\" that\\'s also correct because \"das Programmieren\" is the gerundive form, which is commonly used for activities. So both are possible. Which one is more natural? Hmm. \"Das Programmieren\" might be more common in everyday language. Let me check some examples. For instance, \"I love cooking\" would be \"Ich liebe das Kochen.\" So following that pattern, \"Ich liebe das Programmieren\" would be the equivalent. Therefore, maybe \"Programmieren\" with the article \"das\" is better here. But the user might just want a direct translation without the article. Wait, the original sentence is \"I love programming,\" which is a noun, so in German, you need an article. So the correct translation would include \"das\" before the noun. So the correct sentence is \"Ich liebe das Programmieren.\" Alternatively, if they want to use the noun without an article, maybe in a more abstract sense, but I think \"das\" is necessary here. Let me confirm. Yes, in German, when using the noun form of a verb like this, you need the article. So the best translation is \"Ich liebe das Programmieren.\" I think that\\'s the most natural way to say it. Alternatively, \"Programmierung\" is more formal, but \"Programmieren\" is more commonly used in such contexts. So I\\'ll go with \"Ich liebe das Programmieren.\"'}, response_metadata={'model_name': 'qwq-plus'}, id='run--0ceaba8a-7842-48fb-8bec-eb96d2c83ed4-0', usage_metadata={'input_tokens': 28, 'output_tokens': 466, 'total_tokens': 494, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -217,7 +217,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"id": "15fb6a6d",
"metadata": {},
"outputs": [
@@ -225,12 +225,13 @@
"name": "stdout",
"output_type": "stream",
"text": [
"content='' additional_kwargs={'reasoning_content': 'Okay, the user is asking \"What\\'s 5 times forty two\". Let me break this down. First, I need to identify the numbers involved. The first number is 5, which is straightforward. The second number is forty two, which is 42 in digits. The operation they want is multiplication.\\n\\nLooking at the tools provided, there\\'s a function called multiply that takes two integers. So I should use that. The parameters are first_int and second_int. \\n\\nI need to convert \"forty two\" to 42. Since the function requires integers, both numbers should be in integer form. So 5 and 42. \\n\\nNow, I\\'ll structure the tool call. The function name is multiply, and the arguments should be first_int: 5 and second_int: 42. I\\'ll make sure the JSON is correctly formatted without any syntax errors. Let me double-check the parameters to ensure they\\'re required and of the right type. Yep, both are required and integers. \\n\\nNo examples were provided, but the function\\'s purpose is clear. So the correct tool call should be to multiply those two numbers. I think that\\'s all. No other functions are needed here.'} response_metadata={'model_name': 'qwq-plus'} id='run-638895aa-fdde-4567-bcfa-7d8e5d4f24af-0' tool_calls=[{'name': 'multiply', 'args': {'first_int': 5, 'second_int': 42}, 'id': 'call_d088275851c140529ed2ad', 'type': 'tool_call'}] usage_metadata={'input_tokens': 176, 'output_tokens': 277, 'total_tokens': 453, 'input_token_details': {}, 'output_token_details': {}}\n"
"content='' additional_kwargs={'reasoning_content': 'Okay, the user is asking \"What\\'s 5 times forty two\". Let me break this down. They want the product of 5 and 42. The function provided is called multiply, which takes two integers. First, I need to parse the numbers from the question. The first integer is 5, straightforward. The second is forty two, which is 42 in numeric form. So I should call the multiply function with first_int=5 and second_int=42. Let me double-check the parameters: both are required and of type integer. Yep, that\\'s correct. No examples given, but the function should handle these numbers. Alright, time to format the tool call.'} response_metadata={'model_name': 'qwq-plus'} id='run--3c5ff46c-3fc8-4caf-a665-2405aeef2948-0' tool_calls=[{'name': 'multiply', 'args': {'first_int': 5, 'second_int': 42}, 'id': 'call_33fb94c6662d44928e56ec', 'type': 'tool_call'}] usage_metadata={'input_tokens': 176, 'output_tokens': 173, 'total_tokens': 349, 'input_token_details': {}, 'output_token_details': {}}\n"
]
}
],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"from langchain_qwq import ChatQwQ\n",
"\n",
"\n",
@@ -249,6 +250,170 @@
"print(msg)"
]
},
{
"cell_type": "markdown",
"id": "88aa9980-1bd6-4cc9-aeac-4c9011e617fc",
"metadata": {},
"source": [
"### vision Support"
]
},
{
"cell_type": "markdown",
"id": "79e372e3-7050-4038-bf88-d1e8f5ddae09",
"metadata": {},
"source": [
"#### Image"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "c2372365-7208-42f9-a147-deffdc390313",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image depicts a charming Christmas-themed arrangement set against a rustic wooden backdrop, creating a warm and festive atmosphere. Here's a detailed breakdown:\n",
"\n",
"### **Background & Setting**\n",
"- **Wooden Wall**: A horizontally paneled wooden wall forms the backdrop, giving a cozy, cabin-like feel.\n",
"- **Foreground Surface**: The decorations rest on a smooth wooden surface (likely a table or desk), enhancing the natural, earthy tone of the scene.\n",
"\n",
"### **Key Elements**\n",
"1. **Snow-Covered Trees**:\n",
" - Two miniature evergreen trees dusted with artificial snow flank the sides of the arrangement, evoking a wintry landscape.\n",
"\n",
"2. **String Lights**:\n",
" - A strand of white bulb lights stretches across the back, weaving through the decor and adding a soft, glowing ambiance.\n",
"\n",
"3. **Ornamental Sphere**:\n",
" - A reflective gold sphere with striped patterns sits near the center-left, catching and dispersing light.\n",
"\n",
"4. **\"Merry Christmas\" Sign**:\n",
" - A wooden cutout spelling \"MERRY CHRISTMAS\" in capital letters serves as the focal point. The letters feature star-shaped cutouts, allowing light to shine through.\n",
"\n",
"5. **Reindeer Figurine**:\n",
" - A brown reindeer with white facial markings and large antlers stands prominently on the right, facing forward and adding a playful touch.\n",
"\n",
"6. **Candle Holders**:\n",
" - Three birch-bark candle holders are arranged in front of the reindeer. Two hold lit tealights, casting a warm glow, while the third remains unlit.\n",
"\n",
"7. **Natural Accents**:\n",
" - **Pinecones**: Scattered throughout, adding texture and a woodland feel.\n",
" - **Berry Branches**: Red-berried greenery (likely holly) weaves behind the sign, introducing vibrant color.\n",
" - **Pine Branches**: Fresh-looking branches enhance the seasonal authenticity.\n",
"\n",
"8. **Gift Box**:\n",
" - A small golden gift box with a bow sits near the left, symbolizing holiday gifting.\n",
"\n",
"9. **Textile Detail**:\n",
" - A fabric piece with \"Christmas\" embroidered on it peeks from the left, partially obscured but contributing to the thematic unity.\n",
"\n",
"### **Color Palette & Mood**\n",
"- **Warm Tones**: Browns (wood, reindeer), golds (ornament, gift box), and whites (snow, lights) dominate, creating a inviting glow.\n",
"- **Cool Accents**: Greens (trees, branches) and reds (berries) provide contrast, balancing the warmth.\n",
"- **Lighting**: The lit candles and string lights cast a soft, flickering illumination, enhancing the intimate, celebratory vibe.\n",
"\n",
"### **Composition**\n",
"- **Balance**: The arrangement is symmetrical, with trees and candles on either side framing the central sign and reindeer.\n",
"- **Depth**: Layered elements (trees, lights, branches) create visual interest, drawing the eye inward.\n",
"\n",
"This image beautifully captures the essence of a cozy, handmade Christmas display, blending traditional symbols with natural textures to evoke nostalgia and joy.\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"model = ChatQwQ(model=\"qvq-max-latest\")\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=[\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"https://example.com/image/image.png\"},\n",
" },\n",
" {\"type\": \"text\", \"text\": \"What do you see in this image?\"},\n",
" ]\n",
" )\n",
"]\n",
"\n",
"response = model.invoke(messages)\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "9242acf7-9a66-40b1-98b5-b113d28fc6ec",
"metadata": {},
"source": [
"#### Video"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f0a9e542-7a85-44d2-8576-14314a50d948",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image provided is a still frame from a video featuring a young woman with short brown hair and bangs, smiling brightly at the camera. Here's a detailed breakdown:\n",
"\n",
"### **Description of the Image:**\n",
"- **Subject:** A youthful female with a cheerful expression, showcasing a wide smile with visible teeth.\n",
"- **Appearance:** \n",
" - Short, neatly styled brown hair with blunt bangs.\n",
" - Natural makeup emphasizing clear skin and subtle eye makeup.\n",
" - Wearing a white round-neck shirt layered under a light pink knitted cardigan.\n",
" - Accessories include a delicate necklace with a small pendant and small earrings.\n",
"- **Background:** An outdoor setting with blurred architectural elements (e.g., buildings with columns), suggesting a campus, park, or residential area.\n",
"- **Lighting:** Soft, natural daylight, enhancing the warm and inviting atmosphere.\n",
"\n",
"### **Key Details About the Video:**\n",
"1. **AI-Generated Content:** The watermark (\"通义·AI合成\" / \"Tongyi AI Synthesis\") indicates this image was created using Alibaba's Tongyi AI model, known for generating hyper-realistic visuals.\n",
"2. **Style & Purpose:** The high-quality, photorealistic style suggests the video may demonstrate AI imaging capabilities, potentially for advertising, entertainment, or educational purposes.\n",
"3. **Context Clues:** The subject's casual yet polished look and the pleasant outdoor setting imply a positive, approachable theme (e.g., lifestyle, technology promotion, or social media content).\n",
"\n",
"### **What We Can Infer About the Video:**\n",
"- Likely showcases dynamic AI-generated scenes featuring the same character in various poses or interactions.\n",
"- May highlight realism in digital avatars or synthetic media.\n",
"- Could be part of a demo reel, tutorial, or creative project emphasizing AI artistry.\n",
"\n",
"### **Limitations:**\n",
"- As only a single frame is provided, specifics about the video's length, narrative, or additional scenes cannot be determined.\n",
"\n",
"If you have more frames or context, feel free to share! 😊\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"model = ChatQwQ(model=\"qvq-max-latest\")\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=[\n",
" {\n",
" \"type\": \"video_url\",\n",
" \"video_url\": {\"url\": \"https://example.com/video/1.mp4\"},\n",
" },\n",
" {\"type\": \"text\", \"text\": \"Can you tell me about this video?\"},\n",
" ]\n",
" )\n",
"]\n",
"\n",
"response = model.invoke(messages)\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
@@ -258,11 +423,19 @@
"\n",
"For detailed documentation of all ChatQwQ features and configurations head to the [API reference](https://pypi.org/project/langchain-qwq/)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "824f0c67-5f3b-4079-bc17-2cf92755bdd5",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -276,7 +449,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.1"
"version": "3.11.9"
}
},
"nbformat": 4,


@@ -7,7 +7,7 @@
"source": [
"# Azure AI Data\n",
"\n",
">[Azure AI Foundry (formerly Azure AI Studio)](https://ai.azure.com/) provides the capability to upload data assets to cloud storage and register existing data assets from the following sources:\n",
">\n",
">- `Microsoft OneLake`\n",
">- `Azure Blob Storage`\n",


@@ -2,67 +2,94 @@
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"# Oracle Autonomous Database\n",
"\n",
"Oracle autonomous database is a cloud database that uses machine learning to automate database tuning, security, backups, updates, and other routine management tasks traditionally performed by DBAs.\n",
"Oracle Autonomous Database is a cloud database that uses machine learning to automate database tuning, security, backups, updates, and other routine management tasks traditionally performed by DBAs.\n",
"\n",
"This notebook covers how to load documents from oracle autonomous database, the loader supports connection with connection string or tns configuration.\n",
"This notebook covers how to load documents from Oracle Autonomous Database.\n",
"\n",
"## Prerequisites\n",
"1. Database runs in a 'Thin' mode:\n",
" https://python-oracledb.readthedocs.io/en/latest/user_guide/appendix_b.html\n",
"2. `pip install oracledb`:\n",
" https://python-oracledb.readthedocs.io/en/latest/user_guide/installation.html"
],
"metadata": {
"collapsed": false
}
"1. A database that python-oracledb's default 'Thin' mode can connected to. This is true of Oracle Autonomous Database, see [python-oracledb Architecture](https://python-oracledb.readthedocs.io/en/latest/user_guide/introduction.html#architecture).\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Instructions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With mutual TLS authentication (mTLS), wallet_location and wallet_password are required to create the connection, user can create connection by providing either connection string or tns configuration details."
],
"metadata": {
"collapsed": false
}
"You'll need to install `langchain-oracledb` with `python -m pip install -U langchain-oracledb` to use this integration.\n",
"\n",
"The `python-oracledb` driver is installed automatically as a dependency of langchain-oracledb."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"# python -m pip install -U langchain-oracledb"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain_oracledb.document_loaders import OracleAutonomousDatabaseLoader\n",
"from settings import s"
]
},
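{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `settings` module imported above is your own configuration file, not part of langchain-oracledb. A hypothetical `settings.py` supplying the attributes used in these examples might look like the sketch below (all values are placeholders to adapt to your environment):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical settings.py with placeholder values only\n",
"class _Settings:\n",
" USERNAME = \"ADMIN\"  # database user\n",
" PASSWORD = \"<your-password>\"  # database password (reused as wallet password in these examples)\n",
" SCHEMA = \"SH\"  # schema that owns the queried tables\n",
" DSN = \"mydb_low\"  # TNS alias from tnsnames.ora, or a full connect string\n",
" CONFIG_DIR = \"/path/to/wallet\"  # directory containing tnsnames.ora\n",
" WALLET_LOCATION = \"/path/to/wallet\"  # directory containing the wallet files (mTLS only)\n",
"\n",
"\n",
"s = _Settings()"
]
},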
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"With mutual TLS authentication (mTLS), wallet_location and wallet_password parameters are required to create the connection. See python-oracledb documentation [Connecting to Oracle Cloud Autonomous Databases](https://python-oracledb.readthedocs.io/en/latest/user_guide/connection_handling.html#connecting-to-oracle-cloud-autonomous-databases)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"SQL_QUERY = \"select prod_id, time_id from sh.costs fetch first 5 rows only\"\n",
@@ -75,7 +102,7 @@
" config_dir=s.CONFIG_DIR,\n",
" wallet_location=s.WALLET_LOCATION,\n",
" wallet_password=s.PASSWORD,\n",
" tns_name=s.TNS_NAME,\n",
" dsn=s.DSN,\n",
")\n",
"doc_1 = doc_loader_1.load()\n",
"\n",
@@ -84,29 +111,35 @@
" user=s.USERNAME,\n",
" password=s.PASSWORD,\n",
" schema=s.SCHEMA,\n",
" connection_string=s.CONNECTION_STRING,\n",
" dsn=s.DSN,\n",
" wallet_location=s.WALLET_LOCATION,\n",
" wallet_password=s.PASSWORD,\n",
")\n",
"doc_2 = doc_loader_2.load()"
]
},
{
"cell_type": "markdown",
"source": [
"With TLS authentication, wallet_location and wallet_password are not required.\n",
"Bind variable option is provided by argument \"parameters\"."
],
"metadata": {
"collapsed": false
}
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"With 1-way TLS authentication, only the database credentials and connection string are required to establish a connection.\n",
"The example below also shows passing bind variable values with the argument \"parameters\"."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"SQL_QUERY = \"select channel_id, channel_desc from sh.channels where channel_desc = :1 fetch first 5 rows only\"\n",
@@ -117,7 +150,7 @@
" password=s.PASSWORD,\n",
" schema=s.SCHEMA,\n",
" config_dir=s.CONFIG_DIR,\n",
" tns_name=s.TNS_NAME,\n",
" dsn=s.DSN,\n",
" parameters=[\"Direct Sales\"],\n",
")\n",
"doc_3 = doc_loader_3.load()\n",
@@ -127,35 +160,32 @@
" user=s.USERNAME,\n",
" password=s.PASSWORD,\n",
" schema=s.SCHEMA,\n",
" connection_string=s.CONNECTION_STRING,\n",
" dsn=s.DSN,\n",
" parameters=[\"Direct Sales\"],\n",
")\n",
"doc_4 = doc_loader_4.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
"pygments_lexer": "ipython3",
"version": "3.12.11"
}
},
"nbformat": 4,
"nbformat_minor": 0
"nbformat_minor": 4
}


@@ -42,7 +42,9 @@
"source": [
"### Prerequisites\n",
"\n",
"Please install Oracle Python Client driver to use Langchain with Oracle AI Vector Search. "
"You'll need to install `langchain-oracledb` with `python -m pip install -U langchain-oracledb` to use this integration.\n",
"\n",
"The `python-oracledb` driver is installed automatically as a dependency of langchain-oracledb."
]
},
{
@@ -51,7 +53,7 @@
"metadata": {},
"outputs": [],
"source": [
"# pip install oracledb"
"# python -m pip install -U langchain-oracledb"
]
},
{
@@ -154,7 +156,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_oracledb.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_core.documents import Document\n",
"\n",
"\"\"\"\n",
@@ -199,7 +201,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_oracledb.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_core.documents import Document\n",
"\n",
"\"\"\"\n",


@@ -0,0 +1,334 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Oxylabs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[Oxylabs](https://oxylabs.io/) is a web intelligence collection platform that enables companies worldwide to unlock data-driven insights.\n",
"\n",
"## Overview\n",
"\n",
"Oxylabs document loader allows to load data from search engines, e-commerce sites, travel platforms, and any other website. It supports geolocation, browser rendering, data parsing, multiple user agents and many more parameters. Check out [Oxylabs documentation](https://developers.oxylabs.io/scraping-solutions/web-scraper-api) for more information.\n",
"\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | Pricing |\n",
"|:--------------|:------------------------------------------------------------------|:-----:|:------------:|:-----------------------------:|\n",
"| OxylabsLoader | [langchain-oxylabs](https://github.com/oxylabs/langchain-oxylabs) | ✅ | ❌ | Free 5,000 results for 1 week |\n",
"\n",
"### Loader features\n",
"| Document Lazy Loading |\n",
"|:---------------------:|\n",
"| ✅ |\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install the required dependencies.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"%pip install -U langchain-oxylabs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Credentials\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Set up the proper API keys and environment variables.\n",
"Create your API user credentials: Sign up for a free trial or purchase the product\n",
"in the [Oxylabs dashboard](https://dashboard.oxylabs.io/en/registration)\n",
"to create your API user credentials (OXYLABS_USERNAME and OXYLABS_PASSWORD)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OXYLABS_USERNAME\"] = getpass.getpass(\"Enter your Oxylabs username: \")\n",
"os.environ[\"OXYLABS_PASSWORD\"] = getpass.getpass(\"Enter your Oxylabs password: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-06T10:57:51.630011Z",
"start_time": "2025-08-06T10:57:51.623814Z"
}
},
"outputs": [],
"source": [
"from langchain_oxylabs import OxylabsLoader"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-06T10:57:53.685413Z",
"start_time": "2025-08-06T10:57:53.628859Z"
}
},
"outputs": [],
"source": [
"loader = OxylabsLoader(\n",
" urls=[\n",
" \"https://sandbox.oxylabs.io/products/1\",\n",
" \"https://sandbox.oxylabs.io/products/2\",\n",
" ],\n",
" params={\"markdown\": True},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": "## Load"
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-06T10:59:51.487327Z",
"start_time": "2025-08-06T10:59:48.592743Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2751\n",
"[![](data:image/svg+xml...)![logo](data:image/gif;base64...)![logo](/_next/image?url=%2F_next%2Fstatic%2Fmedia%2FnavLogo.a8764883.png&w=750&q=75)](/)\n",
"\n",
"Game platforms:\n",
"\n",
"* **All**\n",
"\n",
"* [Nintendo platform](/products/category/nintendo)\n",
"\n",
"+ wii\n",
"+ wii-u\n",
"+ nintendo-64\n",
"+ switch\n",
"+ gamecube\n",
"+ game-boy-advance\n",
"+ 3ds\n",
"+ ds\n",
"\n",
"* [Xbox platform](/products/category/xbox-platform)\n",
"\n",
"* **Dreamcast**\n",
"\n",
"* [Playstation platform](/products/category/playstation-platform)\n",
"\n",
"* **Pc**\n",
"\n",
"* **Stadia**\n",
"\n",
"Go Back\n",
"\n",
"Note!This is a sandbox website used for web scraping. Information listed in this website does not have any real meaning and should not be associated with the actual products.\n",
"\n",
"![The Legend of Zelda: Ocarina of Time](data:image/gif;base64...)![The Legend of Zelda: Ocarina of Time](/assets/action-adventure.svg)\n",
"\n",
"## The Legend of Zelda: Ocarina of Time\n",
"\n",
"**Developer:** Nintendo**Platform:****Type:** singleplayer\n",
"\n",
"As a young boy, Link is tricked by Ganondorf, the King of the Gerudo Thieves. The evil human uses Link to g\n",
"5542\n",
"[![](data:image/svg+xml...)![logo](data:image/gif;base64...)![logo](/_next/image?url=%2F_next%2Fstatic%2Fmedia%2FnavLogo.a8764883.png&w=750&q=75)](/)\n",
"\n",
"Game platforms:\n",
"\n",
"* **All**\n",
"\n",
"* [Nintendo platform](/products/category/nintendo)\n",
"\n",
"+ wii\n",
"+ wii-u\n",
"+ nintendo-64\n",
"+ switch\n",
"+ gamecube\n",
"+ game-boy-advance\n",
"+ 3ds\n",
"+ ds\n",
"\n",
"* [Xbox platform](/products/category/xbox-platform)\n",
"\n",
"* **Dreamcast**\n",
"\n",
"* [Playstation platform](/products/category/playstation-platform)\n",
"\n",
"* **Pc**\n",
"\n",
"* **Stadia**\n",
"\n",
"Go Back\n",
"\n",
"Note!This is a sandbox website used for web scraping. Information listed in this website does not have any real meaning and should not be associated with the actual products.\n",
"\n",
"![Super Mario Galaxy](data:image/gif;base64...)![Super Mario Galaxy](/assets/action.svg)\n",
"\n",
"## Super Mario Galaxy\n",
"\n",
"**Developer:** Nintendo**Platform:****Type:** singleplayer\n",
"\n",
"[Metacritic's 2007 Wii Game of the Year] The ultimate Nintendo hero is taking the ultimate step ... out into space. Join Mario as he ushers in a new era of video games, de\n"
]
}
],
"source": [
"for document in loader.load():\n",
" print(document.page_content[:1000])"
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": "## Lazy Load"
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"source": [
"for document in loader.lazy_load():\n",
" print(document.page_content[:1000])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Advanced examples\n",
"\n",
"The following examples show the usage of `OxylabsLoader` with geolocation, currency, pagination and user agent parameters for Amazon Search and Google Search sources."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-06T11:04:19.901122Z",
"start_time": "2025-08-06T11:04:19.838933Z"
}
},
"outputs": [],
"source": [
"loader = OxylabsLoader(\n",
" queries=[\"gaming headset\", \"gaming chair\", \"computer mouse\"],\n",
" params={\n",
" \"source\": \"amazon_search\",\n",
" \"parse\": True,\n",
" \"geo_location\": \"DE\",\n",
" \"currency\": \"EUR\",\n",
" \"pages\": 3,\n",
" },\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"ExecuteTime": {
"end_time": "2025-08-06T11:07:17.648142Z",
"start_time": "2025-08-06T11:07:17.595629Z"
}
},
"outputs": [],
"source": [
"loader = OxylabsLoader(\n",
" queries=[\"europe gdp per capita\", \"us gdp per capita\"],\n",
" params={\n",
" \"source\": \"google_search\",\n",
" \"parse\": True,\n",
" \"geo_location\": \"Paris, France\",\n",
" \"user_agent_type\": \"mobile\",\n",
" },\n",
")"
]
},
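{
"cell_type": "markdown",
"metadata": {},
"source": [
"As with the earlier load examples, these configured loaders are consumed with `load()` or `lazy_load()`. A minimal sketch (the 500-character preview length is an arbitrary choice):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Iterate lazily so large result sets are not held in memory all at once\n",
"for document in loader.lazy_load():\n",
"    print(document.page_content[:500])"
]
},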
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"[More information about this package.](https://github.com/oxylabs/langchain-oxylabs)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -132,13 +132,12 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
"\n",
"# from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
"from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Define the LLMGraphTransformer\n",

View File

@@ -548,12 +548,12 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
"# from langchain_experimental.graph_transformers import LLMGraphTransformer"
"from langchain_experimental.graph_transformers import LLMGraphTransformer"
]
},
{

View File

@@ -0,0 +1,357 @@
{
"cells": [
{
"cell_type": "markdown",
"source": [
"---\n",
"sidebar_label: AI/ML API\n",
"---"
],
"metadata": {
"collapsed": false
},
"id": "c74887ead73c5eb4"
},
{
"cell_type": "markdown",
"source": [
"# AimlapiLLM\n",
"\n",
"This page will help you get started with AI/ML API [text completion models](/docs/concepts/text_llms). For detailed documentation of all AimlapiLLM features and configurations, head to the [API reference](https://docs.aimlapi.com/?utm_source=langchain&utm_medium=github&utm_campaign=integration).\n",
"\n",
"AI/ML API provides access to **300+ models** (Deepseek, Gemini, ChatGPT, etc.) via high-uptime and high-rate API."
],
"metadata": {
"collapsed": false
},
"id": "c1895707cde83d90"
},
{
"cell_type": "markdown",
"source": [
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| AimlapiLLM | langchain-aimlapi | ✅ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-aimlapi?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-aimlapi?style=flat-square&label=%20) |"
],
"metadata": {
"collapsed": false
},
"id": "72b0a510b6eac641"
},
{
"cell_type": "markdown",
"source": [
"### Model features\n",
"| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |\n",
"|:------------:|:-----------------:|:---------:|:-----------:|:-----------:|:-----------:|:---------------------:|:------------:|:-----------:|:--------:|\n",
"| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |\n"
],
"metadata": {
"collapsed": false
},
"id": "4b87089494d8877d"
},
{
"cell_type": "markdown",
"source": [
"## Setup\n",
"To access AI/ML API models, sign up at [aimlapi.com](https://aimlapi.com/app/?utm_source=langchain&utm_medium=github&utm_campaign=integration), generate an API key, and set the `AIMLAPI_API_KEY` environment variable:"
],
"metadata": {
"collapsed": false
},
"id": "2c45017efcc36569"
},
{
"cell_type": "code",
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"AIMLAPI_API_KEY\" not in os.environ:\n",
" os.environ[\"AIMLAPI_API_KEY\"] = getpass.getpass(\"Enter your AI/ML API key: \")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:24:48.681319Z",
"start_time": "2025-08-07T07:24:47.490206Z"
}
},
"id": "86b05af725c45941",
"execution_count": 1
},
{
"cell_type": "markdown",
"source": [
"### Installation\n",
"Install the `langchain-aimlapi` package:"
],
"metadata": {
"collapsed": false
},
"id": "51171ba92cb2b382"
},
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-aimlapi"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:18:08.606708Z",
"start_time": "2025-08-07T07:17:59.901457Z"
}
},
"id": "2b15cbaf7d5e1560",
"execution_count": 2
},
{
"cell_type": "markdown",
"source": [
"## Instantiation\n",
"Now we can instantiate the `AimlapiLLM` model and generate text completions:"
],
"metadata": {
"collapsed": false
},
"id": "e94379f9d37fe6b3"
},
{
"cell_type": "code",
"outputs": [],
"source": [
"from langchain_aimlapi import AimlapiLLM\n",
"\n",
"llm = AimlapiLLM(\n",
" model=\"gpt-3.5-turbo-instruct\",\n",
" temperature=0.5,\n",
" max_tokens=256,\n",
")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:46:52.875867Z",
"start_time": "2025-08-07T07:46:52.869961Z"
}
},
"id": "8a3af681997723b0",
"execution_count": 23
},
{
"cell_type": "markdown",
"source": [
"## Invocation\n",
"You can invoke the model with a prompt:"
],
"metadata": {
"collapsed": false
},
"id": "c983ab1d95887e8f"
},
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Bubble sort is a simple sorting algorithm that repeatedly steps through the list to be sorted, compares each pair of adjacent items and swaps them if they are in the wrong order. This process is repeated until the entire list is sorted.\n",
"\n",
"The algorithm gets its name from the way smaller elements \"bubble\" to the top of the list. It is commonly used for educational purposes due to its simplicity, but it is not a very efficient sorting algorithm for large data sets.\n",
"\n",
"Here is an implementation of the bubble sort algorithm in Python:\n",
"\n",
"1. Start by defining a function that takes in a list as its argument.\n",
"2. Set a variable \"swapped\" to True, indicating that a swap has occurred.\n",
"3. Create a while loop that runs as long as the \"swapped\" variable is True.\n",
"4. Inside the loop, set the \"swapped\" variable to False.\n",
"5. Create a for loop that iterates through the list, starting from the first element and ending at the second to last element.\n",
"6. Inside the for loop, compare the current element with the next element. If the current element is larger than the next element, swap them and set the \"swapped\" variable to True.\n",
"7. After the for loop, if the \"swapped\" variable\n"
]
}
],
"source": [
"response = llm.invoke(\"Explain the bubble sort algorithm in Python.\")\n",
"print(response)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:46:57.209950Z",
"start_time": "2025-08-07T07:46:53.935975Z"
}
},
"id": "9a193081f431a42a",
"execution_count": 24
},
{
"cell_type": "markdown",
"source": [
"## Streaming Invocation\n",
"You can also stream responses token-by-token:"
],
"metadata": {
"collapsed": false
},
"id": "1afedb28f556c7bd"
},
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" \n",
"\n",
"1. Python\n",
"Python has been consistently growing in popularity and has become one of the most widely used programming languages in recent years. It is used for a wide range of applications such as web development, data analysis, machine learning, and artificial intelligence. Its simple syntax and readability make it an attractive choice for beginners and experienced programmers alike. With the rise of data-driven technology and automation, Python is projected to be the most in-demand language in 2025.\n",
"\n",
"2. JavaScript\n",
"JavaScript continues to dominate the web development scene and is expected to maintain its position as a top programming language in 2025. With the increasing use of front-end frameworks like React and Angular, JavaScript is crucial for building dynamic and interactive user interfaces. Additionally, the rise of serverless architecture and the popularity of Node.js make JavaScript an essential language for both front-end and back-end development.\n",
"\n",
"3. Go\n",
"Go, also known as Golang, is a relatively new programming language developed by Google. It is designed for"
]
}
],
"source": [
"llm = AimlapiLLM(\n",
" model=\"gpt-3.5-turbo-instruct\",\n",
")\n",
"\n",
"for chunk in llm.stream(\"List top 5 programming languages in 2025 with reasons.\"):\n",
" print(chunk, end=\"\", flush=True)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:49:25.223233Z",
"start_time": "2025-08-07T07:49:22.101498Z"
}
},
"id": "a132c9183f648fb4",
"execution_count": 26
},
{
"cell_type": "markdown",
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all AimlapiLLM features and configurations, visit the [API Reference](https://docs.aimlapi.com/?utm_source=langchain&utm_medium=github&utm_campaign=integration).\n"
],
"metadata": {
"collapsed": false
},
"id": "7b4ab33058dc0974"
},
{
"cell_type": "markdown",
"source": [
"## Chaining\n",
"\n",
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts/lcel)"
],
"metadata": {
"collapsed": false
},
"id": "900f36a35477c8ae"
},
{
"cell_type": "code",
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"prompt = PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = prompt | llm"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:49:34.857042Z",
"start_time": "2025-08-07T07:49:34.853032Z"
}
},
"id": "d7f10052eb4ff249",
"execution_count": 27
},
{
"cell_type": "code",
"outputs": [
{
"data": {
"text/plain": "\"\\n\\nWhy do bears have fur coats?\\n\\nBecause they'd look silly in sweaters! \""
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"topic\": \"bears\"})"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:49:48.565804Z",
"start_time": "2025-08-07T07:49:35.558426Z"
}
},
"id": "184c333c60f94b05",
"execution_count": 28
},
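{
"cell_type": "markdown",
"source": [
"Batching goes through the same runnable interface; a quick sketch (not part of the original run, so no recorded output):"
],
"metadata": {
"collapsed": false
},
"id": "f0e1d2c3b4a59687"
},
{
"cell_type": "code",
"outputs": [],
"source": [
"# Run the chain over several inputs in one call\n",
"chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"programmers\"}])"
],
"metadata": {
"collapsed": false
},
"id": "a9b8c7d6e5f40312",
"execution_count": null
},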
{
"cell_type": "markdown",
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `AI/ML API` llm features and configurations head to the API reference: [API Reference](https://docs.aimlapi.com/?utm_source=langchain&utm_medium=github&utm_campaign=integration)"
],
"metadata": {
"collapsed": false
},
"id": "804f3a79a8046ec1"
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -44,9 +44,7 @@
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet llama-cpp-python"
]
"source": "%pip install --upgrade --quiet llama-cpp-python"
},
{
"cell_type": "markdown",
@@ -64,9 +62,7 @@
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
]
"source": "!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
},
{
"cell_type": "markdown",
@@ -80,9 +76,7 @@
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
]
"source": "!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
},
{
"cell_type": "markdown",
@@ -100,9 +94,7 @@
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
]
"source": "!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
},
{
"cell_type": "markdown",
@@ -116,9 +108,7 @@
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
]
"source": "!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --no-binary :all: --no-cache-dir"
},
{
"cell_type": "markdown",
@@ -174,9 +164,7 @@
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!python -m pip install -e . --force-reinstall --no-cache-dir"
]
"source": "!python -m pip install -e . --force-reinstall --no-cache-dir"
},
{
"cell_type": "markdown",
@@ -718,4 +706,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
}

View File

@@ -22,32 +22,30 @@
"metadata": {},
"source": [
"## Setup\n",
"Ensure that the oci sdk and the langchain-community package are installed"
"Ensure that the oci sdk and the langchain-community package are installed\n",
"\n",
":::caution You are currently on a page documenting the use of Oracle's text generation models. Which are deprecated."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"cell_type": "code",
"outputs": [],
"source": [
"!pip install -U oci langchain-community"
]
"execution_count": null,
"source": "!pip install -U langchain-oci"
},
{
"metadata": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage"
]
"source": "## Usage"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"source": [
"from langchain_community.llms.oci_generative_ai import OCIGenAI\n",
"from langchain_oci.llms import OCIGenAI\n",
"\n",
"llm = OCIGenAI(\n",
" model_id=\"cohere.command\",\n",

View File

@@ -0,0 +1,215 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# RecallioMemory + LangChain Integration Demo\n",
"A minimal notebook to show drop-in usage of RecallioMemory in LangChain (with scoped writes and recall)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install recallio langchain langchain-recallio openai"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup: API Keys & Imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_recallio.memory import RecallioMemory\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"import os\n",
"\n",
"# Set your keys here or use environment variables\n",
"RECALLIO_API_KEY = os.getenv(\"RECALLIO_API_KEY\", \"YOUR_RECALLIO_API_KEY\")\n",
"OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\", \"YOUR_OPENAI_API_KEY\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize RecallioMemory"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"memory = RecallioMemory(\n",
" project_id=\"project_abc\",\n",
" api_key=RECALLIO_API_KEY,\n",
" session_id=\"demo-session-001\",\n",
" user_id=\"demo-user-42\",\n",
" default_tags=[\"test\", \"langchain\"],\n",
" return_messages=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build a LangChain ConversationChain with RecallioMemory"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can swap in any supported LLM here\n",
"llm = ChatOpenAI(api_key=OPENAI_API_KEY, temperature=0)\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"The following is a friendly conversation between a human and an AI. \"\n",
" \"The AI is talkative and provides lots of specific details from its context. \"\n",
" \"If the AI does not know the answer to a question, it truthfully says it does not know.\",\n",
" ),\n",
" (\"placeholder\", \"{history}\"), # RecallioMemory will fill this slot\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"# LCEL chain that returns an AIMessage\n",
"base_chain = prompt | llm\n",
"\n",
"\n",
"# Create a stateful chain using RecallioMemory\n",
"def chat_with_memory(user_input: str):\n",
" # Load conversation history from memory\n",
" memory_vars = memory.load_memory_variables({\"input\": user_input})\n",
"\n",
" # Run the chain with history and user input\n",
" response = base_chain.invoke(\n",
" {\"input\": user_input, \"history\": memory_vars.get(\"history\", \"\")}\n",
" )\n",
"\n",
" # Save the conversation to memory\n",
" memory.save_context({\"input\": user_input}, {\"output\": response.content})\n",
"\n",
" return response"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example: Chat with Memory"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Bot: Hello Guillaume! It's nice to meet you. How can I assist you today?\n"
]
}
],
"source": [
"# First user message note the AI remembers the name\n",
"resp1 = chat_with_memory(\"Hi! My name is Guillaume. Remember that.\")\n",
"print(\"Bot:\", resp1.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Bot: Your name is Guillaume.\n"
]
}
],
"source": [
"# Second user message AI should recall the name from memory\n",
"resp2 = chat_with_memory(\"What is my name?\")\n",
"print(\"Bot:\", resp2.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## See What Is Stored in Recallio\n",
"This is for debugging/demo only; in production, you wouldn't do this on every run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Current memory variables: {'history': [HumanMessage(content='Name is Guillaume', additional_kwargs={}, response_metadata={})]}\n"
]
}
],
"source": [
"print(\"Current memory variables:\", memory.load_memory_variables({}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clear Memory (Optional Cleanup - Requires Manager level Key)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# memory.clear()\n",
"# print(\"Memory cleared.\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.10"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,26 +0,0 @@
# Aerospike
>[Aerospike](https://aerospike.com/docs/vector) is a high-performance, distributed database known for its speed and scalability, now with support for vector storage and search, enabling retrieval and search of embedding vectors for machine learning and AI applications.
> See the documentation for Aerospike Vector Search (AVS) [here](https://aerospike.com/docs/vector).
## Installation and Setup
Install the AVS Python SDK and AVS langchain vector store:
```bash
pip install aerospike-vector-search langchain-aerospike
```
See the documentation for the Python SDK [here](https://aerospike-vector-search-python-client.readthedocs.io/en/latest/index.html).
The documentation for the AVS langchain vector store is [here](https://langchain-aerospike.readthedocs.io/en/latest/).
## Vector Store
To import this vectorstore:
```python
from langchain_aerospike.vectorstores import Aerospike
```
See a usage example [here](https://python.langchain.com/docs/integrations/vectorstores/aerospike/).

View File

@@ -0,0 +1,272 @@
{
"cells": [
{
"cell_type": "markdown",
"source": [
"# AI/ML API LLM\n",
"\n",
"[AI/ML API](https://aimlapi.com/app/?utm_source=langchain&utm_medium=github&utm_campaign=integration) provides an API to query **300+ leading AI models** (Deepseek, Gemini, ChatGPT, etc.) with enterprise-grade performance.\n",
"\n",
"This example demonstrates how to use LangChain to interact with AI/ML API models."
],
"metadata": {
"collapsed": false
},
"id": "bb9dcd1ba7b0f560"
},
{
"cell_type": "markdown",
"source": [
"## Installation"
],
"metadata": {
"collapsed": false
},
"id": "e4c35f60c565d369"
},
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: langchain-aimlapi in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (0.1.0)\n",
"Requirement already satisfied: langchain-core<0.4.0,>=0.3.15 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-aimlapi) (0.3.67)\n",
"Requirement already satisfied: langsmith>=0.3.45 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (0.4.4)\n",
"Requirement already satisfied: tenacity!=8.4.0,<10.0.0,>=8.1.0 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (9.1.2)\n",
"Requirement already satisfied: jsonpatch<2.0,>=1.33 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (1.33)\n",
"Requirement already satisfied: PyYAML>=5.3 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (6.0.2)\n",
"Requirement already satisfied: packaging<25,>=23.2 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (24.2)\n",
"Requirement already satisfied: typing-extensions>=4.7 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (4.14.0)\n",
"Requirement already satisfied: pydantic>=2.7.4 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (2.11.7)\n",
"Requirement already satisfied: jsonpointer>=1.9 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from jsonpatch<2.0,>=1.33->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (3.0.0)\n",
"Requirement already satisfied: httpx<1,>=0.23.0 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (0.28.1)\n",
"Requirement already satisfied: orjson<4.0.0,>=3.9.14 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (3.10.18)\n",
"Requirement already satisfied: requests<3,>=2 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (2.32.4)\n",
"Requirement already satisfied: requests-toolbelt<2.0.0,>=1.0.0 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (1.0.0)\n",
"Requirement already satisfied: zstandard<0.24.0,>=0.23.0 in c:\\users\\tuman\\appdata\\roaming\\python\\python312\\site-packages (from langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (0.23.0)\n",
"Requirement already satisfied: annotated-types>=0.6.0 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic>=2.7.4->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (0.7.0)\n",
"Requirement already satisfied: pydantic-core==2.33.2 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic>=2.7.4->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (2.33.2)\n",
"Requirement already satisfied: typing-inspection>=0.4.0 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic>=2.7.4->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (0.4.1)\n",
"Requirement already satisfied: anyio in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx<1,>=0.23.0->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (4.9.0)\n",
"Requirement already satisfied: certifi in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx<1,>=0.23.0->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (2025.6.15)\n",
"Requirement already satisfied: httpcore==1.* in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx<1,>=0.23.0->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (1.0.9)\n",
"Requirement already satisfied: idna in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx<1,>=0.23.0->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (3.10)\n",
"Requirement already satisfied: h11>=0.16 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpcore==1.*->httpx<1,>=0.23.0->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (0.16.0)\n",
"Requirement already satisfied: charset_normalizer<4,>=2 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from requests<3,>=2->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (3.4.2)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from requests<3,>=2->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (2.5.0)\n",
"Requirement already satisfied: sniffio>=1.1 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from anyio->httpx<1,>=0.23.0->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (1.3.1)\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"[notice] A new release of pip is available: 25.0.1 -> 25.2\n",
"[notice] To update, run: python.exe -m pip install --upgrade pip\n"
]
}
],
"source": [
"%pip install --upgrade langchain-aimlapi"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-06T15:22:02.570792Z",
"start_time": "2025-08-06T15:21:32.377131Z"
}
},
"id": "77d4a44909effc3c",
"execution_count": 4
},
{
"cell_type": "markdown",
"source": [
"## Environment\n",
"\n",
"To use AI/ML API, you'll need an API key which you can generate at:\n",
"[https://aimlapi.com/app/](https://aimlapi.com/app/?utm_source=langchain&utm_medium=github&utm_campaign=integration)\n",
"\n",
"You can pass it via `aimlapi_api_key` parameter or set as environment variable `AIMLAPI_API_KEY`."
],
"metadata": {
"collapsed": false
},
"id": "c41eaf364c0b414f"
},
{
"cell_type": "code",
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"\n",
"if \"AIMLAPI_API_KEY\" not in os.environ:\n",
" os.environ[\"AIMLAPI_API_KEY\"] = getpass.getpass(\"Enter your AI/ML API key: \")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:15:37.147559Z",
"start_time": "2025-08-07T07:15:30.919160Z"
}
},
"id": "421cd40d4e54de62",
"execution_count": 3
},
{
"cell_type": "markdown",
"source": [
"## Example: Chat Model"
],
"metadata": {
"collapsed": false
},
"id": "d9cbe98904f4c5e4"
},
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The city that never sleeps! New York City is a treasure trove of excitement, entertainment, and adventure. Here are some fun things to do in NYC:\n",
"\n",
"**Iconic Attractions:**\n",
"\n",
"1. **Statue of Liberty and Ellis Island**: Take a ferry to Liberty Island to see the iconic statue up close and visit the Ellis Island Immigration Museum.\n",
"2. **Central Park**: A tranquil oasis in the middle of Manhattan, perfect for a stroll, picnic, or bike ride.\n",
"3. **Empire State Building**: For a panoramic view of the city, head to the observation deck of this iconic skyscraper.\n",
"4. **The Metropolitan Museum of Art**: One of the world's largest and most famous museums, with a collection that spans over 5,000 years of human history.\n",
"\n",
"**Neighborhood Explorations:**\n",
"\n",
"1. **SoHo**: Known for its trendy boutiques, art galleries, and cast-iron buildings.\n",
"2. **Greenwich Village**: A charming neighborhood with a rich history, known for its bohemian vibe, jazz clubs, and historic brownstones.\n",
"3. **Chinatown and Little Italy**: Experience the vibrant cultures of these two iconic neighborhoods, with delicious food, street festivals, and unique shops.\n",
"4. **Williamsburg, Brooklyn**: A hip neighborhood with a thriving arts scene, trendy bars, and some of the best restaurants in the city.\n",
"\n",
"**Food and Drink:**\n",
"\n",
"1. **Try a classic NYC slice of pizza**: Visit Lombardi's, Joe's Pizza, or Patsy's Pizzeria for a taste of the city's famous pizza.\n",
"2. **Bagels with lox and cream cheese**: A classic NYC breakfast at a Jewish deli like Russ & Daughters Cafe or Ess-a-Bagel.\n",
"3. **Food markets**: Visit Smorgasburg in Brooklyn or Chelsea Market for a variety of artisanal foods and drinks.\n",
"4. **Rooftop bars**: Enjoy a drink with a view at 230 Fifth, the Top of the Strand, or the Roof at The Viceroy Central Park.\n",
"**Performing Arts:**\n",
"\n",
"1. **Broadway shows**: Catch a musical or play on the Great White Way, like Hamilton, The Lion King, or Wicked.\n",
"2. **Jazz clubs**: Visit Blue Note Jazz Club, the Village Vanguard, or the Jazz Standard for live music performances.\n",
"3. **Lincoln Center**: Home to the New York City Ballet, the Metropolitan Opera, and the Juilliard School.\n",
"4. **"
]
}
],
"source": [
"from langchain_aimlapi import ChatAimlapi\n",
"\n",
"chat = ChatAimlapi(\n",
" model=\"meta-llama/Llama-3-70b-chat-hf\",\n",
")\n",
"\n",
"# Stream response\n",
"for chunk in chat.stream(\"Tell me fun things to do in NYC\"):\n",
" print(chunk.content, end=\"\", flush=True)\n",
"\n",
"# Or use invoke()\n",
"# response = chat.invoke(\"Tell me fun things to do in NYC\")\n",
"# print(response)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:15:59.612289Z",
"start_time": "2025-08-07T07:15:47.864231Z"
}
},
"id": "3f73a8e113a58e9b",
"execution_count": 4
},
{
"cell_type": "markdown",
"source": [
"## Example: Text Completion Model"
],
"metadata": {
"collapsed": false
},
"id": "7aca59af5cadce80"
},
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" # Funkcja ponownie zwraca nową listę, bez zmienienia listy przekazanej jako argument w funkcji\n",
" my_list = [16, 12, 16, 3, 2, 6]\n",
" new_list = my_list[:]\n",
" for x in range(len(new_list)):\n",
" for y in range(len(new_list) - 1):\n",
" if new_list[y] > new_list[y + 1]:\n",
" new_list[y], new_list[y + 1] = new_list[y + 1], new_list[y]\n",
" return new_list, my_list\n",
"\n",
"\n",
"def bubble_sort_lib3(list): # Sortowanie z wykorzystaniem zewnętrznej biblioteki poza pętlą\n",
" from itertools import permutations\n",
" y = len(list)\n",
" perms = []\n",
" for a in range(0, y + 1):\n",
" for subset in permutations(list, a):\n",
" \n"
]
}
],
"source": [
"from langchain_aimlapi import AimlapiLLM\n",
"\n",
"llm = AimlapiLLM(\n",
" model=\"gpt-3.5-turbo-instruct\",\n",
")\n",
"\n",
"print(llm.invoke(\"def bubble_sort(): \"))"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:16:22.595703Z",
"start_time": "2025-08-07T07:16:19.410881Z"
}
},
"id": "2af3be417769efc3",
"execution_count": 6
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,38 @@
# Anchor Browser
[Anchor](https://anchorbrowser.io?utm=langchain) is a platform for agentic AI browser automation. It solves the challenge of automating workflows for web applications that lack APIs or have limited API coverage, simplifying the creation, deployment, and management of browser-based automations and turning complex web interactions into simple API endpoints.
`langchain-anchorbrowser` provides three main tools:
- `AnchorContentTool` - For web content extractions in Markdown or HTML format.
- `AnchorScreenshotTool` - For web page screenshots.
- `AnchorWebTaskTools` - To perform web tasks.
## Quickstart
### Installation
Install the package:
```bash
pip install langchain-anchorbrowser
```
### Usage
Import and use your intended tool. For the full list of available Anchor Browser tools, see the **Tool Features** table on the [Anchor Browser tool page](/docs/integrations/tools/anchor_browser).
```python
from langchain_anchorbrowser import AnchorContentTool
# Get Markdown Content for https://www.anchorbrowser.io
AnchorContentTool().invoke(
{"url": "https://www.anchorbrowser.io", "format": "markdown"}
)
```
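The screenshot tool follows the same invoke pattern. A rough sketch, assuming it accepts a `url` key like the content tool (this input shape is an assumption; check the tool page above):
```python
from langchain_anchorbrowser import AnchorScreenshotTool

# Hypothetical input shape, mirroring AnchorContentTool above
AnchorScreenshotTool().invoke({"url": "https://www.anchorbrowser.io"})
```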
## Additional Resources
- [PyPi](https://pypi.org/project/langchain-anchorbrowser)
- [Github](https://github.com/anchorbrowser/langchain-anchorbrowser)
- [Anchor Browser Docs](https://docs.anchorbrowser.io/introduction?utm=langchain)
- [Anchor Browser API Reference](https://docs.anchorbrowser.io/api-reference/ai-tools/perform-web-task?utm=langchain)

View File

@@ -21,6 +21,38 @@ pip install deepeval
See an [example](/docs/integrations/callbacks/confident).
```python
from langchain.callbacks.confident_callback import DeepEvalCallbackHandler
## Modern Integration Example
Install the required packages:
```bash
pip install deepeval langchain langchain-openai
```
Authenticate with your API key:
```python
import os
import deepeval
# Load API key from environment variable for security
api_key = os.environ.get("DEEPEVAL_API_KEY")
deepeval.login(api_key)
```
Use the new callback handler:
```python
from deepeval.integrations.langchain import CallbackHandler
handler = CallbackHandler(
name="My Trace",
tags=["production", "v1"],
metadata={"experiment": "A/B"},
thread_id="thread-123",
user_id="user-456"
)
```
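The handler attaches to any chain or model call through the standard LangChain `callbacks` config. A minimal sketch, assuming an OpenAI chat model (the model choice and prompt are placeholders, not part of the DeepEval API):
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model choice
prompt = ChatPromptTemplate.from_template("Summarize in one line: {text}")
chain = prompt | llm

# DeepEval traces this invocation via the callback handler defined above
result = chain.invoke(
    {"text": "LangChain callbacks let you observe every step of a chain."},
    config={"callbacks": [handler]},
)
```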
See the [full example](/docs/integrations/callbacks/confident).

View File

@@ -10,7 +10,7 @@
Install the python SDK:
```bash
pip install firecrawl-py==0.0.20
pip install firecrawl-py
```
## Document loader

View File

@@ -0,0 +1,129 @@
# Bigtable
Bigtable is a scalable, fully managed key-value and wide-column store ideal for fast access to structured, semi-structured, or unstructured data. This page provides an overview of Bigtable's LangChain integrations.
**Client Library Documentation:** [cloud.google.com/python/docs/reference/langchain-google-bigtable/latest](https://cloud.google.com/python/docs/reference/langchain-google-bigtable/latest)
**Product Documentation:** [cloud.google.com/bigtable](https://cloud.google.com/bigtable)
## Quick Start
To use this library, you first need to:
1. Select or create a Cloud Platform project.
2. Enable billing for your project.
3. Enable the Google Cloud Bigtable API.
4. Set up Authentication.
## Installation
The main package for this integration is `langchain-google-bigtable`.
```bash
pip install -U langchain-google-bigtable
```
## Integrations
The `langchain-google-bigtable` package provides the following integrations:
### Vector Store
With `BigtableVectorStore`, you can store documents and their vector embeddings to find the most similar or relevant information in your database.
* **Full `VectorStore` Implementation:** Supports all methods from the LangChain `VectorStore` abstract class.
* **Async/Sync Support:** All methods are available in both asynchronous and synchronous versions.
* **Metadata Filtering:** Powerful filtering on metadata fields, including logical AND/OR combinations.
* **Multiple Distance Strategies:** Supports both Cosine and Euclidean distance for similarity search.
* **Customizable Storage:** Full control over how content, embeddings, and metadata are stored in Bigtable columns.
```python
from langchain_google_bigtable import BigtableEngine, BigtableVectorStore
# Your embedding service and other configurations
# embedding_service = ...
engine = await BigtableEngine.async_initialize(project_id="your-project-id")
vector_store = await BigtableVectorStore.create(
engine=engine,
instance_id="your-instance-id",
table_id="your-table-id",
embedding_service=embedding_service,
collection="your_collection_name",
)
await vector_store.aadd_documents([your_documents])
results = await vector_store.asimilarity_search("your query")
```
Learn more in the [Vector Store how-to guide](https://colab.research.google.com/github/googleapis/langchain-google-bigtable-python/blob/main/docs/vector_store.ipynb).
### Key-value Store
Use `BigtableByteStore` as a persistent, scalable key-value store for caching, session management, or other storage needs. It supports both synchronous and asynchronous operations.
```python
from langchain_google_bigtable import BigtableByteStore
# Initialize the store
store = await BigtableByteStore.create(
project_id="your-project-id",
instance_id="your-instance-id",
table_id="your-table-id",
)
# Set and get values
await store.amset([("key1", b"value1")])
retrieved = await store.amget(["key1"])
```
Learn more in the [Key-value Store how-to guide](https://cloud.google.com/python/docs/reference/langchain-google-bigtable/latest/key-value-store).
### Document Loader
Use the `BigtableLoader` to load data from a Bigtable table and represent it as LangChain `Document` objects.
```python
from langchain_google_bigtable import BigtableLoader
loader = BigtableLoader(
project_id="your-project-id",
instance_id="your-instance-id",
table_id="your-table-name"
)
docs = loader.load()
```
Learn more in the [Document Loader how-to guide](https://cloud.google.com/python/docs/reference/langchain-google-bigtable/latest/document-loader).
### Chat Message History
Use `BigtableChatMessageHistory` to store conversation histories, enabling stateful chains and agents.
```python
from langchain_google_bigtable import BigtableChatMessageHistory
history = BigtableChatMessageHistory(
project_id="your-project-id",
instance_id="your-instance-id",
table_id="your-message-store",
session_id="user-session-123"
)
history.add_user_message("Hello!")
history.add_ai_message("Hi there!")
```
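Stored turns come back as LangChain message objects through the standard `messages` property, for example:
```python
# Replay the stored conversation, oldest message first
for message in history.messages:
    print(f"{type(message).__name__}: {message.content}")
```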
Learn more in the [Chat Message History how-to guide](https://cloud.google.com/python/docs/reference/langchain-google-bigtable/latest/chat-message-history).
## Contributions
Contributions to this library are welcome. Please see the CONTRIBUTING guide in the [package repo](https://github.com/googleapis/langchain-google-bigtable-python/) for more details
## License
This project is licensed under the Apache 2.0 License - see the LICENSE file in the [package repo](https://github.com/googleapis/langchain-google-bigtable-python/blob/main/LICENSE) for details.
## Disclaimer
This is not an officially supported Google product.

View File

@@ -929,6 +929,41 @@ from langchain_google_community.gmail.search import GmailSearch
from langchain_google_community.gmail.send_message import GmailSendMessage
```
### MCP Toolbox
[MCP Toolbox](https://github.com/googleapis/genai-toolbox) provides a simple and efficient way to connect to your databases, including those on Google Cloud like [Cloud SQL](https://cloud.google.com/sql/docs) and [AlloyDB](https://cloud.google.com/alloydb/docs/overview). With MCP Toolbox, you can seamlessly integrate your database with LangChain to build powerful, data-driven applications.
#### Installation
To get started, [install the Toolbox server and client](https://github.com/googleapis/genai-toolbox/releases/).
[Configure](https://googleapis.github.io/genai-toolbox/getting-started/configure/) a `tools.yaml` to define your tools, and then execute toolbox to start the server:
```bash
toolbox --tools-file "tools.yaml"
```
Then, install the Toolbox client:
```bash
pip install toolbox-langchain
```
#### Getting Started
Here is a quick example of how to use MCP Toolbox to connect to your database:
```python
from toolbox_langchain import ToolboxClient
async with ToolboxClient("http://127.0.0.1:5000") as client:
tools = client.load_toolset()
```
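The loaded tools drop straight into a LangChain agent. A sketch under stated assumptions: a LangGraph ReAct agent and an OpenAI model, neither of which Toolbox prescribes:
```python
import asyncio

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from toolbox_langchain import ToolboxClient

async def main():
    async with ToolboxClient("http://127.0.0.1:5000") as client:
        tools = client.load_toolset()
        # Any LangChain-compatible model works here; gpt-4o-mini is a placeholder
        agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools)
        result = await agent.ainvoke(
            {"messages": [("user", "Which orders shipped last week?")]}
        )
        print(result["messages"][-1].content)

asyncio.run(main())
```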
See [usage example and setup instructions](/docs/integrations/tools/toolbox).
### Memory
Store conversation history using Google Cloud databases.

View File

@@ -2,17 +2,10 @@
This will help you get started with DigitalOcean Gradient [chat models](/docs/concepts/chat_models).
## Overview
### Integration details
| Class | Package | Package downloads | Package latest |
| :--- | :--- | :---: | :---: |
| [ChatGradient](https://python.langchain.com/api_reference/langchain-gradient/chat_models/langchain_gradient.chat_models.ChatGradient.html) | [langchain-gradient](https://python.langchain.com/api_reference/langchain-gradient/) | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-gradient?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-gradient?style=flat-square&label=%20) |
## Setup
langchain-gradient uses DigitalOcean Gradient Platform.
langchain-gradient uses DigitalOcean's Gradient™ AI Platform.
Create an account on DigitalOcean, acquire a `DIGITALOCEAN_INFERENCE_KEY` API key from the Gradient Platform, and install the `langchain-gradient` integration package.
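Once the key is set, usage follows the standard chat-model interface. A minimal sketch; the import path and model id below are assumptions, not documented defaults:
```python
import os

from langchain_gradient import ChatGradient  # top-level import path assumed

os.environ["DIGITALOCEAN_INFERENCE_KEY"] = "your-api-key"

llm = ChatGradient(model="llama3.3-70b-instruct")  # placeholder model id
print(llm.invoke("Say hello from Gradient.").content)
```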

View File

@@ -77,7 +77,7 @@ from langchain_ibm import WatsonxRerank
See a [usage example](/docs/integrations/tools/ibm_watsonx).
```python
from langchain_ibm import WatsonxToolkit
from langchain_ibm.agent_toolkits.utility import WatsonxToolkit
```
## DB2

View File

@@ -40,11 +40,11 @@ embeddings.embed_query("What is the meaning of life?")
```
## LLMs
`ModelScopeLLM` class exposes LLMs from ModelScope.
`ModelScopeEndpoint` class exposes LLMs from ModelScope.
```python
from langchain_modelscope import ModelScopeLLM
from langchain_modelscope import ModelScopeEndpoint
llm = ModelScopeLLM(model="Qwen/Qwen2.5-Coder-32B-Instruct")
llm = ModelScopeEndpoint(model="Qwen/Qwen2.5-Coder-32B-Instruct")
llm.invoke("The meaning of life is")
```

View File

@@ -1,4 +1,4 @@
# Oracle Cloud Infrastructure (OCI)
# Oracle Cloud Infrastructure (OCI)
The `LangChain` integrations related to [Oracle Cloud Infrastructure](https://www.oracle.com/artificial-intelligence/).
@@ -11,17 +11,15 @@ The `LangChain` integrations related to [Oracle Cloud Infrastructure](https://ww
To use, you should have the latest `oci` python SDK and the langchain_community package installed.
```bash
pip install -U oci langchain-community
python -m pip install -U langchain-oci
```
See [chat](/docs/integrations/llms/oci_generative_ai), [complete](/docs/integrations/chat/oci_generative_ai), and [embedding](/docs/integrations/text_embedding/oci_generative_ai) usage examples.
See [chat](/docs/integrations/chat/oci_generative_ai), [complete](/docs/integrations/chat/oci_generative_ai), and [embedding](/docs/integrations/text_embedding/oci_generative_ai) usage examples.
```python
from langchain_community.chat_models import ChatOCIGenAI
from langchain_oci.chat_models import ChatOCIGenAI
from langchain_community.llms import OCIGenAI
from langchain_community.embeddings import OCIGenAIEmbeddings
from langchain_oci.embeddings import OCIGenAIEmbeddings
```
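A minimal invocation sketch; the `model_id`, `service_endpoint`, and `compartment_id` values below are placeholders:
```python
from langchain_oci.chat_models import ChatOCIGenAI

llm = ChatOCIGenAI(
    model_id="cohere.command-r-plus",  # placeholder model id
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
)
print(llm.invoke("Tell me one fact about OCI.").content)
```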
## OCI Data Science Model Deployment Endpoint
@@ -42,8 +40,8 @@ See [chat](/docs/integrations/chat/oci_data_science) and [complete](/docs/integr
```python
from langchain_community.chat_models import ChatOCIModelDeployment
from langchain_oci.chat_models import ChatOCIModelDeployment
from langchain_community.llms import OCIModelDeploymentLLM
from langchain_oci.llms import OCIModelDeploymentLLM
```

View File

@@ -3,13 +3,11 @@
>[Ollama](https://ollama.com/) allows you to run open-source large language models,
> such as [gpt-oss](https://ollama.com/library/gpt-oss), locally.
>
>The `ollama` [package](https://pypi.org/project/ollama/0.5.3/) bundles model weights,
> configuration, and data into a single package, defined by a Modelfile. It optimizes
> setup and configuration details, including GPU usage.
>For a complete list of supported models and model variants, see the
> [Ollama model library](https://ollama.com/search).
>`Ollama` bundles model weights, configuration, and data into a single package, defined by a Modelfile.
>It optimizes setup and configuration details, including GPU usage.
>For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).
See [this guide](/docs/how_to/local_llms/#ollama) for more details
See [this guide](/docs/how_to/local_llms#ollama) for more details
on how to use `ollama` with LangChain.
## Installation and Setup
@@ -25,7 +23,7 @@ Ollama will start as a background service automatically, if this is disabled, ru
ollama serve
```
After starting ollama, run `ollama pull <name-of-model>` to download a model from the [Ollama model library](https://ollama.com/library):
After starting ollama, run `ollama pull <name-of-model>` to download a model from the [Ollama model library](https://ollama.ai/library):
```bash
ollama pull gpt-oss:20b

View File

@@ -0,0 +1,31 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Recallio\n",
"\n",
"[Recallio](https://recallio.ai/) is a powerfull API allowing to store, index, and retrieve application “memories” with built-in fact extraction, dynamic summaries, reranked recall, and a full knowledge-graph layer.\n",
"\n",
"\n",
"## Installation\n",
"\n",
"```bash\n",
"pip install langchain-recallio\n",
"```\n",
"\n",
"```python\n",
"from langchain_recallio.memory import RecallioMemory\n",
"```"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,26 @@
# Scrapeless
[Scrapeless](https://scrapeless.com) offers flexible and feature-rich data acquisition services with extensive parameter customization and multi-format export support.
## Installation and Setup
```bash
pip install langchain-scrapeless
```
You'll need to set up your Scrapeless API key:
```python
import os
os.environ["SCRAPELESS_API_KEY"] = "your-api-key"
```
## Tools
The Scrapeless integration provides several tools (a usage sketch follows the list):
- [ScrapelessDeepSerpGoogleSearchTool](/docs/integrations/tools/scrapeless_scraping_api) - Enables comprehensive extraction of Google SERP data across all result types.
- [ScrapelessDeepSerpGoogleTrendsTool](/docs/integrations/tools/scrapeless_scraping_api) - Retrieves keyword trend data from Google, including popularity over time, regional interest, and related searches.
- [ScrapelessUniversalScrapingTool](/docs/integrations/tools/scrapeless_universal_scraping) - Access and extract data from JS-Render websites that typically block bots.
- [ScrapelessCrawlerCrawlTool](/docs/integrations/tools/scrapeless_crawl) - Crawl a website and its linked pages to extract comprehensive data.
- [ScrapelessCrawlerScrapeTool](/docs/integrations/tools/scrapeless_crawl) - Extract information from a single webpage.
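A rough usage sketch; the `url` input key is an assumption, so check each tool's args schema on the linked pages before relying on it:
```python
import os

from langchain_scrapeless import ScrapelessUniversalScrapingTool

os.environ["SCRAPELESS_API_KEY"] = "your-api-key"

tool = ScrapelessUniversalScrapingTool()
# Hypothetical input shape; see the tool page linked above for real parameters
result = tool.invoke({"url": "https://example.com"})
print(result)
```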

View File

@@ -0,0 +1,43 @@
# langchain-siliconflow
This package contains the LangChain integration with SiliconFlow.
## Installation
```bash
pip install -U langchain-siliconflow
```
And you should configure credentials by setting the following environment variables:
```bash
export SILICONFLOW_API_KEY="your-api-key"
```
You can set the following environment variable to use the `.cn` endpoint:
```bash
export SILICONFLOW_BASE_URL="https://api.siliconflow.cn/v1"
```
## Chat Models
`ChatSiliconFlow` class exposes chat models from SiliconFlow.
```python
from langchain_siliconflow import ChatSiliconFlow
llm = ChatSiliconFlow()
llm.invoke("Sing a ballad of LangChain.")
```
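Streaming works through the standard chat-model interface; a quick sketch:
```python
from langchain_siliconflow import ChatSiliconFlow

llm = ChatSiliconFlow()
# Tokens arrive incrementally as message chunks
for chunk in llm.stream("Sing a ballad of LangChain."):
    print(chunk.content, end="", flush=True)
```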
## Embeddings
`SiliconFlowEmbeddings` class exposes embeddings from SiliconFlow.
```python
from langchain_siliconflow import SiliconFlowEmbeddings
embeddings = SiliconFlowEmbeddings()
embeddings.embed_query("What is the meaning of life?")
```

Some files were not shown because too many files have changed in this diff.