The agent engineering platform.
LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.
Tip
Just getting started? Check out Deep Agents — a higher-level package built on LangChain for agents that have built-in capabilities for common usage patterns such as planning, subagents, file system usage, and more.
Quickstart
pip install langchain
# or
uv add langchain
from langchain.chat_models import init_chat_model

# Requires credentials for the chosen provider (e.g. OPENAI_API_KEY)
model = init_chat_model("openai:gpt-5.4")
result = model.invoke("Hello, world!")
If you're looking for more advanced customization or agent orchestration, check out LangGraph, our framework for building controllable agent workflows.
For an equivalent JS/TS library, check out LangChain.js.
Tip
For developing, debugging, and deploying AI agents and LLM applications, see LangSmith.
LangChain ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
- Deep Agents — Build agents that can plan, use subagents, and leverage file systems for complex tasks
- LangGraph — Build agents that can reliably handle complex tasks with our low-level agent orchestration framework
- Integrations — Chat & embedding models, tools & toolkits, and more
- LangSmith — Agent evals, observability, and debugging for LLM apps
- LangSmith Deployment — Deploy and scale agents with a purpose-built platform for long-running, stateful workflows
Why use LangChain?
LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
- Real-time data augmentation — Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more
- Model interoperability — Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly — LangChain's abstractions keep you moving without losing momentum
- Rapid prototyping — Quickly build and iterate on LLM applications with LangChain's modular, component-based architecture. Test different approaches and workflows without rebuilding from scratch, accelerating your development cycle
- Production-ready features — Deploy reliable applications with built-in support for monitoring, evaluation, and debugging through integrations like LangSmith. Scale with confidence using battle-tested patterns and best practices
- Vibrant community and ecosystem — Leverage a rich ecosystem of integrations, templates, and community-contributed components. Benefit from continuous improvements and stay up-to-date with the latest AI developments through an active open-source community
- Flexible abstraction layers — Work at the level of abstraction that suits your needs — from high-level chains for quick starts to low-level components for fine-grained control. LangChain grows with your application's complexity
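The "standard interface" behind model interoperability can be sketched in plain Python. This is a hypothetical illustration of the pattern, not LangChain's actual classes: `ChatModel`, `EchoModel`, `ShoutModel`, and `answer` are made-up names standing in for a shared interface and two interchangeable providers.

```python
from typing import Protocol


class ChatModel(Protocol):
    """Hypothetical minimal 'standard interface': anything with invoke()."""

    def invoke(self, prompt: str) -> str: ...


class EchoModel:
    # Stand-in for one provider's model
    def invoke(self, prompt: str) -> str:
        return f"echo: {prompt}"


class ShoutModel:
    # Stand-in for a different provider; same interface, swappable freely
    def invoke(self, prompt: str) -> str:
        return prompt.upper()


def answer(model: ChatModel, question: str) -> str:
    # Application code depends only on the shared interface,
    # so swapping providers requires no changes here.
    return model.invoke(question)


print(answer(EchoModel(), "hello"))   # echo: hello
print(answer(ShoutModel(), "hello"))  # HELLO
```

Because `answer` is written against the interface rather than a concrete provider, switching models is a one-line change at the call site — the same property LangChain's abstractions provide for real chat models.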
Documentation
- docs.langchain.com – Comprehensive documentation, including conceptual overviews and guides
- reference.langchain.com/python – API reference docs for LangChain packages
- Chat LangChain – Chat with the LangChain documentation and get answers to your questions
Discussions: Visit the LangChain Forum to connect with the community and share your technical questions, ideas, and feedback.
Additional resources
- Contributing Guide – Learn how to contribute to LangChain projects and find good first issues.
- Code of Conduct – Our community guidelines and standards for participation.
- LangChain Academy – Comprehensive, free courses on LangChain libraries and products, made by the LangChain team.