Mason Daugherty 5a9b1ec2dc refactor(langchain-classic): retarget deprecations to create_agent, other chores (#37164)
Sweep classic deprecations so every removal lands on `2.0.0`, runtime
warnings carry the auto-generated since/removal/alternative line, and
replacements point to `langchain.agents.create_agent` and
`with_structured_output(...)` instead of pre-v1 LangGraph +
`python.langchain.com` links.

## Changes

- **Bump removal targets from `1.0` / `1.0.0` to `2.0.0`** across
agents, chains, memory, retrievers, structured-output, vectorstore
toolkits, and the `langchain_classic._api.module_import` shim — gives
users a real runway now that v1 has shipped.
- **Move bespoke `message=` strings onto `addendum=`** (or split into
`alternative=` + `addendum=`). `warn_deprecated` skips the
auto-generated since/removal/alternative line whenever `message=` is
set, so the prior pattern silently dropped that info from the runtime
`LangChainDeprecationWarning`. Matches the pattern already used in
`HTMLHeaderTextSplitter.split_text_from_url`, which is updated for
consistency.
- **Repoint `alternative=` at v1 replacements**: chains/memory/agent
toolkits → `langchain.agents.create_agent` (with checkpointer or
retrieval-tool guidance in the addendum); `openai_functions` and
`chains/structured_output` → `ChatModel.with_structured_output(...)`;
`openapi` chains → `ChatModel.bind_tools(...)` + HTTP client.
`ConversationChain` no longer points at `RunnableWithMessageHistory`.
- **Refresh `AGENT_DEPRECATION_WARNING`** in
`langchain_classic._api.deprecation` — drop stale LangGraph and
`python.langchain.com` links in favor of `langchain.agents.create_agent`
and the `docs.langchain.com/oss/python/migrate/langchain-v1` guide.
Propagates to all 13 caller sites in `agents/`.
- **Newly deprecate `langchain_classic.chat_models.init_chat_model` and
`langchain_classic.embeddings.init_embeddings`** with the framing
*"maintained in `langchain`; `langchain-classic` retains this entry
point for import-compatibility only"*. The classic docstring examples
and the warning admonition both point at `langchain.chat_models`.
- **Improve `init_chat_model` docstrings** in both `langchain_v1` and
the classic copy: clarify `provider:model` prefix vs. `model_provider=`,
recommend pinned IDs over moving aliases, add the `upstage` provider
row, and refresh examples to GA models (`gpt-5.5`, `claude-opus-4-7`).
- **Standardize partner Anthropic deprecations**: replace
`AnthropicLLM`'s `model_validator(raise_warning)` with
`@deprecated(since="0.1.0", removal="2.0.0",
alternative="ChatAnthropic")`, and pin the `ChatAnthropic`
`output_format` runtime warning at `langchain-anthropic 2.0.0` instead
of "a future version".
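The `message=` vs. `addendum=` distinction above can be sketched in isolation. This is a simplified stand-in, not the real `warn_deprecated` from `langchain_core._api` (which takes more parameters and does more formatting); it only models the behavior the bullet describes — `message=` replaces the auto-generated since/removal/alternative line entirely, while `addendum=` is appended to it:

```python
import warnings


class LangChainDeprecationWarning(DeprecationWarning):
    """Stand-in for the real warning class."""


def warn_deprecated(*, since, removal, alternative=None, message=None,
                    addendum=None, name=""):
    """Simplified sketch of the message-suppression behavior."""
    if message:
        # A bespoke message replaces the auto-generated line wholesale,
        # silently dropping the since/removal/alternative info.
        text = message
    else:
        text = f"`{name}` was deprecated in {since} and will be removed in {removal}."
        if alternative:
            text += f" Use {alternative} instead."
    if addendum:
        # An addendum is appended, so the auto-generated line survives.
        text += f" {addendum}"
    warnings.warn(text, LangChainDeprecationWarning, stacklevel=2)
```

Under this model, moving bespoke guidance from `message=` to `addendum=` is exactly what restores the since/removal/alternative line in the runtime warning.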
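A class-level swap like the `AnthropicLLM` change (decorator instead of a warning-raising model validator) can be sketched with a minimal decorator. The real `@deprecated` in `langchain_core._api` is considerably richer (docstring rewriting, overridable warning class, etc.); this hypothetical version only shows the shape — warn once per instantiation with since/removal/alternative baked in:

```python
import functools
import warnings


def deprecated(*, since, removal, alternative):
    """Minimal sketch of a class deprecation decorator."""
    def decorate(cls):
        original_init = cls.__init__

        @functools.wraps(original_init)
        def init_with_warning(self, *args, **kwargs):
            warnings.warn(
                f"{cls.__name__} was deprecated in {since} and will be "
                f"removed in {removal}. Use {alternative} instead.",
                DeprecationWarning,
                stacklevel=2,
            )
            original_init(self, *args, **kwargs)

        cls.__init__ = init_with_warning
        return cls

    return decorate


@deprecated(since="0.1.0", removal="2.0.0", alternative="ChatAnthropic")
class AnthropicLLM:
    def __init__(self, model: str = "claude-2"):
        self.model = model
```

Compared with a `model_validator(raise_warning)`, the decorator keeps the deprecation metadata declarative and lets tooling collect since/removal targets without instantiating the class.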
2026-05-03 13:15:59 -04:00

The agent engineering platform.


LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.

Note

Looking for the JS/TS library? Check out LangChain.js.

Quickstart

```bash
pip install langchain
# or
uv add langchain
```

```python
from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-5.4")
result = model.invoke("Hello, world!")
```
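`init_chat_model` accepts either a `provider:model` prefixed ID (as above) or a separate `model_provider=` argument. A hedged sketch of how such an ID can be resolved — the precedence shown here is an assumption of this sketch, and the real implementation also infers providers from well-known model-name prefixes:

```python
from __future__ import annotations


def parse_model_id(model: str, model_provider: str | None = None) -> tuple[str, str]:
    """Split a model ID into (provider, model_name).

    In this sketch, an explicit ``model_provider=`` wins; otherwise the
    text before the first ":" is treated as the provider.
    """
    if model_provider is not None:
        return model_provider, model
    if ":" in model:
        provider, _, name = model.partition(":")
        return provider, name
    raise ValueError(f"Cannot infer provider from model ID {model!r}")
```

Using only the first `:` as the separator leaves model names containing colons intact, e.g. `"openai:ft:gpt-4o"` still yields provider `"openai"`.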

If you're looking for more advanced customization or agent orchestration, check out LangGraph, our framework for building controllable agent workflows.

Tip

For developing, debugging, and deploying AI agents and LLM applications, see LangSmith.

LangChain ecosystem

While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.

  • Deep Agents — Build agents that can plan, use subagents, and leverage file systems for complex tasks
  • LangGraph — Build agents that can reliably handle complex tasks with our low-level agent orchestration framework
  • Integrations — Chat & embedding models, tools & toolkits, and more
  • LangSmith — Agent evals, observability, and debugging for LLM apps
  • LangSmith Deployment — Deploy and scale agents with a purpose-built platform for long-running, stateful workflows

Why use LangChain?

LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.

  • Real-time data augmentation — Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more
  • Model interoperability — Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly — LangChain's abstractions keep you moving without losing momentum
  • Rapid prototyping — Quickly build and iterate on LLM applications with LangChain's modular, component-based architecture. Test different approaches and workflows without rebuilding from scratch, accelerating your development cycle
  • Production-ready features — Deploy reliable applications with built-in support for monitoring, evaluation, and debugging through integrations like LangSmith. Scale with confidence using battle-tested patterns and best practices
  • Vibrant community and ecosystem — Leverage a rich ecosystem of integrations, templates, and community-contributed components. Benefit from continuous improvements and stay up-to-date with the latest AI developments through an active open-source community
  • Flexible abstraction layers — Work at the level of abstraction that suits your needs — from high-level chains for quick starts to low-level components for fine-grained control. LangChain grows with your application's complexity

Documentation

Discussions: Visit the LangChain Forum to connect with the community and share all of your technical questions, ideas, and feedback.

Additional resources

  • Contributing Guide: Learn how to contribute to LangChain projects and find good first issues.
  • Code of Conduct: Our community guidelines and standards for participation.
  • LangChain Academy: Comprehensive, free courses on LangChain libraries and products, made by the LangChain team.