From c124e673252347eac0f75c5541c7a9eaf162b33f Mon Sep 17 00:00:00 2001
From: Mason Daugherty
Date: Tue, 9 Sep 2025 10:50:32 -0400
Subject: [PATCH] chore(docs): update package `README`s (#32869)

- Fix badges
- Focus on agents
- Cut down fluff
---
 libs/core/README.md           | 61 +++++++++--------------------
 libs/langchain/README.md      | 32 ++++++++----------
 libs/langchain_v1/README.md   | 32 ++++++++----------
 libs/text-splitters/README.md |  4 +--
 4 files changed, 46 insertions(+), 83 deletions(-)

diff --git a/libs/core/README.md b/libs/core/README.md
index 50b9ebf27aa..a2f51c05049 100644
--- a/libs/core/README.md
+++ b/libs/core/README.md
@@ -1,7 +1,7 @@
# 🦜🍎️ LangChain Core
-[![Downloads](https://static.pepy.tech/badge/langchain_core/month)](https://pepy.tech/project/langchain_core)
-[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
+[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-core)](https://pypistats.org/packages/langchain-core)
## Quick Install
@@ -11,51 +11,31 @@ pip install langchain-core
## What is it?
-LangChain Core contains the base abstractions that power the rest of the LangChain ecosystem.
+LangChain Core contains the base abstractions that power the LangChain ecosystem.
-These abstractions are designed to be as modular and simple as possible. Examples of these abstractions include those for language models, document loaders, embedding models, vectorstores, retrievers, and more.
+These abstractions are designed to be as modular and simple as possible.
The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.
For full documentation see the [API reference](https://python.langchain.com/api_reference/core/index.html).
+## ⛰️ Why build on top of LangChain Core?
+
+The LangChain ecosystem is built on top of `langchain-core`. Some of the benefits:
+
+- **Modularity**: We've designed Core around abstractions that are independent of each other, and not tied to any specific model provider.
+- **Stability**: We are committed to a stable versioning scheme, and will communicate any breaking changes with advance notice and version bumps.
+- **Battle-tested**: Core components have the largest install base in the LLM ecosystem, and are used in production by many companies.
+
## 1️⃣ Core Interface: Runnables
The concept of a `Runnable` is central to LangChain Core – it is the interface that most LangChain Core components implement, giving them
-- a common invocation interface (`invoke()`, `batch()`, `stream()`, etc.)
-- built-in utilities for retries, fallbacks, schemas and runtime configurability
-- easy deployment with [LangGraph](https://github.com/langchain-ai/langgraph)
+- A common invocation interface (`invoke()`, `batch()`, `stream()`, etc.)
+- Built-in utilities for retries, fallbacks, schemas and runtime configurability
+- Easy deployment with [LangGraph](https://github.com/langchain-ai/langgraph)
-For more check out the [runnable docs](https://python.langchain.com/docs/concepts/runnables/). Examples of components that implement the interface include: LLMs, Chat Models, Prompts, Retrievers, Tools, Output Parsers.
-
-You can use LangChain Core objects in two ways:
-
-1. **imperative**, ie. call them directly, eg. `model.invoke(...)`
-
-2. 
**declarative**, with LangChain Expression Language (LCEL) - -3. or a mix of both! eg. one of the steps in your LCEL sequence can be a custom function - -| Feature | Imperative | Declarative | -| --------- | ------------------------------- | -------------- | -| Syntax | All of Python | LCEL | -| Tracing | ✅ – Automatic | ✅ – Automatic | -| Parallel | ✅ – with threads or coroutines | ✅ – Automatic | -| Streaming | ✅ – by yielding | ✅ – Automatic | -| Async | ✅ – by writing async functions | ✅ – Automatic | - -## ⚡️ What is LangChain Expression Language? - -LangChain Expression Language (LCEL) is a _declarative language_ for composing LangChain Core runnables into sequences (or DAGs), covering the most common patterns when building with LLMs. - -LangChain Core compiles LCEL sequences to an _optimized execution plan_, with automatic parallelization, streaming, tracing, and async support. - -For more check out the [LCEL docs](https://python.langchain.com/docs/concepts/lcel/). - -![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/static/svg/langchain_stack_112024.svg "LangChain Framework Overview") - -For more advanced use cases, also check out [LangGraph](https://github.com/langchain-ai/langgraph), which is a graph-based runner for cyclic and recursive LLM workflows. +For more check out the [`Runnable` docs](https://python.langchain.com/docs/concepts/runnables/). Examples of components that implement the interface include: Chat Models, Tools, Retrievers, and Output Parsers. ## 📕 Releases & Versioning @@ -77,12 +57,3 @@ Patch version increases will occur for: As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation. For detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/). - -## ⛰️ Why build on top of LangChain Core? - -The whole LangChain ecosystem is built on top of LangChain Core, so you're in good company when building on top of it. Some of the benefits: - -- **Modularity**: LangChain Core is designed around abstractions that are independent of each other, and not tied to any specific model provider. -- **Stability**: We are committed to a stable versioning scheme, and will communicate any breaking changes with advance notice and version bumps. -- **Battle-tested**: LangChain Core components have the largest install base in the LLM ecosystem, and are used in production by many companies. -- **Community**: LangChain Core is developed in the open, and we welcome contributions from the community. 
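To make the shared invocation interface from the updated core README concrete, here is a minimal sketch, assuming only `langchain-core` is installed; the wrapped lambda is an arbitrary stand-in for any component that implements `Runnable`:

```python
# Minimal sketch of the common Runnable interface (invoke / batch / stream).
# Assumes `pip install langchain-core`; the lambda is a stand-in for any
# Runnable component (chat models, tools, retrievers, output parsers, ...).
from langchain_core.runnables import RunnableLambda

add_one = RunnableLambda(lambda x: x + 1)

print(add_one.invoke(1))         # single input -> 2
print(add_one.batch([1, 2, 3]))  # list of inputs, run in parallel -> [2, 3, 4]

# stream() yields output chunks as they become available (one chunk here).
for chunk in add_one.stream(1):
    print(chunk)
```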
diff --git a/libs/langchain/README.md b/libs/langchain/README.md index a1fd0a5e7d7..1960f29e79b 100644 --- a/libs/langchain/README.md +++ b/libs/langchain/README.md @@ -2,13 +2,9 @@ ⚡ Building applications with LLMs through composability ⚡ -[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases) -[![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain) -[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) +[![PyPI - License](https://img.shields.io/pypi/l/langchain?style=flat-square)](https://opensource.org/licenses/MIT) +[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain)](https://pypistats.org/packages/langchain) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai) -[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain) -[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain) -[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://star-history.com/#langchain-ai/langchain) Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs). @@ -54,6 +50,18 @@ Please see [our full documentation](https://python.langchain.com) on: There are five main areas that LangChain is designed to help with. These are, in increasing order of complexity: +**🤖 Agents:** + +Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents. + +**📚 Retrieval Augmented Generation:** + +Retrieval Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources. + +**🧐 Evaluation:** + +Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this. + **📃 Models and Prompts:** This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with chat models and LLMs. @@ -62,18 +70,6 @@ This includes prompt management, prompt optimization, a generic interface for al Chains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. -**📚 Retrieval Augmented Generation:** - -Retrieval Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources. 
- -**🤖 Agents:** - -Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents. - -**🧐 Evaluation:** - -Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this. - For more information on these concepts, please see our [full documentation](https://python.langchain.com). ## 💁 Contributing diff --git a/libs/langchain_v1/README.md b/libs/langchain_v1/README.md index a1fd0a5e7d7..1960f29e79b 100644 --- a/libs/langchain_v1/README.md +++ b/libs/langchain_v1/README.md @@ -2,13 +2,9 @@ ⚡ Building applications with LLMs through composability ⚡ -[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases) -[![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain) -[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) +[![PyPI - License](https://img.shields.io/pypi/l/langchain?style=flat-square)](https://opensource.org/licenses/MIT) +[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain)](https://pypistats.org/packages/langchain) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai) -[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain) -[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain) -[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://star-history.com/#langchain-ai/langchain) Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs). @@ -54,6 +50,18 @@ Please see [our full documentation](https://python.langchain.com) on: There are five main areas that LangChain is designed to help with. These are, in increasing order of complexity: +**🤖 Agents:** + +Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents. + +**📚 Retrieval Augmented Generation:** + +Retrieval Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources. + +**🧐 Evaluation:** + +Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this. + **📃 Models and Prompts:** This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with chat models and LLMs. 
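Since both `langchain` READMEs now lead with agents, a hedged sketch of the decide-act-observe loop they describe may help. It uses LangGraph's prebuilt ReAct agent; the model name and the `get_weather` tool are illustrative assumptions rather than anything this patch prescribes:

```python
# Hypothetical sketch of the agent loop described above, built on LangGraph's
# prebuilt ReAct agent. The model identifier and the toy tool are assumptions;
# any chat model supported by init_chat_model (with its API key configured)
# and any set of tools could be substituted.
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent


@tool
def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for a city."""
    return f"It is always sunny in {city}."


model = init_chat_model("openai:gpt-4o-mini")
agent = create_react_agent(model, tools=[get_weather])

# The agent decides which tool to call, observes the result, and repeats
# until it can answer.
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in Paris?"}]}
)
print(result["messages"][-1].content)
```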
@@ -62,18 +70,6 @@ This includes prompt management, prompt optimization, a generic interface for al Chains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. -**📚 Retrieval Augmented Generation:** - -Retrieval Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources. - -**🤖 Agents:** - -Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents. - -**🧐 Evaluation:** - -Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this. - For more information on these concepts, please see our [full documentation](https://python.langchain.com). ## 💁 Contributing diff --git a/libs/text-splitters/README.md b/libs/text-splitters/README.md index 4c1a04785ae..21baa648b21 100644 --- a/libs/text-splitters/README.md +++ b/libs/text-splitters/README.md @@ -1,7 +1,7 @@ # 🦜✂️ LangChain Text Splitters -[![Downloads](https://static.pepy.tech/badge/langchain_text_splitters/month)](https://pepy.tech/project/langchain_text_splitters) -[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) +[![PyPI - License](https://img.shields.io/pypi/l/langchain-text-splitters?style=flat-square)](https://opensource.org/licenses/MIT) +[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain-text-splitters)](https://pypistats.org/packages/langchain-text-splitters) ## Quick Install
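Finally, for the text-splitters package touched above, a small usage sketch, assuming `langchain-text-splitters` is installed; the sample text and chunk sizes are arbitrary:

```python
# Minimal sketch of splitting a long string into overlapping chunks.
# Assumes the langchain-text-splitters package is installed; the chunk_size
# and chunk_overlap values are arbitrary examples.
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)

text = "LangChain Text Splitters breaks long documents into smaller chunks. " * 20
chunks = splitter.split_text(text)

print(len(chunks), "chunks; first chunk:", chunks[0])
```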