mirror of https://github.com/hwchase17/langchain.git
synced 2025-09-21 10:31:23 +00:00

chore(docs): update package READMEs (#32869)

- Fix badges
- Focus on agents
- Cut down fluff
# 🦜🍎️ LangChain Core

[Downloads](https://pepy.tech/project/langchain_core)
[License: MIT](https://opensource.org/licenses/MIT)
[Downloads/month](https://pypistats.org/packages/langchain-core)
## Quick Install

`pip install langchain-core`

## What is it?
LangChain Core contains the base abstractions that power the rest of the LangChain ecosystem.

These abstractions are designed to be as modular and simple as possible. Examples of these abstractions include those for language models, document loaders, embedding models, vectorstores, retrievers, and more.
The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.

For full documentation see the [API reference](https://python.langchain.com/api_reference/core/index.html).
## ⛰️ Why build on top of LangChain Core?

The LangChain ecosystem is built on top of `langchain-core`. Some of the benefits:

- **Modularity**: We've designed Core around abstractions that are independent of each other, and not tied to any specific model provider.
- **Stability**: We are committed to a stable versioning scheme, and will communicate any breaking changes with advance notice and version bumps.
- **Battle-tested**: Core components have the largest install base in the LLM ecosystem, and are used in production by many companies.
## 1️⃣ Core Interface: Runnables

The concept of a `Runnable` is central to LangChain Core – it is the interface that most LangChain Core components implement, giving them

- A common invocation interface (`invoke()`, `batch()`, `stream()`, etc.)
- Built-in utilities for retries, fallbacks, schemas and runtime configurability
- Easy deployment with [LangGraph](https://github.com/langchain-ai/langgraph)
For more check out the [`Runnable` docs](https://python.langchain.com/docs/concepts/runnables/). Examples of components that implement the interface include: Chat Models, Tools, Retrievers, and Output Parsers.

You can use LangChain Core objects in two ways:

1. **Imperative**, i.e. call them directly, e.g. `model.invoke(...)`
2. **Declarative**, with LangChain Expression Language (LCEL)
3. Or a mix of both! E.g. one of the steps in your LCEL sequence can be a custom function
| Feature   | Imperative                      | Declarative    |
| --------- | ------------------------------- | -------------- |
| Syntax    | All of Python                   | LCEL           |
| Tracing   | ✅ – Automatic                  | ✅ – Automatic |
| Parallel  | ✅ – with threads or coroutines | ✅ – Automatic |
| Streaming | ✅ – by yielding                | ✅ – Automatic |
| Async     | ✅ – by writing async functions | ✅ – Automatic |
## ⚡️ What is LangChain Expression Language?

LangChain Expression Language (LCEL) is a _declarative language_ for composing LangChain Core runnables into sequences (or DAGs), covering the most common patterns when building with LLMs.

LangChain Core compiles LCEL sequences to an _optimized execution plan_, with automatic parallelization, streaming, tracing, and async support.

For more check out the [LCEL docs](https://python.langchain.com/docs/concepts/lcel/).

For more advanced use cases, also check out [LangGraph](https://github.com/langchain-ai/langgraph), which is a graph-based runner for cyclic and recursive LLM workflows.
## 📕 Releases & Versioning

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/).