Update readme (#16160)

Nuno Campos · 2024-01-17 13:56:07 -08:00 · committed by GitHub
commit ca014d5b04 (parent 1e80113ac9)

@@ -11,29 +11,50 @@ pip install langchain-core
## What is it?

LangChain Core contains the base abstractions that power the rest of the LangChain ecosystem.
These abstractions are designed to be as modular and simple as possible. Examples of these abstractions include those for language models, document loaders, embedding models, vectorstores, retrievers, and more.
The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.

For full documentation see the [API reference](https://api.python.langchain.com/en/stable/core_api_reference.html).
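As a hedged illustration of that provider pattern (toy class names, not the actual `langchain_core` base classes), a provider only needs to implement the shared abstraction to plug into everything written against it:

```python
from abc import ABC, abstractmethod
from typing import List


class ToyEmbeddings(ABC):
    """Toy stand-in for a shared embedding-model abstraction."""

    @abstractmethod
    def embed_query(self, text: str) -> List[float]:
        """Embed a single query string into a vector."""


class FakeProviderEmbeddings(ToyEmbeddings):
    """A hypothetical provider: deterministic, offline 'embeddings'."""

    def embed_query(self, text: str) -> List[float]:
        # Any downstream component coded against ToyEmbeddings
        # works with this provider unchanged.
        return [float(len(text)), float(sum(map(ord, text)) % 100)]


emb = FakeProviderEmbeddings()
vector = emb.embed_query("hi")  # a two-dimensional toy vector
```

Swapping in a different provider is then a one-line change at construction time, with no changes to the code that consumes the abstraction.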
## 1⃣ Core Interface: Runnables

The concept of a Runnable is central to LangChain Core: it is the interface that most LangChain Core components implement, giving them

- a common invocation interface (invoke, batch, stream, etc.)
- built-in utilities for retries, fallbacks, schemas and runtime configurability
- easy deployment with [LangServe](https://github.com/langchain-ai/langserve)

Examples of components that implement the interface include: LLMs, Chat Models, Prompts, Retrievers, Tools, Output Parsers.

You can use LangChain Core objects in two ways:

1. **imperative**, i.e. call them directly, e.g. `model.invoke(...)`
2. **declarative**, with LangChain Expression Language (LCEL)
| Feature | Imperative | Declarative |
| --------- | ------------------------------- | -------------- |
| Syntax | All of Python | LCEL |
| Tracing | ✅ Automatic | ✅ Automatic |
| Parallel | ✅ with threads or coroutines | ✅ Automatic |
| Streaming | ✅ by yielding | ✅ Automatic |
| Async | ✅ by writing async functions | ✅ Automatic |
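To make the two styles concrete, here is a minimal toy sketch (illustrative only — not the real `Runnable` implementation) of a class exposing `invoke`/`batch`/`stream` plus `|`-composition:

```python
from typing import Any, Callable, Iterator, List


class ToyRunnable:
    """Toy stand-in for the Runnable interface (illustrative only)."""

    def __init__(self, func: Callable[[Any], Any]):
        self.func = func

    def invoke(self, value: Any) -> Any:
        # Single-input invocation.
        return self.func(value)

    def batch(self, values: List[Any]) -> List[Any]:
        # Shared batch method; the real library can parallelize this.
        return [self.invoke(v) for v in values]

    def stream(self, value: Any) -> Iterator[Any]:
        # Toy streaming: yield the final result as a single chunk.
        yield self.invoke(value)

    def __or__(self, other: "ToyRunnable") -> "ToyRunnable":
        # Declarative composition with the | operator, as in LCEL.
        return ToyRunnable(lambda v: other.invoke(self.invoke(v)))


add_one = ToyRunnable(lambda x: x + 1)
double = ToyRunnable(lambda x: x * 2)

# Imperative: call a component directly.
result = add_one.invoke(1)        # -> 2

# Declarative: compose a sequence, then invoke the whole thing.
chain = add_one | double
chained = chain.invoke(1)         # -> 4
batched = chain.batch([1, 2, 3])  # -> [4, 6, 8]
```

The composed `chain` exposes the same `invoke`/`batch`/`stream` surface as its parts, which is what lets a runtime add tracing, parallelism, and streaming uniformly.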
## ⚡️ What is LangChain Expression Language?
LangChain Expression Language (LCEL) is a _declarative language_ for composing LangChain Core runnables into sequences (or DAGs), covering the most common patterns when building with LLMs.
LangChain Core compiles LCEL sequences to an _optimized execution plan_, with automatic parallelization, streaming, tracing, and async support.
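To give a flavor of what automatic streaming through a sequence means, here is a hedged toy sketch (plain Python generators, not the actual execution plan): each step transforms a stream of chunks, so output flows through the whole pipeline as it is produced rather than after each step completes:

```python
from typing import Callable, Iterator


def compose_streaming(
    *steps: Callable[[Iterator[str]], Iterator[str]]
) -> Callable[[Iterator[str]], Iterator[str]]:
    """Chain generator-based steps so chunks flow end to end."""
    def pipeline(chunks: Iterator[str]) -> Iterator[str]:
        for step in steps:
            chunks = step(chunks)  # each step wraps the previous stream
        return chunks
    return pipeline


def upper(chunks: Iterator[str]) -> Iterator[str]:
    for c in chunks:
        yield c.upper()      # transform each chunk as it arrives


def exclaim(chunks: Iterator[str]) -> Iterator[str]:
    for c in chunks:
        yield c + "!"


chain = compose_streaming(upper, exclaim)
out = list(chain(iter(["hello", "world"])))  # -> ['HELLO!', 'WORLD!']
```

Because no step materializes the full input, the first chunk reaches the end of the pipeline before the last chunk has been generated — the property that lets users see progress immediately.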
For more check out the [LCEL docs](https://python.langchain.com/docs/expression_language/).

![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](../../docs/static/img/langchain_stack.png "LangChain Framework Overview")
For more advanced use cases, also check out [LangGraph](https://github.com/langchain-ai/langgraph), which is a graph-based runner for cyclic and recursive LLM workflows.
## 📕 Releases & Versioning

`langchain-core` is currently on version `0.1.x`.
@@ -55,4 +76,13 @@ Patch version increases will occur for:
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/).
## ⛰️ Why build on top of LangChain Core?
The whole LangChain ecosystem is built on top of LangChain Core, so you're in good company when building on top of it. Some of the benefits:
- **Modularity**: LangChain Core is designed around abstractions that are independent of each other, and not tied to any specific model provider.
- **Stability**: We are committed to a stable versioning scheme, and will communicate any breaking changes with advance notice and version bumps.
- **Battle-tested**: LangChain Core components have the largest install base in the LLM ecosystem, and are used in production by many companies.
- **Community**: LangChain Core is developed in the open, and we welcome contributions from the community.