---
sidebar_class_name: hidden
---
# LangChain Expression Language (LCEL)
LangChain Expression Language, or LCEL, is a declarative way to compose chains easily.
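To give a feel for what "declarative composition" means, here is a minimal pure-Python sketch of the pipe (`|`) idea — an illustration of the concept, not LangChain's actual implementation:

```python
# Illustrative sketch: each step is a callable, and `|` chains steps so the
# output of one becomes the input of the next.
class Runnable:
    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        # Compose: run self first, then feed the result to `other`.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

    def invoke(self, x):
        return self.func(x)


prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
fake_llm = Runnable(lambda text: text.upper())  # stands in for a model call

chain = prompt | fake_llm
print(chain.invoke("bears"))  # -> TELL ME A JOKE ABOUT BEARS
```

In real LCEL, prompts, models, and output parsers are composed the same way with `|`, and the resulting chain is invoked with `.invoke()`.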
There are several benefits to writing chains in this manner (as opposed to writing normal code):
**Async, Batch, and Streaming Support**
Any chain constructed this way will automatically have full sync, async, batch, and streaming support.
This makes it easy to prototype a chain in a Jupyter notebook using the sync interface, and then expose it as an async streaming interface.
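The shape of this idea can be sketched in plain Python — one definition automatically gaining sync, batch, async, and streaming entry points, loosely mirroring LCEL's `invoke` / `batch` / `ainvoke` / `stream` methods (the class below is illustrative, not LangChain's implementation):

```python
import asyncio


# Illustrative sketch: a single function definition exposed through sync,
# batch, async, and streaming interfaces.
class Step:
    def __init__(self, func):
        self.func = func

    def invoke(self, x):
        return self.func(x)

    def batch(self, xs):
        return [self.func(x) for x in xs]

    async def ainvoke(self, x):
        # Run the sync function in a worker thread so the event loop
        # is not blocked.
        return await asyncio.to_thread(self.func, x)

    def stream(self, x):
        # Yield the result piece by piece (here, character by character).
        for chunk in str(self.func(x)):
            yield chunk


double = Step(lambda n: n * 2)
assert double.invoke(3) == 6
assert double.batch([1, 2, 3]) == [2, 4, 6]
assert asyncio.run(double.ainvoke(5)) == 10
assert "".join(double.stream(4)) == "8"
```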
**Fallbacks**
The non-determinism of LLMs makes it important to be able to handle errors gracefully.
With LCEL you can easily attach fallbacks to any chain.
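In LCEL this is done with `.with_fallbacks()` on a runnable. The core idea — try each candidate in order and return the first success — can be sketched in plain Python (function names here are hypothetical, for illustration only):

```python
# Illustrative sketch of the fallback pattern: try the primary, then each
# fallback in order, returning the first result that succeeds.
def with_fallbacks(primary, *fallbacks):
    def run(x):
        last_error = None
        for candidate in (primary, *fallbacks):
            try:
                return candidate(x)
            except Exception as err:
                last_error = err
        raise RuntimeError("all candidates failed") from last_error
    return run


def flaky_model(prompt):
    # Simulate an unreliable provider (rate limit, outage, etc.).
    raise TimeoutError("rate limited")


def backup_model(prompt):
    return f"echo: {prompt}"


chain = with_fallbacks(flaky_model, backup_model)
print(chain("hi"))  # -> echo: hi
```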
**Parallelism**
Since LLM applications involve (sometimes long) API calls, it often becomes important to run things in parallel.
With LCEL syntax, any components that can run in parallel are automatically run in parallel.
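The fan-out idea — independent branches receiving the same input and running concurrently — can be sketched with a thread pool (this is a simplified illustration in the spirit of LCEL's `RunnableParallel`, not its implementation):

```python
from concurrent.futures import ThreadPoolExecutor


# Illustrative sketch: run independent branches on the same input
# concurrently, collecting results by branch name.
def run_parallel(branches, x):
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(func, x) for name, func in branches.items()}
        return {name: fut.result() for name, fut in futures.items()}


branches = {
    "summary": lambda text: text[:5],   # stands in for a summarization chain
    "length": lambda text: len(text),   # stands in for another branch
}
result = run_parallel(branches, "hello world")
print(result)  # -> {'summary': 'hello', 'length': 11}
```

Because the branches share no state, the wall-clock time is roughly that of the slowest branch rather than the sum of all of them — the same reason parallelism matters for chains that make several long API calls.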
**Seamless LangSmith Tracing Integration**
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
With LCEL, **all** steps are automatically logged to [LangSmith](https://smith.langchain.com) for maximal observability and debuggability.
#### [Interface](/docs/expression_language/interface)
The base interface shared by all LCEL objects
#### [How to](/docs/expression_language/how_to)
How to use core features of LCEL
#### [Cookbook](/docs/expression_language/cookbook)
Examples of common LCEL usage patterns
#### [Why use LCEL](/docs/expression_language/why)
A deeper dive into the benefits of LCEL