diff --git a/docs/docs/concepts/lcel.mdx b/docs/docs/concepts/lcel.mdx
index 158540818a1..9378ec8e928 100644
--- a/docs/docs/concepts/lcel.mdx
+++ b/docs/docs/concepts/lcel.mdx
@@ -30,13 +30,13 @@ Other benefits include:
   As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
   With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.smith.langchain.com/) for maximum observability and debuggability.
 - **Standard API**: Because all chains are built using the Runnable interface, they can be used in the same way as any other Runnable.
-- [**Deployable with LangServe**](/docs/concepts/langserve): Chains built with LCEL can be deployed using for production use.
+- [**Deployable with LangServe**](/docs/concepts/architecture#langserve): Chains built with LCEL can be deployed using LangServe for production use.

 ## Should I use LCEL?

 LCEL is an [orchestration solution](https://en.wikipedia.org/wiki/Orchestration_(computing)) -- it allows LangChain to handle run-time execution of chains in an optimized way.

-While we have seen users run chains with hundreds of steps in production, we generally recommend using LCEL for simpler orchestration tasks. When the application requires complex state management, branching, cycles or multiple agents, we recommend that users take advantage of [LangGraph](/docs/concepts/langgraph).
+While we have seen users run chains with hundreds of steps in production, we generally recommend using LCEL for simpler orchestration tasks. When the application requires complex state management, branching, cycles, or multiple agents, we recommend that users take advantage of [LangGraph](/docs/concepts/architecture#langgraph).

 In LangGraph, users define graphs that specify the flow of the application. This allows users to keep using LCEL within individual nodes when LCEL is needed, while making it easy to define complex orchestration logic that is more readable and maintainable.
@@ -44,7 +44,7 @@ Here are some guidelines:

 * If you are making a single LLM call, you don't need LCEL; instead call the underlying [chat model](/docs/concepts/chat_models) directly.
 * If you have a simple chain (e.g., prompt + llm + parser, a simple retrieval setup, etc.), LCEL is a reasonable fit if you're taking advantage of the LCEL benefits.
-* If you're building a complex chain (e.g., with branching, cycles, multiple agents, etc.) use [LangGraph](/docs/concepts/langgraph) instead. Remember that you can always use LCEL within individual nodes in LangGraph.
+* If you're building a complex chain (e.g., with branching, cycles, multiple agents, etc.), use [LangGraph](/docs/concepts/architecture#langgraph) instead. Remember that you can always use LCEL within individual nodes in LangGraph.

 ## Composition Primitives
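To make the "simple chain" case in the guidelines above concrete, here is a minimal sketch of a prompt + llm + parser chain composed with LCEL. It assumes the `langchain-openai` package is installed; the `ChatOpenAI` model name and prompt text are illustrative, and any chat model integration could be swapped in.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # illustrative; any chat model integration works

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model name, for illustration only
parser = StrOutputParser()

# The | operator composes the steps into a RunnableSequence, which is
# itself a Runnable and therefore supports invoke/stream/batch.
chain = prompt | llm | parser

print(chain.invoke({"text": "LCEL composes Runnables declaratively."}))
```

Because the composed chain is itself a Runnable, the same object can also be streamed or batched without changing how it was built.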
diff --git a/docs/docs/concepts/messages.mdx b/docs/docs/concepts/messages.mdx
index c3fdcbfbb90..811396883af 100644
--- a/docs/docs/concepts/messages.mdx
+++ b/docs/docs/concepts/messages.mdx
@@ -81,7 +81,7 @@ The five main message types are:

 Other important messages include:

-- [RemoveMessage](#removemessage) -- does not correspond to any role. This is an abstraction, mostly used in [LangGraph](/docs/concepts/langgraph) to manage chat history.
+- [RemoveMessage](#removemessage) -- does not correspond to any role. This is an abstraction, mostly used in [LangGraph](/docs/concepts/architecture#langgraph) to manage chat history.
 - **Legacy** [FunctionMessage](#legacy-functionmessage): corresponds to the **function** role in OpenAI's **legacy** function-calling API.

 You can find more information about **messages** in the [API Reference](https://python.langchain.com/api_reference/core/messages.html).
@@ -202,7 +202,7 @@ Please see [tool calling](/docs/concepts/tool_calling) for more information.

 ### RemoveMessage

 This is a special message type that does not correspond to any role. It is used
-for managing chat history in [LangGraph](/docs/concepts/langgraph).
+for managing chat history in [LangGraph](/docs/concepts/architecture#langgraph).

 Please see the following for more information on how to use the `RemoveMessage`:

diff --git a/docs/docs/concepts/runnables.mdx b/docs/docs/concepts/runnables.mdx
index f5e52abdcb5..3d18e567a25 100644
--- a/docs/docs/concepts/runnables.mdx
+++ b/docs/docs/concepts/runnables.mdx
@@ -105,7 +105,7 @@ In some advanced uses, you may want to programmatically **inspect** the Runnable
 The Runnable interface provides methods to get the [JSON Schema](https://json-schema.org/) of the input and output types of a Runnable, as well as [Pydantic schemas](https://docs.pydantic.dev/latest/) for the input and output types.

-These APIs are mostly used internally for unit-testing and by [LangServe](/docs/concepts/langserve) which uses the APIs for input validation and generation of [OpenAPI documentation](https://www.openapis.org/).
+These APIs are mostly used internally for unit testing and by [LangServe](/docs/concepts/architecture#langserve), which uses the APIs for input validation and generation of [OpenAPI documentation](https://www.openapis.org/).

 In addition to the input and output types, some Runnables have been set up with additional runtime configuration options.
 There are corresponding APIs to get the Pydantic schema and JSON Schema of the configuration options for the Runnable.
@@ -281,7 +281,7 @@ See the [How to handle rate limits](https://python.langchain.com/docs/how_to/cha
 The `configurable` field is used to pass runtime values for configurable attributes of the Runnable.

-It is used frequently in [LangGraph](/docs/concepts/langgraph) with
+It is used frequently in [LangGraph](/docs/concepts/architecture#langgraph) with
 [LangGraph Persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/)
 and [memory](https://langchain-ai.github.io/langgraph/concepts/memory/).
@@ -339,7 +339,7 @@ much more complex and error-prone than simply using `RunnableLambda` or `Runnabl
 This is an advanced feature that is unnecessary for most users. It helps with configuration of large
 "chains" created using the [LangChain Expression Language (LCEL)](/docs/concepts/lcel)
-and is leveraged by [LangServe](/docs/concepts/langserve) for deployed Runnables.
+and is leveraged by [LangServe](/docs/concepts/architecture#langserve) for deployed Runnables.
 :::

 Sometimes you may want to experiment with, or even expose to the end user, multiple different ways of doing things with your Runnable. This could involve adjusting parameters like the temperature in a chat model or even switching between different chat models.
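As a sketch of the runtime configuration described above, the snippet below marks a chat model's temperature as configurable and then overrides it per invocation through the `configurable` field. It loosely follows the configure-runnables how-to guide; the `llm_temperature` id and the `ChatOpenAI` model are illustrative assumptions.

```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI  # illustrative provider choice

model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",  # assumed id; callers reference it at runtime
        name="LLM temperature",
        description="Sampling temperature for the chat model",
    )
)

# Uses the default temperature of 0.
model.invoke("Pick a random number")

# Overrides the configurable field for this call only, via the `configurable` key.
model.with_config(configurable={"llm_temperature": 0.9}).invoke("Pick a random number")
```

The same pattern extends to `configurable_alternatives`, which swaps between entire Runnables (for example, different chat models) rather than individual parameters.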
diff --git a/docs/docs/concepts/streaming.mdx b/docs/docs/concepts/streaming.mdx
index 796c8c9d823..7ab681b533e 100644
--- a/docs/docs/concepts/streaming.mdx
+++ b/docs/docs/concepts/streaming.mdx
@@ -26,7 +26,7 @@ The most common and critical data to stream is the output generated by the LLM i
 Beyond just streaming LLM output, it’s useful to stream progress through more complex workflows or pipelines, giving users a sense of how the application is progressing overall. This could include:

 - **In LangGraph Workflows:**
-With [LangGraph](/docs/concepts/langgraph), workflows are composed of nodes and edges that represent various steps. Streaming here involves tracking changes to the **graph state** as individual **nodes** request updates. This allows for more granular monitoring of which node in the workflow is currently active, giving real-time updates about the status of the workflow as it progresses through different stages.
+With [LangGraph](/docs/concepts/architecture#langgraph), workflows are composed of nodes and edges that represent various steps. Streaming here involves tracking changes to the **graph state** as individual **nodes** request updates. This allows for more granular monitoring of which node in the workflow is currently active, giving real-time updates about the status of the workflow as it progresses through different stages.

 - **In LCEL Pipelines:**
 Streaming updates from an [LCEL](/docs/concepts/lcel) pipeline involves capturing progress from individual **sub-runnables**. For example, as different steps or components of the pipeline execute, you can stream which sub-runnable is currently running, providing real-time insight into the overall pipeline's progress.
@@ -75,7 +75,7 @@ When using `stream()` or `astream()` with chat models, the output is streamed as

 #### Usage with LangGraph

-[LangGraph](/docs/concepts/langgraph) compiled graphs are [Runnables](/docs/concepts/runnables) and support the standard streaming APIs.
+[LangGraph](/docs/concepts/architecture#langgraph) compiled graphs are [Runnables](/docs/concepts/runnables) and support the standard streaming APIs.

 When using the *stream* and *astream* methods with LangGraph, you can specify **one or more** [streaming modes](https://langchain-ai.github.io/langgraph/reference/types/#langgraph.types.StreamMode), which allow you to control the type of output that is streamed. The available streaming modes are:
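The streaming modes mentioned above are easiest to see on a toy graph. The sketch below builds a one-node LangGraph graph and streams its state updates; it assumes the `langgraph` package is installed, and the state fields and node are made up for illustration.

```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph

class State(TypedDict):
    topic: str
    draft: str

def write_draft(state: State) -> dict:
    # A stand-in for a real LLM-backed node.
    return {"draft": f"A short note about {state['topic']}."}

builder = StateGraph(State)
builder.add_node("write_draft", write_draft)
builder.add_edge(START, "write_draft")
builder.add_edge("write_draft", END)
graph = builder.compile()

# stream_mode="updates" yields the update each node produces as it runs;
# a list such as stream_mode=["updates", "values"] yields (mode, chunk) pairs.
for chunk in graph.stream({"topic": "streaming"}, stream_mode="updates"):
    print(chunk)
```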
@@ -101,7 +101,7 @@ If you compose multiple Runnables using [LangChain’s Expression Language (LCEL
 :::tip
 Use the `astream_events` API to access custom data and intermediate outputs from LLM applications built entirely with [LCEL](/docs/concepts/lcel).

-While this API is available for use with [LangGraph](/docs/concepts/langgraph) as well, it is usually not necessary when working with LangGraph, as the `stream` and `astream` methods provide comprehensive streaming capabilities for LangGraph graphs.
+While this API is available for use with [LangGraph](/docs/concepts/architecture#langgraph) as well, it is usually not necessary when working with LangGraph, as the `stream` and `astream` methods provide comprehensive streaming capabilities for LangGraph graphs.
 :::

 For chains constructed using **LCEL**, the `.stream()` method only streams the output of the final step from the chain. This might be sufficient for some applications, but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of the chain alongside the final output. For example, you may want to return sources alongside the final generation when building a chat-over-documents app.
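As a sketch of how `astream_events` surfaces intermediate output, the example below streams token chunks from the middle of a chain rather than only the parser's final result. It reuses the illustrative `ChatOpenAI` model from the earlier sketches; in a real chat-over-documents app you would also watch for retriever events (for example, `on_retriever_end`) to collect sources.

```python
import asyncio

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # illustrative model choice

chain = (
    ChatPromptTemplate.from_template("Answer briefly: {question}")
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

async def main() -> None:
    # Every step of the chain emits events, so intermediate values are
    # visible even though .stream() would only yield the final step's output.
    async for event in chain.astream_events({"question": "What is LCEL?"}, version="v2"):
        if event["event"] == "on_chat_model_stream":
            print(event["data"]["chunk"].content, end="", flush=True)

asyncio.run(main())
```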