docs: fix more links (#27809)

Fix more broken links
Eugene Yurtsev 2024-10-31 17:15:46 -04:00 committed by GitHub
parent e3ea365725
commit 2f6254605d
41 changed files with 77 additions and 77 deletions

View File

@@ -73,7 +73,7 @@ in certain scenarios.
If you are experiencing issues with streaming, callbacks or tracing in async code and are using Python 3.9 or 3.10, this is a likely cause.
-Please read [Propagation RunnableConfig](/docs/concepts/runnables#propagation-RunnableConfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
+Please read [Propagation RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
## How to use in ipython and jupyter notebooks
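For context on the pattern the fixed link describes: on Python 3.9/3.10, context variables do not propagate into new async tasks, so the `RunnableConfig` must be passed down by hand. A minimal sketch (toy runnables; standard `langchain-core` APIs):

```python
from langchain_core.runnables import RunnableConfig, RunnableLambda

async def reverse(text: str) -> str:
    return text[::-1]

reverse_ = RunnableLambda(reverse)

# Accept the config and pass it to every sub-call explicitly;
# on Python 3.11+ contextvars make this automatic.
async def reverse_twice(text: str, config: RunnableConfig) -> str:
    once = await reverse_.ainvoke(text, config)
    return await reverse_.ainvoke(once, config)

chain = RunnableLambda(reverse_twice)
```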

View File

@@ -24,7 +24,7 @@ So a full conversation often involves a combination of two patterns of alternati
## Managing chat history
-Since chat models have a maximum limit on input size, it's important to manage chat history and trim it as needed to avoid exceeding the [context window](/docs/concepts/chat_models#context_window).
+Since chat models have a maximum limit on input size, it's important to manage chat history and trim it as needed to avoid exceeding the [context window](/docs/concepts/chat_models/#context-window).
While processing chat history, it's essential to preserve a correct conversation structure.
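A minimal sketch of such trimming with `trim_messages` from `langchain-core` (the budget and counter here are illustrative):

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, trim_messages

history = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("Hi, I'm Bob."),
    AIMessage("Hello Bob! How can I help?"),
    HumanMessage("What's my name?"),
]

trimmed = trim_messages(
    history,
    strategy="last",      # keep the most recent messages
    token_counter=len,    # len counts messages; use a model or tokenizer for real tokens
    max_tokens=3,         # i.e. at most 3 messages with this counter
    include_system=True,  # never drop the system message
    start_on="human",     # preserve a valid conversation structure
)
```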

View File

@@ -8,7 +8,7 @@ Modern LLMs are typically accessed through a chat model interface that takes a l
The newest generation of chat models offer additional capabilities:
-* [Tool calling](/docs/concepts#tool-calling): Many popular chat models offer a native [tool calling](/docs/concepts#tool-calling) API. This API allows developers to build rich applications that enable AI to interact with external services, APIs, and databases. Tool calling can also be used to extract structured information from unstructured data and perform various other tasks.
+* [Tool calling](/docs/concepts/tool_calling): Many popular chat models offer a native [tool calling](/docs/concepts/tool_calling) API. This API allows developers to build rich applications that enable AI to interact with external services, APIs, and databases. Tool calling can also be used to extract structured information from unstructured data and perform various other tasks.
* [Structured output](/docs/concepts/structured_outputs): A technique to make a chat model respond in a structured format, such as JSON that matches a given schema.
* [Multimodality](/docs/concepts/multimodality): The ability to work with data other than text; for example, images, audio, and video.
@@ -18,11 +18,11 @@ LangChain provides a consistent interface for working with chat models from diff
* Integrations with many chat model providers (e.g., Anthropic, OpenAI, Ollama, Microsoft Azure, Google Vertex, Amazon Bedrock, Hugging Face, Cohere, Groq). Please see [chat model integrations](/docs/integrations/chat/) for an up-to-date list of supported models.
* Use either LangChain's [messages](/docs/concepts/messages) format or OpenAI format.
-* Standard [tool calling API](/docs/concepts#tool-calling): standard interface for binding tools to models, accessing tool call requests made by models, and sending tool results back to the model.
+* Standard [tool calling API](/docs/concepts/tool_calling): standard interface for binding tools to models, accessing tool call requests made by models, and sending tool results back to the model.
* Standard API for [structuring outputs](/docs/concepts/structured_outputs) via the `with_structured_output` method.
-* Provides support for [async programming](/docs/concepts/async), [efficient batching](/docs/concepts/runnables#batch), [a rich streaming API](/docs/concepts/streaming).
+* Provides support for [async programming](/docs/concepts/async), [efficient batching](/docs/concepts/runnables/#optimized-parallel-execution-batch), [a rich streaming API](/docs/concepts/streaming).
* Integration with [LangSmith](https://docs.smith.langchain.com) for monitoring and debugging production-grade applications based on LLMs.
-* Additional features like standardized [token usage](/docs/concepts/messages#token_usage), [rate limiting](#rate-limiting), [caching](#cache) and more.
+* Additional features like standardized [token usage](/docs/concepts/messages/#aimessage), [rate limiting](#rate-limiting), [caching](#caching) and more.
## Integrations
@@ -44,7 +44,7 @@ Models that do **not** include the prefix "Chat" in their name or include "LLM"
## Interface
-LangChain chat models implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface. Because `BaseChatModel` also implements the [Runnable Interface](/docs/concepts/runnables), chat models support a [standard streaming interface](/docs/concepts/streaming), [async programming](/docs/concepts/async), optimized [batching](/docs/concepts/runnables#batch), and more. Please see the [Runnable Interface](/docs/concepts/runnables) for more details.
+LangChain chat models implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface. Because `BaseChatModel` also implements the [Runnable Interface](/docs/concepts/runnables), chat models support a [standard streaming interface](/docs/concepts/streaming), [async programming](/docs/concepts/async), optimized [batching](/docs/concepts/runnables/#optimized-parallel-execution-batch), and more. Please see the [Runnable Interface](/docs/concepts/runnables) for more details.
Many of the key methods of chat models operate on [messages](/docs/concepts/messages) as input and return messages as output.
@@ -65,7 +65,7 @@ The key methods of a chat model are:
2. **stream**: A method that allows you to stream the output of a chat model as it is generated.
3. **batch**: A method that allows you to batch multiple requests to a chat model together for more efficient processing.
4. **bind_tools**: A method that allows you to bind a tool to a chat model for use in the model's execution context.
-5. **with_structured_output**: A wrapper around the `invoke` method for models that natively support [structured output](/docs/concepts#structured_output).
+5. **with_structured_output**: A wrapper around the `invoke` method for models that natively support [structured output](/docs/concepts/structured_outputs).
Other important methods can be found in the [BaseChatModel API Reference](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html).
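The first three methods in miniature (assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set; any provider integration behaves the same way):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

# invoke: one request in, one AIMessage out
reply = model.invoke("Translate 'hello' to French.")

# stream: consume the response chunk by chunk as it is generated
for chunk in model.stream("Count to five."):
    print(chunk.content, end="", flush=True)

# batch: several independent requests handled together
replies = model.batch(["Define LLM.", "Define RAG."])
```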
@@ -104,13 +104,13 @@ ChatModels also accept other parameters that are specific to that integration. T
## Tool calling
Chat models can call [tools](/docs/concepts/tools) to perform tasks such as fetching data from a database, making API requests, or running custom code. Please
-see the [tool calling](/docs/concepts#tool-calling) guide for more information.
+see the [tool calling](/docs/concepts/tool_calling) guide for more information.
## Structured outputs
Chat models can be requested to respond in a particular format (e.g., JSON or matching a particular schema). This feature is extremely
useful for information extraction tasks. Please read more about
-the technique in the [structured outputs](/docs/concepts#structured_output) guide.
+the technique in the [structured outputs](/docs/concepts/structured_outputs) guide.
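Both features side by side, as a sketch (the tool and schema are made up for illustration):

```python
from pydantic import BaseModel, Field
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"It is sunny in {city}."

class Movie(BaseModel):
    title: str = Field(description="The movie's title")
    year: int = Field(description="Release year")

model = ChatOpenAI(model="gpt-4o-mini")

# Tool calling: the model emits a structured request to run the tool
ai_msg = model.bind_tools([get_weather]).invoke("What's the weather in Paris?")
print(ai_msg.tool_calls)  # e.g. [{'name': 'get_weather', 'args': {'city': 'Paris'}, ...}]

# Structured outputs: the response is coerced to the schema
movie = model.with_structured_output(Movie).invoke("Name a famous 90s sci-fi film.")
```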
## Multimodality
@@ -162,7 +162,7 @@ Please see the [how to cache chat model responses](/docs/how_to/chat_model_cachi
### Conceptual guides
* [Messages](/docs/concepts/messages)
-* [Tool calling](/docs/concepts#tool-calling)
+* [Tool calling](/docs/concepts/tool_calling)
* [Multimodality](/docs/concepts/multimodality)
-* [Structured outputs](/docs/concepts#structured_output)
+* [Structured outputs](/docs/concepts/structured_outputs)
* [Tokens](/docs/concepts/tokens)

View File

@@ -45,22 +45,22 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[AIMessageChunk](/docs/concepts/messages#aimessagechunk)**: A partial response from an AI message. Used when streaming responses from a chat model.
- **[AIMessage](/docs/concepts/messages#aimessage)**: Represents a complete response from an AI model.
- **[astream_events](/docs/concepts/chat_models#key-methods)**: Stream granular information from [LCEL](/docs/concepts/lcel) chains.
- **[BaseTool](/docs/concepts/tools#basetool)**: The base class for all tools in LangChain.
+- **[BaseTool](/docs/concepts/tools/#tool-interface)**: The base class for all tools in LangChain.
- **[batch](/docs/concepts/runnables)**: Use to execute a Runnable on a batch of inputs.
- **[bind_tools](/docs/concepts/chat_models#bind-tools)**: Allows models to interact with tools.
+- **[bind_tools](/docs/concepts/tool_calling/#tool-binding)**: Allows models to interact with tools.
- **[Caching](/docs/concepts/chat_models#caching)**: Storing results to avoid redundant calls to a chat model.
- **[Chat models](/docs/concepts/multimodality#chat-models)**: Chat models that handle multiple data modalities.
- **[Configurable runnables](/docs/concepts/runnables#configurable-Runnables)**: Creating configurable Runnables.
+- **[Chat models](/docs/concepts/multimodality/#multimodality-in-chat-models)**: Chat models that handle multiple data modalities.
+- **[Configurable runnables](/docs/concepts/runnables/#configurable-runnables)**: Creating configurable Runnables.
- **[Context window](/docs/concepts/chat_models#context-window)**: The maximum size of input a chat model can process.
- **[Conversation patterns](/docs/concepts/chat_history#conversation-patterns)**: Common patterns in chat interactions.
- **[Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html)**: LangChain's representation of a document.
- **[Embedding models](/docs/concepts/multimodality#embedding-models)**: Models that generate vector embeddings for various data types.
+- **[Embedding models](/docs/concepts/multimodality/#multimodality-in-embedding-models)**: Models that generate vector embeddings for various data types.
- **[HumanMessage](/docs/concepts/messages#humanmessage)**: Represents a message from a human user.
- **[InjectedState](/docs/concepts/tools#injectedstate)**: A state injected into a tool function.
- **[InjectedStore](/docs/concepts/tools#injectedstore)**: A store that can be injected into a tool for data persistence.
- **[InjectedToolArg](/docs/concepts/tools#injectedtoolarg)**: Mechanism to inject arguments into tool functions.
- **[input and output types](/docs/concepts/runnables#input-and-output-types)**: Types used for input and output in Runnables.
- **[Integration packages](/docs/concepts/architecture#partner-packages)**: Third-party packages that integrate with LangChain.
+- **[Integration packages](/docs/concepts/architecture/#integration-packages)**: Third-party packages that integrate with LangChain.
- **[invoke](/docs/concepts/runnables)**: A standard method to invoke a Runnable.
- **[JSON mode](/docs/concepts/structured_outputs#json-mode)**: Returning responses in JSON format.
- **[langchain-community](/docs/concepts/architecture#langchain-community)**: Community-driven components for LangChain.
@@ -70,20 +70,20 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[langserve](/docs/concepts/architecture#langserve)**: Use to deploy LangChain Runnables as REST endpoints. Uses FastAPI. Works primarily for LangChain Runnables, does not currently integrate with LangGraph.
- **[Managing chat history](/docs/concepts/chat_history#managing-chat-history)**: Techniques to maintain and manage the chat history.
- **[OpenAI format](/docs/concepts/messages#openai-format)**: OpenAI's message format for chat models.
- **[Propagation of RunnableConfig](/docs/concepts/runnables#propagation-RunnableConfig)**: Propagating configuration through Runnables. Read if working with python 3.9, 3.10 and async.
+- **[Propagation of RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig)**: Propagating configuration through Runnables. Read if working with python 3.9, 3.10 and async.
- **[rate-limiting](/docs/concepts/chat_models#rate-limiting)**: Client side rate limiting for chat models.
- **[RemoveMessage](/docs/concepts/messages#remove-message)**: An abstraction used to remove a message from chat history, used primarily in LangGraph.
+- **[RemoveMessage](/docs/concepts/messages/#removemessage)**: An abstraction used to remove a message from chat history, used primarily in LangGraph.
- **[role](/docs/concepts/messages#role)**: Represents the role (e.g., user, assistant) of a chat message.
- **[RunnableConfig](/docs/concepts/runnables#RunnableConfig)**: Use to pass run time information to Runnables (e.g., `run_name`, `run_id`, `tags`, `metadata`, `max_concurrency`, `recursion_limit`, `configurable`).
+- **[RunnableConfig](/docs/concepts/runnables/#runnableconfig)**: Use to pass run time information to Runnables (e.g., `run_name`, `run_id`, `tags`, `metadata`, `max_concurrency`, `recursion_limit`, `configurable`).
- **[Standard parameters for chat models](/docs/concepts/chat_models#standard-parameters)**: Parameters such as API key, `temperature`, and `max_tokens`.
- **[stream](/docs/concepts/streaming)**: Use to stream output from a Runnable or a graph.
- **[Tokenization](/docs/concepts/tokens)**: The process of converting data into tokens and vice versa.
- **[Tokens](/docs/concepts/tokens)**: The basic unit that a language model reads, processes, and generates under the hood.
- **[Tool artifacts](/docs/concepts/tools#tool-artifacts)**: Add artifacts to the output of a tool that will not be sent to the model, but will be available for downstream processing.
- **[Tool binding](/docs/concepts/tool_calling#tool-binding)**: Binding tools to models.
- **[@tool](/docs/concepts/tools#@tool)**: Decorator for creating tools in LangChain.
+- **[@tool](/docs/concepts/tools/#create-tools-using-the-tool-decorator)**: Decorator for creating tools in LangChain.
- **[Toolkits](/docs/concepts/tools#toolkits)**: A collection of tools that can be used together.
- **[ToolMessage](/docs/concepts/messages#toolmessage)**: Represents a message that contains the results of a tool execution.
- **[Vector stores](/docs/concepts/vectorstores)**: Datastores specialized for storing and efficiently searching vector embeddings.
- **[with_structured_output](/docs/concepts/chat_models#with-structured-output)**: A helper method for chat models that natively support [tool calling](/docs/concepts/tool_calling) to get structured output matching a given schema specified via Pydantic, JSON schema or a function.
+- **[with_structured_output](/docs/concepts/structured_outputs/#structured-output-method)**: A helper method for chat models that natively support [tool calling](/docs/concepts/tool_calling) to get structured output matching a given schema specified via Pydantic, JSON schema or a function.
- **[with_types](/docs/concepts/runnables#with_types)**: Method to overwrite the input and output types of a runnable. Useful when working with complex LCEL chains and deploying with LangServe.

View File

@@ -20,8 +20,8 @@ We often refer to a `Runnable` created using LCEL as a "chain". It's important t
LangChain optimizes the run-time execution of chains built with LCEL in a number of ways:
-- **Optimize parallel execution**: Run Runnables in parallel using [RunnableParallel](#RunnableParallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables#batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially.
-- **Guarantee Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables#async-api). This can be useful when running chains in a server environment where you want to handle a large number of requests concurrently.
+- **Optimize parallel execution**: Run Runnables in parallel using [RunnableParallel](#runnableparallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables/#optimized-parallel-execution-batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially.
+- **Guarantee Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables/#asynchronous-support). This can be useful when running chains in a server environment where you want to handle a large number of requests concurrently.
- **Simplify streaming**: LCEL chains can be streamed, allowing for incremental output as the chain is executed. LangChain can optimize the streaming of the output to minimize the time-to-first-token (time elapsed until the first chunk of output from a [chat model](/docs/concepts/chat_models) or [llm](/docs/concepts/text_llms) comes out).
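Both optimizations from the list above, sketched with toy runnables (standard `langchain-core` APIs):

```python
from langchain_core.runnables import RunnableLambda, RunnableParallel

double = RunnableLambda(lambda x: x * 2)
square = RunnableLambda(lambda x: x**2)

# RunnableParallel: run both branches concurrently for a single input
both = RunnableParallel(double=double, square=square)
print(both.invoke(3))           # {'double': 6, 'square': 9}

# Batch API: run one chain over many inputs in parallel
print(double.batch([1, 2, 3]))  # [2, 4, 6]
```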
Other benefits include:

View File

@@ -46,7 +46,7 @@ The async versions of `abatch` and `abatch_as_completed` these rely on asyncio's
:::
:::tip
-When processing a large number of inputs using `batch` or `batch_as_completed`, users may want to control the maximum number of parallel calls. This can be done by setting the `max_concurrency` attribute in the `RunnableConfig` dictionary. See the [RunnableConfig](/docs/concepts/runnables#RunnableConfig) for more information.
+When processing a large number of inputs using `batch` or `batch_as_completed`, users may want to control the maximum number of parallel calls. This can be done by setting the `max_concurrency` attribute in the `RunnableConfig` dictionary. See the [RunnableConfig](/docs/concepts/runnables/#runnableconfig) for more information.
Chat Models also have a built-in [rate limiter](/docs/concepts/chat_models#rate-limiting) that can be used to control the rate at which requests are made.
:::
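The `max_concurrency` knob in sketch form (toy runnable; the config is a plain dict here):

```python
from langchain_core.runnables import RunnableLambda

step = RunnableLambda(lambda x: x + 1)

# No more than 5 inputs are processed at once, however large the batch
results = step.batch(list(range(100)), config={"max_concurrency": 5})
```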
@@ -312,7 +312,7 @@ Please read the [Callbacks Conceptual Guide](/docs/concepts/callbacks) for more
:::important
If you're using Python 3.9 or 3.10 in an async environment, you must propagate
the `RunnableConfig` manually to sub-calls in some cases. Please see the
-[Propagating RunnableConfig](#propagation-of-RunnableConfig) section for more information.
+[Propagating RunnableConfig](#propagation-of-runnableconfig) section for more information.
:::
## Creating a runnable from a function

View File

@@ -160,7 +160,7 @@ The `config` will not be part of the tool's schema and will be injected at runti
:::note
You may need to access the `config` object in order to propagate it manually. This happens if you're working with Python 3.9 / 3.10 in an [async](/docs/concepts/async) environment and need to manually propagate the `config` object to sub-calls.
-Please read [Propagation RunnableConfig](/docs/concepts/runnables#propagation-RunnableConfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
+Please read [Propagation RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
:::
### InjectedState

View File

@@ -102,7 +102,7 @@ See our video playlist on [LangSmith tracing and evaluations](https://youtube.co
LangChain offers standard interfaces for components that are central to many AI applications, which offers a few specific advantages:
- **Ease of swapping providers:** It allows you to swap out different component providers without having to change the underlying code.
-- **Advanced features:** It provides common methods for more advanced features, such as [streaming](/docs/concepts/runnables/#streaming) and [tool calling](/docs/concepts/tool_calling/).
+- **Advanced features:** It provides common methods for more advanced features, such as [streaming](/docs/concepts/streaming) and [tool calling](/docs/concepts/tool_calling/).
[LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/) makes it possible to orchestrate complex applications (e.g., [agents](/docs/concepts/agents/)) and provides features like [persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/), [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/), or [memory](https://langchain-ai.github.io/langgraph/concepts/memory/).

View File

@@ -164,7 +164,7 @@
"Under the hood, `MultiQueryRetriever` generates queries using a specific [prompt](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.multi_query.MultiQueryRetriever.html). To customize this prompt:\n",
"\n",
"1. Make a [PromptTemplate](https://python.langchain.com/api_reference/core/prompts/langchain_core.prompts.prompt.PromptTemplate.html) with an input variable for the question;\n",
-"2. Implement an [output parser](/docs/concepts#output-parsers) like the one below to split the result into a list of queries.\n",
+"2. Implement an [output parser](/docs/concepts/output_parsers) like the one below to split the result into a list of queries.\n",
"\n",
"The prompt and output parser together must support the generation of a list of queries."
]
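Step 2 might look like this sketch (the class name is illustrative, not necessarily the exact parser from the guide):

```python
from typing import List

from langchain_core.output_parsers import BaseOutputParser

class LineListOutputParser(BaseOutputParser[List[str]]):
    """Split model output into a list of queries, one per non-empty line."""

    def parse(self, text: str) -> List[str]:
        return [line.strip() for line in text.strip().split("\n") if line.strip()]
```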

View File

@@ -261,7 +261,7 @@
"id": "6a5d9617-be3a-419a-9276-de9c29fa50ae",
"metadata": {},
"source": [
-"You can also enable streaming token usage by setting `stream_usage` when instantiating the chat model. This can be useful when incorporating chat models into LangChain [chains](/docs/concepts#langchain-expression-language-lcel): usage metadata can be monitored when [streaming intermediate steps](/docs/how_to/streaming#using-stream-events) or using tracing software such as [LangSmith](https://docs.smith.langchain.com/).\n",
+"You can also enable streaming token usage by setting `stream_usage` when instantiating the chat model. This can be useful when incorporating chat models into LangChain [chains](/docs/concepts/lcel): usage metadata can be monitored when [streaming intermediate steps](/docs/how_to/streaming#using-stream-events) or using tracing software such as [LangSmith](https://docs.smith.langchain.com/).\n",
"\n",
"See the below example, where we return output structured to a desired schema, but can still observe token usage streamed from intermediate steps."
]
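The `stream_usage` flag itself, in a minimal sketch (the notebook's fuller structured-output example is elided by the diff; assumes `langchain-openai`):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", stream_usage=True)

# Chunks aggregate with +; the aggregate carries usage_metadata
aggregate = None
for chunk in llm.stream("Hello"):
    aggregate = chunk if aggregate is None else aggregate + chunk

print(aggregate.usage_metadata)  # e.g. {'input_tokens': ..., 'output_tokens': ..., ...}
```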

View File

@@ -11,8 +11,8 @@
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
-"- [Runnables](/docs/concepts#runnable-interface)\n",
-"- [Tools](/docs/concepts#tools)\n",
+"- [Runnables](/docs/concepts/runnables)\n",
+"- [Tools](/docs/concepts/tools)\n",
"- [Agents](/docs/tutorials/agents)\n",
"\n",
":::\n",
@@ -40,7 +40,7 @@
"id": "2b0dcc1a-48e8-4a81-b920-3563192ce076",
"metadata": {},
"source": [
-"LangChain [tools](/docs/concepts#tools) are interfaces that an agent, chain, or chat model can use to interact with the world. See [here](/docs/how_to/#tools) for how-to guides covering tool-calling, built-in tools, custom tools, and more information.\n",
+"LangChain [tools](/docs/concepts/tools) are interfaces that an agent, chain, or chat model can use to interact with the world. See [here](/docs/how_to/#tools) for how-to guides covering tool-calling, built-in tools, custom tools, and more information.\n",
"\n",
"LangChain tools-- instances of [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.BaseTool.html)-- are [Runnables](/docs/concepts/runnables) with additional constraints that enable them to be invoked effectively by language models:\n",
"\n",

View File

@@ -38,7 +38,7 @@
"The logic inside of `_get_relevant_documents` can involve arbitrary calls to a database or to the web using requests.\n",
"\n",
":::tip\n",
-"By inheriting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/docs/concepts#interface) and will gain the standard `Runnable` functionality out of the box!\n",
+"By inheriting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/docs/concepts/runnables) and will gain the standard `Runnable` functionality out of the box!\n",
":::\n",
"\n",
"\n",

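A minimal sketch of the pattern (toy substring matching; `_get_relevant_documents` is the required override):

```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever

class KeywordRetriever(BaseRetriever):
    """Toy retriever: return stored documents containing the query string."""

    documents: List[Document]
    k: int = 3

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        hits = [d for d in self.documents if query.lower() in d.page_content.lower()]
        return hits[: self.k]

# Because it is a Runnable, the standard interface comes for free:
# KeywordRetriever(documents=[...]).invoke("some query")
```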
View File

@@ -19,7 +19,7 @@
"LangChain supports the creation of tools from:\n",
"\n",
"1. Functions;\n",
-"2. LangChain [Runnables](/docs/concepts#runnable-interface);\n",
+"2. LangChain [Runnables](/docs/concepts/runnables);\n",
"3. By sub-classing from [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.BaseTool.html) -- This is the most flexible method, it provides the largest degree of control, at the expense of more effort and code.\n",
"\n",
"Creating tools from functions may be sufficient for most use cases, and can be done via a simple [@tool decorator](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.tool.html#langchain_core.tools.tool). If more configuration is needed-- e.g., specification of both sync and async implementations-- one can also use the [StructuredTool.from_function](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.structured.StructuredTool.html#langchain_core.tools.structured.StructuredTool.from_function) class method.\n",
@@ -415,7 +415,7 @@
"source": [
"## Creating tools from Runnables\n",
"\n",
-"LangChain [Runnables](/docs/concepts#runnable-interface) that accept string or `dict` input can be converted to tools using the [as_tool](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.as_tool) method, which allows for the specification of names, descriptions, and additional schema information for arguments.\n",
+"LangChain [Runnables](/docs/concepts/runnables) that accept string or `dict` input can be converted to tools using the [as_tool](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.as_tool) method, which allows for the specification of names, descriptions, and additional schema information for arguments.\n",
"\n",
"Example usage:"
]
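A sketch of the call shape (toy runnable; the notebook's own usage example is elided by the diff, and the type annotation is what lets `as_tool` infer a string input schema here):

```python
from langchain_core.runnables import RunnableLambda

def shout(text: str) -> str:
    return text.upper()

shout_tool = RunnableLambda(shout).as_tool(
    name="shout",
    description="Upper-case the input text.",
)
print(shout_tool.invoke("hello"))  # 'HELLO'
```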

View File

@@ -9,7 +9,7 @@
"\n",
"The quality of extractions can often be improved by providing reference examples to the LLM.\n",
"\n",
-"Data extraction attempts to generate structured representations of information found in text and other unstructured or semi-structured formats. [Tool-calling](/docs/concepts#functiontool-calling) LLM features are often used in this context. This guide demonstrates how to build few-shot examples of tool calls to help steer the behavior of extraction and similar applications.\n",
+"Data extraction attempts to generate structured representations of information found in text and other unstructured or semi-structured formats. [Tool-calling](/docs/concepts/tool_calling) LLM features are often used in this context. This guide demonstrates how to build few-shot examples of tool calls to help steer the behavior of extraction and similar applications.\n",
"\n",
":::tip\n",
"While this guide focuses on how to use examples with a tool calling model, this technique is generally applicable, and will work\n",

View File

@@ -14,7 +14,7 @@
"To extract data without tool-calling features: \n",
"\n",
"1. Instruct the LLM to generate text following an expected format (e.g., JSON with a certain schema);\n",
-"2. Use [output parsers](/docs/concepts#output-parsers) to structure the model response into a desired Python object.\n",
+"2. Use [output parsers](/docs/concepts/output_parsers) to structure the model response into a desired Python object.\n",
"\n",
"First we select an LLM:\n",
"\n",

View File

@@ -96,7 +96,7 @@
"source": [
"## LCEL\n",
"\n",
-"Output parsers implement the [Runnable interface](/docs/concepts#interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language-lcel). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
+"Output parsers implement the [Runnable interface](/docs/concepts/runnables), the basic building block of the [LangChain Expression Language (LCEL)](/docs/concepts/lcel). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
"\n",
"Output parsers accept a string or `BaseMessage` as input and can return an arbitrary type."
]

View File

@@ -41,7 +41,7 @@
"\n",
"### Dependencies\n",
"\n",
-"We'll use OpenAI embeddings and an InMemory vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts#embedding-models), and [VectorStore](/docs/concepts#vectorstores) or [Retriever](/docs/concepts#retrievers). \n",
+"We'll use OpenAI embeddings and an InMemory vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts/embedding_models), and [VectorStore](/docs/concepts/vectorstores) or [Retriever](/docs/concepts/retrievers). \n",
"\n",
"We'll use the following packages:"
]

View File

@@ -254,7 +254,7 @@
"source": [
"## Function-calling\n",
"\n",
-"If your LLM of choice implements a [tool-calling](/docs/concepts#functiontool-calling) feature, you can use it to make the model specify which of the provided documents it's referencing when generating its answer. LangChain tool-calling models implement a `.with_structured_output` method which will force generation adhering to a desired schema (see for example [here](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI.with_structured_output)).\n",
+"If your LLM of choice implements a [tool-calling](/docs/concepts/tool_calling) feature, you can use it to make the model specify which of the provided documents it's referencing when generating its answer. LangChain tool-calling models implement a `.with_structured_output` method which will force generation adhering to a desired schema (see for example [here](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI.with_structured_output)).\n",
"\n",
"### Cite documents\n",
"\n",

View File

@@ -14,7 +14,7 @@
"We will cover two approaches:\n",
"\n",
"1. Using the built-in [create_retrieval_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.retrieval.create_retrieval_chain.html), which returns sources by default;\n",
-"2. Using a simple [LCEL](/docs/concepts#langchain-expression-language-lcel) implementation, to show the operating principle.\n",
+"2. Using a simple [LCEL](/docs/concepts/lcel) implementation, to show the operating principle.\n",
"\n",
"We will also show how to structure sources into the model response, such that a model can report what specific sources it used in generating its answer."
]
@@ -28,7 +28,7 @@
"\n",
"### Dependencies\n",
"\n",
-"We'll use OpenAI embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts#embedding-models), [VectorStore](/docs/concepts#vectorstores) or [Retriever](/docs/concepts#retrievers). \n",
+"We'll use OpenAI embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts/embedding_models), [VectorStore](/docs/concepts/vectorstores) or [Retriever](/docs/concepts/retrievers). \n",
"\n",
"We'll use the following packages:"
]

View File

@@ -21,7 +21,7 @@
"\n",
"### Dependencies\n",
"\n",
-"We'll use OpenAI embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts#embedding-models), [VectorStore](/docs/concepts#vectorstores) or [Retriever](/docs/concepts#retrievers). \n",
+"We'll use OpenAI embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts/embedding_models), [VectorStore](/docs/concepts/vectorstores) or [Retriever](/docs/concepts/retrievers). \n",
"\n",
"We'll use the following packages:"
]

View File

@@ -32,7 +32,7 @@
"\n",
"Streaming is critical in making applications based on LLMs feel responsive to end-users.\n",
"\n",
-"Important LangChain primitives like [chat models](/docs/concepts/chat_models), [output parsers](/docs/concepts/output_parsers), [prompts](/docs/concepts/prompt_templates), [retrievers](/docs/concepts/retrievers), and [agents](/docs/concepts/agents) implement the LangChain [Runnable Interface](/docs/concepts#interface).\n",
+"Important LangChain primitives like [chat models](/docs/concepts/chat_models), [output parsers](/docs/concepts/output_parsers), [prompts](/docs/concepts/prompt_templates), [retrievers](/docs/concepts/retrievers), and [agents](/docs/concepts/agents) implement the LangChain [Runnable Interface](/docs/concepts/runnables).\n",
"\n",
"This interface provides two general approaches to stream content:\n",
"\n",

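The second of those two approaches, in sketch form (assumes a configured chat model; `astream_events` yields typed event dicts):

```python
import asyncio

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

async def main() -> None:
    # Stream granular events from any point in a chain, not just final output
    async for event in model.astream_events("Tell me a joke", version="v2"):
        if event["event"] == "on_chat_model_stream":
            print(event["data"]["chunk"].content, end="")

asyncio.run(main())
```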
View File

@@ -276,7 +276,7 @@
"\n",
"Chains are great when we know the specific sequence of tool usage needed for any user input. But for certain use cases, how many times we use tools depends on the input. In these cases, we want to let the model itself decide how many times to use tools and in what order. [Agents](/docs/tutorials/agents) let us do just this.\n",
"\n",
-"LangChain comes with a number of built-in agents that are optimized for different use cases. Read about all the [agent types here](/docs/concepts#agents).\n",
+"LangChain comes with a number of built-in agents that are optimized for different use cases. Read about all the [agent types here](/docs/concepts/agents).\n",
"\n",
"We'll use the [tool calling agent](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html), which is generally the most reliable kind and the recommended one for most use cases.\n",
"\n",

View File

@@ -201,7 +201,7 @@
"source": [
"## Chaining\n",
"\n",
-"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
+"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts/lcel)"
]
},
{

View File

@@ -113,8 +113,8 @@
"\n",
"LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.\n",
"\n",
-"- **[Overview](/docs/concepts#langchain-expression-language-lcel)**: LCEL and its benefits\n",
-"- **[Interface](/docs/concepts#interface)**: The standard interface for LCEL objects\n",
+"- **[Overview](/docs/concepts/lcel)**: LCEL and its benefits\n",
+"- **[Interface](/docs/concepts/runnables)**: The standard interface for LCEL objects\n",
"- **[How-to](/docs/expression_language/how_to)**: Key features of LCEL\n",
"- **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks\n",
"\n",

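The composition style LCEL enables, in one line of piping (assumes a configured chat model):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

print(chain.invoke({"topic": "bears"}))
```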
View File

@@ -217,7 +217,7 @@
"source": [
"## Chaining\n",
"\n",
-"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
+"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts/lcel)"
]
},
{

View File

@@ -335,7 +335,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
+"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts/lcel)"
]
},
{

View File

@@ -105,7 +105,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"To learn more about the LangChain Expression Language and the available methods on an LLM, see the [LCEL Interface](/docs/concepts#interface)"
+"To learn more about the LangChain Expression Language and the available methods on an LLM, see the [LCEL Interface](/docs/concepts/runnables)"
]
}
],

View File

@@ -7,7 +7,7 @@ sidebar_class_name: hidden
import { CategoryTable, IndexTable } from "@theme/FeatureTables";
-[Embedding models](/docs/concepts#embedding-models) create a vector representation of a piece of text.
+[Embedding models](/docs/concepts/embedding_models) create a vector representation of a piece of text.
This page documents integrations with various model providers that allow you to use embeddings in LangChain.
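What that looks like against any provider (here `langchain-openai`, purely as an example):

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

vector = embeddings.embed_query("Hello, world!")          # one text -> one vector
vectors = embeddings.embed_documents(["doc 1", "doc 2"])  # many texts -> many vectors
```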

View File

@@ -118,7 +118,7 @@
"source": [
"## Create the agent\n",
"\n",
-"Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](/docs/concepts#agents)\n",
+"Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](/docs/concepts/agents)\n",
"\n",
"First, we choose the LLM we want to be guiding the agent."
]
@@ -176,7 +176,7 @@
"id": "f8014c9d",
"metadata": {},
"source": [
-"Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/concepts#agents)"
+"Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/concepts/agents)"
]
},
{
@@ -196,7 +196,7 @@
"id": "1a58c9f8",
"metadata": {},
"source": [
-"Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our [conceptual guide](/docs/concepts#agents)"
+"Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our [conceptual guide](/docs/concepts/agents)"
]
},
{

View File

@@ -8,7 +8,7 @@ sidebar_class_name: hidden
**LangChain** is a framework for developing applications powered by large language models (LLMs).
LangChain simplifies every stage of the LLM application lifecycle:
-- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts#langchain-expression-language-lcel), [components](/docs/concepts), and [third-party integrations](/docs/integrations/providers/).
+- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts/lcel), [components](/docs/concepts), and [third-party integrations](/docs/integrations/providers/).
Use [LangGraph](/docs/concepts/architecture/#langgraph) to build stateful agents with first-class streaming and human-in-the-loop support.
- **Productionization**: Use [LangSmith](https://docs.smith.langchain.com/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/).

View File

@@ -1,6 +1,6 @@
# INVALID_PROMPT_INPUT
-A [prompt template](/docs/concepts#prompt-templates) received missing or invalid input variables.
+A [prompt template](/docs/concepts/prompt_templates) received missing or invalid input variables.
## Troubleshooting
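The failure in miniature (hypothetical variable name):

```python
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a joke about {topic}")

prompt.invoke({"topic": "bears"})  # OK
prompt.invoke({})                  # raises an error: input is missing variable 'topic'
```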

View File

@@ -8,7 +8,7 @@
"\n",
"You are passing too many, too few, or mismatched [`ToolMessages`](https://api.js.langchain.com/classes/_langchain_core.messages_tool.ToolMessage.html) to a model.\n",
"\n",
-"When [using a model to call tools](/docs/concepts#functiontool-calling), the [`AIMessage`](https://api.js.langchain.com/classes/_langchain_core.messages.AIMessage.html)\n",
+"When [using a model to call tools](/docs/concepts/tool_calling), the [`AIMessage`](https://api.js.langchain.com/classes/_langchain_core.messages.AIMessage.html)\n",
"the model responds with will contain a `tool_calls` array. To continue the flow, the next messages you pass back to the model must\n",
"be exactly one `ToolMessage` for each item in that array containing the result of that tool call. Each `ToolMessage` must have a `tool_call_id` field\n",
"that matches one of the `tool_calls` on the `AIMessage`.\n",

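The required pairing, sketched in Python for brevity (the same rule holds in the JS library; the tool name and ID are made up):

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

messages = [
    HumanMessage("What is 2 + 3?"),
    AIMessage(
        content="",
        tool_calls=[{"name": "add", "args": {"a": 2, "b": 3}, "id": "call_1"}],
    ),
    # Exactly one ToolMessage per entry in tool_calls, matched by tool_call_id
    ToolMessage("5", tool_call_id="call_1"),
]
```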
View File

@@ -38,7 +38,7 @@
"metadata": {},
"source": [
"These include OpenAI style message objects (`{ role: \"user\", content: \"Hello world!\" }`),\n",
-"tuples, and plain strings (which are converted to [`HumanMessages`](/docs/concepts#humanmessage)).\n",
+"tuples, and plain strings (which are converted to [`HumanMessages`](/docs/concepts/messages/#humanmessage)).\n",
"\n",
"If one of these modules receives a value outside of one of these formats, you will receive an error like the following:"
]

View File

@@ -6,7 +6,7 @@
"source": [
"# OUTPUT_PARSING_FAILURE\n",
"\n",
-"An [output parser](/docs/concepts#output-parsers) was unable to handle model output as expected.\n",
+"An [output parser](/docs/concepts/output_parsers) was unable to handle model output as expected.\n",
"\n",
"To illustrate this, let's say you have an output parser that expects a chat model to output JSON surrounded by a markdown code tag (triple backticks). Here would be an example of good input:"
]

View File

@@ -195,7 +195,7 @@
"source": [
"We need to use a model that supports function/tool calling.\n",
"\n",
-"Please review [the documentation](/docs/concepts#function-tool-calling) for a list of models that can be used with this API."
+"Please review [the documentation](/docs/concepts/tool_calling) for a list of models that can be used with this API."
]
},
{

View File

@@ -52,7 +52,7 @@
"\n",
"### Dependencies\n",
"\n",
-"We'll use OpenAI embeddings and a simple in-memory vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts#embedding-models), and [VectorStore](/docs/concepts#vectorstores) or [Retriever](/docs/concepts#retrievers). \n",
+"We'll use OpenAI embeddings and a simple in-memory vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts/embedding_models), and [VectorStore](/docs/concepts/vectorstores) or [Retriever](/docs/concepts/retrievers). \n",
"\n",
"We'll use the following packages:"
]
@@ -786,7 +786,7 @@
"id": "07dcb968-ed9a-458a-85e1-528cd28c6965",
"metadata": {},
"source": [
-"Tools are LangChain [Runnables](/docs/concepts#langchain-expression-language-lcel), and implement the usual interface:"
+"Tools are LangChain [Runnables](/docs/concepts/lcel), and implement the usual interface:"
]
},
{

View File

@@ -237,7 +237,7 @@
"## 1. Indexing: Load {#indexing-load}\n",
"\n",
"We need to first load the blog post contents. We can use\n",
-"[DocumentLoaders](/docs/concepts#document-loaders)\n",
+"[DocumentLoaders](/docs/concepts/document_loaders)\n",
"for this, which are objects that load in data from a source and return a\n",
"list of\n",
"[Documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html).\n",
@@ -518,7 +518,7 @@
"\n",
"First we need to define our logic for searching over documents.\n",
"LangChain defines a\n",
-"[Retriever](/docs/concepts#retrievers/) interface\n",
+"[Retriever](/docs/concepts/retrievers) interface\n",
"which wraps an index that can return relevant `Documents` given a string\n",
"query.\n",
"\n",
@@ -680,7 +680,7 @@
"id": "4516200c",
"metadata": {},
"source": [
-"We'll use the [LCEL Runnable](/docs/concepts#langchain-expression-language-lcel)\n",
+"We'll use the [LCEL Runnable](/docs/concepts/lcel)\n",
"protocol to define the chain, allowing us to \n",
"\n",
"- pipe together components and functions in a transparent way \n",
@@ -731,7 +731,7 @@
"source": [
"Let's dissect the LCEL to understand what's going on.\n",
"\n",
-"First: each of these components (`retriever`, `prompt`, `llm`, etc.) are instances of [Runnable](/docs/concepts#langchain-expression-language-lcel). This means that they implement the same methods-- such as sync and async `.invoke`, `.stream`, or `.batch`-- which makes them easier to connect together. They can be connected into a [RunnableSequence](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableSequence.html)-- another Runnable-- via the `|` operator.\n",
+"First: each of these components (`retriever`, `prompt`, `llm`, etc.) are instances of [Runnable](/docs/concepts/lcel). This means that they implement the same methods-- such as sync and async `.invoke`, `.stream`, or `.batch`-- which makes them easier to connect together. They can be connected into a [RunnableSequence](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableSequence.html)-- another Runnable-- via the `|` operator.\n",
"\n",
"LangChain will automatically cast certain objects to runnables when met with the `|` operator. Here, `format_docs` is cast to a [RunnableLambda](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableLambda.html), and the dict with `\"context\"` and `\"question\"` is cast to a [RunnableParallel](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableParallel.html). The details are less important than the bigger point, which is that each object in the chain is a Runnable.\n",
"\n",
@@ -933,10 +933,10 @@
"\n",
"We've covered the steps to build a basic Q&A app over data:\n",
"\n",
-"- Loading data with a [Document Loader](/docs/concepts#document-loaders)\n",
-"- Chunking the indexed data with a [Text Splitter](/docs/concepts#text-splitters) to make it more easily usable by a model\n",
-"- [Embedding the data](/docs/concepts#embedding-models) and storing the data in a [vectorstore](/docs/how_to/vectorstores)\n",
-"- [Retrieving](/docs/concepts#retrievers) the previously stored chunks in response to incoming questions\n",
+"- Loading data with a [Document Loader](/docs/concepts/document_loaders)\n",
+"- Chunking the indexed data with a [Text Splitter](/docs/concepts/text_splitters) to make it more easily usable by a model\n",
+"- [Embedding the data](/docs/concepts/embedding_models) and storing the data in a [vectorstore](/docs/how_to/vectorstores)\n",
+"- [Retrieving](/docs/concepts/retrievers) the previously stored chunks in response to incoming questions\n",
"- Generating an answer using the retrieved chunks as context\n",
"\n",
"There's plenty of features, integrations, and extensions to explore in each of\n",

View File

@@ -121,7 +121,7 @@
"\n",
"## Vector stores\n",
"\n",
-"Vector search is a common way to store and search over unstructured data (such as unstructured text). The idea is to store numeric vectors that are associated with the text. Given a query, we can [embed](/docs/concepts#embedding-models) it as a vector of the same dimension and use vector similarity metrics to identify related data in the store.\n",
+"Vector search is a common way to store and search over unstructured data (such as unstructured text). The idea is to store numeric vectors that are associated with the text. Given a query, we can [embed](/docs/concepts/embedding_models) it as a vector of the same dimension and use vector similarity metrics to identify related data in the store.\n",
"\n",
"LangChain [VectorStore](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html) objects contain methods for adding text and `Document` objects to the store, and querying them using various similarity metrics. They are often initialized with [embedding](/docs/how_to/embed_text) models, which determine how text data is translated to numeric vectors.\n",
"\n",

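A minimal sketch with the in-memory store from `langchain-core` (any embeddings integration works in place of the OpenAI one):

```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

store = InMemoryVectorStore.from_texts(
    ["LangChain builds LLM apps.", "Paris is the capital of France."],
    embedding=OpenAIEmbeddings(model="text-embedding-3-small"),
)

# The query is embedded into the same space and matched by similarity
docs = store.similarity_search("What is the capital of France?", k=1)
```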
View File

@@ -138,7 +138,7 @@
"\n",
"## Chains {#chains}\n",
"\n",
-"Chains (i.e., compositions of LangChain [Runnables](/docs/concepts#langchain-expression-language-lcel)) support applications whose steps are predictable. We can create a simple chain that takes a question and does the following:\n",
+"Chains (i.e., compositions of LangChain [Runnables](/docs/concepts/lcel)) support applications whose steps are predictable. We can create a simple chain that takes a question and does the following:\n",
"- convert the question into a SQL query;\n",
"- execute the query;\n",
"- use the result to answer the original question.\n",

View File

@@ -34,7 +34,7 @@
":::info Prerequisites\n",
"\n",
"These guides assume some familiarity with the following concepts:\n",
-"- [LangChain Expression Language](/docs/concepts#langchain-expression-language-lcel)\n",
+"- [LangChain Expression Language](/docs/concepts/lcel)\n",
"- [LangGraph](https://langchain-ai.github.io/langgraph/)\n",
":::\n",
"\n",

View File

@@ -184,7 +184,7 @@ custom_edit_url:
[Tools](/docs/concepts/tools) are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models.
-A [toolkit](/docs/concepts#toolkits) is a collection of tools meant to be used together.
+A [toolkit](/docs/concepts/tools/#toolkits) is a collection of tools meant to be used together.
:::info