docs: concept rhs
This commit is contained in: parent 4b641f87ae, commit 824017c203
## High level
### [Why LangChain?](/docs/concepts/why_langchain)
Overview of the value that LangChain provides.
### [Architecture](/docs/concepts/architecture)
How packages are organized in the LangChain ecosystem.
## Concepts
### [Chat models](/docs/concepts/chat_models)
LLMs exposed via a chat API that process sequences of messages as input and output a message.
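A minimal sketch of calling a chat model with a list of messages. It assumes the `langchain-openai` package is installed and `OPENAI_API_KEY` is set; any chat model integration exposes the same interface.

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI  # any chat model integration works the same way

model = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice
response = model.invoke(
    [
        SystemMessage(content="You are a helpful assistant."),
        HumanMessage(content="What is LangChain?"),
    ]
)
print(response.content)  # the AIMessage returned by the model
```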
### [Messages](/docs/concepts/messages)
The unit of communication in chat models, used to represent model input and output.
### [Chat history](/docs/concepts/chat_history)
A conversation represented as a sequence of messages, alternating between user messages and model responses.
### [Tools](/docs/concepts/tools)
A function with an associated schema defining the function's name, description, and the arguments it accepts.
### [Tool calling](/docs/concepts/tool_calling)
A type of chat model API that accepts tool schemas, along with messages, as input and returns invocations of those tools as part of the output message.
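A hedged sketch of tool calling: the tool's schema is bound to the model, and the model's reply carries structured tool invocations rather than executing anything itself. `ChatOpenAI` and the model name are assumptions.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

model_with_tools = ChatOpenAI(model="gpt-4o-mini").bind_tools([multiply])
ai_msg = model_with_tools.invoke("What is 6 times 7?")
print(ai_msg.tool_calls)  # e.g. [{"name": "multiply", "args": {"a": 6, "b": 7}, ...}]
```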
### [Structured output](/docs/concepts/structured_outputs)
A technique to make a chat model respond in a structured format, such as JSON matching a given schema.
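For example, with a model that supports structured output, a Pydantic class can serve as the schema. This sketch assumes the `langchain-openai` package; the class and prompt are illustrative.

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class Joke(BaseModel):
    """A joke to tell the user."""
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")

structured_model = ChatOpenAI(model="gpt-4o-mini").with_structured_output(Joke)
joke = structured_model.invoke("Tell me a joke about cats")  # returns a Joke instance
```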
### [Memory](https://langchain-ai.github.io/langgraph/concepts/memory/)
Information about a conversation that is persisted so that it can be used in future conversations.
### [Multimodality](/docs/concepts/multimodality)
The ability to work with data that comes in different forms, such as text, audio, images, and video.
### [Runnable interface](/docs/concepts/runnables)
The base abstraction that many LangChain components and the LangChain Expression Language are built on.
### [LangChain Expression Language (LCEL)](/docs/concepts/lcel)
A syntax for orchestrating LangChain components. Most useful for simpler applications.
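A small illustrative chain composed with the `|` operator: the prompt, model, and output parser each implement the Runnable interface, so the whole pipeline can be invoked, streamed, or batched as one unit. The model choice is an assumption.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [("system", "Translate the user's sentence into {language}."), ("human", "{text}")]
)
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()
print(chain.invoke({"language": "French", "text": "I love programming."}))
```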
### [Document loaders](/docs/concepts/document_loaders)
Load a source as a list of documents.
### [Retrieval](/docs/concepts/retrieval)
Information retrieval systems can retrieve structured or unstructured data from a datasource in response to a query.
### [Text splitters](/docs/concepts/text_splitters)
Split long text into smaller chunks that can be individually indexed to enable granular retrieval.
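For instance, a recursive character splitter (the sizes and sample text are illustrative):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

long_text = "LangChain is a framework for developing applications powered by LLMs. " * 20
splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=20)
chunks = splitter.split_text(long_text)  # list[str], each chunk roughly <= 200 characters
```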
### [Embedding models](/docs/concepts/embedding_models)
Models that represent data such as text or images in a vector space.
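A short sketch, assuming `langchain-openai` and an illustrative embedding model name; every embedding integration exposes the same two methods.

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")  # example model choice
query_vector = embeddings.embed_query("What is LangChain?")       # one vector for a query
doc_vectors = embeddings.embed_documents(["doc one", "doc two"])  # one vector per text
```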
### [Vector stores](/docs/concepts/vectorstores)
Storage of and efficient search over vectors and associated metadata.
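A sketch using the in-memory vector store from `langchain-core`; the embedding model is an assumption, and any vector store integration follows the same interface.

```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

store = InMemoryVectorStore.from_texts(
    ["LangChain helps build LLM applications.", "Paris is the capital of France."],
    embedding=OpenAIEmbeddings(),
)
docs = store.similarity_search("What is LangChain?", k=1)  # most similar Documents
retriever = store.as_retriever()  # expose the store through the retriever interface
```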
### [Retriever](/docs/concepts/retrievers)
A component that returns relevant documents from a knowledge base in response to a query.
### [Retrieval Augmented Generation (RAG)](/docs/concepts/rag)
A technique that enhances language models by combining them with external knowledge bases.
### [Agents](/docs/concepts/agents)
Use a [language model](/docs/concepts/chat_models) to choose a sequence of actions to take. Agents can interact with external resources via [tools](/docs/concepts/tools).
### [Prompt templates](/docs/concepts/prompt_templates)
Component for factoring out the static parts of a model "prompt" (usually a sequence of messages). Useful for serializing, versioning, and reusing these static parts.
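For example, a chat prompt template with two variables; invoking it fills in the dynamic parts and produces the messages to send to a model. The variable names are illustrative.

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert on {topic}."),
        ("human", "{question}"),
    ]
)
prompt_value = prompt.invoke({"topic": "geography", "question": "Where is the Nile?"})
print(prompt_value.to_messages())  # [SystemMessage(...), HumanMessage(...)]
```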
### [Output parsers](/docs/concepts/output_parsers)
Responsible for taking the output of a model and transforming it into a more suitable format for downstream tasks. Output parsers were primarily useful prior to the general availability of [tool calling](/docs/concepts/tool_calling) and [structured outputs](/docs/concepts/structured_outputs).
### [Few-shot prompting](/docs/concepts/few_shot_prompting)
A technique for improving model performance by providing a few examples of the task to perform in the prompt.
### [Example selectors](/docs/concepts/example_selectors)
Used to select the most relevant examples from a dataset based on a given input. Example selectors are used in few-shot prompting to select examples for a prompt.
### [Async programming](/docs/concepts/async)
The basics that one should know to use LangChain in an asynchronous context.
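Runnables expose async counterparts of the sync methods (`ainvoke`, `abatch`, `astream`). A short sketch, with the model choice as an assumption:

```python
import asyncio

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

async def main() -> None:
    reply = await model.ainvoke("Say hello in Italian.")
    print(reply.content)
    async for chunk in model.astream("Tell me a one-line story."):
        print(chunk.content, end="", flush=True)

asyncio.run(main())
```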
### [Callbacks](/docs/concepts/callbacks)
Callbacks enable the execution of custom auxiliary code in built-in components. Callbacks are used to stream outputs from LLMs in LangChain, trace the intermediate steps of an application, and more.
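A minimal custom handler sketch; the handler class is hypothetical and the model name is an assumption. Handlers can be passed at invocation time via the `callbacks` entry of the config.

```python
from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI

class LoggingHandler(BaseCallbackHandler):
    """Log when a chat model run starts and ends."""

    def on_chat_model_start(self, serialized, messages, **kwargs):
        print("Chat model started")

    def on_llm_end(self, response, **kwargs):
        print("Chat model ended")

model = ChatOpenAI(model="gpt-4o-mini")
model.invoke("Hello!", config={"callbacks": [LoggingHandler()]})
```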
### [Tracing](/docs/concepts/tracing)
The process of recording the steps that an application takes to go from input to output. Tracing is essential for debugging and diagnosing issues in complex applications.
### [Evaluation](/docs/concepts/evaluation)
The process of assessing the performance and effectiveness of AI applications. This involves testing the model's responses against a set of predefined criteria or benchmarks to ensure it meets the desired quality standards and fulfills the intended purpose. This process is vital for building reliable applications.
## Glossary
### [AIMessageChunk](/docs/concepts/messages#aimessagechunk)
A partial response from an AI message. Used when streaming responses from a chat model.
### [AIMessage](/docs/concepts/messages#aimessage)
Represents a complete response from an AI model.
### [astream_events](/docs/concepts/chat_models#key-methods)
Stream granular information from [LCEL](/docs/concepts/lcel) chains.
### [BaseTool](/docs/concepts/tools/#tool-interface)
The base class for all tools in LangChain.
### [batch](/docs/concepts/runnables)
Use to execute a Runnable on a batch of inputs.
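A short sketch (the model choice is an assumption):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")
# Runs the inputs in parallel (bounded by max_concurrency) and preserves input order.
replies = model.batch(["Translate 'hello' to French.", "Translate 'hello' to Spanish."])
```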
### [bind_tools](/docs/concepts/tool_calling/#tool-binding)
Allows models to interact with tools.
### [Caching](/docs/concepts/chat_models#caching)
Storing results to avoid redundant calls to a chat model.
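For example, a global in-memory cache (persistent backends are also available); the model name is illustrative.

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_openai import ChatOpenAI

set_llm_cache(InMemoryCache())  # identical prompts now reuse the first response

model = ChatOpenAI(model="gpt-4o-mini")
model.invoke("Tell me a joke")  # calls the API
model.invoke("Tell me a joke")  # answered from the cache
```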
### [Chat models](/docs/concepts/multimodality/#multimodality-in-chat-models)
Chat models that handle multiple data modalities.
### [Configurable runnables](/docs/concepts/runnables/#configurable-runnables)
Creating configurable Runnables.
### [Context window](/docs/concepts/chat_models#context-window)
The maximum size of input a chat model can process.
### [Conversation patterns](/docs/concepts/chat_history#conversation-patterns)
Common patterns in chat interactions.
### [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html)
LangChain's representation of a document.
### [Embedding models](/docs/concepts/multimodality/#multimodality-in-embedding-models)
Models that generate vector embeddings for various data types.
### [HumanMessage](/docs/concepts/messages#humanmessage)
Represents a message from a human user.
### [InjectedState](/docs/concepts/tools#injectedstate)
A state injected into a tool function.
### [InjectedStore](/docs/concepts/tools#injectedstore)
A store that can be injected into a tool for data persistence.
### [InjectedToolArg](/docs/concepts/tools#injectedtoolarg)
Mechanism to inject arguments into tool functions.
### [input and output types](/docs/concepts/runnables#input-and-output-types)
Types used for input and output in Runnables.
### [Integration packages](/docs/concepts/architecture/#integration-packages)
Third-party packages that integrate with LangChain.
### [invoke](/docs/concepts/runnables)
A standard method to invoke a Runnable.
### [JSON mode](/docs/concepts/structured_outputs#json-mode)
Returning responses in JSON format.
### [langchain-community](/docs/concepts/architecture#langchain-community)
Community-driven components for LangChain.
### [langchain-core](/docs/concepts/architecture#langchain-core)
The core LangChain package. Includes base interfaces and in-memory implementations.
### [langchain](/docs/concepts/architecture#langchain)
A package for higher-level components (e.g., some pre-built chains).
### [langgraph](/docs/concepts/architecture#langgraph)
Powerful orchestration layer for LangChain. Use to build complex pipelines and workflows.
### [langserve](/docs/concepts/architecture#langserve)
Use to deploy LangChain Runnables as REST endpoints. Uses FastAPI. Works primarily with LangChain Runnables; it does not currently integrate with LangGraph.
### [Managing chat history](/docs/concepts/chat_history#managing-chat-history)
Techniques to maintain and manage the chat history.
### [OpenAI format](/docs/concepts/messages#openai-format)
OpenAI's message format for chat models.
### [Propagation of RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig)
Propagating configuration through Runnables. Read this if you are working with Python 3.9 or 3.10 and async code.
### [rate-limiting](/docs/concepts/chat_models#rate-limiting)
Client-side rate limiting for chat models.
### [RemoveMessage](/docs/concepts/messages/#removemessage)
An abstraction used to remove a message from chat history, used primarily in LangGraph.
### [role](/docs/concepts/messages#role)
Represents the role (e.g., user, assistant) of a chat message.
### [RunnableConfig](/docs/concepts/runnables/#runnableconfig)
Use to pass runtime information to Runnables (e.g., `run_name`, `run_id`, `tags`, `metadata`, `max_concurrency`, `recursion_limit`, `configurable`).
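For example (the metadata values and model name are illustrative):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")
reply = model.invoke(
    "Hello!",
    config={
        "run_name": "greeting",                  # appears in traces
        "tags": ["demo"],
        "metadata": {"user_id": "example-user"},
        "max_concurrency": 5,                    # relevant for batch()
    },
)
```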
### [Standard parameters for chat models](/docs/concepts/chat_models#standard-parameters)
Parameters such as API key, `temperature`, and `max_tokens`.
### [stream](/docs/concepts/streaming)
Use to stream output from a Runnable or a graph.
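A short sketch (the model choice is an assumption):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")
for chunk in model.stream("Write a limerick about Python."):
    # Each chunk is an AIMessageChunk; chunks can be merged with `+`.
    print(chunk.content, end="", flush=True)
```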
### [Tokenization](/docs/concepts/tokens)
The process of converting data into tokens and vice versa.
### [Tokens](/docs/concepts/tokens)
The basic unit that a language model reads, processes, and generates under the hood.
### [Tool artifacts](/docs/concepts/tools#tool-artifacts)
Add artifacts to the output of a tool that will not be sent to the model, but will be available for downstream processing.
### [Tool binding](/docs/concepts/tool_calling#tool-binding)
Binding tools to models.
### [@tool](/docs/concepts/tools/#create-tools-using-the-tool-decorator)
Decorator for creating tools in LangChain.
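For example (the function is a hypothetical stand-in for real logic):

```python
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""  # the docstring becomes the tool description
    return f"It is sunny in {city}."  # placeholder implementation

print(get_weather.name)  # "get_weather"
print(get_weather.invoke({"city": "Paris"}))
```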
### [Toolkits](/docs/concepts/tools#toolkits)
A collection of tools that can be used together.
### [ToolMessage](/docs/concepts/messages#toolmessage)
Represents a message that contains the results of a tool execution.
### [Vector stores](/docs/concepts/vectorstores)
Datastores specialized for storing and efficiently searching vector embeddings.
### [with_structured_output](/docs/concepts/structured_outputs/#structured-output-method)
A helper method for chat models that natively support [tool calling](/docs/concepts/tool_calling) to get structured output matching a given schema specified via Pydantic, JSON schema or a function.
### [with_types](/docs/concepts/runnables#with_types)
Method to overwrite the input and output types of a runnable. Useful when working with complex LCEL chains and deploying with LangServe.