docs[patch]: Add canonical URLs pointing at 0.2 docs for popular pages (#22250)

@efriis
Jacob Lee 2024-05-31 07:55:10 -07:00 committed by GitHub
parent b8550e7d3a
commit f883981446
GPG Key ID: B5690EEEBB952194
50 changed files with 276 additions and 6 deletions

View File

@@ -17,6 +17,12 @@
"id": "befa7fd1",
"metadata": {},
"source": [
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/sequence/\" />\n",
"</head>\n",
"```\n",
"\n",
"LCEL makes it easy to build complex chains from basic components, and supports out of the box functionality such as streaming, parallelism, and logging."
]
},
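The same canonical-link block recurs across dozens of notebooks in this commit. A small helper could automate the edit; the sketch below is a hypothetical reconstruction (not the actual tooling behind this commit), assuming the block is prepended to the notebook's first markdown cell:

```python
import json

def add_canonical(nb_path: str, canonical_url: str) -> None:
    """Hypothetical sketch: prepend a canonical-URL block to a notebook's
    first markdown cell. The {=mdx} fence lets the docs pipeline emit the
    raw <head> tag when the notebook is rendered as a page."""
    with open(nb_path) as f:
        nb = json.load(f)
    block = [
        "```{=mdx}\n",
        "<head>\n",
        f'  <link rel="canonical" href="{canonical_url}" />\n',
        "</head>\n",
        "```\n",
        "\n",
    ]
    # Insert into the first markdown cell only.
    for cell in nb["cells"]:
        if cell["cell_type"] == "markdown":
            cell["source"] = block + cell["source"]
            break
    with open(nb_path, "w") as f:
        json.dump(nb, f, indent=1)
```

In the actual diff the block sometimes lands after the cell's `# Heading` line rather than at the very top, so real tooling would need a slightly smarter insertion point.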

View File

@@ -4,6 +4,10 @@ sidebar_class_name: hidden
# LangChain Expression Language (LCEL)
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/concepts/" />
</head>
LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we've seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:

View File

@@ -16,6 +16,12 @@
"id": "9a9acd2e",
"metadata": {},
"source": [
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/lcel_cheatsheet/\" />\n",
"</head>\n",
"```\n",
"\n",
"To make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about [in this section](/docs/expression_language/primitives).\n",
"\n",
"This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. \n",

View File

@@ -4,6 +4,10 @@ sidebar_position: 2
# Installation
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/how_to/installation/" />
</head>
## Official release
To install LangChain run:
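For plain Markdown/MDX pages like this one, the commit inserts a raw `<head>` block directly after the page's H1 instead of wrapping it in an `{=mdx}` fence. A hedged sketch of how that edit could be automated (hypothetical helper, not the commit's actual tooling):

```python
import re

def add_canonical_mdx(text: str, canonical_url: str) -> str:
    """Hypothetical sketch: insert a canonical <link> on the line after
    the first ATX H1 of a Markdown/MDX docs page."""
    head = (
        "<head>\n"
        f'  <link rel="canonical" href="{canonical_url}" />\n'
        "</head>\n"
    )
    # `^# ` (single hash plus space) matches only the H1, not `##` subheadings.
    return re.sub(r"^(# .+\n)", r"\1" + head, text, count=1, flags=re.M)
```

A regex like this is fragile against Setext headings or H1s inside code fences, which is one reason to prefer a proper Markdown AST for bulk edits.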

View File

@@ -5,6 +5,10 @@ sidebar_class_name: hidden
# Introduction
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/introduction/" />
</head>
**LangChain** is a framework for developing applications powered by large language models (LLMs).
LangChain simplifies every stage of the LLM application lifecycle:

View File

@@ -4,6 +4,10 @@ sidebar_position: 1
# Quickstart
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/tutorials/llm_chain/" />
</head>
In this quickstart we'll show you how to:
- Get set up with LangChain, LangSmith and LangServe
- Use the most basic and common components of LangChain: prompt templates, models, and output parsers

View File

@@ -17,6 +17,12 @@
"source": [
"# ChatOpenAI\n",
"\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/integrations/chat/openai/\" />\n",
"</head>\n",
"```\n",
"\n",
"This notebook covers how to get started with OpenAI chat models."
]
},

View File

@@ -6,6 +6,12 @@
"source": [
"# Llama.cpp\n",
"\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/integrations/llms/llamacpp/\" />\n",
"</head>\n",
"```\n",
"\n",
"[llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is a Python binding for [llama.cpp](https://github.com/ggerganov/llama.cpp).\n",
"\n",
"It supports inference for [many LLMs](https://github.com/ggerganov/llama.cpp#description) models, which can be accessed on [Hugging Face](https://huggingface.co/TheBloke).\n",

View File

@@ -6,6 +6,12 @@
"source": [
"# Ollama\n",
"\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/integrations/llms/ollama/\" />\n",
"</head>\n",
"```\n",
"\n",
"[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.\n",
"\n",
"Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. \n",

View File

@@ -7,6 +7,12 @@
"source": [
"# OpenAI\n",
"\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/integrations/llms/openai/\" />\n",
"</head>\n",
"```\n",
"\n",
"[OpenAI](https://platform.openai.com/docs/introduction) offers a spectrum of models with different levels of power suitable for different tasks.\n",
"\n",
"This example goes over how to use LangChain to interact with `OpenAI` [models](https://platform.openai.com/docs/models)"

View File

@@ -5,6 +5,12 @@ sidebar_class_name: hidden
# Providers
```{=mdx}
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/integrations/platforms/" />
</head>
```
:::info
If you'd like to write your own integration, see [Extending LangChain](/docs/guides/development/extending_langchain/).

View File

@@ -7,6 +7,12 @@
"source": [
"# Chroma\n",
"\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/integrations/vectorstores/chroma/\" />\n",
"</head>\n",
"```\n",
"\n",
">[Chroma](https://docs.trychroma.com/getting-started) is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.\n",
"\n",
"\n",

View File

@@ -7,6 +7,12 @@
"source": [
"# Faiss\n",
"\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/sequence/\" />\n",
"</head>\n",
"```\n",
"\n",
">[Facebook AI Similarity Search (Faiss)](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning.\n",
"\n",
"[Faiss documentation](https://faiss.ai/).\n",

View File

@@ -5,6 +5,10 @@ title: Types
# Agent Types
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/how_to/agent_executor/" />
</head>
This categorizes all the available agents along a few dimensions.
**Intended Model Type**

View File

@@ -17,6 +17,12 @@
"source": [
"# ReAct\n",
"\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/agent_executor/\" />\n",
"</head>\n",
"```\n",
"\n",
"This walkthrough showcases using an agent to implement the [ReAct](https://react-lm.github.io/) logic."
]
},

View File

@@ -16,6 +16,12 @@
"source": [
"# Tool calling agent\n",
"\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/agent_executor/\" />\n",
"</head>\n",
"```\n",
"\n",
"[Tool calling](/docs/modules/model_io/chat/function_calling) allows a model to detect when one or more tools should be called and respond with the inputs that should be passed to those tools. In an API call, you can describe tools and have the model intelligently choose to output a structured object like JSON containing arguments to call these tools. The goal of tools APIs is to more reliably return valid and useful tool calls than what can be done using a generic text completion or chat API.\n",
"\n",
"We can take advantage of this structured output, combined with the fact that you can bind multiple tools to a [tool calling chat model](/docs/integrations/chat/) and\n",

View File

@@ -17,6 +17,12 @@
"source": [
"# Custom agent\n",
"\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/tutorials/agents/\" />\n",
"</head>\n",
"```\n",
"\n",
"This notebook goes through how to create your own custom agent.\n",
"\n",
"In this example, we will use OpenAI Tool Calling to create this agent.\n",

View File

@@ -17,6 +17,12 @@
"id": "f4c03f40-1328-412d-8a48-1db0cd481b77",
"metadata": {},
"source": [
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/concepts/\" />\n",
"</head>\n",
"```\n",
"\n",
"The core idea of agents is to use a language model to choose a sequence of actions to take.\n",
"In chains, a sequence of actions is hardcoded (in code).\n",
"In agents, a language model is used as a reasoning engine to determine which actions to take and in which order.\n",

View File

@@ -18,6 +18,12 @@
"source": [
"# Quickstart\n",
"\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/agent_executor/\" />\n",
"</head>\n",
"```\n",
"\n",
"To best understand the agent framework, let's build an agent that has two tools: one to look things up online, and one to look up specific data that we've loaded into an index.\n",
"\n",
"This will assume knowledge of [LLMs](/docs/modules/model_io/) and [retrieval](/docs/modules/data_connection/) so if you haven't already explored those sections, it is recommended you do so.\n",
@@ -705,7 +711,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.9.1"
+"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -18,6 +18,12 @@
"id": "b872d874-ad6e-49b5-9435-66063a64d1a8",
"metadata": {},
"source": [
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/concepts/\" />\n",
"</head>\n",
"```\n",
"\n",
"Chains refer to sequences of calls - whether to an LLM, a tool, or a data preprocessing step. The primary supported way to do this is with [LCEL](/docs/expression_language). \n",
"\n",
"LCEL is great for constructing your chains, but it's also nice to have chains used off the shelf. There are two types of off-the-shelf chains that LangChain supports:\n",

View File

@@ -4,6 +4,10 @@ sidebar_class_name: hidden
---
# Document loaders
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/concepts/" />
</head>
:::info
Head to [Integrations](/docs/integrations/document_loaders/) for documentation on built-in document loader integrations with 3rd-party tools.
:::

View File

@@ -4,6 +4,12 @@ keywords: [PyPDFDirectoryLoader, PyMuPDFLoader]
# PDF
```{=mdx}
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/how_to/document_loader_pdf/" />
</head>
```
>[Portable Document Format (PDF)](https://en.wikipedia.org/wiki/PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.
This covers how to load `PDF` documents into the Document format that we use downstream.

View File

@@ -4,6 +4,12 @@ sidebar_class_name: hidden
---
# Text Splitters
```{=mdx}
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/concepts/" />
</head>
```
Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example
is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain
has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.

View File

@@ -7,6 +7,12 @@
"source": [
"# Recursively split by character\n",
"\n",
"\n", "\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/recursive_text_splitter/\" />\n",
"</head>\n",
"```\n",
"\n",
"This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is `[\"\\n\\n\", \"\\n\", \" \", \"\"]`. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.\n",
"\n",
"1. How the text is split: by list of characters.\n",

View File

@@ -7,6 +7,12 @@
"source": [
"# Semantic Chunking\n",
"\n",
"\n", "\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/semantic-chunker/\" />\n",
"</head>\n",
"```\n",
"\n",
"Splits the text based on semantic similarity.\n",
"\n",
"Taken from Greg Kamradt's wonderful notebook:\n",

View File

@@ -5,6 +5,10 @@ sidebar_class_name: hidden
# Retrieval
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/concepts/" />
</head>
Many LLM applications require user-specific data that is not part of the model's training set.
The primary way of accomplishing this is through Retrieval Augmented Generation (RAG).
In this process, external data is *retrieved* and then passed to the LLM when doing the *generation* step.

View File

@@ -6,6 +6,10 @@ sidebar_class_name: hidden
# Retrievers
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/concepts/" />
</head>
A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used
as the backbone of a retriever, but there are other types of retrievers as well.

View File

@@ -17,6 +17,12 @@
"source": [
"# Vector store-backed retriever\n",
"\n",
"\n", "\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/vectorstore_retriever/\" />\n",
"</head>\n",
"```\n",
"\n",
"A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface.\n",
"It uses the search methods implemented by a vector store, like similarity search and MMR, to query the texts in the vector store.\n",
"\n",

View File

@@ -4,6 +4,10 @@ sidebar_class_name: hidden
---
# Text embedding models
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/concepts/" />
</head>
:::info
Head to [Integrations](/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers.
:::

View File

@@ -4,6 +4,12 @@ sidebar_class_name: hidden
---
# Vector stores
```{=mdx}
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/concepts/" />
</head>
```
:::info
Head to [Integrations](/docs/integrations/vectorstores/) for documentation on built-in integrations with 3rd-party vector stores.
:::

View File

@@ -4,6 +4,10 @@ sidebar_class_name: hidden
# Components
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/concepts/" />
</head>
LangChain provides standard, extendable interfaces and external integrations for the following main components:
## [Model I/O](/docs/modules/model_io/)

View File

@@ -19,6 +19,12 @@
"# Tool calling\n",
"\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/tool_calling/\" />\n",
"</head>\n",
"```\n",
"\n",
"```{=mdx}\n",
":::info\n",
"We use the term \"tool calling\" interchangeably with \"function calling\". Although\n",
"function calling is sometimes meant to refer to invocations of a single function,\n",

View File

@@ -5,6 +5,10 @@ sidebar_class_name: hidden
# Chat Models
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/concepts/" />
</head>
Chat Models are a core component of LangChain.
A chat model is a language model that uses chat messages as inputs and returns chat messages as outputs (as opposed to using plain text).

View File

@@ -5,6 +5,10 @@ sidebar_class_name: hidden
# LLMs
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/concepts/" />
</head>
Large Language Models (LLMs) are a core component of LangChain.
LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs. To be specific, this interface is one that takes as input a string and returns a string.

View File

@@ -5,6 +5,10 @@ sidebar_class_name: hidden
---
# Output Parsers
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/concepts/" />
</head>
Output parsers are responsible for taking the output of an LLM and transforming it to a more suitable format. This is very useful when you are using LLMs to generate any form of structured data.
Besides having a large collection of different types of output parsers, one distinguishing benefit of LangChain OutputParsers is that many of them support streaming.

View File

@@ -4,6 +4,10 @@ sidebar_class_name: hidden
---
# Prompts
<head>
<link rel="canonical" href="https://python.langchain.com/v0.2/docs/concepts/" />
</head>
A prompt for a language model is a set of instructions or input provided by a user to
guide the model's response, helping it understand the context and generate relevant
and coherent language-based output, such as answering questions, completing sentences,

View File

@@ -18,6 +18,12 @@
"source": [
"# Quick reference\n",
"\n",
"\n", "\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/tutorials/llm_chain/\" />\n",
"</head>\n",
"```\n",
"\n",
"Prompt templates are predefined recipes for generating prompts for language models.\n",
"\n",
"A template may include instructions, few-shot examples, and specific context and\n",

View File

@@ -7,6 +7,12 @@
"source": [
"# Defining Custom Tools\n",
"\n",
"\n", "\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/custom_tools/\" />\n",
"</head>\n",
"```\n",
"\n",
"When constructing your own agent, you will need to provide it with a list of Tools that it can use. Besides the actual function that is called, the Tool consists of several components:\n",
"\n",
"- `name` (str), is required and must be unique within a set of tools provided to an agent\n",

View File

@@ -18,6 +18,12 @@
"source": [
"# Tools\n",
"\n",
"\n", "\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/concepts/\" />\n",
"</head>\n",
"```\n",
"\n",
"Tools are interfaces that an agent, chain, or LLM can use to interact with the world.\n",
"They combine a few things:\n",
"\n",

View File

@@ -15,6 +15,12 @@
"source": [
"# Chatbots\n",
"\n",
"\n", "\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/tutorials/chatbot/\" />\n",
"</head>\n",
"```\n",
"\n",
"## Overview\n",
"\n",
"Chatbots are one of the most popular use-cases for LLMs. The core features of chatbots are that they can have long-running, stateful conversations and can answer user questions using relevant information.\n",

View File

@@ -15,6 +15,12 @@
"source": [
"# Quickstart\n",
"\n",
"\n", "\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/tutorials/chatbot/\" />\n",
"</head>\n",
"```\n",
"\n",
"[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/chatbots.ipynb)"
]
},

View File

@@ -15,6 +15,12 @@
"source": [
"# Tool usage\n",
"\n",
"\n", "\n",
"```{=mdx}\n",
"<head>\n",
" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/chatbots_tools/\" />\n",
"</head>\n",
"```\n",
"\n",
"This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.\n",
"\n",
"Before reading this guide, we recommend you read both [the chatbot quickstart](/docs/use_cases/chatbots/quickstart) in this section and be familiar with [the documentation on agents](/docs/modules/agents/).\n",

View File

@@ -18,6 +18,12 @@
 "source": [
 "## Overview\n",
 "\n",
+"```{=mdx}\n",
+"<head>\n",
+" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/tutorials/extraction/\" />\n",
+"</head>\n",
+"```\n",
+"\n",
 "Large Language Models (LLMs) are emerging as an extremely capable technology for powering information extraction applications.\n",
 "\n",
 "Classical solutions to information extraction rely on a combination of people, (many) hand-crafted rules (e.g., regular expressions), and custom fine-tuned ML models.\n",

View File

@@ -17,6 +17,12 @@
 "source": [
 "# Add chat history\n",
 "\n",
+"```{=mdx}\n",
+"<head>\n",
+" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/how_to/qa_chat_history_how_to/\" />\n",
+"</head>\n",
+"```\n",
+"\n",
 "In many Q&A applications we want to allow the user to have a back-and-forth conversation, meaning the application needs some sort of \"memory\" of past questions and answers, and some logic for incorporating those into its current thinking.\n",
 "\n",
 "In this guide we focus on **adding logic for incorporating historical messages.** Further details on chat history management are [covered here](/docs/expression_language/how_to/message_history).\n",

View File

@@ -15,7 +15,13 @@
 "id": "86fc5bb2-017f-434e-8cd6-53ab214a5604",
 "metadata": {},
 "source": [
-"# Q&A with RAG"
+"# Q&A with RAG\n",
+"\n",
+"```{=mdx}\n",
+"<head>\n",
+" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/tutorials/rag/\" />\n",
+"</head>\n",
+"```"
 ]
 },
 {

View File

@@ -5,6 +5,10 @@ title: Quickstart

 # Quickstart

+<head>
+  <link rel="canonical" href="https://python.langchain.com/v0.2/docs/tutorials/rag/" />
+</head>
+
 LangChain has a number of components designed to help build
 question-answering applications, and RAG applications more generally. To
 familiarize ourselves with these, we'll build a simple Q&A application

View File

@@ -15,6 +15,12 @@
 "source": [
 "# Agents\n",
 "\n",
+"```{=mdx}\n",
+"<head>\n",
+" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/tutorials/sql_qa/\" />\n",
+"</head>\n",
+"```\n",
+"\n",
 "LangChain has a SQL Agent which provides a more flexible way of interacting with SQL Databases than a chain. The main advantages of using the SQL Agent are:\n",
 "\n",
 "- It can answer questions based on the databases' schema as well as on the databases' content (like describing a specific table).\n",

View File

@@ -15,6 +15,12 @@
 "source": [
 "# SQL\n",
 "\n",
+"```{=mdx}\n",
+"<head>\n",
+" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/tutorials/sql_qa/\" />\n",
+"</head>\n",
+"```\n",
+"\n",
 "One of the most common types of databases that we can build Q&A systems for are SQL databases. LangChain comes with a number of built-in chains and agents that are compatible with any SQL dialect supported by SQLAlchemy (e.g., MySQL, PostgreSQL, Oracle SQL, Databricks, SQLite). They enable use cases such as:\n",
 "\n",
 "* Generating queries that will be run based on natural language questions,\n",

View File

@@ -15,6 +15,12 @@
 "source": [
 "# Quickstart\n",
 "\n",
+"```{=mdx}\n",
+"<head>\n",
+" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/tutorials/sql_qa/\" />\n",
+"</head>\n",
+"```\n",
+"\n",
 "In this guide we'll go over the basic ways to create a Q&A chain and agent over a SQL database. These systems will allow us to ask a question about the data in a SQL database and get back a natural language answer. The main difference between the two is that our agent can query the database in a loop as many times as it needs to answer the question.\n",
 "\n",
 "## ⚠️ Security note ⚠️\n",

View File

@@ -16,6 +16,12 @@
 "id": "cf13f702",
 "metadata": {},
 "source": [
+"```{=mdx}\n",
+"<head>\n",
+" <link rel=\"canonical\" href=\"https://python.langchain.com/v0.2/docs/tutorials/summarization/\" />\n",
+"</head>\n",
+"```\n",
+"\n",
 "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/v0.1/docs/docs/use_cases/summarization.ipynb)\n",
 "\n",
 "## Use case\n",
@@ -589,9 +595,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "poetry-venv",
+"display_name": "Python 3",
 "language": "python",
-"name": "poetry-venv"
+"name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
@@ -603,7 +609,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.1"
+"version": "3.10.5"
 }
 },
 "nbformat": 4,