mirror of https://github.com/hwchase17/langchain.git, synced 2025-06-26 16:43:35 +00:00

docs[patch]: Standardize prerequisites in tutorial docs (#23150)

CC @baskaryan

This commit is contained in:
parent 3d54784e6d
commit 0c2ebe5f47
@ -21,6 +21,16 @@
"source": [
"# Build an Agent\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chat Models](/docs/concepts/#chat-models)\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Agents](/docs/concepts/#agents)\n",
"\n",
":::\n",
"\n",
"By themselves, language models can't take actions - they just output text.\n",
"A big use case for LangChain is creating **agents**.\n",
"Agents are systems that use LLMs as reasoning engines to determine which actions to take and the inputs to pass them.\n",
@ -28,16 +38,6 @@
"\n",
"In this tutorial we will build an agent that can interact with a search engine. You will be able to ask this agent questions, watch it call the search tool, and have conversations with it.\n",
"\n",
"\n",
"## Concepts\n",
"\n",
"In following this tutorial, you will learn how to:\n",
"\n",
"- Use [language models](/docs/concepts/#chat-models), in particular their tool-calling ability\n",
"- Use a Search [Tool](/docs/concepts/#tools) to look up information from the Internet\n",
"- Compose a [LangGraph Agent](/docs/concepts/#agents), which uses an LLM to determine actions and then executes them\n",
"- Debug and trace your application using [LangSmith](/docs/concepts/#langsmith)\n",
"\n",
"## End-to-end agent\n",
"\n",
"The code snippet below represents a fully functional agent that uses an LLM to decide which tools to use. It is equipped with a generic search tool. It has conversational memory - meaning that it can be used as a multi-turn chatbot.\n",
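For orientation, here is a minimal sketch of the kind of end-to-end agent this tutorial describes, using LangGraph's prebuilt helper. The model name, tool choice, and thread id are illustrative assumptions, not the notebook's exact snippet.

```python
# Hedged sketch: a tool-calling agent with conversational memory.
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

model = ChatAnthropic(model="claude-3-sonnet-20240229")  # assumed model
tools = [TavilySearchResults(max_results=2)]             # generic search tool
memory = MemorySaver()                                   # conversational memory

agent_executor = create_react_agent(model, tools, checkpointer=memory)

# The thread_id keys the conversation, enabling multi-turn chat.
config = {"configurable": {"thread_id": "abc123"}}
response = agent_executor.invoke(
    {"messages": [HumanMessage(content="What's the weather in SF?")]}, config
)
print(response["messages"][-1].content)
```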
@ -25,6 +25,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chat Models](/docs/concepts/#chat-models)\n",
"- [Prompt Templates](/docs/concepts/#prompt-templates)\n",
"- [Chat History](/docs/concepts/#chat-history)\n",
"\n",
":::\n",
"\n",
"## Overview\n",
"\n",
"We'll go over an example of how to design and implement an LLM-powered chatbot. \n",
@ -39,18 +49,6 @@
"\n",
"This tutorial will cover the basics, which will be helpful for those two more advanced topics, but feel free to skip directly to them if you prefer.\n",
"\n",
"\n",
"## Concepts\n",
"\n",
"Here are a few of the high-level components we'll be working with:\n",
"\n",
"- [`Chat Models`](/docs/concepts/#chat-models). The chatbot interface is based around messages rather than raw text, and therefore is best suited to Chat Models rather than text LLMs.\n",
"- [`Prompt Templates`](/docs/concepts/#prompt-templates), which simplify the process of assembling prompts that combine default messages, user input, chat history, and (optionally) additional retrieved context.\n",
"- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to follow-up questions. \n",
"- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n",
"\n",
"We'll cover how to fit the above components together to create a powerful conversational chatbot.\n",
"\n",
"## Setup\n",
"\n",
"### Jupyter Notebook\n",
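A minimal sketch of how those three components fit together, assuming a simple in-memory history and an OpenAI chat model (both illustrative, not the notebook's code):

```python
# Hedged sketch: prompt template + chat model + chat history.
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),  # slot for past turns
    ("human", "{input}"),
])
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo")

history = InMemoryChatMessageHistory()  # lets the bot "remember" past turns

def chat(user_input: str) -> str:
    reply = chain.invoke({"history": history.messages, "input": user_input})
    history.add_user_message(user_input)
    history.add_ai_message(reply.content)
    return reply.content

print(chat("Hi, I'm Bob."))
print(chat("What's my name?"))  # answered using the chat history
```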
@ -17,18 +17,21 @@
"source": [
"# Build an Extraction Chain\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chat Models](/docs/concepts/#chat-models)\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Tool calling](/docs/concepts/#function-tool-calling)\n",
"\n",
":::\n",
"\n",
"In this tutorial, we will build a chain to extract structured information from unstructured text. \n",
"\n",
":::{.callout-important}\n",
"This tutorial will only work with models that support **function/tool calling**\n",
":::\n",
"\n",
"## Concepts\n",
"\n",
"Concepts we will cover are:\n",
"- Using [language models](/docs/concepts/#chat-models)\n",
"- Using [function/tool calling](/docs/concepts/#function-tool-calling)\n",
"- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n"
"This tutorial will only work with models that support **tool calling**\n",
":::"
]
},
{
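For context, extraction chains of this kind are typically built by binding a schema to a tool-calling model. A hedged sketch, with an assumed `Person` schema and model name:

```python
# Hedged sketch: structured extraction via the model's tool-calling ability.
from typing import Optional

from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class Person(BaseModel):
    """Information about a person mentioned in the text."""
    name: Optional[str] = Field(default=None, description="The person's name")
    hair_color: Optional[str] = Field(default=None, description="Hair color, if stated")

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
extractor = llm.with_structured_output(Person)  # requires tool-calling support

print(extractor.invoke("Alan Smith has blond hair."))
# -> Person(name='Alan Smith', hair_color='blond')
```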
@ -7,6 +7,18 @@
"source": [
"# Build a Local RAG Application\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chat Models](/docs/concepts/#chat-models)\n",
"- [Chaining runnables](/docs/how_to/sequence/)\n",
"- [Embeddings](/docs/concepts/#embedding-models)\n",
"- [Vector stores](/docs/concepts/#vector-stores)\n",
"- [Retrieval-augmented generation](/docs/tutorials/rag/)\n",
"\n",
":::\n",
"\n",
"The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), [GPT4All](https://github.com/nomic-ai/gpt4all), and [llamafile](https://github.com/Mozilla-Ocho/llamafile) underscores the importance of running LLMs locally.\n",
"\n",
"LangChain has [integrations](https://integrations.langchain.com/) with many open-source LLMs that can be run locally.\n",
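As a small illustration of the local approach, calling a locally served model via Ollama (assumes Ollama is installed and a model such as `llama3` has been pulled; no cloud API key needed):

```python
# Hedged sketch: a locally running chat model.
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="llama3")  # assumes `ollama pull llama3` was run
print(llm.invoke("Summarize what RAG is in one sentence.").content)
```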
@ -19,6 +19,18 @@
"source": [
"# Build a PDF ingestion and Question/Answering system\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Document loaders](/docs/concepts/#document-loaders)\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [Embeddings](/docs/concepts/#embedding-models)\n",
"- [Vector stores](/docs/concepts/#vector-stores)\n",
"- [Retrieval-augmented generation](/docs/tutorials/rag/)\n",
"\n",
":::\n",
"\n",
"PDF files often hold crucial unstructured data unavailable from other sources. They can be quite lengthy, and unlike plain text files, cannot generally be fed directly into the prompt of a language model.\n",
"\n",
"In this tutorial, you'll create a system that can answer questions about PDF files. More specifically, you'll use a [Document Loader](/docs/concepts/#document-loaders) to load text in a format usable by an LLM, then build a retrieval-augmented generation (RAG) pipeline to answer questions, including citations from the source material.\n",
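A hedged sketch of the load step described above (the file path is a placeholder, and `PyPDFLoader` requires the `pypdf` package):

```python
# Hedged sketch: turning a PDF into LLM-usable Documents.
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("example_data/my_report.pdf")  # hypothetical path
docs = loader.load()                                # one Document per page

print(len(docs))
print(docs[0].page_content[:100])  # raw text usable by an LLM
print(docs[0].metadata)            # includes page numbers, useful for citations
```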
@ -17,6 +17,20 @@
"source": [
"# Conversational RAG\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chat history](/docs/concepts/#chat-history)\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [Embeddings](/docs/concepts/#embedding-models)\n",
"- [Vector stores](/docs/concepts/#vector-stores)\n",
"- [Retrieval-augmented generation](/docs/tutorials/rag/)\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Agents](/docs/concepts/#agents)\n",
"\n",
":::\n",
"\n",
"In many Q&A applications we want to allow the user to have a back-and-forth conversation, meaning the application needs some sort of \"memory\" of past questions and answers, and some logic for incorporating those into its current thinking.\n",
"\n",
"In this guide we focus on **adding logic for incorporating historical messages.** Further details on chat history management are [covered here](/docs/how_to/message_history).\n",
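One common way to incorporate historical messages is to rewrite the latest question into a standalone query before retrieval. A hedged sketch using `create_history_aware_retriever` over a toy corpus (the texts, model name, and prompt wording are assumptions; FAISS requires `faiss-cpu`):

```python
# Hedged sketch: history-aware retrieval for conversational RAG.
from langchain.chains import create_history_aware_retriever
from langchain_community.vectorstores import FAISS
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

llm = ChatOpenAI(model="gpt-3.5-turbo")

# Toy corpus standing in for the tutorial's real documents.
vectorstore = FAISS.from_texts(
    ["Task decomposition splits a big task into smaller steps."],
    OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

contextualize_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "Given the chat history and the latest user question, rewrite the "
     "question so it can be understood without the history."),
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
])

history_aware_retriever = create_history_aware_retriever(
    llm, retriever, contextualize_prompt
)

docs = history_aware_retriever.invoke({
    "chat_history": [
        HumanMessage("What is task decomposition?"),
        AIMessage("It means breaking a task into smaller steps."),
    ],
    "input": "What are common ways of doing it?",  # "it" resolved via history
})
print(docs)
```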
@ -17,6 +17,18 @@
"source": [
"# Build a Query Analysis System\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Document loaders](/docs/concepts/#document-loaders)\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [Embeddings](/docs/concepts/#embedding-models)\n",
"- [Vector stores](/docs/concepts/#vector-stores)\n",
"- [Retrieval](/docs/concepts/#retrieval)\n",
"\n",
":::\n",
"\n",
"This page will show how to use query analysis in a basic end-to-end example. This will cover creating a simple search engine, showing a failure mode that occurs when passing a raw user question to that search, and then an example of how query analysis can help address that issue. There are MANY different query analysis techniques, and this end-to-end example will not show all of them.\n",
"\n",
"For the purpose of this example, we will do retrieval over the LangChain YouTube videos."
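The core move in query analysis is to have the model emit a structured search query rather than searching on the raw user question. A hedged sketch, with an assumed `Search` schema loosely matching the video-retrieval use case:

```python
# Hedged sketch: turning a raw question into a structured query.
from typing import Optional

from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class Search(BaseModel):
    """Search over a database of tutorial videos."""
    query: str = Field(..., description="Similarity search query")
    publish_year: Optional[int] = Field(None, description="Year filter")

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
query_analyzer = llm.with_structured_output(Search)

print(query_analyzer.invoke("videos on RAG published in 2023"))
# -> e.g. Search(query='RAG', publish_year=2023)
```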
@ -16,6 +16,9 @@
"LangSmith will become increasingly helpful as our application grows in\n",
"complexity.\n",
"\n",
"If you're already familiar with basic retrieval, you might also be interested in\n",
"this [high-level overview of different retrieval techniques](/docs/concepts/#retrieval).\n",
"\n",
"## What is RAG?\n",
"\n",
"RAG is a technique for augmenting LLM knowledge with additional data.\n",
@ -36,7 +39,7 @@
"The most common full sequence from raw data to answer looks like:\n",
"\n",
"### Indexing\n",
"1. **Load**: First we need to load our data. This is done with [DocumentLoaders](/docs/concepts/#document-loaders).\n",
"1. **Load**: First we need to load our data. This is done with [Document Loaders](/docs/concepts/#document-loaders).\n",
"2. **Split**: [Text splitters](/docs/concepts/#text-splitters) break large `Documents` into smaller chunks. This is useful both for indexing data and for passing it in to a model, since large chunks are harder to search over and won't fit in a model's finite context window.\n",
"3. **Store**: We need somewhere to store and index our splits, so that they can later be searched over. This is often done using a [VectorStore](/docs/concepts/#vectorstores) and [Embeddings](/docs/concepts/#embedding-models) model.\n",
"\n",
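A hedged sketch of those Load, Split, and Store steps wired together (the URL, chunk sizes, and choice of FAISS are placeholders, not the notebook's exact values):

```python
# Hedged sketch: the Load -> Split -> Store indexing pipeline.
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Load: fetch raw documents with a Document Loader.
docs = WebBaseLoader("https://example.com/some-post").load()  # placeholder URL

# 2. Split: break large documents into searchable chunks.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = splitter.split_documents(docs)

# 3. Store: embed the chunks and index them in a vector store.
vectorstore = FAISS.from_documents(splits, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
```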
@ -930,14 +933,10 @@
"the above sections. Along with the **Go deeper** sources mentioned\n",
"above, good next steps include:\n",
"\n",
"- [Return\n",
" sources](/docs/how_to/qa_sources): Learn\n",
" how to return source documents\n",
"- [Streaming](/docs/how_to/streaming):\n",
" Learn how to stream outputs and intermediate steps\n",
"- [Add chat\n",
" history](/docs/how_to/message_history):\n",
" Learn how to add chat history to your app"
"- [Return sources](/docs/how_to/qa_sources): Learn how to return source documents\n",
"- [Streaming](/docs/how_to/streaming): Learn how to stream outputs and intermediate steps\n",
"- [Add chat history](/docs/how_to/message_history): Learn how to add chat history to your app\n",
"- [Retrieval conceptual guide](/docs/concepts/#retrieval): A high-level overview of specific retrieval techniques"
]
}
],
@ -6,6 +6,17 @@
"source": [
"# Build a Question/Answering system over SQL data\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Chaining runnables](/docs/how_to/sequence/)\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Agents](/docs/concepts/#agents)\n",
"\n",
":::\n",
"\n",
"Enabling an LLM system to query structured data can be qualitatively different from unstructured text data. Whereas in the latter it is common to generate text that can be searched against a vector database, the approach for structured data is often for the LLM to write and execute queries in a DSL, such as SQL. In this guide we'll go over the basic ways to create a Q&A system over tabular data in databases. We will cover implementations using both chains and agents. These systems will allow us to ask a question about the data in a database and get back a natural language answer. The main difference between the two is that our agent can query the database in a loop as many times as it needs to answer the question.\n",
"\n",
"## ⚠️ Security note ⚠️\n",
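A hedged sketch of the chain approach, where the LLM writes SQL from a natural-language question (the Chinook database path is a placeholder; executing generated SQL is exactly what the security note below warns about):

```python
# Hedged sketch: question -> SQL -> result over a SQL database.
from langchain.chains import create_sql_query_chain
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # placeholder path
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

chain = create_sql_query_chain(llm, db)
sql = chain.invoke({"question": "How many employees are there?"})
print(sql)          # generated SQL, e.g. SELECT COUNT(*) FROM Employee
print(db.run(sql))  # executes against the database - see the security note
```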