diff --git a/docs/docs/expression_language/get_started.ipynb b/docs/docs/expression_language/get_started.ipynb
index f3f55a36fe5..d6db4200383 100644
--- a/docs/docs/expression_language/get_started.ipynb
+++ b/docs/docs/expression_language/get_started.ipynb
@@ -17,6 +17,12 @@
    "id": "befa7fd1",
    "metadata": {},
    "source": [
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "LCEL makes it easy to build complex chains from basic components, and supports out of the box functionality such as streaming, parallelism, and logging."
    ]
   },
diff --git a/docs/docs/expression_language/index.mdx b/docs/docs/expression_language/index.mdx
index 9b970fda6db..4aff0314d3f 100644
--- a/docs/docs/expression_language/index.mdx
+++ b/docs/docs/expression_language/index.mdx
@@ -4,6 +4,10 @@ sidebar_class_name: hidden
 
 # LangChain Expression Language (LCEL)
 
+
+
+
+
 LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
 LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
diff --git a/docs/docs/expression_language/interface.ipynb b/docs/docs/expression_language/interface.ipynb
index 88485abd50e..91197e28d25 100644
--- a/docs/docs/expression_language/interface.ipynb
+++ b/docs/docs/expression_language/interface.ipynb
@@ -16,6 +16,12 @@
    "id": "9a9acd2e",
    "metadata": {},
    "source": [
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "To make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about [in this section](/docs/expression_language/primitives).\n",
     "\n",
     "This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. \n",
diff --git a/docs/docs/get_started/installation.mdx b/docs/docs/get_started/installation.mdx
index e84ff564604..41e9b399f8f 100644
--- a/docs/docs/get_started/installation.mdx
+++ b/docs/docs/get_started/installation.mdx
@@ -4,6 +4,10 @@ sidebar_position: 2
 
 # Installation
 
+
+
+
+
 ## Official release
 
 To install LangChain run:
diff --git a/docs/docs/get_started/introduction.mdx b/docs/docs/get_started/introduction.mdx
index ab8350b7d3b..2a1047da853 100644
--- a/docs/docs/get_started/introduction.mdx
+++ b/docs/docs/get_started/introduction.mdx
@@ -5,6 +5,10 @@ sidebar_class_name: hidden
 
 # Introduction
 
+
+
+
+
 **LangChain** is a framework for developing applications powered by large language models (LLMs).
 
 LangChain simplifies every stage of the LLM application lifecycle:
diff --git a/docs/docs/get_started/quickstart.mdx b/docs/docs/get_started/quickstart.mdx
index 87b5a100132..9ba099db689 100644
--- a/docs/docs/get_started/quickstart.mdx
+++ b/docs/docs/get_started/quickstart.mdx
@@ -4,6 +4,10 @@ sidebar_position: 1
 
 # Quickstart
 
+
+
+
+
 In this quickstart we'll show you how to:
 - Get setup with LangChain, LangSmith and LangServe
 - Use the most basic and common components of LangChain: prompt templates, models, and output parsers
diff --git a/docs/docs/integrations/chat/openai.ipynb b/docs/docs/integrations/chat/openai.ipynb
index 176e3840ec9..a68ea4a2f0d 100644
--- a/docs/docs/integrations/chat/openai.ipynb
+++ b/docs/docs/integrations/chat/openai.ipynb
@@ -17,6 +17,12 @@
    "source": [
     "# ChatOpenAI\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "This notebook covers how to get started with OpenAI chat models."
    ]
   },
diff --git a/docs/docs/integrations/llms/llamacpp.ipynb b/docs/docs/integrations/llms/llamacpp.ipynb
index 5b03e5f4a25..e9bd62e7e4f 100644
--- a/docs/docs/integrations/llms/llamacpp.ipynb
+++ b/docs/docs/integrations/llms/llamacpp.ipynb
@@ -6,6 +6,12 @@
    "source": [
     "# Llama.cpp\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "[llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is a Python binding for [llama.cpp](https://github.com/ggerganov/llama.cpp).\n",
     "\n",
     "It supports inference for [many LLMs](https://github.com/ggerganov/llama.cpp#description) models, which can be accessed on [Hugging Face](https://huggingface.co/TheBloke).\n",
diff --git a/docs/docs/integrations/llms/ollama.ipynb b/docs/docs/integrations/llms/ollama.ipynb
index cd4d1782e2f..f6cebb31fe9 100644
--- a/docs/docs/integrations/llms/ollama.ipynb
+++ b/docs/docs/integrations/llms/ollama.ipynb
@@ -6,6 +6,12 @@
    "source": [
     "# Ollama\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.\n",
     "\n",
     "Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. \n",
diff --git a/docs/docs/integrations/llms/openai.ipynb b/docs/docs/integrations/llms/openai.ipynb
index a368259fb80..6af90fb1401 100644
--- a/docs/docs/integrations/llms/openai.ipynb
+++ b/docs/docs/integrations/llms/openai.ipynb
@@ -7,6 +7,12 @@
    "source": [
     "# OpenAI\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "[OpenAI](https://platform.openai.com/docs/introduction) offers a spectrum of models with different levels of power suitable for different tasks.\n",
     "\n",
     "This example goes over how to use LangChain to interact with `OpenAI` [models](https://platform.openai.com/docs/models)"
diff --git a/docs/docs/integrations/platforms/index.mdx b/docs/docs/integrations/platforms/index.mdx
index 5e7040eba49..16a878a6f60 100644
--- a/docs/docs/integrations/platforms/index.mdx
+++ b/docs/docs/integrations/platforms/index.mdx
@@ -5,6 +5,12 @@ sidebar_class_name: hidden
 
 # Providers
 
+```{=mdx}
+
+ 
+
+```
+
 :::info
 
 If you'd like to write your own integration, see [Extending LangChain](/docs/guides/development/extending_langchain/).
diff --git a/docs/docs/integrations/vectorstores/chroma.ipynb b/docs/docs/integrations/vectorstores/chroma.ipynb
index 747abbebb82..33f5707dcf5 100644
--- a/docs/docs/integrations/vectorstores/chroma.ipynb
+++ b/docs/docs/integrations/vectorstores/chroma.ipynb
@@ -7,6 +7,12 @@
    "source": [
     "# Chroma\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     ">[Chroma](https://docs.trychroma.com/getting-started) is a AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.\n",
     "\n",
     "\n",
diff --git a/docs/docs/integrations/vectorstores/faiss.ipynb b/docs/docs/integrations/vectorstores/faiss.ipynb
index 022d5a5003f..f4a3fb2e8aa 100644
--- a/docs/docs/integrations/vectorstores/faiss.ipynb
+++ b/docs/docs/integrations/vectorstores/faiss.ipynb
@@ -7,6 +7,12 @@
    "source": [
     "# Faiss\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     ">[Facebook AI Similarity Search (Faiss)](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning.\n",
     "\n",
     "[Faiss documentation](https://faiss.ai/).\n",
@@ -83,7 +89,7 @@
    "metadata": {
     "tags": []
    },
-   "outputs": [ 
+   "outputs": [
    {
     "data": {
      "text/plain": [
diff --git a/docs/docs/modules/agents/agent_types/index.mdx b/docs/docs/modules/agents/agent_types/index.mdx
index 054c9415419..7f9a28e463b 100644
--- a/docs/docs/modules/agents/agent_types/index.mdx
+++ b/docs/docs/modules/agents/agent_types/index.mdx
@@ -5,6 +5,10 @@ title: Types
 
 # Agent Types
 
+
+
+
+
 This categorizes all the available agents along a few dimensions.
 
 **Intended Model Type**
diff --git a/docs/docs/modules/agents/agent_types/react.ipynb b/docs/docs/modules/agents/agent_types/react.ipynb
index 2a8e604d482..d94cde17412 100644
--- a/docs/docs/modules/agents/agent_types/react.ipynb
+++ b/docs/docs/modules/agents/agent_types/react.ipynb
@@ -17,6 +17,12 @@
    "source": [
     "# ReAct\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "This walkthrough showcases using an agent to implement the [ReAct](https://react-lm.github.io/) logic."
    ]
   },
diff --git a/docs/docs/modules/agents/agent_types/tool_calling.ipynb b/docs/docs/modules/agents/agent_types/tool_calling.ipynb
index e9fcae5d6c3..a250a03adbf 100644
--- a/docs/docs/modules/agents/agent_types/tool_calling.ipynb
+++ b/docs/docs/modules/agents/agent_types/tool_calling.ipynb
@@ -16,6 +16,12 @@
    "source": [
     "# Tool calling agent\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "[Tool calling](/docs/modules/model_io/chat/function_calling) allows a model to detect when one or more tools should be called and respond with the inputs that should be passed to those tools. In an API call, you can describe tools and have the model intelligently choose to output a structured object like JSON containing arguments to call these tools. The goal of tools APIs is to more reliably return valid and useful tool calls than what can be done using a generic text completion or chat API.\n",
     "\n",
     "We can take advantage of this structured output, combined with the fact that you can bind multiple tools to a [tool calling chat model](/docs/integrations/chat/) and\n",
diff --git a/docs/docs/modules/agents/how_to/custom_agent.ipynb b/docs/docs/modules/agents/how_to/custom_agent.ipynb
index 8376017632d..5ec9a77ec66 100644
--- a/docs/docs/modules/agents/how_to/custom_agent.ipynb
+++ b/docs/docs/modules/agents/how_to/custom_agent.ipynb
@@ -17,6 +17,12 @@
    "source": [
     "# Custom agent\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "This notebook goes through how to create your own custom agent.\n",
     "\n",
     "In this example, we will use OpenAI Tool Calling to create this agent.\n",
diff --git a/docs/docs/modules/agents/index.ipynb b/docs/docs/modules/agents/index.ipynb
index 04a53086180..9552a51f62d 100644
--- a/docs/docs/modules/agents/index.ipynb
+++ b/docs/docs/modules/agents/index.ipynb
@@ -17,6 +17,12 @@
    "id": "f4c03f40-1328-412d-8a48-1db0cd481b77",
    "metadata": {},
    "source": [
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "The core idea of agents is to use a language model to choose a sequence of actions to take.\n",
     "In chains, a sequence of actions is hardcoded (in code).\n",
     "In agents, a language model is used as a reasoning engine to determine which actions to take and in which order.\n",
diff --git a/docs/docs/modules/agents/quick_start.ipynb b/docs/docs/modules/agents/quick_start.ipynb
index 883f22076c4..93db0ab765b 100644
--- a/docs/docs/modules/agents/quick_start.ipynb
+++ b/docs/docs/modules/agents/quick_start.ipynb
@@ -18,6 +18,12 @@
    "source": [
     "# Quickstart\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "To best understand the agent framework, let's build an agent that has two tools: one to look things up online, and one to look up specific data that we've loaded into a index.\n",
     "\n",
     "This will assume knowledge of [LLMs](/docs/modules/model_io/) and [retrieval](/docs/modules/data_connection/) so if you haven't already explored those sections, it is recommended you do so.\n",
@@ -705,7 +711,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.1"
+   "version": "3.10.5"
   }
  },
 "nbformat": 4,
diff --git a/docs/docs/modules/chains.ipynb b/docs/docs/modules/chains.ipynb
index b57e41b3710..78949e56e1e 100644
--- a/docs/docs/modules/chains.ipynb
+++ b/docs/docs/modules/chains.ipynb
@@ -18,6 +18,12 @@
    "id": "b872d874-ad6e-49b5-9435-66063a64d1a8",
    "metadata": {},
    "source": [
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "Chains refer to sequences of calls - whether to an LLM, a tool, or a data preprocessing step. The primary supported way to do this is with [LCEL](/docs/expression_language). \n",
     "\n",
     "LCEL is great for constructing your chains, but it's also nice to have chains used off the shelf. There are two types of off-the-shelf chains that LangChain supports:\n",
diff --git a/docs/docs/modules/data_connection/document_loaders/index.mdx b/docs/docs/modules/data_connection/document_loaders/index.mdx
index bb12e97e6b2..56e88570dd1 100644
--- a/docs/docs/modules/data_connection/document_loaders/index.mdx
+++ b/docs/docs/modules/data_connection/document_loaders/index.mdx
@@ -4,6 +4,10 @@ sidebar_class_name: hidden
 ---
 
 # Document loaders
 
+
+
+
+
 :::info
 Head to [Integrations](/docs/integrations/document_loaders/) for documentation on built-in document loader integrations with 3rd-party tools.
 :::
diff --git a/docs/docs/modules/data_connection/document_loaders/pdf.mdx b/docs/docs/modules/data_connection/document_loaders/pdf.mdx
index c9215df055e..efadb9b690e 100644
--- a/docs/docs/modules/data_connection/document_loaders/pdf.mdx
+++ b/docs/docs/modules/data_connection/document_loaders/pdf.mdx
@@ -4,6 +4,12 @@ keywords: [PyPDFDirectoryLoader, PyMuPDFLoader]
 
 # PDF
 
+```{=mdx}
+
+ 
+
+```
+
 >[Portable Document Format (PDF)](https://en.wikipedia.org/wiki/PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.
 
 This covers how to load `PDF` documents into the Document format that we use downstream.
diff --git a/docs/docs/modules/data_connection/document_transformers/index.mdx b/docs/docs/modules/data_connection/document_transformers/index.mdx
index 455bb5cc4cd..6a57537876e 100644
--- a/docs/docs/modules/data_connection/document_transformers/index.mdx
+++ b/docs/docs/modules/data_connection/document_transformers/index.mdx
@@ -4,6 +4,12 @@ sidebar_class_name: hidden
 ---
 
 # Text Splitters
 
+```{=mdx}
+
+ 
+
+```
+
 Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.
diff --git a/docs/docs/modules/data_connection/document_transformers/recursive_text_splitter.ipynb b/docs/docs/modules/data_connection/document_transformers/recursive_text_splitter.ipynb
index 4d576acddd9..b6c840e17e9 100644
--- a/docs/docs/modules/data_connection/document_transformers/recursive_text_splitter.ipynb
+++ b/docs/docs/modules/data_connection/document_transformers/recursive_text_splitter.ipynb
@@ -7,6 +7,12 @@
    "source": [
     "# Recursively split by character\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is `[\"\\n\\n\", \"\\n\", \" \", \"\"]`. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.\n",
     "\n",
     "1. How the text is split: by list of characters.\n",
diff --git a/docs/docs/modules/data_connection/document_transformers/semantic-chunker.ipynb b/docs/docs/modules/data_connection/document_transformers/semantic-chunker.ipynb
index 2f766dd1c0f..ed0b5a33152 100644
--- a/docs/docs/modules/data_connection/document_transformers/semantic-chunker.ipynb
+++ b/docs/docs/modules/data_connection/document_transformers/semantic-chunker.ipynb
@@ -7,6 +7,12 @@
    "source": [
     "# Semantic Chunking\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "Splits the text based on semantic similarity.\n",
     "\n",
     "Taken from Greg Kamradt's wonderful notebook:\n",
diff --git a/docs/docs/modules/data_connection/index.mdx b/docs/docs/modules/data_connection/index.mdx
index 78806d1749d..138121b9e7b 100644
--- a/docs/docs/modules/data_connection/index.mdx
+++ b/docs/docs/modules/data_connection/index.mdx
@@ -5,6 +5,10 @@ sidebar_class_name: hidden
 
 # Retrieval
 
+
+
+
+
 Many LLM applications require user-specific data that is not part of the model's training set. The primary way of accomplishing this is through Retrieval Augmented Generation (RAG). In this process, external data is *retrieved* and then passed to the LLM when doing the *generation* step.
diff --git a/docs/docs/modules/data_connection/retrievers/index.mdx b/docs/docs/modules/data_connection/retrievers/index.mdx
index e6213a28055..413ce68e7fd 100644
--- a/docs/docs/modules/data_connection/retrievers/index.mdx
+++ b/docs/docs/modules/data_connection/retrievers/index.mdx
@@ -6,6 +6,10 @@ sidebar_class_name: hidden
 
 # Retrievers
 
+
+
+
+
 A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store.
 A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used as the backbone of a retriever, but there are other types of retrievers as well.
diff --git a/docs/docs/modules/data_connection/retrievers/vectorstore.ipynb b/docs/docs/modules/data_connection/retrievers/vectorstore.ipynb
index d41db69552f..6442a6b5bd6 100644
--- a/docs/docs/modules/data_connection/retrievers/vectorstore.ipynb
+++ b/docs/docs/modules/data_connection/retrievers/vectorstore.ipynb
@@ -17,6 +17,12 @@
    "source": [
     "# Vector store-backed retriever\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface.\n",
     "It uses the search methods implemented by a vector store, like similarity search and MMR, to query the texts in the vector store.\n",
     "\n",
diff --git a/docs/docs/modules/data_connection/text_embedding/index.mdx b/docs/docs/modules/data_connection/text_embedding/index.mdx
index d3d45993260..cd77a9b4322 100644
--- a/docs/docs/modules/data_connection/text_embedding/index.mdx
+++ b/docs/docs/modules/data_connection/text_embedding/index.mdx
@@ -4,6 +4,10 @@ sidebar_class_name: hidden
 ---
 
 # Text embedding models
 
+
+
+
+
 :::info
 Head to [Integrations](/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers.
 :::
diff --git a/docs/docs/modules/data_connection/vectorstores/index.mdx b/docs/docs/modules/data_connection/vectorstores/index.mdx
index 2ffebd37956..ba4f1e0faae 100644
--- a/docs/docs/modules/data_connection/vectorstores/index.mdx
+++ b/docs/docs/modules/data_connection/vectorstores/index.mdx
@@ -4,6 +4,12 @@ sidebar_class_name: hidden
 ---
 
 # Vector stores
 
+```{=mdx}
+
+ 
+
+```
+
 :::info
 Head to [Integrations](/docs/integrations/vectorstores/) for documentation on built-in integrations with 3rd-party vector stores.
 :::
diff --git a/docs/docs/modules/index.mdx b/docs/docs/modules/index.mdx
index c8992959c9c..baeb3dddebb 100644
--- a/docs/docs/modules/index.mdx
+++ b/docs/docs/modules/index.mdx
@@ -4,6 +4,10 @@ sidebar_class_name: hidden
 
 # Components
 
+
+
+
+
 LangChain provides standard, extendable interfaces and external integrations for the following main components:
 
 ## [Model I/O](/docs/modules/model_io/)
diff --git a/docs/docs/modules/model_io/chat/function_calling.ipynb b/docs/docs/modules/model_io/chat/function_calling.ipynb
index 7b40f158757..252b4a8010a 100644
--- a/docs/docs/modules/model_io/chat/function_calling.ipynb
+++ b/docs/docs/modules/model_io/chat/function_calling.ipynb
@@ -19,6 +19,12 @@
     "# Tool calling\n",
     "\n",
     "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
+    "```{=mdx}\n",
     ":::info\n",
     "We use the term \"tool calling\" interchangeably with \"function calling\". Although\n",
     "function calling is sometimes meant to refer to invocations of a single function,\n",
diff --git a/docs/docs/modules/model_io/chat/index.mdx b/docs/docs/modules/model_io/chat/index.mdx
index 55cebd22b80..84ed334e518 100644
--- a/docs/docs/modules/model_io/chat/index.mdx
+++ b/docs/docs/modules/model_io/chat/index.mdx
@@ -5,6 +5,10 @@ sidebar_class_name: hidden
 
 # Chat Models
 
+
+
+
+
 Chat Models are a core component of LangChain.
 
 A chat model is a language model that uses chat messages as inputs and returns chat messages as outputs (as opposed to using plain text).
diff --git a/docs/docs/modules/model_io/llms/index.mdx b/docs/docs/modules/model_io/llms/index.mdx
index 7965c48d0ee..8d1fa4cb817 100644
--- a/docs/docs/modules/model_io/llms/index.mdx
+++ b/docs/docs/modules/model_io/llms/index.mdx
@@ -5,6 +5,10 @@ sidebar_class_name: hidden
 
 # LLMs
 
+
+
+
+
 Large Language Models (LLMs) are a core component of LangChain.
 LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs. To be specific, this interface is one that takes as input a string and returns a string.
diff --git a/docs/docs/modules/model_io/output_parsers/index.mdx b/docs/docs/modules/model_io/output_parsers/index.mdx
index a2c3246e268..ce2a7377223 100644
--- a/docs/docs/modules/model_io/output_parsers/index.mdx
+++ b/docs/docs/modules/model_io/output_parsers/index.mdx
@@ -5,6 +5,10 @@ sidebar_class_name: hidden
 ---
 
 # Output Parsers
 
+
+
+
+
 Output parsers are responsible for taking the output of an LLM and transforming it to a more suitable format. This is very useful when you are using LLMs to generate any form of structured data.
 
 Besides having a large collection of different types of output parsers, one distinguishing benefit of LangChain OutputParsers is that many of them support streaming.
diff --git a/docs/docs/modules/model_io/prompts/index.mdx b/docs/docs/modules/model_io/prompts/index.mdx
index 459f766e2e0..bf4298a86d8 100644
--- a/docs/docs/modules/model_io/prompts/index.mdx
+++ b/docs/docs/modules/model_io/prompts/index.mdx
@@ -4,6 +4,10 @@ sidebar_class_name: hidden
 ---
 
 # Prompts
 
+
+
+
+
 A prompt for a language model is a set of instructions or input provided by a user to guide the model's response, helping it understand the context and generate relevant and coherent language-based output, such as answering questions, completing sentences,
diff --git a/docs/docs/modules/model_io/prompts/quick_start.ipynb b/docs/docs/modules/model_io/prompts/quick_start.ipynb
index adeae37a438..57af391a80c 100644
--- a/docs/docs/modules/model_io/prompts/quick_start.ipynb
+++ b/docs/docs/modules/model_io/prompts/quick_start.ipynb
@@ -18,6 +18,12 @@
    "source": [
     "# Quick reference\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "Prompt templates are predefined recipes for generating prompts for language models.\n",
     "\n",
     "A template may include instructions, few-shot examples, and specific context and\n",
diff --git a/docs/docs/modules/tools/custom_tools.ipynb b/docs/docs/modules/tools/custom_tools.ipynb
index f46b996f980..9a02d7aafd6 100644
--- a/docs/docs/modules/tools/custom_tools.ipynb
+++ b/docs/docs/modules/tools/custom_tools.ipynb
@@ -7,6 +7,12 @@
    "source": [
     "# Defining Custom Tools\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "When constructing your own agent, you will need to provide it with a list of Tools that it can use. Besides the actual function that is called, the Tool consists of several components:\n",
     "\n",
     "- `name` (str), is required and must be unique within a set of tools provided to an agent\n",
diff --git a/docs/docs/modules/tools/index.ipynb b/docs/docs/modules/tools/index.ipynb
index c6bf74cbac9..35c6ab3b2e3 100644
--- a/docs/docs/modules/tools/index.ipynb
+++ b/docs/docs/modules/tools/index.ipynb
@@ -18,6 +18,12 @@
    "source": [
     "# Tools\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "Tools are interfaces that an agent, chain, or LLM can use to interact with the world.\n",
     "They combine a few things:\n",
     "\n",
diff --git a/docs/docs/use_cases/chatbots/index.ipynb b/docs/docs/use_cases/chatbots/index.ipynb
index c4b52cbc85e..1832938e780 100644
--- a/docs/docs/use_cases/chatbots/index.ipynb
+++ b/docs/docs/use_cases/chatbots/index.ipynb
@@ -15,6 +15,12 @@
    "source": [
     "# Chatbots\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "## Overview\n",
     "\n",
     "Chatbots are one of the most popular use-cases for LLMs. The core features of chatbots are that they can have long-running, stateful conversations and can answer user questions using relevant information.\n",
diff --git a/docs/docs/use_cases/chatbots/quickstart.ipynb b/docs/docs/use_cases/chatbots/quickstart.ipynb
index f48f4d00775..c6a0b9e17fd 100644
--- a/docs/docs/use_cases/chatbots/quickstart.ipynb
+++ b/docs/docs/use_cases/chatbots/quickstart.ipynb
@@ -15,6 +15,12 @@
    "source": [
     "# Quickstart\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/chatbots.ipynb)"
    ]
   },
diff --git a/docs/docs/use_cases/chatbots/tool_usage.ipynb b/docs/docs/use_cases/chatbots/tool_usage.ipynb
index 4002ecd252e..2e8c12cd492 100644
--- a/docs/docs/use_cases/chatbots/tool_usage.ipynb
+++ b/docs/docs/use_cases/chatbots/tool_usage.ipynb
@@ -15,6 +15,12 @@
    "source": [
     "# Tool usage\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.\n",
     "\n",
     "Before reading this guide, we recommend you read both [the chatbot quickstart](/docs/use_cases/chatbots/quickstart) in this section and be familiar with [the documentation on agents](/docs/modules/agents/).\n",
diff --git a/docs/docs/use_cases/extraction/index.ipynb b/docs/docs/use_cases/extraction/index.ipynb
index 78822f39504..75ef2926737 100644
--- a/docs/docs/use_cases/extraction/index.ipynb
+++ b/docs/docs/use_cases/extraction/index.ipynb
@@ -18,6 +18,12 @@
    "source": [
     "## Overview\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "Large Language Models (LLMs) are emerging as an extremely capable technology for powering information extraction applications.\n",
     "\n",
     "Classical solutions to information extraction rely on a combination of people, (many) hand-crafted rules (e.g., regular expressions), and custom fine-tuned ML models.\n",
diff --git a/docs/docs/use_cases/question_answering/chat_history.ipynb b/docs/docs/use_cases/question_answering/chat_history.ipynb
index 93e9a1af1b3..1a039224541 100644
--- a/docs/docs/use_cases/question_answering/chat_history.ipynb
+++ b/docs/docs/use_cases/question_answering/chat_history.ipynb
@@ -17,6 +17,12 @@
    "source": [
     "# Add chat history\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "In many Q&A applications we want to allow the user to have a back-and-forth conversation, meaning the application needs some sort of \"memory\" of past questions and answers, and some logic for incorporating those into its current thinking.\n",
     "\n",
     "In this guide we focus on **adding logic for incorporating historical messages.** Further details on chat history management is [covered here](/docs/expression_language/how_to/message_history).\n",
diff --git a/docs/docs/use_cases/question_answering/index.ipynb b/docs/docs/use_cases/question_answering/index.ipynb
index 0bcca9487a9..b3b8fad7ece 100644
--- a/docs/docs/use_cases/question_answering/index.ipynb
+++ b/docs/docs/use_cases/question_answering/index.ipynb
@@ -15,7 +15,13 @@
    "id": "86fc5bb2-017f-434e-8cd6-53ab214a5604",
    "metadata": {},
    "source": [
-    "# Q&A with RAG"
+    "# Q&A with RAG\n",
+    "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```"
    ]
   },
   {
diff --git a/docs/docs/use_cases/question_answering/quickstart.mdx b/docs/docs/use_cases/question_answering/quickstart.mdx
index fd360cef701..12319bd724b 100644
--- a/docs/docs/use_cases/question_answering/quickstart.mdx
+++ b/docs/docs/use_cases/question_answering/quickstart.mdx
@@ -5,6 +5,10 @@ title: Quickstart
 
 # Quickstart
 
+
+
+
+
 LangChain has a number of components designed to help build question-answering applications, and RAG applications more generally. To familiarize ourselves with these, we’ll build a simple Q&A application
diff --git a/docs/docs/use_cases/sql/agents.ipynb b/docs/docs/use_cases/sql/agents.ipynb
index 065237ea1e0..f877ed372d5 100644
--- a/docs/docs/use_cases/sql/agents.ipynb
+++ b/docs/docs/use_cases/sql/agents.ipynb
@@ -15,6 +15,12 @@
    "source": [
     "# Agents\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "LangChain has a SQL Agent which provides a more flexible way of interacting with SQL Databases than a chain. The main advantages of using the SQL Agent are:\n",
     "\n",
     "- It can answer questions based on the databases' schema as well as on the databases' content (like describing a specific table).\n",
diff --git a/docs/docs/use_cases/sql/index.ipynb b/docs/docs/use_cases/sql/index.ipynb
index 1d80832fb8e..355ba95d43a 100644
--- a/docs/docs/use_cases/sql/index.ipynb
+++ b/docs/docs/use_cases/sql/index.ipynb
@@ -15,6 +15,12 @@
    "source": [
     "# SQL\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "One of the most common types of databases that we can build Q&A systems for are SQL databases. LangChain comes with a number of built-in chains and agents that are compatible with any SQL dialect supported by SQLAlchemy (e.g., MySQL, PostgreSQL, Oracle SQL, Databricks, SQLite). They enable use cases such as:\n",
     "\n",
     "* Generating queries that will be run based on natural language questions,\n",
diff --git a/docs/docs/use_cases/sql/quickstart.ipynb b/docs/docs/use_cases/sql/quickstart.ipynb
index 084fb66fc44..1794b6c1d4d 100644
--- a/docs/docs/use_cases/sql/quickstart.ipynb
+++ b/docs/docs/use_cases/sql/quickstart.ipynb
@@ -15,6 +15,12 @@
    "source": [
     "# Quickstart\n",
     "\n",
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "In this guide we'll go over the basic ways to create a Q&A chain and agent over a SQL database. These systems will allow us to ask a question about the data in a SQL database and get back a natural language answer. The main difference between the two is that our agent can query the database in a loop as many time as it needs to answer the question.\n",
     "\n",
     "## ⚠️ Security note ⚠️\n",
diff --git a/docs/docs/use_cases/summarization.ipynb b/docs/docs/use_cases/summarization.ipynb
index 97caf9d4326..f975108261b 100644
--- a/docs/docs/use_cases/summarization.ipynb
+++ b/docs/docs/use_cases/summarization.ipynb
@@ -16,6 +16,12 @@
    "id": "cf13f702",
    "metadata": {},
    "source": [
+    "```{=mdx}\n",
+    "\n",
+    " \n",
+    "\n",
+    "```\n",
+    "\n",
     "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/v0.1/docs/docs/use_cases/summarization.ipynb)\n",
     "\n",
     "## Use case\n",
@@ -589,9 +595,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "poetry-venv",
+   "display_name": "Python 3",
    "language": "python",
-   "name": "poetry-venv"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {
@@ -603,7 +609,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.1"
+   "version": "3.10.5"
   }
  },
 "nbformat": 4,
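A note on the recurring pattern: every hunk in this patch wraps an MDX component in a pandoc-style raw block (`{=mdx}`) so that the notebook/markdown-to-MDX conversion passes the component through verbatim instead of treating it as literal text. The component markup itself did not survive extraction of this patch (hence the near-empty `+` lines), so the sketch below uses a hypothetical `<Example />` component purely as a stand-in:

````markdown
```{=mdx}
<Example />
```
````

In an `.ipynb` markdown cell the same block appears as JSON string elements (`"```{=mdx}\n"`, `"<Example />\n"`, `"```\n"`), which is why the notebook hunks add the fence one quoted line at a time.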