docs: fix mdx codefences (#26802)

```
git grep -l -E '"```\{=mdx\}\\n",' | xargs perl -0777 -i -pe 's/"```\{=mdx\}\\n",\n    (\W.*?),\n\s*"```\\?n?"/$1/s'
```
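For readers who prefer Python, the same rewrite can be sketched with an equivalent regular expression. This is an illustrative approximation of the perl one-liner above, not the exact command the commit ran; `strip_mdx_fences`, `MDX_FENCE`, and the simplified capture group (the perl version anchors on `\W`) are hypothetical names and choices:

```python
import re

# Simplified port of the perl substitution: drop a '"```{=mdx}\n",' line and
# the matching closing '"```"' line, keeping the JSON-encoded lines between
# them (the trailing comma of the last kept line is dropped along the way,
# which is why the diffs below also show a -/+ pair on the final line).
MDX_FENCE = re.compile(
    r'"```\{=mdx\}\\n",\n\s*(.*?),\n\s*"```(?:\\n)?"',
    re.S,
)

def strip_mdx_fences(raw: str) -> str:
    """Return notebook JSON text with {=mdx} code fences unwrapped."""
    return MDX_FENCE.sub(r"\1", raw)

# A markdown cell's "source" array as it appears in the raw .ipynb file:
sample = (
    '"source": [\n'
    '"```{=mdx}\\n",\n'
    '"import ChatModelTabs from \\"@theme/ChatModelTabs\\";\\n",\n'
    '"\\n",\n'
    '"<ChatModelTabs customVarName=\\"llm\\" />\\n",\n'
    '"```"\n'
    ']'
)

cleaned = strip_mdx_fences(sample)
```

After the substitution the fence markers are gone and the `ChatModelTabs` import plus the component line remain, which matches the per-file deletions listed in the diff.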
Erick Friis 2024-09-23 23:06:13 -07:00 committed by GitHub
parent 35081d2765
commit e40a2b8bbf
GPG Key ID: B5690EEEBB952194
68 changed files with 66 additions and 221 deletions

View File

@@ -49,7 +49,6 @@
 "\n",
 "To install LangChain run:\n",
 "\n",
-"```{=mdx}\n",
 "import Tabs from '@theme/Tabs';\n",
 "import TabItem from '@theme/TabItem';\n",
 "import CodeBlock from \"@theme/CodeBlock\";\n",
@@ -63,7 +62,6 @@
 " </TabItem>\n",
 "</Tabs>\n",
 "\n",
-"```\n",
 "\n",
 "\n",
 "For more details, see our [Installation guide](/docs/how_to/installation).\n",
@@ -270,11 +268,9 @@
 "\n",
 "Next, let's learn how to use a language model by to call tools. LangChain supports many different language models that you can use interchangably - select the one you want to use below!\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n",
-"```"
+"<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n"
 ]
 },
 {

View File

@@ -28,11 +28,9 @@
 "id": "289b31de",
 "metadata": {},
 "source": [
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -155,11 +155,9 @@
 "\n",
 "For example, OpenAI will return a message [chunk](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.9` and can be enabled by setting `stream_usage=True`. This attribute can also be set when `ChatOpenAI` is instantiated.\n",
 "\n",
-"```{=mdx}\n",
 ":::note\n",
 "By default, the last message chunk in a stream will include a `\"finish_reason\"` in the message's `response_metadata` attribute. If we include token usage in streaming mode, an additional chunk containing usage metadata will be added to the end of the stream, such that `\"finish_reason\"` appears on the second to last message chunk.\n",
-":::\n",
-"```"
+":::\n"
 ]
 },
 {

View File

@@ -266,11 +266,9 @@
 "\n",
 "We first instantiate a chat model that supports [tool calling](/docs/how_to/tool_calling/):\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -49,13 +49,11 @@
 "\n",
 "Let's suppose we have an agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -17,13 +17,11 @@
 "\n",
 "Sometimes we want to construct parts of a chain at runtime, depending on the chain inputs ([routing](/docs/how_to/routing/) is the most common example of this). We can create dynamic chains like this using a very useful property of RunnableLambda's, which is that if a RunnableLambda returns a Runnable, that Runnable is itself invoked. Let's see an example.\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -350,14 +350,12 @@
 "\n",
 "Let's select an LLM. Because we are using tool-calling, we will need a model that supports a tool-calling feature. See [this table](/docs/integrations/chat) for available LLMs.\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
 " openaiParams={`model=\"gpt-4-0125-preview\", temperature=0`}\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -196,14 +196,12 @@
 "\n",
 "Let's select an LLM. Because we are using tool-calling, we will need a model that supports a tool-calling feature. See [this table](/docs/integrations/chat) for available LLMs.\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
 " openaiParams={`model=\"gpt-4-0125-preview\", temperature=0`}\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -18,11 +18,9 @@
 "\n",
 "First we select a LLM:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"model\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"model\" />\n"
 ]
 },
 {

View File

@@ -17,14 +17,12 @@
 "source": [
 "# How to do tool/function calling\n",
 "\n",
-"```{=mdx}\n",
 ":::info\n",
 "We use the term tool calling interchangeably with function calling. Although\n",
 "function calling is sometimes meant to refer to invocations of a single function,\n",
 "we treat all models as though they can return multiple tool or function calls in \n",
 "each message.\n",
 ":::\n",
-"```\n",
 "\n",
 "Tool calling allows a model to respond to a given prompt by generating output that \n",
 "matches a user-defined schema. While the name implies that the model is performing \n",
@@ -165,14 +163,12 @@
 "source": [
 "We can bind them to chat models as follows:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
 " fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
 "/>\n",
-"```\n",
 "\n",
 "We can use the `bind_tools()` method to handle converting\n",
 "`Multiply` to a \"tool\" and binding it to the model (i.e.,\n",

View File

@@ -155,13 +155,11 @@
 "\n",
 "First we construct a runnable (which here accepts a dict as input and returns a message as output):\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -246,11 +246,9 @@
 "\n",
 "We construct a simple [chain](/docs/how_to/sequence) that will receive an input [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) object and generate a summary using a LLM.\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -120,11 +120,9 @@
 "id": "646840fb-5212-48ea-8bc7-ec7be5ec727e",
 "metadata": {},
 "source": [
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -67,11 +67,9 @@
 "source": [
 "Let's first select a LLM:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -124,11 +124,9 @@
 "We can now create the chain that we will use to do question-answering over.\n",
 "\n",
 "Let's first select a LLM.\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -100,11 +100,9 @@
 "\n",
 "Let's first select a LLM:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -93,11 +93,9 @@
 "\n",
 "Let's first select a LLM:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -37,13 +37,11 @@
 "\n",
 "To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a [prompt template](/docs/how_to#prompt-templates) to format input into a [chat model](/docs/how_to#chat-models), and finally converting the chat message output into a string with an [output parser](/docs/how_to#output-parsers).\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"model\"\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -167,11 +167,9 @@
 "source": [
 "And create a [SQL agent](/docs/tutorials/sql_qa) to interact with it:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -94,11 +94,9 @@
 "\n",
 "One easy and reliable way to do this is using [tool-calling](/docs/how_to/tool_calling). Below, we show how we can use this feature to obtain output conforming to a desired format (in this case, a list of table names). We use the chat model's `.bind_tools` method to bind a tool in Pydantic format, and feed this into an output parser to reconstruct the object from the model's response.\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -125,11 +125,9 @@
 "source": [
 "For example, using our current DB we can see that we'll get a SQLite-specific prompt.\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -91,11 +91,9 @@
 "\n",
 "Perhaps the simplest strategy is to ask the model itself to check the original query for common mistakes. Suppose we have the following SQL query chain:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -67,13 +67,11 @@
 "\n",
 "We will show examples of streaming using a chat model. Choose one from the options below:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"model\"\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -47,13 +47,11 @@
 "\n",
 "As an example, let's get a model to generate a joke and separate the setup from the punchline:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -41,13 +41,11 @@
 "## Load chat model\n",
 "\n",
 "Let's first load a chat model:\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -46,13 +46,11 @@
 "## Load chat model\n",
 "\n",
 "Let's first load a chat model:\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -31,13 +31,11 @@
 "## Load chat model\n",
 "\n",
 "Let's first load a chat model:\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -145,13 +145,11 @@
 "\n",
 "With a [tool-calling model](/docs/how_to/tool_calling/), we can easily use a model to call our Tool and generate ToolMessages:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -196,14 +196,12 @@
 "To actually bind those schemas to a chat model, we'll use the `.bind_tools()` method. This handles converting\n",
 "the `add` and `multiply` schemas to the proper format for the model. The tool schema will then be passed it in each time the model is invoked.\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
 " fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -29,14 +29,12 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
 " fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -42,14 +42,12 @@
 "source": [
 "We can bind them to chat models as follows:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs\n",
 " customVarName=\"llm\"\n",
 " fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
-"/>\n",
-"```"
+"/>\n"
 ]
 },
 {

View File

@@ -31,11 +31,9 @@
 "\n",
 "Say you have a custom tool that calls a chain that condenses its input by prompting a chat model to return only 10 words, then reversing the output. First, define it in a naive way:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"model\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"model\" />\n"
 ]
 },
 {

View File

@@ -147,11 +147,9 @@
 "\n",
 "First we'll define our model and tools. We'll start with just a single tool, `multiply`.\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\"/>\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\"/>\n"
 ]
 },
 {

View File

@@ -79,11 +79,9 @@
 "\n",
 "Suppose we have the following (dummy) tool and tool-calling chain. We'll make our tool intentionally convoluted to try and trip up the model.\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\"/>\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\"/>\n"
 ]
 },
 {

View File

@@ -77,11 +77,9 @@
 "id": "43721981-4595-4721-bea0-5c67696426d3",
 "metadata": {},
 "source": [
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\"/>\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\"/>\n"
 ]
 },
 {

View File

@@ -89,11 +89,9 @@
 "source": [
 "You can select any of the given models for this how-to guide. Keep in mind that most of these models already [support native tool calling](/docs/integrations/chat/), so using the prompting strategy shown here doesn't make sense for these models, and instead you should follow the [how to use a chat model to call tools](/docs/how_to/tool_calling) guide.\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
 "<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n",
-"```\n",
 "\n",
 "To illustrate the idea, we'll use `phi3` via Ollama, which does **NOT** have native support for tool calling. If you'd like to use `Ollama` as well follow [these instructions](/docs/integrations/chat/ollama/)."
 ]

View File

@@ -215,11 +215,9 @@
 "\n",
 "We will need a LLM or chat model:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -248,11 +248,9 @@
 "\n",
 "We will need a LLM or chat model:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -152,11 +152,9 @@
 "\n",
 "We will need a LLM or chat model:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -17,13 +17,11 @@
 "source": [
 "# ChatGPT Plugins\n",
 "\n",
-"```{=mdx}\n",
 ":::warning Deprecated\n",
 "\n",
 "OpenAI has [deprecated plugins](https://openai.com/index/chatgpt-plugins/).\n",
 "\n",
 ":::\n",
-"```\n",
 "\n",
 "This example shows how to use ChatGPT Plugins within LangChain abstractions.\n",
 "\n",

View File

@@ -208,11 +208,9 @@
 "\n",
 "We will need a LLM or chat model:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -159,11 +159,9 @@
 "\n",
 "We will need a LLM or chat model:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -133,11 +133,9 @@
 "source": [
 "We will also need a LLM or chat model:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -263,11 +263,9 @@
 "\n",
 "We can use our tool in a chain by first binding it to a [tool-calling model](/docs/how_to/tool_calling/) and then calling it:\n",
 "\n",
-"```{=mdx}\n",
 "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
 "\n",
-"<ChatModelTabs customVarName=\"llm\" />\n",
-"```"
+"<ChatModelTabs customVarName=\"llm\" />\n"
 ]
 },
 {

View File

@@ -107,11 +107,9 @@
 "\n",
 "Below, we instantiate our vector store using the explicit embedding class:\n",
 "\n",
-"```{=mdx}\n",
 "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
 "\n",
-"<EmbeddingTabs/>\n",
-"```"
+"<EmbeddingTabs/>\n"
 ]
 },
 {

View File

@@ -66,11 +66,9 @@
 "\n",
 "Below is a basic initialization, including the use of a directory to save the data locally.\n",
 "\n",
-"```{=mdx}\n",
 "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
 "\n",
-"<EmbeddingTabs/>\n",
-"```"
+"<EmbeddingTabs/>\n"
 ]
 },
 {

View File

@@ -80,11 +80,9 @@
 "source": [
 "## Instantiation\n",
 "\n",
-"```{=mdx}\n",
 "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
 "\n",
-"<EmbeddingTabs/>\n",
-"```"
+"<EmbeddingTabs/>\n"
 ]
 },
 {

View File

@@ -167,11 +167,9 @@
 "\n",
 "Below, we create the vector store object with the cluster information and the search index name. \n",
 "\n",
-"```{=mdx}\n",
 "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
 "\n",
-"<EmbeddingTabs/>\n",
-"```"
+"<EmbeddingTabs/>\n"
 ]
 },
 {

View File

@@ -242,11 +242,9 @@
 "you also need to provide the embedding model and text column in your source table to\n",
 "use for the embeddings:\n",
 "\n",
-"```{=mdx}\n",
 "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
 "\n",
-"<EmbeddingTabs/>\n",
-"```"
+"<EmbeddingTabs/>\n"
 ]
 },
 {

View File

@@ -80,11 +80,9 @@
 "### Running with Authentication\n",
 "For production, we recommend you run with security enabled. To connect with login credentials, you can use the parameters `es_api_key` or `es_user` and `es_password`.\n",
 "\n",
-"```{=mdx}\n",
 "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
 "\n",
-"<EmbeddingTabs/>\n",
-"```"
+"<EmbeddingTabs/>\n"
 ]
 },
 {

View File

@@ -66,11 +66,9 @@
"source": [ "source": [
"## Initialization\n", "## Initialization\n",
"\n", "\n",
"```{=mdx}\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n", "\n",
"<EmbeddingTabs/>\n", "<EmbeddingTabs/>\n"
"```"
] ]
}, },
{ {

View File

@@ -41,11 +41,9 @@
"\n", "\n",
"## Initialization\n", "## Initialization\n",
"\n", "\n",
"```{=mdx}\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n", "\n",
"<EmbeddingTabs/>\n", "<EmbeddingTabs/>\n"
"```"
] ]
}, },
{ {

View File

@@ -88,11 +88,9 @@
"source": [ "source": [
"## Initialization\n", "## Initialization\n",
"\n", "\n",
"```{=mdx}\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n", "\n",
"<EmbeddingTabs/>\n", "<EmbeddingTabs/>\n"
"```"
] ]
}, },
{ {

View File

@@ -92,11 +92,9 @@
"source": [ "source": [
"## Instantiation\n", "## Instantiation\n",
"\n", "\n",
"```{=mdx}\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n", "\n",
"<EmbeddingTabs/>\n", "<EmbeddingTabs/>\n"
"```"
] ]
}, },
{ {

View File

@@ -130,11 +130,9 @@
"source": [ "source": [
"Now that our Pinecone index is set up, we can initialize our vector store. \n", "Now that our Pinecone index is set up, we can initialize our vector store. \n",
"\n", "\n",
"```{=mdx}\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n", "\n",
"<EmbeddingTabs/>\n", "<EmbeddingTabs/>\n"
"```"
] ]
}, },
{ {

View File

@@ -77,11 +77,9 @@
"For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook.\n", "For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook.\n",
"\n", "\n",
"\n", "\n",
"```{=mdx}\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n", "\n",
"<EmbeddingTabs/>\n", "<EmbeddingTabs/>\n"
"```"
] ]
}, },
{ {

View File

@@ -362,11 +362,9 @@
"\n", "\n",
"Below we will use the `RedisVectorStore.__init__` method using a `RedisConfig` instance.\n", "Below we will use the `RedisVectorStore.__init__` method using a `RedisConfig` instance.\n",
"\n", "\n",
"```{=mdx}\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n", "\n",
"<EmbeddingTabs/>\n", "<EmbeddingTabs/>\n"
"```"
] ]
}, },
{ {
@@ -884,11 +882,9 @@
"## Chain usage\n", "## Chain usage\n",
"The code below shows how to use the vector store as a retriever in a simple RAG chain:\n", "The code below shows how to use the vector store as a retriever in a simple RAG chain:\n",
"\n", "\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n", "\n",
"<ChatModelTabs customVarName=\"llm\" />\n", "<ChatModelTabs customVarName=\"llm\" />\n"
"```"
] ]
}, },
{ {

View File

@@ -223,11 +223,9 @@
"\n", "\n",
"Next, let's learn how to use a language model to call tools. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n", "Next, let's learn how to use a language model to call tools. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n",
"\n", "\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n", "\n",
"<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n", "<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n"
"```"
] ]
}, },
{ {

View File

@@ -61,7 +61,6 @@
"\n", "\n",
"To install LangChain run:\n", "To install LangChain run:\n",
"\n", "\n",
"```{=mdx}\n",
"import Tabs from '@theme/Tabs';\n", "import Tabs from '@theme/Tabs';\n",
"import TabItem from '@theme/TabItem';\n", "import TabItem from '@theme/TabItem';\n",
"import CodeBlock from \"@theme/CodeBlock\";\n", "import CodeBlock from \"@theme/CodeBlock\";\n",
@@ -75,7 +74,6 @@
" </TabItem>\n", " </TabItem>\n",
"</Tabs>\n", "</Tabs>\n",
"\n", "\n",
"```\n",
"\n", "\n",
"\n", "\n",
"For more details, see our [Installation guide](/docs/how_to/installation).\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n",
@@ -107,11 +105,9 @@
"\n", "\n",
"First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n", "First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n",
"\n", "\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n", "\n",
"<ChatModelTabs openaiParams={`model=\"gpt-3.5-turbo\"`} />\n", "<ChatModelTabs openaiParams={`model=\"gpt-3.5-turbo\"`} />\n"
"```"
] ]
}, },
{ {

View File

@@ -51,7 +51,6 @@
"\n", "\n",
"To install LangChain run:\n", "To install LangChain run:\n",
"\n", "\n",
"```{=mdx}\n",
"import Tabs from '@theme/Tabs';\n", "import Tabs from '@theme/Tabs';\n",
"import TabItem from '@theme/TabItem';\n", "import TabItem from '@theme/TabItem';\n",
"import CodeBlock from \"@theme/CodeBlock\";\n", "import CodeBlock from \"@theme/CodeBlock\";\n",
@@ -65,7 +64,6 @@
" </TabItem>\n", " </TabItem>\n",
"</Tabs>\n", "</Tabs>\n",
"\n", "\n",
"```\n",
"\n", "\n",
"\n", "\n",
"For more details, see our [Installation guide](/docs/how_to/installation).\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n",

View File

@@ -45,7 +45,6 @@
"\n", "\n",
"To install LangChain run:\n", "To install LangChain run:\n",
"\n", "\n",
"```{=mdx}\n",
"import Tabs from '@theme/Tabs';\n", "import Tabs from '@theme/Tabs';\n",
"import TabItem from '@theme/TabItem';\n", "import TabItem from '@theme/TabItem';\n",
"import CodeBlock from \"@theme/CodeBlock\";\n", "import CodeBlock from \"@theme/CodeBlock\";\n",
@@ -59,7 +58,6 @@
" </TabItem>\n", " </TabItem>\n",
"</Tabs>\n", "</Tabs>\n",
"\n", "\n",
"```\n",
"\n", "\n",
"\n", "\n",
"For more details, see our [Installation guide](/docs/how_to/installation).\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n",
@@ -97,11 +95,9 @@
"\n", "\n",
"First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n", "First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n",
"\n", "\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n", "\n",
"<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n", "<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n"
"```"
] ]
}, },
{ {

View File

@@ -119,11 +119,9 @@
"\n", "\n",
"Next, you'll prepare the loaded documents for later retrieval. Using a [text splitter](/docs/concepts/#text-splitters), you'll split your loaded documents into smaller documents that can more easily fit into an LLM's context window, then load them into a [vector store](/docs/concepts/#vector-stores). You can then create a [retriever](/docs/concepts/#retrievers) from the vector store for use in our RAG chain:\n", "Next, you'll prepare the loaded documents for later retrieval. Using a [text splitter](/docs/concepts/#text-splitters), you'll split your loaded documents into smaller documents that can more easily fit into an LLM's context window, then load them into a [vector store](/docs/concepts/#vector-stores). You can then create a [retriever](/docs/concepts/#retrievers) from the vector store for use in our RAG chain:\n",
"\n", "\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n", "\n",
"<ChatModelTabs customVarName=\"llm\" openaiParams={`model=\"gpt-4o\"`} />\n", "<ChatModelTabs customVarName=\"llm\" openaiParams={`model=\"gpt-4o\"`} />\n"
"```"
] ]
}, },
{ {

View File

@@ -133,11 +133,9 @@
"id": "646840fb-5212-48ea-8bc7-ec7be5ec727e", "id": "646840fb-5212-48ea-8bc7-ec7be5ec727e",
"metadata": {}, "metadata": {},
"source": [ "source": [
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n", "\n",
"<ChatModelTabs customVarName=\"llm\" />\n", "<ChatModelTabs customVarName=\"llm\" />\n"
"```"
] ]
}, },
{ {

View File

@@ -129,11 +129,9 @@
"We can create a simple indexing pipeline and RAG chain to do this in ~20\n", "We can create a simple indexing pipeline and RAG chain to do this in ~20\n",
"lines of code:\n", "lines of code:\n",
"\n", "\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n", "\n",
"<ChatModelTabs customVarName=\"llm\" />\n", "<ChatModelTabs customVarName=\"llm\" />\n"
"```"
] ]
}, },
{ {
@@ -620,12 +618,10 @@
"We'll use the gpt-4o-mini OpenAI chat model, but any LangChain `LLM`\n", "We'll use the gpt-4o-mini OpenAI chat model, but any LangChain `LLM`\n",
"or `ChatModel` could be substituted in.\n", "or `ChatModel` could be substituted in.\n",
"\n", "\n",
"```{=mdx}\n",
"<ChatModelTabs\n", "<ChatModelTabs\n",
" customVarName=\"llm\"\n", " customVarName=\"llm\"\n",
" anthropicParams={`model=\"claude-3-sonnet-20240229\", temperature=0.2, max_tokens=1024`}\n", " anthropicParams={`model=\"claude-3-sonnet-20240229\", temperature=0.2, max_tokens=1024`}\n",
"/>\n", "/>\n",
"```\n",
"\n", "\n",
"We'll use a prompt for RAG that is checked into the LangChain prompt hub\n", "We'll use a prompt for RAG that is checked into the LangChain prompt hub\n",
"([here](https://smith.langchain.com/hub/rlm/rag-prompt))." "([here](https://smith.langchain.com/hub/rlm/rag-prompt))."
@@ -959,7 +955,7 @@
], ],
"metadata": { "metadata": {
"kernelspec": { "kernelspec": {
"display_name": "Python 3 (ipykernel)", "display_name": ".venv",
"language": "python", "language": "python",
"name": "python3" "name": "python3"
}, },
@@ -973,7 +969,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.11.5" "version": "3.11.4"
} }
}, },
"nbformat": 4, "nbformat": 4,

View File

@@ -27,7 +27,6 @@
"\n", "\n",
"This tutorial requires the `langchain`, `langchain-chroma`, and `langchain-openai` packages:\n", "This tutorial requires the `langchain`, `langchain-chroma`, and `langchain-openai` packages:\n",
"\n", "\n",
"```{=mdx}\n",
"import Tabs from '@theme/Tabs';\n", "import Tabs from '@theme/Tabs';\n",
"import TabItem from '@theme/TabItem';\n", "import TabItem from '@theme/TabItem';\n",
"import CodeBlock from \"@theme/CodeBlock\";\n", "import CodeBlock from \"@theme/CodeBlock\";\n",
@@ -41,7 +40,6 @@
" </TabItem>\n", " </TabItem>\n",
"</Tabs>\n", "</Tabs>\n",
"\n", "\n",
"```\n",
"\n", "\n",
"For more details, see our [Installation guide](/docs/how_to/installation).\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n",
"\n", "\n",
@@ -389,11 +387,9 @@
"\n", "\n",
"Retrievers can easily be incorporated into more complex applications, such as retrieval-augmented generation (RAG) applications that combine a given question with retrieved context into a prompt for an LLM. Below we show a minimal example.\n", "Retrievers can easily be incorporated into more complex applications, such as retrieval-augmented generation (RAG) applications that combine a given question with retrieved context into a prompt for an LLM. Below we show a minimal example.\n",
"\n", "\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n", "\n",
"<ChatModelTabs customVarName=\"llm\" />\n", "<ChatModelTabs customVarName=\"llm\" />\n"
"```"
] ]
}, },
{ {

View File

@@ -147,11 +147,9 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n", "\n",
"<ChatModelTabs customVarName=\"llm\" />\n", "<ChatModelTabs customVarName=\"llm\" />\n"
"```"
] ]
}, },
{ {

View File

@@ -74,7 +74,6 @@
"\n", "\n",
"To install LangChain run:\n", "To install LangChain run:\n",
"\n", "\n",
"```{=mdx}\n",
"import Tabs from '@theme/Tabs';\n", "import Tabs from '@theme/Tabs';\n",
"import TabItem from '@theme/TabItem';\n", "import TabItem from '@theme/TabItem';\n",
"import CodeBlock from \"@theme/CodeBlock\";\n", "import CodeBlock from \"@theme/CodeBlock\";\n",
@@ -88,7 +87,6 @@
" </TabItem>\n", " </TabItem>\n",
"</Tabs>\n", "</Tabs>\n",
"\n", "\n",
"```\n",
"\n", "\n",
"\n", "\n",
"For more details, see our [Installation guide](/docs/how_to/installation).\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n",
@@ -206,13 +204,11 @@
"source": [ "source": [
"Let's next select an LLM:\n", "Let's next select an LLM:\n",
"\n", "\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n", "\n",
"<ChatModelTabs\n", "<ChatModelTabs\n",
" customVarName=\"llm\"\n", " customVarName=\"llm\"\n",
"/>\n", "/>\n"
"```"
] ]
}, },
{ {

View File

@@ -13,11 +13,6 @@ from nbconvert.preprocessors import Preprocessor
class EscapePreprocessor(Preprocessor): class EscapePreprocessor(Preprocessor):
def preprocess_cell(self, cell, resources, cell_index): def preprocess_cell(self, cell, resources, cell_index):
if cell.cell_type == "markdown": if cell.cell_type == "markdown":
# find all occurrences of ```{=mdx} blocks and remove wrapper
if "```{=mdx}\n" in cell.source:
cell.source = re.sub(
r"```{=mdx}\n(.*?)\n```", r"\1", cell.source, flags=re.DOTALL
)
# rewrite .ipynb links to .md # rewrite .ipynb links to .md
cell.source = re.sub( cell.source = re.sub(
r"\[([^\]]*)\]\((?![^\)]*//)([^)]*)\.ipynb\)", r"\[([^\]]*)\]\((?![^\)]*//)([^)]*)\.ipynb\)",
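Once the `{=mdx}` wrappers are stripped from the source notebooks, the deleted preprocessor branch above becomes a no-op, which is why it can be removed. As a sanity check, the substitution those deleted lines performed can be reproduced in isolation. The sketch below is a standalone reproduction (not part of the repo); the helper name `strip_mdx_wrapper` and the sample cell are illustrative only, while the regex matches the one on the deleted lines:

```python
import re

FENCE = "`" * 3  # literal triple backtick, built here to avoid nesting fences

# Reproduction of the regex from the deleted EscapePreprocessor branch:
# it removes the {=mdx} fence wrapper but keeps the MDX content inside it.
MDX_WRAPPER = re.compile(FENCE + r"{=mdx}\n(.*?)\n" + FENCE, flags=re.DOTALL)


def strip_mdx_wrapper(source: str) -> str:
    """Remove the {=mdx} fence wrapper from a markdown cell, keeping the body."""
    return MDX_WRAPPER.sub(r"\1", source)


cell = (
    FENCE + "{=mdx}\n"
    'import ChatModelTabs from "@theme/ChatModelTabs";\n'
    "\n"
    '<ChatModelTabs customVarName="llm" />\n'
    + FENCE
)
# Prints the import line and the <ChatModelTabs .../> tag without the fence.
print(strip_mdx_wrapper(cell))
```

Cells that never contained a wrapper pass through unchanged, mirroring the behavior the notebooks now rely on after the perl one-liner removed the wrappers at the source.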