diff --git a/docs/docs/how_to/agent_executor.ipynb b/docs/docs/how_to/agent_executor.ipynb index 334b1eb872d..657a6e5b37c 100644 --- a/docs/docs/how_to/agent_executor.ipynb +++ b/docs/docs/how_to/agent_executor.ipynb @@ -49,7 +49,6 @@ "\n", "To install LangChain run:\n", "\n", - "```{=mdx}\n", "import Tabs from '@theme/Tabs';\n", "import TabItem from '@theme/TabItem';\n", "import CodeBlock from \"@theme/CodeBlock\";\n", @@ -63,7 +62,6 @@ " \n", "\n", "\n", - "```\n", "\n", "\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n", @@ -270,11 +268,9 @@ "\n", "Next, let's learn how to use a language model to call tools. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/chat_model_caching.ipynb b/docs/docs/how_to/chat_model_caching.ipynb index 37c3cd5d65e..b305223904c 100644 --- a/docs/docs/how_to/chat_model_caching.ipynb +++ b/docs/docs/how_to/chat_model_caching.ipynb @@ -28,11 +28,9 @@ "id": "289b31de", "metadata": {}, "source": [ - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/chat_token_usage_tracking.ipynb b/docs/docs/how_to/chat_token_usage_tracking.ipynb index f469b1bfb71..84948920e8c 100644 --- a/docs/docs/how_to/chat_token_usage_tracking.ipynb +++ b/docs/docs/how_to/chat_token_usage_tracking.ipynb @@ -155,11 +155,9 @@ "\n", "For example, OpenAI will return a message [chunk](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.9` and can be enabled by setting `stream_usage=True`. 
This attribute can also be set when `ChatOpenAI` is instantiated.\n", "\n", - "```{=mdx}\n", ":::note\n", "By default, the last message chunk in a stream will include a `\"finish_reason\"` in the message's `response_metadata` attribute. If we include token usage in streaming mode, an additional chunk containing usage metadata will be added to the end of the stream, such that `\"finish_reason\"` appears on the second to last message chunk.\n", - ":::\n", - "```" + ":::\n" ] }, { diff --git a/docs/docs/how_to/convert_runnable_to_tool.ipynb b/docs/docs/how_to/convert_runnable_to_tool.ipynb index f2575b79011..0497582920f 100644 --- a/docs/docs/how_to/convert_runnable_to_tool.ipynb +++ b/docs/docs/how_to/convert_runnable_to_tool.ipynb @@ -266,11 +266,9 @@ "\n", "We first instantiate a chat model that supports [tool calling](/docs/how_to/tool_calling/):\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/debugging.ipynb b/docs/docs/how_to/debugging.ipynb index d1bc2f1365f..a594ebf1591 100644 --- a/docs/docs/how_to/debugging.ipynb +++ b/docs/docs/how_to/debugging.ipynb @@ -49,13 +49,11 @@ "\n", "Let's suppose we have an agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/dynamic_chain.ipynb b/docs/docs/how_to/dynamic_chain.ipynb index 53790472a4a..ce2b98ef2cd 100644 --- a/docs/docs/how_to/dynamic_chain.ipynb +++ b/docs/docs/how_to/dynamic_chain.ipynb @@ -17,13 +17,11 @@ "\n", "Sometimes we want to construct parts of a chain at runtime, depending on the chain inputs ([routing](/docs/how_to/routing/) is the most common example of this). 
We can create dynamic chains like this using a very useful property of RunnableLambdas, which is that if a RunnableLambda returns a Runnable, that Runnable is itself invoked. Let's see an example.\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/extraction_examples.ipynb b/docs/docs/how_to/extraction_examples.ipynb index d10fa0b8521..4c2c8627a02 100644 --- a/docs/docs/how_to/extraction_examples.ipynb +++ b/docs/docs/how_to/extraction_examples.ipynb @@ -350,14 +350,12 @@ "\n", "Let's select an LLM. Because we are using tool-calling, we will need a model that supports a tool-calling feature. See [this table](/docs/integrations/chat) for available LLMs.\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/extraction_long_text.ipynb b/docs/docs/how_to/extraction_long_text.ipynb index cbd5925bd5a..b4e8b3dbcbc 100644 --- a/docs/docs/how_to/extraction_long_text.ipynb +++ b/docs/docs/how_to/extraction_long_text.ipynb @@ -196,14 +196,12 @@ "\n", "Let's select an LLM. Because we are using tool-calling, we will need a model that supports a tool-calling feature. 
See [this table](/docs/integrations/chat) for available LLMs.\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/extraction_parse.ipynb b/docs/docs/how_to/extraction_parse.ipynb index 3c406ade2ac..597c3ad1bd7 100644 --- a/docs/docs/how_to/extraction_parse.ipynb +++ b/docs/docs/how_to/extraction_parse.ipynb @@ -18,11 +18,9 @@ "\n", "First we select a LLM:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/function_calling.ipynb b/docs/docs/how_to/function_calling.ipynb index 136b5c0d8fa..042b40eae52 100644 --- a/docs/docs/how_to/function_calling.ipynb +++ b/docs/docs/how_to/function_calling.ipynb @@ -17,14 +17,12 @@ "source": [ "# How to do tool/function calling\n", "\n", - "```{=mdx}\n", ":::info\n", "We use the term tool calling interchangeably with function calling. Although\n", "function calling is sometimes meant to refer to invocations of a single function,\n", "we treat all models as though they can return multiple tool or function calls in \n", "each message.\n", ":::\n", - "```\n", "\n", "Tool calling allows a model to respond to a given prompt by generating output that \n", "matches a user-defined schema. 
While the name implies that the model is performing \n", @@ -165,14 +163,12 @@ "source": [ "We can bind them to chat models as follows:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```\n", "\n", "We can use the `bind_tools()` method to handle converting\n", "`Multiply` to a \"tool\" and binding it to the model (i.e.,\n", diff --git a/docs/docs/how_to/message_history.ipynb b/docs/docs/how_to/message_history.ipynb index bb08967f62a..ec843eab8bf 100644 --- a/docs/docs/how_to/message_history.ipynb +++ b/docs/docs/how_to/message_history.ipynb @@ -155,13 +155,11 @@ "\n", "First we construct a runnable (which here accepts a dict as input and returns a message as output):\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/multi_vector.ipynb b/docs/docs/how_to/multi_vector.ipynb index db01daee38a..9c37cbd5aa0 100644 --- a/docs/docs/how_to/multi_vector.ipynb +++ b/docs/docs/how_to/multi_vector.ipynb @@ -246,11 +246,9 @@ "\n", "We construct a simple [chain](/docs/how_to/sequence) that will receive an input [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) object and generate a summary using a LLM.\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/qa_chat_history_how_to.ipynb b/docs/docs/how_to/qa_chat_history_how_to.ipynb index f65ac13cb38..abe688a6b90 100644 --- a/docs/docs/how_to/qa_chat_history_how_to.ipynb +++ b/docs/docs/how_to/qa_chat_history_how_to.ipynb @@ -120,11 +120,9 @@ "id": "646840fb-5212-48ea-8bc7-ec7be5ec727e", "metadata": {}, "source": [ - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/qa_citations.ipynb b/docs/docs/how_to/qa_citations.ipynb index 
1b27659a947..2347b1bdb75 100644 --- a/docs/docs/how_to/qa_citations.ipynb +++ b/docs/docs/how_to/qa_citations.ipynb @@ -67,11 +67,9 @@ "source": [ "Let's first select a LLM:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/qa_per_user.ipynb b/docs/docs/how_to/qa_per_user.ipynb index bf949670701..09e9e2839c1 100644 --- a/docs/docs/how_to/qa_per_user.ipynb +++ b/docs/docs/how_to/qa_per_user.ipynb @@ -124,11 +124,9 @@ "We can now create the chain that we will use to do question-answering over.\n", "\n", "Let's first select a LLM.\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/qa_sources.ipynb b/docs/docs/how_to/qa_sources.ipynb index 6f49fad8d26..756a428def0 100644 --- a/docs/docs/how_to/qa_sources.ipynb +++ b/docs/docs/how_to/qa_sources.ipynb @@ -100,11 +100,9 @@ "\n", "Let's first select a LLM:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/qa_streaming.ipynb b/docs/docs/how_to/qa_streaming.ipynb index 8eba1a8d9ce..ed023809e3e 100644 --- a/docs/docs/how_to/qa_streaming.ipynb +++ b/docs/docs/how_to/qa_streaming.ipynb @@ -93,11 +93,9 @@ "\n", "Let's first select a LLM:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/sequence.ipynb b/docs/docs/how_to/sequence.ipynb index 2420afb057b..8fc7be8d8f6 100644 --- a/docs/docs/how_to/sequence.ipynb +++ b/docs/docs/how_to/sequence.ipynb @@ -37,13 +37,11 @@ "\n", "To show off how this works, let's go through an example. 
We'll walk through a common pattern in LangChain: using a [prompt template](/docs/how_to#prompt-templates) to format input into a [chat model](/docs/how_to#chat-models), and finally converting the chat message output into a string with an [output parser](/docs/how_to#output-parsers).\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/sql_csv.ipynb b/docs/docs/how_to/sql_csv.ipynb index d4a292b43af..887856daa2d 100644 --- a/docs/docs/how_to/sql_csv.ipynb +++ b/docs/docs/how_to/sql_csv.ipynb @@ -167,11 +167,9 @@ "source": [ "And create a [SQL agent](/docs/tutorials/sql_qa) to interact with it:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/sql_large_db.ipynb b/docs/docs/how_to/sql_large_db.ipynb index 5e27faa8f8e..53f4bf6224d 100644 --- a/docs/docs/how_to/sql_large_db.ipynb +++ b/docs/docs/how_to/sql_large_db.ipynb @@ -94,11 +94,9 @@ "\n", "One easy and reliable way to do this is using [tool-calling](/docs/how_to/tool_calling). Below, we show how we can use this feature to obtain output conforming to a desired format (in this case, a list of table names). 
We use the chat model's `.bind_tools` method to bind a tool in Pydantic format, and feed this into an output parser to reconstruct the object from the model's response.\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/sql_prompting.ipynb b/docs/docs/how_to/sql_prompting.ipynb index 990087edffe..0cd1a1c2626 100644 --- a/docs/docs/how_to/sql_prompting.ipynb +++ b/docs/docs/how_to/sql_prompting.ipynb @@ -125,11 +125,9 @@ "source": [ "For example, using our current DB we can see that we'll get a SQLite-specific prompt.\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/sql_query_checking.ipynb b/docs/docs/how_to/sql_query_checking.ipynb index f4205c3f141..e15609d7ba4 100644 --- a/docs/docs/how_to/sql_query_checking.ipynb +++ b/docs/docs/how_to/sql_query_checking.ipynb @@ -91,11 +91,9 @@ "\n", "Perhaps the simplest strategy is to ask the model itself to check the original query for common mistakes. Suppose we have the following SQL query chain:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/streaming.ipynb b/docs/docs/how_to/streaming.ipynb index cc91cccb7d6..02b6e63466f 100644 --- a/docs/docs/how_to/streaming.ipynb +++ b/docs/docs/how_to/streaming.ipynb @@ -67,13 +67,11 @@ "\n", "We will show examples of streaming using a chat model. 
Choose one from the options below:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/structured_output.ipynb b/docs/docs/how_to/structured_output.ipynb index a1d5c760bdc..8a691a6e17a 100644 --- a/docs/docs/how_to/structured_output.ipynb +++ b/docs/docs/how_to/structured_output.ipynb @@ -47,13 +47,11 @@ "\n", "As an example, let's get a model to generate a joke and separate the setup from the punchline:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/summarize_map_reduce.ipynb b/docs/docs/how_to/summarize_map_reduce.ipynb index 72c50b81d0b..9cf7d872b54 100644 --- a/docs/docs/how_to/summarize_map_reduce.ipynb +++ b/docs/docs/how_to/summarize_map_reduce.ipynb @@ -41,13 +41,11 @@ "## Load chat model\n", "\n", "Let's first load a chat model:\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/summarize_refine.ipynb b/docs/docs/how_to/summarize_refine.ipynb index 4364785217e..648d815d2a0 100644 --- a/docs/docs/how_to/summarize_refine.ipynb +++ b/docs/docs/how_to/summarize_refine.ipynb @@ -46,13 +46,11 @@ "## Load chat model\n", "\n", "Let's first load a chat model:\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/summarize_stuff.ipynb b/docs/docs/how_to/summarize_stuff.ipynb index 415895a8c93..42d85d106bd 100644 --- a/docs/docs/how_to/summarize_stuff.ipynb +++ b/docs/docs/how_to/summarize_stuff.ipynb @@ -31,13 +31,11 @@ "## Load chat model\n", "\n", "Let's first load a chat model:\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/tool_artifacts.ipynb b/docs/docs/how_to/tool_artifacts.ipynb index 
eebc3372cd5..9687834cf70 100644 --- a/docs/docs/how_to/tool_artifacts.ipynb +++ b/docs/docs/how_to/tool_artifacts.ipynb @@ -145,13 +145,11 @@ "\n", "With a [tool-calling model](/docs/how_to/tool_calling/), we can easily use a model to call our Tool and generate ToolMessages:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/tool_calling.ipynb b/docs/docs/how_to/tool_calling.ipynb index 214d99557ed..c812617d194 100644 --- a/docs/docs/how_to/tool_calling.ipynb +++ b/docs/docs/how_to/tool_calling.ipynb @@ -196,14 +196,12 @@ "To actually bind those schemas to a chat model, we'll use the `.bind_tools()` method. This handles converting\n", "the `add` and `multiply` schemas to the proper format for the model. The tool schema will then be passed in each time the model is invoked.\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/tool_results_pass_to_model.ipynb b/docs/docs/how_to/tool_results_pass_to_model.ipynb index 87f451e7367..78e391e4359 100644 --- a/docs/docs/how_to/tool_results_pass_to_model.ipynb +++ b/docs/docs/how_to/tool_results_pass_to_model.ipynb @@ -29,14 +29,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/tool_runtime.ipynb b/docs/docs/how_to/tool_runtime.ipynb index 9841d9d2aa4..fbbd2beba90 100644 --- a/docs/docs/how_to/tool_runtime.ipynb +++ b/docs/docs/how_to/tool_runtime.ipynb @@ -42,14 +42,12 @@ "source": [ "We can bind them to chat models as follows:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/docs/how_to/tool_stream_events.ipynb b/docs/docs/how_to/tool_stream_events.ipynb index 84809f1230f..6167e158d6c 100644 
--- a/docs/docs/how_to/tool_stream_events.ipynb +++ b/docs/docs/how_to/tool_stream_events.ipynb @@ -31,11 +31,9 @@ "\n", "Say you have a custom tool that calls a chain that condenses its input by prompting a chat model to return only 10 words, then reversing the output. First, define it in a naive way:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/tools_chain.ipynb b/docs/docs/how_to/tools_chain.ipynb index efa46fd7192..78f88c84715 100644 --- a/docs/docs/how_to/tools_chain.ipynb +++ b/docs/docs/how_to/tools_chain.ipynb @@ -147,11 +147,9 @@ "\n", "First we'll define our model and tools. We'll start with just a single tool, `multiply`.\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/tools_error.ipynb b/docs/docs/how_to/tools_error.ipynb index 08e38e9c6f0..989236b5efc 100644 --- a/docs/docs/how_to/tools_error.ipynb +++ b/docs/docs/how_to/tools_error.ipynb @@ -79,11 +79,9 @@ "\n", "Suppose we have the following (dummy) tool and tool-calling chain. 
We'll make our tool intentionally convoluted to try and trip up the model.\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/tools_human.ipynb b/docs/docs/how_to/tools_human.ipynb index 5aa591a71ba..73b24d97445 100644 --- a/docs/docs/how_to/tools_human.ipynb +++ b/docs/docs/how_to/tools_human.ipynb @@ -77,11 +77,9 @@ "id": "43721981-4595-4721-bea0-5c67696426d3", "metadata": {}, "source": [ - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/how_to/tools_prompting.ipynb b/docs/docs/how_to/tools_prompting.ipynb index 6c3c894c94f..03cc039e60f 100644 --- a/docs/docs/how_to/tools_prompting.ipynb +++ b/docs/docs/how_to/tools_prompting.ipynb @@ -89,11 +89,9 @@ "source": [ "You can select any of the given models for this how-to guide. Keep in mind that most of these models already [support native tool calling](/docs/integrations/chat/), so using the prompting strategy shown here doesn't make sense for these models, and instead you should follow the [how to use a chat model to call tools](/docs/how_to/tool_calling) guide.\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```\n", "\n", "To illustrate the idea, we'll use `phi3` via Ollama, which does **NOT** have native support for tool calling. If you'd like to use `Ollama` as well follow [these instructions](/docs/integrations/chat/ollama/)." 
] diff --git a/docs/docs/integrations/retrievers/arxiv.ipynb b/docs/docs/integrations/retrievers/arxiv.ipynb index a334a2bc1eb..9cf3b40efaa 100644 --- a/docs/docs/integrations/retrievers/arxiv.ipynb +++ b/docs/docs/integrations/retrievers/arxiv.ipynb @@ -215,11 +215,9 @@ "\n", "We will need a LLM or chat model:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/retrievers/box.ipynb b/docs/docs/integrations/retrievers/box.ipynb index 17fbf336f1d..8b25c1089e8 100644 --- a/docs/docs/integrations/retrievers/box.ipynb +++ b/docs/docs/integrations/retrievers/box.ipynb @@ -248,11 +248,9 @@ "\n", "We will need a LLM or chat model:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/retrievers/wikipedia.ipynb b/docs/docs/integrations/retrievers/wikipedia.ipynb index 239bf0d67a9..53c393e503b 100644 --- a/docs/docs/integrations/retrievers/wikipedia.ipynb +++ b/docs/docs/integrations/retrievers/wikipedia.ipynb @@ -152,11 +152,9 @@ "\n", "We will need a LLM or chat model:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/tools/chatgpt_plugins.ipynb b/docs/docs/integrations/tools/chatgpt_plugins.ipynb index 08a9c5924ed..809a13869e2 100644 --- a/docs/docs/integrations/tools/chatgpt_plugins.ipynb +++ b/docs/docs/integrations/tools/chatgpt_plugins.ipynb @@ -17,13 +17,11 @@ "source": [ "# ChatGPT Plugins\n", "\n", - "```{=mdx}\n", ":::warning Deprecated\n", "\n", "OpenAI has [deprecated plugins](https://openai.com/index/chatgpt-plugins/).\n", "\n", ":::\n", - "```\n", "\n", "This example shows how to use ChatGPT Plugins within LangChain abstractions.\n", "\n", diff --git a/docs/docs/integrations/tools/github.ipynb b/docs/docs/integrations/tools/github.ipynb index 
a0f61cc7aeb..cca81b1777e 100644 --- a/docs/docs/integrations/tools/github.ipynb +++ b/docs/docs/integrations/tools/github.ipynb @@ -208,11 +208,9 @@ "\n", "We will need a LLM or chat model:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/tools/gmail.ipynb b/docs/docs/integrations/tools/gmail.ipynb index 5ab7825cc21..b105e0ae575 100644 --- a/docs/docs/integrations/tools/gmail.ipynb +++ b/docs/docs/integrations/tools/gmail.ipynb @@ -159,11 +159,9 @@ "\n", "We will need a LLM or chat model:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/tools/sql_database.ipynb b/docs/docs/integrations/tools/sql_database.ipynb index 251409ddf9f..ef3d63e2b26 100644 --- a/docs/docs/integrations/tools/sql_database.ipynb +++ b/docs/docs/integrations/tools/sql_database.ipynb @@ -133,11 +133,9 @@ "source": [ "We will also need a LLM or chat model:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/tools/tavily_search.ipynb b/docs/docs/integrations/tools/tavily_search.ipynb index a8c6de9e1cd..7a89acb94a3 100644 --- a/docs/docs/integrations/tools/tavily_search.ipynb +++ b/docs/docs/integrations/tools/tavily_search.ipynb @@ -263,11 +263,9 @@ "\n", "We can use our tool in a chain by first binding it to a [tool-calling model](/docs/how_to/tool_calling/) and then calling it:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/vectorstores/astradb.ipynb b/docs/docs/integrations/vectorstores/astradb.ipynb index 17969ef5ee7..36c9e072741 100644 --- a/docs/docs/integrations/vectorstores/astradb.ipynb +++ b/docs/docs/integrations/vectorstores/astradb.ipynb @@ -107,11 +107,9 @@ "\n", 
"Below, we instantiate our vector store using the explicit embedding class:\n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/vectorstores/chroma.ipynb b/docs/docs/integrations/vectorstores/chroma.ipynb index b03f8dba701..d5c2133b7c6 100644 --- a/docs/docs/integrations/vectorstores/chroma.ipynb +++ b/docs/docs/integrations/vectorstores/chroma.ipynb @@ -66,11 +66,9 @@ "\n", "Below is a basic initialization, including the use of a directory to save the data locally.\n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/vectorstores/clickhouse.ipynb b/docs/docs/integrations/vectorstores/clickhouse.ipynb index 521eb3c2f97..2631ab40a91 100644 --- a/docs/docs/integrations/vectorstores/clickhouse.ipynb +++ b/docs/docs/integrations/vectorstores/clickhouse.ipynb @@ -80,11 +80,9 @@ "source": [ "## Instantiation\n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/vectorstores/couchbase.ipynb b/docs/docs/integrations/vectorstores/couchbase.ipynb index 2e28d6d78a6..c80fbbd8914 100644 --- a/docs/docs/integrations/vectorstores/couchbase.ipynb +++ b/docs/docs/integrations/vectorstores/couchbase.ipynb @@ -167,11 +167,9 @@ "\n", "Below, we create the vector store object with the cluster information and the search index name. 
\n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/vectorstores/databricks_vector_search.ipynb b/docs/docs/integrations/vectorstores/databricks_vector_search.ipynb index 74d7390e3d0..adfe23173b4 100644 --- a/docs/docs/integrations/vectorstores/databricks_vector_search.ipynb +++ b/docs/docs/integrations/vectorstores/databricks_vector_search.ipynb @@ -242,11 +242,9 @@ "you also need to provide the embedding model and text column in your source table to\n", "use for the embeddings:\n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/vectorstores/elasticsearch.ipynb b/docs/docs/integrations/vectorstores/elasticsearch.ipynb index b13f40a36d7..f2be67d8d26 100644 --- a/docs/docs/integrations/vectorstores/elasticsearch.ipynb +++ b/docs/docs/integrations/vectorstores/elasticsearch.ipynb @@ -80,11 +80,9 @@ "### Running with Authentication\n", "For production, we recommend you run with security enabled. 
To connect with login credentials, you can use the parameters `es_api_key` or `es_user` and `es_password`.\n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/vectorstores/faiss.ipynb b/docs/docs/integrations/vectorstores/faiss.ipynb index 09a1a759b59..854e5a25dcb 100644 --- a/docs/docs/integrations/vectorstores/faiss.ipynb +++ b/docs/docs/integrations/vectorstores/faiss.ipynb @@ -66,11 +66,9 @@ "source": [ "## Initialization\n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/vectorstores/milvus.ipynb b/docs/docs/integrations/vectorstores/milvus.ipynb index 984a3fa6c6d..2dfbbbb2b6e 100644 --- a/docs/docs/integrations/vectorstores/milvus.ipynb +++ b/docs/docs/integrations/vectorstores/milvus.ipynb @@ -41,11 +41,9 @@ "\n", "## Initialization\n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb b/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb index 2b49b21fb94..004da71dac2 100644 --- a/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb +++ b/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb @@ -88,11 +88,9 @@ "source": [ "## Initialization\n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/vectorstores/pgvector.ipynb b/docs/docs/integrations/vectorstores/pgvector.ipynb index db6979de967..afbd082aa1a 100644 --- a/docs/docs/integrations/vectorstores/pgvector.ipynb +++ b/docs/docs/integrations/vectorstores/pgvector.ipynb @@ -92,11 +92,9 @@ "source": [ "## Instantiation\n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git 
a/docs/docs/integrations/vectorstores/pinecone.ipynb b/docs/docs/integrations/vectorstores/pinecone.ipynb index eb30997458e..ecdf557f3d5 100644 --- a/docs/docs/integrations/vectorstores/pinecone.ipynb +++ b/docs/docs/integrations/vectorstores/pinecone.ipynb @@ -130,11 +130,9 @@ "source": [ "Now that our Pinecone index is set up, we can initialize our vector store. \n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/vectorstores/qdrant.ipynb b/docs/docs/integrations/vectorstores/qdrant.ipynb index 6762389edb6..0ab58c72850 100644 --- a/docs/docs/integrations/vectorstores/qdrant.ipynb +++ b/docs/docs/integrations/vectorstores/qdrant.ipynb @@ -77,11 +77,9 @@ "For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook.\n", "\n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/integrations/vectorstores/redis.ipynb b/docs/docs/integrations/vectorstores/redis.ipynb index c8e1de0a113..7cd3704b5d7 100644 --- a/docs/docs/integrations/vectorstores/redis.ipynb +++ b/docs/docs/integrations/vectorstores/redis.ipynb @@ -362,11 +362,9 @@ "\n", "Below we will use the `RedisVectorStore.__init__` method using a `RedisConfig` instance.\n", "\n", - "```{=mdx}\n", "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { @@ -884,11 +882,9 @@ "## Chain usage\n", "The code below shows how to use the vector store as a retriever in a simple RAG chain:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/tutorials/agents.ipynb b/docs/docs/tutorials/agents.ipynb index 8a0fb964ad7..6f688255e56 100644 --- a/docs/docs/tutorials/agents.ipynb +++ 
b/docs/docs/tutorials/agents.ipynb @@ -223,11 +223,9 @@ "\n", "Next, let's learn how to use a language model to call tools. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/tutorials/chatbot.ipynb b/docs/docs/tutorials/chatbot.ipynb index c05db80cb74..0740f6d3a82 100644 --- a/docs/docs/tutorials/chatbot.ipynb +++ b/docs/docs/tutorials/chatbot.ipynb @@ -61,7 +61,6 @@ "\n", "To install LangChain run:\n", "\n", - "```{=mdx}\n", "import Tabs from '@theme/Tabs';\n", "import TabItem from '@theme/TabItem';\n", "import CodeBlock from \"@theme/CodeBlock\";\n", @@ -75,7 +74,6 @@ " \n", "\n", "\n", - "```\n", "\n", "\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n", @@ -107,11 +105,9 @@ "\n", "First up, let's learn how to use a language model by itself. 
LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/tutorials/extraction.ipynb b/docs/docs/tutorials/extraction.ipynb index fe73cebdf76..3fa1f7cabfa 100644 --- a/docs/docs/tutorials/extraction.ipynb +++ b/docs/docs/tutorials/extraction.ipynb @@ -51,7 +51,6 @@ "\n", "To install LangChain run:\n", "\n", - "```{=mdx}\n", "import Tabs from '@theme/Tabs';\n", "import TabItem from '@theme/TabItem';\n", "import CodeBlock from \"@theme/CodeBlock\";\n", @@ -65,7 +64,6 @@ " \n", "\n", "\n", - "```\n", "\n", "\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n", diff --git a/docs/docs/tutorials/llm_chain.ipynb b/docs/docs/tutorials/llm_chain.ipynb index ef6c92e6561..2f03745f9cc 100644 --- a/docs/docs/tutorials/llm_chain.ipynb +++ b/docs/docs/tutorials/llm_chain.ipynb @@ -45,7 +45,6 @@ "\n", "To install LangChain run:\n", "\n", - "```{=mdx}\n", "import Tabs from '@theme/Tabs';\n", "import TabItem from '@theme/TabItem';\n", "import CodeBlock from \"@theme/CodeBlock\";\n", @@ -59,7 +58,6 @@ " \n", "\n", "\n", - "```\n", "\n", "\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n", @@ -97,11 +95,9 @@ "\n", "First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/tutorials/pdf_qa.ipynb b/docs/docs/tutorials/pdf_qa.ipynb index 294562b7daf..74247cfb1be 100644 --- a/docs/docs/tutorials/pdf_qa.ipynb +++ b/docs/docs/tutorials/pdf_qa.ipynb @@ -119,11 +119,9 @@ "\n", "Next, you'll prepare the loaded documents for later retrieval. 
Using a [text splitter](/docs/concepts/#text-splitters), you'll split your loaded documents into smaller documents that can more easily fit into an LLM's context window, then load them into a [vector store](/docs/concepts/#vector-stores). You can then create a [retriever](/docs/concepts/#retrievers) from the vector store for use in our RAG chain:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/tutorials/qa_chat_history.ipynb b/docs/docs/tutorials/qa_chat_history.ipynb index 183db2dee6f..ec99dc4c543 100644 --- a/docs/docs/tutorials/qa_chat_history.ipynb +++ b/docs/docs/tutorials/qa_chat_history.ipynb @@ -133,11 +133,9 @@ "id": "646840fb-5212-48ea-8bc7-ec7be5ec727e", "metadata": {}, "source": [ - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/tutorials/rag.ipynb b/docs/docs/tutorials/rag.ipynb index e1419e1b29f..edef518c05d 100644 --- a/docs/docs/tutorials/rag.ipynb +++ b/docs/docs/tutorials/rag.ipynb @@ -129,11 +129,9 @@ "We can create a simple indexing pipeline and RAG chain to do this in ~20\n", "lines of code:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { @@ -620,12 +618,10 @@ "We’ll use the gpt-4o-mini OpenAI chat model, but any LangChain `LLM`\n", "or `ChatModel` could be substituted in.\n", "\n", - "```{=mdx}\n", "\n", - "```\n", "\n", "We’ll use a prompt for RAG that is checked into the LangChain prompt hub\n", "([here](https://smith.langchain.com/hub/rlm/rag-prompt))." 
@@ -959,7 +955,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3 (ipykernel)", + "display_name": ".venv", "language": "python", "name": "python3" }, @@ -973,7 +969,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.5" + "version": "3.11.4" } }, "nbformat": 4, diff --git a/docs/docs/tutorials/retrievers.ipynb b/docs/docs/tutorials/retrievers.ipynb index 34d12c65cd3..acec0cc303f 100644 --- a/docs/docs/tutorials/retrievers.ipynb +++ b/docs/docs/tutorials/retrievers.ipynb @@ -27,7 +27,6 @@ "\n", "This tutorial requires the `langchain`, `langchain-chroma`, and `langchain-openai` packages:\n", "\n", - "```{=mdx}\n", "import Tabs from '@theme/Tabs';\n", "import TabItem from '@theme/TabItem';\n", "import CodeBlock from \"@theme/CodeBlock\";\n", @@ -41,7 +40,6 @@ " \n", "\n", "\n", - "```\n", "\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n", "\n", @@ -389,11 +387,9 @@ "\n", "Retrievers can easily be incorporated into more complex applications, such as retrieval-augmented generation (RAG) applications that combine a given question with retrieved context into a prompt for an LLM. 
Below we show a minimal example.\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/tutorials/sql_qa.ipynb b/docs/docs/tutorials/sql_qa.ipynb index cc94b5522bf..0bab8a06693 100644 --- a/docs/docs/tutorials/sql_qa.ipynb +++ b/docs/docs/tutorials/sql_qa.ipynb @@ -147,11 +147,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", - "\n", - "```" + "\n" ] }, { diff --git a/docs/docs/tutorials/summarization.ipynb b/docs/docs/tutorials/summarization.ipynb index 2c32f6d9ff3..7e178db1b11 100644 --- a/docs/docs/tutorials/summarization.ipynb +++ b/docs/docs/tutorials/summarization.ipynb @@ -74,7 +74,6 @@ "\n", "To install LangChain run:\n", "\n", - "```{=mdx}\n", "import Tabs from '@theme/Tabs';\n", "import TabItem from '@theme/TabItem';\n", "import CodeBlock from \"@theme/CodeBlock\";\n", @@ -88,7 +87,6 @@ " \n", "\n", "\n", - "```\n", "\n", "\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n", @@ -206,13 +204,11 @@ "source": [ "Let's next select an LLM:\n", "\n", - "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "\n", - "```" + "/>\n" ] }, { diff --git a/docs/scripts/notebook_convert.py b/docs/scripts/notebook_convert.py index 291f46c4f4d..d74bfc5a50e 100644 --- a/docs/scripts/notebook_convert.py +++ b/docs/scripts/notebook_convert.py @@ -13,11 +13,6 @@ from nbconvert.preprocessors import Preprocessor class EscapePreprocessor: def preprocess_cell(self, cell, resources, cell_index): if cell.cell_type == "markdown": - # find all occurrences of ```{=mdx} blocks and remove wrapper - if "```{=mdx}\n" in cell.source: - cell.source = re.sub( - r"```{=mdx}\n(.*?)\n```", r"\1", cell.source, flags=re.DOTALL - ) # rewrite .ipynb links to .md cell.source = re.sub( r"\[([^\]]*)\]\((?![^\)]*//)([^)]*)\.ipynb\)",
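The final hunk deletes the `{=mdx}` unwrapping step from `EscapePreprocessor` in `notebook_convert.py`, which is why every notebook no longer needs the wrapper. For reference, here is a minimal standalone sketch of the transformation that removed code performed; the regex is taken from the deleted lines, while the `FENCE`/`unwrap_mdx` names are illustrative (the fence marker is built from characters so the example can sit inside a Markdown code block):

```python
import re

# The fence marker, constructed so a literal triple-backtick never
# appears in this example's source.
FENCE = "`" * 3

# Same pattern as the deleted preprocessor code: match a {=mdx}-tagged
# fence and capture its body; DOTALL lets "." span newlines.
MDX_BLOCK = re.compile(FENCE + r"\{=mdx\}\n(.*?)\n" + FENCE, re.DOTALL)


def unwrap_mdx(source: str) -> str:
    """Strip {=mdx} wrappers from a markdown cell, keeping the inner MDX."""
    return MDX_BLOCK.sub(r"\1", source)


cell = (
    FENCE + "{=mdx}\n"
    'import ChatModelTabs from "@theme/ChatModelTabs";\n'
    + FENCE
)
# → import ChatModelTabs from "@theme/ChatModelTabs";
print(unwrap_mdx(cell))
```

Because `(.*?)` is non-greedy, each wrapper is unwrapped independently when a cell contains several `{=mdx}` blocks; text outside any wrapper passes through unchanged.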