docs: make docs mdxv2 compatible (#26798)

prep for docusaurus migration
Erick Friis 2024-09-23 21:24:23 -07:00 committed by GitHub
parent 2a4c5713cd
commit 603d38f06d
34 changed files with 107 additions and 94 deletions
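The escaping applied throughout this commit follows one rule: MDX v2 parses `{...}` as a JSX expression and a bare `<` as the start of a JSX tag, so literal braces and comparison operators in prose must be escaped to survive the Docusaurus build. A hypothetical helper (not part of this commit's tooling) makes the transformation explicit:

```python
# Hypothetical sketch of the escaping this commit applies by hand:
# literal braces become \{ \}, and `<=` becomes the HTML entity &lt;=
# so MDX v2 does not try to parse them as JSX.
def mdx_escape(text: str) -> str:
    return (
        text.replace("{", r"\{")
        .replace("}", r"\}")
        .replace("<=", "&lt;=")
    )

escaped = mdx_escape("python<=3.10 uses {'verify': False}")
# escaped == "python&lt;=3.10 uses \{'verify': False\}"
```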

View File

@ -46,7 +46,7 @@ generate-files:
$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR)
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O $(INTERMEDIATE_DIR)/langserve.md
curl https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md | sed 's/<=/\&lt;=/g' > $(INTERMEDIATE_DIR)/langserve.md
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/
copy-infra:

View File

@ -451,8 +451,7 @@ steps of the thought-process and explore other directions from there. To verify
the effectiveness of the proposed technique, we implemented a ToT-based solver
for the Sudoku Puzzle. Experimental results show that the ToT framework can
significantly increase the success rate of Sudoku puzzle solving. Our
implementation of the ToT-based Sudoku solver is available on GitHub:
\url{https://github.com/jieyilong/tree-of-thought-puzzle-solver}.
implementation of the ToT-based Sudoku solver is available on [GitHub](https://github.com/jieyilong/tree-of-thought-puzzle-solver).
## Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models

View File

@ -732,10 +732,10 @@ of the object.
If you're creating a custom chain or runnable, you need to remember to propagate request time
callbacks to any child objects.
:::important Async in Python<=3.10
:::important Async in Python&lt;=3.10
Any `RunnableLambda`, a `RunnableGenerator`, or `Tool` that invokes other runnables
and is running `async` in python<=3.10, will have to propagate callbacks to child
and is running `async` in python&lt;=3.10, will have to propagate callbacks to child
objects manually. This is because LangChain cannot automatically propagate
callbacks to child objects in this case.
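The manual propagation described above can be sketched outside of LangChain, with a plain dict standing in for `RunnableConfig` and hypothetical `parent`/`child` coroutines:

```python
import asyncio

# A minimal sketch, NOT the LangChain API: a plain dict stands in for
# RunnableConfig. On Python <= 3.10, asyncio does not reliably copy context
# into child coroutines, so the parent must hand its config (and thus its
# callbacks) to each child call explicitly.
async def child(text: str, config: dict) -> str:
    for callback in config.get("callbacks", []):
        callback(text)  # the child only sees callbacks it was handed
    return text[::-1]

async def parent(text: str, config: dict) -> str:
    # Manual propagation: forward the same config object to the child.
    return await child(text, config=config)

events = []
result = asyncio.run(parent("hello", {"callbacks": [events.append]}))
# result == "olleh"; events == ["hello"]
```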

View File

@ -38,9 +38,9 @@
"\n",
"\n",
":::caution COMPATIBILITY\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for astream_events(), to child runnables if you are running async code in python<=3.10. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for astream_events(), to child runnables if you are running async code in python&lt;=3.10. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"\n",
"If you are running python<=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"If you are running python&lt;=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"\n",
"If you are running python>=3.11, the `RunnableConfig` will automatically propagate to child runnables in async environments. However, it is still a good idea to propagate the `RunnableConfig` manually if your code may run in older Python versions.\n",
":::"
@ -115,7 +115,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In python <= 3.10, you must propagate the config manually!"
"In python &lt;= 3.10, you must propagate the config manually!"
]
},
{

View File

@ -39,7 +39,7 @@
"| `AIMessageChunk` / `HumanMessageChunk` / ... | Chunk variant of each type of message. |\n",
"\n",
"\n",
"::: {.callout-note}\n",
":::{.callout-note}\n",
"`ToolMessage` and `FunctionMessage` closely follow OpenAI's `function` and `tool` roles.\n",
"\n",
"This is a rapidly developing field; as more models add function calling capabilities, expect that there will be additions to this schema.\n",

View File

@ -118,7 +118,7 @@
"id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1",
"metadata": {},
"source": [
"::: {.callout-tip}\n",
":::{.callout-tip}\n",
"Note that when composing a RunnableParallel with another Runnable we don't even need to wrap our dictionary in the RunnableParallel class — the type conversion is handled for us. In the context of a chain, these are equivalent:\n",
":::\n",
"\n",

View File

@ -517,7 +517,7 @@
"id": "d59823f5-9b9a-43c5-a213-34644e2f1d3d",
"metadata": {},
"source": [
":::{.callout-note}\n",
":::note\n",
"Because the code above is relying on JSON auto-completion, you may see partial names of countries (e.g., `Sp` and `Spain`), which is not what one would want for an extraction result!\n",
"\n",
"We're focusing on streaming concepts, not necessarily the results of the chains.\n",
@ -689,27 +689,27 @@
"Below is a reference table that shows some events that might be emitted by the various Runnable objects.\n",
"\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"When streaming is implemented properly, the inputs to a runnable will not be known until after the input stream has been entirely consumed. This means that `inputs` will often be included only for `end` events rather than for `start` events.\n",
":::\n",
"\n",
"| event | name | chunk | input | output |\n",
"|----------------------|------------------|---------------------------------|-----------------------------------------------|-------------------------------------------------|\n",
"| on_chat_model_start | [model name] | | {\"messages\": [[SystemMessage, HumanMessage]]} | |\n",
"| on_chat_model_start | [model name] | | \\{\"messages\": [[SystemMessage, HumanMessage]]\\} | |\n",
"| on_chat_model_stream | [model name] | AIMessageChunk(content=\"hello\") | | |\n",
"| on_chat_model_end | [model name] | | {\"messages\": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content=\"hello world\") |\n",
"| on_llm_start | [model name] | | {'input': 'hello'} | |\n",
"| on_chat_model_end | [model name] | | \\{\"messages\": [[SystemMessage, HumanMessage]]\\} | AIMessageChunk(content=\"hello world\") |\n",
"| on_llm_start | [model name] | | \\{'input': 'hello'\\} | |\n",
"| on_llm_stream | [model name] | 'Hello' | | |\n",
"| on_llm_end | [model name] | | 'Hello human!' | |\n",
"| on_chain_start | format_docs | | | |\n",
"| on_chain_stream | format_docs | \"hello world!, goodbye world!\" | | |\n",
"| on_chain_end | format_docs | | [Document(...)] | \"hello world!, goodbye world!\" |\n",
"| on_tool_start | some_tool | | {\"x\": 1, \"y\": \"2\"} | |\n",
"| on_tool_end | some_tool | | | {\"x\": 1, \"y\": \"2\"} |\n",
"| on_retriever_start | [retriever name] | | {\"query\": \"hello\"} | |\n",
"| on_retriever_end | [retriever name] | | {\"query\": \"hello\"} | [Document(...), ..] |\n",
"| on_prompt_start | [template_name] | | {\"question\": \"hello\"} | |\n",
"| on_prompt_end | [template_name] | | {\"question\": \"hello\"} | ChatPromptValue(messages: [SystemMessage, ...]) |"
"| on_tool_start | some_tool | | \\{\"x\": 1, \"y\": \"2\"\\} | |\n",
"| on_tool_end | some_tool | | | \\{\"x\": 1, \"y\": \"2\"\\} |\n",
"| on_retriever_start | [retriever name] | | \\{\"query\": \"hello\"\\} | |\n",
"| on_retriever_end | [retriever name] | | \\{\"query\": \"hello\"\\} | [Document(...), ..] |\n",
"| on_prompt_start | [template_name] | | \\{\"question\": \"hello\"\\} | |\n",
"| on_prompt_end | [template_name] | | \\{\"question\": \"hello\"\\} | ChatPromptValue(messages: [SystemMessage, ...]) |"
]
},
{

View File

@ -20,9 +20,9 @@
"\n",
":::caution Compatibility\n",
"\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for `astream_events()`, to child runnables if you are running `async` code in `python<=3.10`. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for `astream_events()`, to child runnables if you are running `async` code in `python&lt;=3.10`. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"\n",
"If you are running python<=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"If you are running python&lt;=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"\n",
"If you are running python>=3.11, the `RunnableConfig` will automatically propagate to child runnables in async environments. However, it is still a good idea to propagate the `RunnableConfig` manually if your code may run in older Python versions.\n",
"\n",

View File

@ -17,7 +17,7 @@
"source": [
"# [Deprecated] Experimental Anthropic Tools Wrapper\n",
"\n",
"::: {.callout-warning}\n",
":::{.callout-warning}\n",
"\n",
"The Anthropic API officially supports tool-calling so this workaround is no longer needed. Please use [ChatAnthropic](/docs/integrations/chat/anthropic) with `langchain-anthropic>=0.1.15`.\n",
"\n",
@ -118,7 +118,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@ -132,7 +132,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@ -44,7 +44,7 @@
"The output takes the following format:\n",
"\n",
"- pageContent= Individual NFT\n",
"- metadata={'source': '0x1a92f7381b9f03921564a437210bb9396471050c', 'blockchain': 'eth-mainnet', 'tokenId': '0x15'})"
"- metadata=\\{'source': '0x1a92f7381b9f03921564a437210bb9396471050c', 'blockchain': 'eth-mainnet', 'tokenId': '0x15'\\}"
]
},
{

View File

@ -19,7 +19,7 @@
"\n",
"You can also specify a boolean `include_attachments` to include attachments. This is set to False by default; if set to True, all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: `PDF`, `PNG`, `JPEG/JPG`, `SVG`, `Word` and `Excel`.\n",
"\n",
"Hint: `space_key` and `page_id` can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>\n"
"Hint: `space_key` and `page_id` can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/&lt;space_key&gt;/pages/&lt;page_id&gt;\n"
]
},
{

View File

@ -40,9 +40,9 @@
"source": [
"The Figma API Requires an access token, node_ids, and a file key.\n",
"\n",
"The file key can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilename\n",
"The file key can be pulled from the URL. https://www.figma.com/file/\\{filekey\\}/sampleFilename\n",
"\n",
"Node IDs are also available in the URL. Click on anything and look for the '?node-id={node_id}' param.\n",
"Node IDs are also available in the URL. Click on anything and look for the '?node-id=\\{node_id\\}' param.\n",
"\n",
"Access token instructions are in the Figma help center article: https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens"
]

View File

@ -44,7 +44,7 @@
"The output takes the following format:\n",
"\n",
"- pageContent= Individual NFT\n",
"- metadata={'source': 'nft.yearofchef.near', 'blockchain': 'mainnet', 'tokenId': '1846'}"
"- metadata=\\{'source': 'nft.yearofchef.near', 'blockchain': 'mainnet', 'tokenId': '1846'\\}"
]
},
{

View File

@ -47,7 +47,7 @@
"The output takes the following format:\n",
"\n",
"- pageContent= Mongo Document\n",
"- metadata={'database': '[database_name]', 'collection': '[collection_name]'}"
"- metadata=\\{'database': '[database_name]', 'collection': '[collection_name]'\\}"
]
},
{

View File

@ -32,7 +32,7 @@
"source": [
"It's best to store your RSpace API key as an environment variable. \n",
"\n",
" RSPACE_API_KEY=<YOUR_KEY>\n",
" RSPACE_API_KEY=&lt;YOUR_KEY&gt;\n",
"\n",
"You'll also need to set the URL of your RSpace installation e.g.\n",
"\n",

View File

@ -15,7 +15,7 @@
"\n",
"## 🧑 Instructions for ingesting your own dataset\n",
"\n",
"Export your Slack data. You can do this by going to your Workspace Management page and clicking the Import/Export option ({your_slack_domain}.slack.com/services/export). Then, choose the right date range and click `Start export`. Slack will send you an email and a DM when the export is ready.\n",
"Export your Slack data. You can do this by going to your Workspace Management page and clicking the Import/Export option (\\{your_slack_domain\\}.slack.com/services/export). Then, choose the right date range and click `Start export`. Slack will send you an email and a DM when the export is ready.\n",
"\n",
"The download will produce a `.zip` file in your Downloads folder (or wherever your downloads can be found, depending on your OS configuration).\n",
"\n",

View File

@ -76,7 +76,7 @@
"source": [
"To bypass SSL verification errors during fetching, you can set the \"verify\" option:\n",
"\n",
"loader.requests_kwargs = {'verify':False}\n",
"`loader.requests_kwargs = {'verify':False}`\n",
"\n",
"### Initialization with multiple pages\n",
"\n",

View File

@ -277,7 +277,7 @@
"id": "af08575f",
"metadata": {},
"source": [
"You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow:"
"You can send your pipeline directly over the wire to your model, but this will only work for small models (&lt;2 Gb), and will be pretty slow:"
]
},
{

View File

@ -101,7 +101,7 @@ pressure station temperature time visibility
*/
Thought:The "temperature" column in the "air" table is relevant to the question. I can query the average temperature between the specified dates.
Action: sql_db_query
Action Input: "SELECT AVG(temperature) FROM air WHERE station = 'XiaoMaiDao' AND time >= '2022-10-19' AND time <= '2022-10-20'"
Action Input: "SELECT AVG(temperature) FROM air WHERE station = 'XiaoMaiDao' AND time >= '2022-10-19' AND time &lt;= '2022-10-20'"
Observation: [(68.0,)]
Thought:The average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022 is 68.0.
Final Answer: 68.0

View File

@ -190,7 +190,7 @@
" \n",
"Let's use LangChain's expression language (LCEL) to illustrate this. Any prompt here will do, we will optimize the final prompt with DSPy.\n",
"\n",
"Considering that, let's just keep it to the barebones: **Given {context}, answer the question {question} as a tweet.**"
"Considering that, let's just keep it to the barebones: **Given \\{context\\}, answer the question \\{question\\} as a tweet.**"
]
},
{

View File

@ -6,9 +6,9 @@
The Figma API requires an `access token`, `node_ids`, and a `file key`.
The `file key` can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilename
The `file key` can be pulled from the URL. https://www.figma.com/file/\{filekey\}/sampleFilename
`Node IDs` are also available in the URL. Click on anything and look for the '?node-id={node_id}' param.
`Node IDs` are also available in the URL. Click on anything and look for the '?node-id=\{node_id\}' param.
`Access token` [instructions](https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens).

View File

@ -60,7 +60,7 @@ Xinference client.
For local deployment, the endpoint will be http://localhost:9997.
For cluster deployment, the endpoint will be http://${supervisor_host}:9997.
For cluster deployment, the endpoint will be http://$\{supervisor_host\}:9997.
Then, you need to launch a model. You can specify the model names and other attributes

View File

@ -81,7 +81,7 @@
"\n",
"* **GITHUB_APP_ID**- A six digit number found in your app's general settings\n",
"* **GITHUB_APP_PRIVATE_KEY**- The location of your app's private key .pem file, or the full text of that file as a string.\n",
"* **GITHUB_REPOSITORY**- The name of the Github repository you want your bot to act upon. Must follow the format {username}/{repo-name}. *Make sure the app has been added to this repository first!*\n",
"* **GITHUB_REPOSITORY**- The name of the Github repository you want your bot to act upon. Must follow the format \\{username\\}/\\{repo-name\\}. *Make sure the app has been added to this repository first!*\n",
"* Optional: **GITHUB_BRANCH**- The branch where the bot will make its commits. Defaults to `repo.default_branch`.\n",
"* Optional: **GITHUB_BASE_BRANCH**- The base branch of your repo, from which PRs will be based. Defaults to `repo.default_branch`."
]

View File

@ -80,7 +80,7 @@
"\n",
"* **GITLAB_URL** - The URL of your hosted GitLab instance. Defaults to \"https://gitlab.com\". \n",
"* **GITLAB_PERSONAL_ACCESS_TOKEN**- The personal access token you created in the last step\n",
"* **GITLAB_REPOSITORY**- The name of the Gitlab repository you want your bot to act upon. Must follow the format {username}/{repo-name}.\n",
"* **GITLAB_REPOSITORY**- The name of the Gitlab repository you want your bot to act upon. Must follow the format \\{username\\}/\\{repo-name\\}.\n",
"* **GITLAB_BRANCH**- The branch where the bot will make its commits. Defaults to 'main.'\n",
"* **GITLAB_BASE_BRANCH**- The base branch of your repo, usually either 'main' or 'master.' This is the branch that merge requests will be based on. Defaults to 'main.'\n"
]

View File

@ -31,7 +31,7 @@
"- Configure the action by specifying the necessary details, such as the playlist name,\n",
"e.g., \"Songs from AI\".\n",
"- Reference the JSON Payload received by the Webhook in your action. For the Spotify\n",
"scenario, choose \"{{JsonPayload}}\" as your search query.\n",
"scenario, choose `{{JsonPayload}}` as your search query.\n",
"- Tap the \"Create Action\" button to save your action settings.\n",
"- Once you have finished configuring your action, click the \"Finish\" button to\n",
"complete the setup.\n",

View File

@ -137,7 +137,7 @@
"source": [
"#### Load API Keys and Access Tokens\n",
"\n",
"To use tools that require authentication, you have to store the corresponding access credentials in your environment in the format \"{tool name}_{authentication string}\" where the authentication string is one of [\"API_KEY\", \"SECRET_KEY\", \"SUBSCRIPTION_KEY\", \"ACCESS_KEY\"] for API keys or [\"ACCESS_TOKEN\", \"SECRET_TOKEN\"] for authentication tokens. Examples are \"OPENAI_API_KEY\", \"BING_SUBSCRIPTION_KEY\", \"AIRTABLE_ACCESS_TOKEN\"."
"To use tools that require authentication, you have to store the corresponding access credentials in your environment in the format `\"{tool name}_{authentication string}\"` where the authentication string is one of [\"API_KEY\", \"SECRET_KEY\", \"SUBSCRIPTION_KEY\", \"ACCESS_KEY\"] for API keys or [\"ACCESS_TOKEN\", \"SECRET_TOKEN\"] for authentication tokens. Examples are \"OPENAI_API_KEY\", \"BING_SUBSCRIPTION_KEY\", \"AIRTABLE_ACCESS_TOKEN\"."
]
},
{

View File

@ -564,7 +564,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to utilize Deep Lake's Managed Tensor Database, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps."
"In order to utilize Deep Lake's Managed Tensor Database, it is necessary to specify the runtime parameter as `{'tensor_db': True}` during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps."
]
},
{

View File

@ -377,7 +377,7 @@
}
},
"source": [
"If you encounter any problems during use, please feel free to contact <xingshaomin.xsm@alibaba-inc.com>, and we will do our best to provide you with assistance and support.\n"
"If you encounter any problems during use, please feel free to contact xingshaomin.xsm@alibaba-inc.com, and we will do our best to provide you with assistance and support.\n"
]
}
],

View File

@ -142,7 +142,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Please feel free to contact <liuboyao@baidu.com> or <chenweixu01@baidu.com> if you encounter any problems during use, and we will do our best to support you."
"Please feel free to contact liuboyao@baidu.com or chenweixu01@baidu.com if you encounter any problems during use, and we will do our best to support you."
]
}
],

View File

@ -474,7 +474,7 @@
"# Other Notes\n",
">* More documentation can be found at [LangChain-MongoDB](https://www.mongodb.com/docs/atlas/atlas-vector-search/ai-integrations/langchain/) site\n",
">* This feature is Generally Available and ready for production deployments.\n",
">* The langchain version 0.0.305 ([release notes](https://github.com/langchain-ai/langchain/releases/tag/v0.0.305)) introduces the support for $vectorSearch MQL stage, which is available with MongoDB Atlas 6.0.11 and 7.0.2. Users utilizing earlier versions of MongoDB Atlas need to pin their LangChain version to <=0.0.304\n",
">* The langchain version 0.0.305 ([release notes](https://github.com/langchain-ai/langchain/releases/tag/v0.0.305)) introduces the support for $vectorSearch MQL stage, which is available with MongoDB Atlas 6.0.11 and 7.0.2. Users utilizing earlier versions of MongoDB Atlas need to pin their LangChain version to &lt;=0.0.304\n",
"> "
]
},

View File

@ -254,8 +254,8 @@
"|----------|-------------------------|\n",
"| \\$eq | Equality (==) |\n",
"| \\$ne | Inequality (!=) |\n",
"| \\$lt | Less than (<) |\n",
"| \\$lte | Less than or equal (<=) |\n",
"| \\$lt | Less than (&lt;) |\n",
"| \\$lte | Less than or equal (&lt;=) |\n",
"| \\$gt | Greater than (>) |\n",
"| \\$gte | Greater than or equal (>=) |\n",
"| \\$in | Special Cased (in) |\n",

View File

@ -371,8 +371,8 @@
"|----------|-------------------------|\n",
"| `$eq` | Equality (==) |\n",
"| `$ne` | Inequality (!=) |\n",
"| `$lt` | Less than (<) |\n",
"| `$lte` | Less than or equal (<=) |\n",
"| `$lt` | Less than (&lt;) |\n",
"| `$lte` | Less than or equal (&lt;=) |\n",
"| `$gt` | Greater than (>) |\n",
"| `$gte` | Greater than or equal (>=) |\n",
"| `$in` | Contained in a set of given values (in) |\n",

View File

@ -40,57 +40,57 @@ well as updating the `langchain_core.pydantic_v1` and `langchain.pydantic_v1` im
| Package | Latest | Recommended constraint |
|--------------------------|--------|------------------------|
| langchain | 0.3.0 | >=0.3,<0.4 |
| langchain-community | 0.3.0 | >=0.3,<0.4 |
| langchain-text-splitters | 0.3.0 | >=0.3,<0.4 |
| langchain-core | 0.3.0 | >=0.3,<0.4 |
| langchain-experimental | 0.3.0 | >=0.3,<0.4 |
| langchain | 0.3.0 | >=0.3,&lt;0.4 |
| langchain-community | 0.3.0 | >=0.3,&lt;0.4 |
| langchain-text-splitters | 0.3.0 | >=0.3,&lt;0.4 |
| langchain-core | 0.3.0 | >=0.3,&lt;0.4 |
| langchain-experimental | 0.3.0 | >=0.3,&lt;0.4 |
### Downstream packages
| Package | Latest | Recommended constraint |
|-----------|--------|------------------------|
| langgraph | 0.2.20 | >=0.2.20,<0.3 |
| langserve | 0.3.0 | >=0.3,<0.4 |
| langgraph | 0.2.20 | >=0.2.20,&lt;0.3 |
| langserve | 0.3.0 | >=0.3,&lt;0.4 |
### Integration packages
| Package | Latest | Recommended constraint |
| -------------------------------------- | ------- | -------------------------- |
| langchain-ai21 | 0.2.0 | >=0.2,<0.3 |
| langchain-aws | 0.2.0 | >=0.2,<0.3 |
| langchain-anthropic | 0.2.0 | >=0.2,<0.3 |
| langchain-astradb | 0.4.1 | >=0.4.1,<0.5 |
| langchain-azure-dynamic-sessions | 0.2.0 | >=0.2,<0.3 |
| langchain-box | 0.2.0 | >=0.2,<0.3 |
| langchain-chroma | 0.1.4 | >=0.1.4,<0.2 |
| langchain-cohere | 0.3.0 | >=0.3,<0.4 |
| langchain-elasticsearch | 0.3.0 | >=0.3,<0.4 |
| langchain-exa | 0.2.0 | >=0.2,<0.3 |
| langchain-fireworks | 0.2.0 | >=0.2,<0.3 |
| langchain-groq | 0.2.0 | >=0.2,<0.3 |
| langchain-google-community | 2.0.0 | >=2,<3 |
| langchain-google-genai | 2.0.0 | >=2,<3 |
| langchain-google-vertexai | 2.0.0 | >=2,<3 |
| langchain-huggingface | 0.1.0 | >=0.1,<0.2 |
| langchain-ibm | 0.2.0 | >=0.2,<0.3 |
| langchain-milvus | 0.1.6 | >=0.1.6,<0.2 |
| langchain-mistralai | 0.2.0 | >=0.2,<0.3 |
| langchain-mongodb | 0.2.0 | >=0.2,<0.3 |
| langchain-nomic | 0.1.3 | >=0.1.3,<0.2 |
| langchain-ollama | 0.2.0 | >=0.2,<0.3 |
| langchain-openai | 0.2.0 | >=0.2,<0.3 |
| langchain-pinecone | 0.2.0 | >=0.2,<0.3 |
| langchain-postgres | 0.0.13 | >=0.0.13,<0.1 |
| langchain-prompty | 0.1.0 | >=0.1,<0.2 |
| langchain-qdrant | 0.1.4 | >=0.1.4,<0.2 |
| langchain-redis | 0.1.0 | >=0.1,<0.2 |
| langchain-sema4 | 0.2.0 | >=0.2,<0.3 |
| langchain-together | 0.2.0 | >=0.2,<0.3 |
| langchain-unstructured | 0.1.4 | >=0.1.4,<0.2 |
| langchain-upstage | 0.3.0 | >=0.3,<0.4 |
| langchain-voyageai | 0.2.0 | >=0.2,<0.3 |
| langchain-weaviate | 0.0.3 | >=0.0.3,<0.1 |
| langchain-ai21 | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-aws | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-anthropic | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-astradb | 0.4.1 | >=0.4.1,&lt;0.5 |
| langchain-azure-dynamic-sessions | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-box | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-chroma | 0.1.4 | >=0.1.4,&lt;0.2 |
| langchain-cohere | 0.3.0 | >=0.3,&lt;0.4 |
| langchain-elasticsearch | 0.3.0 | >=0.3,&lt;0.4 |
| langchain-exa | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-fireworks | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-groq | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-google-community | 2.0.0 | >=2,&lt;3 |
| langchain-google-genai | 2.0.0 | >=2,&lt;3 |
| langchain-google-vertexai | 2.0.0 | >=2,&lt;3 |
| langchain-huggingface | 0.1.0 | >=0.1,&lt;0.2 |
| langchain-ibm | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-milvus | 0.1.6 | >=0.1.6,&lt;0.2 |
| langchain-mistralai | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-mongodb | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-nomic | 0.1.3 | >=0.1.3,&lt;0.2 |
| langchain-ollama | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-openai | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-pinecone | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-postgres | 0.0.13 | >=0.0.13,&lt;0.1 |
| langchain-prompty | 0.1.0 | >=0.1,&lt;0.2 |
| langchain-qdrant | 0.1.4 | >=0.1.4,&lt;0.2 |
| langchain-redis | 0.1.0 | >=0.1,&lt;0.2 |
| langchain-sema4 | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-together | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-unstructured | 0.1.4 | >=0.1.4,&lt;0.2 |
| langchain-upstage | 0.3.0 | >=0.3,&lt;0.4 |
| langchain-voyageai | 0.2.0 | >=0.2,&lt;0.3 |
| langchain-weaviate | 0.0.3 | >=0.0.3,&lt;0.1 |
Once you've updated to recent versions of the packages, you may need to address the following issues stemming from the internal switch from Pydantic v1 to Pydantic v2:
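The constraints in the tables above translate directly into installer pins. For example, a `requirements.txt` holding the core packages on the 0.3 line might read:

```text
langchain>=0.3,<0.4
langchain-community>=0.3,<0.4
langchain-core>=0.3,<0.4
langgraph>=0.2.20,<0.3
```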

View File

@ -31,6 +31,20 @@ class EscapePreprocessor(Preprocessor):
r"[\1](\2.md)",
cell.source,
)
elif cell.cell_type == "code":
# escape ``` in code
cell.source = cell.source.replace("```", r"\`\`\`")
# escape ``` in output
if "outputs" in cell:
for output in cell["outputs"]:
if "text" in output:
output["text"] = output["text"].replace("```", r"\`\`\`")
if "data" in output:
for key, value in output["data"].items():
if isinstance(value, str):
output["data"][key] = value.replace("```", r"\`\`\`")
return cell, resources
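Applied to a single notebook cell, the escaping added above behaves like this standalone sketch (plain dicts stand in for nbformat cell objects; `escape_backticks` is a hypothetical name, not part of the script):

```python
# Standalone sketch of the backtick-escaping rule from EscapePreprocessor,
# using plain dicts in place of nbformat cell objects: any ``` in a code
# cell's source or outputs is escaped so it cannot close an MDX code fence.
def escape_backticks(cell: dict) -> dict:
    if cell["cell_type"] == "code":
        cell["source"] = cell["source"].replace("```", r"\`\`\`")
        for output in cell.get("outputs", []):
            if "text" in output:
                output["text"] = output["text"].replace("```", r"\`\`\`")
            for key, value in output.get("data", {}).items():
                if isinstance(value, str):
                    output["data"][key] = value.replace("```", r"\`\`\`")
    return cell

cell = {
    "cell_type": "code",
    "source": 'print("```mdx fences break docs")',
    "outputs": [{"text": "```mdx fences break docs\n"}],
}
escaped = escape_backticks(cell)
```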