diff --git a/docs/Makefile b/docs/Makefile index 6645d85bdf2..0835c326bf9 100644 --- a/docs/Makefile +++ b/docs/Makefile @@ -46,7 +46,7 @@ generate-files: $(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR) - wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O $(INTERMEDIATE_DIR)/langserve.md + curl https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md | sed 's/<=/\&lt;=/g' > $(INTERMEDIATE_DIR)/langserve.md $(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/ copy-infra: diff --git a/docs/docs/additional_resources/arxiv_references.mdx b/docs/docs/additional_resources/arxiv_references.mdx index bbd292c5ffa..461399688f6 100644 --- a/docs/docs/additional_resources/arxiv_references.mdx +++ b/docs/docs/additional_resources/arxiv_references.mdx @@ -451,8 +451,7 @@ steps of the thought-process and explore other directions from there. To verify the effectiveness of the proposed technique, we implemented a ToT-based solver for the Sudoku Puzzle. Experimental results show that the ToT framework can significantly increase the success rate of Sudoku puzzle solving. Our -implementation of the ToT-based Sudoku solver is available on GitHub: -\url{https://github.com/jieyilong/tree-of-thought-puzzle-solver}. +implementation of the ToT-based Sudoku solver is available on [GitHub](https://github.com/jieyilong/tree-of-thought-puzzle-solver). ## Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models diff --git a/docs/docs/concepts.mdx b/docs/docs/concepts.mdx index 05fc8f810ff..0e5fdc500b8 100644 --- a/docs/docs/concepts.mdx +++ b/docs/docs/concepts.mdx @@ -732,10 +732,10 @@ of the object. If you're creating a custom chain or runnable, you need to remember to propagate request time callbacks to any child objects.
-:::important Async in Python<=3.10 +:::important Async in Python&lt;=3.10 Any `RunnableLambda`, a `RunnableGenerator`, or `Tool` that invokes other runnables -and is running `async` in python<=3.10, will have to propagate callbacks to child +and is running `async` in python&lt;=3.10, will have to propagate callbacks to child objects manually. This is because LangChain cannot automatically propagate callbacks to child objects in this case. diff --git a/docs/docs/how_to/callbacks_custom_events.ipynb b/docs/docs/how_to/callbacks_custom_events.ipynb index 71a05914bdd..565a90ae623 100644 --- a/docs/docs/how_to/callbacks_custom_events.ipynb +++ b/docs/docs/how_to/callbacks_custom_events.ipynb @@ -38,9 +38,9 @@ "\n", "\n", ":::caution COMPATIBILITY\n", - "LangChain cannot automatically propagate configuration, including callbacks necessary for astream_events(), to child runnables if you are running async code in python<=3.10. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n", + "LangChain cannot automatically propagate configuration, including callbacks necessary for astream_events(), to child runnables if you are running async code in python&lt;=3.10. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n", "\n", - "If you are running python<=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n", + "If you are running python&lt;=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n", "\n", "If you are running python>=3.11, the `RunnableConfig` will automatically propagate to child runnables in async environment.
However, it is still a good idea to propagate the `RunnableConfig` manually if your code may run in other Python versions.\n", ":::" @@ -115,7 +115,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In python <= 3.10, you must propagate the config manually!" + "In python &lt;= 3.10, you must propagate the config manually!" ] }, { diff --git a/docs/docs/how_to/custom_chat_model.ipynb b/docs/docs/how_to/custom_chat_model.ipynb index 7aaa3c94c7c..2f9d83ec6b6 100644 --- a/docs/docs/how_to/custom_chat_model.ipynb +++ b/docs/docs/how_to/custom_chat_model.ipynb @@ -39,7 +39,7 @@ "| `AIMessageChunk` / `HumanMessageChunk` / ... | Chunk variant of each type of message. |\n", "\n", "\n", - "::: {.callout-note}\n", + ":::{.callout-note}\n", "`ToolMessage` and `FunctionMessage` closely follow OpenAI's `function` and `tool` roles.\n", "\n", "This is a rapidly developing field and as more models add function calling capabilities. Expect that there will be additions to this schema.\n", diff --git a/docs/docs/how_to/parallel.ipynb b/docs/docs/how_to/parallel.ipynb index eedaea41731..cbce8c33841 100644 --- a/docs/docs/how_to/parallel.ipynb +++ b/docs/docs/how_to/parallel.ipynb @@ -118,7 +118,7 @@ "id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1", "metadata": {}, "source": [ - "::: {.callout-tip}\n", + ":::{.callout-tip}\n", "Note that when composing a RunnableParallel with another Runnable we don't even need to wrap our dictionary in the RunnableParallel class — the type conversion is handled for us.
In the context of a chain, these are equivalent:\n", ":::\n", "\n", diff --git a/docs/docs/how_to/streaming.ipynb b/docs/docs/how_to/streaming.ipynb index 85d1ba1d5e6..520c867ecb7 100644 --- a/docs/docs/how_to/streaming.ipynb +++ b/docs/docs/how_to/streaming.ipynb @@ -517,7 +517,7 @@ "id": "d59823f5-9b9a-43c5-a213-34644e2f1d3d", "metadata": {}, "source": [ - ":::{.callout-note}\n", + ":::note\n", "Because the code above is relying on JSON auto-completion, you may see partial names of countries (e.g., `Sp` and `Spain`), which is not what one would want for an extraction result!\n", "\n", "We're focusing on streaming concepts, not necessarily the results of the chains.\n", @@ -689,27 +689,27 @@ "Below is a reference table that shows some events that might be emitted by the various Runnable objects.\n", "\n", "\n", - ":::{.callout-note}\n", + ":::note\n", "When streaming is implemented properly, the inputs to a runnable will not be known until after the input stream has been entirely consumed. 
This means that `inputs` will often be included only for `end` events and rather than for `start` events.\n", ":::\n", "\n", "| event | name | chunk | input | output |\n", "|----------------------|------------------|---------------------------------|-----------------------------------------------|-------------------------------------------------|\n", - "| on_chat_model_start | [model name] | | {\"messages\": [[SystemMessage, HumanMessage]]} | |\n", + "| on_chat_model_start | [model name] | | \\{\"messages\": [[SystemMessage, HumanMessage]]\\} | |\n", "| on_chat_model_stream | [model name] | AIMessageChunk(content=\"hello\") | | |\n", - "| on_chat_model_end | [model name] | | {\"messages\": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content=\"hello world\") |\n", - "| on_llm_start | [model name] | | {'input': 'hello'} | |\n", + "| on_chat_model_end | [model name] | | \\{\"messages\": [[SystemMessage, HumanMessage]]\\} | AIMessageChunk(content=\"hello world\") |\n", + "| on_llm_start | [model name] | | \\{'input': 'hello'\\} | |\n", "| on_llm_stream | [model name] | 'Hello' | | |\n", "| on_llm_end | [model name] | | 'Hello human!' | |\n", "| on_chain_start | format_docs | | | |\n", "| on_chain_stream | format_docs | \"hello world!, goodbye world!\" | | |\n", "| on_chain_end | format_docs | | [Document(...)] | \"hello world!, goodbye world!\" |\n", - "| on_tool_start | some_tool | | {\"x\": 1, \"y\": \"2\"} | |\n", - "| on_tool_end | some_tool | | | {\"x\": 1, \"y\": \"2\"} |\n", - "| on_retriever_start | [retriever name] | | {\"query\": \"hello\"} | |\n", - "| on_retriever_end | [retriever name] | | {\"query\": \"hello\"} | [Document(...), ..] 
|\n", - "| on_prompt_start | [template_name] | | {\"question\": \"hello\"} | |\n", - "| on_prompt_end | [template_name] | | {\"question\": \"hello\"} | ChatPromptValue(messages: [SystemMessage, ...]) |" + "| on_tool_start | some_tool | | \{\"x\": 1, \"y\": \"2\"\} | |\n", + "| on_tool_end | some_tool | | | \{\"x\": 1, \"y\": \"2\"\} |\n", + "| on_retriever_start | [retriever name] | | \{\"query\": \"hello\"\} | |\n", + "| on_retriever_end | [retriever name] | | \{\"query\": \"hello\"\} | [Document(...), ..] |\n", + "| on_prompt_start | [template_name] | | \{\"question\": \"hello\"\} | |\n", + "| on_prompt_end | [template_name] | | \{\"question\": \"hello\"\} | ChatPromptValue(messages: [SystemMessage, ...]) |" ] }, { diff --git a/docs/docs/how_to/tool_stream_events.ipynb b/docs/docs/how_to/tool_stream_events.ipynb index 3cf3f1ae21d..84809f1230f 100644 --- a/docs/docs/how_to/tool_stream_events.ipynb +++ b/docs/docs/how_to/tool_stream_events.ipynb @@ -20,9 +20,9 @@ "\n", ":::caution Compatibility\n", "\n", - "LangChain cannot automatically propagate configuration, including callbacks necessary for `astream_events()`, to child runnables if you are running `async` code in `python<=3.10`. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n", + "LangChain cannot automatically propagate configuration, including callbacks necessary for `astream_events()`, to child runnables if you are running `async` code in `python&lt;=3.10`. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n", "\n", - "If you are running python<=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments.
For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n", + "If you are running python&lt;=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n", "\n", "If you are running python>=3.11, the `RunnableConfig` will automatically propagate to child runnables in async environment. However, it is still a good idea to propagate the `RunnableConfig` manually if your code may run in older Python versions.\n", "\n", diff --git a/docs/docs/integrations/chat/anthropic_functions.ipynb b/docs/docs/integrations/chat/anthropic_functions.ipynb index 1c61725fe95..5c913c82a5f 100644 --- a/docs/docs/integrations/chat/anthropic_functions.ipynb +++ b/docs/docs/integrations/chat/anthropic_functions.ipynb @@ -17,7 +17,7 @@ "source": [ "# [Deprecated] Experimental Anthropic Tools Wrapper\n", "\n", - "::: {.callout-warning}\n", + ":::{.callout-warning}\n", "\n", "The Anthropic API officially supports tool-calling so this workaround is no longer needed.
Please use [ChatAnthropic](/docs/integrations/chat/anthropic) with `langchain-anthropic>=0.1.15`.\n", "\n", @@ -118,7 +118,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3 (ipykernel)", + "display_name": ".venv", "language": "python", "name": "python3" }, @@ -132,7 +132,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.1" + "version": "3.11.4" } }, "nbformat": 4, diff --git a/docs/docs/integrations/document_loaders/blockchain.ipynb b/docs/docs/integrations/document_loaders/blockchain.ipynb index d0850311f23..438ad099bf5 100644 --- a/docs/docs/integrations/document_loaders/blockchain.ipynb +++ b/docs/docs/integrations/document_loaders/blockchain.ipynb @@ -44,7 +44,7 @@ "The output takes the following format:\n", "\n", "- pageContent= Individual NFT\n", - "- metadata={'source': '0x1a92f7381b9f03921564a437210bb9396471050c', 'blockchain': 'eth-mainnet', 'tokenId': '0x15'})" + "- metadata=\\{'source': '0x1a92f7381b9f03921564a437210bb9396471050c', 'blockchain': 'eth-mainnet', 'tokenId': '0x15'\\}" ] }, { diff --git a/docs/docs/integrations/document_loaders/confluence.ipynb b/docs/docs/integrations/document_loaders/confluence.ipynb index ea8e0973b6d..c7f04f0bae5 100644 --- a/docs/docs/integrations/document_loaders/confluence.ipynb +++ b/docs/docs/integrations/document_loaders/confluence.ipynb @@ -19,7 +19,7 @@ "\n", "You can also specify a boolean `include_attachments` to include attachments, this is set to False by default, if set to True all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. 
Currently supported attachment types are: `PDF`, `PNG`, `JPEG/JPG`, `SVG`, `Word` and `Excel`.\n", "\n", - "Hint: `space_key` and `page_id` can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>\n" + "Hint: `space_key` and `page_id` can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/&lt;space_key&gt;/pages/&lt;page_id&gt;\n" ] }, { diff --git a/docs/docs/integrations/document_loaders/figma.ipynb b/docs/docs/integrations/document_loaders/figma.ipynb index fd4cc237c5d..cccc3dac5c9 100644 --- a/docs/docs/integrations/document_loaders/figma.ipynb +++ b/docs/docs/integrations/document_loaders/figma.ipynb @@ -40,9 +40,9 @@ "source": [ "The Figma API Requires an access token, node_ids, and a file key.\n", "\n", - "The file key can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilename\n", + "The file key can be pulled from the URL. https://www.figma.com/file/\{filekey\}/sampleFilename\n", "\n", - "Node IDs are also available in the URL. Click on anything and look for the '?node-id={node_id}' param.\n", + "Node IDs are also available in the URL.
Click on anything and look for the '?node-id=\\{node_id\\}' param.\n", "\n", "Access token instructions are in the Figma help center article: https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens" ] diff --git a/docs/docs/integrations/document_loaders/mintbase.ipynb b/docs/docs/integrations/document_loaders/mintbase.ipynb index 2aaaac0d915..3b72cfbe49b 100644 --- a/docs/docs/integrations/document_loaders/mintbase.ipynb +++ b/docs/docs/integrations/document_loaders/mintbase.ipynb @@ -44,7 +44,7 @@ "The output takes the following format:\n", "\n", "- pageContent= Individual NFT\n", - "- metadata={'source': 'nft.yearofchef.near', 'blockchain': 'mainnet', 'tokenId': '1846'}" + "- metadata=\\{'source': 'nft.yearofchef.near', 'blockchain': 'mainnet', 'tokenId': '1846'\\}" ] }, { diff --git a/docs/docs/integrations/document_loaders/mongodb.ipynb b/docs/docs/integrations/document_loaders/mongodb.ipynb index e538114d5c5..511b753a54f 100644 --- a/docs/docs/integrations/document_loaders/mongodb.ipynb +++ b/docs/docs/integrations/document_loaders/mongodb.ipynb @@ -47,7 +47,7 @@ "The output takes the following format:\n", "\n", "- pageContent= Mongo Document\n", - "- metadata={'database': '[database_name]', 'collection': '[collection_name]'}" + "- metadata=\\{'database': '[database_name]', 'collection': '[collection_name]'\\}" ] }, { diff --git a/docs/docs/integrations/document_loaders/rspace.ipynb b/docs/docs/integrations/document_loaders/rspace.ipynb index fc1d8aeb9e0..04976c9d7d9 100644 --- a/docs/docs/integrations/document_loaders/rspace.ipynb +++ b/docs/docs/integrations/document_loaders/rspace.ipynb @@ -32,7 +32,7 @@ "source": [ "It's best to store your RSpace API key as an environment variable. 
\n", "\n", - " RSPACE_API_KEY=<YOUR_KEY>\n", + " RSPACE_API_KEY=&lt;YOUR_KEY&gt;\n", "\n", "You'll also need to set the URL of your RSpace installation e.g.\n", "\n", diff --git a/docs/docs/integrations/document_loaders/slack.ipynb b/docs/docs/integrations/document_loaders/slack.ipynb index 71f85999821..648ecda4e86 100644 --- a/docs/docs/integrations/document_loaders/slack.ipynb +++ b/docs/docs/integrations/document_loaders/slack.ipynb @@ -15,7 +15,7 @@ "\n", "## 🧑 Instructions for ingesting your own dataset\n", "\n", - "Export your Slack data. You can do this by going to your Workspace Management page and clicking the Import/Export option ({your_slack_domain}.slack.com/services/export). Then, choose the right date range and click `Start export`. Slack will send you an email and a DM when the export is ready.\n", + "Export your Slack data. You can do this by going to your Workspace Management page and clicking the Import/Export option (\{your_slack_domain\}.slack.com/services/export). Then, choose the right date range and click `Start export`.
Slack will send you an email and a DM when the export is ready.\n", "\n", "The download will produce a `.zip` file in your Downloads folder (or wherever your downloads can be found, depending on your OS configuration).\n", "\n", diff --git a/docs/docs/integrations/document_loaders/web_base.ipynb b/docs/docs/integrations/document_loaders/web_base.ipynb index 6eceb401ee5..52589cf4b2a 100644 --- a/docs/docs/integrations/document_loaders/web_base.ipynb +++ b/docs/docs/integrations/document_loaders/web_base.ipynb @@ -76,7 +76,7 @@ "source": [ "To bypass SSL verification errors during fetching, you can set the \"verify\" option:\n", "\n", - "loader.requests_kwargs = {'verify':False}\n", + "`loader.requests_kwargs = {'verify':False}`\n", "\n", "### Initialization with multiple pages\n", "\n", diff --git a/docs/docs/integrations/llms/runhouse.ipynb b/docs/docs/integrations/llms/runhouse.ipynb index d4ff8e80dd0..b5c9e01c8c5 100644 --- a/docs/docs/integrations/llms/runhouse.ipynb +++ b/docs/docs/integrations/llms/runhouse.ipynb @@ -277,7 +277,7 @@ "id": "af08575f", "metadata": {}, "source": [ - "You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow:" + "You can send your pipeline directly over the wire to your model, but this will only work for small models (&lt;2 Gb), and will be pretty slow:"
Action: sql_db_query -Action Input: "SELECT AVG(temperature) FROM air WHERE station = 'XiaoMaiDao' AND time >= '2022-10-19' AND time <= '2022-10-20'" +Action Input: "SELECT AVG(temperature) FROM air WHERE station = 'XiaoMaiDao' AND time >= '2022-10-19' AND time &lt;= '2022-10-20'" Observation: [(68.0,)] Thought:The average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022 is 68.0. Final Answer: 68.0 diff --git a/docs/docs/integrations/providers/dspy.ipynb b/docs/docs/integrations/providers/dspy.ipynb index 43b3f8eeeab..0fc6e02910e 100644 --- a/docs/docs/integrations/providers/dspy.ipynb +++ b/docs/docs/integrations/providers/dspy.ipynb @@ -190,7 +190,7 @@ " \n", "Let's use LangChain's expression language (LCEL) to illustrate this. Any prompt here will do, we will optimize the final prompt with DSPy.\n", "\n", - "Considering that, let's just keep it to the barebones: **Given {context}, answer the question {question} as a tweet.**" + "Considering that, let's just keep it to the barebones: **Given \{context\}, answer the question \{question\} as a tweet.**" ] }, { diff --git a/docs/docs/integrations/providers/figma.mdx b/docs/docs/integrations/providers/figma.mdx index 6b108aaa21e..d907a481411 100644 --- a/docs/docs/integrations/providers/figma.mdx +++ b/docs/docs/integrations/providers/figma.mdx @@ -6,9 +6,9 @@ The Figma API requires an `access token`, `node_ids`, and a `file key`. -The `file key` can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilename +The `file key` can be pulled from the URL. https://www.figma.com/file/\{filekey\}/sampleFilename -`Node IDs` are also available in the URL. Click on anything and look for the '?node-id={node_id}' param. +`Node IDs` are also available in the URL. Click on anything and look for the '?node-id=\{node_id\}' param. `Access token` [instructions](https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens).
diff --git a/docs/docs/integrations/providers/xinference.mdx b/docs/docs/integrations/providers/xinference.mdx index 07aefb3b952..97f32b6c1a0 100644 --- a/docs/docs/integrations/providers/xinference.mdx +++ b/docs/docs/integrations/providers/xinference.mdx @@ -60,7 +60,7 @@ Xinference client. For local deployment, the endpoint will be http://localhost:9997. -For cluster deployment, the endpoint will be http://${supervisor_host}:9997. +For cluster deployment, the endpoint will be http://$\{supervisor_host\}:9997. Then, you need to launch a model. You can specify the model names and other attributes diff --git a/docs/docs/integrations/tools/github.ipynb b/docs/docs/integrations/tools/github.ipynb index d9cfd289ecf..a0f61cc7aeb 100644 --- a/docs/docs/integrations/tools/github.ipynb +++ b/docs/docs/integrations/tools/github.ipynb @@ -81,7 +81,7 @@ "\n", "* **GITHUB_APP_ID**- A six digit number found in your app's general settings\n", "* **GITHUB_APP_PRIVATE_KEY**- The location of your app's private key .pem file, or the full text of that file as a string.\n", - "* **GITHUB_REPOSITORY**- The name of the Github repository you want your bot to act upon. Must follow the format {username}/{repo-name}. *Make sure the app has been added to this repository first!*\n", + "* **GITHUB_REPOSITORY**- The name of the Github repository you want your bot to act upon. Must follow the format \\{username\\}/\\{repo-name\\}. *Make sure the app has been added to this repository first!*\n", "* Optional: **GITHUB_BRANCH**- The branch where the bot will make its commits. Defaults to `repo.default_branch`.\n", "* Optional: **GITHUB_BASE_BRANCH**- The base branch of your repo upon which PRs will based from. Defaults to `repo.default_branch`." 
] diff --git a/docs/docs/integrations/tools/gitlab.ipynb b/docs/docs/integrations/tools/gitlab.ipynb index b22faeeb88e..622622d9add 100644 --- a/docs/docs/integrations/tools/gitlab.ipynb +++ b/docs/docs/integrations/tools/gitlab.ipynb @@ -80,7 +80,7 @@ "\n", "* **GITLAB_URL** - The URL hosted Gitlab. Defaults to \"https://gitlab.com\". \n", "* **GITLAB_PERSONAL_ACCESS_TOKEN**- The personal access token you created in the last step\n", - "* **GITLAB_REPOSITORY**- The name of the Gitlab repository you want your bot to act upon. Must follow the format {username}/{repo-name}.\n", + "* **GITLAB_REPOSITORY**- The name of the Gitlab repository you want your bot to act upon. Must follow the format \\{username\\}/\\{repo-name\\}.\n", "* **GITLAB_BRANCH**- The branch where the bot will make its commits. Defaults to 'main.'\n", "* **GITLAB_BASE_BRANCH**- The base branch of your repo, usually either 'main' or 'master.' This is where merge requests will base from. Defaults to 'main.'\n" ] diff --git a/docs/docs/integrations/tools/ifttt.ipynb b/docs/docs/integrations/tools/ifttt.ipynb index 4cd3bf9f687..564dea0edaa 100644 --- a/docs/docs/integrations/tools/ifttt.ipynb +++ b/docs/docs/integrations/tools/ifttt.ipynb @@ -31,7 +31,7 @@ "- Configure the action by specifying the necessary details, such as the playlist name,\n", "e.g., \"Songs from AI\".\n", "- Reference the JSON Payload received by the Webhook in your action. 
For the Spotify\n", - "scenario, choose \"{{JsonPayload}}\" as your search query.\n", + "scenario, choose `{{JsonPayload}}` as your search query.\n", "- Tap the \"Create Action\" button to save your action settings.\n", "- Once you have finished configuring your action, click the \"Finish\" button to\n", "complete the setup.\n", diff --git a/docs/docs/integrations/tools/lemonai.ipynb b/docs/docs/integrations/tools/lemonai.ipynb index d169c62a97d..eb532597569 100644 --- a/docs/docs/integrations/tools/lemonai.ipynb +++ b/docs/docs/integrations/tools/lemonai.ipynb @@ -137,7 +137,7 @@ "source": [ "#### Load API Keys and Access Tokens\n", "\n", - "To use tools that require authentication, you have to store the corresponding access credentials in your environment in the format \"{tool name}_{authentication string}\" where the authentication string is one of [\"API_KEY\", \"SECRET_KEY\", \"SUBSCRIPTION_KEY\", \"ACCESS_KEY\"] for API keys or [\"ACCESS_TOKEN\", \"SECRET_TOKEN\"] for authentication tokens. Examples are \"OPENAI_API_KEY\", \"BING_SUBSCRIPTION_KEY\", \"AIRTABLE_ACCESS_TOKEN\"." + "To use tools that require authentication, you have to store the corresponding access credentials in your environment in the format `\"{tool name}_{authentication string}\"` where the authentication string is one of [\"API_KEY\", \"SECRET_KEY\", \"SUBSCRIPTION_KEY\", \"ACCESS_KEY\"] for API keys or [\"ACCESS_TOKEN\", \"SECRET_TOKEN\"] for authentication tokens. Examples are \"OPENAI_API_KEY\", \"BING_SUBSCRIPTION_KEY\", \"AIRTABLE_ACCESS_TOKEN\"." 
] }, { diff --git a/docs/docs/integrations/vectorstores/activeloop_deeplake.ipynb b/docs/docs/integrations/vectorstores/activeloop_deeplake.ipynb index 12c63b3f7dd..f06f311088e 100644 --- a/docs/docs/integrations/vectorstores/activeloop_deeplake.ipynb +++ b/docs/docs/integrations/vectorstores/activeloop_deeplake.ipynb @@ -564,7 +564,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In order to utilize Deep Lake's Managed Tensor Database, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps." + "In order to utilize Deep Lake's Managed Tensor Database, it is necessary to specify the runtime parameter as `{'tensor_db': True}` during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps." 
] }, { diff --git a/docs/docs/integrations/vectorstores/alibabacloud_opensearch.ipynb b/docs/docs/integrations/vectorstores/alibabacloud_opensearch.ipynb index 1d001f5510c..751d05941f4 100644 --- a/docs/docs/integrations/vectorstores/alibabacloud_opensearch.ipynb +++ b/docs/docs/integrations/vectorstores/alibabacloud_opensearch.ipynb @@ -377,7 +377,7 @@ } }, "source": [ - "If you encounter any problems during use, please feel free to contact <xingshaomin.xsm@alibaba-inc.com>, and we will do our best to provide you with assistance and support.\n" + "If you encounter any problems during use, please feel free to contact xingshaomin.xsm@alibaba-inc.com, and we will do our best to provide you with assistance and support.\n" ] } ], diff --git a/docs/docs/integrations/vectorstores/baiducloud_vector_search.ipynb b/docs/docs/integrations/vectorstores/baiducloud_vector_search.ipynb index 7fcc076db4d..86591c08c18 100644 --- a/docs/docs/integrations/vectorstores/baiducloud_vector_search.ipynb +++ b/docs/docs/integrations/vectorstores/baiducloud_vector_search.ipynb @@ -142,7 +142,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Please feel free to contact <liuboyao@baidu.com> or <chenweixu01@baidu.com> if you encounter any problems during use, and we will do our best to support you." + "Please feel free to contact liuboyao@baidu.com or chenweixu01@baidu.com if you encounter any problems during use, and we will do our best to support you."
] } ], diff --git a/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb b/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb index 140dfe179ed..2b49b21fb94 100644 --- a/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb +++ b/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb @@ -474,7 +474,7 @@ "# Other Notes\n", ">* More documentation can be found at [LangChain-MongoDB](https://www.mongodb.com/docs/atlas/atlas-vector-search/ai-integrations/langchain/) site\n", ">* This feature is Generally Available and ready for production deployments.\n", - ">* The langchain version 0.0.305 ([release notes](https://github.com/langchain-ai/langchain/releases/tag/v0.0.305)) introduces the support for $vectorSearch MQL stage, which is available with MongoDB Atlas 6.0.11 and 7.0.2. Users utilizing earlier versions of MongoDB Atlas need to pin their LangChain version to <=0.0.304\n", + ">* The langchain version 0.0.305 ([release notes](https://github.com/langchain-ai/langchain/releases/tag/v0.0.305)) introduces the support for $vectorSearch MQL stage, which is available with MongoDB Atlas 6.0.11 and 7.0.2. 
Users utilizing earlier versions of MongoDB Atlas need to pin their LangChain version to &lt;=0.0.304\n", "> " ] }, diff --git a/docs/docs/integrations/vectorstores/pgvector.ipynb b/docs/docs/integrations/vectorstores/pgvector.ipynb index ebfa261edc5..db6979de967 100644 --- a/docs/docs/integrations/vectorstores/pgvector.ipynb +++ b/docs/docs/integrations/vectorstores/pgvector.ipynb @@ -254,8 +254,8 @@ "|----------|-------------------------|\n", "| \$eq | Equality (==) |\n", "| \$ne | Inequality (!=) |\n", - "| \$lt | Less than (<) |\n", - "| \$lte | Less than or equal (<=) |\n", + "| \$lt | Less than (&lt;) |\n", + "| \$lte | Less than or equal (&lt;=) |\n", "| \$gt | Greater than (>) |\n", "| \$gte | Greater than or equal (>=) |\n", "| \$in | Special Cased (in) |\n", diff --git a/docs/docs/integrations/vectorstores/sap_hanavector.ipynb b/docs/docs/integrations/vectorstores/sap_hanavector.ipynb index 1e8dc1b5529..90a0ed5e668 100644 --- a/docs/docs/integrations/vectorstores/sap_hanavector.ipynb +++ b/docs/docs/integrations/vectorstores/sap_hanavector.ipynb @@ -371,8 +371,8 @@ "|----------|-------------------------|\n", "| `$eq` | Equality (==) |\n", "| `$ne` | Inequality (!=) |\n", - "| `$lt` | Less than (<) |\n", - "| `$lte` | Less than or equal (<=) |\n", + "| `$lt` | Less than (&lt;) |\n", + "| `$lte` | Less than or equal (&lt;=) |\n", "| `$gt` | Greater than (>) |\n", "| `$gte` | Greater than or equal (>=) |\n", "| `$in` | Contained in a set of given values (in) |\n", diff --git a/docs/docs/versions/v0_3/index.mdx b/docs/docs/versions/v0_3/index.mdx index 0e5bfe971c0..25a3874b573 100644 --- a/docs/docs/versions/v0_3/index.mdx +++ b/docs/docs/versions/v0_3/index.mdx @@ -40,57 +40,57 @@ well as updating the `langchain_core.pydantic_v1` and `langchain.pydantic_v1` im | Package | Latest | Recommended constraint | |--------------------------|--------|------------------------| -| langchain | 0.3.0 | >=0.3,<0.4 | -| langchain-community | 0.3.0 | >=0.3,<0.4 | -|
langchain-text-splitters | 0.3.0 | >=0.3,<0.4 | -| langchain-core | 0.3.0 | >=0.3,<0.4 | -| langchain-experimental | 0.3.0 | >=0.3,<0.4 | +| langchain | 0.3.0 | >=0.3,&lt;0.4 | +| langchain-community | 0.3.0 | >=0.3,&lt;0.4 | +| langchain-text-splitters | 0.3.0 | >=0.3,&lt;0.4 | +| langchain-core | 0.3.0 | >=0.3,&lt;0.4 | +| langchain-experimental | 0.3.0 | >=0.3,&lt;0.4 | ### Downstream packages | Package | Latest | Recommended constraint | |-----------|--------|------------------------| -| langgraph | 0.2.20 | >=0.2.20,<0.3 | -| langserve | 0.3.0 | >=0.3,<0.4 | +| langgraph | 0.2.20 | >=0.2.20,&lt;0.3 | +| langserve | 0.3.0 | >=0.3,&lt;0.4 | ### Integration packages | Package | Latest | Recommended constraint | | -------------------------------------- | ------- | -------------------------- | -| langchain-ai21 | 0.2.0 | >=0.2,<0.3 | -| langchain-aws | 0.2.0 | >=0.2,<0.3 | -| langchain-anthropic | 0.2.0 | >=0.2,<0.3 | -| langchain-astradb | 0.4.1 | >=0.4.1,<0.5 | -| langchain-azure-dynamic-sessions | 0.2.0 | >=0.2,<0.3 | -| langchain-box | 0.2.0 | >=0.2,<0.3 | -| langchain-chroma | 0.1.4 | >=0.1.4,<0.2 | -| langchain-cohere | 0.3.0 | >=0.3,<0.4 | -| langchain-elasticsearch | 0.3.0 | >=0.3,<0.4 | -| langchain-exa | 0.2.0 | >=0.2,<0.3 | -| langchain-fireworks | 0.2.0 | >=0.2,<0.3 | -| langchain-groq | 0.2.0 | >=0.2,<0.3 | -| langchain-google-community | 2.0.0 | >=2,<3 | -| langchain-google-genai | 2.0.0 | >=2,<3 | -| langchain-google-vertexai | 2.0.0 | >=2,<3 | -| langchain-huggingface | 0.1.0 | >=0.1,<0.2 | -| langchain-ibm | 0.2.0 | >=0.2,<0.3 | -| langchain-milvus | 0.1.6 | >=0.1.6,<0.2 | -| langchain-mistralai | 0.2.0 | >=0.2,<0.3 | -| langchain-mongodb | 0.2.0 | >=0.2,<0.3 | -| langchain-nomic | 0.1.3 | >=0.1.3,<0.2 | -| langchain-ollama | 0.2.0 | >=0.2,<0.3 | -| langchain-openai | 0.2.0 | >=0.2,<0.3 | -| langchain-pinecone | 0.2.0 | >=0.2,<0.3 | -| langchain-postgres | 0.0.13 | >=0.0.13,<0.1 | -| langchain-prompty | 0.1.0 | >=0.1,<0.2 | -| langchain-qdrant | 0.1.4 | >=0.1.4,<0.2 |
-| langchain-redis | 0.1.0 | >=0.1,<0.2 | -| langchain-sema4 | 0.2.0 | >=0.2,<0.3 | -| langchain-together | 0.2.0 | >=0.2,<0.3 | -| langchain-unstructured | 0.1.4 | >=0.1.4,<0.2 | -| langchain-upstage | 0.3.0 | >=0.3,<0.4 | -| langchain-voyageai | 0.2.0 | >=0.2,<0.3 | -| langchain-weaviate | 0.0.3 | >=0.0.3,<0.1 | +| langchain-ai21 | 0.2.0 | >=0.2,<0.3 | +| langchain-aws | 0.2.0 | >=0.2,<0.3 | +| langchain-anthropic | 0.2.0 | >=0.2,<0.3 | +| langchain-astradb | 0.4.1 | >=0.4.1,<0.5 | +| langchain-azure-dynamic-sessions | 0.2.0 | >=0.2,<0.3 | +| langchain-box | 0.2.0 | >=0.2,<0.3 | +| langchain-chroma | 0.1.4 | >=0.1.4,<0.2 | +| langchain-cohere | 0.3.0 | >=0.3,<0.4 | +| langchain-elasticsearch | 0.3.0 | >=0.3,<0.4 | +| langchain-exa | 0.2.0 | >=0.2,<0.3 | +| langchain-fireworks | 0.2.0 | >=0.2,<0.3 | +| langchain-groq | 0.2.0 | >=0.2,<0.3 | +| langchain-google-community | 2.0.0 | >=2,<3 | +| langchain-google-genai | 2.0.0 | >=2,<3 | +| langchain-google-vertexai | 2.0.0 | >=2,<3 | +| langchain-huggingface | 0.1.0 | >=0.1,<0.2 | +| langchain-ibm | 0.2.0 | >=0.2,<0.3 | +| langchain-milvus | 0.1.6 | >=0.1.6,<0.2 | +| langchain-mistralai | 0.2.0 | >=0.2,<0.3 | +| langchain-mongodb | 0.2.0 | >=0.2,<0.3 | +| langchain-nomic | 0.1.3 | >=0.1.3,<0.2 | +| langchain-ollama | 0.2.0 | >=0.2,<0.3 | +| langchain-openai | 0.2.0 | >=0.2,<0.3 | +| langchain-pinecone | 0.2.0 | >=0.2,<0.3 | +| langchain-postgres | 0.0.13 | >=0.0.13,<0.1 | +| langchain-prompty | 0.1.0 | >=0.1,<0.2 | +| langchain-qdrant | 0.1.4 | >=0.1.4,<0.2 | +| langchain-redis | 0.1.0 | >=0.1,<0.2 | +| langchain-sema4 | 0.2.0 | >=0.2,<0.3 | +| langchain-together | 0.2.0 | >=0.2,<0.3 | +| langchain-unstructured | 0.1.4 | >=0.1.4,<0.2 | +| langchain-upstage | 0.3.0 | >=0.3,<0.4 | +| langchain-voyageai | 0.2.0 | >=0.2,<0.3 | +| langchain-weaviate | 0.0.3 | >=0.0.3,<0.1 | Once you've updated to recent versions of the packages, you may need to address the following issues stemming from the internal switch from Pydantic v1 
to Pydantic v2:
diff --git a/docs/scripts/notebook_convert.py b/docs/scripts/notebook_convert.py
index 0830274459b..58ea11be860 100644
--- a/docs/scripts/notebook_convert.py
+++ b/docs/scripts/notebook_convert.py
@@ -31,6 +31,20 @@ class EscapePreprocessor(Preprocessor):
                 r"[\1](\2.md)",
                 cell.source,
             )
+
+        elif cell.cell_type == "code":
+            # escape ``` in code
+            cell.source = cell.source.replace("```", r"\`\`\`")
+            # escape ``` in output
+            if "outputs" in cell:
+                for output in cell["outputs"]:
+                    if "text" in output:
+                        output["text"] = output["text"].replace("```", r"\`\`\`")
+                    if "data" in output:
+                        for key, value in output["data"].items():
+                            if isinstance(value, str):
+                                output["data"][key] = value.replace("```", r"\`\`\`")
+
         return cell, resources
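The backtick-escaping hunk above can be exercised on its own: it rewrites every literal ``` in a code cell's source and outputs so the exported markdown does not open a stray fence. Here is a minimal standalone sketch of that transformation; the helper names `escape_backticks` and `escape_code_cell` are hypothetical (the real logic lives inside the nbconvert `EscapePreprocessor`), but the string rewrite matches the diff.

```python
def escape_backticks(text: str) -> str:
    """Escape ``` so it cannot be parsed as a markdown code fence."""
    return text.replace("```", r"\`\`\`")


def escape_code_cell(cell: dict) -> dict:
    """Apply the escape to a code cell's source and all of its outputs,
    mirroring the branch added to EscapePreprocessor.preprocess_cell."""
    cell["source"] = escape_backticks(cell["source"])
    for output in cell.get("outputs", []):
        # stream outputs carry plain text
        if "text" in output:
            output["text"] = escape_backticks(output["text"])
        # display/execute_result outputs carry a mimetype -> value mapping
        for key, value in output.get("data", {}).items():
            if isinstance(value, str):
                output["data"][key] = escape_backticks(value)
    return cell


cell = {
    "cell_type": "code",
    "source": 'print("```python")',
    "outputs": [{"text": "```python\n"}],
}
escaped = escape_code_cell(cell)
print(escaped["source"])  # print("\`\`\`python")
```

Escaping both the source and the outputs matters because notebook outputs are rendered into the same `.mdx` file, where an unescaped triple backtick would silently terminate the enclosing fence.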