docs: fix more broken links (#27806)

Fix some broken links
This commit is contained in:
Eugene Yurtsev 2024-10-31 15:46:39 -04:00 committed by GitHub
parent c572d663f9
commit 71f590de50
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
18 changed files with 21 additions and 21 deletions

View File

@@ -152,7 +152,7 @@ A semantic cache introduces a dependency on another model on the critical path o
 However, there might be situations where caching chat model responses is beneficial. For example, if you have a chat model that is used to answer frequently asked questions, caching responses can help reduce the load on the model provider and improve response times.
-Please see the [how to cache chat model responses](/docs/how_to/#chat-model-caching) guide for more details.
+Please see the [how to cache chat model responses](/docs/how_to/chat_model_caching/) guide for more details.
 ## Related resources
@@ -165,4 +165,4 @@ Please see the [how to cache chat model responses](/docs/how_to/#chat-model-cach
 * [Tool calling](/docs/concepts#tool-calling)
 * [Multimodality](/docs/concepts/multimodality)
 * [Structured outputs](/docs/concepts#structured_output)
 * [Tokens](/docs/concepts/tokens)
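The caching behavior described in the guide this hunk re-links can be sketched in plain Python. This is a toy stand-in for the idea only; `slow_model` and the dict cache are illustrative, not LangChain's cache API:

```python
import time

# Toy illustration of chat-model response caching: identical prompts hit an
# in-memory cache instead of re-querying the model provider.
def slow_model(prompt: str) -> str:
    time.sleep(0.01)  # simulate network latency to a model provider
    return f"answer to: {prompt}"

cache: dict[str, str] = {}

def cached_model(prompt: str) -> str:
    # Only call the underlying model on a cache miss.
    if prompt not in cache:
        cache[prompt] = slow_model(prompt)
    return cache[prompt]

first = cached_model("What is LangChain?")   # computed, then stored
second = cached_model("What is LangChain?")  # served from the cache
print(first == second)
```

As the concepts page notes, this trade-off only pays off when the same prompts recur, e.g. frequently asked questions.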

View File

@@ -15,7 +15,7 @@ This guide covers the main concepts and methods of the Runnable interface, which
 The Runnable way defines a standard interface that allows a Runnable component to be:
 * [Invoked](/docs/how_to/lcel_cheatsheet/#invoke-a-runnable): A single input is transformed into an output.
-* [Batched](/docs/how_to/lcel_cheatsheet/#batch-a-runnable/): Multiple inputs are efficiently transformed into outputs.
+* [Batched](/docs/how_to/lcel_cheatsheet/#batch-a-runnable): Multiple inputs are efficiently transformed into outputs.
 * [Streamed](/docs/how_to/lcel_cheatsheet/#stream-a-runnable): Outputs are streamed as they are produced.
 * Inspected: Schematic information about Runnable's input, output, and configuration can be accessed.
 * Composed: Multiple Runnables can be composed to work together using [the LangChain Expression Language (LCEL)](/docs/concepts/lcel) to create complex pipelines.
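The invoke/batch/stream/compose contract this hunk touches can be caricatured with a toy class. This is a minimal sketch of the pattern only, not LangChain's actual `Runnable` implementation:

```python
from typing import Any, Callable, Iterator

class ToyRunnable:
    """Minimal stand-in for the Runnable contract: invoke, batch, stream, compose."""

    def __init__(self, fn: Callable[[Any], Any]):
        self.fn = fn

    def invoke(self, value: Any) -> Any:
        # Invoked: a single input is transformed into an output.
        return self.fn(value)

    def batch(self, values: list) -> list:
        # Batched: multiple inputs are transformed into outputs.
        return [self.invoke(v) for v in values]

    def stream(self, value: Any) -> Iterator[str]:
        # Streamed: yield the output in chunks as they are "produced".
        for chunk in str(self.invoke(value)):
            yield chunk

    def __or__(self, other: "ToyRunnable") -> "ToyRunnable":
        # Composed: pipe one runnable into the next, mimicking LCEL's `|`.
        return ToyRunnable(lambda v: other.invoke(self.invoke(v)))

double = ToyRunnable(lambda x: x * 2)
inc = ToyRunnable(lambda x: x + 1)
chain = double | inc
print(chain.invoke(3))      # 7
print(chain.batch([1, 2]))  # [3, 5]
```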

View File

@@ -141,7 +141,7 @@ See [how to pass run time values to tools](/docs/how_to/tool_runtime/) for more
 You can use the `RunnableConfig` object to pass custom run time values to tools.
-If you need to access the [RunnableConfig](/docs/concepts/runnables/#RunnableConfig) object from within a tool. This can be done by using the `RunnableConfig` annotation in the tool's function signature.
+If you need to access the [RunnableConfig](/docs/concepts/runnables/#runnableconfig) object from within a tool, this can be done by using the `RunnableConfig` annotation in the tool's function signature.
 ```python
 from langchain_core.runnables import RunnableConfig
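The mechanism behind that annotation (the excerpt above cuts the code block off after the import) can be imitated in plain Python: the caller inspects the tool's signature for a parameter annotated with the config type and injects the config itself. All names here (`RunnableConfigStub`, `call_tool`, `my_tool`) are illustrative, not LangChain internals:

```python
import inspect

class RunnableConfigStub(dict):
    """Illustrative stand-in for langchain_core's RunnableConfig (a typed dict)."""

def call_tool(fn, args: dict, config: RunnableConfigStub):
    # Imitate the injection: if a parameter is annotated with the config
    # type, the "framework" fills it in; the caller never passes it.
    params = inspect.signature(fn).parameters
    for name, param in params.items():
        if param.annotation is RunnableConfigStub:
            args = {**args, name: config}
    return fn(**args)

def my_tool(query: str, config: RunnableConfigStub) -> str:
    # The tool reads run-time values (e.g. a user id) from the config.
    return f"{query} for user {config['configurable']['user_id']}"

result = call_tool(
    my_tool,
    {"query": "search"},
    RunnableConfigStub(configurable={"user_id": "u42"}),
)
print(result)  # search for user u42
```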

View File

@@ -186,6 +186,6 @@ See this [how-to guide on hybrid search](/docs/how_to/hybrid/) for more details.
 | Name | When to use | Description |
 |-------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------|
 | [Hybrid search](/docs/integrations/retrievers/pinecone_hybrid_search/) | When combining keyword-based and semantic similarity. | Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. [Paper](https://arxiv.org/abs/2210.11934). |
-| [Maximal Marginal Relevance (MMR)](/docs/integrations/vectorstores/pinecone/#maximal-marginal-relevance-searches) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
+| [Maximal Marginal Relevance (MMR)](https://python.langchain.com/api_reference/pinecone/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html#langchain_pinecone.vectorstores.PineconeVectorStore.max_marginal_relevance_search) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
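The MMR technique the re-linked row describes is a short greedy algorithm: each pick balances relevance to the query against redundancy with documents already selected. A self-contained sketch (the similarity values below are made-up inputs, not real embeddings):

```python
def mmr(query_sim, doc_sims, k=2, lam=0.5):
    """Greedy Maximal Marginal Relevance selection (illustrative sketch).

    query_sim: list where query_sim[i] = similarity(query, doc_i)
    doc_sims:  matrix where doc_sims[i][j] = similarity(doc_i, doc_j)
    lam:       trade-off between relevance (1.0) and diversity (0.0)
    """
    selected = []
    candidates = list(range(len(query_sim)))
    while candidates and len(selected) < k:
        def score(i):
            # Penalize similarity to anything already selected.
            redundancy = max((doc_sims[i][j] for j in selected), default=0.0)
            return lam * query_sim[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Docs 0 and 1 are near-duplicates; MMR picks 0, then skips 1 in favor of 2.
query_sim = [0.9, 0.85, 0.6]
doc_sims = [
    [1.0, 0.95, 0.1],
    [0.95, 1.0, 0.1],
    [0.1, 0.1, 1.0],
]
print(mmr(query_sim, doc_sims, k=2))  # [0, 2]
```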

View File

@@ -18,7 +18,7 @@
 "# Build an Agent with AgentExecutor (Legacy)\n",
 "\n",
 ":::important\n",
-"This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/docs/concepts/#langgraph) or the [migration guide](/docs/how_to/migrate_agent/)\n",
+"This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/docs/concepts/architecture/#langgraph) or the [migration guide](/docs/how_to/migrate_agent/)\n",
 ":::\n",
 "\n",
 "By themselves, language models can't take actions - they just output text.\n",
@@ -802,7 +802,7 @@
 "That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! \n",
 "\n",
 ":::important\n",
-"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/#langgraph)\n",
+"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph](/docs/concepts/architecture/#langgraph)\n",
 ":::\n",
 "\n",
 "If you want to continue using LangChain agents, some good advanced guides are:\n",

View File

@@ -686,7 +686,7 @@
 "source": [
 "### Agent constructor\n",
 "\n",
-"Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/#langgraph) to construct the agent. \n",
+"Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/architecture/#langgraph) to construct the agent. \n",
 "Currently we are using a high level interface to construct the agent, but the nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API in case you want to modify the agent logic."
 ]
 },
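The high-level loop that a prebuilt LangGraph agent hides (model proposes a tool call, the runtime executes it, the result is fed back until the model answers) can be caricatured in plain Python. Every name here is an illustrative stand-in; nothing below is LangGraph's API:

```python
# `fake_model` stands in for an LLM: it requests a tool call the first time,
# then produces a final answer once a tool result is in the transcript.
def fake_model(messages: list) -> dict:
    if not any(m.get("role") == "tool" for m in messages):
        return {"tool": "search", "args": {"q": messages[0]["content"]}}
    tool_output = messages[-1]["content"]
    return {"final": f"Based on the search: {tool_output}"}

tools = {"search": lambda q: f"results for '{q}'"}

def run_agent(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        step = fake_model(messages)
        if "final" in step:
            return step["final"]
        # Execute the requested tool and append its result to the transcript.
        result = tools[step["tool"]](**step["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent("weather in SF"))
```

The point of the docs' remark about the low-level API is that this loop (which nodes run, what state is kept) is exactly what LangGraph lets you customize when the high-level interface is not enough.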

View File

@@ -556,7 +556,7 @@
 "id": "498d893b-ceaa-47ff-a9d8-4faa60702715",
 "metadata": {},
 "source": [
-"For more on few shot prompting when using tool calling, see [here](/docs/how_to/function_calling/#Few-shot-prompting)."
+"For more on few shot prompting when using tool calling, see [here](/docs/how_to/tools_few_shot/)."
 ]
 },
 {

View File

@@ -17,7 +17,7 @@
 "source": [
 "# ChatClovaX\n",
 "\n",
-"This notebook provides a quick overview for getting started with Navers HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/#chat-models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html).\n",
+"This notebook provides a quick overview for getting started with Naver's HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/chat_models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html).\n",
 "\n",
 "[CLOVA Studio](http://clovastudio.ncloud.com/) has several chat models. You can find information about latest models and their costs, context windows, and supported input types in the CLOVA Studio API Guide [documentation](https://api.ncloud-docs.com/docs/clovastudio-chatcompletions).\n",
 "\n",

View File

@@ -17,7 +17,7 @@
 "source": [
 "# ChatWriter\n",
 "\n",
-"This notebook provides a quick overview for getting started with Writer [chat models](/docs/concepts/#chat-models).\n",
+"This notebook provides a quick overview for getting started with Writer [chat models](/docs/concepts/chat_models).\n",
 "\n",
 "Writer has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Writer docs](https://dev.writer.com/home/models).\n",
 "\n",

View File

@@ -9,7 +9,7 @@
 "**[SambaNova](https://sambanova.ai/)'s** [Sambastudio](https://sambanova.ai/technology/full-stack-ai-platform) is a platform that allows you to train, run batch inference jobs, and deploy online inference endpoints to run open source models that you fine tuned yourself.\n",
 "\n",
 ":::caution\n",
-"You are currently on a page documenting the use of SambaStudio models as [text completion models](/docs/concepts/#llms). We recommend you to use the [chat completion models](/docs/concepts/#chat-models).\n",
+"You are currently on a page documenting the use of SambaStudio models as [text completion models](/docs/concepts/text_llms). We recommend you to use the [chat completion models](/docs/concepts/chat_models).\n",
 "\n",
 "You may be looking for [SambaStudio Chat Models](/docs/integrations/chat/sambastudio/) .\n",
 ":::\n",

View File

@@ -9,7 +9,7 @@ sidebar_class_name: hidden
 LangChain simplifies every stage of the LLM application lifecycle:
 - **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts#langchain-expression-language-lcel), [components](/docs/concepts), and [third-party integrations](/docs/integrations/providers/).
-Use [LangGraph](/docs/concepts/#langgraph) to build stateful agents with first-class streaming and human-in-the-loop support.
+Use [LangGraph](/docs/concepts/architecture/#langgraph) to build stateful agents with first-class streaming and human-in-the-loop support.
 - **Productionization**: Use [LangSmith](https://docs.smith.langchain.com/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
 - **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/).

View File

@@ -8,7 +8,7 @@ The following may help resolve this error:
 - Double-check your prompt template to ensure that it is correct.
 - If you are using the default f-string format and you are using curly braces `{` anywhere in your template, they should be double escaped like this: `{{` (and if you want to render a double curly brace, you should use four curly braces: `{{{{`).
-- If you are using a [`MessagesPlaceholder`](/docs/concepts/messages/#messagesplaceholder), make sure that you are passing in an array of messages or message-like objects.
+- If you are using a [`MessagesPlaceholder`](/docs/concepts/prompt_templates/#messagesplaceholder), make sure that you are passing in an array of messages or message-like objects.
 - If you are using shorthand tuples to declare your prompt template, make sure that the variable name is wrapped in curly braces (`["placeholder", "{messages}"]`).
 - Try viewing the inputs into your prompt template using [LangSmith](https://docs.smith.langchain.com/) or log statements to confirm they appear as expected.
 - If you are pulling a prompt from the [LangChain Prompt Hub](https://smith.langchain.com/prompts), try pulling and logging it or running it in isolation with a sample input to confirm that it is what you expect.
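The brace-escaping rule in the second bullet of this troubleshooting list follows directly from Python's `str.format` syntax, which the default f-string template format uses. A quick check (the template text is an illustrative example):

```python
# `{{` renders a single literal brace, `{{{{` renders two,
# and `{question}` remains a variable slot to be filled in.
template = 'Reply with JSON like {{"answer": "..."}} to: {question}'
print(template.format(question="What is 2+2?"))
# Reply with JSON like {"answer": "..."} to: What is 2+2?

assert "{{{{".format() == "{{"  # four braces render as two
```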

View File

@@ -370,7 +370,7 @@
 "source": [
 "## Create the agent\n",
 "\n",
-"Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/#langgraph) to construct the agent. \n",
+"Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/architecture/#langgraph) to construct the agent. \n",
 "Currently, we are using a high level interface to construct the agent, but the nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API in case you want to modify the agent logic.\n"
 ]
 },
@@ -719,7 +719,7 @@
 "We've also added in memory so you can have a conversation with them.\n",
 "Agents are a complex topic with lots to learn! \n",
 "\n",
-"For more information on Agents, please check out the [LangGraph](/docs/concepts/#langgraph) documentation. This has it's own set of concepts, tutorials, and how-to guides."
+"For more information on Agents, please check out the [LangGraph](/docs/concepts/architecture/#langgraph) documentation. This has its own set of concepts, tutorials, and how-to guides."
 ]
 },
 {

View File

@@ -29,7 +29,7 @@
 "\n",
 "- Debugging and tracing your application using [LangSmith](https://docs.smith.langchain.com/)\n",
 "\n",
-"- Deploying your application with [LangServe](/docs/concepts/#langserve)\n",
+"- Deploying your application with [LangServe](/docs/concepts/architecture/#langserve)\n",
 "\n",
 "Let's dive in!\n",
 "\n",

View File

@@ -817,7 +817,7 @@
 "source": [
 "### Agent constructor\n",
 "\n",
-"Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/#langgraph) to construct the agent. \n",
+"Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/architecture/#langgraph) to construct the agent. \n",
 "Currently we are using a high level interface to construct the agent, but the nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API in case you want to modify the agent logic."
 ]
 },

View File

@@ -494,7 +494,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We will use a prebuilt [LangGraph](/docs/concepts/#langgraph) agent to build our agent"
+"We will use a prebuilt [LangGraph](/docs/concepts/architecture/#langgraph) agent to build our agent"
 ]
 },
 {

View File

@@ -14,7 +14,7 @@ hide_table_of_contents: true
 # Key-value stores
-[Key-value stores](/docs/concepts/#key-value-stores) are used by other LangChain components to store and retrieve data.
+[Key-value stores](/docs/concepts/key_value_stores) are used by other LangChain components to store and retrieve data.
 :::info

View File

@@ -182,7 +182,7 @@ custom_edit_url:
 # Tools
-[Tools](/docs/concepts/#tools) are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models.
+[Tools](/docs/concepts/tools) are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models.
 A [toolkit](/docs/concepts#toolkits) is a collection of tools meant to be used together.