diff --git a/docs/docs/concepts/chat_models.mdx b/docs/docs/concepts/chat_models.mdx index 1c264fcac16..966d89acd5b 100644 --- a/docs/docs/concepts/chat_models.mdx +++ b/docs/docs/concepts/chat_models.mdx @@ -152,7 +152,7 @@ A semantic cache introduces a dependency on another model on the critical path o However, there might be situations where caching chat model responses is beneficial. For example, if you have a chat model that is used to answer frequently asked questions, caching responses can help reduce the load on the model provider and improve response times. -Please see the [how to cache chat model responses](/docs/how_to/#chat-model-caching) guide for more details. +Please see the [how to cache chat model responses](/docs/how_to/chat_model_caching/) guide for more details. ## Related resources @@ -165,4 +165,4 @@ Please see the [how to cache chat model responses](/docs/how_to/#chat-model-cach * [Tool calling](/docs/concepts#tool-calling) * [Multimodality](/docs/concepts/multimodality) * [Structured outputs](/docs/concepts#structured_output) -* [Tokens](/docs/concepts/tokens) \ No newline at end of file +* [Tokens](/docs/concepts/tokens) diff --git a/docs/docs/concepts/runnables.mdx b/docs/docs/concepts/runnables.mdx index 4a383e623a3..896f38b9159 100644 --- a/docs/docs/concepts/runnables.mdx +++ b/docs/docs/concepts/runnables.mdx @@ -15,7 +15,7 @@ This guide covers the main concepts and methods of the Runnable interface, which The Runnable way defines a standard interface that allows a Runnable component to be: * [Invoked](/docs/how_to/lcel_cheatsheet/#invoke-a-runnable): A single input is transformed into an output. -* [Batched](/docs/how_to/lcel_cheatsheet/#batch-a-runnable/): Multiple inputs are efficiently transformed into outputs. +* [Batched](/docs/how_to/lcel_cheatsheet/#batch-a-runnable): Multiple inputs are efficiently transformed into outputs. 
* [Streamed](/docs/how_to/lcel_cheatsheet/#stream-a-runnable): Outputs are streamed as they are produced. * Inspected: Schematic information about Runnable's input, output, and configuration can be accessed. * Composed: Multiple Runnables can be composed to work together using [the LangChain Expression Language (LCEL)](/docs/concepts/lcel) to create complex pipelines. diff --git a/docs/docs/concepts/tools.mdx b/docs/docs/concepts/tools.mdx index 5c079808bd6..5b820b5f2e7 100644 --- a/docs/docs/concepts/tools.mdx +++ b/docs/docs/concepts/tools.mdx @@ -141,7 +141,7 @@ See [how to pass run time values to tools](/docs/how_to/tool_runtime/) for more You can use the `RunnableConfig` object to pass custom run time values to tools. -If you need to access the [RunnableConfig](/docs/concepts/runnables/#RunnableConfig) object from within a tool. This can be done by using the `RunnableConfig` annotation in the tool's function signature. +If you need to access the [RunnableConfig](/docs/concepts/runnables/#runnableconfig) object from within a tool, you can do so by using the `RunnableConfig` annotation in the tool's function signature. ```python from langchain_core.runnables import RunnableConfig diff --git a/docs/docs/concepts/vectorstores.mdx b/docs/docs/concepts/vectorstores.mdx index a42ccf45a41..aa5bcae7a2c 100644 --- a/docs/docs/concepts/vectorstores.mdx +++ b/docs/docs/concepts/vectorstores.mdx @@ -186,6 +186,6 @@ See this [how-to guide on hybrid search](/docs/how_to/hybrid/) for more details. | Name | When to use | Description | |-------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------| | [Hybrid search](/docs/integrations/retrievers/pinecone_hybrid_search/) | When combining keyword-based and semantic similarity. 
| Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. [Paper](https://arxiv.org/abs/2210.11934). | -| [Maximal Marginal Relevance (MMR)](/docs/integrations/vectorstores/pinecone/#maximal-marginal-relevance-searches) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. | +| [Maximal Marginal Relevance (MMR)](https://python.langchain.com/api_reference/pinecone/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html#langchain_pinecone.vectorstores.PineconeVectorStore.max_marginal_relevance_search) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. | diff --git a/docs/docs/how_to/agent_executor.ipynb b/docs/docs/how_to/agent_executor.ipynb index 647e4c6a117..1c357632630 100644 --- a/docs/docs/how_to/agent_executor.ipynb +++ b/docs/docs/how_to/agent_executor.ipynb @@ -18,7 +18,7 @@ "# Build an Agent with AgentExecutor (Legacy)\n", "\n", ":::important\n", - "This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/docs/concepts/#langgraph) or the [migration guide](/docs/how_to/migrate_agent/)\n", + "This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. 
For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/docs/concepts/architecture/#langgraph) or the [migration guide](/docs/how_to/migrate_agent/)\n", ":::\n", "\n", "By themselves, language models can't take actions - they just output text.\n", @@ -802,7 +802,7 @@ "That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! \n", "\n", ":::important\n", - "This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/#langgraph)\n", + "This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph](/docs/concepts/architecture/#langgraph)\n", ":::\n", "\n", "If you want to continue using LangChain agents, some good advanced guides are:\n", diff --git a/docs/docs/how_to/qa_chat_history_how_to.ipynb b/docs/docs/how_to/qa_chat_history_how_to.ipynb index bf0dfb4a84d..d9c1aadb4cd 100644 --- a/docs/docs/how_to/qa_chat_history_how_to.ipynb +++ b/docs/docs/how_to/qa_chat_history_how_to.ipynb @@ -686,7 +686,7 @@ "source": [ "### Agent constructor\n", "\n", - "Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/#langgraph) to construct the agent. \n", + "Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/architecture/#langgraph) to construct the agent. 
\n", "Currently we are using a high level interface to construct the agent, but the nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API in case you want to modify the agent logic." ] }, diff --git a/docs/docs/how_to/structured_output.ipynb b/docs/docs/how_to/structured_output.ipynb index 3e333e340b1..e1d6f2b68e8 100644 --- a/docs/docs/how_to/structured_output.ipynb +++ b/docs/docs/how_to/structured_output.ipynb @@ -556,7 +556,7 @@ "id": "498d893b-ceaa-47ff-a9d8-4faa60702715", "metadata": {}, "source": [ - "For more on few shot prompting when using tool calling, see [here](/docs/how_to/function_calling/#Few-shot-prompting)." + "For more on few shot prompting when using tool calling, see [here](/docs/how_to/tools_few_shot/)." ] }, { diff --git a/docs/docs/integrations/chat/naver.ipynb b/docs/docs/integrations/chat/naver.ipynb index 64924cb5637..651eab58721 100644 --- a/docs/docs/integrations/chat/naver.ipynb +++ b/docs/docs/integrations/chat/naver.ipynb @@ -17,7 +17,7 @@ "source": [ "# ChatClovaX\n", "\n", - "This notebook provides a quick overview for getting started with Naver’s HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/#chat-models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html).\n", + "This notebook provides a quick overview for getting started with Naver’s HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/chat_models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html).\n", "\n", "[CLOVA Studio](http://clovastudio.ncloud.com/) has several chat models. 
You can find information about latest models and their costs, context windows, and supported input types in the CLOVA Studio API Guide [documentation](https://api.ncloud-docs.com/docs/clovastudio-chatcompletions).\n", "\n", diff --git a/docs/docs/integrations/chat/writer.ipynb b/docs/docs/integrations/chat/writer.ipynb index 455f8820ca9..a76752ef2f6 100644 --- a/docs/docs/integrations/chat/writer.ipynb +++ b/docs/docs/integrations/chat/writer.ipynb @@ -17,7 +17,7 @@ "source": [ "# ChatWriter\n", "\n", - "This notebook provides a quick overview for getting started with Writer [chat models](/docs/concepts/#chat-models).\n", + "This notebook provides a quick overview for getting started with Writer [chat models](/docs/concepts/chat_models).\n", "\n", "Writer has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Writer docs](https://dev.writer.com/home/models).\n", "\n", diff --git a/docs/docs/integrations/llms/sambastudio.ipynb b/docs/docs/integrations/llms/sambastudio.ipynb index dd977f37c95..a173842df8e 100644 --- a/docs/docs/integrations/llms/sambastudio.ipynb +++ b/docs/docs/integrations/llms/sambastudio.ipynb @@ -9,7 +9,7 @@ "**[SambaNova](https://sambanova.ai/)'s** [Sambastudio](https://sambanova.ai/technology/full-stack-ai-platform) is a platform that allows you to train, run batch inference jobs, and deploy online inference endpoints to run open source models that you fine tuned yourself.\n", "\n", ":::caution\n", - "You are currently on a page documenting the use of SambaStudio models as [text completion models](/docs/concepts/#llms). We recommend you to use the [chat completion models](/docs/concepts/#chat-models).\n", + "You are currently on a page documenting the use of SambaStudio models as [text completion models](/docs/concepts/text_llms). 
We recommend you use the [chat completion models](/docs/concepts/chat_models).\n", "\n", "You may be looking for [SambaStudio Chat Models](/docs/integrations/chat/sambastudio/) .\n", ":::\n", diff --git a/docs/docs/introduction.mdx b/docs/docs/introduction.mdx index 7f5afbee628..c9422417a84 100644 --- a/docs/docs/introduction.mdx +++ b/docs/docs/introduction.mdx @@ -9,7 +9,7 @@ sidebar_class_name: hidden LangChain simplifies every stage of the LLM application lifecycle: - **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts#langchain-expression-language-lcel), [components](/docs/concepts), and [third-party integrations](/docs/integrations/providers/). -Use [LangGraph](/docs/concepts/#langgraph) to build stateful agents with first-class streaming and human-in-the-loop support. +Use [LangGraph](/docs/concepts/architecture/#langgraph) to build stateful agents with first-class streaming and human-in-the-loop support. - **Productionization**: Use [LangSmith](https://docs.smith.langchain.com/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence. - **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/). diff --git a/docs/docs/troubleshooting/errors/INVALID_PROMPT_INPUT.mdx b/docs/docs/troubleshooting/errors/INVALID_PROMPT_INPUT.mdx index 74647e4b06b..5a25f24b094 100644 --- a/docs/docs/troubleshooting/errors/INVALID_PROMPT_INPUT.mdx +++ b/docs/docs/troubleshooting/errors/INVALID_PROMPT_INPUT.mdx @@ -8,7 +8,7 @@ The following may help resolve this error: - Double-check your prompt template to ensure that it is correct. 
- If you are using the default f-string format and you are using curly braces `{` anywhere in your template, they should be double escaped like this: `{{` (and if you want to render a double curly brace, you should use four curly braces: `{{{{`). -- If you are using a [`MessagesPlaceholder`](/docs/concepts/messages/#messagesplaceholder), make sure that you are passing in an array of messages or message-like objects. +- If you are using a [`MessagesPlaceholder`](/docs/concepts/prompt_templates/#messagesplaceholder), make sure that you are passing in an array of messages or message-like objects. - If you are using shorthand tuples to declare your prompt template, make sure that the variable name is wrapped in curly braces (`["placeholder", "{messages}"]`). - Try viewing the inputs into your prompt template using [LangSmith](https://docs.smith.langchain.com/) or log statements to confirm they appear as expected. - If you are pulling a prompt from the [LangChain Prompt Hub](https://smith.langchain.com/prompts), try pulling and logging it or running it in isolation with a sample input to confirm that it is what you expect. diff --git a/docs/docs/tutorials/agents.ipynb b/docs/docs/tutorials/agents.ipynb index 22d9a37bd91..2f1671e7c79 100644 --- a/docs/docs/tutorials/agents.ipynb +++ b/docs/docs/tutorials/agents.ipynb @@ -370,7 +370,7 @@ "source": [ "## Create the agent\n", "\n", - "Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/#langgraph) to construct the agent. \n", + "Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/architecture/#langgraph) to construct the agent. 
\n", "Currently, we are using a high level interface to construct the agent, but the nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API in case you want to modify the agent logic.\n" ] }, @@ -719,7 +719,7 @@ "We've also added in memory so you can have a conversation with them.\n", "Agents are a complex topic with lots to learn! \n", "\n", - "For more information on Agents, please check out the [LangGraph](/docs/concepts/#langgraph) documentation. This has it's own set of concepts, tutorials, and how-to guides." + "For more information on Agents, please check out the [LangGraph](/docs/concepts/architecture/#langgraph) documentation. This has its own set of concepts, tutorials, and how-to guides." ] }, { diff --git a/docs/docs/tutorials/llm_chain.ipynb b/docs/docs/tutorials/llm_chain.ipynb index 0a04d91876f..fea5ec33967 100644 --- a/docs/docs/tutorials/llm_chain.ipynb +++ b/docs/docs/tutorials/llm_chain.ipynb @@ -29,7 +29,7 @@ "\n", "- Debugging and tracing your application using [LangSmith](https://docs.smith.langchain.com/)\n", "\n", - "- Deploying your application with [LangServe](/docs/concepts/#langserve)\n", + "- Deploying your application with [LangServe](/docs/concepts/architecture/#langserve)\n", "\n", "Let's dive in!\n", "\n", diff --git a/docs/docs/tutorials/qa_chat_history.ipynb b/docs/docs/tutorials/qa_chat_history.ipynb index 8aff4c3a014..2752e9d5cfa 100644 --- a/docs/docs/tutorials/qa_chat_history.ipynb +++ b/docs/docs/tutorials/qa_chat_history.ipynb @@ -817,7 +817,7 @@ "source": [ "### Agent constructor\n", "\n", - "Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/#langgraph) to construct the agent. \n", + "Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/architecture/#langgraph) to construct the agent. 
\n", "Currently we are using a high level interface to construct the agent, but the nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API in case you want to modify the agent logic." ] }, diff --git a/docs/docs/tutorials/sql_qa.ipynb b/docs/docs/tutorials/sql_qa.ipynb index 18a8063f866..5c8042ae4a0 100644 --- a/docs/docs/tutorials/sql_qa.ipynb +++ b/docs/docs/tutorials/sql_qa.ipynb @@ -494,7 +494,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We will use a prebuilt [LangGraph](/docs/concepts/#langgraph) agent to build our agent" + "We will use a prebuilt [LangGraph](/docs/concepts/architecture/#langgraph) agent to build our agent." ] }, { diff --git a/docs/scripts/kv_store_feat_table.py b/docs/scripts/kv_store_feat_table.py index dd7fc1ca203..6730ec650d0 100644 --- a/docs/scripts/kv_store_feat_table.py +++ b/docs/scripts/kv_store_feat_table.py @@ -14,7 +14,7 @@ hide_table_of_contents: true # Key-value stores -[Key-value stores](/docs/concepts/#key-value-stores) are used by other LangChain components to store and retrieve data. +[Key-value stores](/docs/concepts/key_value_stores) are used by other LangChain components to store and retrieve data. :::info diff --git a/docs/scripts/tool_feat_table.py b/docs/scripts/tool_feat_table.py index 981fe2f7065..3bc00ce6af2 100644 --- a/docs/scripts/tool_feat_table.py +++ b/docs/scripts/tool_feat_table.py @@ -182,7 +182,7 @@ custom_edit_url: # Tools -[Tools](/docs/concepts/#tools) are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models. +[Tools](/docs/concepts/tools) are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models. A [toolkit](/docs/concepts#toolkits) is a collection of tools meant to be used together.