diff --git a/docs/docs/how_to/index.mdx b/docs/docs/how_to/index.mdx
index 673b71c18d5..1a60c637e5e 100644
--- a/docs/docs/how_to/index.mdx
+++ b/docs/docs/how_to/index.mdx
@@ -47,7 +47,7 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
-- [How to: use chat model to call tools](/docs/how_to/tool_calling)
+- [How to: use chat models to call tools](/docs/how_to/tool_calling)
- [How to: stream tool calls](/docs/how_to/tool_streaming)
- [How to: handle rate limits](/docs/how_to/chat_model_rate_limiting)
-- [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
+- [How to: few-shot prompt tool behavior](/docs/how_to/tools_few_shot)
- [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific)
- [How to: force a specific tool call](/docs/how_to/tool_choice)
- [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
@@ -64,8 +64,8 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
[Prompt Templates](/docs/concepts/prompt_templates) are responsible for formatting user input into a format that can be passed to a language model.
-- [How to: use few shot examples](/docs/how_to/few_shot_examples)
-- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
+- [How to: use few-shot examples](/docs/how_to/few_shot_examples)
+- [How to: use few-shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
- [How to: partially format prompt templates](/docs/how_to/prompts_partial)
- [How to: compose prompts together](/docs/how_to/prompts_composition)
- [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/)
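To ground the prompt-template guides listed above, here is a minimal sketch of a prompt template in action; the system message and question are illustrative placeholders:

```python
from langchain_core.prompts import ChatPromptTemplate

# A prompt template turns raw user input into a list of chat messages.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    ("human", "{question}"),
])

messages = prompt.invoke({"question": "What is a prompt template?"}).to_messages()
for message in messages:
    print(type(message).__name__, ":", message.content)
```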
@@ -168,7 +168,7 @@ See [supported integrations](/docs/integrations/vectorstores/) for details on ge
-Indexing is the process of keeping your vectorstore in-sync with the underlying data source.
+Indexing is the process of keeping your vectorstore in sync with the underlying data source.
-- [How to: reindex data to keep your vectorstore in-sync with the underlying data source](/docs/how_to/indexing)
+- [How to: reindex data to keep your vectorstore in sync with the underlying data source](/docs/how_to/indexing)
### Tools
@@ -178,7 +178,7 @@ LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pa
- [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin)
- [How to: use chat models to call tools](/docs/how_to/tool_calling)
- [How to: pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model)
-- [How to: pass run time values to tools](/docs/how_to/tool_runtime)
+- [How to: pass runtime values to tools](/docs/how_to/tool_runtime)
- [How to: add a human-in-the-loop for tools](/docs/how_to/tools_human)
- [How to: handle tool errors](/docs/how_to/tools_error)
- [How to: force models to call a tool](/docs/how_to/tool_choice)
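To make the tool guides above concrete, here is a minimal sketch of a custom tool built with the `@tool` decorator; the `multiply` function is purely illustrative:

```python
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# The name, description, and argument schema are what a chat model sees.
print(multiply.name, "-", multiply.description)
print(multiply.args)
print(multiply.invoke({"a": 3, "b": 4}))  # 12
```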
@@ -297,7 +297,7 @@ For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/).
You can use an LLM to do question answering over graph databases.
For a high-level tutorial, check out [this guide](/docs/tutorials/graph/).
-- [How to: add a semantic layer over the database](/docs/how_to/graph_semantic)
+- [How to: add a semantic layer over a database](/docs/how_to/graph_semantic)
- [How to: construct knowledge graphs](/docs/how_to/graph_constructing)
### Summarization
diff --git a/docs/docs/integrations/chat/ollama.ipynb b/docs/docs/integrations/chat/ollama.ipynb
index c9441b46c51..52f93087a72 100644
--- a/docs/docs/integrations/chat/ollama.ipynb
+++ b/docs/docs/integrations/chat/ollama.ipynb
@@ -17,7 +17,7 @@
"source": [
"# ChatOllama\n",
"\n",
- "[Ollama](https://ollama.com/) allows you to run open-source large language models, such as `got-oss`, locally.\n",
+ "[Ollama](https://ollama.com/) allows you to run open-source large language models, such as `gpt-oss`, locally.\n",
"\n",
"`ollama` bundles model weights, configuration, and data into a single package, defined by a Modelfile.\n",
"\n",
diff --git a/docs/docusaurus.config.js b/docs/docusaurus.config.js
index c076c78024e..700338bc495 100644
--- a/docs/docusaurus.config.js
+++ b/docs/docusaurus.config.js
@@ -142,8 +142,7 @@ const config = {
respectPrefersColorScheme: true,
},
announcementBar: {
- content:
- 'Our Building Ambient Agents with LangGraph course is now available on LangChain Academy!',
+      content: "Our new LangChain Academy course, Deep Research with LangGraph, is now live! Enroll for free.",
backgroundColor: "#d0c9fe",
},
prism: {
diff --git a/libs/core/README.md b/libs/core/README.md
index fb09de946ec..50b9ebf27aa 100644
--- a/libs/core/README.md
+++ b/libs/core/README.md
@@ -21,13 +21,13 @@ For full documentation see the [API reference](https://python.langchain.com/api_
## 1️⃣ Core Interface: Runnables
-The concept of a Runnable is central to LangChain Core – it is the interface that most LangChain Core components implement, giving them
+The concept of a `Runnable` is central to LangChain Core – it is the interface that most LangChain Core components implement, giving them
-- a common invocation interface (invoke, batch, stream, etc.)
+- a common invocation interface (`invoke()`, `batch()`, `stream()`, etc.)
- built-in utilities for retries, fallbacks, schemas and runtime configurability
-- easy deployment with [LangServe](https://github.com/langchain-ai/langserve)
+- easy deployment with [LangGraph](https://github.com/langchain-ai/langgraph)
-For more check out the [runnable docs](https://python.langchain.com/docs/expression_language/interface). Examples of components that implement the interface include: LLMs, Chat Models, Prompts, Retrievers, Tools, Output Parsers.
+For more, check out the [runnable docs](https://python.langchain.com/docs/concepts/runnables/). Examples of components that implement the interface include LLMs, Chat Models, Prompts, Retrievers, Tools, and Output Parsers.
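As a minimal illustration of that shared interface, the sketch below wraps a plain function in `RunnableLambda`; the doubling function is just a stand-in, but the same `invoke()`, `batch()`, and `stream()` calls work on any `Runnable`:

```python
from langchain_core.runnables import RunnableLambda

# Any callable can be wrapped as a Runnable; chat models, prompts, retrievers,
# tools, and output parsers expose this same interface.
doubler = RunnableLambda(lambda x: x * 2)

print(doubler.invoke(3))         # 6
print(doubler.batch([1, 2, 3]))  # [2, 4, 6]
for chunk in doubler.stream(4):  # a single chunk for a non-streaming function
    print(chunk)                 # 8
```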
You can use LangChain Core objects in two ways:
@@ -51,7 +51,7 @@ LangChain Expression Language (LCEL) is a _declarative language_ for composing L
LangChain Core compiles LCEL sequences to an _optimized execution plan_, with automatic parallelization, streaming, tracing, and async support.
-For more check out the [LCEL docs](https://python.langchain.com/docs/expression_language/).
+For more, check out the [LCEL docs](https://python.langchain.com/docs/concepts/lcel/).
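Here is a small sketch of LCEL composition, using stand-in lambdas in place of real prompt and model components so it runs on its own:

```python
from langchain_core.runnables import RunnableLambda, RunnableParallel

# RunnableParallel fans a single input out to several runnables concurrently.
analysis = RunnableParallel(
    upper=RunnableLambda(lambda s: s.upper()),
    length=RunnableLambda(lambda s: len(s)),
)

# Sequences are declared with the | operator.
chain = analysis | RunnableLambda(lambda d: f"{d['upper']} ({d['length']} chars)")

print(chain.invoke("hello lcel"))  # HELLO LCEL (10 chars)
# The same chain also exposes .batch(), .stream(), and async counterparts.
```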

@@ -59,8 +59,6 @@ For more advanced use cases, also check out [LangGraph](https://github.com/langc
## 📕 Releases & Versioning
-`langchain-core` is currently on version `0.1.x`.
-
As `langchain-core` contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception for this is anything in `langchain_core.beta`. The reason for `langchain_core.beta` is that given the rate of change of the field, being able to move quickly is still a priority, and this module is our attempt to do so.
Minor version increases will occur for:
diff --git a/libs/langchain/README.md b/libs/langchain/README.md
index ec2870ae6cf..a1fd0a5e7d7 100644
--- a/libs/langchain/README.md
+++ b/libs/langchain/README.md
@@ -3,28 +3,21 @@
⚡ Building applications with LLMs through composability ⚡
[](https://github.com/langchain-ai/langchain/releases)
-[](https://github.com/langchain-ai/langchain/actions/workflows/lint.yml)
-[](https://github.com/langchain-ai/langchain/actions/workflows/test.yml)
[](https://pepy.tech/project/langchain)
[](https://opensource.org/licenses/MIT)
[](https://twitter.com/langchainai)
[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[](https://codespaces.new/langchain-ai/langchain)
[](https://star-history.com/#langchain-ai/langchain)
-[](https://libraries.io/github/langchain-ai/langchain)
-[](https://github.com/langchain-ai/langchain/issues)
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
-Fill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team.
## Quick Install
`pip install langchain`
-or
-`pip install langsmith && conda install langchain -c conda-forge`
## 🤔 What is this?
@@ -34,22 +27,22 @@ This library aims to assist in the development of those types of applications. C
**❓ Question answering with RAG**
-- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
+- [Documentation](https://python.langchain.com/docs/tutorials/rag/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**🧱 Extracting structured output**
-- [Documentation](https://python.langchain.com/docs/use_cases/extraction/)
+- [Documentation](https://python.langchain.com/docs/tutorials/extraction/)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
-- [Documentation](https://python.langchain.com/docs/use_cases/chatbots)
+- [Documentation](https://python.langchain.com/docs/tutorials/chatbot/)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)
## 📖 Documentation
-Please see [here](https://python.langchain.com) for full documentation on:
+Please see [our full documentation](https://python.langchain.com) on:
- Getting started (installation, setting up the environment, simple examples)
- How-To examples (demos, integrations, helper functions)
@@ -79,7 +72,7 @@ Agents involve an LLM making decisions about which Actions to take, taking that
**🧐 Evaluation:**
-[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
+Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
For more information on these concepts, please see our [full documentation](https://python.langchain.com).
diff --git a/libs/langchain_v1/README.md b/libs/langchain_v1/README.md
index ec2870ae6cf..a1fd0a5e7d7 100644
--- a/libs/langchain_v1/README.md
+++ b/libs/langchain_v1/README.md
@@ -3,28 +3,21 @@
⚡ Building applications with LLMs through composability ⚡
[](https://github.com/langchain-ai/langchain/releases)
-[](https://github.com/langchain-ai/langchain/actions/workflows/lint.yml)
-[](https://github.com/langchain-ai/langchain/actions/workflows/test.yml)
[](https://pepy.tech/project/langchain)
[](https://opensource.org/licenses/MIT)
[](https://twitter.com/langchainai)
[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[](https://codespaces.new/langchain-ai/langchain)
[](https://star-history.com/#langchain-ai/langchain)
-[](https://libraries.io/github/langchain-ai/langchain)
-[](https://github.com/langchain-ai/langchain/issues)
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
-Fill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team.
## Quick Install
`pip install langchain`
-or
-`pip install langsmith && conda install langchain -c conda-forge`
## 🤔 What is this?
@@ -34,22 +27,22 @@ This library aims to assist in the development of those types of applications. C
**❓ Question answering with RAG**
-- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
+- [Documentation](https://python.langchain.com/docs/tutorials/rag/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**🧱 Extracting structured output**
-- [Documentation](https://python.langchain.com/docs/use_cases/extraction/)
+- [Documentation](https://python.langchain.com/docs/tutorials/extraction/)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
-- [Documentation](https://python.langchain.com/docs/use_cases/chatbots)
+- [Documentation](https://python.langchain.com/docs/tutorials/chatbot/)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)
## 📖 Documentation
-Please see [here](https://python.langchain.com) for full documentation on:
+Please see [our full documentation](https://python.langchain.com) on:
- Getting started (installation, setting up the environment, simple examples)
- How-To examples (demos, integrations, helper functions)
@@ -79,7 +72,7 @@ Agents involve an LLM making decisions about which Actions to take, taking that
**🧐 Evaluation:**
-[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
+Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
For more information on these concepts, please see our [full documentation](https://python.langchain.com).
diff --git a/libs/standard-tests/README.md b/libs/standard-tests/README.md
index fe34759e3ac..b30f9fdbf2c 100644
--- a/libs/standard-tests/README.md
+++ b/libs/standard-tests/README.md
@@ -18,12 +18,6 @@ Pip:
pip install -U langchain-tests
```
-Poetry:
-
-```bash
-poetry add langchain-tests
-```
-
uv:
```bash
diff --git a/libs/text-splitters/README.md b/libs/text-splitters/README.md
index fbbfc34f5d6..4c1a04785ae 100644
--- a/libs/text-splitters/README.md
+++ b/libs/text-splitters/README.md
@@ -14,12 +14,10 @@ pip install langchain-text-splitters
-LangChain Text Splitters contains utilities for splitting into chunks a wide variety of text documents.
+LangChain Text Splitters contains utilities for splitting a wide variety of text documents into chunks.
For full documentation see the [API reference](https://python.langchain.com/api_reference/text_splitters/index.html)
-and the [Text Splitters](https://python.langchain.com/docs/modules/data_connection/document_transformers/) module in the main docs.
+and the [Text Splitters](https://python.langchain.com/docs/how_to/#text-splitters) module in the main docs.
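For a quick orientation, here is a minimal sketch using `RecursiveCharacterTextSplitter`, one of the bundled splitters; the sample text and chunk sizes are arbitrary:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

text = "LangChain Text Splitters break long documents into smaller chunks. " * 20

# Tries paragraph, sentence, and word boundaries before falling back to characters.
splitter = RecursiveCharacterTextSplitter(chunk_size=120, chunk_overlap=20)
chunks = splitter.split_text(text)

print(len(chunks), "chunks; first chunk:", repr(chunks[0]))
```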
## 📕 Releases & Versioning
-`langchain-text-splitters` is currently on version `0.0.x`.
-
Minor version increases will occur for:
- Breaking changes for any public interfaces NOT marked `beta`