docs(docs): fix grammar, capitalization, and style issues across documentation (#32503)

**Changes made:**
- Fix 'Async programming with langchain' → 'Async programming with
LangChain'
- Fix 'Langchain asynchronous APIs' → 'LangChain asynchronous APIs'
- Fix 'How to: init any model' → 'How to: initialize any model'
- Fix 'async programming with Langchain' → 'async programming with
LangChain'
- Fix 'How to propagate callbacks constructor' → 'How to propagate
callbacks to the constructor'
- Fix 'How to add a semantic layer over graph database' → 'How to add a
semantic layer over a graph database'
- Fix 'Build a Question/Answering system' → 'Build a Question-Answering
system'

**Why is this change needed?**
- Improves documentation clarity and readability
- Maintains consistent LangChain branding throughout the docs
- Fixes grammar issues that could confuse users
- Follows proper documentation standards

**Files changed:**
- `docs/docs/concepts/async.mdx`
- `docs/docs/concepts/tools.mdx`
- `docs/docs/how_to/index.mdx`
- `docs/docs/how_to/callbacks_constructor.ipynb`
- `docs/docs/how_to/graph_semantic.ipynb`
- `docs/docs/tutorials/sql_qa.ipynb`

**Issue:** N/A (documentation improvements)

**Dependencies:** None

**Twitter handle:** https://x.com/mishraravibhush

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Authored by: mishraravibhushan
Date: 2025-08-11 23:02:28 +05:30
Committed by: GitHub
parent e5d0a4e4d6
commit 7db9e60601
3 changed files with 4 additions and 4 deletions

`docs/docs/concepts/async.mdx`

@@ -1,4 +1,4 @@
-# Async programming with langchain
+# Async programming with LangChain
 :::info Prerequisites
 * [Runnable interface](/docs/concepts/runnables)
@@ -12,7 +12,7 @@ You are expected to be familiar with asynchronous programming in Python before r
 This guide specifically focuses on what you need to know to work with LangChain in an asynchronous context, assuming that you are already familiar with asynchronous programming.
 :::
-## Langchain asynchronous APIs
+## LangChain asynchronous APIs
 Many LangChain APIs are designed to be asynchronous, allowing you to build efficient and responsive applications.
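
For context on the section retitled in this hunk: LangChain Runnables expose async counterparts (`ainvoke`, `abatch`, `astream`) of their sync methods. A minimal sketch, assuming `langchain-openai` is installed and an OpenAI API key is configured; the model name is an illustrative choice, not part of this change:

```python
import asyncio

from langchain_openai import ChatOpenAI


async def main() -> None:
    # Any Runnable exposes ainvoke/abatch/astream alongside the sync variants.
    model = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice
    reply = await model.ainvoke("Explain asynchronous programming in one sentence.")
    print(reply.content)


asyncio.run(main())
```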

`docs/docs/concepts/tools.mdx`

@@ -31,7 +31,7 @@ The key attributes that correspond to the tool's **schema**:
 The key methods to execute the function associated with the **tool**:
 - **invoke**: Invokes the tool with the given arguments.
-- **ainvoke**: Invokes the tool with the given arguments, asynchronously. Used for [async programming with Langchain](/docs/concepts/async).
+- **ainvoke**: Invokes the tool with the given arguments, asynchronously. Used for [async programming with LangChain](/docs/concepts/async).
 ## Create tools using the `@tool` decorator
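
As a quick illustration of the `invoke`/`ainvoke` methods listed in this hunk, here is a minimal sketch using the `@tool` decorator; the `multiply` function and its arguments are made up for the example:

```python
import asyncio

from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# Synchronous execution with a dict of arguments matching the tool's schema.
print(multiply.invoke({"a": 6, "b": 7}))

# Asynchronous execution; for a sync function this runs in a worker thread.
print(asyncio.run(multiply.ainvoke({"a": 6, "b": 7})))
```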

`docs/docs/how_to/index.mdx`

@@ -34,7 +34,7 @@ These are the core building blocks you can use when building applications.
 [Chat Models](/docs/concepts/chat_models) are newer forms of language models that take messages in and output a message.
 See [supported integrations](/docs/integrations/chat/) for details on getting started with chat models from a specific provider.
-- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
+- [How to: initialize any model in one line](/docs/how_to/chat_models_universal_init/)
 - [How to: work with local models](/docs/how_to/local_llms)
 - [How to: do function/tool calling](/docs/how_to/tool_calling)
 - [How to: get models to return structured output](/docs/how_to/structured_output)
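
The "initialize any model in one line" guide linked in this hunk refers to the `init_chat_model` helper. A minimal sketch, assuming `langchain-openai` is installed and an API key is configured; the model name and provider are illustrative choices:

```python
from langchain.chat_models import init_chat_model

# One helper initializes chat models across providers from a single entry point.
model = init_chat_model("gpt-4o-mini", model_provider="openai")  # illustrative choice
print(model.invoke("Hello!").content)
```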