docs(docs): fix typos in documentation (#32661)

Minor typo fixes (not linked to any current open issues).
Author: Maitrey Talware
Authored: 2025-08-25 07:02:53 -07:00
Committed by: GitHub
Parent: 1819c73d10
Commit: 622337a297
8 changed files with 19 additions and 21 deletions

View File

@@ -58,7 +58,7 @@ applications.
To improve your LLM application development, pair LangChain with:
-- [LangSmith](http://www.langchain.com/langsmith) - Helpful for agent evals and
+- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and
observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain
visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can
@@ -67,8 +67,7 @@ framework. LangGraph offers customizable architecture, long-term memory, and
human-in-the-loop workflows — and is trusted in production by companies like LinkedIn,
Uber, Klarna, and GitLab.
- [LangGraph Platform](https://docs.langchain.com/langgraph-platform) - Deploy
-and scale agents effortlessly with a purpose-built deployment platform for long
-running, stateful workflows. Discover, reuse, configure, and share agents across
+and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across
teams — and iterate quickly with visual prototyping in
[LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
@@ -83,4 +82,4 @@ concepts behind the LangChain framework.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on
navigating base packages and integrations for LangChain.
-- [Chat LangChain](https://chat.langchain.com/): Ask questions & chat with our documentation
+- [Chat LangChain](https://chat.langchain.com/): Ask questions & chat with our documentation.

View File

@@ -4,9 +4,9 @@ LangChain has a large ecosystem of integrations with various external resources
## Best practices
-When building such applications developers should remember to follow good security practices:
+When building such applications, developers should remember to follow good security practices:
-* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc. as appropriate for your application.
+* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc., as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
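To make the least-privilege point concrete: a tool can be handed read-only credentials so that even a misused LLM cannot mutate data. Below is a minimal sketch using Python's standard `sqlite3` module; the database path and the query-handling helper are hypothetical.

```python
import sqlite3

# Open the database read-only (uri=True enables the mode=ro flag), so any
# INSERT/UPDATE/DELETE issued through this connection raises OperationalError.
conn = sqlite3.connect("file:app.db?mode=ro", uri=True)

def run_readonly_query(sql: str) -> list[tuple]:
    """Run a query with read-only credentials; writes fail at the DB layer."""
    return conn.execute(sql).fetchall()
```

Pairing read-only access with sandboxed execution is exactly the layered approach the defense-in-depth item above describes.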
@@ -67,8 +67,7 @@ All out of scope targets defined by huntr as well as:
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
for the security of their tools.
-* Code documented with security notices. This will be decided on a case by
-case basis, but likely will not be eligible for a bounty as the code is already
+* Code documented with security notices. This will be decided on a case-by-case basis, but likely will not be eligible for a bounty as the code is already
documented with guidelines for developers that should be followed for making their
application secure.
* Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).

View File

@@ -97,7 +97,7 @@ def skip_private_members(app, what, name, obj, skip, options):
    if hasattr(obj, "__doc__") and obj.__doc__ and ":private:" in obj.__doc__:
        return True
    if name == "__init__" and obj.__objclass__ is object:
-        # dont document default init
+        # don't document default init
        return True
    return None
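For orientation, a hook like this is wired up through Sphinx's `autodoc-skip-member` event in `conf.py`: returning `True` skips the member, `None` defers to autodoc's default. A minimal sketch of the registration, assuming the function above is in scope:

```python
def setup(app):
    # Sphinx calls setup() when conf.py loads; attach the skip logic so
    # autodoc consults it for every documented member.
    app.connect("autodoc-skip-member", skip_private_members)
```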

View File

@@ -88,7 +88,7 @@
"The following may help resolve this error:\n",
"\n",
"- Ensure that all inputs to chat models are an array of LangChain message classes or a supported message-like.\n",
" - Check that there is no stringification or other unexpected transformation occuring.\n",
" - Check that there is no stringification or other unexpected transformation occurring.\n",
"- Check the error's stack trace and add log or debugger statements."
]
},
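As the checklist above says, chat models expect a list of message objects (or supported message-like tuples/dicts), not a pre-stringified prompt. A small sketch of a well-formed input, using real `langchain_core` message classes:

```python
from langchain_core.messages import HumanMessage, SystemMessage

# A list of message objects is the expected input shape; passing
# str(messages) is exactly the accidental stringification to check for.
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is LangChain?"),
]
```

Any chat model's `invoke` accepts this list directly.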

View File

@@ -999,7 +999,7 @@ async def _astream_events_implementation_v2(
continue
# If it's the end event corresponding to the root runnable
-# we dont include the input in the event since it's guaranteed
+# we don't include the input in the event since it's guaranteed
# to be included in the first event.
if (
event["run_id"] == first_event_run_id

View File

@@ -14,7 +14,7 @@ pip install langchain-openai
## Chat model
-See a [usage example](http://python.langchain.com/docs/integrations/chat/openai).
+See a [usage example](https://python.langchain.com/docs/integrations/chat/openai).
```python
from langchain_openai import ChatOpenAI
@@ -26,11 +26,11 @@ If you are using a model hosted on `Azure`, you should use different wrapper for
from langchain_openai import AzureChatOpenAI
```
-For a more detailed walkthrough of the `Azure` wrapper, see [AzureChatOpenAI](http://python.langchain.com/docs/integrations/chat/azure_chat_openai)
+For a more detailed walkthrough of the `Azure` wrapper, see [AzureChatOpenAI](https://python.langchain.com/docs/integrations/chat/azure_chat_openai)
## Text Embedding Model
-See a [usage example](http://python.langchain.com/docs/integrations/text_embedding/openai)
+See a [usage example](https://python.langchain.com/docs/integrations/text_embedding/openai)
```python
from langchain_openai import OpenAIEmbeddings
@@ -46,7 +46,7 @@ For a more detailed walkthrough of the `Azure` wrapper, see [AzureOpenAIEmbeddin
## LLM (Legacy)
-LLM refers to the legacy text-completion models that preceded chat models. See a [usage example](http://python.langchain.com/docs/integrations/llms/openai).
+LLM refers to the legacy text-completion models that preceded chat models. See a [usage example](https://python.langchain.com/docs/integrations/llms/openai).
```python
from langchain_openai import OpenAI
@@ -58,4 +58,4 @@ If you are using a model hosted on `Azure`, you should use different wrapper for
from langchain_openai import AzureOpenAI
```
-For a more detailed walkthrough of the `Azure` wrapper, see [Azure OpenAI](http://python.langchain.com/docs/integrations/llms/azure_openai)
+For a more detailed walkthrough of the `Azure` wrapper, see [Azure OpenAI](https://python.langchain.com/docs/integrations/llms/azure_openai)
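Pulling the snippets above together, a minimal usage sketch; the model, deployment, and API-version strings are placeholders, and the usual `OPENAI_API_KEY` / Azure environment variables are assumed to be set:

```python
from langchain_openai import AzureChatOpenAI, ChatOpenAI, OpenAIEmbeddings

# Plain OpenAI chat model; the model name is a placeholder.
chat = ChatOpenAI(model="gpt-4o-mini")
reply = chat.invoke("Say hello in one word.")

# Embeddings from the same provider.
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vector = embeddings.embed_query("hello world")

# Azure-hosted variant; deployment name and API version are placeholders,
# and AZURE_OPENAI_ENDPOINT / AZURE_OPENAI_API_KEY are read from the env.
azure_chat = AzureChatOpenAI(
    azure_deployment="my-deployment",
    api_version="2024-02-01",
)
```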

View File

@@ -31,7 +31,7 @@ class DocumentIndexerTestSuite(ABC):
"""Get the index."""
def test_upsert_documents_has_no_ids(self, index: DocumentIndex) -> None:
"""Verify that there is not parameter called ids in upsert."""
"""Verify that there is no parameter called ids in upsert."""
signature = inspect.signature(index.upsert)
assert "ids" not in signature.parameters
@@ -67,7 +67,7 @@ class DocumentIndexerTestSuite(ABC):
)
def test_upsert_some_ids(self, index: DocumentIndex) -> None:
"""Test an upsert where some docs have ids and some dont."""
"""Test an upsert where some docs have ids and some don't."""
foo_uuid = str(uuid.UUID(int=7))
documents = [
Document(id=foo_uuid, page_content="foo", metadata={"id": 1}),
@@ -257,7 +257,7 @@ class AsyncDocumentIndexTestSuite(ABC):
)
async def test_upsert_some_ids(self, index: DocumentIndex) -> None:
"""Test an upsert where some docs have ids and some dont."""
"""Test an upsert where some docs have ids and some don't."""
foo_uuid = str(uuid.UUID(int=7))
documents = [
Document(id=foo_uuid, page_content="foo", metadata={"id": 1}),
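Restating what these tests pin down, as a sketch: ids travel on the `Document` objects rather than as a separate `upsert` parameter, and the index assigns ids to documents that lack them. The imports mirror the suite's; the `index` instance is assumed to come from a fixture:

```python
import inspect
import uuid

from langchain_core.documents import Document
from langchain_core.indexing import DocumentIndex

def exercise_upsert(index: DocumentIndex) -> None:
    # ids live on the documents, so upsert() must not take a separate `ids` kwarg.
    assert "ids" not in inspect.signature(index.upsert).parameters

    # Mixed case: one document with a fixed UUID, one whose id the index assigns.
    documents = [
        Document(id=str(uuid.UUID(int=7)), page_content="foo", metadata={"id": 1}),
        Document(page_content="bar", metadata={"id": 2}),
    ]
    index.upsert(documents)
```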

View File

@@ -414,7 +414,7 @@ class ExperimentalMarkdownSyntaxTextSplitter:
self._complete_chunk_doc()
# I don't see why `return_each_line` is a necessary feature of this splitter.
-# It's easy enough to to do outside of the class and the caller can have more
+# It's easy enough to do outside of the class and the caller can have more
# control over it.
if self.return_each_line:
return [
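Following the comment's own suggestion, here is what `return_each_line` might look like done outside the class; a sketch assuming `chunks` is the splitter's normal list of `Document` chunks:

```python
from langchain_core.documents import Document

def split_each_line(chunks: list[Document]) -> list[Document]:
    # Caller-side equivalent of `return_each_line`: every non-empty line
    # becomes its own Document, inheriting its chunk's metadata.
    return [
        Document(page_content=line, metadata=chunk.metadata)
        for chunk in chunks
        for line in chunk.page_content.split("\n")
        if line.strip()
    ]
```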