mv module integrations docs (#8101)
parent 8ea840432f
commit c8c8635dc9
@@ -51,7 +51,7 @@ Walkthroughs and best-practices for common end-to-end use cases, like:

Learn best practices for developing with LangChain.

### [Ecosystem](/docs/ecosystem/)

LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/ecosystem/integrations/) and [dependent repos](/docs/ecosystem/dependents.html).
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/) and [dependent repos](/docs/ecosystem/dependents).

### [Additional resources](/docs/additional_resources/)

Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/additional_resources/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).
@@ -19,7 +19,7 @@ This prompt can include things like:

2. Background context for the agent (useful for giving it more context on the types of tasks it's being asked to do)

3. Prompting strategies to invoke better reasoning (the most famous/widely used being [ReAct](https://arxiv.org/abs/2210.03629))

LangChain provides a few different agent types to get started.
LangChain provides a few different types of agents to get started.
Even then, you will likely want to customize those agents with parts (1) and (2).
For a full list of agent types see [agent types](/docs/modules/agents/agent_types/)
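As a rough sketch of how an agent ties these pieces together, the snippet below builds a ReAct-style agent with `initialize_agent`; the OpenAI model, the `llm-math` tool, and the sample question are illustrative assumptions (an `OPENAI_API_KEY` must be set).

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

# Illustrative choices: any LLM and any list of tools could be swapped in.
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)  # a built-in calculator tool

# ZERO_SHOT_REACT_DESCRIPTION is one of the ReAct-style agent types mentioned above.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("What is 2 raised to the 0.43 power?")
```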
@@ -3,8 +3,8 @@ sidebar_position: 3
---

# Toolkits

:::info
Head to [Integrations](/docs/integrations/toolkits/) for documentation on built-in toolkit integrations.
:::

Toolkits are collections of tools that are designed to be used together for specific tasks and have convenience loading methods.
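As a hedged sketch of that loading pattern, the example below uses the SQL toolkit; the SQLite URI and the OpenAI model are placeholder assumptions, and other toolkits follow the same `get_tools()` shape.

```python
from langchain.agents import AgentType, initialize_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

# Placeholder database URI; point this at a real database before running.
db = SQLDatabase.from_uri("sqlite:///example.db")
llm = OpenAI(temperature=0)

# A toolkit bundles related tools and exposes them through get_tools().
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = initialize_agent(
    toolkit.get_tools(),
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
```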
import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,2 +0,0 @@
label: 'How-to'
position: 0
@@ -3,6 +3,10 @@ sidebar_position: 2
---

# Tools

:::info
Head to [Integrations](/docs/integrations/tools/) for documentation on built-in tool integrations.
:::

Tools are interfaces that an agent can use to interact with the world.
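One minimal way to see that interface is the `@tool` decorator; the function below is a made-up example, not part of the library.

```python
from langchain.tools import tool

@tool
def get_word_length(word: str) -> str:
    """Return the number of characters in a word."""  # the docstring becomes the tool's description
    return str(len(word))

# A tool exposes a name, a description, and a run() method an agent can call.
print(get_word_length.name)
print(get_word_length.run("hello"))
```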
## Get started
@@ -1 +0,0 @@
label: 'Integrations'
@@ -1,2 +0,0 @@
label: 'How-to'
position: 0
@@ -3,6 +3,10 @@ sidebar_position: 5
---

# Callbacks

:::info
Head to [Integrations](/docs/integrations/callbacks/) for documentation on built-in callbacks integrations with 3rd-party tools.
:::

LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
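A minimal sketch of wiring a handler into a chain (the prompt and model are arbitrary, and an OpenAI key is assumed); `StdOutCallbackHandler` simply prints each event it receives.

```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = StdOutCallbackHandler()
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")

# Handlers passed here are notified of chain/LLM start, new tokens, end, errors, etc.
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.run(number=2)
```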
import GetStarted from "@snippets/modules/callbacks/get_started.mdx"
@@ -1 +0,0 @@
label: 'Integrations'
@@ -1,2 +0,0 @@
label: 'How-to'
position: 0
@@ -3,6 +3,10 @@ sidebar_position: 0
---

# Document loaders

:::info
Head to [Integrations](/docs/integrations/document_loaders/) for documentation on built-in document loader integrations with 3rd-party tools.
:::

Use document loaders to load data from a source as `Document`s. A `Document` is a piece of text
and associated metadata. For example, there are document loaders for loading a simple `.txt` file, for loading the text
contents of any web page, or even for loading a transcript of a YouTube video.
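For instance, a minimal sketch with the built-in `TextLoader` (the file path is hypothetical):

```python
from langchain.document_loaders import TextLoader

# Hypothetical local file; other sources have their own loader classes.
loader = TextLoader("./index.md")
docs = loader.load()  # a list of Document objects, each with page_content and metadata
```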
@@ -1 +0,0 @@
label: 'Integrations'
@@ -3,6 +3,10 @@ sidebar_position: 1
---

# Document transformers

:::info
Head to [Integrations](/docs/integrations/document_transformers/) for documentation on built-in document transformer integrations with 3rd-party tools.
:::

Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example
is splitting a long document into smaller chunks that can fit into your model's context window. LangChain
has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.
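A rough sketch of the splitting case (the file path and chunk settings are arbitrary assumptions):

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

docs = TextLoader("./index.md").load()  # hypothetical input file

# Chunk size and overlap picked purely for illustration.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)
```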
@@ -1,2 +0,0 @@
label: 'How-to'
position: 0
@@ -3,6 +3,10 @@ sidebar_position: 4
---

# Retrievers

:::info
Head to [Integrations](/docs/integrations/retrievers/) for documentation on built-in retriever integrations with 3rd-party tools.
:::

A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used
as the backbone of a retriever, but there are other types of retrievers as well.
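A minimal sketch with a vector store as the backbone (the toy texts, the FAISS store, and the OpenAI embeddings are illustrative assumptions):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Toy corpus for illustration only.
db = FAISS.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    OpenAIEmbeddings(),
)

# Any vector store can be exposed through the retriever interface.
retriever = db.as_retriever()
relevant_docs = retriever.get_relevant_documents("where did harrison work?")
```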
@@ -3,6 +3,10 @@ sidebar_position: 2
---

# Text embedding models

:::info
Head to [Integrations](/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers.
:::

The Embeddings class is designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc.) - this class is designed to provide a standard interface for all of them.

Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
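A short sketch of the two entry points, assuming OpenAI as the (interchangeable) provider and an `OPENAI_API_KEY` in the environment:

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings_model = OpenAIEmbeddings()

# embed_documents vectorizes a batch of texts; embed_query vectorizes a single query string.
doc_vectors = embeddings_model.embed_documents(["Hi there!", "Oh, hello!"])
query_vector = embeddings_model.embed_query("What was said in the conversation?")
```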
@@ -1 +0,0 @@
label: 'Integrations'
@@ -3,6 +3,10 @@ sidebar_position: 3
---

# Vector stores

:::info
Head to [Integrations](/docs/integrations/vectorstores/) for documentation on built-in integrations with 3rd-party vector stores.
:::

One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding
vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are
'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search
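A hedged end-to-end sketch (the source file is hypothetical, and FAISS plus OpenAI embeddings are arbitrary choices):

```python
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS

# Hypothetical source document.
raw_documents = TextLoader("./state_of_the_union.txt").load()
documents = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(raw_documents)

# Embed the chunks and store the vectors, then search by embedding the query.
db = FAISS.from_documents(documents, OpenAIEmbeddings())
results = db.similarity_search("What did the president say about the economy?")
```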
@@ -1 +0,0 @@
label: 'Integrations'
@@ -1,2 +0,0 @@
label: 'How-to'
position: 0
@@ -6,6 +6,10 @@ sidebar_position: 3

🚧 _Docs under construction_ 🚧

:::info
Head to [Integrations](/docs/integrations/memory/) for documentation on built-in memory integrations with 3rd-party tools.
:::

By default, Chains and Agents are stateless,
meaning that they treat each incoming query independently (like the underlying LLMs and chat models themselves).
In some applications, like chatbots, it is essential
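A minimal sketch of adding that statefulness with `ConversationBufferMemory` (the OpenAI model is an illustrative assumption):

```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

# The memory object stores prior turns and injects them into each new prompt.
conversation = ConversationChain(llm=OpenAI(), memory=ConversationBufferMemory())
conversation.run("Hi, my name is Bob.")
conversation.run("What is my name?")  # answered using the remembered first turn
```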
@@ -1 +0,0 @@
label: 'Integrations'
@@ -1,2 +0,0 @@
label: 'How-to'
position: 0
@@ -3,18 +3,16 @@ sidebar_position: 1
---

# Chat models

:::info
Head to [Integrations](/docs/integrations/chat/) for documentation on built-in integrations with chat model providers.
:::

Chat models are a variation on language models.
While chat models use language models under the hood, the interface they expose is a bit different.
Rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
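A small sketch of that message-based interface (the model choice and API key are assumptions):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI()  # assumes OPENAI_API_KEY is set

# Inputs and outputs are chat messages rather than raw strings.
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming."),
]
ai_message = chat(messages)  # returns an AIMessage
```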
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.

The following sections of documentation are provided:

- **How-to guides**: Walkthroughs of core functionality, like streaming, creating chat prompts, etc.

- **Integrations**: How to use different chat model providers (OpenAI, Anthropic, etc.).

## Get started

import GetStarted from "@snippets/modules/model_io/models/chat/get_started.mdx"
@@ -1 +0,0 @@
label: 'Integrations'
@@ -1,2 +0,0 @@
label: 'How-to'
position: 0
@@ -3,15 +3,13 @@ sidebar_position: 0
---

# LLMs

:::info
Head to [Integrations](/docs/integrations/llms/) for documentation on built-in integrations with LLM providers.
:::

Large Language Models (LLMs) are a core component of LangChain.
LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.

For more detailed documentation check out our:

- **How-to guides**: Walkthroughs of core functionality, like streaming, async, etc.

- **Integrations**: How to use different LLM providers (OpenAI, Anthropic, etc.)

## Get started

There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc.) - the `LLM` class is designed to provide a standard interface for all of them.
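A brief sketch of that standard interface (OpenAI is just one example provider, and an API key is assumed):

```python
from langchain.llms import OpenAI

llm = OpenAI()  # assumes OPENAI_API_KEY is set

# The LLM interface is "text in, text out".
joke = llm("Tell me a joke")
joke_again = llm.predict("Tell me a joke")  # equivalent call style
```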
@@ -1 +0,0 @@
label: 'Integrations'
File diff suppressed because it is too large.
docs/extras/integrations/callbacks/index.mdx (new file, 9 lines)
@@ -0,0 +1,9 @@
---
sidebar_position: 0
---

# Callbacks

import DocCardList from "@theme/DocCardList";

<DocCardList />
docs/extras/integrations/chat/index.mdx (new file, 9 lines)
@@ -0,0 +1,9 @@
---
sidebar_position: 0
---

# Chat models

import DocCardList from "@theme/DocCardList";

<DocCardList />
Some files were not shown because too many files have changed in this diff.