Mirror of https://github.com/hwchase17/langchain.git, synced 2025-07-01 02:43:37 +00:00

docs providers update (#18336)

Formatted pages into a consistent form. Added descriptions and links when needed.

This commit is contained in:
parent 68be5a7658
commit d43fa2eab1

@@ -1,35 +1,38 @@
# Activeloop Deep Lake

>[Activeloop Deep Lake](https://docs.activeloop.ai/) is a data lake for Deep Learning applications, allowing you to use it
> as a vector store.

## Why Deep Lake?

- More than just a (multi-modal) vector store. You can later use the dataset to fine-tune your own LLM models.
- Not only stores embeddings, but also the original data with automatic version control.
- Truly serverless. Doesn't require another service and can be used with major cloud providers (`AWS S3`, `GCS`, etc.)

`Activeloop Deep Lake` supports `SelfQuery Retrieval`:
[Activeloop Deep Lake Self Query Retrieval](/docs/integrations/retrievers/self_query/activeloop_deeplake_self_query)

## More Resources

1. [Ultimate Guide to LangChain & Deep Lake: Build ChatGPT to Answer Questions on Your Financial Data](https://www.activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/)
2. [Twitter the-algorithm codebase analysis with Deep Lake](https://github.com/langchain-ai/langchain/blob/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb)
3. Here are the [whitepaper](https://www.deeplake.ai/whitepaper) and [academic paper](https://arxiv.org/pdf/2209.10785.pdf) for Deep Lake
4. Here is a set of additional resources available for review: [Deep Lake](https://github.com/activeloopai/deeplake), [Get started](https://docs.activeloop.ai/getting-started) and [Tutorials](https://docs.activeloop.ai/hub-tutorials)

## Installation and Setup

Install the Python package:

```bash
pip install deeplake
```

## VectorStore

```python
from langchain_community.vectorstores import DeepLake
```

See a [usage example](/docs/integrations/vectorstores/activeloop_deeplake).

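Below is a minimal sketch of using `DeepLake` as a local vector store. It assumes an OpenAI embedding model (`langchain-openai` installed, `OPENAI_API_KEY` set) and an illustrative `./my_deeplake/` dataset path.

```python
from langchain_community.vectorstores import DeepLake
from langchain_openai import OpenAIEmbeddings

# Build a local Deep Lake dataset from a few example texts.
db = DeepLake.from_texts(
    [
        "Deep Lake stores embeddings together with the original data",
        "Datasets are versioned and can later be used for fine-tuning",
    ],
    embedding=OpenAIEmbeddings(),
    dataset_path="./my_deeplake/",  # illustrative local path; hub:// and s3:// paths also work
)

# Retrieve the most similar document for a query.
docs = db.similarity_search("What does Deep Lake store?", k=1)
print(docs[0].page_content)
```
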
@@ -1,16 +1,42 @@
# AI21 Labs

>[AI21 Labs](https://www.ai21.com/about) is a company specializing in Natural
> Language Processing (NLP), which develops AI systems
> that can understand and generate natural language.

This page covers how to use the `AI21` ecosystem within `LangChain`.

## Installation and Setup

- Get an AI21 API key and set it as an environment variable (`AI21_API_KEY`)
- Install the Python package:

```bash
pip install langchain-ai21
```

## LLMs

See a [usage example](/docs/integrations/llms/ai21).

```python
from langchain_community.llms import AI21
```

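As a minimal sketch, assuming `AI21_API_KEY` is set in the environment and using the wrapper's default model:

```python
from langchain_community.llms import AI21

llm = AI21()  # reads AI21_API_KEY from the environment
print(llm.invoke("Summarize what a vector store is in one sentence."))
```
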
## Chat models

See a [usage example](/docs/integrations/chat/ai21).

```python
from langchain_ai21 import ChatAI21
```

## Embedding models

See a [usage example](/docs/integrations/text_embedding/ai21).

```python
from langchain_ai21 import AI21Embeddings
```

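A minimal sketch, again assuming `AI21_API_KEY` is set in the environment:

```python
from langchain_ai21 import AI21Embeddings

embeddings = AI21Embeddings()
vector = embeddings.embed_query("What is retrieval-augmented generation?")
print(len(vector))  # dimensionality of the returned embedding
```
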
@@ -1,15 +1,31 @@
# AnalyticDB

>[AnalyticDB for PostgreSQL](https://www.alibabacloud.com/help/en/analyticdb-for-postgresql/latest/product-introduction-overview)
> is a massively parallel processing (MPP) data warehousing service
> from [Alibaba Cloud](https://www.alibabacloud.com/)
> that is designed to analyze large volumes of data online.

>`AnalyticDB for PostgreSQL` is developed based on the open-source `Greenplum Database`
> project and is enhanced with in-depth extensions by `Alibaba Cloud`. AnalyticDB
> for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and
> Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and
> column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a
> high performance level and supports highly concurrent workloads.

This page covers how to use the AnalyticDB ecosystem within LangChain.

## Installation and Setup

You need to install the `sqlalchemy` Python package.

```bash
pip install sqlalchemy
```

## VectorStore

See a [usage example](/docs/integrations/vectorstores/analyticdb).

```python
from langchain_community.vectorstores import AnalyticDB
```

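A minimal sketch of building the vector store, assuming an OpenAI embedding model and a hypothetical PostgreSQL-style connection string for your AnalyticDB instance:

```python
from langchain_community.vectorstores import AnalyticDB
from langchain_openai import OpenAIEmbeddings

# Hypothetical connection details; replace with your AnalyticDB for PostgreSQL instance.
# Requires a PostgreSQL driver such as psycopg2 to be installed.
connection_string = "postgresql+psycopg2://user:password@your-instance-host:5432/your_database"

db = AnalyticDB.from_texts(
    ["AnalyticDB can serve as a vector store for LangChain"],
    embedding=OpenAIEmbeddings(),
    connection_string=connection_string,
)
print(db.similarity_search("vector store", k=1))
```
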
@@ -1,8 +1,11 @@
# Annoy

> [Annoy](https://github.com/spotify/annoy) (`Approximate Nearest Neighbors Oh Yeah`)
> is a C++ library with Python bindings to search for points in space that are
> close to a given query point. It also creates large read-only file-based data
> structures that are mapped into memory so that many processes may share the same data.

## Installation and Setup

```bash
pip install annoy
```

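A minimal sketch of the `Annoy` vector store in `langchain_community`, assuming an OpenAI embedding model:

```python
from langchain_community.vectorstores import Annoy
from langchain_openai import OpenAIEmbeddings

texts = [
    "Annoy builds static, file-based indexes",
    "It performs approximate nearest-neighbor search",
]

# Annoy indexes are read-only once built, so add all texts up front.
vector_store = Annoy.from_texts(texts, OpenAIEmbeddings())
print(vector_store.similarity_search("approximate nearest neighbors", k=1))
```
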
@@ -3,11 +3,12 @@
>[Apache Doris](https://doris.apache.org/) is a modern data warehouse for real-time analytics.
It delivers lightning-fast analytics on real-time data at scale.

>Usually `Apache Doris` is categorized into OLAP, and it has shown excellent performance
> in [ClickBench — a Benchmark For Analytical DBMS](https://benchmark.clickhouse.com/).
> Since it has a super-fast vectorized execution engine, it could also be used as a fast vectordb.

## Installation and Setup

```bash
pip install pymysql
```

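A minimal sketch of using Doris as a vector store, assuming the `ApacheDoris` integration and an `ApacheDorisSettings` helper are available in `langchain_community`, and using hypothetical connection details for a local Doris frontend:

```python
from langchain_community.vectorstores import ApacheDoris
from langchain_community.vectorstores.apache_doris import ApacheDorisSettings
from langchain_openai import OpenAIEmbeddings

# Hypothetical connection settings for a locally running Doris FE node.
settings = ApacheDorisSettings(
    host="localhost",
    port=9030,
    username="root",
    password="",
    database="langchain",
)

db = ApacheDoris.from_texts(
    ["Apache Doris can double as a fast vector database"],
    OpenAIEmbeddings(),
    config=settings,
)
print(db.similarity_search("vector database", k=1))
```
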
@@ -1,16 +1,13 @@
# Apify

>[Apify](https://apify.com) is a cloud platform for web scraping and data extraction,
>which provides an [ecosystem](https://apify.com/store) of more than a thousand
>ready-made apps called *Actors* for various scraping, crawling, and extraction use cases.

[](https://apify.com/store)

This integration enables you to run Actors on the `Apify` platform and load their results into LangChain to feed your vector
indexes with documents and data from the web, e.g. to generate answers from websites with documentation,
blogs, or knowledge bases.

@@ -22,9 +19,7 @@ blogs, or knowledge bases.
an environment variable (`APIFY_API_TOKEN`) or pass it to the `ApifyWrapper` as `apify_api_token` in the constructor.

## Utility

You can use the `ApifyWrapper` to run Actors on the Apify platform.

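A minimal sketch of running an Actor and loading its results, assuming `APIFY_API_TOKEN` is set and using the public `apify/website-content-crawler` Actor as an example:

```python
from langchain_community.utilities import ApifyWrapper
from langchain_core.documents import Document

apify = ApifyWrapper()

# Run an Actor and map each dataset item to a LangChain Document.
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://python.langchain.com/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
docs = loader.load()
```
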
@@ -35,7 +30,7 @@ from langchain_community.utilities import ApifyWrapper
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/apify).

## Document loader

You can also use our `ApifyDatasetLoader` to get data from an Apify dataset.

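A minimal sketch of loading an existing dataset, with a placeholder dataset ID:

```python
from langchain_community.document_loaders import ApifyDatasetLoader
from langchain_core.documents import Document

loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",  # placeholder; use the ID of an existing Apify dataset
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
docs = loader.load()
```
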
@@ -1,17 +1,19 @@
# ArangoDB

>[ArangoDB](https://github.com/arangodb/arangodb) is a scalable graph database system to
> drive value from connected data, faster. Native graphs, an integrated search engine, and JSON support, via a single query language. ArangoDB runs on-prem, in the cloud – anywhere.

## Installation and Setup

Install the [ArangoDB Python Driver](https://github.com/ArangoDB-Community/python-arango) package with

```bash
pip install python-arango
```

## Graph QA Chain

Connect your `ArangoDB` Database with a chat model to get insights on your data.

See the notebook example [here](/docs/use_cases/graph/graph_arangodb_qa).

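A minimal sketch of wiring the chain together, assuming a local ArangoDB instance with hypothetical credentials and an OpenAI chat model:

```python
from arango import ArangoClient
from langchain.chains import ArangoGraphQAChain
from langchain_community.graphs import ArangoGraph
from langchain_openai import ChatOpenAI

# Hypothetical local ArangoDB instance and credentials.
db = ArangoClient(hosts="http://localhost:8529").db(
    "_system", username="root", password="openSesame", verify=True
)
graph = ArangoGraph(db)

chain = ArangoGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)

# Ask a natural-language question about data already stored in the graph.
print(chain.invoke("Which collections exist in this graph and how are they connected?"))
```
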
@@ -11,31 +11,19 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ">[Arthur](https://arthur.ai) is a model monitoring and observability platform.\n",
    "\n",
    "The following guide shows how to run a registered chat LLM with the Arthur callback handler to automatically log model inferences to Arthur.\n",
    "\n",
    "If you do not have a model currently onboarded to Arthur, visit our [onboarding guide for generative text models](https://docs.arthur.ai/user-guide/walkthroughs/model-onboarding/generative_text_onboarding.html). For more information about how to use the `Arthur SDK`, visit our [docs](https://docs.arthur.ai/)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Installation and Setup\n",
    "\n",
    "Place Arthur credentials here"
   ]
  },
@@ -52,6 +40,27 @@
    "arthur_model_id = \"your-arthur-model-id-here\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Callback handler"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "id": "y8ku6X96sebl"
   },
   "outputs": [],
   "source": [
    "from langchain.callbacks import ArthurCallbackHandler\n",
    "from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
    "from langchain_core.messages import HumanMessage\n",
    "from langchain_openai import ChatOpenAI"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -191,9 +200,9 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "version": "3.10.12"
   }
  },
 "nbformat": 4,
 "nbformat_minor": 4
}

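For context, a minimal sketch of how the imported `ArthurCallbackHandler` is typically attached to a chat model so that inferences are logged to Arthur. The URL, login, and model ID below are placeholders, and the exact setup may differ from the elided notebook cells:

```python
from langchain.callbacks import ArthurCallbackHandler
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# Placeholder Arthur credentials and model ID.
arthur_url = "https://app.arthur.ai"
arthur_login = "your-arthur-login-username-here"
arthur_model_id = "your-arthur-model-id-here"

# Attach the Arthur callback so every inference is logged to the Arthur platform.
chat_llm = ChatOpenAI(
    temperature=0.1,
    streaming=True,
    callbacks=[
        StreamingStdOutCallbackHandler(),
        ArthurCallbackHandler.from_credentials(
            arthur_model_id, arthur_url=arthur_url, arthur_login=arthur_login
        ),
    ],
)

chat_llm.invoke([HumanMessage(content="Hello from an Arthur-monitored chat model!")])
```
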