# docs: ecosystem/integrations update 2 (#5282)

#5219 - part 1

The second part of this update (the parts are independent of each other; there is no overlap):

- added diffbot.md
- updated confluence.ipynb; added confluence.md
- updated college_confidential.md
- updated openai.md
- added blackboard.md
- added bilibili.md
- added azure_blob_storage.md
- added azlyrics.md
- added aws_s3.md

## Who can review?

@hwchase17 @agola11 @vowelparrot @dev2049
This commit is contained in: parent ccb6238de1, commit a3598193a0.
**docs/integrations/aws_s3.md** (new file, +25 lines)

# AWS S3 Directory

>[Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html) is an object storage service.

>[AWS S3 Directory](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html)

>[AWS S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html)

## Installation and Setup

```bash
pip install boto3
```

## Document Loader

See a [usage example for S3DirectoryLoader](../modules/indexes/document_loaders/examples/aws_s3_directory.ipynb).

See a [usage example for S3FileLoader](../modules/indexes/document_loaders/examples/aws_s3_file.ipynb).

```python
from langchain.document_loaders import S3DirectoryLoader, S3FileLoader
```
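
A minimal usage sketch (the bucket name, prefix, and key below are placeholders; AWS credentials are picked up from the standard `boto3` configuration, e.g. environment variables or `~/.aws/credentials`):

```python
from langchain.document_loaders import S3DirectoryLoader, S3FileLoader

# Load every object stored under a prefix in a bucket (placeholder names).
directory_loader = S3DirectoryLoader("my-bucket", prefix="reports/")
directory_docs = directory_loader.load()

# Load a single object by bucket and key.
file_loader = S3FileLoader("my-bucket", "reports/2023-q1.txt")
file_docs = file_loader.load()
```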
**docs/integrations/azlyrics.md** (new file, +16 lines)

# AZLyrics

>[AZLyrics](https://www.azlyrics.com/) is a large, legal, ever-growing collection of lyrics.

## Installation and Setup

There isn't any special setup for it.

## Document Loader

See a [usage example](../modules/indexes/document_loaders/examples/azlyrics.ipynb).

```python
from langchain.document_loaders import AZLyricsLoader
```
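
A minimal usage sketch (the song URL is a placeholder; the loader scrapes an AZLyrics page into documents):

```python
from langchain.document_loaders import AZLyricsLoader

# Placeholder URL: any AZLyrics song page works the same way.
loader = AZLyricsLoader("https://www.azlyrics.com/lyrics/mileycyrus/flowers.html")
docs = loader.load()
```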
**docs/integrations/azure_blob_storage.md** (new file, +36 lines)

# Azure Blob Storage

>[Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction) is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.

>[Azure Files](https://learn.microsoft.com/en-us/azure/storage/files/storage-files-introduction) offers fully managed
> file shares in the cloud that are accessible via the industry standard Server Message Block (`SMB`) protocol,
> Network File System (`NFS`) protocol, and `Azure Files REST API`. `Azure Files` is based on `Azure Blob Storage`.

`Azure Blob Storage` is designed for:
- Serving images or documents directly to a browser.
- Storing files for distributed access.
- Streaming video and audio.
- Writing to log files.
- Storing data for backup and restore, disaster recovery, and archiving.
- Storing data for analysis by an on-premises or Azure-hosted service.

## Installation and Setup

```bash
pip install azure-storage-blob
```

## Document Loader

See a [usage example for the Azure Blob Storage](../modules/indexes/document_loaders/examples/azure_blob_storage_container.ipynb).

```python
from langchain.document_loaders import AzureBlobStorageContainerLoader
```

See a [usage example for the Azure Files](../modules/indexes/document_loaders/examples/azure_blob_storage_file.ipynb).

```python
from langchain.document_loaders import AzureBlobStorageFileLoader
```
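
A minimal usage sketch (the connection string, container, and blob names are placeholders):

```python
from langchain.document_loaders import (
    AzureBlobStorageContainerLoader,
    AzureBlobStorageFileLoader,
)

conn_str = "<your Azure Storage connection string>"  # placeholder

# Load every blob in a container, optionally restricted to a prefix.
container_loader = AzureBlobStorageContainerLoader(
    conn_str=conn_str, container="my-container", prefix="reports/"
)
container_docs = container_loader.load()

# Load a single blob by name.
file_loader = AzureBlobStorageFileLoader(
    conn_str=conn_str, container="my-container", blob_name="reports/2023-q1.pdf"
)
file_docs = file_loader.load()
```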
**docs/integrations/bilibili.md** (new file, +17 lines)

# BiliBili

>[Bilibili](https://www.bilibili.tv/) is one of the most beloved long-form video sites in China.

## Installation and Setup

```bash
pip install bilibili-api-python
```

## Document Loader

See a [usage example](../modules/indexes/document_loaders/examples/bilibili.ipynb).

```python
from langchain.document_loaders import BiliBiliLoader
```
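
A minimal usage sketch (the video URL is a placeholder; the loader takes a list of Bilibili video URLs and loads their transcripts):

```python
from langchain.document_loaders import BiliBiliLoader

# Placeholder URL: pass one or more Bilibili video URLs.
loader = BiliBiliLoader(["https://www.bilibili.com/video/BV1xt411o7Xu/"])
docs = loader.load()
```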
**docs/integrations/blackboard.md** (new file, +22 lines)

# Blackboard

>[Blackboard Learn](https://en.wikipedia.org/wiki/Blackboard_Learn) (previously the `Blackboard Learning Management System`)
> is a web-based virtual learning environment and learning management system developed by Blackboard Inc.
> The software features course management, customizable open architecture, and scalable design that allows
> integration with student information systems and authentication protocols. It may be installed on local servers,
> hosted by `Blackboard ASP Solutions`, or provided as Software as a Service hosted on Amazon Web Services.
> Its main purposes are stated to include the addition of online elements to courses traditionally delivered
> face-to-face and development of completely online courses with few or no face-to-face meetings.

## Installation and Setup

There isn't any special setup for it.

## Document Loader

See a [usage example](../modules/indexes/document_loaders/examples/blackboard.ipynb).

```python
from langchain.document_loaders import BlackboardLoader
```
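
A minimal usage sketch (the course URL and the `bbrouter` cookie value are placeholders; the loader authenticates with the `BbRouter` cookie taken from a logged-in browser session):

```python
from langchain.document_loaders import BlackboardLoader

# Placeholder course URL and session cookie value.
loader = BlackboardLoader(
    blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
    bbrouter="expires:12345...",
    load_all_recursively=True,
)
docs = loader.load()
```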
**docs/integrations/college_confidential.md** (new file, +16 lines)

# College Confidential

>[College Confidential](https://www.collegeconfidential.com/) gives information on 3,800+ colleges and universities.

## Installation and Setup

There isn't any special setup for it.

## Document Loader

See a [usage example](../modules/indexes/document_loaders/examples/college_confidential.ipynb).

```python
from langchain.document_loaders import CollegeConfidentialLoader
```
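
A minimal usage sketch (the URL is a placeholder; the loader scrapes a single College Confidential page into documents):

```python
from langchain.document_loaders import CollegeConfidentialLoader

# Placeholder URL: point the loader at a College Confidential school page.
loader = CollegeConfidentialLoader("https://www.collegeconfidential.com/colleges/brown-university/")
docs = loader.load()
```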
**docs/integrations/confluence.md** (new file, +22 lines)

# Confluence

>[Confluence](https://www.atlassian.com/software/confluence) is a wiki collaboration platform that saves and organizes all of the project-related material. `Confluence` is a knowledge base that primarily handles content management activities.

## Installation and Setup

```bash
pip install atlassian-python-api
```

We need to set up a `username/api_key` pair or an `OAuth2` login.
See the [instructions](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/).

## Document Loader

See a [usage example](../modules/indexes/document_loaders/examples/confluence.ipynb).

```python
from langchain.document_loaders import ConfluenceLoader
```
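
A minimal usage sketch (the site URL, username, and API key are placeholders; `space_key` selects which space to load):

```python
from langchain.document_loaders import ConfluenceLoader

# Placeholder credentials: use your Atlassian site URL, account email, and API token.
loader = ConfluenceLoader(
    url="https://yoursite.atlassian.net/wiki",
    username="me@example.com",
    api_key="12345",
)
docs = loader.load(space_key="SPACE", include_attachments=False, limit=50)
```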
**docs/integrations/diffbot.md** (new file, +18 lines)

# Diffbot

>[Diffbot](https://docs.diffbot.com/docs) is a service to read web pages. Unlike traditional web scraping tools,
> `Diffbot` doesn't require any rules to read the content on a page.
>It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type.
>The result is a website transformed into clean structured data (like JSON or CSV), ready for your application.

## Installation and Setup

Read the [instructions](https://docs.diffbot.com/reference/authentication) on how to get the Diffbot API Token.

## Document Loader

See a [usage example](../modules/indexes/document_loaders/examples/diffbot.ipynb).

```python
from langchain.document_loaders import DiffbotLoader
```
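
A minimal usage sketch (the URL is a placeholder, and the token is assumed to be exported as the `DIFFBOT_API_TOKEN` environment variable):

```python
import os

from langchain.document_loaders import DiffbotLoader

# Placeholder URL list; the API token is read from the environment here.
loader = DiffbotLoader(
    urls=["https://python.langchain.com/en/latest/index.html"],
    api_token=os.environ["DIFFBOT_API_TOKEN"],
)
docs = loader.load()
```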
**docs/integrations/openai.md** (updated)

@@ -1,40 +1,50 @@

# OpenAI

>[OpenAI](https://en.wikipedia.org/wiki/OpenAI) is an American artificial intelligence (AI) research laboratory
> consisting of the non-profit `OpenAI Incorporated`
> and its for-profit subsidiary corporation `OpenAI Limited Partnership`.
> `OpenAI` conducts AI research with the declared intention of promoting and developing a friendly AI.
> `OpenAI` systems run on an `Azure`-based supercomputing platform from `Microsoft`.

>The [OpenAI API](https://platform.openai.com/docs/models) is powered by a diverse set of models with different capabilities and price points.
>
>[ChatGPT](https://chat.openai.com) is an artificial intelligence (AI) chatbot developed by `OpenAI`.

## Installation and Setup

- Install the Python SDK with
```bash
pip install openai
```
- Get an OpenAI API key and set it as an environment variable (`OPENAI_API_KEY`)
- If you want to use OpenAI's tokenizer (only available for Python 3.9+), install it with
```bash
pip install tiktoken
```

## LLM

There exists an OpenAI LLM wrapper, which you can access with

```python
from langchain.llms import OpenAI
```

If you are using a model hosted on `Azure`, you should use a different wrapper for that:

```python
from langchain.llms import AzureOpenAI
```

For a more detailed walkthrough of the `Azure` wrapper, see [this notebook](../modules/models/llms/integrations/azure_openai_example.ipynb).
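
A minimal usage sketch of the LLM wrapper (the model name is a placeholder and the key is assumed to be set in `OPENAI_API_KEY`):

```python
from langchain.llms import OpenAI

# Assumes OPENAI_API_KEY is set in the environment; the model name is a placeholder.
llm = OpenAI(model_name="text-davinci-003", temperature=0)
print(llm("Say hello in one short sentence."))
```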

## Text Embedding Model

There exists an OpenAI Embeddings wrapper, which you can access with

```python
from langchain.embeddings import OpenAIEmbeddings
```

For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/openai.ipynb).
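
A minimal usage sketch (assumes `OPENAI_API_KEY` is set; the input strings are placeholders):

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Embed one query string and a small batch of documents.
query_vector = embeddings.embed_query("What is Azure Blob Storage?")
doc_vectors = embeddings.embed_documents(["First document.", "Second document."])
print(len(query_vector), len(doc_vectors))
```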

## Tokenizer

There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
for OpenAI LLMs.

@@ -46,10 +56,18 @@ CharacterTextSplitter.from_tiktoken_encoder(...)

For a more detailed walkthrough of this, see [this notebook](../modules/indexes/text_splitters/examples/tiktoken.ipynb).
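
A minimal sketch of counting tokens with `tiktoken` while splitting text via `CharacterTextSplitter.from_tiktoken_encoder` (the sample text and chunk sizes are placeholders):

```python
from langchain.text_splitter import CharacterTextSplitter

# Chunk sizes are measured in tiktoken tokens rather than characters; values are placeholders.
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
chunks = text_splitter.split_text("Some long document text goes here...")
```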

## Chain

You can also access the OpenAI content moderation endpoint. See a [usage example](../modules/chains/examples/moderation.ipynb).

```python
from langchain.chains import OpenAIModerationChain
```
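
A minimal usage sketch (assumes `OPENAI_API_KEY` is set; the input text is a placeholder):

```python
from langchain.chains import OpenAIModerationChain

# Checks text against the OpenAI moderation endpoint.
moderation_chain = OpenAIModerationChain()
print(moderation_chain.run("This is a perfectly harmless sentence."))
```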

## Document Loader

See a [usage example](../modules/indexes/document_loaders/examples/chatgpt_loader.ipynb).

```python
from langchain.document_loaders.chatgpt import ChatGPTLoader
```
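
A minimal usage sketch (the path to an exported ChatGPT conversations file is a placeholder):

```python
from langchain.document_loaders.chatgpt import ChatGPTLoader

# Placeholder path to a conversations.json file exported from ChatGPT.
loader = ChatGPTLoader(log_file="./conversations.json", num_logs=1)
docs = loader.load()
```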
**docs/modules/indexes/document_loaders/examples/confluence.ipynb** (updated)

@@ -8,13 +8,11 @@

"\n",
">[Confluence](https://www.atlassian.com/software/confluence) is a wiki collaboration platform that saves and organizes all of the project-related material. `Confluence` is a knowledge base that primarily handles content management activities. \n",
"\n",
"A loader for `Confluence` pages currently supports both `username/api_key` and `OAuth2` login.\n",
"See [instructions](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/).\n",
"\n",
"\n",
"Specify a list of `page_id`-s and/or a `space_key` to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned.\n",
"\n",
"\n",
"You can also specify a boolean `include_attachments` to include attachments; this is set to False by default. If set to True, all attachments will be downloaded, and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: `PDF`, `PNG`, `JPEG/JPG`, `SVG`, `Word` and `Excel`.\n",
**docs/modules/indexes/document_loaders/examples/diffbot.ipynb** (updated)

@@ -11,7 +11,7 @@

">It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type.\n",
">The result is a website transformed into clean structured data (like JSON or CSV), ready for your application.\n",
"\n",
"This covers how to extract HTML documents from a list of URLs using the [Diffbot extract API](https://www.diffbot.com/products/extract/), into a document format that we can use downstream.\n"
]
},
{

@@ -31,7 +31,9 @@

"id": "6fffec88",
"metadata": {},
"source": [
"The Diffbot Extract API requires an API token. Once you have it, you can extract the data.\n",
"\n",
"Read the [instructions](https://docs.diffbot.com/reference/authentication) on how to get the Diffbot API Token."
]
},
{