docs: integrations reference updates 13 (#25711)

Added missing provider pages and links. Fixed inconsistent formatting.
Added arXiv references to docstrings.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Leonid Ganeline 2024-09-02 15:08:50 -07:00 committed by GitHub
parent 64dfdaa924
commit 150251fd49
10 changed files with 111 additions and 6 deletions

@@ -0,0 +1,26 @@
# FalkorDB
>[FalkorDB](https://www.falkordb.com/) is the creator of [FalkorDB](https://docs.falkordb.com/),
> a low-latency graph database that delivers knowledge to GenAI applications.
## Installation and Setup
See the [installation instructions](/docs/integrations/graphs/falkordb/).
## Graphs
See a [usage example](/docs/integrations/graphs/falkordb).
```python
from langchain_community.graphs import FalkorDBGraph
```
## Chains
See a [usage example](/docs/integrations/graphs/falkordb).
```python
from langchain_community.chains.graph_qa.falkordb import FalkorDBQAChain
```
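A minimal end-to-end sketch, assuming a FalkorDB instance running locally on the default port, an existing `movies` graph, and an OpenAI chat model (parameter names may differ across `langchain-community` versions):
```python
from langchain_community.chains.graph_qa.falkordb import FalkorDBQAChain
from langchain_community.graphs import FalkorDBGraph
from langchain_openai import ChatOpenAI

# Connect to a locally running FalkorDB instance (default port 6379).
graph = FalkorDBGraph(database="movies", host="localhost", port=6379)

# The chain writes a graph query for the question, runs it against the graph,
# and has the LLM phrase an answer from the results.
chain = FalkorDBQAChain.from_llm(
    ChatOpenAI(temperature=0),
    graph=graph,
    verbose=True,
    allow_dangerous_requests=True,  # opt in to executing model-generated queries
)
chain.invoke("Which actors played in the movie Top Gun?")
```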

@@ -0,0 +1,22 @@
# FireCrawl
>[FireCrawl](https://firecrawl.dev/?ref=langchain) crawls and converts any website into LLM-ready data.
> It crawls all accessible subpages and gives you clean markdown
> and metadata for each. No sitemap required.
## Installation and Setup
Install the Python SDK:
```bash
pip install firecrawl-py
```
## Document loader
See a [usage example](/docs/integrations/document_loaders/firecrawl).
```python
from langchain_community.document_loaders import FireCrawlLoader
```
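A short sketch, assuming a FireCrawl API key; the URL and `mode` value are illustrative:
```python
from langchain_community.document_loaders import FireCrawlLoader

loader = FireCrawlLoader(
    url="https://firecrawl.dev",
    api_key="...",   # or set the FIRECRAWL_API_KEY environment variable
    mode="scrape",   # "scrape" fetches one page; "crawl" also follows subpages
)
docs = loader.load()
print(docs[0].metadata)
```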

@@ -0,0 +1,32 @@
# Friendli AI
>[Friendli AI](https://friendli.ai/) is a company that fine-tunes and deploys LLMs
> and serves a wide range of Generative AI use cases.
## Installation and Setup
- Install the integration package:
```bash
pip install friendli-client
```
- Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token,
and set it as the `FRIENDLI_TOKEN` environment variable.
## Chat models
See a [usage example](/docs/integrations/chat/friendli).
```python
from langchain_community.chat_models.friendli import ChatFriendli
```
## LLMs
See a [usage example](/docs/integrations/llms/friendli).
```python
from langchain_community.llms.friendli import Friendli
```
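A brief sketch, assuming a valid Personal Access Token; the model name is an example and should be one available to your Friendli account:
```python
import os

from langchain_community.chat_models.friendli import ChatFriendli

os.environ["FRIENDLI_TOKEN"] = "..."  # Personal Access Token from Friendli Suite

chat = ChatFriendli(model="meta-llama-3-8b-instruct")  # example model name
print(chat.invoke("What makes a good LLM serving stack?").content)
```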

@@ -13,6 +13,19 @@ Install the Python partner package:
pip install langchain-qdrant
```
## Embedding models
### FastEmbedSparse
```python
from langchain_qdrant import FastEmbedSparse
```
### SparseEmbeddings
```python
from langchain_qdrant import SparseEmbeddings
```
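`FastEmbedSparse` is a concrete implementation of the `SparseEmbeddings` interface. A rough sketch of hybrid retrieval with it, assuming the `fastembed` package is installed and `langchain-openai` supplies the dense embeddings (any dense embedding model works):
```python
from langchain_openai import OpenAIEmbeddings
from langchain_qdrant import FastEmbedSparse, QdrantVectorStore, RetrievalMode

# BM25-style sparse vectors computed locally via FastEmbed.
sparse = FastEmbedSparse(model_name="Qdrant/bm25")

vector_store = QdrantVectorStore.from_texts(
    ["Qdrant supports dense, sparse, and hybrid retrieval."],
    embedding=OpenAIEmbeddings(),
    sparse_embedding=sparse,
    retrieval_mode=RetrievalMode.HYBRID,
    location=":memory:",
    collection_name="demo",
)
print(vector_store.similarity_search("hybrid retrieval", k=1))
```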
## Vector Store

@@ -7,7 +7,9 @@
"source": [
"# Faiss\n",
"\n",
">[Facebook AI Similarity Search (FAISS)](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning.\n",
">[Facebook AI Similarity Search (FAISS)](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also includes supporting code for evaluation and parameter tuning.\n",
">\n",
">See [The FAISS Library](https://arxiv.org/pdf/2401.08281) paper.\n",
"\n",
"You can find the FAISS documentation at [this page](https://faiss.ai/).\n",
"\n",
@@ -528,7 +530,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.12"
}
},
"nbformat": 4,

@@ -7,7 +7,9 @@
"source": [
"# Faiss (Async)\n",
"\n",
">[Facebook AI Similarity Search (Faiss)](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning.\n",
">[Facebook AI Similarity Search (Faiss)](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also includes supporting code for evaluation and parameter tuning.\n",
">\n",
">See [The FAISS Library](https://arxiv.org/pdf/2401.08281) paper.\n",
"\n",
"[Faiss documentation](https://faiss.ai/).\n",
"\n",

@@ -12,6 +12,7 @@ MIN_VERSION = "0.2.0"
class FastEmbedEmbeddings(BaseModel, Embeddings):
"""Qdrant FastEmbedding models.
FastEmbed is a lightweight, fast Python library built for embedding generation.
See more documentation at:
* https://github.com/qdrant/fastembed/
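A minimal usage sketch of the class, assuming the default model that FastEmbed downloads on first use:
```python
from langchain_community.embeddings.fastembed import FastEmbedEmbeddings

embeddings = FastEmbedEmbeddings()  # defaults to a small English BGE model
query_vector = embeddings.embed_query("What is FastEmbed?")
doc_vectors = embeddings.embed_documents(["FastEmbed generates embeddings locally."])
print(len(query_vector), len(doc_vectors[0]))
```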

@@ -74,6 +74,8 @@ def _len_check_if_sized(x: Any, y: Any, x_name: str, y_name: str) -> None:
class FAISS(VectorStore):
"""FAISS vector store integration.
See [The FAISS Library](https://arxiv.org/pdf/2401.08281) paper.
Setup:
Install ``langchain_community`` and ``faiss-cpu`` python packages.
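A minimal quickstart sketch, assuming `langchain-openai` supplies the embeddings (any embedding model works):
```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Build an in-memory FAISS index from raw texts and run a similarity search.
vectorstore = FAISS.from_texts(
    ["FAISS performs fast similarity search over dense vectors."],
    OpenAIEmbeddings(),
)
print(vectorstore.similarity_search("fast similarity search", k=1))
```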

@@ -73,7 +73,10 @@ def _low_confidence_spans(
class FlareChain(Chain):
"""Chain that combines a retriever, a question generator,
and a response generator."""
and a response generator.
See [Active Retrieval Augmented Generation](https://arxiv.org/abs/2305.06983) paper.
"""
question_generator_chain: Runnable
"""Chain that generates questions from uncertain spans."""

@@ -1,10 +1,12 @@
# RAG - AWS Bedrock
# RAG - AWS Bedrock, FAISS
This template is designed to connect with `AWS Bedrock`, a managed service that offers a set of foundation models.
It primarily uses `Anthropic Claude` for text generation and `Amazon Titan` for text embedding, and utilizes `FAISS` as the vector store.
For additional context on the RAG pipeline, refer to [this notebook](https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb).
For additional context on the RAG pipeline, refer to [these notebooks](https://github.com/aws-samples/amazon-bedrock-workshop/tree/main/02_KnowledgeBases_and_RAG).
See [The FAISS Library](https://arxiv.org/pdf/2401.08281) paper for more details.
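Not the template's exact code, but a rough sketch of the kind of pipeline it assembles, assuming AWS credentials are configured and these example model IDs are enabled in your Bedrock account:
```python
from langchain_aws import BedrockEmbeddings, ChatBedrock
from langchain_community.vectorstores import FAISS

# Embed documents with Amazon Titan and index them in an in-memory FAISS store.
embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")
vectorstore = FAISS.from_texts(
    ["Amazon Bedrock provides managed access to foundation models."],
    embeddings,
)
retriever = vectorstore.as_retriever()

# Anthropic Claude answers over the retrieved context.
llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")
context = retriever.invoke("What does Bedrock provide?")[0].page_content
print(llm.invoke(f"Context: {context}\n\nWhat does Bedrock provide?").content)
```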
## Environment Setup