mirror of
https://github.com/hwchase17/langchain.git
synced 2025-09-06 21:43:44 +00:00
Docs refactor (#480)
Big docs refactor! The motivation is to make it easier for people to find the resources they are looking for. To accomplish this, there are now three main sections:

- Getting Started: steps for getting started, walking through most core functionality.
- Modules: the different modules of functionality that LangChain provides. Each part here has a "getting started", "how to", "key concepts" and "reference" section (except in a few select cases where it didn't easily fit).
- Use Cases: separates use cases (like summarization, question answering, evaluation, etc.) from the modules, and provides a different entry point to the code base.

There is also a full reference section, as well as extra resources (glossary, gallery, etc.)

Co-authored-by: Shreya Rajpal <ShreyaR@users.noreply.github.com>
docs/ecosystem/ai21.md (new file, 16 lines)
@@ -0,0 +1,16 @@
# AI21 Labs

This page covers how to use the AI21 ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific AI21 wrappers.

## Installation and Setup
- Get an AI21 API key and set it as an environment variable (`AI21_API_KEY`)

## Wrappers

### LLM

There exists an AI21 LLM wrapper, which you can access with
```python
from langchain.llms import AI21
```
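As a quick usage sketch (assuming `AI21_API_KEY` is set in your environment; the model parameters shown are illustrative):

```python
from langchain.llms import AI21

# LLM wrappers are callable on a prompt string.
llm = AI21(temperature=0.7)
print(llm("Tell me a joke."))
```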
docs/ecosystem/cohere.md (new file, 25 lines)
@@ -0,0 +1,25 @@
# Cohere

This page covers how to use the Cohere ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Cohere wrappers.

## Installation and Setup
- Install the Python SDK with `pip install cohere`
- Get a Cohere API key and set it as an environment variable (`COHERE_API_KEY`)

## Wrappers

### LLM

There exists a Cohere LLM wrapper, which you can access with
```python
from langchain.llms import Cohere
```

### Embeddings

There exists a Cohere Embeddings wrapper, which you can access with
```python
from langchain.embeddings import CohereEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/utils/combine_docs_examples/embeddings.ipynb)
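As a brief sketch of how the embeddings wrapper might be used (assuming `COHERE_API_KEY` is set):

```python
from langchain.embeddings import CohereEmbeddings

embeddings = CohereEmbeddings()

# Embed a single query string, or a batch of documents.
query_vector = embeddings.embed_query("Hello world")
doc_vectors = embeddings.embed_documents(["doc one", "doc two"])
```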
docs/ecosystem/google_search.md (new file, 32 lines)
@@ -0,0 +1,32 @@
# Google Search Wrapper

This page covers how to use the Google Search API within LangChain.
It is broken into two parts: installation and setup, and then references to specific Google Search wrappers.

## Installation and Setup
- Install requirements with `pip install google-api-python-client`
- Set up a Custom Search Engine, following [these instructions](https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search)
- Get an API Key and Custom Search Engine ID from the previous step, and set them as environment variables `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` respectively

## Wrappers

### Utility

There exists a GoogleSearchAPIWrapper utility which wraps this API. To import this utility:

```python
from langchain.utilities import GoogleSearchAPIWrapper
```

For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/google_search.ipynb).
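As a quick sketch of direct usage (assuming `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` are set; the query is illustrative):

```python
from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper()

# Run a search and get back a text summary of the top results.
print(search.run("langchain"))
```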
### Tool

You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["google-search"])
```

For more information on this, see [this page](../modules/agents/tools.md)
docs/ecosystem/hazy_research.md (new file, 19 lines)
@@ -0,0 +1,19 @@
# Hazy Research

This page covers how to use the Hazy Research ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Hazy Research wrappers.

## Installation and Setup
- To use the `manifest` library, install it with `pip install manifest-ml`

## Wrappers

### LLM

There exists an LLM wrapper around Hazy Research's `manifest` library.
`manifest` is a Python library which is itself a wrapper around many model providers, and adds in caching, history, and more.

To use this wrapper:
```python
from langchain.llms.manifest import ManifestWrapper
```
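A minimal sketch of wiring the two together (the client settings are assumptions — e.g. a Hugging Face model served locally through manifest):

```python
from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper

# Point manifest at a model provider, then wrap it as a LangChain LLM.
manifest = Manifest(client_name="huggingface", client_connection="http://127.0.0.1:5000")
llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0, "max_tokens": 256})
print(llm("Tell me a joke."))
```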
docs/ecosystem/huggingface.md (new file, 68 lines)
@@ -0,0 +1,68 @@
# Hugging Face

This page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain.
It is broken into two parts: installation and setup, and then references to specific Hugging Face wrappers.

## Installation and Setup

If you want to work with the Hugging Face Hub:
- Install the Python SDK with `pip install huggingface_hub`
- Get a Hugging Face Hub API token and set it as an environment variable (`HUGGINGFACEHUB_API_TOKEN`)

If you want to work with the Hugging Face Python libraries:
- Install `pip install transformers` for working with models and tokenizers
- Install `pip install datasets` for working with datasets

## Wrappers

### LLM

There exist two Hugging Face LLM wrappers, one for a local pipeline and one for a model hosted on the Hugging Face Hub.
Note that these wrappers only work for the following tasks: `text2text-generation`, `text-generation`

To use the local pipeline wrapper:
```python
from langchain.llms import HuggingFacePipeline
```
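A rough sketch of the local pipeline route (assuming `transformers` is installed; the model choice and generation settings are illustrative):

```python
from transformers import pipeline
from langchain.llms import HuggingFacePipeline

# Build a standard transformers pipeline locally, then wrap it as a LangChain LLM.
pipe = pipeline("text-generation", model="gpt2", max_new_tokens=64)
llm = HuggingFacePipeline(pipeline=pipe)
print(llm("Once upon a time"))
```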
To use the wrapper for a model hosted on the Hugging Face Hub:
```python
from langchain.llms import HuggingFaceHub
```
For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](../modules/llms/integrations/huggingface_hub.ipynb)
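As a usage sketch (assuming `HUGGINGFACEHUB_API_TOKEN` is set; the repo id and parameters are illustrative):

```python
from langchain.llms import HuggingFaceHub

llm = HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature": 0.5})
print(llm("Translate to French: I love programming."))
```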

### Embeddings

There exist two Hugging Face Embeddings wrappers, one for a local model and one for a model hosted on the Hugging Face Hub.
Note that these wrappers only work for `sentence-transformers` models.

To use the local pipeline wrapper:
```python
from langchain.embeddings import HuggingFaceEmbeddings
```

To use the wrapper for a model hosted on the Hugging Face Hub:
```python
from langchain.embeddings import HuggingFaceHubEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/utils/combine_docs_examples/embeddings.ipynb)

### Tokenizer

There are several places you can use tokenizers available through the `transformers` package.
By default, it is used to count tokens for all LLMs.

You can also use it to count tokens when splitting documents with
```python
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_huggingface_tokenizer(...)
```
For a more detailed walkthrough of this, see [this notebook](../modules/utils/combine_docs_examples/textsplitter.ipynb)
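A short sketch of token-based splitting (the tokenizer choice and chunk sizes are illustrative):

```python
from transformers import GPT2TokenizerFast
from langchain.text_splitter import CharacterTextSplitter

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Chunk sizes are now measured in tokens rather than characters.
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text("some long document text ...")
```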

### Datasets

Hugging Face has lots of great datasets that can be used to evaluate your LLM chains.

For a detailed walkthrough of how to use them to do so, see [this notebook](../use_cases/evaluation/huggingface_datasets.ipynb)
docs/ecosystem/nlpcloud.md (new file, 17 lines)
@@ -0,0 +1,17 @@
# NLPCloud

This page covers how to use the NLPCloud ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific NLPCloud wrappers.

## Installation and Setup
- Install the Python SDK with `pip install nlpcloud`
- Get an NLPCloud API key and set it as an environment variable (`NLPCLOUD_API_KEY`)

## Wrappers

### LLM

There exists an NLPCloud LLM wrapper, which you can access with
```python
from langchain.llms import NLPCloud
```
docs/ecosystem/openai.md (new file, 55 lines)
@@ -0,0 +1,55 @@
# OpenAI

This page covers how to use the OpenAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific OpenAI wrappers.

## Installation and Setup
- Install the Python SDK with `pip install openai`
- Get an OpenAI API key and set it as an environment variable (`OPENAI_API_KEY`)
- If you want to use OpenAI's tokenizer (only available for Python 3.9+), install it with `pip install tiktoken`

## Wrappers

### LLM

There exists an OpenAI LLM wrapper, which you can access with
```python
from langchain.llms import OpenAI
```

If you are using a model hosted on Azure, you should use a different wrapper for that:
```python
from langchain.llms import AzureOpenAI
```
For a more detailed walkthrough of the Azure wrapper, see [this notebook](../modules/llms/integrations/azure_openai_example.ipynb)
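As a brief sketch of the Azure wrapper (the deployment name is a placeholder for your own Azure deployment, and the relevant Azure OpenAI environment variables are assumed to be configured):

```python
from langchain.llms import AzureOpenAI

llm = AzureOpenAI(deployment_name="my-deployment")
print(llm("Tell me a joke."))
```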

### Embeddings

There exists an OpenAI Embeddings wrapper, which you can access with
```python
from langchain.embeddings import OpenAIEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/utils/combine_docs_examples/embeddings.ipynb)

### Tokenizer

There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens for OpenAI LLMs.

You can also use it to count tokens when splitting documents with
```python
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
For a more detailed walkthrough of this, see [this notebook](../modules/utils/combine_docs_examples/textsplitter.ipynb)
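A quick sketch of token-based splitting with `tiktoken` (the chunk sizes are illustrative):

```python
from langchain.text_splitter import CharacterTextSplitter

# Chunk sizes are measured in tiktoken tokens rather than characters.
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text("some long document text ...")
```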

### Moderation
You can also access the OpenAI content moderation endpoint with

```python
from langchain.chains import OpenAIModerationChain
```
For a more detailed walkthrough of this, see [this notebook](../modules/chains/examples/moderation.ipynb)
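A minimal usage sketch (assuming `OPENAI_API_KEY` is set):

```python
from langchain.chains import OpenAIModerationChain

moderation_chain = OpenAIModerationChain()

# Returns the input text if it passes moderation, or a canned warning if it is flagged.
print(moderation_chain.run("This is okay"))
```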
docs/ecosystem/pinecone.md (new file, 20 lines)
@@ -0,0 +1,20 @@
# Pinecone

This page covers how to use the Pinecone ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Pinecone wrappers.

## Installation and Setup
- Install the Python SDK with `pip install pinecone-client`

## Wrappers

### VectorStore

There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore, whether for semantic search or example selection.

To import this vectorstore:
```python
from langchain.vectorstores import Pinecone
```

For a more detailed walkthrough of the Pinecone wrapper, see [this notebook](../modules/utils/combine_docs_examples/vectorstores.ipynb)
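A rough end-to-end sketch (the API key, environment, and index name are placeholders; OpenAI embeddings are used here purely as an example):

```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="YOUR_ENVIRONMENT")

texts = ["LangChain docs", "Pinecone docs"]
docsearch = Pinecone.from_texts(texts, OpenAIEmbeddings(), index_name="langchain-demo")

# Semantic search over the indexed texts.
docs = docsearch.similarity_search("documentation")
```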
docs/ecosystem/serpapi.md (new file, 31 lines)
@@ -0,0 +1,31 @@
# SerpAPI

This page covers how to use the SerpAPI search APIs within LangChain.
It is broken into two parts: installation and setup, and then references to specific SerpAPI wrappers.

## Installation and Setup
- Install requirements with `pip install google-search-results`
- Get a SerpAPI API key and set it as an environment variable (`SERPAPI_API_KEY`)

## Wrappers

### Utility

There exists a SerpAPI utility which wraps this API. To import this utility:

```python
from langchain.utilities import SerpAPIWrapper
```

For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/serpapi.ipynb).

### Tool

You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["serpapi"])
```

For more information on this, see [this page](../modules/agents/tools.md)
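As a sketch of using the tool with an agent (assuming `SERPAPI_API_KEY` and `OPENAI_API_KEY` are set; the OpenAI LLM is used purely as an example):

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi"])

# A zero-shot agent that decides when to call the search tool.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("What is the capital of Canada?")
```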
docs/ecosystem/weaviate.md (new file, 33 lines)
@@ -0,0 +1,33 @@
# Weaviate

This page covers how to use the Weaviate ecosystem within LangChain.

What is Weaviate?

**Weaviate in a nutshell:**
- Weaviate is an open-source vector search engine database.
- Weaviate allows you to store JSON documents in a class property-like fashion while attaching machine learning vectors to these documents to represent them in vector space.
- Weaviate can be used stand-alone (aka bring your vectors) or with a variety of modules that can do the vectorization for you and extend the core capabilities.
- Weaviate has a GraphQL-API to access your data easily.
- We aim to bring your vector search setup to production to query in mere milliseconds (check our [open source benchmarks](https://weaviate.io/developers/weaviate/current/benchmarks/) to see if Weaviate fits your use case).
- Get to know Weaviate in the [basics getting started guide](https://weaviate.io/developers/weaviate/current/core-knowledge/basics.html) in under five minutes.

**Weaviate in detail:**

Weaviate is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages.

## Installation and Setup
- Install the Python SDK with `pip install weaviate-client`

## Wrappers

### VectorStore

There exists a wrapper around Weaviate indexes, allowing you to use it as a vectorstore, whether for semantic search or example selection.

To import this vectorstore:
```python
from langchain.vectorstores import Weaviate
```

For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](../modules/utils/combine_docs_examples/vectorstores.ipynb)
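A minimal usage sketch (the client URL, class name, and text property are placeholders for your own Weaviate setup):

```python
import weaviate
from langchain.vectorstores import Weaviate

client = weaviate.Client("http://localhost:8080")

# Wrap an existing index; "Paragraph" / "content" name the class and its text property.
vectorstore = Weaviate(client, "Paragraph", "content")
docs = vectorstore.similarity_search("What is a vector search engine?")
```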