Harrison/improve data augmented generation docs (#390)

Co-authored-by: cameronccohen <cameron.c.cohen@gmail.com>
Co-authored-by: Cameron Cohen <cameron.cohen@quantco.com>
Harrison Chase
2022-12-20 22:24:08 -05:00
committed by GitHub
parent ad4414b59f
commit c104d507bf
40 changed files with 1237 additions and 265 deletions

docs/reference/chains.rst Normal file

@@ -0,0 +1,21 @@
Chains
==============

A big part of chains is the set of utilities that can be used as part of them.
Here is reference documentation for the utilities natively supported by LangChain.

.. toctree::
   :maxdepth: 1
   :glob:

   modules/python
   modules/serpapi

With those utilities in mind, here are the reference docs for all the chains in LangChain.

.. toctree::
   :maxdepth: 1
   :glob:

   modules/chains
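
As a quick illustration, many chains pair an LLM with one of these utilities; below is a minimal sketch of ``LLMMathChain``, which uses the Python utility to execute model-generated code (it assumes an OpenAI API key is configured).

.. code-block:: python

   from langchain.chains import LLMMathChain
   from langchain.llms import OpenAI

   # The chain prompts the LLM to write Python, then runs it via the
   # Python utility to compute the final answer.
   llm = OpenAI(temperature=0)
   chain = LLMMathChain(llm=llm)
   chain.run("What is 13 raised to the .3432 power?")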


@@ -0,0 +1,13 @@
Data Augmented Generation
=========================

The reference guides here all relate to components necessary for data augmented generation.

.. toctree::
   :maxdepth: 1
   :glob:

   modules/text_splitter
   modules/docstore
   modules/embeddings
   modules/vectorstore
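
To see how these components fit together, here is a minimal sketch of the typical pipeline (split, embed, index, retrieve); it assumes an OpenAI API key is set and uses an arbitrary local text file as input.

.. code-block:: python

   from langchain.embeddings import OpenAIEmbeddings
   from langchain.text_splitter import CharacterTextSplitter
   from langchain.vectorstores import FAISS

   # Split a raw document into chunks, embed them, and index for retrieval.
   raw_text = open("state_of_the_union.txt").read()  # any long document
   splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
   texts = splitter.split_text(raw_text)
   vectorstore = FAISS.from_texts(texts, OpenAIEmbeddings())
   docs = vectorstore.similarity_search("What did the president say about the economy?")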


@@ -0,0 +1,30 @@
# Installation Options
LangChain is available on PyPI, so it is easily installable with:
```
pip install langchain
```
That will install the bare minimum requirements of LangChain.
Much of the value of LangChain comes from integrating it with various model providers, datastores, etc.
By default, the dependencies needed for those integrations are NOT installed.
However, there are two other ways to install LangChain that do bring in those dependencies.
To install modules needed for the common LLM providers, run:
```
pip install langchain[llms]
```
To install all modules needed for all integrations, run:
```
pip install langchain[all]
```
Note that if you are using `zsh`, you'll need to quote square brackets when passing them as an argument to a command, for example:
```
pip install 'langchain[all]'
```
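
As a quick sanity check that the base install works, here is a minimal sketch that uses only the core prompt machinery and needs no optional dependencies or API keys:
```
from langchain.prompts import PromptTemplate

# Core functionality works without any of the optional integrations.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
print(prompt.format(product="colorful socks"))
```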


@@ -0,0 +1,33 @@
# Integration Reference
Besides the installation of this python package, you will also need to install packages and set environment variables depending on which chains you want to use.
Note: these packages are intentionally not included in the default dependencies; as the package scales, we do not want to force users to install dependencies they do not need.
The following use cases require specific installs and API keys:
- _OpenAI_:
  - Install requirements with `pip install openai`
  - Get an OpenAI API key and either set it as an environment variable (`OPENAI_API_KEY`) or pass it to the LLM constructor as `openai_api_key`, as shown in the sketch after this list.
- _Cohere_:
  - Install requirements with `pip install cohere`
  - Get a Cohere API key and either set it as an environment variable (`COHERE_API_KEY`) or pass it to the LLM constructor as `cohere_api_key`.
- _HuggingFace Hub_:
  - Install requirements with `pip install huggingface_hub`
  - Get a HuggingFace Hub API token and either set it as an environment variable (`HUGGINGFACEHUB_API_TOKEN`) or pass it to the LLM constructor as `huggingfacehub_api_token`.
- _SerpAPI_:
  - Install requirements with `pip install google-search-results`
  - Get a SerpAPI API key and either set it as an environment variable (`SERPAPI_API_KEY`) or pass it to the LLM constructor as `serpapi_api_key`.
- _NatBot_:
  - Install requirements with `pip install playwright`
- _Wikipedia_:
  - Install requirements with `pip install wikipedia`
- _Elasticsearch_:
  - Install requirements with `pip install elasticsearch`
  - Set up an Elasticsearch backend. If you want to run it locally, [this](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/getting-started.html) is a good guide.
- _FAISS_:
  - Install requirements with `pip install faiss` for Python 3.7 and `pip install faiss-cpu` for Python 3.10+.
- _Manifest_:
  - Install requirements with `pip install manifest-ml` (Note: this is only available in Python 3.8+ currently).
If you are using the `NLTKTextSplitter` or the `SpacyTextSplitter`, you will also need to install the appropriate models. For example, to use the `SpacyTextSplitter`, install the `en_core_web_sm` model with `python -m spacy download en_core_web_sm`. Similarly, to use the `NLTKTextSplitter`, install the `punkt` model with `python -m nltk.downloader punkt`.
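
For instance, once the `punkt` model is installed, a minimal sketch of using the `NLTKTextSplitter` (the `chunk_size` value is illustrative):
```
from langchain.text_splitter import NLTKTextSplitter

# Assumes `python -m nltk.downloader punkt` has already been run.
splitter = NLTKTextSplitter(chunk_size=1000)
chunks = splitter.split_text("A long document to split into sentence-aligned chunks. " * 100)
```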


@@ -0,0 +1,7 @@
:mod:`langchain.agents`
===============================

.. automodule:: langchain.agents
   :members:
   :undoc-members:


@@ -0,0 +1,7 @@
:mod:`langchain.chains`
=======================

.. automodule:: langchain.chains
   :members:
   :undoc-members:


@@ -0,0 +1,6 @@
:mod:`langchain.docstore`
=============================

.. automodule:: langchain.docstore
   :members:
   :undoc-members:


@@ -0,0 +1,5 @@
:mod:`langchain.embeddings`
===========================

.. automodule:: langchain.embeddings
   :members:


@@ -0,0 +1,5 @@
:mod:`langchain.prompts.example_selector`
=========================================

.. automodule:: langchain.prompts.example_selector
   :members:


@@ -0,0 +1,6 @@
:mod:`langchain.llms`
=======================

.. automodule:: langchain.llms
   :members:
   :special-members: __call__


@@ -0,0 +1,5 @@
:mod:`langchain.prompts`
========================

.. automodule:: langchain.prompts
   :members:


@@ -0,0 +1,6 @@
:mod:`langchain.python`
=============================

.. automodule:: langchain.python
   :members:
   :undoc-members:


@@ -0,0 +1,6 @@
:mod:`langchain.serpapi`
=============================

.. automodule:: langchain.serpapi
   :members:
   :undoc-members:


@@ -0,0 +1,6 @@
:mod:`langchain.text_splitter`
==============================

.. automodule:: langchain.text_splitter
   :members:
   :undoc-members:


@@ -0,0 +1,6 @@
:mod:`langchain.vectorstores`
=============================

.. automodule:: langchain.vectorstores
   :members:
   :undoc-members:


@@ -0,0 +1,12 @@
LLMs & Prompts
==============

The reference guides here all relate to objects for working with LLMs and Prompts.

.. toctree::
   :maxdepth: 1
   :glob:

   modules/prompt
   modules/example_selector
   modules/llms
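
As a brief illustration of these objects working together, here is a minimal sketch that formats a prompt template and calls an LLM on it (assuming an OpenAI API key is set).

.. code-block:: python

   from langchain.llms import OpenAI
   from langchain.prompts import PromptTemplate

   prompt = PromptTemplate(
       input_variables=["subject"],
       template="Write a one-line poem about {subject}.",
   )
   llm = OpenAI(temperature=0.9)
   # LLM objects are callable directly on a string (see __call__ above).
   print(llm(prompt.format(subject="autumn")))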