rm .html from local doc links (#12293)
Commit aa212c3d0e (parent 04d58018e1)
@@ -18,7 +18,7 @@ import CodeBlock from "@theme/CodeBlock";
 </Tabs>

-For more details, see our [Installation guide](/docs/get_started/installation.html).
+For more details, see our [Installation guide](/docs/get_started/installation).

 ## Environment setup

@@ -20,11 +20,11 @@ This guide aims to provide a comprehensive overview of the requirements for depl
 Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include:

-- [Ray Serve](/docs/ecosystem/integrations/ray_serve.html)
+- [Ray Serve](/docs/ecosystem/integrations/ray_serve)
 - [BentoML](https://github.com/bentoml/BentoML)
-- [OpenLLM](/docs/ecosystem/integrations/openllm.html)
-- [Modal](/docs/ecosystem/integrations/modal.html)
-- [Jina](/docs/ecosystem/integrations/jina.html#deployment)
+- [OpenLLM](/docs/ecosystem/integrations/openllm)
+- [Modal](/docs/ecosystem/integrations/modal)
+- [Jina](/docs/ecosystem/integrations/jina#deployment)

 These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.

@@ -14,7 +14,7 @@
 "> using both human and machine feedback. We provide support for each step in the MLOps cycle, \n",
 "> from data labeling to model monitoring.\n",
 "\n",
-"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/integrations/callbacks/argilla.html\">\n",
+"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/integrations/callbacks/argilla\">\n",
 " <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
 "</a>"
 ]

@@ -13,7 +13,7 @@
 "\n",
 "## Prerequisites\n",
 "\n",
-"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/integrations/tools/apify.html) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
+"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/integrations/tools/apify) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
 ]
 },
 {

@@ -7,7 +7,7 @@
 "source": [
 "# Pandas DataFrame\n",
 "\n",
-"This notebook goes over how to load data from a [pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/index.html) DataFrame."
+"This notebook goes over how to load data from a [pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/index) DataFrame."
 ]
 },
 {

@@ -5,10 +5,10 @@
 "metadata": {},
 "source": [
 "# Psychic\n",
-"This notebook covers how to load documents from `Psychic`. See [here](/docs/ecosystem/integrations/psychic.html) for more details.\n",
+"This notebook covers how to load documents from `Psychic`. See [here](/docs/ecosystem/integrations/psychic) for more details.\n",
 "\n",
 "## Prerequisites\n",
-"1. Follow the Quick Start section in [this document](/docs/ecosystem/integrations/psychic.html)\n",
+"1. Follow the Quick Start section in [this document](/docs/ecosystem/integrations/psychic)\n",
 "2. Log into the [Psychic dashboard](https://dashboard.psychic.dev/) and get your secret key\n",
 "3. Install the frontend react library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify."
 ]

@@ -319,7 +319,7 @@
 "metadata": {},
 "source": [
 "### Standard Cache\n",
-"Use [Redis](/docs/ecosystem/integrations/redis.html) to cache prompts and responses."
+"Use [Redis](/docs/ecosystem/integrations/redis) to cache prompts and responses."
 ]
 },
 {

@@ -405,7 +405,7 @@
 "metadata": {},
 "source": [
 "### Semantic Cache\n",
-"Use [Redis](/docs/ecosystem/integrations/redis.html) to cache prompts and responses and evaluate hits based on semantic similarity."
+"Use [Redis](/docs/ecosystem/integrations/redis) to cache prompts and responses and evaluate hits based on semantic similarity."
 ]
 },
 {
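As an aside, a minimal sketch of wiring up the two caches referenced in the hunks above, assuming LangChain's 0.0.x cache API, a Redis instance on localhost, and an OpenAI key in the environment:

```python
import langchain
from redis import Redis
from langchain.cache import RedisCache, RedisSemanticCache
from langchain.embeddings import OpenAIEmbeddings

# Standard cache: a hit requires an exact match on the prompt string.
langchain.llm_cache = RedisCache(redis_=Redis(host="localhost", port=6379))

# Semantic cache: the prompt is embedded, and a sufficiently similar
# earlier prompt counts as a hit.
langchain.llm_cache = RedisSemanticCache(
    redis_url="redis://localhost:6379",
    embedding=OpenAIEmbeddings(),
)
```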
@@ -730,7 +730,7 @@
 },
 "source": [
 "## `Momento` Cache\n",
-"Use [Momento](/docs/ecosystem/integrations/momento.html) to cache prompts and responses.\n",
+"Use [Momento](/docs/ecosystem/integrations/momento) to cache prompts and responses.\n",
 "\n",
 "Requires momento to use, uncomment below to install:"
 ]

@@ -75,9 +75,9 @@ from langchain.llms.sagemaker_endpoint import ContentHandlerBase
 >[AWS S3 Directory](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html)
 >[AWS S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html)

-See a [usage example for S3DirectoryLoader](/docs/integrations/document_loaders/aws_s3_directory.html).
+See a [usage example for S3DirectoryLoader](/docs/integrations/document_loaders/aws_s3_directory).

-See a [usage example for S3FileLoader](/docs/integrations/document_loaders/aws_s3_file.html).
+See a [usage example for S3FileLoader](/docs/integrations/document_loaders/aws_s3_file).

 ```python
 from langchain.document_loaders import S3DirectoryLoader, S3FileLoader
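A short sketch of the two S3 loaders named in this hunk; the bucket, prefix, and key are placeholders, and boto3 credentials are assumed to be configured:

```python
from langchain.document_loaders import S3DirectoryLoader, S3FileLoader

# Load every object under a prefix in a bucket.
directory_docs = S3DirectoryLoader("my-bucket", prefix="handbook/").load()

# Load a single object by key.
file_docs = S3FileLoader("my-bucket", "handbook/intro.md").load()
```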
@@ -83,7 +83,7 @@ First, we need to install several python packages.
 pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
 ```

-See a [usage example and authorizing instructions](/docs/integrations/document_loaders/google_drive.html).
+See a [usage example and authorizing instructions](/docs/integrations/document_loaders/google_drive).

 ```python
 from langchain.document_loaders import GoogleDriveLoader

@@ -182,7 +182,7 @@ There exists a `GoogleSearchAPIWrapper` utility which wraps this API. To import
 from langchain.utilities import GoogleSearchAPIWrapper
 ```

-For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_search.html).
+For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_search).

 We can easily load this wrapper as a Tool (to use with an Agent). We can do this with:
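A plausible sketch of the Tool wiring the last context line refers to; the two environment variables are required by the wrapper, and their values here are placeholders:

```python
import os
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.agents import Tool

os.environ["GOOGLE_API_KEY"] = "..."  # placeholder
os.environ["GOOGLE_CSE_ID"] = "..."   # placeholder

search = GoogleSearchAPIWrapper()

# Direct use, or wrapped as a Tool for an agent.
print(search.run("LangChain"))
tool = Tool(
    name="google-search",
    description="Search Google for recent results.",
    func=search.run,
)
```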
@@ -70,13 +70,13 @@ from langchain.chat_models import AzureChatOpenAI
 pip install azure-storage-blob
 ```

-See a [usage example for the Azure Blob Storage](/docs/integrations/document_loaders/azure_blob_storage_container.html).
+See a [usage example for the Azure Blob Storage](/docs/integrations/document_loaders/azure_blob_storage_container).

 ```python
 from langchain.document_loaders import AzureBlobStorageContainerLoader
 ```

-See a [usage example for the Azure Files](/docs/integrations/document_loaders/azure_blob_storage_file.html).
+See a [usage example for the Azure Files](/docs/integrations/document_loaders/azure_blob_storage_file).

 ```python
 from langchain.document_loaders import AzureBlobStorageFileLoader

@@ -25,4 +25,4 @@ pip install pyairtable
 from langchain.document_loaders import AirtableLoader
 ```

-See an [example](/docs/integrations/document_loaders/airtable.html).
+See an [example](/docs/integrations/document_loaders/airtable).

@@ -12,4 +12,4 @@ To import this vectorstore:
 from langchain.vectorstores import AnalyticDB
 ```

-For a more detailed walkthrough of the AnalyticDB wrapper, see [this notebook](/docs/integrations/vectorstores/analyticdb.html)
+For a more detailed walkthrough of the AnalyticDB wrapper, see [this notebook](/docs/integrations/vectorstores/analyticdb)

@@ -32,7 +32,7 @@ You can use the `ApifyWrapper` to run Actors on the Apify platform.
 from langchain.utilities import ApifyWrapper
 ```

-For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/apify.html).
+For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/apify).

 ### Loader

@@ -43,4 +43,4 @@ You can also use our `ApifyDatasetLoader` to get data from Apify dataset.
 from langchain.document_loaders import ApifyDatasetLoader
 ```

-For a more detailed walkthrough of this loader, see [this notebook](/docs/integrations/document_loaders/apify_dataset.html).
+For a more detailed walkthrough of this loader, see [this notebook](/docs/integrations/document_loaders/apify_dataset).

@@ -13,7 +13,7 @@ pip install python-arango

 Connect your ArangoDB Database with a chat model to get insights on your data.

-See the notebook example [here](/docs/use_cases/graph/graph_arangodb_qa.html).
+See the notebook example [here](/docs/use_cases/graph/graph_arangodb_qa).

 ```python
 from arango import ArangoClient

@@ -22,7 +22,7 @@ If you don't you can refer to [Argilla - 🚀 Quickstart](https://docs.argilla.i

 ## Tracking

-See a [usage example of `ArgillaCallbackHandler`](/docs/integrations/callbacks/argilla.html).
+See a [usage example of `ArgillaCallbackHandler`](/docs/integrations/callbacks/argilla).

 ```python
 from langchain.callbacks import ArgillaCallbackHandler

@@ -18,7 +18,7 @@ whether for semantic search or example selection.
 from langchain.vectorstores import Chroma
 ```

-For a more detailed walkthrough of the Chroma wrapper, see [this notebook](/docs/integrations/vectorstores/chroma.html)
+For a more detailed walkthrough of the Chroma wrapper, see [this notebook](/docs/integrations/vectorstores/chroma)

 ## Retriever
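A minimal sketch of the Chroma wrapper in use, assuming an OpenAI key in the environment; the sample document is made up:

```python
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document

docs = [Document(page_content="LangChain ships a Chroma vectorstore wrapper.")]
db = Chroma.from_documents(docs, OpenAIEmbeddings())

# Query directly, or expose as a retriever for chains.
results = db.similarity_search("What does LangChain ship?", k=1)
retriever = db.as_retriever()
```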
@@ -25,7 +25,7 @@ from langchain.llms import Clarifai
 llm = Clarifai(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
 ```

-For more details, the docs on the Clarifai LLM wrapper provide a [detailed walkthrough](/docs/integrations/llms/clarifai.html).
+For more details, the docs on the Clarifai LLM wrapper provide a [detailed walkthrough](/docs/integrations/llms/clarifai).

 ### Text Embedding Models

@@ -37,7 +37,7 @@ There is a Clarifai Embedding model in LangChain, which you can access with:
 from langchain.embeddings import ClarifaiEmbeddings
 embeddings = ClarifaiEmbeddings(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
 ```
-For more details, the docs on the Clarifai Embeddings wrapper provide a [detailed walkthrough](/docs/integrations/text_embedding/clarifai.html).
+For more details, the docs on the Clarifai Embeddings wrapper provide a [detailed walkthrough](/docs/integrations/text_embedding/clarifai).

 ## Vectorstore

@@ -27,7 +27,7 @@ There exists a Cohere Embedding model, which you can access with
 ```python
 from langchain.embeddings import CohereEmbeddings
 ```
-For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/cohere.html)
+For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/cohere)

 ## Retriever

@@ -20,7 +20,7 @@
 "In this guide we will demonstrate how to track your Langchain Experiments, Evaluation Metrics, and LLM Sessions with [Comet](https://www.comet.com/site/?utm_source=langchain&utm_medium=referral&utm_campaign=comet_notebook). \n",
 "\n",
-"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/ecosystem/comet_tracking.html\">\n",
+"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/ecosystem/comet_tracking\">\n",
 " <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
 "</a>\n",
 "\n",

@@ -54,4 +54,4 @@ llm = CTransformers(model='marella/gpt-2-ggml', config=config)

 See [Documentation](https://github.com/marella/ctransformers#config) for a list of available parameters.

-For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/ctransformers.html).
+For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/ctransformers).

@@ -21,4 +21,4 @@ You may import the vectorstore by:
 from langchain.vectorstores import DashVector
 ```

-For a detailed walkthrough of the DashVector wrapper, please refer to [this notebook](/docs/integrations/vectorstores/dashvector.html)
+For a detailed walkthrough of the DashVector wrapper, please refer to [this notebook](/docs/integrations/vectorstores/dashvector)

@@ -33,11 +33,11 @@ See [MLflow AI Gateway](/docs/integrations/providers/mlflow_ai_gateway).
 Databricks as an LLM provider
 -----------------------------

-The notebook [Wrap Databricks endpoints as LLMs](/docs/integrations/llms/databricks.html) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
+The notebook [Wrap Databricks endpoints as LLMs](/docs/integrations/llms/databricks) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.

 Databricks endpoints support Dolly, but are also great for hosting models like MPT-7B or any other models from the Hugging Face ecosystem. Databricks endpoints can also be used with proprietary models like OpenAI to provide a governance layer for enterprises.

 Databricks Dolly
 ----------------

-Databricks’ Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](/docs/integrations/llms/huggingface_hub.html) for instructions to access it through the Hugging Face Hub integration with LangChain.
+Databricks’ Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](/docs/integrations/llms/huggingface_hub) for instructions to access it through the Hugging Face Hub integration with LangChain.
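A sketch of wrapping a Databricks serving endpoint as an LLM, in the spirit of the notebook referenced above; the endpoint name is a placeholder, and authentication is assumed to come from the Databricks runtime or environment:

```python
from langchain.llms import Databricks

# Wraps a model serving endpoint in the current workspace.
llm = Databricks(endpoint_name="dolly")
print(llm("How are you?"))
```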
@@ -16,4 +16,4 @@ To import this vectorstore:
 from langchain.vectorstores import Dingo
 ```

-For a more detailed walkthrough of the DingoDB wrapper, see [this notebook](/docs/integrations/vectorstores/dingo.html)
+For a more detailed walkthrough of the DingoDB wrapper, see [this notebook](/docs/integrations/vectorstores/dingo)

@@ -20,4 +20,4 @@ To import this vectorstore:
 from langchain.vectorstores import Epsilla
 ```

-For a more detailed walkthrough of the Epsilla wrapper, see [this notebook](/docs/integrations/vectorstores/epsilla.html)
+For a more detailed walkthrough of the Epsilla wrapper, see [this notebook](/docs/integrations/vectorstores/epsilla)

@@ -20,7 +20,7 @@ There exists a GoldenQueryAPIWrapper utility which wraps this API. To import thi
 from langchain.utilities.golden_query import GoldenQueryAPIWrapper
 ```

-For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/golden_query.html).
+For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/golden_query).

 ### Tool

@@ -59,7 +59,7 @@ So the final answer is: El Palmar, Spain
 'El Palmar, Spain'
 ```

-For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_serper.html).
+For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_serper).

 ### Tool

@@ -45,4 +45,4 @@ model("Once upon a time, ", callbacks=callbacks)

 You can find links to model file downloads in the [pyllamacpp](https://github.com/nomic-ai/pyllamacpp) repository.

-For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/gpt4all.html)
+For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/gpt4all)

@@ -24,4 +24,4 @@ There exists a Gradient Embedding model, which you can access with
 ```python
 from langchain.embeddings import GradientEmbeddings
 ```
-For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/gradient.html)
+For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/gradient)

@@ -30,7 +30,7 @@ To use the wrapper for a model hosted on Hugging Face Hub:
 ```python
 from langchain.llms import HuggingFaceHub
 ```
-For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](/docs/integrations/llms/huggingface_hub.html)
+For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](/docs/integrations/llms/huggingface_hub)

 ### Embeddings
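A minimal sketch of the Hub wrapper, assuming a Hugging Face API token in the environment; the repo id and model kwargs are illustrative:

```python
from langchain.llms import HuggingFaceHub

# Requires HUGGINGFACEHUB_API_TOKEN to be set in the environment.
llm = HuggingFaceHub(
    repo_id="google/flan-t5-xl",
    model_kwargs={"temperature": 0.5, "max_length": 64},
)
print(llm("Translate to German: How old are you?"))
```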
@@ -28,7 +28,7 @@ you don't, follow the next steps to start it:

 ## Using Infino

-See a [usage example of `InfinoCallbackHandler`](/docs/integrations/callbacks/infino.html).
+See a [usage example of `InfinoCallbackHandler`](/docs/integrations/callbacks/infino).

 ```python
 from langchain.callbacks import InfinoCallbackHandler

@@ -15,7 +15,7 @@ There exists a Jina Embeddings wrapper, which you can access with
 ```python
 from langchain.embeddings import JinaEmbeddings
 ```
-For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/jina.html)
+For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/jina)

 ## Deployment

@@ -20,4 +20,4 @@ To import this vectorstore:
 from langchain.vectorstores import LanceDB
 ```

-For a more detailed walkthrough of the LanceDB wrapper, see [this notebook](/docs/integrations/vectorstores/lancedb.html)
+For a more detailed walkthrough of the LanceDB wrapper, see [this notebook](/docs/integrations/vectorstores/lancedb)

@@ -15,7 +15,7 @@ There exists a LlamaCpp LLM wrapper, which you can access with
 ```python
 from langchain.llms import LlamaCpp
 ```
-For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/llamacpp.html)
+For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/llamacpp)

 ### Embeddings

@@ -23,4 +23,4 @@ There exists a LlamaCpp Embeddings wrapper, which you can access with
 ```python
 from langchain.embeddings import LlamaCppEmbeddings
 ```
-For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/llamacpp.html)
+For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/llamacpp)
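Both llama.cpp wrappers point at a local model file; a sketch with a placeholder path:

```python
from langchain.llms import LlamaCpp
from langchain.embeddings import LlamaCppEmbeddings

llm = LlamaCpp(model_path="./models/llama-7b.gguf", temperature=0.7)
print(llm("Name the planets in the solar system."))

embeddings = LlamaCppEmbeddings(model_path="./models/llama-7b.gguf")
vector = embeddings.embed_query("Hello world")
```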
@@ -28,4 +28,4 @@ To import this vectorstore:
 from langchain.vectorstores import Marqo
 ```

-For a more detailed walkthrough of the Marqo wrapper and some of its unique features, see [this notebook](/docs/integrations/vectorstores/marqo.html)
+For a more detailed walkthrough of the Marqo wrapper and some of its unique features, see [this notebook](/docs/integrations/vectorstores/marqo)

@@ -22,4 +22,4 @@ To import this vectorstore:
 from langchain.vectorstores import Milvus
 ```

-For a more detailed walkthrough of the `Milvus` wrapper, see [this notebook](/docs/integrations/vectorstores/milvus.html)
+For a more detailed walkthrough of the `Milvus` wrapper, see [this notebook](/docs/integrations/vectorstores/milvus)

@@ -11,7 +11,7 @@ Get a [Minimax group id](https://api.minimax.chat/user-center/basic-information)
 ## LLM

 There exists a Minimax LLM wrapper, which you can access with
-See a [usage example](/docs/modules/model_io/models/llms/integrations/minimax.html).
+See a [usage example](/docs/modules/model_io/models/llms/integrations/minimax).

 ```python
 from langchain.llms import Minimax

@@ -19,7 +19,7 @@ from langchain.llms import Minimax

 ## Chat Models

-See a [usage example](/docs/modules/model_io/models/chat/integrations/minimax.html)
+See a [usage example](/docs/modules/model_io/models/chat/integrations/minimax)

 ```python
 from langchain.chat_models import MiniMaxChat

@@ -1,9 +1,9 @@
 # MLflow AI Gateway

->[The MLflow AI Gateway](https://www.mlflow.org/docs/latest/gateway/index.html) service is a powerful tool designed to streamline the usage and management of various large
+>[The MLflow AI Gateway](https://www.mlflow.org/docs/latest/gateway/index) service is a powerful tool designed to streamline the usage and management of various large
 > language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface
 > that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests.
-> See [the MLflow AI Gateway documentation](https://mlflow.org/docs/latest/gateway/index.html) for more details.
+> See [the MLflow AI Gateway documentation](https://mlflow.org/docs/latest/gateway/index) for more details.

 ## Installation and Setup

@@ -52,7 +52,7 @@ mlflow gateway start --config-path /path/to/config.yaml
 > This module exports multivariate LangChain models in the langchain flavor and univariate LangChain
 > models in the pyfunc flavor.

-See the [API documentation and examples](https://www.mlflow.org/docs/latest/python_api/mlflow.langchain.html).
+See the [API documentation and examples](https://www.mlflow.org/docs/latest/python_api/mlflow.langchain).

@@ -7,7 +7,7 @@
 "source": [
 "# MLflow\n",
 "\n",
-">[MLflow](https://www.mlflow.org/docs/latest/what-is-mlflow.html) is a versatile, expandable, open-source platform for managing workflows and artifacts across the machine learning lifecycle. It has built-in integrations with many popular ML libraries, but can be used with any library, algorithm, or deployment tool. It is designed to be extensible, so you can write plugins to support new workflows, libraries, and tools.\n",
+">[MLflow](https://www.mlflow.org/docs/latest/what-is-mlflow) is a versatile, expandable, open-source platform for managing workflows and artifacts across the machine learning lifecycle. It has built-in integrations with many popular ML libraries, but can be used with any library, algorithm, or deployment tool. It is designed to be extensible, so you can write plugins to support new workflows, libraries, and tools.\n",
 "\n",
 "This notebook goes over how to track your LangChain experiments into your `MLflow Server`"
 ]

@@ -50,10 +50,10 @@ Momento can be used as a distributed memory store for LLMs.

 ### Chat Message History Memory

-See [this notebook](/docs/integrations/memory/momento_chat_message_history.html) for a walkthrough of how to use Momento as a memory store for chat message history.
+See [this notebook](/docs/integrations/memory/momento_chat_message_history) for a walkthrough of how to use Momento as a memory store for chat message history.

 ## Vector Store

 Momento Vector Index (MVI) can be used as a vector store.

-See [this notebook](/docs/integrations/vectorstores/momento_vector_index.html) for a walkthrough of how to use MVI as a vector store.
+See [this notebook](/docs/integrations/vectorstores/momento_vector_index) for a walkthrough of how to use MVI as a vector store.

@@ -31,7 +31,7 @@ db = SQLDatabase.from_uri(conn_str)
 db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
 ```

-From here, see the [SQL Chain](/docs/use_cases/tabular/sqlite.html) documentation on how to use.
+From here, see the [SQL Chain](/docs/use_cases/tabular/sqlite) documentation on how to use.

 ## LLMCache
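Continuing the snippet in the last hunk: once `db_chain` is built, it answers natural-language questions by generating and executing SQL. The question below is illustrative:

```python
# Generates SQL from the question, runs it against `db`,
# and summarizes the result.
db_chain.run("How many rows are in the employees table?")
```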
@@ -63,4 +63,4 @@ To import this vectorstore:
 from langchain.vectorstores import MyScale
 ```

-For a more detailed walkthrough of the MyScale wrapper, see [this notebook](/docs/integrations/vectorstores/myscale.html)
+For a more detailed walkthrough of the MyScale wrapper, see [this notebook](/docs/integrations/vectorstores/myscale)

@@ -29,7 +29,7 @@ To import this vectorstore:
 from langchain.vectorstores import Neo4jVector
 ```

-For a more detailed walkthrough of the Neo4j vector index wrapper, see [documentation](/docs/integrations/vectorstores/neo4jvector.html)
+For a more detailed walkthrough of the Neo4j vector index wrapper, see [documentation](/docs/integrations/vectorstores/neo4jvector)

 ### GraphCypherQAChain

@@ -41,7 +41,7 @@ from langchain.graphs import Neo4jGraph
 from langchain.chains import GraphCypherQAChain
 ```

-For a more detailed walkthrough of the Cypher-generating chain, see [documentation](/docs/use_cases/graph/graph_cypher_qa.html)
+For a more detailed walkthrough of the Cypher-generating chain, see [documentation](/docs/use_cases/graph/graph_cypher_qa)

 ### Constructing a knowledge graph from text

@@ -55,4 +55,4 @@ from langchain.graphs import Neo4jGraph
 from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformer
 ```

-For a more detailed walkthrough of generating graphs from text, see [documentation](/docs/use_cases/graph/diffbot_graphtransformer.html)
+For a more detailed walkthrough of generating graphs from text, see [documentation](/docs/use_cases/graph/diffbot_graphtransformer)

@@ -12,14 +12,14 @@ All instructions are in examples below.

 We have two different loaders: `NotionDirectoryLoader` and `NotionDBLoader`.

-See a [usage example for the NotionDirectoryLoader](/docs/integrations/document_loaders/notion.html).
+See a [usage example for the NotionDirectoryLoader](/docs/integrations/document_loaders/notion).

 ```python
 from langchain.document_loaders import NotionDirectoryLoader
 ```

-See a [usage example for the NotionDBLoader](/docs/integrations/document_loaders/notiondb.html).
+See a [usage example for the NotionDBLoader](/docs/integrations/document_loaders/notiondb).

 ```python

@@ -67,4 +67,4 @@ llm("What is the difference between a duck and a goose? And why there are so man
 ### Usage

 For a more detailed walkthrough of the OpenLLM Wrapper, see the
-[example notebook](/docs/integrations/llms/openllm.html)
+[example notebook](/docs/integrations/llms/openllm)

@@ -18,4 +18,4 @@ To import this vectorstore:
 from langchain.vectorstores import OpenSearchVectorSearch
 ```

-For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](/docs/integrations/vectorstores/opensearch.html)
+For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](/docs/integrations/vectorstores/opensearch)

@@ -29,7 +29,7 @@ There exists an OpenWeatherMapAPIWrapper utility which wraps this API. To import th
 from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper
 ```

-For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/openweathermap.html).
+For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/openweathermap).

 ### Tool

@@ -26,4 +26,4 @@ from langchain.vectorstores.pgvector import PGVector

 ### Usage

-For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](/docs/integrations/vectorstores/pgvector.html)
+For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](/docs/integrations/vectorstores/pgvector)

@@ -21,4 +21,4 @@ whether for semantic search or example selection.
 from langchain.vectorstores import Pinecone
 ```

-For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](/docs/integrations/vectorstores/pinecone.html)
+For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](/docs/integrations/vectorstores/pinecone)

@@ -40,10 +40,10 @@ for res in llm_results.generations:
 ```
 You can use the PromptLayer request ID to add a prompt, score, or other metadata to your request. [Read more about it here](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).

-This LLM is identical to the [OpenAI](/docs/ecosystem/integrations/openai.html) LLM, except that
+This LLM is identical to the [OpenAI](/docs/ecosystem/integrations/openai) LLM, except that
 - all your requests will be logged to your PromptLayer account
 - you can add `pl_tags` when instantiating to tag your requests on PromptLayer
 - you can add `return_pl_id` when instantiating to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).

-PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](/docs/integrations/chat/promptlayer_chatopenai.html) and `PromptLayerOpenAIChat`
+PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](/docs/integrations/chat/promptlayer_chatopenai) and `PromptLayerOpenAIChat`
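A sketch of the `pl_tags` and `return_pl_id` options described above, assuming OpenAI and PromptLayer keys in the environment; the `generation_info` key follows PromptLayer's documented pattern:

```python
from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(pl_tags=["langchain"], return_pl_id=True)
result = llm.generate(["Tell me a joke"])

for generation in result.generations[0]:
    # With return_pl_id=True, the request id rides along in generation_info.
    pl_request_id = generation.generation_info["pl_request_id"]
```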
@@ -16,4 +16,4 @@ There is a basic wrapper around `SemaDB` collections allowing you to use it as a
 from langchain.vectorstores import SemaDB
 ```

-You can follow a tutorial on how to use the wrapper in [this notebook](/docs/integrations/vectorstores/semadb.html).
+You can follow a tutorial on how to use the wrapper in [this notebook](/docs/integrations/vectorstores/semadb).

@@ -16,7 +16,7 @@ view these connections from the dashboard and retrieve data using the server-sid

 1. Create an account in the [dashboard](https://dashboard.psychic.dev/).
 2. Use the [react library](https://docs.psychic.dev/sidekick-link) to add the Psychic link modal to your frontend react app. You will use this to connect the SaaS apps.
-3. Once you have created a connection, you can use the `PsychicLoader` by following the [example notebook](/docs/integrations/document_loaders/psychic.html)
+3. Once you have created a connection, you can use the `PsychicLoader` by following the [example notebook](/docs/integrations/document_loaders/psychic)

 ## Advantages vs Other Document Loaders

@@ -24,4 +24,4 @@ To import this vectorstore:
 from langchain.vectorstores import Qdrant
 ```

-For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](/docs/integrations/vectorstores/qdrant.html)
+For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](/docs/integrations/vectorstores/qdrant)

@@ -103,7 +103,7 @@ To import this vectorstore:
 from langchain.vectorstores import Redis
 ```

-For a more detailed walkthrough of the Redis vectorstore wrapper, see [this notebook](/docs/integrations/vectorstores/redis.html).
+For a more detailed walkthrough of the Redis vectorstore wrapper, see [this notebook](/docs/integrations/vectorstores/redis).

 ### Retriever

@@ -114,7 +114,7 @@ Redis can be used to persist LLM conversations.

 #### Vector Store Retriever Memory

-For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](/docs/modules/memory/types/vectorstore_retriever_memory.html).
+For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](/docs/modules/memory/types/vectorstore_retriever_memory).

 #### Chat Message History Memory
-For a detailed example of Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history.html).
+For a detailed example of Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history).
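A brief sketch of the chat-message-history store mentioned in the last hunk; the URL and session id are placeholders:

```python
from langchain.memory import RedisChatMessageHistory

history = RedisChatMessageHistory(session_id="user-42", url="redis://localhost:6379/0")
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help?")
print(history.messages)  # persists across processes under the session key
```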
@@ -15,7 +15,7 @@ custom LLMs, you can use the `SelfHostedPipeline` parent class.
 from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
 ```

-For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](/docs/integrations/llms/runhouse.html)
+For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](/docs/integrations/llms/runhouse)

 ## Self-hosted Embeddings
 There are several ways to use self-hosted embeddings with LangChain via Runhouse.

@@ -26,4 +26,4 @@ the `SelfHostedEmbedding` class.
 from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
 ```

-For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](/docs/integrations/text_embedding/self-hosted.html)
+For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](/docs/integrations/text_embedding/self-hosted)

@@ -17,7 +17,7 @@ There exists a SerpAPI utility which wraps this API. To import this utility:
 from langchain.utilities import SerpAPIWrapper
 ```

-For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/serpapi.html).
+For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/serpapi).

 ### Tool
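A sketch of the wrapper and its ready-made tool, assuming `SERPAPI_API_KEY` is set in the environment:

```python
from langchain.utilities import SerpAPIWrapper
from langchain.agents import load_tools

search = SerpAPIWrapper()
print(search.run("Obama's first name?"))

# The same utility is available as a pre-registered tool for agents.
tools = load_tools(["serpapi"])
```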
@@ -19,4 +19,4 @@ To import this vectorstore:
 from langchain.vectorstores import SKLearnVectorStore
 ```

-For a more detailed walkthrough of the SKLearnVectorStore wrapper, see [this notebook](/docs/integrations/vectorstores/sklearn.html).
+For a more detailed walkthrough of the SKLearnVectorStore wrapper, see [this notebook](/docs/integrations/vectorstores/sklearn).

@@ -13,7 +13,7 @@ pip install spacy

 ## Text Splitter

-See a [usage example](/docs/modules/data_connection/document_transformers/text_splitters/split_by_token.html#spacy).
+See a [usage example](/docs/modules/data_connection/document_transformers/text_splitters/split_by_token#spacy).

 ```python
 from langchain.text_splitter import SpacyTextSplitter
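A short sketch of the splitter, assuming the `en_core_web_sm` spaCy pipeline has been downloaded; the input file is a placeholder:

```python
from langchain.text_splitter import SpacyTextSplitter

# Splits on sentence boundaries detected by spaCy, packing sentences
# into chunks of roughly chunk_size characters.
splitter = SpacyTextSplitter(chunk_size=1000)
with open("state_of_the_union.txt") as f:
    chunks = splitter.split_text(f.read())
```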
@@ -4,7 +4,7 @@

 ## Installation and Setup

-See [setup instructions](/docs/integrations/document_loaders/spreedly.html).
+See [setup instructions](/docs/integrations/document_loaders/spreedly).

 ## Document Loader

@@ -5,7 +5,7 @@

 ## Installation and Setup

-See [setup instructions](/docs/integrations/document_loaders/stripe.html).
+See [setup instructions](/docs/integrations/document_loaders/stripe).

 ## Document Loader

@@ -19,4 +19,4 @@ To import this vectorstore:
 from langchain.vectorstores import Tair
 ```

-For a more detailed walkthrough of the Tair wrapper, see [this notebook](/docs/integrations/vectorstores/tair.html)
+For a more detailed walkthrough of the Tair wrapper, see [this notebook](/docs/integrations/vectorstores/tair)

@@ -5,7 +5,7 @@

 ## Installation and Setup

-See [setup instructions](/docs/integrations/document_loaders/telegram.html).
+See [setup instructions](/docs/integrations/document_loaders/telegram).

 ## Document Loader

@@ -12,4 +12,4 @@ To import this vectorstore:
 from langchain.vectorstores import TencentVectorDB
 ```

-For a more detailed walkthrough of the TencentVectorDB wrapper, see [this notebook](/docs/integrations/vectorstores/tencentvectordb.html)
+For a more detailed walkthrough of the TencentVectorDB wrapper, see [this notebook](/docs/integrations/vectorstores/tencentvectordb)

@@ -10,7 +10,7 @@
 pip install py-trello beautifulsoup4
 ```

-See [setup instructions](/docs/integrations/document_loaders/trello.html).
+See [setup instructions](/docs/integrations/document_loaders/trello).

 ## Document Loader

@@ -1,7 +1,7 @@
 # Typesense

 > [Typesense](https://typesense.org) is an open-source, in-memory search engine, that you can either
-> [self-host](https://typesense.org/docs/guide/install-typesense.html#option-2-local-machine-self-hosting) or run
+> [self-host](https://typesense.org/docs/guide/install-typesense#option-2-local-machine-self-hosting) or run
 > on [Typesense Cloud](https://cloud.typesense.org/).
 > `Typesense` focuses on performance by storing the entire index in RAM (with a backup on disk) and also
 > focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.

@@ -39,4 +39,4 @@ langchain.llm_cache = UpstashRedisCache(redis_=Redis(url=URL, token=TOKEN))
 Upstash Redis can be used to persist LLM conversations.

 #### Chat Message History Memory
-An example of Upstash Redis for caching conversation message history can be seen in [this notebook](/docs/integrations/memory/upstash_redis_chat_message_history.html).
+An example of Upstash Redis for caching conversation message history can be seen in [this notebook](/docs/integrations/memory/upstash_redis_chat_message_history).

@@ -7,7 +7,7 @@
 "source": [
 "# Chat Over Documents with Vectara\n",
 "\n",
-"This notebook is based on the [chat_vector_db](https://github.com/langchain-ai/langchain/blob/master/docs/modules/chains/index_examples/chat_vector_db.html) notebook, but using Vectara as the vector database."
+"This notebook is based on the [chat_vector_db](https://github.com/langchain-ai/langchain/blob/master/docs/modules/chains/index_examples/chat_vector_db) notebook, but using Vectara as the vector database."
 ]
 },
 {

@@ -35,4 +35,4 @@ To import this vectorstore:
 from langchain.vectorstores import Weaviate
 ```

-For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](/docs/integrations/vectorstores/weaviate.html)
+For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](/docs/integrations/vectorstores/weaviate)

@@ -25,7 +25,7 @@ There exists a WolframAlphaAPIWrapper utility which wraps this API. To import th
 from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
 ```

-For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/wolfram_alpha.html).
+For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/wolfram_alpha).

 ### Tool

@@ -93,10 +93,10 @@ llm(
 ### Usage

 For more information and detailed examples, refer to the
-[example for xinference LLMs](/docs/integrations/llms/xinference.html)
+[example for xinference LLMs](/docs/integrations/llms/xinference)

 ### Embeddings

 Xinference also supports embedding queries and documents. See
-[example for xinference embeddings](/docs/integrations/text_embedding/xinference.html)
+[example for xinference embeddings](/docs/integrations/text_embedding/xinference)
 for a more detailed demo.

@@ -19,4 +19,4 @@ whether for semantic search or example selection.
 from langchain.vectorstores import Milvus
 ```

-For a more detailed walkthrough of the Milvus wrapper, see [this notebook](/docs/integrations/vectorstores/zilliz.html)
+For a more detailed walkthrough of the Milvus wrapper, see [this notebook](/docs/integrations/vectorstores/zilliz)
@@ -11,7 +11,7 @@
 "\n",
 "This notebook demonstrates a sample composition of the `Speak`, `Klarna`, and `Spoonacular` APIs.\n",
 "\n",
-"For a detailed walkthrough of the OpenAPI chains wrapped within the NLAToolkit, see the [OpenAPI Operation Chain](/docs/use_cases/apis/openapi.html) notebook.\n",
+"For a detailed walkthrough of the OpenAPI chains wrapped within the NLAToolkit, see the [OpenAPI Operation Chain](/docs/use_cases/apis/openapi) notebook.\n",
 "\n",
 "### First, import dependencies and load the LLM"
 ]

@@ -379,7 +379,7 @@
 "agent.run(\n",
 " \"\"\"\n",
 "who bought the most expensive ticket?\n",
-"You can find all supported function types in https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/dataframe.html\n",
+"You can find all supported function types in https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/dataframe\n",
 "\"\"\"\n",
 ")"
 ]

@@ -6,7 +6,7 @@
 "source": [
 "# Apify\n",
 "\n",
-"This notebook shows how to use the [Apify integration](/docs/ecosystem/integrations/apify.html) for LangChain.\n",
+"This notebook shows how to use the [Apify integration](/docs/ecosystem/integrations/apify) for LangChain.\n",
 "\n",
 "[Apify](https://apify.com) is a cloud platform for web scraping and data extraction,\n",
 "which provides an [ecosystem](https://apify.com/store) of more than a thousand\n",

@@ -72,7 +72,7 @@
 "source": [
 "Then run the Actor, wait for it to finish, and fetch its results from the Apify dataset into a LangChain document loader.\n",
 "\n",
-"Note that if you already have some results in an Apify dataset, you can load them directly using `ApifyDatasetLoader`, as shown in [this notebook](/docs/integrations/document_loaders/apify_dataset.html). In that notebook, you'll also find the explanation of the `dataset_mapping_function`, which is used to map fields from the Apify dataset records to LangChain `Document` fields."
+"Note that if you already have some results in an Apify dataset, you can load them directly using `ApifyDatasetLoader`, as shown in [this notebook](/docs/integrations/document_loaders/apify_dataset). In that notebook, you'll also find the explanation of the `dataset_mapping_function`, which is used to map fields from the Apify dataset records to LangChain `Document` fields."
 ]
 },
 {

@@ -6,7 +6,7 @@
 "source": [
 "# Typesense\n",
 "\n",
-"> [Typesense](https://typesense.org) is an open-source, in-memory search engine, that you can either [self-host](https://typesense.org/docs/guide/install-typesense.html#option-2-local-machine-self-hosting) or run on [Typesense Cloud](https://cloud.typesense.org/).\n",
+"> [Typesense](https://typesense.org) is an open-source, in-memory search engine, that you can either [self-host](https://typesense.org/docs/guide/install-typesense#option-2-local-machine-self-hosting) or run on [Typesense Cloud](https://cloud.typesense.org/).\n",
 ">\n",
 "> Typesense focuses on performance by storing the entire index in RAM (with a backup on disk) and also focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.\n",
 ">\n",

@@ -7,7 +7,7 @@
 "source": [
 "# Custom agent with tool retrieval\n",
 "\n",
-"This notebook builds off of [this notebook](/docs/modules/agents/how_to/custom_llm_agent.html) and assumes familiarity with how agents work.\n",
+"This notebook builds off of [this notebook](/docs/modules/agents/how_to/custom_llm_agent) and assumes familiarity with how agents work.\n",
 "\n",
 "The novel idea introduced in this notebook is the idea of using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many tools to select from. You cannot put the description of all the tools in the prompt (because of context length issues) so instead you dynamically select the N tools you do want to consider using at run time.\n",
 "\n",

@@ -9,8 +9,8 @@
 "\n",
 "This notebook goes over adding memory to **both** an Agent and its tools. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:\n",
 "\n",
-"- [Adding memory to an LLM Chain](/docs/modules/memory/integrations/adding_memory.html)\n",
-"- [Custom Agents](/docs/modules/agents/how_to/custom_agent.html)\n",
+"- [Adding memory to an LLM Chain](/docs/modules/memory/integrations/adding_memory)\n",
+"- [Custom Agents](/docs/modules/agents/how_to/custom_agent)\n",
 "\n",
 "We are going to create a custom Agent. The agent has access to a conversation memory, search tool, and a summarization tool. The summarization tool also needs access to the conversation memory."
 ]

@@ -1,6 +1,6 @@
 # Callbacks for custom chains

 When you create a custom chain you can easily set it up to use the same callback system as all the built-in chains.
-`_call`, `_generate`, `_run`, and equivalent async methods on Chains / LLMs / Chat Models / Agents / Tools now receive a 2nd argument called `run_manager` which is bound to that run, and contains the logging methods that can be used by that object (i.e. `on_llm_new_token`). This is useful when constructing a custom chain. See this guide for more information on how to [create custom chains and use callbacks inside them](/docs/modules/chains/how_to/custom_chain.html).
+`_call`, `_generate`, `_run`, and equivalent async methods on Chains / LLMs / Chat Models / Agents / Tools now receive a 2nd argument called `run_manager` which is bound to that run, and contains the logging methods that can be used by that object (i.e. `on_llm_new_token`). This is useful when constructing a custom chain. See this guide for more information on how to [create custom chains and use callbacks inside them](/docs/modules/chains/how_to/custom_chain).

@@ -9,7 +9,7 @@
 "\n",
 "LangChain provides async support for Chains by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n",
 "\n",
-"Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`), `ChatVectorDBChain`, and [QA chains](/docs/use_cases/question_answering/question_answering.html). Async support for other chains is on the roadmap."
+"Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`), `ChatVectorDBChain`, and [QA chains](/docs/use_cases/question_answering/question_answering). Async support for other chains is on the roadmap."
 ]
 },
 {
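A minimal sketch of the async support the last hunk describes, running several generations concurrently through `arun`; an OpenAI key is assumed:

```python
import asyncio
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
chain = LLMChain(llm=llm, prompt=prompt)

async def main():
    # The three generations run concurrently instead of back to back.
    results = await asyncio.gather(
        *(chain.arun(product=p) for p in ["socks", "shoes", "hats"])
    )
    print(results)

asyncio.run(main())
```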
|
@ -147,7 +147,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Tips: You can easily integrate a `Chain` object as a `Tool` in your `Agent` via its `run` method. See an example [here](/docs/modules/agents/tools/how_to/custom_tools.html)."
|
||||
"Tips: You can easily integrate a `Chain` object as a `Tool` in your `Agent` via its `run` method. See an example [here](/docs/modules/agents/tools/how_to/custom_tools)."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
@ -2,7 +2,7 @@
|
||||
|
||||
This covers how to load all documents in a directory.
|
||||
|
||||
Under the hood, by default this uses the [UnstructuredLoader](/docs/integrations/document_loaders/unstructured_file.html).
|
||||
Under the hood, by default this uses the [UnstructuredLoader](/docs/integrations/document_loaders/unstructured_file).
|
||||
|
||||
```python
|
||||
from langchain.document_loaders import DirectoryLoader
|
||||
|
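A short sketch of the loader in the last hunk; the directory path and glob are placeholders, and `loader_cls` swaps in a plain text loader for the Unstructured default:

```python
from langchain.document_loaders import DirectoryLoader, TextLoader

loader = DirectoryLoader("docs/", glob="**/*.md", loader_cls=TextLoader)
docs = loader.load()
```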
@@ -9,8 +9,8 @@
 "\n",
 "This notebook goes over adding memory to an Agent. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:\n",
 "\n",
-"- [Memory in LLMChain](/docs/modules/memory/how_to/adding_memory.html)\n",
-"- [Custom Agents](/docs/modules/agents/how_to/custom_agent.html)\n",
+"- [Memory in LLMChain](/docs/modules/memory/how_to/adding_memory)\n",
+"- [Custom Agents](/docs/modules/agents/how_to/custom_agent)\n",
 "\n",
 "In order to add a memory to an agent we are going to perform the following steps:\n",
 "\n",

@@ -9,9 +9,9 @@
 "\n",
 "This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:\n",
 "\n",
-"- [Memory in LLMChain](/docs/modules/memory/how_to/adding_memory.html)\n",
-"- [Custom Agents](/docs/modules/agents/how_to/custom_agent.html)\n",
-"- [Memory in Agent](/docs/modules/memory/how_to/agent_with_memory.html)\n",
+"- [Memory in LLMChain](/docs/modules/memory/how_to/adding_memory)\n",
+"- [Custom Agents](/docs/modules/agents/how_to/custom_agent)\n",
+"- [Memory in Agent](/docs/modules/memory/how_to/agent_with_memory)\n",
 "\n",
 "In order to add a memory with an external message store to an agent we are going to do the following steps:\n",
 "\n",

@@ -1115,8 +1115,8 @@
 "To learn more about the SQL Agent and how it works we refer to the [SQL Agent Toolkit](/docs/integrations/toolkits/sql_database) documentation.\n",
 "\n",
 "You can also check Agents for other document types:\n",
-"- [Pandas Agent](/docs/integrations/toolkits/pandas.html)\n",
-"- [CSV Agent](/docs/integrations/toolkits/csv.html)"
+"- [Pandas Agent](/docs/integrations/toolkits/pandas)\n",
+"- [CSV Agent](/docs/integrations/toolkits/csv)"
 ]
 },
 {

@@ -42,7 +42,7 @@ qa.run(query)
 </CodeOutputBlock>

 ## Chain Type
-You can easily specify different chain types to load and use in the RetrievalQA chain. For a more detailed walkthrough of these types, please see [this notebook](/docs/modules/chains/additional/question_answering.html).
+You can easily specify different chain types to load and use in the RetrievalQA chain. For a more detailed walkthrough of these types, please see [this notebook](/docs/modules/chains/additional/question_answering).

 There are two ways to load different chain types. First, you can specify the chain type argument in the `from_chain_type` method. This allows you to pass in the name of the chain type you want to use. For example, in the below we change the chain type to `map_reduce`.

@@ -65,7 +65,7 @@ qa.run(query)
 </CodeOutputBlock>

-The above way allows you to really simply change the chain_type, but it doesn't provide a ton of flexibility over parameters to that chain type. If you want to control those parameters, you can load the chain directly (as you did in [this notebook](/docs/modules/chains/additional/question_answering.html)) and then pass that directly to the RetrievalQA chain with the `combine_documents_chain` parameter. For example:
+The above way allows you to really simply change the chain_type, but it doesn't provide a ton of flexibility over parameters to that chain type. If you want to control those parameters, you can load the chain directly (as you did in [this notebook](/docs/modules/chains/additional/question_answering)) and then pass that directly to the RetrievalQA chain with the `combine_documents_chain` parameter. For example:

 ```python

@@ -89,7 +89,7 @@ qa.run(query)
 </CodeOutputBlock>

 ## Custom Prompts
-You can pass in custom prompts to do question answering. These prompts are the same prompts as you can pass into the [base question answering chain](/docs/modules/chains/additional/question_answering.html)
+You can pass in custom prompts to do question answering. These prompts are the same prompts as you can pass into the [base question answering chain](/docs/modules/chains/additional/question_answering)

 ```python
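To make the last three hunks concrete, a sketch of the two ways of choosing a chain type; `retriever` stands in for any vectorstore retriever, and an OpenAI key is assumed:

```python
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.chains.question_answering import load_qa_chain

llm = OpenAI(temperature=0)

# 1) Pick the chain type by name.
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="map_reduce", retriever=retriever)

# 2) Build the combine-documents chain yourself for full parameter control,
#    then hand it to RetrievalQA.
qa_chain = load_qa_chain(llm, chain_type="map_reduce")
qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=retriever)

qa.run("What did the president say about the economy?")
```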