Compare commits


5 Commits

| Author | SHA1 | Message | Date |
| :- | :- | :- | :- |
| William Fu-Hinthorn | d05f99930c | silly mypy fun | 2023-10-24 21:59:11 -07:00 |
| William Fu-Hinthorn | 19b07f6d61 | update | 2023-10-24 19:08:28 -07:00 |
| William Fu-Hinthorn | 5c9b679dde | Runnable Traceable | 2023-10-24 17:32:34 -07:00 |
| William Fu-Hinthorn | 1cd73e8dbd | merge | 2023-10-24 17:22:13 -07:00 |
| William Fu-Hinthorn | d085f51c5d | update | 2023-10-24 13:17:50 -07:00 |
426 changed files with 1743 additions and 59980 deletions

View File

@@ -77,7 +77,7 @@ tell Poetry to use the virtualenv python environment (`poetry config virtualenvs
There are two separate projects in this repository:
- `langchain`: core langchain code, abstractions, and use cases
- - `langchain.experimental`: see the [Experimental README](https://github.com/langchain-ai/langchain/tree/master/libs/experimental/README.md) for more information.
+ - `langchain.experimental`: see the [Experimental README](../libs/experimental/README.md) for more information.
Each of these has its own development environment. Docs are run from the top-level makefile, but development
is split across separate test & release flows.
@@ -129,7 +129,7 @@ To run unit tests in Docker:
make docker_tests
```
- There are also [integration tests and code-coverage](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/tests/README.md) available.
+ There are also [integration tests and code-coverage](../libs/langchain/tests/README.md) available.
### Formatting and Linting

View File

@@ -4,7 +4,7 @@ Example code for building applications with LangChain, with an emphasis on more
Notebook | Description
:- | :-
- [LLaMA2_sql_chat.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/LLaMA2_sql_chat.ipynb) | Build a chat application that interacts with a SQL database using an open source llm (llama2), specifically demonstrated on an SQLite database containing rosters.
+ [LLaMA2_sql_chat.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/LLaMA2_sql_chat.ipynb) | Build a chat application that interacts with a sql database using an open source llm (llama2), specifically demonstrated on a sqlite database containing nba rosters.
[Semi_Structured_RAG.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_Structured_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data, including text and tables, using unstructured for parsing, multi-vector retriever for storing, and lcel for implementing chains.
[Semi_structured_and_multi_moda...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using unstructured for parsing, multi-vector retriever for storage and retrieval, and lcel for implementing chains.
[Semi_structured_multi_modal_RA...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using various tools and methods such as unstructured for parsing, multi-vector retriever for storing, lcel for implementing chains, and open source language models like llama2, llava, and gpt4all.
@@ -12,14 +12,14 @@ Notebook | Description
[autogpt/marathon_times.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/marathon_times.ipynb) | Implement autogpt for finding winning marathon times.
[baby_agi.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi.ipynb) | Implement babyagi, an ai agent that can generate and execute tasks based on a given objective, with the flexibility to swap out specific vectorstores/model providers.
[baby_agi_with_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi_with_agent.ipynb) | Swap out the execution chain in the babyagi notebook with an agent that has access to tools, aiming to obtain more reliable information.
- [camel_role_playing.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/camel_role_playing.ipynb) | Implement the camel framework for creating autonomous cooperative agents in large-scale language models, using role-playing and inception prompting to guide chat agents towards task completion.
+ [camel_role_playing.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/camel_role_playing.ipynb) | Implement the camel framework for creating autonomous cooperative agents in large scale language models, using role-playing and inception prompting to guide chat agents towards task completion.
[causal_program_aided_language_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/causal_program_aided_language_model.ipynb) | Implement the causal program-aided language (cpal) chain, which improves upon the program-aided language (pal) by incorporating causal structure to prevent hallucination in language models, particularly when dealing with complex narratives and math problems with nested dependencies.
[code-analysis-deeplake.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/code-analysis-deeplake.ipynb) | Analyze its own code base with the help of gpt and activeloop's deep lake.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval.ipynb) | Build a custom agent that can interact with ai plugins by retrieving tools and creating natural language wrappers around openapi endpoints.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb) | Build a custom agent with plugin retrieval functionality, utilizing ai plugins from the `plugnplai` directory.
[databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to databricks runtimes and databricks sql.
[deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using activeloop's deep lake with gpt4.
- [elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl API.
+ [elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl api.
[forward_looking_retrieval_augm...](https://github.com/langchain-ai/langchain/tree/master/cookbook/forward_looking_retrieval_augmented_generation.ipynb) | Implement the forward-looking active retrieval augmented generation (flare) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on these tokens, and retrieves relevant documents to continue generating the answer.
[generative_agents_interactive_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb) | Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a langchain retriever.
[gymnasium_agent_simulation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/gymnasium_agent_simulation.ipynb) | Create a simple agent-environment interaction loop in simulated environments like text-based games with gymnasium.
@@ -37,7 +37,7 @@ Notebook | Description
[multiagent_authoritarian.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_authoritarian.ipynb) | Implement a multi-agent simulation where a privileged agent controls the conversation, including deciding who speaks and when the conversation ends, in the context of a simulated news network.
[multiagent_bidding.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_bidding.ipynb) | Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example.
[myscale_vector_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/myscale_vector_sql.ipynb) | Access and interact with the myscale integrated vector database, which can enhance the performance of language model (llm) applications.
- [openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question-answering system by incorporating openai functions into a retrieval pipeline.
+ [openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question answering system by incorporating openai functions into a retrieval pipeline.
[petting_zoo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/petting_zoo.ipynb) | Create multi-agent simulations with simulated environments using the petting zoo library.
[plan_and_execute_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/plan_and_execute_agent.ipynb) | Create plan-and-execute agents that accomplish objectives by planning tasks with a language model (llm) and executing them with a separate agent.
[press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
@@ -46,7 +46,7 @@ Notebook | Description
[self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
[smart_llm.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/smart_llm.ipynb) | Implement a smartllmchain, a self-critique chain that generates multiple output proposals, critiques them to find the best one, and then improves upon it to produce a final output.
[tree_of_thought.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/tree_of_thought.ipynb) | Query a large language model using the tree of thought technique.
- [twitter-the-algorithm-analysis...](https://github.com/langchain-ai/langchain/tree/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) | Analyze the source code of the Twitter algorithm with the help of gpt4 and activeloop's deep lake.
+ [twitter-the-algorithm-analysis...](https://github.com/langchain-ai/langchain/tree/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) | Analyze the source code of the twitter algorithm with the help of gpt4 and activeloop's deep lake.
[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
- [two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
+ [two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialoguesimulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.

View File

@@ -14,7 +14,6 @@ cd ../_dist
poetry run python scripts/model_feat_table.py
poetry run nbdoc_build --srcdir docs
cp ../cookbook/README.md src/pages/cookbook.mdx
- cp ../.github/CONTRIBUTING.md docs/contributing.md
poetry run python scripts/generate_api_reference_links.py
yarn install
yarn start

View File

@@ -18,7 +18,7 @@ import CodeBlock from "@theme/CodeBlock";
</Tabs>
- For more details, see our [Installation guide](/docs/get_started/installation).
+ For more details, see our [Installation guide](/docs/get_started/installation.html).
## Environment setup

View File

@@ -20,11 +20,11 @@ This guide aims to provide a comprehensive overview of the requirements for depl
Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include:
- - [Ray Serve](/docs/ecosystem/integrations/ray_serve)
+ - [Ray Serve](/docs/ecosystem/integrations/ray_serve.html)
- [BentoML](https://github.com/bentoml/BentoML)
- - [OpenLLM](/docs/ecosystem/integrations/openllm)
- - [Modal](/docs/ecosystem/integrations/modal)
- - [Jina](/docs/ecosystem/integrations/jina#deployment)
+ - [OpenLLM](/docs/ecosystem/integrations/openllm.html)
+ - [Modal](/docs/ecosystem/integrations/modal.html)
+ - [Jina](/docs/ecosystem/integrations/jina.html#deployment)
These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.

View File

@@ -14,7 +14,7 @@
"> using both human and machine feedback. We provide support for each step in the MLOps cycle, \n",
"> from data labeling to model monitoring.\n",
"\n",
- "<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/integrations/callbacks/argilla\">\n",
+ "<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/integrations/callbacks/argilla.html\">\n",
" <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
"</a>"
]

View File

@@ -5,7 +5,7 @@
"id": "a9ab2a39-7c2d-4119-9dc7-8035fdfba3cb",
"metadata": {},
"source": [
- "# LangSmith Chat Datasets\n",
+ "# Fine-Tuning on LangSmith Chat Datasets\n",
"\n",
"This notebook demonstrates an easy way to load a LangSmith chat dataset fine-tune a model on that data.\n",
"The process is simple and comprises 3 steps.\n",
@@ -271,7 +271,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.9.1"
+ "version": "3.11.2"
}
},
"nbformat": 4,

View File

@@ -5,7 +5,7 @@
"id": "a9ab2a39-7c2d-4119-9dc7-8035fdfba3cb",
"metadata": {},
"source": [
- "# LangSmith LLM Runs\n",
+ "# Fine-Tuning on LangSmith LLM Runs\n",
"\n",
"This notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data.\n",
"The process is simple and comprises 3 steps.\n",
@@ -421,7 +421,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.9.1"
+ "version": "3.11.2"
}
},
"nbformat": 4,

View File

@@ -13,7 +13,7 @@
"\n",
"## Prerequisites\n",
"\n",
- "You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/integrations/tools/apify) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
+ "You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/integrations/tools/apify.html) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
]
},
{

View File

@@ -7,7 +7,7 @@
"source": [
"# Pandas DataFrame\n",
"\n",
- "This notebook goes over how to load data from a [pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/index) DataFrame."
+ "This notebook goes over how to load data from a [pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/index.html) DataFrame."
]
},
{

View File

@@ -5,10 +5,10 @@
"metadata": {},
"source": [
"# Psychic\n",
- "This notebook covers how to load documents from `Psychic`. See [here](/docs/ecosystem/integrations/psychic) for more details.\n",
+ "This notebook covers how to load documents from `Psychic`. See [here](/docs/ecosystem/integrations/psychic.html) for more details.\n",
"\n",
"## Prerequisites\n",
- "1. Follow the Quick Start section in [this document](/docs/ecosystem/integrations/psychic)\n",
+ "1. Follow the Quick Start section in [this document](/docs/ecosystem/integrations/psychic.html)\n",
"2. Log into the [Psychic dashboard](https://dashboard.psychic.dev/) and get your secret key\n",
"3. Install the frontend react library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify."
]

View File

@@ -319,7 +319,7 @@
"metadata": {},
"source": [
"### Standard Cache\n",
- "Use [Redis](/docs/ecosystem/integrations/redis) to cache prompts and responses."
+ "Use [Redis](/docs/ecosystem/integrations/redis.html) to cache prompts and responses."
]
},
{
@@ -405,7 +405,7 @@
"metadata": {},
"source": [
"### Semantic Cache\n",
- "Use [Redis](/docs/ecosystem/integrations/redis) to cache prompts and responses and evaluate hits based on semantic similarity."
+ "Use [Redis](/docs/ecosystem/integrations/redis.html) to cache prompts and responses and evaluate hits based on semantic similarity."
]
},
{
@@ -730,7 +730,7 @@
},
"source": [
"## `Momento` Cache\n",
- "Use [Momento](/docs/ecosystem/integrations/momento) to cache prompts and responses.\n",
+ "Use [Momento](/docs/ecosystem/integrations/momento.html) to cache prompts and responses.\n",
"\n",
"Requires momento to use, uncomment below to install:"
]
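As a point of reference, the standard-cache pattern this hunk refers to looks roughly like the following in the langchain API of this era (a sketch assuming a local Redis server; the prompt is illustrative):

```python
import langchain
from redis import Redis
from langchain.cache import RedisCache
from langchain.llms import OpenAI

# Route every LLM call through a Redis-backed cache
langchain.llm_cache = RedisCache(redis_=Redis(host="localhost", port=6379))

llm = OpenAI(temperature=0)
llm("Tell me a joke")  # first call hits the API
llm("Tell me a joke")  # identical prompt is served from the cache
```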

View File

@@ -75,9 +75,9 @@ from langchain.llms.sagemaker_endpoint import ContentHandlerBase
>[AWS S3 Directory](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html)
>[AWS S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html)
- See a [usage example for S3DirectoryLoader](/docs/integrations/document_loaders/aws_s3_directory).
+ See a [usage example for S3DirectoryLoader](/docs/integrations/document_loaders/aws_s3_directory.html).
- See a [usage example for S3FileLoader](/docs/integrations/document_loaders/aws_s3_file).
+ See a [usage example for S3FileLoader](/docs/integrations/document_loaders/aws_s3_file.html).
```python
from langchain.document_loaders import S3DirectoryLoader, S3FileLoader
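# Hypothetical usage sketch -- the bucket name and prefix are placeholders,
# and AWS credentials are assumed to be configured in the environment:
loader = S3DirectoryLoader("my-bucket", prefix="reports/")
docs = loader.load()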

View File

@@ -83,7 +83,7 @@ First, we need to install several python packages.
pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
```
- See a [usage example and authorizing instructions](/docs/integrations/document_loaders/google_drive).
+ See a [usage example and authorizing instructions](/docs/integrations/document_loaders/google_drive.html).
```python
from langchain.document_loaders import GoogleDriveLoader
@@ -182,7 +182,7 @@ There exists a `GoogleSearchAPIWrapper` utility which wraps this API. To import
from langchain.utilities import GoogleSearchAPIWrapper
```
- For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_search).
+ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_search.html).
We can easily load this wrapper as a Tool (to use with an Agent). We can do this with:
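The diff context is truncated here; in docs of this vintage, a sentence like the one above is typically followed by a snippet along these lines (a sketch using the standard `load_tools` helper):

```python
from langchain.agents import load_tools

tools = load_tools(["google-search"])
```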

View File

@@ -70,13 +70,13 @@ from langchain.chat_models import AzureChatOpenAI
pip install azure-storage-blob
```
- See a [usage example for the Azure Blob Storage](/docs/integrations/document_loaders/azure_blob_storage_container).
+ See a [usage example for the Azure Blob Storage](/docs/integrations/document_loaders/azure_blob_storage_container.html).
```python
from langchain.document_loaders import AzureBlobStorageContainerLoader
```
- See a [usage example for the Azure Files](/docs/integrations/document_loaders/azure_blob_storage_file).
+ See a [usage example for the Azure Files](/docs/integrations/document_loaders/azure_blob_storage_file.html).
```python
from langchain.document_loaders import AzureBlobStorageFileLoader
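# Hypothetical usage sketch -- the connection string, container, and blob
# names below are placeholders:
loader = AzureBlobStorageFileLoader(
    conn_str="<connection_string>", container="<container>", blob_name="<blob_name>"
)
docs = loader.load()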

View File

@@ -25,4 +25,4 @@ pip install pyairtable
from langchain.document_loaders import AirtableLoader
```
- See an [example](/docs/integrations/document_loaders/airtable).
+ See an [example](/docs/integrations/document_loaders/airtable.html).

View File

@@ -12,4 +12,4 @@ To import this vectorstore:
from langchain.vectorstores import AnalyticDB
```
- For a more detailed walkthrough of the AnalyticDB wrapper, see [this notebook](/docs/integrations/vectorstores/analyticdb)
+ For a more detailed walkthrough of the AnalyticDB wrapper, see [this notebook](/docs/integrations/vectorstores/analyticdb.html)

View File

@@ -32,7 +32,7 @@ You can use the `ApifyWrapper` to run Actors on the Apify platform.
from langchain.utilities import ApifyWrapper
```
- For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/apify).
+ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/apify.html).
### Loader
@@ -43,4 +43,4 @@ You can also use our `ApifyDatasetLoader` to get data from Apify dataset.
from langchain.document_loaders import ApifyDatasetLoader
```
- For a more detailed walkthrough of this loader, see [this notebook](/docs/integrations/document_loaders/apify_dataset).
+ For a more detailed walkthrough of this loader, see [this notebook](/docs/integrations/document_loaders/apify_dataset.html).

View File

@@ -13,7 +13,7 @@ pip install python-arango
Connect your ArangoDB Database with a chat model to get insights on your data.
- See the notebook example [here](/docs/use_cases/graph/graph_arangodb_qa).
+ See the notebook example [here](/docs/use_cases/graph/graph_arangodb_qa.html).
```python
from arango import ArangoClient

View File

@@ -22,7 +22,7 @@ If you don't you can refer to [Argilla - 🚀 Quickstart](https://docs.argilla.i
## Tracking
- See a [usage example of `ArgillaCallbackHandler`](/docs/integrations/callbacks/argilla).
+ See a [usage example of `ArgillaCallbackHandler`](/docs/integrations/callbacks/argilla.html).
```python
from langchain.callbacks import ArgillaCallbackHandler
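# Sketch of attaching the handler to an LLM -- the dataset name and the local
# Argilla server details are illustrative assumptions, not from this diff:
from langchain.llms import OpenAI

argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",
    api_url="http://localhost:6900",
    api_key="argilla.apikey",
)
llm = OpenAI(callbacks=[argilla_callback])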

View File

@@ -18,7 +18,7 @@ whether for semantic search or example selection.
from langchain.vectorstores import Chroma
```
- For a more detailed walkthrough of the Chroma wrapper, see [this notebook](/docs/integrations/vectorstores/chroma)
+ For a more detailed walkthrough of the Chroma wrapper, see [this notebook](/docs/integrations/vectorstores/chroma.html)
## Retriever

View File

@@ -25,7 +25,7 @@ from langchain.llms import Clarifai
llm = Clarifai(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
```
- For more details, the docs on the Clarifai LLM wrapper provide a [detailed walkthrough](/docs/integrations/llms/clarifai).
+ For more details, the docs on the Clarifai LLM wrapper provide a [detailed walkthrough](/docs/integrations/llms/clarifai.html).
### Text Embedding Models
@@ -37,7 +37,7 @@ There is a Clarifai Embedding model in LangChain, which you can access with:
from langchain.embeddings import ClarifaiEmbeddings
embeddings = ClarifaiEmbeddings(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
```
- For more details, the docs on the Clarifai Embeddings wrapper provide a [detailed walkthrough](/docs/integrations/text_embedding/clarifai).
+ For more details, the docs on the Clarifai Embeddings wrapper provide a [detailed walkthrough](/docs/integrations/text_embedding/clarifai.html).
## Vectorstore

View File

@@ -27,7 +27,7 @@ There exists an Cohere Embedding model, which you can access with
```python
from langchain.embeddings import CohereEmbeddings
```
- For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/cohere)
+ For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/cohere.html)
## Retriever

View File

@@ -20,7 +20,7 @@
"source": [
"In this guide we will demonstrate how to track your Langchain Experiments, Evaluation Metrics, and LLM Sessions with [Comet](https://www.comet.com/site/?utm_source=langchain&utm_medium=referral&utm_campaign=comet_notebook). \n",
"\n",
- "<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/ecosystem/comet_tracking\">\n",
+ "<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/ecosystem/comet_tracking.html\">\n",
" <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
"</a>\n",
"\n",

View File

@@ -54,4 +54,4 @@ llm = CTransformers(model='marella/gpt-2-ggml', config=config)
See [Documentation](https://github.com/marella/ctransformers#config) for a list of available parameters.
- For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/ctransformers).
+ For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/ctransformers.html).

View File

@@ -21,4 +21,4 @@ You may import the vectorstore by:
from langchain.vectorstores import DashVector
```
- For a detailed walkthrough of the DashVector wrapper, please refer to [this notebook](/docs/integrations/vectorstores/dashvector)
+ For a detailed walkthrough of the DashVector wrapper, please refer to [this notebook](/docs/integrations/vectorstores/dashvector.html)

View File

@@ -33,11 +33,11 @@ See [MLflow AI Gateway](/docs/integrations/providers/mlflow_ai_gateway).
Databricks as an LLM provider
-----------------------------
- The notebook [Wrap Databricks endpoints as LLMs](/docs/integrations/llms/databricks) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
+ The notebook [Wrap Databricks endpoints as LLMs](/docs/integrations/llms/databricks.html) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
Databricks endpoints support Dolly, but are also great for hosting models like MPT-7B or any other models from the Hugging Face ecosystem. Databricks endpoints can also be used with proprietary models like OpenAI to provide a governance layer for enterprises.
Databricks Dolly
----------------
- Databricks Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](/docs/integrations/llms/huggingface_hub) for instructions to access it through the Hugging Face Hub integration with LangChain.
+ Databricks Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](/docs/integrations/llms/huggingface_hub.html) for instructions to access it through the Hugging Face Hub integration with LangChain.
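As a rough sketch of that Hugging Face Hub route (assumes a `HUGGINGFACEHUB_API_TOKEN` in the environment; the model kwargs are illustrative):

```python
from langchain.llms import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id="databricks/dolly-v2-12b",
    model_kwargs={"temperature": 0.1, "max_new_tokens": 64},
)
print(llm("What is Databricks Dolly?"))
```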

View File

@@ -16,4 +16,4 @@ To import this vectorstore:
from langchain.vectorstores import Dingo
```
- For a more detailed walkthrough of the DingoDB wrapper, see [this notebook](/docs/integrations/vectorstores/dingo)
+ For a more detailed walkthrough of the DingoDB wrapper, see [this notebook](/docs/integrations/vectorstores/dingo.html)

View File

@@ -20,4 +20,4 @@ To import this vectorstore:
from langchain.vectorstores import Epsilla
```
- For a more detailed walkthrough of the Epsilla wrapper, see [this notebook](/docs/integrations/vectorstores/epsilla)
+ For a more detailed walkthrough of the Epsilla wrapper, see [this notebook](/docs/integrations/vectorstores/epsilla.html)

View File

@@ -20,7 +20,7 @@ There exists a GoldenQueryAPIWrapper utility which wraps this API. To import thi
from langchain.utilities.golden_query import GoldenQueryAPIWrapper
```
- For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/golden_query).
+ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/golden_query.html).
### Tool

View File

@@ -59,7 +59,7 @@ So the final answer is: El Palmar, Spain
'El Palmar, Spain'
```
- For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_serper).
+ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_serper.html).
### Tool

View File

@@ -45,4 +45,4 @@ model("Once upon a time, ", callbacks=callbacks)
You can find links to model file downloads in the [pyllamacpp](https://github.com/nomic-ai/pyllamacpp) repository.
- For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/gpt4all)
+ For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/gpt4all.html)

View File

@@ -24,4 +24,4 @@ There exists an Gradient Embedding model, which you can access with
```python
from langchain.embeddings import GradientEmbeddings
```
- For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/gradient)
+ For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/gradient.html)

View File

@@ -30,7 +30,7 @@ To use a the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.llms import HuggingFaceHub
```
- For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](/docs/integrations/llms/huggingface_hub)
+ For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](/docs/integrations/llms/huggingface_hub.html)
### Embeddings

View File

@@ -28,7 +28,7 @@ you don't, follow the next steps to start it:
## Using Infino
- See a [usage example of `InfinoCallbackHandler`](/docs/integrations/callbacks/infino).
+ See a [usage example of `InfinoCallbackHandler`](/docs/integrations/callbacks/infino.html).
```python
from langchain.callbacks import InfinoCallbackHandler

View File

@@ -15,7 +15,7 @@ There exists a Jina Embeddings wrapper, which you can access with
```python
from langchain.embeddings import JinaEmbeddings
```
- For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/jina)
+ For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/jina.html)
## Deployment

View File

@@ -20,4 +20,4 @@ To import this vectorstore:
from langchain.vectorstores import LanceDB
```
- For a more detailed walkthrough of the LanceDB wrapper, see [this notebook](/docs/integrations/vectorstores/lancedb)
+ For a more detailed walkthrough of the LanceDB wrapper, see [this notebook](/docs/integrations/vectorstores/lancedb.html)

View File

@@ -15,7 +15,7 @@ There exists a LlamaCpp LLM wrapper, which you can access with
```python
from langchain.llms import LlamaCpp
```
- For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/llamacpp)
+ For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/llamacpp.html)
### Embeddings
@@ -23,4 +23,4 @@ There exists a LlamaCpp Embeddings wrapper, which you can access with
```python
from langchain.embeddings import LlamaCppEmbeddings
```
- For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/llamacpp)
+ For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/llamacpp.html)

View File

@@ -28,4 +28,4 @@ To import this vectorstore:
from langchain.vectorstores import Marqo
```
- For a more detailed walkthrough of the Marqo wrapper and some of its unique features, see [this notebook](/docs/integrations/vectorstores/marqo)
+ For a more detailed walkthrough of the Marqo wrapper and some of its unique features, see [this notebook](/docs/integrations/vectorstores/marqo.html)

View File

@@ -22,4 +22,4 @@ To import this vectorstore:
from langchain.vectorstores import Milvus
```
- For a more detailed walkthrough of the `Miluvs` wrapper, see [this notebook](/docs/integrations/vectorstores/milvus)
+ For a more detailed walkthrough of the `Miluvs` wrapper, see [this notebook](/docs/integrations/vectorstores/milvus.html)

View File

@@ -11,7 +11,7 @@ Get a [Minimax group id](https://api.minimax.chat/user-center/basic-information)
## LLM
There exists a Minimax LLM wrapper, which you can access with
- See a [usage example](/docs/modules/model_io/models/llms/integrations/minimax).
+ See a [usage example](/docs/modules/model_io/models/llms/integrations/minimax.html).
```python
from langchain.llms import Minimax
@@ -19,7 +19,7 @@ from langchain.llms import Minimax
## Chat Models
- See a [usage example](/docs/modules/model_io/models/chat/integrations/minimax)
+ See a [usage example](/docs/modules/model_io/models/chat/integrations/minimax.html)
```python
from langchain.chat_models import MiniMaxChat

View File

@@ -1,9 +1,9 @@
# MLflow AI Gateway
- >[The MLflow AI Gateway](https://www.mlflow.org/docs/latest/gateway/index) service is a powerful tool designed to streamline the usage and management of various large
+ >[The MLflow AI Gateway](https://www.mlflow.org/docs/latest/gateway/index.html) service is a powerful tool designed to streamline the usage and management of various large
> language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface
> that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests.
- > See [the MLflow AI Gateway documentation](https://mlflow.org/docs/latest/gateway/index) for more details.
+ > See [the MLflow AI Gateway documentation](https://mlflow.org/docs/latest/gateway/index.html) for more details.
## Installation and Setup
@@ -52,7 +52,7 @@ mlflow gateway start --config-path /path/to/config.yaml
> This module exports multivariate LangChain models in the langchain flavor and univariate LangChain
> models in the pyfunc flavor.
- See the [API documentation and examples](https://www.mlflow.org/docs/latest/python_api/mlflow.langchain).
+ See the [API documentation and examples](https://www.mlflow.org/docs/latest/python_api/mlflow.langchain.html).
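langchain of this era also shipped an `MlflowAIGateway` LLM wrapper for talking to such a gateway; a minimal sketch (the gateway URI and route name are assumptions):

```python
from langchain.llms import MlflowAIGateway

gateway = MlflowAIGateway(
    gateway_uri="http://127.0.0.1:5000",  # where `mlflow gateway start` is listening
    route="completions",
)
print(gateway("What does the MLflow AI Gateway do?"))
```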

View File

@@ -7,7 +7,7 @@
"source": [
"# MLflow\n",
"\n",
- ">[MLflow](https://www.mlflow.org/docs/latest/what-is-mlflow) is a versatile, expandable, open-source platform for managing workflows and artifacts across the machine learning lifecycle. It has built-in integrations with many popular ML libraries, but can be used with any library, algorithm, or deployment tool. It is designed to be extensible, so you can write plugins to support new workflows, libraries, and tools.\n",
+ ">[MLflow](https://www.mlflow.org/docs/latest/what-is-mlflow.html) is a versatile, expandable, open-source platform for managing workflows and artifacts across the machine learning lifecycle. It has built-in integrations with many popular ML libraries, but can be used with any library, algorithm, or deployment tool. It is designed to be extensible, so you can write plugins to support new workflows, libraries, and tools.\n",
"\n",
"This notebook goes over how to track your LangChain experiments into your `MLflow Server`"
]

View File

@@ -50,10 +50,10 @@ Momento can be used as a distributed memory store for LLMs.
### Chat Message History Memory
- See [this notebook](/docs/integrations/memory/momento_chat_message_history) for a walkthrough of how to use Momento as a memory store for chat message history.
+ See [this notebook](/docs/integrations/memory/momento_chat_message_history.html) for a walkthrough of how to use Momento as a memory store for chat message history.
## Vector Store
Momento Vector Index (MVI) can be used as a vector store.
- See [this notebook](/docs/integrations/vectorstores/momento_vector_index) for a walkthrough of how to use MVI as a vector store.
+ See [this notebook](/docs/integrations/vectorstores/momento_vector_index.html) for a walkthrough of how to use MVI as a vector store.

View File

@@ -31,7 +31,7 @@ db = SQLDatabase.from_uri(conn_str)
db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
```
- From here, see the [SQL Chain](/docs/use_cases/tabular/sqlite) documentation on how to use.
+ From here, see the [SQL Chain](/docs/use_cases/tabular/sqlite.html) documentation on how to use.
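Continuing the snippet above, questions can then be issued directly against the chain (the question is illustrative):

```python
db_chain.run("How many employees are there?")
```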
## LLMCache

View File

@@ -63,4 +63,4 @@ To import this vectorstore:
from langchain.vectorstores import MyScale
```
- For a more detailed walkthrough of the MyScale wrapper, see [this notebook](/docs/integrations/vectorstores/myscale)
+ For a more detailed walkthrough of the MyScale wrapper, see [this notebook](/docs/integrations/vectorstores/myscale.html)

View File

@@ -29,7 +29,7 @@ To import this vectorstore:
from langchain.vectorstores import Neo4jVector
```
- For a more detailed walkthrough of the Neo4j vector index wrapper, see [documentation](/docs/integrations/vectorstores/neo4jvector)
+ For a more detailed walkthrough of the Neo4j vector index wrapper, see [documentation](/docs/integrations/vectorstores/neo4jvector.html)
### GraphCypherQAChain
@@ -41,7 +41,7 @@ from langchain.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain
```
- For a more detailed walkthrough of Cypher generating chain, see [documentation](/docs/use_cases/graph/graph_cypher_qa)
+ For a more detailed walkthrough of Cypher generating chain, see [documentation](/docs/use_cases/graph/graph_cypher_qa.html)
### Constructing a knowledge graph from text
@@ -55,4 +55,4 @@ from langchain.graphs import Neo4jGraph
from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformer
```
- For a more detailed walkthrough generating graphs from text, see [documentation](/docs/use_cases/graph/diffbot_graphtransformer)
+ For a more detailed walkthrough generating graphs from text, see [documentation](/docs/use_cases/graph/diffbot_graphtransformer.html)

View File

@@ -12,14 +12,14 @@ All instructions are in examples below.
We have two different loaders: `NotionDirectoryLoader` and `NotionDBLoader`.
- See a [usage example for the NotionDirectoryLoader](/docs/integrations/document_loaders/notion).
+ See a [usage example for the NotionDirectoryLoader](/docs/integrations/document_loaders/notion.html).
```python
from langchain.document_loaders import NotionDirectoryLoader
```
- See a [usage example for the NotionDBLoader](/docs/integrations/document_loaders/notiondb).
+ See a [usage example for the NotionDBLoader](/docs/integrations/document_loaders/notiondb.html).
```python

View File

@@ -67,4 +67,4 @@ llm("What is the difference between a duck and a goose? And why there are so man
### Usage
For a more detailed walkthrough of the OpenLLM Wrapper, see the
- [example notebook](/docs/integrations/llms/openllm)
+ [example notebook](/docs/integrations/llms/openllm.html)

View File

@@ -18,4 +18,4 @@ To import this vectorstore:
from langchain.vectorstores import OpenSearchVectorSearch
```
- For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](/docs/integrations/vectorstores/opensearch)
+ For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](/docs/integrations/vectorstores/opensearch.html)

View File

@@ -29,7 +29,7 @@ There exists a OpenWeatherMapAPIWrapper utility which wraps this API. To import
from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper
```
- For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/openweathermap).
+ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/openweathermap.html).
### Tool

View File

@@ -26,4 +26,4 @@ from langchain.vectorstores.pgvector import PGVector
### Usage
- For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](/docs/integrations/vectorstores/pgvector)
+ For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](/docs/integrations/vectorstores/pgvector.html)

View File

@@ -21,4 +21,4 @@ whether for semantic search or example selection.
from langchain.vectorstores import Pinecone
```
- For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](/docs/integrations/vectorstores/pinecone)
+ For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](/docs/integrations/vectorstores/pinecone.html)

View File

@@ -40,10 +40,10 @@ for res in llm_results.generations:
```
You can use the PromptLayer request ID to add a prompt, score, or other metadata to your request. [Read more about it here](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
- This LLM is identical to the [OpenAI](/docs/ecosystem/integrations/openai) LLM, except that
+ This LLM is identical to the [OpenAI](/docs/ecosystem/integrations/openai.html) LLM, except that
- all your requests will be logged to your PromptLayer account
- you can add `pl_tags` when instantiating to tag your requests on PromptLayer
- you can add `return_pl_id` when instantiating to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
- PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](/docs/integrations/chat/promptlayer_chatopenai) and `PromptLayerOpenAIChat`
+ PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](/docs/integrations/chat/promptlayer_chatopenai.html) and `PromptLayerOpenAIChat`

View File

@@ -16,4 +16,4 @@ There is a basic wrapper around `SemaDB` collections allowing you to use it as a
from langchain.vectorstores import SemaDB
```
- You can follow a tutorial on how to use the wrapper in [this notebook](/docs/integrations/vectorstores/semadb).
+ You can follow a tutorial on how to use the wrapper in [this notebook](/docs/integrations/vectorstores/semadb.html).

View File

@@ -16,7 +16,7 @@ view these connections from the dashboard and retrieve data using the server-sid
1. Create an account in the [dashboard](https://dashboard.psychic.dev/).
2. Use the [react library](https://docs.psychic.dev/sidekick-link) to add the Psychic link modal to your frontend react app. You will use this to connect the SaaS apps.
- 3. Once you have created a connection, you can use the `PsychicLoader` by following the [example notebook](/docs/integrations/document_loaders/psychic)
+ 3. Once you have created a connection, you can use the `PsychicLoader` by following the [example notebook](/docs/integrations/document_loaders/psychic.html)
## Advantages vs Other Document Loaders

View File

@@ -24,4 +24,4 @@ To import this vectorstore:
from langchain.vectorstores import Qdrant
```
- For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](/docs/integrations/vectorstores/qdrant)
+ For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](/docs/integrations/vectorstores/qdrant.html)

View File

@@ -103,7 +103,7 @@ To import this vectorstore:
from langchain.vectorstores import Redis
```
- For a more detailed walkthrough of the Redis vectorstore wrapper, see [this notebook](/docs/integrations/vectorstores/redis).
+ For a more detailed walkthrough of the Redis vectorstore wrapper, see [this notebook](/docs/integrations/vectorstores/redis.html).
### Retriever
@@ -114,7 +114,7 @@ Redis can be used to persist LLM conversations.
#### Vector Store Retriever Memory
- For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](/docs/modules/memory/types/vectorstore_retriever_memory).
+ For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](/docs/modules/memory/types/vectorstore_retriever_memory.html).
#### Chat Message History Memory
- For a detailed example of Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history).
+ For a detailed example of Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history.html).

View File

@@ -15,7 +15,7 @@ custom LLMs, you can use the `SelfHostedPipeline` parent class.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
- For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](/docs/integrations/llms/runhouse)
+ For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](/docs/integrations/llms/runhouse.html)
## Self-hosted Embeddings
There are several ways to use self-hosted embeddings with LangChain via Runhouse.
@@ -26,4 +26,4 @@ the `SelfHostedEmbedding` class.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
- For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](/docs/integrations/text_embedding/self-hosted)
+ For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](/docs/integrations/text_embedding/self-hosted.html)

View File

@@ -17,7 +17,7 @@ There exists a SerpAPI utility which wraps this API. To import this utility:
from langchain.utilities import SerpAPIWrapper
```
- For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/serpapi).
+ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/serpapi.html).
### Tool

View File

@@ -19,4 +19,4 @@ To import this vectorstore:
from langchain.vectorstores import SKLearnVectorStore
```
- For a more detailed walkthrough of the SKLearnVectorStore wrapper, see [this notebook](/docs/integrations/vectorstores/sklearn).
+ For a more detailed walkthrough of the SKLearnVectorStore wrapper, see [this notebook](/docs/integrations/vectorstores/sklearn.html).

View File

@@ -13,7 +13,7 @@ pip install spacy
## Text Splitter
- See a [usage example](/docs/modules/data_connection/document_transformers/text_splitters/split_by_token#spacy).
+ See a [usage example](/docs/modules/data_connection/document_transformers/text_splitters/split_by_token.html#spacy).
```python
from langchain.text_splitter import SpacyTextSplitter
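# Sketch: sentence-aware splitting -- the chunk size is illustrative, and the
# default spaCy pipeline (en_core_web_sm) is assumed to be installed:
splitter = SpacyTextSplitter(chunk_size=1000)
chunks = splitter.split_text("A long document goes here. It is split on sentence boundaries.")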

View File

@@ -4,7 +4,7 @@
## Installation and Setup
- See [setup instructions](/docs/integrations/document_loaders/spreedly).
+ See [setup instructions](/docs/integrations/document_loaders/spreedly.html).
## Document Loader

View File

@@ -5,7 +5,7 @@
## Installation and Setup
- See [setup instructions](/docs/integrations/document_loaders/stripe).
+ See [setup instructions](/docs/integrations/document_loaders/stripe.html).
## Document Loader

View File

@@ -19,4 +19,4 @@ To import this vectorstore:
from langchain.vectorstores import Tair
```
- For a more detailed walkthrough of the Tair wrapper, see [this notebook](/docs/integrations/vectorstores/tair)
+ For a more detailed walkthrough of the Tair wrapper, see [this notebook](/docs/integrations/vectorstores/tair.html)

View File

@@ -5,7 +5,7 @@
## Installation and Setup
- See [setup instructions](/docs/integrations/document_loaders/telegram).
+ See [setup instructions](/docs/integrations/document_loaders/telegram.html).
## Document Loader

View File

@@ -12,4 +12,4 @@ To import this vectorstore:
from langchain.vectorstores import TencentVectorDB
```
- For a more detailed walkthrough of the TencentVectorDB wrapper, see [this notebook](/docs/integrations/vectorstores/tencentvectordb)
+ For a more detailed walkthrough of the TencentVectorDB wrapper, see [this notebook](/docs/integrations/vectorstores/tencentvectordb.html)

View File

@@ -10,7 +10,7 @@
pip install py-trello beautifulsoup4
```
- See [setup instructions](/docs/integrations/document_loaders/trello).
+ See [setup instructions](/docs/integrations/document_loaders/trello.html).
## Document Loader

View File

@@ -1,7 +1,7 @@
# Typesense
> [Typesense](https://typesense.org) is an open-source, in-memory search engine, that you can either
- > [self-host](https://typesense.org/docs/guide/install-typesense#option-2-local-machine-self-hosting) or run
+ > [self-host](https://typesense.org/docs/guide/install-typesense.html#option-2-local-machine-self-hosting) or run
> on [Typesense Cloud](https://cloud.typesense.org/).
> `Typesense` focuses on performance by storing the entire index in RAM (with a backup on disk) and also
> focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.

View File

@@ -39,4 +39,4 @@ langchain.llm_cache = UpstashRedisCache(redis_=Redis(url=URL, token=TOKEN))
Upstash Redis can be used to persist LLM conversations.
#### Chat Message History Memory
- An example of Upstash Redis for caching conversation message history can be seen in [this notebook](/docs/integrations/memory/upstash_redis_chat_message_history).
+ An example of Upstash Redis for caching conversation message history can be seen in [this notebook](/docs/integrations/memory/upstash_redis_chat_message_history.html).
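A minimal sketch of that message-history usage, reusing the `URL`/`TOKEN` placeholders from the cache snippet above (the session id is illustrative):

```python
from langchain.memory import UpstashRedisChatMessageHistory

history = UpstashRedisChatMessageHistory(url=URL, token=TOKEN, session_id="my-session")
history.add_user_message("hello llm!")
history.add_ai_message("hello human!")
```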

View File

@@ -7,7 +7,7 @@
"source": [
"# Chat Over Documents with Vectara\n",
"\n",
- "This notebook is based on the [chat_vector_db](https://github.com/langchain-ai/langchain/blob/master/docs/modules/chains/index_examples/chat_vector_db) notebook, but using Vectara as the vector database."
+ "This notebook is based on the [chat_vector_db](https://github.com/langchain-ai/langchain/blob/master/docs/modules/chains/index_examples/chat_vector_db.html) notebook, but using Vectara as the vector database."
]
},
{

View File

@@ -35,4 +35,4 @@ To import this vectorstore:
from langchain.vectorstores import Weaviate
```
- For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](/docs/integrations/vectorstores/weaviate)
+ For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](/docs/integrations/vectorstores/weaviate.html)

View File

@@ -25,7 +25,7 @@ There exists a WolframAlphaAPIWrapper utility which wraps this API. To import th
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
```
- For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/wolfram_alpha).
+ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/wolfram_alpha.html).
### Tool

View File

@@ -93,10 +93,10 @@ llm(
### Usage
For more information and detailed examples, refer to the
- [example for xinference LLMs](/docs/integrations/llms/xinference)
+ [example for xinference LLMs](/docs/integrations/llms/xinference.html)
### Embeddings
Xinference also supports embedding queries and documents. See
- [example for xinference embeddings](/docs/integrations/text_embedding/xinference)
+ [example for xinference embeddings](/docs/integrations/text_embedding/xinference.html)
for a more detailed demo.

View File

@@ -19,4 +19,4 @@ whether for semantic search or example selection.
from langchain.vectorstores import Milvus
```
- For a more detailed walkthrough of the Miluvs wrapper, see [this notebook](/docs/integrations/vectorstores/zilliz)
+ For a more detailed walkthrough of the Miluvs wrapper, see [this notebook](/docs/integrations/vectorstores/zilliz.html)

View File

@@ -11,7 +11,7 @@
"\n",
"This notebook demonstrates a sample composition of the `Speak`, `Klarna`, and `Spoonacluar` APIs.\n",
"\n",
- "For a detailed walkthrough of the OpenAPI chains wrapped within the NLAToolkit, see the [OpenAPI Operation Chain](/docs/use_cases/apis/openapi) notebook.\n",
+ "For a detailed walkthrough of the OpenAPI chains wrapped within the NLAToolkit, see the [OpenAPI Operation Chain](/docs/use_cases/apis/openapi.html) notebook.\n",
"\n",
"### First, import dependencies and load the LLM"
]

View File

@@ -379,7 +379,7 @@
"agent.run(\n",
" \"\"\"\n",
"who bought the most expensive ticket?\n",
- "You can find all supported function types in https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/dataframe\n",
+ "You can find all supported function types in https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/dataframe.html\n",
"\"\"\"\n",
")"
]

View File

@@ -6,7 +6,7 @@
"source": [
"# Apify\n",
"\n",
- "This notebook shows how to use the [Apify integration](/docs/ecosystem/integrations/apify) for LangChain.\n",
+ "This notebook shows how to use the [Apify integration](/docs/ecosystem/integrations/apify.html) for LangChain.\n",
"\n",
"[Apify](https://apify.com) is a cloud platform for web scraping and data extraction,\n",
"which provides an [ecosystem](https://apify.com/store) of more than a thousand\n",
@@ -72,7 +72,7 @@
"source": [
"Then run the Actor, wait for it to finish, and fetch its results from the Apify dataset into a LangChain document loader.\n",
"\n",
- "Note that if you already have some results in an Apify dataset, you can load them directly using `ApifyDatasetLoader`, as shown in [this notebook](/docs/integrations/document_loaders/apify_dataset). In that notebook, you'll also find the explanation of the `dataset_mapping_function`, which is used to map fields from the Apify dataset records to LangChain `Document` fields."
+ "Note that if you already have some results in an Apify dataset, you can load them directly using `ApifyDatasetLoader`, as shown in [this notebook](/docs/integrations/document_loaders/apify_dataset.html). In that notebook, you'll also find the explanation of the `dataset_mapping_function`, which is used to map fields from the Apify dataset records to LangChain `Document` fields."
]
},
{

View File

@@ -141,7 +141,7 @@
"# PGVector needs the connection string to the database.\n",
"CONNECTION_STRING = \"postgresql+psycopg2://harrisonchase@localhost:5432/test3\"\n",
"\n",
- "# # Alternatively, you can create it from environment variables.\n",
+ "# # Alternatively, you can create it from enviornment variables.\n",
"# import os\n",
"\n",
"# CONNECTION_STRING = PGVector.connection_string_from_db_params(\n",

View File

@@ -6,7 +6,7 @@
"source": [
"# Typesense\n",
"\n",
- "> [Typesense](https://typesense.org) is an open-source, in-memory search engine, that you can either [self-host](https://typesense.org/docs/guide/install-typesense#option-2-local-machine-self-hosting) or run on [Typesense Cloud](https://cloud.typesense.org/).\n",
+ "> [Typesense](https://typesense.org) is an open-source, in-memory search engine, that you can either [self-host](https://typesense.org/docs/guide/install-typesense.html#option-2-local-machine-self-hosting) or run on [Typesense Cloud](https://cloud.typesense.org/).\n",
">\n",
"> Typesense focuses on performance by storing the entire index in RAM (with a backup on disk) and also focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.\n",
">\n",

View File

@@ -7,7 +7,7 @@
"source": [
"# Custom agent with tool retrieval\n",
"\n",
- "This notebook builds off of [this notebook](/docs/modules/agents/how_to/custom_llm_agent) and assumes familiarity with how agents work.\n",
+ "This notebook builds off of [this notebook](/docs/modules/agents/how_to/custom_llm_agent.html) and assumes familiarity with how agents work.\n",
"\n",
"The novel idea introduced in this notebook is the idea of using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many many tools to select from. You cannot put the description of all the tools in the prompt (because of context length issues) so instead you dynamically select the N tools you do want to consider using at run time.\n",
"\n",

View File

@@ -9,8 +9,8 @@
"\n",
"This notebook goes over adding memory to **both** an Agent and its tools. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:\n",
"\n",
- "- [Adding memory to an LLM Chain](/docs/modules/memory/integrations/adding_memory)\n",
- "- [Custom Agents](/docs/modules/agents/how_to/custom_agent)\n",
+ "- [Adding memory to an LLM Chain](/docs/modules/memory/integrations/adding_memory.html)\n",
+ "- [Custom Agents](/docs/modules/agents/how_to/custom_agent.html)\n",
"\n",
"We are going to create a custom Agent. The agent has access to a conversation memory, search tool, and a summarization tool. The summarization tool also needs access to the conversation memory."
]

View File

@@ -1,6 +1,6 @@
# Callbacks for custom chains
When you create a custom chain you can easily set it up to use the same callback system as all the built-in chains.
`_call`, `_generate`, `_run`, and equivalent async methods on Chains / LLMs / Chat Models / Agents / Tools now receive a 2nd argument called `run_manager` which is bound to that run, and contains the logging methods that can be used by that object (i.e. `on_llm_new_token`). This is useful when constructing a custom chain. See this guide for more information on how to [create custom chains and use callbacks inside them](/docs/modules/chains/how_to/custom_chain).
`_call`, `_generate`, `_run`, and equivalent async methods on Chains / LLMs / Chat Models / Agents / Tools now receive a 2nd argument called `run_manager` which is bound to that run, and contains the logging methods that can be used by that object (i.e. `on_llm_new_token`). This is useful when constructing a custom chain. See this guide for more information on how to [create custom chains and use callbacks inside them](/docs/modules/chains/how_to/custom_chain.html).
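A minimal sketch of a custom chain that uses `run_manager` (the chain itself is a toy example):

```python
from typing import Any, Dict, List, Optional

from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain

class EchoChain(Chain):
    """A toy chain that logs via the run manager bound to this run."""

    @property
    def input_keys(self) -> List[str]:
        return ["text"]

    @property
    def output_keys(self) -> List[str]:
        return ["text"]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        if run_manager:
            # Any callback method bound to this run can be used here.
            run_manager.on_text("Echoing input...")
        return {"text": inputs["text"]}
```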

View File

@@ -9,7 +9,7 @@
"\n",
"LangChain provides async support for Chains by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n",
"\n",
"Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`), `ChatVectorDBChain`, and [QA chains](/docs/use_cases/question_answering/question_answering). Async support for other chains is on the roadmap."
"Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`), `ChatVectorDBChain`, and [QA chains](/docs/use_cases/question_answering/question_answering.html). Async support for other chains is on the roadmap."
]
},
{
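For example, a chain can be awaited like this (a sketch using `LLMChain`):

```python
import asyncio

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

async def main():
    prompt = PromptTemplate(
        input_variables=["product"],
        template="Name a company that makes {product}.",
    )
    chain = LLMChain(llm=OpenAI(), prompt=prompt)
    # `arun` awaits the LLM call without blocking the event loop.
    result = await chain.arun(product="colorful socks")
    print(result)

asyncio.run(main())
```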

View File

@@ -147,7 +147,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Tips: You can easily integrate a `Chain` object as a `Tool` in your `Agent` via its `run` method. See an example [here](/docs/modules/agents/tools/how_to/custom_tools)."
"Tips: You can easily integrate a `Chain` object as a `Tool` in your `Agent` via its `run` method. See an example [here](/docs/modules/agents/tools/how_to/custom_tools.html)."
]
},
{
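For example (the tool name and description are illustrative):

```python
from langchain.agents import Tool

# Wrap an existing chain's `run` method as an agent tool.
tools = [
    Tool(
        name="Summarizer",
        func=summary_chain.run,  # any Chain defined earlier
        description="Useful for summarizing a piece of text.",
    )
]
```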

View File

@@ -2,7 +2,7 @@
This covers how to load all documents in a directory.
Under the hood, by default this uses the [UnstructuredLoader](/docs/integrations/document_loaders/unstructured_file).
Under the hood, by default this uses the [UnstructuredLoader](/docs/integrations/document_loaders/unstructured_file.html).
```python
from langchain.document_loaders import DirectoryLoader
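# A typical call; the glob pattern is an assumption, adjust to your files.
loader = DirectoryLoader("../", glob="**/*.md")
docs = loader.load()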

View File

@@ -56,7 +56,7 @@ docs = retriever.get_relevant_documents("what did he say about ketanji brown jac
## Similarity score threshold retrieval
You can also set a retrieval method that sets a similarity score threshold and only returns documents with a score above that threshold.
You can also a retrieval method that sets a similarity score threshold and only returns documents with a score above that threshold.
```python
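# A sketch, assuming `db` is a vector store built earlier in the guide:
retriever = db.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.5},
)
docs = retriever.get_relevant_documents(
    "what did he say about ketanji brown jackson"
)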

View File

@@ -9,8 +9,8 @@
"\n",
"This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:\n",
"\n",
"- [Memory in LLMChain](/docs/modules/memory/how_to/adding_memory)\n",
"- [Custom Agents](/docs/modules/agents/how_to/custom_agent)\n",
"- [Memory in LLMChain](/docs/modules/memory/how_to/adding_memory.html)\n",
"- [Custom Agents](/docs/modules/agents/how_to/custom_agent.html)\n",
"\n",
"In order to add a memory to an agent we are going to perform the following steps:\n",
"\n",

View File

@@ -9,9 +9,9 @@
"\n",
"This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:\n",
"\n",
"- [Memory in LLMChain](/docs/modules/memory/how_to/adding_memory)\n",
"- [Custom Agents](/docs/modules/agents/how_to/custom_agent)\n",
"- [Memory in Agent](/docs/modules/memory/how_to/agent_with_memory)\n",
"- [Memory in LLMChain](/docs/modules/memory/how_to/adding_memory.html)\n",
"- [Custom Agents](/docs/modules/agents/how_to/custom_agent.html)\n",
"- [Memory in Agent](/docs/modules/memory/how_to/agent_with_memory.html)\n",
"\n",
"In order to add a memory with an external message store to an agent we are going to do the following steps:\n",
"\n",

View File

@@ -1115,8 +1115,8 @@
"To learn more about the SQL Agent and how it works we refer to the [SQL Agent Toolkit](/docs/integrations/toolkits/sql_database) documentation.\n",
"\n",
"You can also check Agents for other document types:\n",
"- [Pandas Agent](/docs/integrations/toolkits/pandas)\n",
"- [CSV Agent](/docs/integrations/toolkits/csv)"
"- [Pandas Agent](/docs/integrations/toolkits/pandas.html)\n",
"- [CSV Agent](/docs/integrations/toolkits/csv.html)"
]
},
{

View File

@@ -42,7 +42,7 @@ qa.run(query)
</CodeOutputBlock>
## Chain Type
You can easily specify different chain types to load and use in the RetrievalQA chain. For a more detailed walkthrough of these types, please see [this notebook](/docs/modules/chains/additional/question_answering).
You can easily specify different chain types to load and use in the RetrievalQA chain. For a more detailed walkthrough of these types, please see [this notebook](/docs/modules/chains/additional/question_answering.html).
There are two ways to load different chain types. First, you can specify the chain type argument in the `from_chain_type` method. This allows you to pass in the name of the chain type you want to use. For example, below we change the chain type to `map_reduce`.
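Roughly, assuming `docsearch` is an existing vector store:

```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="map_reduce",
    retriever=docsearch.as_retriever(),
)
```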
@@ -65,7 +65,7 @@ qa.run(query)
</CodeOutputBlock>
The above way allows you to really simply change the chain_type, but it doesn't provide a ton of flexibility over parameters to that chain type. If you want to control those parameters, you can load the chain directly (as you did in [this notebook](/docs/modules/chains/additional/question_answering)) and then pass that directly to the RetrievalQA chain with the `combine_documents_chain` parameter. For example:
The above way allows you to really simply change the chain_type, but it doesn't provide a ton of flexibility over parameters to that chain type. If you want to control those parameters, you can load the chain directly (as you did in [this notebook](/docs/modules/chains/additional/question_answering.html)) and then pass that directly to the RetrievalQA chain with the `combine_documents_chain` parameter. For example:
```python
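# Sketch: load the chain directly to control its parameters, then pass it
# to RetrievalQA via `combine_documents_chain` (assumes `docsearch` exists).
from langchain.chains import RetrievalQA
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI

qa_chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")
qa = RetrievalQA(
    combine_documents_chain=qa_chain,
    retriever=docsearch.as_retriever(),
)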
@@ -89,7 +89,7 @@ qa.run(query)
</CodeOutputBlock>
## Custom Prompts
You can pass in custom prompts to do question answering. These prompts are the same prompts as you can pass into the [base question answering chain](/docs/modules/chains/additional/question_answering)
You can pass in custom prompts to do question answering. These prompts are the same prompts as you can pass into the [base question answering chain](/docs/modules/chains/additional/question_answering.html)
```python
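# Sketch of a custom prompt (the template text is illustrative; assumes
# `docsearch` is an existing vector store):
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt_template = """Use the following pieces of context to answer the question.

{context}

Question: {question}
Helpful Answer:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    chain_type_kwargs={"prompt": PROMPT},
)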

View File

@@ -173,10 +173,6 @@ const config = {
to: "/docs/community",
label: "Community",
},
{
to: "/docs/contributing",
label: "Developer's guide",
},
{
to: "/docs/additional_resources/dependents",
label: "Dependents",

View File

@@ -33,6 +33,8 @@ sidebar_class_name: hidden
# LLMs
import DocCardList from "@theme/DocCardList";
## Features (natively supported)
All LLMs implement the Runnable interface, which comes with default implementations of all methods, i.e. `ainvoke`, `batch`, `abatch`, `stream`, `astream`. This gives all LLMs basic support for async, streaming, and batch, which by default is implemented as below:
- *Async* support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the LLM is being executed, by moving this call to a background thread.
@@ -43,6 +45,7 @@ Each LLM integration can optionally provide native implementations for async, st
{table}
<DocCardList />
"""
CHAT_MODEL_TEMPLATE = """\
@@ -53,6 +56,8 @@ sidebar_class_name: hidden
# Chat models
import DocCardList from "@theme/DocCardList";
## Features (natively supported)
All ChatModels implement the Runnable interface, which comes with default implementations of all methods, i.e. `ainvoke`, `batch`, `abatch`, `stream`, `astream`. This gives all ChatModels basic support for async, streaming, and batch, which by default is implemented as below:
- *Async* support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the ChatModel is being executed, by moving this call to a background thread.
@@ -64,6 +69,7 @@ The table shows, for each integration, which features have been implemented with
{table}
<DocCardList />
"""

View File

@@ -49,5 +49,4 @@ python3.11 -m pip install -r vercel_requirements.txt
python3.11 scripts/model_feat_table.py
nbdoc_build --srcdir docs
cp ../cookbook/README.md src/pages/cookbook.mdx
cp ../.github/CONTRIBUTING.md docs/contributing.md
python3.11 scripts/generate_api_reference_links.py

libs/cli/.gitignore
View File

@@ -1,3 +0,0 @@
dist
__pycache__
.venv

View File

@@ -1,213 +0,0 @@
# `langchain`
**Usage**:
```console
$ langchain [OPTIONS] COMMAND [ARGS]...
```
**Options**:
* `--help`: Show this message and exit.
**Commands**:
* `hub`: Manage installable hub packages.
* `serve`: Manage LangServe application projects.
* `start`: Start the LangServe instance, whether it's...
## `langchain hub`
Manage installable hub packages.
**Usage**:
```console
$ langchain hub [OPTIONS] COMMAND [ARGS]...
```
**Options**:
* `--help`: Show this message and exit.
**Commands**:
* `new`: Creates a new hub package.
* `start`: Starts a demo LangServe instance for this...
### `langchain hub new`
Creates a new hub package.
**Usage**:
```console
$ langchain hub new [OPTIONS] NAME
```
**Arguments**:
* `NAME`: [required]
**Options**:
* `--help`: Show this message and exit.
### `langchain hub start`
Starts a demo LangServe instance for this hub package.
**Usage**:
```console
$ langchain hub start [OPTIONS]
```
**Options**:
* `--port INTEGER`
* `--host TEXT`
* `--help`: Show this message and exit.
## `langchain serve`
Manage LangServe application projects.
**Usage**:
```console
$ langchain serve [OPTIONS] COMMAND [ARGS]...
```
**Options**:
* `--help`: Show this message and exit.
**Commands**:
* `add`: Adds the specified package to the current...
* `install`
* `list`: Lists all packages in the current...
* `new`: Create a new LangServe application.
* `remove`: Removes the specified package from the...
* `start`: Starts the LangServe instance.
### `langchain serve add`
Adds the specified package to the current LangServe instance.
e.g.:
langchain serve add simple-pirate
langchain serve add git+ssh://git@github.com/efriis/simple-pirate.git
langchain serve add git+https://github.com/efriis/hub.git#devbranch#subdirectory=mypackage
**Usage**:
```console
$ langchain serve add [OPTIONS] DEPENDENCIES...
```
**Arguments**:
* `DEPENDENCIES...`: [required]
**Options**:
* `--api-path TEXT`
* `--project-dir PATH`
* `--help`: Show this message and exit.
### `langchain serve install`
**Usage**:
```console
$ langchain serve install [OPTIONS]
```
**Options**:
* `--help`: Show this message and exit.
### `langchain serve list`
Lists all packages in the current LangServe instance.
**Usage**:
```console
$ langchain serve list [OPTIONS]
```
**Options**:
* `--help`: Show this message and exit.
### `langchain serve new`
Create a new LangServe application.
**Usage**:
```console
$ langchain serve new [OPTIONS] NAME
```
**Arguments**:
* `NAME`: [required]
**Options**:
* `--package TEXT`
* `--help`: Show this message and exit.
### `langchain serve remove`
Removes the specified package from the current LangServe instance.
**Usage**:
```console
$ langchain serve remove [OPTIONS] API_PATHS...
```
**Arguments**:
* `API_PATHS...`: [required]
**Options**:
* `--help`: Show this message and exit.
### `langchain serve start`
Starts the LangServe instance.
**Usage**:
```console
$ langchain serve start [OPTIONS]
```
**Options**:
* `--port INTEGER`
* `--host TEXT`
* `--help`: Show this message and exit.
## `langchain start`
Start the LangServe instance, whether it's a hub package or a serve project.
**Usage**:
```console
$ langchain start [OPTIONS]
```
**Options**:
* `--port INTEGER`
* `--host TEXT`
* `--help`: Show this message and exit.

View File

@@ -1,43 +0,0 @@
# langchain-cli
Install CLI
`pip install -U --pre langchain-cli`
Create new langchain app
`langchain serve new my-app`
Go into app
`cd my-app`
Install a package
`langchain serve add extraction-summary`
(Activate virtualenv)
Install langserve
`pip install "langserve[all]"`
Install the langchain package
`pip install -e packages/extraction-summary`
Edit `app/server.py` to add that package to the routes
```python
from fastapi import FastAPI
from langserve import add_routes
from extraction_summary.chain import chain
app = FastAPI()
add_routes(app, chain)
```
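Note that for `python app/server.py` to actually serve, the file also needs an entrypoint; a sketch (host and port are assumptions):

```python
if __name__ == "__main__":
    import uvicorn

    # Host and port are assumptions; pick whatever suits your deployment.
    uvicorn.run(app, host="0.0.0.0", port=8000)
```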
Run the app
`python app/server.py`

View File

@@ -1,36 +0,0 @@
import typer
import subprocess
from typing import Optional
from typing_extensions import Annotated
from langchain_cli.namespaces import hub
from langchain_cli.namespaces import serve
app = typer.Typer(no_args_is_help=True, add_completion=False)
app.add_typer(hub.hub, name="hub", help=hub.__doc__)
app.add_typer(serve.serve, name="serve", help=serve.__doc__)
@app.command()
def start(
*,
port: Annotated[
Optional[int], typer.Option(help="The port to run the server on")
] = None,
host: Annotated[
Optional[str], typer.Option(help="The host to run the server on")
] = None,
) -> None:
"""
Start the LangServe instance, whether it's a hub package or a serve project.
"""
cmd = ["poetry", "run", "poe", "start"]
if port is not None:
cmd += ["--port", str(port)]
if host is not None:
cmd += ["--host", host]
subprocess.run(cmd)
if __name__ == "__main__":
app()

View File

@@ -1,3 +0,0 @@
DEFAULT_GIT_REPO = "https://github.com/langchain-ai/langchain.git"
DEFAULT_GIT_SUBDIRECTORY = "templates"
DEFAULT_GIT_REF = "master"

View File

@@ -1,17 +0,0 @@
"""
Development Scripts for Hub Packages
"""
from fastapi import FastAPI
from langserve.packages import add_package_route
from langchain_cli.utils.packages import get_package_root
def create_demo_server():
"""
Creates a demo server for the current hub package.
"""
app = FastAPI()
package_root = get_package_root()
add_package_route(app, package_root, "")
return app

Some files were not shown because too many files have changed in this diff.