mirror of
https://github.com/hwchase17/langchain.git
synced 2026-02-22 07:05:36 +00:00
Merge branch 'master' into bagautr/rfc_image_template
@@ -1,15 +1,18 @@
# Tutorials

Below are links to tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases/qa_structured/sql).
Below are links to tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases).

⛓ icon marks a new addition [last update 2023-09-21]

---------------------

### [LangChain on Wikipedia](https://en.wikipedia.org/wiki/LangChain)

### DeepLearning.AI courses
by [Harrison Chase](https://github.com/hwchase17) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng)
by [Harrison Chase](https://en.wikipedia.org/wiki/LangChain) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng)
- [LangChain for LLM Application Development](https://learn.deeplearning.ai/langchain)
- [LangChain Chat with Your Data](https://learn.deeplearning.ai/langchain-chat-with-your-data)
- ⛓ [Functions, Tools and Agents with LangChain](https://learn.deeplearning.ai/functions-tools-agents-langchain)

### Handbook
[LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) by **James Briggs** and **Francisco Ingham**
@@ -28,3 +28,6 @@ Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas i

**Seamless LangSmith tracing integration**
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
With LCEL, **all** steps are automatically logged to [LangSmith](/docs/langsmith/) for maximum observability and debuggability.

**Seamless LangServe deployment integration**
Any chain created with LCEL can be easily deployed using LangServe.
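As a hedged sketch of what that deployment looks like (the chain and route name are illustrative, not part of this commit), LangServe's `add_routes` exposes an LCEL chain over HTTP:

```python
# Sketch only: serve an LCEL chain with LangServe (names are illustrative).
from fastapi import FastAPI
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langserve import add_routes

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
chain = prompt | ChatOpenAI()  # an LCEL chain built with the | operator

app = FastAPI()
add_routes(app, chain, path="/joke")  # exposes /joke/invoke, /joke/batch, /joke/stream
```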
@@ -10,9 +10,9 @@ sidebar_position: 0

This framework consists of several parts.
- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- **[LangChain Templates](https://github.com/langchain-ai/langchain/tree/master/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](https://smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangChain Templates](/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](/docs/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.


243  docs/docs/integrations/document_loaders/docusaurus.ipynb  (new file)
File diff suppressed because one or more lines are too long
@@ -0,0 +1,76 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "91c6a7ef",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Neo4j\n",
|
||||
"\n",
|
||||
"[Neo4j](https://en.wikipedia.org/wiki/Neo4j) is an open-source graph database management system, renowned for its efficient management of highly connected data. Unlike traditional databases that store data in tables, Neo4j uses a graph structure with nodes, edges, and properties to represent and store data. This design allows for high-performance queries on complex data relationships.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use `Neo4j` to store chat message history."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d15e3302",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.memory import Neo4jChatMessageHistory\n",
|
||||
"\n",
|
||||
"history = Neo4jChatMessageHistory(\n",
|
||||
" url=\"bolt://localhost:7687\",\n",
|
||||
" username=\"neo4j\",\n",
|
||||
" password=\"password\",\n",
|
||||
" session_id=\"session_id_1\"\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"history.add_user_message(\"hi!\")\n",
|
||||
"\n",
|
||||
"history.add_ai_message(\"whats up?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "64fc465e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"history.messages"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "8af285f8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.8.8"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
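The notebook above stores and reads raw messages. As a hedged sketch (the chain choice and credentials are illustrative assumptions), the same Neo4j-backed history can also back a chain's conversation memory:

```python
# Sketch: use the Neo4j-backed history as memory for a conversation chain.
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory, Neo4jChatMessageHistory

history = Neo4jChatMessageHistory(
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
    session_id="session_id_1",
)
memory = ConversationBufferMemory(chat_memory=history)
chain = ConversationChain(llm=ChatOpenAI(), memory=memory)
chain.predict(input="hi there!")
```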
@@ -19,7 +19,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"!pip install langchain openai pandas faiss-cpu # faiss-gpu for CUDA supported GPU"
|
||||
"!pip install langchain fleet-context openai pandas faiss-cpu # faiss-gpu for CUDA supported GPU"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -43,13 +43,12 @@
|
||||
"\n",
|
||||
"\n",
|
||||
"def load_fleet_retriever(\n",
|
||||
" url: str,\n",
|
||||
" df: pd.DataFrame,\n",
|
||||
" *,\n",
|
||||
" vectorstore_cls: Type[VectorStore] = FAISS,\n",
|
||||
" docstore: Optional[BaseStore] = None,\n",
|
||||
" **kwargs: Any,\n",
|
||||
"):\n",
|
||||
" df = pd.read_parquet(url)\n",
|
||||
" vectorstore = _populate_vectorstore(df, vectorstore_cls)\n",
|
||||
" if docstore is None:\n",
|
||||
" return vectorstore.as_retriever(**kwargs)\n",
|
||||
@@ -106,7 +105,10 @@
|
||||
"source": [
|
||||
"## Retriever chunks\n",
|
||||
"\n",
|
||||
"As part of their embedding process, the Fleet AI team first chunked long documents before embedding them. This means the vectors correspond to sections of pages in the LangChain docs, not entire pages. By default, when we spin up a retriever from these embeddings, we'll be retrieving these embedded chunks:"
|
||||
"As part of their embedding process, the Fleet AI team first chunked long documents before embedding them. This means the vectors correspond to sections of pages in the LangChain docs, not entire pages. By default, when we spin up a retriever from these embeddings, we'll be retrieving these embedded chunks.",
|
||||
"\n",
|
||||
"\n",
|
||||
"We will be using Fleet Context's `download_embeddings()` to grab Langchain's documentation embeddings. You can view all supported libraries' documentation at https://fleet.so/context."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -116,9 +118,10 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"vecstore_retriever = load_fleet_retriever(\n",
|
||||
" \"https://www.dropbox.com/scl/fi/4rescpkrg9970s3huz47l/libraries_langchain_release.parquet?rlkey=283knw4wamezfwiidgpgptkep&dl=1\",\n",
|
||||
")"
|
||||
"from context import download_embeddings\n",
|
||||
"\n",
|
||||
"df = download_embeddings(\"langchain\")\n",
|
||||
"vecstore_retriever = load_fleet_retriever(df)"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
154
docs/docs/integrations/text_embedding/fastembed.ipynb
Normal file
154
docs/docs/integrations/text_embedding/fastembed.ipynb
Normal file
@@ -0,0 +1,154 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Qdrant FastEmbed\n",
|
||||
"\n",
|
||||
"[FastEmbed](https://qdrant.github.io/fastembed/) is a lightweight, fast, Python library built for embedding generation. \n",
|
||||
"\n",
|
||||
"- Quantized model weights\n",
|
||||
"- ONNX Runtime, no PyTorch dependency\n",
|
||||
"- CPU-first design\n",
|
||||
"- Data-parallelism for encoding of large datasets."
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "2a773d8d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Dependencies\n",
|
||||
"\n",
|
||||
"To use FastEmbed with LangChain, install the `fastembed` Python package."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "91ea14ce-831d-409a-a88f-30353acdabd1",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install fastembed"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "426f1156",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Imports"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "3f5dc9d7-65e3-4b5b-9086-3327d016cfe0",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.embeddings.fastembed import FastEmbedEmbeddings"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Instantiating FastEmbed\n",
|
||||
" \n",
|
||||
"### Parameters\n",
|
||||
"- `model_name: str` (default: \"BAAI/bge-small-en-v1.5\")\n",
|
||||
" > Name of the FastEmbedding model to use. You can find the list of supported models [here](https://qdrant.github.io/fastembed/examples/Supported_Models/).\n",
|
||||
"\n",
|
||||
"- `max_length: int` (default: 512)\n",
|
||||
" > The maximum number of tokens. Unknown behavior for values > 512.\n",
|
||||
"\n",
|
||||
"- `cache_dir: Optional[str]`\n",
|
||||
" > The path to the cache directory. Defaults to `local_cache` in the parent directory.\n",
|
||||
"\n",
|
||||
"- `threads: Optional[int]`\n",
|
||||
" > The number of threads a single onnxruntime session can use. Defaults to None.\n",
|
||||
"\n",
|
||||
"- `doc_embed_type: Literal[\"default\", \"passage\"]` (default: \"default\")\n",
|
||||
" > \"default\": Uses FastEmbed's default embedding method.\n",
|
||||
" \n",
|
||||
" > \"passage\": Prefixes the text with \"passage\" before embedding."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "6fb585dd",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"embeddings = FastEmbedEmbeddings()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Usage\n",
|
||||
"\n",
|
||||
"### Generating document embeddings"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"document_embeddings = embeddings.embed_documents([\"This is a document\", \"This is some other document\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Generating query embeddings"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"query_embeddings = embeddings.embed_query(\"This is a query\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
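To connect the parameter list above to code, here is a hedged instantiation sketch (the specific values are illustrative):

```python
# Sketch: instantiate FastEmbed embeddings with explicit parameters.
from langchain.embeddings.fastembed import FastEmbedEmbeddings

embeddings = FastEmbedEmbeddings(
    model_name="BAAI/bge-small-en-v1.5",
    max_length=512,
    doc_embed_type="passage",  # prefix documents with "passage" before embedding
    threads=4,
)
vector = embeddings.embed_query("This is a query")
```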
@@ -38,7 +38,7 @@ It uses the ReAct framework to decide which tool to use, and uses memory to reme
## [Self-ask with search](/docs/modules/agents/agent_types/self_ask_with_search)

This agent utilizes a single tool that should be named `Intermediate Answer`.
This tool should be able to lookup factual answers to questions. This agent
This tool should be able to look up factual answers to questions. This agent
is equivalent to the original [self-ask with search paper](https://ofir.io/self-ask.pdf),
where a Google search API was provided as the tool.

@@ -46,7 +46,7 @@ where a Google search API was provided as the tool.

This agent uses the ReAct framework to interact with a docstore. Two tools must
be provided: a `Search` tool and a `Lookup` tool (they must be named exactly so).
The `Search` tool should search for a document, while the `Lookup` tool should lookup
The `Search` tool should search for a document, while the `Lookup` tool should look up
a term in the most recently found document.
This agent is equivalent to the
original [ReAct paper](https://arxiv.org/pdf/2210.03629.pdf), specifically the Wikipedia example.
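To make the `Intermediate Answer` requirement above concrete, here is a hedged setup sketch for the self-ask-with-search agent (the SerpAPI tool and the question are illustrative assumptions, not part of this commit):

```python
# Sketch only: assumes a SerpAPI key is configured in the environment.
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper

llm = OpenAI(temperature=0)
search = SerpAPIWrapper()

# The single tool must be named exactly "Intermediate Answer".
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for when you need to ask with search",
    )
]

self_ask_agent = initialize_agent(
    tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True
)
self_ask_agent.run("What is the hometown of the reigning men's U.S. Open champion?")
```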
@@ -1,4 +1,4 @@
# Custom LLM agent
# Custom LLM Agent

This notebook goes through how to create your own custom LLM agent.

@@ -1,13 +1,13 @@
# Custom LLM Agent (with a ChatModel)
# Custom LLM Chat Agent

This notebook goes through how to create your own custom agent based on a chat model.
This notebook explains how to create your own custom agent based on a chat model.

An LLM chat agent consists of three parts:
An LLM chat agent consists of four key components:

- `PromptTemplate`: This is the prompt template that can be used to instruct the language model on what to do
- `ChatModel`: This is the language model that powers the agent
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
- `OutputParser`: This determines how to parse the LLM output into an `AgentAction` or `AgentFinish` object
- `PromptTemplate`: This is the prompt template that instructs the language model on what to do.
- `ChatModel`: This is the language model that powers the agent.
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found.
- `OutputParser`: This determines how to parse the LLM output into an `AgentAction` or `AgentFinish` object.

The LLM Agent is used in an `AgentExecutor`. This `AgentExecutor` can largely be thought of as a loop that:
1. Passes user input and any previous steps to the Agent (in this case, the LLM Agent)
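To illustrate the `OutputParser` component listed above, here is a hedged sketch of the usual custom parser (the `Final Answer:` / `Action:` conventions are assumptions inherited from the standard ReAct-style prompt, not something defined in this commit):

```python
# Sketch of a custom OutputParser that maps raw LLM text to AgentAction / AgentFinish.
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish


class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # The model signals completion with "Final Answer:".
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Otherwise extract the tool name and its input.
        match = re.search(r"Action\s*:(.*?)\nAction\s*Input\s*:(.*)", llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        tool = match.group(1).strip()
        tool_input = match.group(2).strip(" ").strip('"')
        return AgentAction(tool=tool, tool_input=tool_input, log=llm_output)
```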
@@ -3,7 +3,7 @@

This walkthrough demonstrates how to replicate the [MRKL](https://arxiv.org/pdf/2205.00445.pdf) system using agents.

This uses the example Chinook database.
To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the `.db` file in a notebooks folder at the root of this repository.
To set it up, follow the instructions on https://database.guide/2-sample-databases-sqlite/ and place the `.db` file in a "notebooks" folder at the root of this repository.

```python
from langchain.chains import LLMMathChain
```
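As a hedged sketch of how that Chinook database typically gets wired into the MRKL tools (the file path is an illustrative assumption):

```python
# Sketch: set up the building blocks the MRKL walkthrough relies on.
from langchain.chains import LLMMathChain
from langchain.llms import OpenAI
from langchain.utilities import SQLDatabase

llm = OpenAI(temperature=0)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
db = SQLDatabase.from_uri("sqlite:///../notebooks/Chinook.db")  # path is illustrative
```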
@@ -127,7 +127,7 @@ mrkl.run("What is the full name of the artist who recently released an album cal

</CodeOutputBlock>

## With a chat model
## Using a Chat Model

```python
from langchain.chat_models import ChatOpenAI
```
@@ -4,17 +4,17 @@ sidebar_position: 2
# Tools

:::info
Head to [Integrations](/docs/integrations/tools/) for documentation on built-in tool integrations.
For documentation on built-in tool integrations, visit [Integrations](/docs/integrations/tools/).
:::

Tools are interfaces that an agent can use to interact with the world.

## Get started
## Getting Started

Tools are functions that agents can use to interact with the world.
These tools can be generic utilities (e.g. search), other chains, or even other agents.

Currently, tools can be loaded with the following snippet:
Currently, tools can be loaded using the following snippet:

```python
from langchain.agents import load_tools
```
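A hedged usage sketch for that snippet (the tool names are illustrative; some tools, such as `llm-math`, require an LLM):

```python
# Sketch: load a couple of built-in tools by name.
from langchain.agents import load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tool_names = ["llm-math", "requests_all"]
tools = load_tools(tool_names, llm=llm)  # llm is needed by some tools, e.g. llm-math
```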
@@ -4,7 +4,7 @@ sidebar_position: 3
# Toolkits

:::info
Head to [Integrations](/docs/integrations/toolkits/) for documentation on built-in toolkit integrations.
For documentation on built-in toolkit integrations, visit [Integrations](/docs/integrations/toolkits/).
:::

Toolkits are collections of tools that are designed to be used together for specific tasks and have convenience loading methods.
Toolkits are collections of tools that are designed to be used together for specific tasks and have convenient loading methods.
@@ -593,7 +593,7 @@
|
||||
"id": "a4a7d783-4ddf-42e7-b143-8050891663c2",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## [LangSmith](https://smith.langchain.com)\n",
|
||||
"## [LangSmith](/docs/langsmith)\n",
|
||||
"\n",
|
||||
"All `ChatModel`s come with built-in LangSmith tracing. Just set the following environment variables:\n",
|
||||
"```bash\n",
|
||||
|
||||
@@ -459,7 +459,7 @@
|
||||
"id": "09108687-ed15-468b-9ac5-674e75785199",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## [LangSmith](https://smith.langchain.com)\n",
|
||||
"## [LangSmith](/docs/langsmith)\n",
|
||||
"\n",
|
||||
"All `LLM`s come with built-in LangSmith tracing. Just set the following environment variables:\n",
|
||||
"```bash\n",
|
||||
|
||||
BIN  docs/static/img/langchain_stack.png  (vendored)
Binary file not shown. Before: 1.6 MiB. After: 4.0 MiB.
@@ -4,7 +4,7 @@ from typing import Any, Dict, List, Optional, cast
|
||||
from langchain.callbacks.manager import CallbackManagerForLLMRun
|
||||
from langchain.chat_models.base import SimpleChatModel
|
||||
from langchain.llms.azureml_endpoint import AzureMLEndpointClient, ContentFormatterBase
|
||||
from langchain.pydantic_v1 import validator
|
||||
from langchain.pydantic_v1 import SecretStr, validator
|
||||
from langchain.schema.messages import (
|
||||
AIMessage,
|
||||
BaseMessage,
|
||||
@@ -12,7 +12,7 @@ from langchain.schema.messages import (
|
||||
HumanMessage,
|
||||
SystemMessage,
|
||||
)
|
||||
from langchain.utils import get_from_dict_or_env
|
||||
from langchain.utils import convert_to_secret_str, get_from_dict_or_env
|
||||
|
||||
|
||||
class LlamaContentFormatter(ContentFormatterBase):
|
||||
@@ -94,7 +94,7 @@ class AzureMLChatOnlineEndpoint(SimpleChatModel):
|
||||
"""URL of pre-existing Endpoint. Should be passed to constructor or specified as
|
||||
env var `AZUREML_ENDPOINT_URL`."""
|
||||
|
||||
endpoint_api_key: str = ""
|
||||
endpoint_api_key: SecretStr = convert_to_secret_str("")
|
||||
"""Authentication Key for Endpoint. Should be passed to constructor or specified as
|
||||
env var `AZUREML_ENDPOINT_API_KEY`."""
|
||||
|
||||
@@ -112,13 +112,15 @@ class AzureMLChatOnlineEndpoint(SimpleChatModel):
|
||||
@classmethod
|
||||
def validate_client(cls, field_value: Any, values: Dict) -> AzureMLEndpointClient:
|
||||
"""Validate that api key and python package exist in environment."""
|
||||
endpoint_key = get_from_dict_or_env(
|
||||
values, "endpoint_api_key", "AZUREML_ENDPOINT_API_KEY"
|
||||
values["endpoint_api_key"] = convert_to_secret_str(
|
||||
get_from_dict_or_env(values, "endpoint_api_key", "AZUREML_ENDPOINT_API_KEY")
|
||||
)
|
||||
endpoint_url = get_from_dict_or_env(
|
||||
values, "endpoint_url", "AZUREML_ENDPOINT_URL"
|
||||
)
|
||||
http_client = AzureMLEndpointClient(endpoint_url, endpoint_key)
|
||||
http_client = AzureMLEndpointClient(
|
||||
endpoint_url, values["endpoint_api_key"].get_secret_value()
|
||||
)
|
||||
return http_client
|
||||
|
||||
@property
|
||||
|
||||
@@ -67,6 +67,7 @@ from langchain.document_loaders.diffbot import DiffbotLoader
|
||||
from langchain.document_loaders.directory import DirectoryLoader
|
||||
from langchain.document_loaders.discord import DiscordChatLoader
|
||||
from langchain.document_loaders.docugami import DocugamiLoader
|
||||
from langchain.document_loaders.docusaurus import DocusaurusLoader
|
||||
from langchain.document_loaders.dropbox import DropboxLoader
|
||||
from langchain.document_loaders.duckdb_loader import DuckDBLoader
|
||||
from langchain.document_loaders.email import (
|
||||
@@ -250,6 +251,7 @@ __all__ = [
|
||||
"DirectoryLoader",
|
||||
"DiscordChatLoader",
|
||||
"DocugamiLoader",
|
||||
"DocusaurusLoader",
|
||||
"Docx2txtLoader",
|
||||
"DropboxLoader",
|
||||
"DuckDBLoader",
|
||||
|
||||
49  libs/langchain/langchain/document_loaders/docusaurus.py  (new file)
@@ -0,0 +1,49 @@
"""Load Documents from Docusaurus Documentation"""
from typing import Any, List, Optional

from langchain.document_loaders.sitemap import SitemapLoader


class DocusaurusLoader(SitemapLoader):
    """
    Loader that leverages the SitemapLoader to loop through the generated pages of a
    Docusaurus Documentation website and extracts the content by looking for specific
    HTML tags. By default, the parser searches for the main content of the Docusaurus
    page, which is normally the <article>. You also have the option to define your own
    custom HTML tags by providing them as a list, for example: ["div", ".main", "a"].
    """

    def __init__(
        self,
        url: str,
        custom_html_tags: Optional[List[str]] = None,
        **kwargs: Any,
    ):
        """
        Initialize DocusaurusLoader

        Args:
            url: The base URL of the Docusaurus website.
            custom_html_tags: Optional custom html tags to extract content from pages.
            kwargs: Additional args to extend the underlying SitemapLoader, for example:
                filter_urls, blocksize, meta_function, is_local, continue_on_failure
        """
        if not kwargs.get("is_local"):
            url = f"{url}/sitemap.xml"

        self.custom_html_tags = custom_html_tags or ["main article"]

        super().__init__(
            url,
            parsing_function=kwargs.get("parsing_function") or self._parsing_function,
            **kwargs,
        )

    def _parsing_function(self, content: Any) -> str:
        """Parses specific elements from a Docusaurus page."""
        relevant_elements = content.select(",".join(self.custom_html_tags))

        for element in relevant_elements:
            if element not in relevant_elements:
                element.decompose()

        return str(content.get_text())
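A hedged usage sketch for the loader above (the site URL and `filter_urls` value are illustrative; extra kwargs are forwarded to the underlying `SitemapLoader`):

```python
# Sketch: load a subset of pages from a Docusaurus site via its sitemap.
from langchain.document_loaders import DocusaurusLoader

loader = DocusaurusLoader(
    "https://python.langchain.com",
    filter_urls=["https://python.langchain.com/docs/integrations/document_loaders/sitemap"],
)
docs = loader.load()
print(docs[0].metadata)
```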
@@ -32,6 +32,7 @@ from langchain.embeddings.elasticsearch import ElasticsearchEmbeddings
|
||||
from langchain.embeddings.embaas import EmbaasEmbeddings
|
||||
from langchain.embeddings.ernie import ErnieEmbeddings
|
||||
from langchain.embeddings.fake import DeterministicFakeEmbedding, FakeEmbeddings
|
||||
from langchain.embeddings.fastembed import FastEmbedEmbeddings
|
||||
from langchain.embeddings.google_palm import GooglePalmEmbeddings
|
||||
from langchain.embeddings.gpt4all import GPT4AllEmbeddings
|
||||
from langchain.embeddings.gradient_ai import GradientEmbeddings
|
||||
@@ -77,6 +78,7 @@ __all__ = [
|
||||
"ClarifaiEmbeddings",
|
||||
"CohereEmbeddings",
|
||||
"ElasticsearchEmbeddings",
|
||||
"FastEmbedEmbeddings",
|
||||
"HuggingFaceEmbeddings",
|
||||
"HuggingFaceInferenceAPIEmbeddings",
|
||||
"GradientEmbeddings",
|
||||
|
||||
@@ -20,7 +20,7 @@ class ClarifaiEmbeddings(BaseModel, Embeddings):
|
||||
|
||||
from langchain.embeddings import ClarifaiEmbeddings
|
||||
clarifai = ClarifaiEmbeddings(
|
||||
model="embed-english-light-v2.0", clarifai_api_key="my-api-key"
|
||||
model="embed-english-light-v3.0", clarifai_api_key="my-api-key"
|
||||
)
|
||||
"""
|
||||
|
||||
|
||||
@@ -17,7 +17,7 @@ class CohereEmbeddings(BaseModel, Embeddings):
|
||||
|
||||
from langchain.embeddings import CohereEmbeddings
|
||||
cohere = CohereEmbeddings(
|
||||
model="embed-english-light-v2.0", cohere_api_key="my-api-key"
|
||||
model="embed-english-light-v3.0", cohere_api_key="my-api-key"
|
||||
)
|
||||
"""
|
||||
|
||||
|
||||
108
libs/langchain/langchain/embeddings/fastembed.py
Normal file
108
libs/langchain/langchain/embeddings/fastembed.py
Normal file
@@ -0,0 +1,108 @@
|
||||
from typing import Any, Dict, List, Literal, Optional
|
||||
|
||||
import numpy as np
|
||||
|
||||
from langchain.pydantic_v1 import BaseModel, Extra, root_validator
|
||||
from langchain.schema.embeddings import Embeddings
|
||||
|
||||
|
||||
class FastEmbedEmbeddings(BaseModel, Embeddings):
|
||||
"""Qdrant FastEmbedding models.
|
||||
FastEmbed is a lightweight, fast, Python library built for embedding generation.
|
||||
See more documentation at:
|
||||
* https://github.com/qdrant/fastembed/
|
||||
* https://qdrant.github.io/fastembed/
|
||||
|
||||
To use this class, you must install the `fastembed` Python package.
|
||||
|
||||
`pip install fastembed`
|
||||
Example:
|
||||
from langchain.embeddings import FastEmbedEmbeddings
|
||||
fastembed = FastEmbedEmbeddings()
|
||||
"""
|
||||
|
||||
model_name: str = "BAAI/bge-small-en-v1.5"
|
||||
"""Name of the FastEmbedding model to use
|
||||
Defaults to "BAAI/bge-small-en-v1.5"
|
||||
Find the list of supported models at
|
||||
https://qdrant.github.io/fastembed/examples/Supported_Models/
|
||||
"""
|
||||
|
||||
max_length: int = 512
|
||||
"""The maximum number of tokens. Defaults to 512.
|
||||
Unknown behavior for values > 512.
|
||||
"""
|
||||
|
||||
cache_dir: Optional[str]
|
||||
"""The path to the cache directory.
|
||||
Defaults to `local_cache` in the parent directory
|
||||
"""
|
||||
|
||||
threads: Optional[int]
|
||||
"""The number of threads single onnxruntime session can use.
|
||||
Defaults to None
|
||||
"""
|
||||
|
||||
doc_embed_type: Literal["default", "passage"] = "default"
|
||||
"""Type of embedding to use for documents
|
||||
"default": Uses FastEmbed's default embedding method
|
||||
"passage": Prefixes the text with "passage" before embedding.
|
||||
"""
|
||||
|
||||
_model: Any # : :meta private:
|
||||
|
||||
class Config:
|
||||
"""Configuration for this pydantic object."""
|
||||
|
||||
extra = Extra.forbid
|
||||
|
||||
@root_validator()
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that FastEmbed has been installed."""
|
||||
try:
|
||||
from fastembed.embedding import FlagEmbedding
|
||||
|
||||
model_name = values.get("model_name")
|
||||
max_length = values.get("max_length")
|
||||
cache_dir = values.get("cache_dir")
|
||||
threads = values.get("threads")
|
||||
values["_model"] = FlagEmbedding(
|
||||
model_name=model_name,
|
||||
max_length=max_length,
|
||||
cache_dir=cache_dir,
|
||||
threads=threads,
|
||||
)
|
||||
except ImportError as ie:
|
||||
raise ImportError(
|
||||
"Could not import 'fastembed' Python package. "
|
||||
"Please install it with `pip install fastembed`."
|
||||
) from ie
|
||||
return values
|
||||
|
||||
def embed_documents(self, texts: List[str]) -> List[List[float]]:
|
||||
"""Generate embeddings for documents using FastEmbed.
|
||||
|
||||
Args:
|
||||
texts: The list of texts to embed.
|
||||
|
||||
Returns:
|
||||
List of embeddings, one for each text.
|
||||
"""
|
||||
embeddings: List[np.ndarray]
|
||||
if self.doc_embed_type == "passage":
|
||||
embeddings = self._model.passage_embed(texts)
|
||||
else:
|
||||
embeddings = self._model.embed(texts)
|
||||
return [e.tolist() for e in embeddings]
|
||||
|
||||
def embed_query(self, text: str) -> List[float]:
|
||||
"""Generate query embeddings using FastEmbed.
|
||||
|
||||
Args:
|
||||
text: The text to embed.
|
||||
|
||||
Returns:
|
||||
Embeddings for the text.
|
||||
"""
|
||||
query_embeddings: np.ndarray = next(self._model.query_embed(text))
|
||||
return query_embeddings.tolist()
|
||||
@@ -13,6 +13,7 @@ from langchain.memory.chat_message_histories.firestore import (
|
||||
from langchain.memory.chat_message_histories.in_memory import ChatMessageHistory
|
||||
from langchain.memory.chat_message_histories.momento import MomentoChatMessageHistory
|
||||
from langchain.memory.chat_message_histories.mongodb import MongoDBChatMessageHistory
|
||||
from langchain.memory.chat_message_histories.neo4j import Neo4jChatMessageHistory
|
||||
from langchain.memory.chat_message_histories.postgres import PostgresChatMessageHistory
|
||||
from langchain.memory.chat_message_histories.redis import RedisChatMessageHistory
|
||||
from langchain.memory.chat_message_histories.rocksetdb import RocksetChatMessageHistory
|
||||
@@ -48,4 +49,5 @@ __all__ = [
|
||||
"XataChatMessageHistory",
|
||||
"ZepChatMessageHistory",
|
||||
"UpstashRedisChatMessageHistory",
|
||||
"Neo4jChatMessageHistory",
|
||||
]
|
||||
|
||||
112
libs/langchain/langchain/memory/chat_message_histories/neo4j.py
Normal file
112
libs/langchain/langchain/memory/chat_message_histories/neo4j.py
Normal file
@@ -0,0 +1,112 @@
|
||||
from typing import List, Optional, Union
|
||||
|
||||
from langchain.schema import BaseChatMessageHistory
|
||||
from langchain.schema.messages import BaseMessage, messages_from_dict
|
||||
from langchain.utils import get_from_env
|
||||
|
||||
|
||||
class Neo4jChatMessageHistory(BaseChatMessageHistory):
|
||||
"""Chat message history stored in a Neo4j database."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
session_id: Union[str, int],
|
||||
url: Optional[str] = None,
|
||||
username: Optional[str] = None,
|
||||
password: Optional[str] = None,
|
||||
database: str = "neo4j",
|
||||
node_label: str = "Session",
|
||||
window: int = 3,
|
||||
):
|
||||
try:
|
||||
import neo4j
|
||||
except ImportError:
|
||||
raise ValueError(
|
||||
"Could not import neo4j python package. "
|
||||
"Please install it with `pip install neo4j`."
|
||||
)
|
||||
|
||||
# Make sure session id is not null
|
||||
if not session_id:
|
||||
raise ValueError("Please ensure that the session_id parameter is provided")
|
||||
|
||||
url = get_from_env("url", "NEO4J_URI", url)
|
||||
username = get_from_env("username", "NEO4J_USERNAME", username)
|
||||
password = get_from_env("password", "NEO4J_PASSWORD", password)
|
||||
database = get_from_env("database", "NEO4J_DATABASE", database)
|
||||
|
||||
self._driver = neo4j.GraphDatabase.driver(url, auth=(username, password))
|
||||
self._database = database
|
||||
self._session_id = session_id
|
||||
self._node_label = node_label
|
||||
self._window = window
|
||||
|
||||
# Verify connection
|
||||
try:
|
||||
self._driver.verify_connectivity()
|
||||
except neo4j.exceptions.ServiceUnavailable:
|
||||
raise ValueError(
|
||||
"Could not connect to Neo4j database. "
|
||||
"Please ensure that the url is correct"
|
||||
)
|
||||
except neo4j.exceptions.AuthError:
|
||||
raise ValueError(
|
||||
"Could not connect to Neo4j database. "
|
||||
"Please ensure that the username and password are correct"
|
||||
)
|
||||
# Create session node
|
||||
self._driver.execute_query(
|
||||
f"MERGE (s:`{self._node_label}` {{id:$session_id}})",
|
||||
{"session_id": self._session_id},
|
||||
).summary
|
||||
|
||||
@property
|
||||
def messages(self) -> List[BaseMessage]: # type: ignore
|
||||
"""Retrieve the messages from Neo4j"""
|
||||
query = (
|
||||
f"MATCH (s:`{self._node_label}`)-[:LAST_MESSAGE]->(last_message) "
|
||||
"WHERE s.id = $session_id MATCH p=(last_message)<-[:NEXT*0.."
|
||||
f"{self._window*2}]-() WITH p, length(p) AS length "
|
||||
"ORDER BY length DESC LIMIT 1 UNWIND reverse(nodes(p)) AS node "
|
||||
"RETURN {data:{content: node.content}, type:node.type} AS result"
|
||||
)
|
||||
records, _, _ = self._driver.execute_query(
|
||||
query, {"session_id": self._session_id}
|
||||
)
|
||||
|
||||
messages = messages_from_dict([el["result"] for el in records])
|
||||
return messages
|
||||
|
||||
def add_message(self, message: BaseMessage) -> None:
|
||||
"""Append the message to the record in Neo4j"""
|
||||
query = (
|
||||
f"MATCH (s:`{self._node_label}`) WHERE s.id = $session_id "
|
||||
"OPTIONAL MATCH (s)-[lm:LAST_MESSAGE]->(last_message) "
|
||||
"CREATE (s)-[:LAST_MESSAGE]->(new:Message) "
|
||||
"SET new += {type:$type, content:$content} "
|
||||
"WITH new, lm, last_message WHERE last_message IS NOT NULL "
|
||||
"CREATE (last_message)-[:NEXT]->(new) "
|
||||
"DELETE lm"
|
||||
)
|
||||
self._driver.execute_query(
|
||||
query,
|
||||
{
|
||||
"type": message.type,
|
||||
"content": message.content,
|
||||
"session_id": self._session_id,
|
||||
},
|
||||
).summary
|
||||
|
||||
def clear(self) -> None:
|
||||
"""Clear session memory from Neo4j"""
|
||||
query = (
|
||||
f"MATCH (s:`{self._node_label}`)-[:LAST_MESSAGE]->(last_message) "
|
||||
"WHERE s.id = $session_id MATCH p=(last_message)<-[:NEXT]-() "
|
||||
"WITH p, length(p) AS length ORDER BY length DESC LIMIT 1 "
|
||||
"UNWIND nodes(p) as node DETACH DELETE node;"
|
||||
)
|
||||
self._driver.execute_query(query, {"session_id": self._session_id}).summary
|
||||
|
||||
def __del__(self) -> None:
|
||||
if self._driver:
|
||||
self._driver.close()
|
||||
@@ -1217,16 +1217,97 @@ class RunnableSerializable(Serializable, Runnable[Input, Output]):
|
||||
|
||||
|
||||
class RunnableSequence(RunnableSerializable[Input, Output]):
|
||||
"""
|
||||
A sequence of runnables, where the output of each is the input of the next.
|
||||
"""A sequence of runnables, where the output of each is the input of the next.
|
||||
|
||||
RunnableSequence is the most important composition operator in LangChain as it is
|
||||
used in virtually every chain.
|
||||
|
||||
A RunnableSequence can be instantiated directly or more commonly by using the `|`
|
||||
operator where either the left or right operands (or both) must be a Runnable.
|
||||
|
||||
Any RunnableSequence automatically supports sync, async, batch.
|
||||
|
||||
The default implementations of `batch` and `abatch` utilize threadpools and
|
||||
asyncio gather and will be faster than naive invocation of invoke or ainvoke
|
||||
for IO bound runnables.
|
||||
|
||||
Batching is implemented by invoking the batch method on each component of the
|
||||
RunnableSequence in order.
|
||||
|
||||
A RunnableSequence preserves the streaming properties of its components, so if all
|
||||
components of the sequence implement a `transform` method -- which
|
||||
is the method that implements the logic to map a streaming input to a streaming
|
||||
output -- then the sequence will be able to stream input to output!
|
||||
|
||||
If any component of the sequence does not implement transform then the
|
||||
streaming will only begin after this component is run. If there are
|
||||
multiple blocking components, streaming begins after the last one.
|
||||
|
||||
Please note: RunnableLambdas do not support `transform` by default! So if
you need to use a RunnableLambda, be careful about where you place it in a
RunnableSequence (if you need to use the .stream()/.astream() methods).
|
||||
|
||||
If you need arbitrary logic and need streaming, you can subclass
|
||||
Runnable, and implement `transform` for whatever logic you need.
|
||||
|
||||
Here is a simple example that uses simple functions to illustrate the use of
|
||||
RunnableSequence:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
from langchain.schema.runnable import RunnableLambda
|
||||
|
||||
def add_one(x: int) -> int:
|
||||
return x + 1
|
||||
|
||||
def mul_two(x: int) -> int:
|
||||
return x * 2
|
||||
|
||||
runnable_1 = RunnableLambda(add_one)
|
||||
runnable_2 = RunnableLambda(mul_two)
|
||||
sequence = runnable_1 | runnable_2
|
||||
# Or equivalently:
|
||||
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
|
||||
sequence.invoke(1)
|
||||
await runnable.ainvoke(1)
|
||||
|
||||
sequence.batch([1, 2, 3])
|
||||
await sequence.abatch([1, 2, 3])
|
||||
|
||||
Here's an example that streams JSON output generated by an LLM:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
from langchain.output_parsers.json import SimpleJsonOutputParser
from langchain.chat_models.openai import ChatOpenAI
from langchain.prompts import PromptTemplate
|
||||
|
||||
prompt = PromptTemplate.from_template(
|
||||
'In JSON format, give me a list of {topic} and their '
|
||||
'corresponding names in French, Spanish and in a '
|
||||
'Cat Language.'
|
||||
)
|
||||
|
||||
model = ChatOpenAI()
|
||||
chain = prompt | model | SimpleJsonOutputParser()
|
||||
|
||||
async for chunk in chain.astream({'topic': 'colors'}):
|
||||
print('-')
|
||||
print(chunk, sep='', flush=True)
|
||||
"""
|
||||
|
||||
# The steps are broken into first, middle and last, solely for type checking
|
||||
# purposes. It allows specifying the `Input` on the first type, the `Output` of
|
||||
# the last type.
|
||||
first: Runnable[Input, Any]
|
||||
"""The first runnable in the sequence."""
|
||||
middle: List[Runnable[Any, Any]] = Field(default_factory=list)
|
||||
"""The middle runnables in the sequence."""
|
||||
last: Runnable[Any, Output]
|
||||
"""The last runnable in the sequence."""
|
||||
|
||||
@property
|
||||
def steps(self) -> List[Runnable[Any, Any]]:
|
||||
"""All the runnables that make up the sequence in order."""
|
||||
return [self.first] + self.middle + [self.last]
|
||||
|
||||
@classmethod
|
||||
|
||||
@@ -0,0 +1,43 @@
|
||||
from pathlib import Path
|
||||
|
||||
from langchain.document_loaders import DocusaurusLoader
|
||||
|
||||
DOCS_URL = str(Path(__file__).parent.parent / "examples/docusaurus-sitemap.xml")
|
||||
|
||||
|
||||
def test_docusarus() -> None:
|
||||
"""Test sitemap loader."""
|
||||
loader = DocusaurusLoader(DOCS_URL, is_local=True)
|
||||
documents = loader.load()
|
||||
assert len(documents) > 1
|
||||
assert "🦜️🔗 Langchain" in documents[0].page_content
|
||||
|
||||
|
||||
def test_filter_docusaurus_sitemap() -> None:
|
||||
"""Test sitemap loader."""
|
||||
loader = DocusaurusLoader(
|
||||
DOCS_URL,
|
||||
is_local=True,
|
||||
filter_urls=[
|
||||
"https://python.langchain.com/docs/integrations/document_loaders/sitemap"
|
||||
],
|
||||
)
|
||||
documents = loader.load()
|
||||
assert len(documents) == 1
|
||||
assert "SitemapLoader" in documents[0].page_content
|
||||
|
||||
|
||||
def test_docusarus_metadata() -> None:
|
||||
def sitemap_metadata_one(meta: dict, _content: None) -> dict:
|
||||
return {**meta, "mykey": "Super Important Metadata"}
|
||||
|
||||
"""Test sitemap loader."""
|
||||
loader = DocusaurusLoader(
|
||||
DOCS_URL,
|
||||
is_local=True,
|
||||
meta_function=sitemap_metadata_one,
|
||||
)
|
||||
documents = loader.load()
|
||||
assert len(documents) > 1
|
||||
assert "mykey" in documents[0].metadata
|
||||
assert "Super Important Metadata" in documents[0].metadata["mykey"]
|
||||
@@ -0,0 +1,76 @@
|
||||
"""Test FastEmbed embeddings."""
|
||||
import pytest
|
||||
|
||||
from langchain.embeddings.fastembed import FastEmbedEmbeddings
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"model_name", ["sentence-transformers/all-MiniLM-L6-v2", "BAAI/bge-small-en-v1.5"]
|
||||
)
|
||||
@pytest.mark.parametrize("max_length", [50, 512])
|
||||
@pytest.mark.parametrize("doc_embed_type", ["default", "passage"])
|
||||
@pytest.mark.parametrize("threads", [0, 10])
|
||||
def test_fastembed_embedding_documents(
|
||||
model_name: str, max_length: int, doc_embed_type: str, threads: int
|
||||
) -> None:
|
||||
"""Test fastembed embeddings for documents."""
|
||||
documents = ["foo bar", "bar foo"]
|
||||
embedding = FastEmbedEmbeddings(
|
||||
model_name=model_name,
|
||||
max_length=max_length,
|
||||
doc_embed_type=doc_embed_type,
|
||||
threads=threads,
|
||||
)
|
||||
output = embedding.embed_documents(documents)
|
||||
assert len(output) == 2
|
||||
assert len(output[0]) == 384
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"model_name", ["sentence-transformers/all-MiniLM-L6-v2", "BAAI/bge-small-en-v1.5"]
|
||||
)
|
||||
@pytest.mark.parametrize("max_length", [50, 512])
|
||||
def test_fastembed_embedding_query(model_name: str, max_length: int) -> None:
|
||||
"""Test fastembed embeddings for query."""
|
||||
document = "foo bar"
|
||||
embedding = FastEmbedEmbeddings(model_name=model_name, max_length=max_length)
|
||||
output = embedding.embed_query(document)
|
||||
assert len(output) == 384
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
@pytest.mark.parametrize(
|
||||
"model_name", ["sentence-transformers/all-MiniLM-L6-v2", "BAAI/bge-small-en-v1.5"]
|
||||
)
|
||||
@pytest.mark.parametrize("max_length", [50, 512])
|
||||
@pytest.mark.parametrize("doc_embed_type", ["default", "passage"])
|
||||
@pytest.mark.parametrize("threads", [0, 10])
|
||||
async def test_fastembed_async_embedding_documents(
|
||||
model_name: str, max_length: int, doc_embed_type: str, threads: int
|
||||
) -> None:
|
||||
"""Test fastembed embeddings for documents."""
|
||||
documents = ["foo bar", "bar foo"]
|
||||
embedding = FastEmbedEmbeddings(
|
||||
model_name=model_name,
|
||||
max_length=max_length,
|
||||
doc_embed_type=doc_embed_type,
|
||||
threads=threads,
|
||||
)
|
||||
output = await embedding.aembed_documents(documents)
|
||||
assert len(output) == 2
|
||||
assert len(output[0]) == 384
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
@pytest.mark.parametrize(
|
||||
"model_name", ["sentence-transformers/all-MiniLM-L6-v2", "BAAI/bge-small-en-v1.5"]
|
||||
)
|
||||
@pytest.mark.parametrize("max_length", [50, 512])
|
||||
async def test_fastembed_async_embedding_query(
|
||||
model_name: str, max_length: int
|
||||
) -> None:
|
||||
"""Test fastembed embeddings for query."""
|
||||
document = "foo bar"
|
||||
embedding = FastEmbedEmbeddings(model_name=model_name, max_length=max_length)
|
||||
output = await embedding.aembed_query(document)
|
||||
assert len(output) == 384
|
||||
@@ -0,0 +1,42 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
|
||||
xmlns:news="http://www.google.com/schemas/sitemap-news/0.9"
|
||||
xmlns:xhtml="http://www.w3.org/1999/xhtml"
|
||||
xmlns:image="http://www.google.com/schemas/sitemap-image/1.1"
|
||||
xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
|
||||
<url>
|
||||
<loc>https://python.langchain.com/docs/integrations/document_loaders/sitemap</loc>
|
||||
<changefreq>weekly</changefreq>
|
||||
<priority>0.5</priority>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://python.langchain.com/cookbook</loc>
|
||||
<changefreq>weekly</changefreq>
|
||||
<priority>0.5</priority>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://python.langchain.com/docs/additional_resources</loc>
|
||||
<changefreq>weekly</changefreq>
|
||||
<priority>0.5</priority>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://python.langchain.com/docs/modules/chains/how_to/</loc>
|
||||
<changefreq>weekly</changefreq>
|
||||
<priority>0.5</priority>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa</loc>
|
||||
<changefreq>weekly</changefreq>
|
||||
<priority>0.5</priority>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://python.langchain.com/docs/use_cases/summarization</loc>
|
||||
<changefreq>weekly</changefreq>
|
||||
<priority>0.5</priority>
|
||||
</url>
|
||||
<url>
|
||||
<loc>https://python.langchain.com/</loc>
|
||||
<changefreq>weekly</changefreq>
|
||||
<priority>0.5</priority>
|
||||
</url>
|
||||
</urlset>
|
||||
30
libs/langchain/tests/integration_tests/memory/test_neo4j.py
Normal file
30
libs/langchain/tests/integration_tests/memory/test_neo4j.py
Normal file
@@ -0,0 +1,30 @@
|
||||
import json
|
||||
|
||||
from langchain.memory import ConversationBufferMemory
|
||||
from langchain.memory.chat_message_histories import Neo4jChatMessageHistory
|
||||
from langchain.schema.messages import _message_to_dict
|
||||
|
||||
|
||||
def test_memory_with_message_store() -> None:
|
||||
"""Test the memory with a message store."""
|
||||
# setup Neo4j as a message store
|
||||
message_history = Neo4jChatMessageHistory(session_id="test-session")
|
||||
memory = ConversationBufferMemory(
|
||||
memory_key="baz", chat_memory=message_history, return_messages=True
|
||||
)
|
||||
|
||||
# add some messages
|
||||
memory.chat_memory.add_ai_message("This is me, the AI")
|
||||
memory.chat_memory.add_user_message("This is me, the human")
|
||||
|
||||
# get the message history from the memory store and turn it into a json
|
||||
messages = memory.chat_memory.messages
|
||||
messages_json = json.dumps([_message_to_dict(msg) for msg in messages])
|
||||
|
||||
assert "This is me, the AI" in messages_json
|
||||
assert "This is me, the human" in messages_json
|
||||
|
||||
# remove the record from Neo4j, so the next test run won't pick it up
|
||||
memory.chat_memory.clear()
|
||||
|
||||
assert memory.chat_memory.messages == []
|
||||
@@ -0,0 +1,65 @@
|
||||
"""Test AzureML chat endpoint."""
|
||||
|
||||
import os
|
||||
|
||||
import pytest
|
||||
from pytest import CaptureFixture, FixtureRequest
|
||||
|
||||
from langchain.chat_models.azureml_endpoint import AzureMLChatOnlineEndpoint
|
||||
from langchain.pydantic_v1 import SecretStr
|
||||
|
||||
|
||||
@pytest.fixture(scope="class")
|
||||
def api_passed_via_environment_fixture() -> AzureMLChatOnlineEndpoint:
|
||||
"""Fixture to create an AzureMLChatOnlineEndpoint instance
|
||||
with API key passed from environment"""
|
||||
os.environ["AZUREML_ENDPOINT_API_KEY"] = "my-api-key"
|
||||
azure_chat = AzureMLChatOnlineEndpoint(
|
||||
endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score"
|
||||
)
|
||||
del os.environ["AZUREML_ENDPOINT_API_KEY"]
|
||||
return azure_chat
|
||||
|
||||
|
||||
@pytest.fixture(scope="class")
|
||||
def api_passed_via_constructor_fixture() -> AzureMLChatOnlineEndpoint:
|
||||
"""Fixture to create an AzureMLChatOnlineEndpoint instance
|
||||
with API key passed from constructor"""
|
||||
azure_chat = AzureMLChatOnlineEndpoint(
|
||||
endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score",
|
||||
endpoint_api_key="my-api-key",
|
||||
)
|
||||
return azure_chat
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"fixture_name",
|
||||
["api_passed_via_constructor_fixture", "api_passed_via_environment_fixture"],
|
||||
)
|
||||
class TestAzureMLChatOnlineEndpoint:
|
||||
def test_api_key_is_secret_string(
|
||||
self, fixture_name: str, request: FixtureRequest
|
||||
) -> None:
|
||||
"""Test that the API key is a SecretStr instance"""
|
||||
azure_chat = request.getfixturevalue(fixture_name)
|
||||
assert isinstance(azure_chat.endpoint_api_key, SecretStr)
|
||||
|
||||
def test_api_key_masked(
|
||||
self, fixture_name: str, request: FixtureRequest, capsys: CaptureFixture
|
||||
) -> None:
|
||||
"""Test that the API key is masked"""
|
||||
azure_chat = request.getfixturevalue(fixture_name)
|
||||
print(azure_chat.endpoint_api_key, end="")
|
||||
captured = capsys.readouterr()
|
||||
assert (
|
||||
(str(azure_chat.endpoint_api_key) == "**********")
|
||||
and (repr(azure_chat.endpoint_api_key) == "SecretStr('**********')")
|
||||
and (captured.out == "**********")
|
||||
)
|
||||
|
||||
def test_api_key_is_readable(
|
||||
self, fixture_name: str, request: FixtureRequest
|
||||
) -> None:
|
||||
"""Test that the real secret value of the API key can be read"""
|
||||
azure_chat = request.getfixturevalue(fixture_name)
|
||||
assert azure_chat.endpoint_api_key.get_secret_value() == "my-api-key"
|
||||
@@ -47,6 +47,7 @@ EXPECTED_ALL = [
|
||||
"DirectoryLoader",
|
||||
"DiscordChatLoader",
|
||||
"DocugamiLoader",
|
||||
"DocusaurusLoader",
|
||||
"Docx2txtLoader",
|
||||
"DropboxLoader",
|
||||
"DuckDBLoader",
|
||||
|
||||
@@ -7,6 +7,7 @@ EXPECTED_ALL = [
|
||||
"ClarifaiEmbeddings",
|
||||
"CohereEmbeddings",
|
||||
"ElasticsearchEmbeddings",
|
||||
"FastEmbedEmbeddings",
|
||||
"HuggingFaceEmbeddings",
|
||||
"HuggingFaceInferenceAPIEmbeddings",
|
||||
"GradientEmbeddings",
|
||||
|
||||
@@ -19,7 +19,7 @@ These templates cover advanced retrieval techniques, which can be used for chat
|
||||
|
||||
- [Reranking](../rag-pinecone-rerank): This retrieval technique uses Cohere's reranking endpoint to rerank documents from an initial retrieval step.
|
||||
- [Anthropic Iterative Search](../anthropic-iterative-search): This retrieval technique uses iterative prompting to determine what to retrieve and whether the retrieved documents are good enough.
|
||||
- [Neo4j Parent Document Retrieval](../neo4j-parent): This retrieval technique stores embeddings for smaller chunks, but then returns larger chunks to pass to the model for generation.
|
||||
- **Parent Document Retrieval** using [Neo4j](../neo4j-parent) or [MongoDB](../mongo-parent-document-retrieval): This retrieval technique stores embeddings for smaller chunks, but then returns larger chunks to pass to the model for generation.
|
||||
- [Semi-Structured RAG](../rag-semi-structured): The template shows how to do retrieval over semi-structured data (e.g. data that involves both text and tables).
|
||||
- [Temporal RAG](../rag-timescale-hybrid-search-time): The template shows how to do hybrid search over data with a time-based component using [Timescale Vector](https://www.timescale.com/ai?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral).
|
||||
|
||||
|
||||
21
templates/rag-timescale-conversation/LICENSE
Normal file
21
templates/rag-timescale-conversation/LICENSE
Normal file
@@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2023 LangChain, Inc.
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
80
templates/rag-timescale-conversation/README.md
Normal file
80
templates/rag-timescale-conversation/README.md
Normal file
@@ -0,0 +1,80 @@
|
||||
|
||||
# rag-timescale-conversation
|
||||
|
||||
This template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use-cases.
|
||||
|
||||
It passes both a conversation history and retrieved documents into an LLM for synthesis.
|
||||
|
||||
## Environment Setup
|
||||
|
||||
This template uses Timescale Vector as a vectorstore and requires that the `TIMESCALES_SERVICE_URL` environment variable is set. Sign up for a 90-day trial [here](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) if you don't yet have an account.
|
||||
|
||||
To load the sample dataset, set `LOAD_SAMPLE_DATA=1`. To load your own dataset see the section below.
|
||||
|
||||
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
|
||||
|
||||
## Usage
|
||||
|
||||
To use this package, you should first have the LangChain CLI installed:
|
||||
|
||||
```shell
|
||||
pip install -U "langchain-cli[serve]"
|
||||
```
|
||||
|
||||
To create a new LangChain project and install this as the only package, you can do:
|
||||
|
||||
```shell
|
||||
langchain app new my-app --package rag-timescale-conversation
|
||||
```
|
||||
|
||||
If you want to add this to an existing project, you can just run:
|
||||
|
||||
```shell
|
||||
langchain app add rag-timescale-conversation
|
||||
```
|
||||
|
||||
And add the following code to your `server.py` file:
|
||||
```python
|
||||
from rag_timescale_conversation import chain as rag_timescale_conversation_chain
|
||||
|
||||
add_routes(app, rag_timescale_conversation_chain, path="/rag-timescale_conversation")
|
||||
```
|
||||
|
||||
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor, and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
|
||||
|
||||
```shell
|
||||
export LANGCHAIN_TRACING_V2=true
|
||||
export LANGCHAIN_API_KEY=<your-api-key>
|
||||
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
|
||||
```
|
||||
|
||||
If you are inside this directory, then you can spin up a LangServe instance directly by:
|
||||
|
||||
```shell
|
||||
langchain serve
|
||||
```
|
||||
|
||||
This will start the FastAPI app with a server running locally at
|
||||
[http://localhost:8000](http://localhost:8000)
|
||||
|
||||
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
|
||||
We can access the playground at [http://127.0.0.1:8000/rag-timescale-conversation/playground](http://127.0.0.1:8000/rag-timescale-conversation/playground)
|
||||
|
||||
We can access the template from code with:
|
||||
|
||||
```python
|
||||
from langserve.client import RemoteRunnable
|
||||
|
||||
runnable = RemoteRunnable("http://localhost:8000/rag-timescale-conversation")
|
||||
```
|
||||
|
||||
See the `rag_conversation.ipynb` notebook for example usage.
|
||||
|
||||
## Loading your own dataset
|
||||
|
||||
To load your own dataset, you will have to create a `load_dataset` function. You can see an example in the
|
||||
`load_ts_git_dataset` function defined in the `load_sample_dataset.py` file. You can then run this as a
|
||||
standalone function (e.g. in a bash script) or add it to chain.py (but then you should run it just once).
|
||||
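A hedged sketch of what such a `load_dataset` function might look like (the loader, splitter, collection name, and the use of `TimescaleVector.from_documents` are assumptions for illustration; the template's own `load_ts_git_dataset` may differ):

```python
# Sketch only: ingest your own documents into Timescale Vector for this template.
import os

from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores.timescalevector import TimescaleVector


def load_dataset(path: str, collection_name: str = "my_collection") -> None:
    # Load and chunk the source documents, then write them to Timescale Vector.
    docs = TextLoader(path).load()
    splits = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
    TimescaleVector.from_documents(
        documents=splits,
        embedding=OpenAIEmbeddings(),
        collection_name=collection_name,
        service_url=os.environ["TIMESCALES_SERVICE_URL"],
    )
```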
1930
templates/rag-timescale-conversation/poetry.lock
generated
Normal file
1930
templates/rag-timescale-conversation/poetry.lock
generated
Normal file
File diff suppressed because it is too large
Load Diff
31
templates/rag-timescale-conversation/pyproject.toml
Normal file
31
templates/rag-timescale-conversation/pyproject.toml
Normal file
@@ -0,0 +1,31 @@
|
||||
[tool.poetry]
|
||||
name = "rag-timescale-conversation"
|
||||
version = "0.1.0"
|
||||
description = ""
|
||||
authors = [
|
||||
"Lance Martin <lance@langchain.dev>",
|
||||
]
|
||||
readme = "README.md"
|
||||
|
||||
[tool.poetry.dependencies]
|
||||
python = ">=3.8.1,<4.0"
|
||||
langchain = ">=0.0.325"
|
||||
openai = ">=0.28.1"
|
||||
tiktoken = ">=0.5.1"
|
||||
pinecone-client = ">=2.2.4"
|
||||
beautifulsoup4 = "^4.12.2"
|
||||
python-dotenv = "^1.0.0"
|
||||
timescale-vector = "^0.0.3"
|
||||
|
||||
[tool.poetry.group.dev.dependencies]
|
||||
langchain-cli = ">=0.0.15"
|
||||
|
||||
[tool.langserve]
|
||||
export_module = "rag_timescale_conversation"
|
||||
export_attr = "chain"
|
||||
|
||||
[build-system]
|
||||
requires = [
|
||||
"poetry-core",
|
||||
]
|
||||
build-backend = "poetry.core.masonry.api"
|
||||
238
templates/rag-timescale-conversation/rag_conversation.ipynb
Normal file
@@ -0,0 +1,238 @@
{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "424a9d8d",
   "metadata": {},
   "source": [
    "## Run Template\n",
    "\n",
    "In `server.py`, set -\n",
    "```\n",
    "add_routes(app, chain_rag_timescale_conv, path=\"/rag_timescale_conversation\")\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "5f521923",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langserve.client import RemoteRunnable\n",
    "\n",
    "rag_app = RemoteRunnable(\"http://0.0.0.0:8000/rag_timescale_conversation\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "563a58dd",
   "metadata": {},
   "source": [
    "First, set up the history"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "14541994",
   "metadata": {},
   "outputs": [],
   "source": [
    "question = \"My name is Sven Klemm\"\n",
    "answer = rag_app.invoke(\n",
    "    {\n",
    "        \"question\": question,\n",
    "        \"chat_history\": [],\n",
    "    }\n",
    ")\n",
    "chat_history = [(question, answer)]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "63e76c4d",
   "metadata": {},
   "source": [
    "Next, use the history for a question"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "b2d8f735",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'The person named Sven Klemm made the following commits:\\n\\n1. Commit \"a31c9b9f8cdfe8643499b710dc983e5c5d6457e4\" on \"Mon May 22 11:34:06 2023 +0200\" with the change summary \"Increase number of sqlsmith loops in nightly CI\". The change details are \"To improve coverage with sqlsmith we run it for longer in the scheduled nightly run.\"\\n\\n2. Commit \"e4ba2bcf560568ae68f3775c058f0a8d7f7c0501\" on \"Wed Nov 9 09:29:36 2022 +0100\" with the change summary \"Remove debian 9 from packages tests.\" The change details are \"Debian 9 is EOL since July 2022 so we won\\'t build packages for it anymore and can remove it from CI.\"'"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "\n",
    "answer = rag_app.invoke(\n",
    "    {\n",
    "        \"question\": \"What commits did the person with my name make?\",\n",
    "        \"chat_history\": chat_history,\n",
    "    }\n",
    ")\n",
    "answer"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "bd62df23",
   "metadata": {},
   "source": [
    "## Filter by time\n",
    "\n",
    "You can also use time filters. For example, the sample dataset doesn't include any commits before 2010, so this should return no matches."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "b0a598b7",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'The context does not provide any information about any commits made by a person named Sven Klemm.'"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "answer = rag_app.invoke(\n",
    "    {\n",
    "        \"question\": \"What commits did the person with my name make?\",\n",
    "        \"chat_history\": chat_history,\n",
    "        \"end_date\": \"2016-01-01 00:00:00\",\n",
    "    }\n",
    ")\n",
    "answer\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "25851869",
   "metadata": {},
   "source": [
    "However, there is data from 2022, which can be used"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "4aef5219",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'The person named Sven Klemm made the following commits:\\n\\n1. \"e4ba2bcf560568ae68f3775c058f0a8d7f7c0501\" with the change summary \"Remove debian 9 from packages tests.\" The details of this change are that \"Debian 9 is EOL since July 2022 so we won\\'t build packages for it anymore and can remove it from CI.\"\\n\\n2. \"2f237e6e57e5ac66c126233d66969a1f674ffaa4\" with the change summary \"Add Enterprise Linux 9 packages to RPM package test\". The change details for this commit are not provided.'"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "answer = rag_app.invoke(\n",
    "    {\n",
    "        \"question\": \"What commits did the person with my name make?\",\n",
    "        \"chat_history\": chat_history,\n",
    "        \"start_date\": \"2020-01-01 00:00:00\",\n",
    "        \"end_date\": \"2023-01-01 00:00:00\",\n",
    "    }\n",
    ")\n",
    "answer"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "6ad86fbd",
   "metadata": {},
   "source": [
    "## Filter by metadata\n",
    "\n",
    "You can also filter by metadata using this chain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "7ac9365f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'The person named Sven Klemm made a commit with the ID \"5cd2c038796fb302190b080c90e5acddbef4b8d1\". The change summary for this commit is \"Simplify windows-build-and-test-ignored.yaml\" and the change details are \"Remove code not needed for the skip workflow of the windows test.\" The commit was made on \"Sat Mar 4 10:18:34 2023 +0100\".'"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "answer = rag_app.invoke(\n",
    "    {\n",
    "        \"question\": \"What commits did the person with my name make?\",\n",
    "        \"chat_history\": chat_history,\n",
    "        \"metadata_filter\": {\"commit_hash\": \" 5cd2c038796fb302190b080c90e5acddbef4b8d1\"},\n",
    "    }\n",
    ")\n",
    "answer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1cde5da5",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@@ -0,0 +1,3 @@
from rag_timescale_conversation.chain import chain

__all__ = ["chain"]
@@ -0,0 +1,164 @@
import os
from datetime import datetime, timedelta
from operator import itemgetter
from typing import List, Optional, Tuple

from dotenv import find_dotenv, load_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import AIMessage, HumanMessage, format_document
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import (
    RunnableBranch,
    RunnableLambda,
    RunnableMap,
    RunnablePassthrough,
)
from langchain.vectorstores.timescalevector import TimescaleVector
from pydantic import BaseModel, Field

from .load_sample_dataset import load_ts_git_dataset

load_dotenv(find_dotenv())

if os.environ.get("TIMESCALE_SERVICE_URL", None) is None:
    raise Exception("Missing `TIMESCALE_SERVICE_URL` environment variable.")

SERVICE_URL = os.environ["TIMESCALE_SERVICE_URL"]
LOAD_SAMPLE_DATA = os.environ.get("LOAD_SAMPLE_DATA", False)
COLLECTION_NAME = os.environ.get("COLLECTION_NAME", "timescale_commits")
OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "gpt-4")

partition_interval = timedelta(days=7)
if LOAD_SAMPLE_DATA:
    load_ts_git_dataset(
        SERVICE_URL,
        collection_name=COLLECTION_NAME,
        num_records=500,
        partition_interval=partition_interval,
    )

embeddings = OpenAIEmbeddings()
vectorstore = TimescaleVector(
    embedding=embeddings,
    collection_name=COLLECTION_NAME,
    service_url=SERVICE_URL,
    time_partition_interval=partition_interval,
)
retriever = vectorstore.as_retriever()

# Condense a chat history and follow-up question into a standalone question
_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""  # noqa: E501
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)

# RAG answer synthesis prompt
template = """Answer the question based only on the following context:
<context>
{context}
</context>"""
ANSWER_PROMPT = ChatPromptTemplate.from_messages(
    [
        ("system", template),
        MessagesPlaceholder(variable_name="chat_history"),
        ("user", "{question}"),
    ]
)

# Conversational Retrieval Chain
DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")


def _combine_documents(
    docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"
):
    doc_strings = [format_document(doc, document_prompt) for doc in docs]
    return document_separator.join(doc_strings)


def _format_chat_history(chat_history: List[Tuple[str, str]]) -> List:
    buffer = []
    for human, ai in chat_history:
        buffer.append(HumanMessage(content=human))
        buffer.append(AIMessage(content=ai))
    return buffer


# User input
class ChatHistory(BaseModel):
    chat_history: List[Tuple[str, str]] = Field(..., extra={"widget": {"type": "chat"}})
    question: str
    start_date: Optional[datetime]
    end_date: Optional[datetime]
    metadata_filter: Optional[dict]


_search_query = RunnableBranch(
    # If input includes chat_history, we condense it with the follow-up question
    (
        RunnableLambda(lambda x: bool(x.get("chat_history"))).with_config(
            run_name="HasChatHistoryCheck"
        ),  # Condense follow-up question and chat into a standalone_question
        RunnablePassthrough.assign(
            retriever_query=RunnablePassthrough.assign(
                chat_history=lambda x: _format_chat_history(x["chat_history"])
            )
            | CONDENSE_QUESTION_PROMPT
            | ChatOpenAI(temperature=0, model=OPENAI_MODEL)
            | StrOutputParser()
        ),
    ),
    # Else, we have no chat history, so just pass through the question
    RunnablePassthrough.assign(retriever_query=lambda x: x["question"]),
)


def get_retriever_with_metadata(x):
    start_dt = x.get("start_date", None)
    end_dt = x.get("end_date", None)
    metadata_filter = x.get("metadata_filter", None)
    opt = {}

    if start_dt is not None:
        opt["start_date"] = start_dt
    if end_dt is not None:
        opt["end_date"] = end_dt
    if metadata_filter is not None:
        opt["filter"] = metadata_filter
    v = vectorstore.as_retriever(search_kwargs=opt)
    return RunnableLambda(itemgetter("retriever_query")) | v


_retriever = RunnableLambda(get_retriever_with_metadata)

_inputs = RunnableMap(
    {
        "question": lambda x: x["question"],
        "chat_history": lambda x: _format_chat_history(x["chat_history"]),
        "start_date": lambda x: x.get("start_date", None),
        "end_date": lambda x: x.get("end_date", None),
        "context": _search_query | _retriever | _combine_documents,
    }
)

_datetime_to_string = RunnablePassthrough.assign(
    start_date=lambda x: x.get("start_date", None).isoformat()
    if x.get("start_date", None) is not None
    else None,
    end_date=lambda x: x.get("end_date", None).isoformat()
    if x.get("end_date", None) is not None
    else None,
).with_types(input_type=ChatHistory)

chain = (
    _datetime_to_string
    | _inputs
    | ANSWER_PROMPT
    | ChatOpenAI(model=OPENAI_MODEL)
    | StrOutputParser()
)
@@ -0,0 +1,84 @@
import os
import tempfile
from datetime import datetime, timedelta

import requests
from langchain.document_loaders import JSONLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.timescalevector import TimescaleVector
from timescale_vector import client


def parse_date(date_string: str) -> datetime:
    if date_string is None:
        return None
    time_format = "%a %b %d %H:%M:%S %Y %z"
    return datetime.strptime(date_string, time_format)


def extract_metadata(record: dict, metadata: dict) -> dict:
    dt = parse_date(record["date"])
    metadata["id"] = str(client.uuid_from_time(dt))
    if dt is not None:
        metadata["date"] = dt.isoformat()
    else:
        metadata["date"] = None
    metadata["author"] = record["author"]
    metadata["commit_hash"] = record["commit"]
    return metadata


def load_ts_git_dataset(
    service_url,
    collection_name="timescale_commits",
    num_records: int = 500,
    partition_interval=timedelta(days=7),
):
    json_url = "https://s3.amazonaws.com/assets.timescale.com/ai/ts_git_log.json"
    tmp_file = "ts_git_log.json"

    temp_dir = tempfile.gettempdir()
    json_file_path = os.path.join(temp_dir, tmp_file)

    if not os.path.exists(json_file_path):
        response = requests.get(json_url)
        if response.status_code == 200:
            with open(json_file_path, "w") as json_file:
                json_file.write(response.text)
        else:
            print(f"Failed to download JSON file. Status code: {response.status_code}")

    loader = JSONLoader(
        file_path=json_file_path,
        jq_schema=".commit_history[]",
        text_content=False,
        metadata_func=extract_metadata,
    )

    documents = loader.load()

    # Remove documents with None dates
    documents = [doc for doc in documents if doc.metadata["date"] is not None]

    if num_records > 0:
        documents = documents[:num_records]

    # Split the documents into chunks for embedding
    text_splitter = CharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=200,
    )
    docs = text_splitter.split_documents(documents)

    embeddings = OpenAIEmbeddings()

    # Create a Timescale Vector instance from the collection of documents
    TimescaleVector.from_documents(
        embedding=embeddings,
        ids=[doc.metadata["id"] for doc in docs],
        documents=docs,
        collection_name=collection_name,
        service_url=service_url,
        time_partition_interval=partition_interval,
    )