{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Activeloop DeepLake's DeepMemory + LangChain + ragas"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Recently, RAGs (Retrieval-Augmented Generators) have surged in popularity. Advanced RAG techniques and agents are elevating the potential of what can be achieved with RAGs. However, a significant challenge remains: domain shift. If your Language Model (LLM) was pretrained on a dataset significantly different from your production dataset's distribution, you might face reduced recall. This can hinder the performance of RAGs in production settings. Traditionally, this issue can be addressed by either incorporating a new dataset during the LLM's pretraining phase or through extensive fine-tuning. Both approaches come with drawbacks. Adding data during the pretraining phase can be expensive, and fine-tuning might degrade the model's overall performance, as it could lose previously learned information. [Activeloop](https://activeloop.ai/) has tackled this challenge by introducing an additional layer to the LLM. This layer transforms corpus data from the embedding space to a latent space better aligned with user queries. This is achieved with [Deep Memory](https://www.activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/), a feature available to Activeloop Deep Lake users, where you can train a deep memory network that learns to align user questions with the model's embeddings.\n"
]
},
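{
"cell_type": "markdown",
"metadata": {},
"source": [
"Conceptually, you can picture Deep Memory as a small learned transformation applied to your stored corpus embeddings so that they score higher against real user queries. The sketch below is purely illustrative and is **not** Activeloop's actual architecture, which is trained and served inside Deep Lake (it assumes `numpy`, which ships with the other dependencies):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only: the real Deep Memory model is trained by Activeloop's service.\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"dim = 1536  # e.g. the size of OpenAI's ada-002 embeddings\n",
"W = rng.normal(size=(dim, dim)) / np.sqrt(dim)  # stand-in for a learned alignment layer\n",
"\n",
"corpus_embedding = rng.normal(size=dim)  # a stored corpus vector\n",
"aligned = W @ corpus_embedding  # mapped into a space better aligned with user queries\n",
"print(aligned.shape)"
]
},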
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For this tutorial we will parse deeplake documentation, and create a RAG system that could answer the question from the docs. \n",
"\n",
"The tutorial can be divided into several parts:\n",
"1. [Dataset creation and uploading](#1-dataset-creation)\n",
"2. [Generating synthetic queries and training deep_memory](#2-generating-synthetic-queries-and-training-deep_memory)\n",
"3. [Evaluating deep memory performance](#3-evaluating-deep-memory-performance)\n",
" - 3.1 [using deepmemory recall@10 metric](#31-using-deepmemory-recall10-metric)\n",
" - 3.2 [using ragas](#32-deepmemory--ragas)\n",
" - 3.3 [deep_memory inference](#33-deepmemory-inference)\n",
" - 3.4 [deep_memory cost savings](#34-cost-savings)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"dataset-creation\"></a>\n",
"## 1. Dataset Creation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will parse activeloop's docs for this tutorial using `BeautifulSoup` library and LangChain's document parsers like `Html2TextTransformer`, `AsyncHtmlLoader`. So we will need to install the following libraries:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install tiktoken openai python-dotenv datasets langchain deeplake beautifulsoup4 html2text ragas"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Also you'll need to create a [Activeloop]((https://activeloop.ai/)) account."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"\n",
"from langchain.vectorstores.deeplake import DeepLake\n",
"\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.chains import RetrievalQA\n",
"from langchain.llms import OpenAIChat\n",
"\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OpenAI API token: \")\n",
"# # activeloop token is needed if you are not signed in using CLI: `activeloop login -u <USERNAME> -p <PASSWORD>`\n",
"os.environ[\"ACTIVELOOP_TOKEN\"] = getpass.getpass(\"Enter your ActiveLoop API token: \") # Get your API token from https://app.activeloop.ai, click on your profile picture in the top right corner, and select \"API Tokens\"\n",
"\n",
"token = os.getenv(\"ACTIVELOOP_TOKEN\")\n",
"openai_embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"db = DeepLake(\n",
" dataset_path=f\"hub://{ORG_ID}/deeplake-docs-deepmemory\", # org_id stands for your username or organization from activeloop\n",
" embedding=openai_embeddings,\n",
" runtime={\"tensor_db\": True},\n",
" token=token,\n",
" # overwrite=True, # user overwrite flag if you want to overwrite the full dataset\n",
" read_only=False,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"parsing all links in the webpage using `BeautifulSoup`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from urllib.parse import urljoin\n",
"\n",
"def get_all_links(url):\n",
" response = requests.get(url)\n",
" if response.status_code != 200:\n",
" print(f'Failed to retrieve the page: {url}')\n",
" return []\n",
"\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" \n",
" # Finding all 'a' tags which typically contain href attribute for links\n",
" links = [urljoin(url, a['href']) for a in soup.find_all('a', href=True) if a['href']]\n",
"\n",
" return links\n",
"\n",
"base_url = \"https://docs.deeplake.ai/en/latest/\"\n",
"all_links = get_all_links(base_url)"
]
},
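{
"cell_type": "markdown",
"metadata": {},
"source": [
"Documentation pages link to each other repeatedly, so `all_links` may contain duplicates. An optional deduplication step (not part of the original flow) avoids downloading the same page twice:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# optional: keep each link only once before downloading\n",
"all_links = sorted(set(all_links))\n",
"print(f\"{len(all_links)} unique links collected\")"
]
},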
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Loading data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import AsyncHtmlLoader\n",
"\n",
"loader = AsyncHtmlLoader(all_links)\n",
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Converting data into user readable format:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_transformers import Html2TextTransformer\n",
"\n",
"html2text = Html2TextTransformer()\n",
"docs_transformed = html2text.transform_documents(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let us chunk further the documents as some of the contain too much text:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"\n",
"chunk_size = 4096\n",
"docs_new = []\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(\n",
" chunk_size = chunk_size,\n",
")\n",
"\n",
"for doc in docs_transformed:\n",
" if len(doc.page_content) < chunk_size:\n",
" docs_new.append(doc)\n",
" else:\n",
" docs = text_splitter.create_documents([doc.page_content])\n",
" docs_new.extend(docs)"
]
},
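{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick optional sanity check, we can see how many chunks we produced and how long the longest one is:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(f\"{len(docs_new)} chunks\")\n",
"print(f\"longest chunk: {max(len(d.page_content) for d in docs_new)} characters\")"
]
},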
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Populating VectorStore:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"docs = db.add_documents(docs_new)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"training\"></a>\n",
"## 2. Generating synthetic queries and training deep_memory "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next step would be to train a deep_memory model that will align your users queries with the dataset that you already have. If you don't have any user queries yet, no worries, we will generate them using LLM!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here above we showed the overall schema how deep_memory works. So as you can see, in order to train it you need relevence, queries together with corpus data (data that we want to query). Corpus data was already populated in the previous section, here we will be generating questions and relevance. \n",
"\n",
"1. `questions` - is a text of strings, where each string represents a query\n",
"2. `relevence` - contains links to the ground truth for each question. There might be several docs that contain answer to the given question. Because of this relevenve is `List[List[tuple[str, float]]]`, where outer list represents queries and inner list relevent documents. Tuple contains str, float pair where string represent the id of the source doc (corresponds to the `id` tensor in the dataset), while float corresponds to how much current document is related to the question. "
]
},
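{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, a single training pair might look like this (both the questions and the ids below are made up for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# purely illustrative: two queries, each pointing at one fully relevant doc\n",
"example_questions = [\n",
"    \"How do I create a Deep Lake dataset?\",\n",
"    \"How can I visualize my data?\",\n",
"]\n",
"example_relevance = [\n",
"    [(\"f1b2...\", 1.0)],  # hypothetical id of the doc answering question 1\n",
"    [(\"9c4d...\", 1.0)],  # hypothetical id of the doc answering question 2\n",
"]"
]
},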
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let us generate synthetic questions and relevance:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional, List\n",
"\n",
"from langchain.chains.openai_functions import (\n",
" create_openai_fn_chain,\n",
" create_structured_output_chain,\n",
")\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate\n",
"from langchain.schema import HumanMessage, SystemMessage\n",
"\n",
"from pydantic import BaseModel, Field"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# fetch dataset docs and ids if they exist (optional you can also ingest)\n",
"docs = db.vectorstore.dataset.text.data(fetch_chunks=True, aslist=True)['value']\n",
"ids = db.vectorstore.dataset.id.data(fetch_chunks=True, aslist=True)['value']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# If we pass in a model explicitly, we need to make sure it supports the OpenAI function-calling API.\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
"\n",
"class Questions(BaseModel):\n",
" \"\"\"Identifying information about a person.\"\"\"\n",
" question: str = Field(..., description=\"Questions about text\")\n",
"\n",
"prompt_msgs = [\n",
" SystemMessage(\n",
" content=\"You are a world class expert for generating questions based on provided context. \\\n",
" You make sure the question can be answered by the text.\"\n",
" ),\n",
" HumanMessagePromptTemplate.from_template(\n",
" \"Use the given text to generate a question from the following input: {input}\"\n",
" ),\n",
" HumanMessage(content=\"Tips: Make sure to answer in the correct format\"),\n",
"]\n",
"prompt = ChatPromptTemplate(messages=prompt_msgs)\n",
"chain = create_structured_output_chain(Questions, llm, prompt, verbose=True)\n",
"\n",
"text = \"# Understanding Hallucinations and Bias ## **Introduction** In this lesson, we'll cover the concept of **hallucinations** in LLMs, highlighting their influence on AI applications and demonstrating how to mitigate them using techniques like the retriever's architectures. We'll also explore **bias** within LLMs with examples.\"\n",
"questions = chain.run(input=text)\n",
"print(questions)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"from tqdm import tqdm\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"\n",
"def generate_queries(docs: List[str], ids: List[str], n: int=100):\n",
"\n",
" questions = []\n",
" relevances = []\n",
" pbar = tqdm(total=n)\n",
" while len(questions) < n:\n",
" # 1. randomly draw a piece of text and relevance id\n",
" r = random.randint(0, len(docs)-1)\n",
" text, label = docs[r], ids[r]\n",
"\n",
" # 2. generate queries and assign and relevance id\n",
" generated_qs = [chain.run(input=text).question]\n",
" questions.extend(generated_qs)\n",
" relevances.extend([[(label, 1)] for _ in generated_qs])\n",
" pbar.update(len(generated_qs))\n",
" if len(questions) % 10 == 0:\n",
" print(f\"q: {len(questions)}\")\n",
" return questions[:n], relevances[:n]\n",
"\n",
"chain = create_structured_output_chain(Questions, llm, prompt, verbose=False)\n",
"questions, relevances = generate_queries(docs, ids, n=200)\n",
"\n",
"train_questions, train_relevances = questions[:100], relevances[:100]\n",
"test_questions, test_relevances = questions[100:], relevances[100:]"
]
},
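{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, peek at one generated (question, relevance) pair to verify the format:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(train_questions[0])\n",
"print(train_relevances[0])"
]
},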
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we created 100 training queries as well as 100 queries for testing. Now let us train the deep_memory:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"job_id = db.vectorstore.deep_memory.train(\n",
" queries=train_questions,\n",
" relevance=train_relevances,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let us track the training progress:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"db.vectorstore.deep_memory.status('6538939ca0b69a9ca45c528c')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"evaluation\"></a>\n",
"## 3. Evaluating deep memory performance"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great we've trained the model! It's showing some substantial improvement in recall, but how can we use it now and evaluate on unseen new data? In this section we will delve into model evaluation and inference part and see how it can be used with LangChain in order to increase retrieval accuracy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"recall@10\"></a>\n",
"### 3.1 using deepmemory recall@10 metric"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the beginning we can use deep_memory's builtin evaluation method. it can be done easily in a few lines of code:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"recall = db.vectorstore.deep_memory.evaluate(\n",
" queries=test_questions,\n",
" relevance=test_relevances,\n",
")"
]
},
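{
"cell_type": "markdown",
"metadata": {},
"source": [
"The returned object can also be inspected directly (its exact structure may vary between deeplake versions, but it typically reports recall@k with and without deep_memory):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(recall)"
]
},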
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is showing quite substatntial improvement on an unseen test dataset too!!!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"ragas\"></a>\n",
"### 3.2 DeepMemory + ragas"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from ragas.metrics import (\n",
" context_recall,\n",
")\n",
"from ragas.langchain import RagasEvaluatorChain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let us convert recall into ground truths:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def convert_relevance_to_ground_truth(docs, relevance):\n",
" ground_truths = []\n",
" \n",
" for rel in relevance:\n",
" ground_truth = []\n",
" for doc_id, _ in rel:\n",
" ground_truth.append(docs[doc_id])\n",
" ground_truths.append(ground_truth)\n",
" return ground_truths"
]
},
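{
"cell_type": "markdown",
"metadata": {},
"source": [
"A tiny self-contained check of the converter, using made-up docs and ids:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"toy_docs = [\"doc about training\", \"doc about inference\"]\n",
"toy_ids = [\"id-0\", \"id-1\"]\n",
"toy_relevance = [[(\"id-1\", 1.0)]]\n",
"\n",
"assert convert_relevance_to_ground_truth(toy_docs, toy_ids, toy_relevance) == [[\"doc about inference\"]]"
]
},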
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ground_truths = convert_relevance_to_ground_truth(docs, test_relevances)\n",
"\n",
"for deep_memory in [False, True]:\n",
" print(\"\\nEvaluating with deep_memory =\", deep_memory)\n",
" print(\"===================================\")\n",
" \n",
" retriever = db.as_retriever()\n",
" retriever.search_kwargs[\"deep_memory\"] = deep_memory\n",
"\n",
" qa_chain = RetrievalQA.from_chain_type(\n",
" llm=OpenAIChat(model=\"gpt-3.5-turbo\"),\n",
" chain_type=\"stuff\",\n",
" retriever=retriever,\n",
" return_source_documents=True,\n",
" )\n",
" \n",
" metrics = {\n",
" \"context_recall_score\": 0,\n",
" }\n",
" \n",
" eval_chains = {\n",
" m.name: RagasEvaluatorChain(metric=m)\n",
" for m in [context_recall]\n",
" }\n",
" \n",
" for question, ground_truth in zip(test_questions, ground_truths):\n",
" result = qa_chain({\"query\": question})\n",
" result[\"ground_truths\"] = ground_truth\n",
" for name, eval_chain in eval_chains.items():\n",
" score_name = f\"{name}_score\"\n",
" metrics[score_name] += eval_chain(result)[score_name]\n",
" \n",
" for metric in metrics:\n",
" metrics[metric] /= len(test_questions)\n",
" print(f\"{metric}: {metrics[metric]}\")\n",
" print(\"===================================\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"inference\"></a>\n",
"### 3.3 DeepMemory Inference"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"with deep_memory"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"retriver = db.as_retriever()\n",
"retriver.search_kwargs[\"deep_memory\"] = True\n",
"retriver.search_kwargs[\"k\"] = 10\n",
"\n",
"query=\"Deamination of cytidine to uridine on the minus strand of viral DNA results in catastrophic G-to-A mutations in the viral genome.\"\n",
"qa = RetrievalQA.from_chain_type(llm=OpenAIChat(model='gpt-4'), chain_type='stuff', retriever=retriver)\n",
"print(qa.run(query))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"without deep_memory"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"retriver = db.as_retriever()\n",
"retriver.search_kwargs[\"deep_memory\"] = False\n",
"retriver.search_kwargs[\"k\"] = 10\n",
"\n",
"query=\"Deamination of cytidine to uridine on the minus strand of viral DNA results in catastrophic G-to-A mutations in the viral genome.\"\n",
"qa = RetrievalQA.from_chain_type(llm=OpenAIChat(model='gpt-4'), chain_type='stuff', retriever=retriver)\n",
"qa.run(query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"cost\"></a>\n",
"### 3.4 Cost savings"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Deep Memory increases retrieval accuracy without altering your existing workflow. Additionally, by reducing the top_k input into the LLM, you can significantly cut inference costs via lower token usage."
]
}
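,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough illustration of the savings (a sketch, not a benchmark), we can use `tiktoken` to count how many prompt tokens the retrieved context costs at `k=10` versus a reduced `k=3`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"\n",
"enc = tiktoken.encoding_for_model(\"gpt-3.5-turbo\")\n",
"\n",
"def context_tokens(k: int) -> int:\n",
"    # count the tokens of the context retrieved for `query` at a given k\n",
"    retriever = db.as_retriever()\n",
"    retriever.search_kwargs[\"deep_memory\"] = True\n",
"    retriever.search_kwargs[\"k\"] = k\n",
"    retrieved = retriever.get_relevant_documents(query)\n",
"    return sum(len(enc.encode(d.page_content)) for d in retrieved)\n",
"\n",
"for k in [10, 3]:\n",
"    print(f\"k={k}: ~{context_tokens(k)} context tokens per query\")"
]
}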
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}