langchain/docs/versioned_docs/version-0.2.x/how_to/query_high_cardinality.ipynb
Harrison Chase 66b2ac62eb cr
2024-04-24 17:07:56 -07:00


{
"cells": [
{
"cell_type": "raw",
"id": "df7d42b9-58a6-434c-a2d7-0b61142f6d3e",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 7\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f2195672-0cab-4967-ba8a-c6544635547d",
"metadata": {},
"source": [
"# How deal with high cardinality categoricals when doing query analysis\n",
"\n",
"You may want to do query analysis to create a filter on a categorical column. One of the difficulties here is that you usually need to specify the EXACT categorical value. The issue is you need to make sure the LLM generates that categorical value exactly. This can be done relatively easy with prompting when there are only a few values that are valid. When there are a high number of valid values then it becomes more difficult, as those values may not fit in the LLM context, or (if they do) there may be too many for the LLM to properly attend to.\n",
"\n",
"In this notebook we take a look at how to approach this."
]
},
{
"cell_type": "markdown",
"id": "a4079b57-4369-49c9-b2ad-c809b5408d7e",
"metadata": {},
"source": [
"## Setup\n",
"#### Install dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e168ef5c-e54e-49a6-8552-5502854a6f01",
"metadata": {},
"outputs": [],
"source": [
"# %pip install -qU langchain langchain-community langchain-openai faker langchain-chroma"
]
},
{
"cell_type": "markdown",
"id": "79d66a45-a05c-4d22-b011-b1cdbdfc8f9c",
"metadata": {},
"source": [
"#### Set environment variables\n",
"\n",
"We'll use OpenAI in this example:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "40e2979e-a818-4b96-ac25-039336f94319",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "d8d47f4b",
"metadata": {},
"source": [
"#### Set up data\n",
"\n",
"We will generate a bunch of fake names"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e5ba65c2",
"metadata": {},
"outputs": [],
"source": [
"from faker import Faker\n",
"\n",
"fake = Faker()\n",
"\n",
"names = [fake.name() for _ in range(10000)]"
]
},
{
"cell_type": "markdown",
"id": "41133694",
"metadata": {},
"source": [
"Let's look at some of the names"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c901ea97",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hayley Gonzalez'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"names[0]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "b0d42ae2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Jesse Knight'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"names[567]"
]
},
{
"cell_type": "markdown",
"id": "1725883d",
"metadata": {},
"source": [
"## Query Analysis\n",
"\n",
"We can now set up a baseline query analysis"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "0ae69afc",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6c9485ce",
"metadata": {},
"outputs": [],
"source": [
"class Search(BaseModel):\n",
" query: str\n",
" author: str"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "aebd704a",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/workplace/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.\n",
" warn_beta(\n"
]
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"system = \"\"\"Generate a relevant search query for a library system\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Search)\n",
"query_analyzer = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
]
},
{
"cell_type": "markdown",
"id": "41709a2e",
"metadata": {},
"source": [
"We can see that if we spell the name exactly correctly, it knows how to handle it"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "cc0d344b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='books about aliens', author='Jesse Knight')"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer.invoke(\"what are books about aliens by Jesse Knight\")"
]
},
{
"cell_type": "markdown",
"id": "a1b57eab",
"metadata": {},
"source": [
"The issue is that the values you want to filter on may NOT be spelled exactly correctly"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "82b6b2ad",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='books about aliens', author='Jess Knight')"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer.invoke(\"what are books about aliens by jess knight\")"
]
},
{
"cell_type": "markdown",
"id": "0b60b7c2",
"metadata": {},
"source": [
"### Add in all values\n",
"\n",
"One way around this is to add ALL possible values to the prompt. That will generally guide the query in the right direction"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "98788a94",
"metadata": {},
"outputs": [],
"source": [
"system = \"\"\"Generate a relevant search query for a library system.\n",
"\n",
"`author` attribute MUST be one of:\n",
"\n",
"{authors}\n",
"\n",
"Do NOT hallucinate author name!\"\"\"\n",
"base_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"prompt = base_prompt.partial(authors=\", \".join(names))"
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "e65412f5",
"metadata": {},
"outputs": [],
"source": [
"query_analyzer_all = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
]
},
{
"cell_type": "markdown",
"id": "e639285a",
"metadata": {},
"source": [
"However... if the list of categoricals is long enough, it may error!"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "696b000f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Error code: 400 - {'error': {'message': \"This model's maximum context length is 16385 tokens. However, your messages resulted in 33885 tokens (33855 in the messages, 30 in the functions). Please reduce the length of the messages or functions.\", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}\n"
]
}
],
"source": [
"try:\n",
" res = query_analyzer_all.invoke(\"what are books about aliens by jess knight\")\n",
"except Exception as e:\n",
" print(e)"
]
},
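{
"cell_type": "markdown",
"id": "3b7f1a2c",
"metadata": {},
"source": [
"One way to see this coming is to count tokens before calling the model. The sketch below assumes the `tiktoken` package is installed; exact counts vary slightly by model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9c4e8d21",
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"\n",
"# How many tokens does the comma-separated author list alone consume?\n",
"enc = tiktoken.encoding_for_model(\"gpt-3.5-turbo\")\n",
"len(enc.encode(\", \".join(names)))"
]
},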
{
"cell_type": "markdown",
"id": "1d5d7891",
"metadata": {},
"source": [
"We can try to use a longer context window... but with so much information in there, it is not garunteed to pick it up reliably"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "0f0d0757",
"metadata": {},
"outputs": [],
"source": [
"llm_long = ChatOpenAI(model=\"gpt-4-turbo-preview\", temperature=0)\n",
"structured_llm_long = llm_long.with_structured_output(Search)\n",
"query_analyzer_all = {\"question\": RunnablePassthrough()} | prompt | structured_llm_long"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "03e5b7b2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='aliens', author='Kevin Knight')"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer_all.invoke(\"what are books about aliens by jess knight\")"
]
},
{
"cell_type": "markdown",
"id": "73ecf52b",
"metadata": {},
"source": [
"### Find and all relevant values\n",
"\n",
"Instead, what we can do is create an index over the relevant values and then query that for the N most relevant values,"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "32b19e07",
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n",
"vectorstore = Chroma.from_texts(names, embeddings, collection_name=\"author_names\")"
]
},
{
"cell_type": "code",
"execution_count": 51,
"id": "774cb7b0",
"metadata": {},
"outputs": [],
"source": [
"def select_names(question):\n",
" _docs = vectorstore.similarity_search(question, k=10)\n",
" _names = [d.page_content for d in _docs]\n",
" return \", \".join(_names)"
]
},
{
"cell_type": "code",
"execution_count": 52,
"id": "1173159c",
"metadata": {},
"outputs": [],
"source": [
"create_prompt = {\n",
" \"question\": RunnablePassthrough(),\n",
" \"authors\": select_names,\n",
"} | base_prompt"
]
},
{
"cell_type": "code",
"execution_count": 53,
"id": "0a892607",
"metadata": {},
"outputs": [],
"source": [
"query_analyzer_select = create_prompt | structured_llm"
]
},
{
"cell_type": "code",
"execution_count": 54,
"id": "8195d7cd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[SystemMessage(content='Generate a relevant search query for a library system.\\n\\n`author` attribute MUST be one of:\\n\\nJesse Knight, Kelly Knight, Scott Knight, Richard Knight, Andrew Knight, Katherine Knight, Erica Knight, Ashley Knight, Becky Knight, Kevin Knight\\n\\nDo NOT hallucinate author name!'), HumanMessage(content='what are books by jess knight')])"
]
},
"execution_count": 54,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"create_prompt.invoke(\"what are books by jess knight\")"
]
},
{
"cell_type": "code",
"execution_count": 55,
"id": "d3228b4e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='books about aliens', author='Jesse Knight')"
]
},
"execution_count": 55,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer_select.invoke(\"what are books about aliens by jess knight\")"
]
},
{
"cell_type": "markdown",
"id": "46ef88bb",
"metadata": {},
"source": [
"### Replace after selection\n",
"\n",
"Another method is to let the LLM fill in whatever value, but then convert that value to a valid value.\n",
"This can actually be done with the Pydantic class itself!"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "a2e8b434",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import validator\n",
"\n",
"\n",
"class Search(BaseModel):\n",
" query: str\n",
" author: str\n",
"\n",
" @validator(\"author\")\n",
" def double(cls, v: str) -> str:\n",
" return vectorstore.similarity_search(v, k=1)[0].page_content"
]
},
{
"cell_type": "code",
"execution_count": 48,
"id": "919c0601",
"metadata": {},
"outputs": [],
"source": [
"system = \"\"\"Generate a relevant search query for a library system\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"corrective_structure_llm = llm.with_structured_output(Search)\n",
"corrective_query_analyzer = (\n",
" {\"question\": RunnablePassthrough()} | prompt | corrective_structure_llm\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 50,
"id": "6c4f3e9a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='books about aliens', author='Jesse Knight')"
]
},
"execution_count": 50,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"corrective_query_analyzer.invoke(\"what are books about aliens by jes knight\")"
]
},
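{
"cell_type": "markdown",
"id": "5e2d9f41",
"metadata": {},
"source": [
"A non-embedding alternative is to snap the generated author to the closest valid name by character trigram overlap (Jaccard similarity over 3-character shingles). The sketch below is plain Python with no extra dependencies; a production setup might prefer a library such as `rapidfuzz`. Note that `trigrams`, `trigram_similarity`, and `closest_name` are illustrative helper names, not part of LangChain."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7a1c3b85",
"metadata": {},
"outputs": [],
"source": [
"def trigrams(s: str) -> set:\n",
"    # Pad so even very short strings produce trigrams\n",
"    s = f\"  {s.lower()} \"\n",
"    return {s[i : i + 3] for i in range(len(s) - 2)}\n",
"\n",
"\n",
"def trigram_similarity(a: str, b: str) -> float:\n",
"    ta, tb = trigrams(a), trigrams(b)\n",
"    return len(ta & tb) / len(ta | tb)\n",
"\n",
"\n",
"def closest_name(query: str, candidates: list) -> str:\n",
"    return max(candidates, key=lambda n: trigram_similarity(query, n))\n",
"\n",
"\n",
"closest_name(\"jes knight\", names)"
]
},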
{
"cell_type": "code",
"execution_count": null,
"id": "a309cb11",
"metadata": {},
"outputs": [],
"source": [
"# TODO: show trigram similarity"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}