mv popular and additional chains to use cases (#8242)

This commit is contained in:
Bagatur
2023-07-27 12:55:13 -07:00
committed by GitHub
parent ff98fad2d9
commit 68763bd25f
78 changed files with 959 additions and 1014 deletions


@@ -1,921 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "82f3f65d-fbcb-4e8e-b04b-959856283643",
"metadata": {},
"source": [
"# Causal program-aided language (CPAL) chain\n",
"\n",
"The CPAL chain builds on the recent PAL to stop LLM hallucination. The problem with the PAL approach is that it hallucinates on a math problem with a nested chain of dependence. The innovation here is that this new CPAL approach includes causal structure to fix hallucination.\n",
"\n",
"The original [PR's description](https://github.com/hwchase17/langchain/pull/6255) contains a full overview.\n",
"\n",
"Using the CPAL chain, the LLM translated this\n",
"\n",
" \"Tim buys the same number of pets as Cindy and Boris.\"\n",
" \"Cindy buys the same number of pets as Bill plus Bob.\"\n",
" \"Boris buys the same number of pets as Ben plus Beth.\"\n",
" \"Bill buys the same number of pets as Obama.\"\n",
" \"Bob buys the same number of pets as Obama.\"\n",
" \"Ben buys the same number of pets as Obama.\"\n",
" \"Beth buys the same number of pets as Obama.\"\n",
" \"If Obama buys one pet, how many pets total does everyone buy?\"\n",
"\n",
"\n",
"into this\n",
"\n",
"![complex-graph.png](/img/cpal_diagram.png).\n",
"\n",
"Outline of code examples demoed in this notebook.\n",
"\n",
"1. CPAL's value against hallucination: CPAL vs PAL \n",
" 1.1 Complex narrative \n",
" 1.2 Unanswerable math word problem \n",
"2. CPAL's three types of causal diagrams ([The Book of Why](https://en.wikipedia.org/wiki/The_Book_of_Why)). \n",
" 2.1 Mediator \n",
" 2.2 Collider \n",
" 2.3 Confounder "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1370e40f",
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import SVG\n",
"\n",
"from langchain.experimental.cpal.base import CPALChain\n",
"from langchain.chains import PALChain\n",
"from langchain import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0, max_tokens=512)\n",
"cpal_chain = CPALChain.from_univariate_prompt(llm=llm, verbose=True)\n",
"pal_chain = PALChain.from_math_prompt(llm=llm, verbose=True)"
]
},
{
"cell_type": "markdown",
"id": "858a87d9-a9bd-4850-9687-9af4b0856b62",
"metadata": {},
"source": [
"## CPAL's value against hallucination: CPAL vs PAL\n",
"\n",
"Like PAL, CPAL intends to reduce large language model (LLM) hallucination.\n",
"\n",
"The CPAL chain is different from the PAL chain for a couple of reasons.\n",
"\n",
"CPAL adds a causal structure (or DAG) to link entity actions (or math expressions).\n",
"The CPAL math expressions are modeling a chain of cause and effect relations, which can be intervened upon, whereas for the PAL chain math expressions are projected math identities.\n"
]
},
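{
"cell_type": "markdown",
"id": "7e3c8f21",
"metadata": {},
"source": [
"To make the distinction concrete, here is a rough sketch (not CPAL's internal representation; the tiny helper below is hypothetical) of a causal model as a topologically ordered set of expressions that supports interventions:\n",
"\n",
"```python\n",
"# A causal model as (name, function-of-upstream-values) pairs in topological order.\n",
"model = [\n",
"    (\"cindy\", lambda v: 4),                # exogenous: Cindy buys 4 pets\n",
"    (\"marcia\", lambda v: v[\"cindy\"] + 2),  # Marcia depends on Cindy\n",
"    (\"jan\", lambda v: v[\"marcia\"] * 3),    # Jan depends on Marcia\n",
"]\n",
"\n",
"def run(model, do=None):\n",
"    \"\"\"Evaluate the model; `do` overrides a node, cutting its incoming edges.\"\"\"\n",
"    values = {}\n",
"    for name, fn in model:\n",
"        values[name] = do[name] if do and name in do else fn(values)\n",
"    return values\n",
"\n",
"print(run(model))                     # observed outcome\n",
"print(run(model, do={\"marcia\": 10}))  # intervention: jan's value updates too\n",
"```\n",
"\n",
"A PAL-style program, by contrast, is a flat list of assignments with no notion of intervening on a variable after the fact."
]
},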
{
"cell_type": "markdown",
"id": "496403c5-d268-43ae-8852-2bd9903ce444",
"metadata": {},
"source": [
"### 1.1 Complex narrative\n",
"\n",
"Takeaway: PAL hallucinates, CPAL does not hallucinate."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d5dad768-2892-4825-8093-9b840f643a8a",
"metadata": {},
"outputs": [],
"source": [
"question = (\n",
" \"Tim buys the same number of pets as Cindy and Boris.\"\n",
" \"Cindy buys the same number of pets as Bill plus Bob.\"\n",
" \"Boris buys the same number of pets as Ben plus Beth.\"\n",
" \"Bill buys the same number of pets as Obama.\"\n",
" \"Bob buys the same number of pets as Obama.\"\n",
" \"Ben buys the same number of pets as Obama.\"\n",
" \"Beth buys the same number of pets as Obama.\"\n",
" \"If Obama buys one pet, how many pets total does everyone buy?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "bbffa7a0-3c22-4a1d-ab2d-f230973073b0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mdef solution():\n",
" \"\"\"Tim buys the same number of pets as Cindy and Boris.Cindy buys the same number of pets as Bill plus Bob.Boris buys the same number of pets as Ben plus Beth.Bill buys the same number of pets as Obama.Bob buys the same number of pets as Obama.Ben buys the same number of pets as Obama.Beth buys the same number of pets as Obama.If Obama buys one pet, how many pets total does everyone buy?\"\"\"\n",
" obama_pets = 1\n",
" tim_pets = obama_pets\n",
" cindy_pets = obama_pets + obama_pets\n",
" boris_pets = obama_pets + obama_pets\n",
" total_pets = tim_pets + cindy_pets + boris_pets\n",
" result = total_pets\n",
" return result\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'5'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pal_chain.run(question)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "35a70d1d-86f8-4abc-b818-fbd083f072e9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mstory outcome data\n",
" name code value depends_on\n",
"0 obama pass 1.0 []\n",
"1 bill bill.value = obama.value 1.0 [obama]\n",
"2 bob bob.value = obama.value 1.0 [obama]\n",
"3 ben ben.value = obama.value 1.0 [obama]\n",
"4 beth beth.value = obama.value 1.0 [obama]\n",
"5 cindy cindy.value = bill.value + bob.value 2.0 [bill, bob]\n",
"6 boris boris.value = ben.value + beth.value 2.0 [ben, beth]\n",
"7 tim tim.value = cindy.value + boris.value 4.0 [cindy, boris]\u001b[0m\n",
"\n",
"\u001b[36;1m\u001b[1;3mquery data\n",
"{\n",
" \"question\": \"how many pets total does everyone buy?\",\n",
" \"expression\": \"SELECT SUM(value) FROM df\",\n",
" \"llm_error_msg\": \"\"\n",
"}\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"13.0"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"cpal_chain.run(question)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ccb6b2b0-9de6-4f66-a8fb-fc59229ee316",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
"<svg xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" width=\"292pt\" height=\"260pt\" viewBox=\"0.00 0.00 292.00 260.00\">\n",
"<g id=\"graph0\" class=\"graph\" transform=\"scale(1 1) rotate(0) translate(4 256)\">\n",
"<polygon fill=\"white\" stroke=\"transparent\" points=\"-4,4 -4,-256 288,-256 288,4 -4,4\"/>\n",
"<!-- obama -->\n",
"<g id=\"node1\" class=\"node\">\n",
"<title>obama</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"137\" cy=\"-234\" rx=\"41.69\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"137\" y=\"-230.3\" font-family=\"Times,serif\" font-size=\"14.00\">obama</text>\n",
"</g>\n",
"<!-- bill -->\n",
"<g id=\"node2\" class=\"node\">\n",
"<title>bill</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"27\" cy=\"-162\" rx=\"27\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"27\" y=\"-158.3\" font-family=\"Times,serif\" font-size=\"14.00\">bill</text>\n",
"</g>\n",
"<!-- obama&#45;&gt;bill -->\n",
"<g id=\"edge1\" class=\"edge\">\n",
"<title>obama-&gt;bill</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M114.47,-218.67C97.08,-207.6 72.94,-192.23 54.42,-180.45\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"56.15,-177.4 45.84,-174.99 52.4,-183.31 56.15,-177.4\"/>\n",
"</g>\n",
"<!-- bob -->\n",
"<g id=\"node3\" class=\"node\">\n",
"<title>bob</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"100\" cy=\"-162\" rx=\"28\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"100\" y=\"-158.3\" font-family=\"Times,serif\" font-size=\"14.00\">bob</text>\n",
"</g>\n",
"<!-- obama&#45;&gt;bob -->\n",
"<g id=\"edge2\" class=\"edge\">\n",
"<title>obama-&gt;bob</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M128.04,-216.05C123.66,-207.77 118.3,-197.62 113.44,-188.42\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"116.39,-186.51 108.62,-179.31 110.2,-189.79 116.39,-186.51\"/>\n",
"</g>\n",
"<!-- ben -->\n",
"<g id=\"node4\" class=\"node\">\n",
"<title>ben</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"174\" cy=\"-162\" rx=\"28\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"174\" y=\"-158.3\" font-family=\"Times,serif\" font-size=\"14.00\">ben</text>\n",
"</g>\n",
"<!-- obama&#45;&gt;ben -->\n",
"<g id=\"edge3\" class=\"edge\">\n",
"<title>obama-&gt;ben</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M145.96,-216.05C150.34,-207.77 155.7,-197.62 160.56,-188.42\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"163.8,-189.79 165.38,-179.31 157.61,-186.51 163.8,-189.79\"/>\n",
"</g>\n",
"<!-- beth -->\n",
"<g id=\"node5\" class=\"node\">\n",
"<title>beth</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"252\" cy=\"-162\" rx=\"32\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"252\" y=\"-158.3\" font-family=\"Times,serif\" font-size=\"14.00\">beth</text>\n",
"</g>\n",
"<!-- obama&#45;&gt;beth -->\n",
"<g id=\"edge4\" class=\"edge\">\n",
"<title>obama-&gt;beth</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M160.27,-218.83C178.18,-207.94 203.04,-192.8 222.37,-181.04\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"224.36,-183.92 231.08,-175.73 220.72,-177.95 224.36,-183.92\"/>\n",
"</g>\n",
"<!-- cindy -->\n",
"<g id=\"node6\" class=\"node\">\n",
"<title>cindy</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"93\" cy=\"-90\" rx=\"36\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"93\" y=\"-86.3\" font-family=\"Times,serif\" font-size=\"14.00\">cindy</text>\n",
"</g>\n",
"<!-- bill&#45;&gt;cindy -->\n",
"<g id=\"edge5\" class=\"edge\">\n",
"<title>bill-&gt;cindy</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M41,-146.15C49.77,-136.85 61.25,-124.67 71.2,-114.12\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"73.79,-116.47 78.11,-106.8 68.7,-111.67 73.79,-116.47\"/>\n",
"</g>\n",
"<!-- bob&#45;&gt;cindy -->\n",
"<g id=\"edge6\" class=\"edge\">\n",
"<title>bob-&gt;cindy</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M98.27,-143.7C97.5,-135.98 96.57,-126.71 95.71,-118.11\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"99.19,-117.7 94.71,-108.1 92.22,-118.4 99.19,-117.7\"/>\n",
"</g>\n",
"<!-- boris -->\n",
"<g id=\"node7\" class=\"node\">\n",
"<title>boris</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"181\" cy=\"-90\" rx=\"34.5\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"181\" y=\"-86.3\" font-family=\"Times,serif\" font-size=\"14.00\">boris</text>\n",
"</g>\n",
"<!-- ben&#45;&gt;boris -->\n",
"<g id=\"edge7\" class=\"edge\">\n",
"<title>ben-&gt;boris</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M175.73,-143.7C176.5,-135.98 177.43,-126.71 178.29,-118.11\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"181.78,-118.4 179.29,-108.1 174.81,-117.7 181.78,-118.4\"/>\n",
"</g>\n",
"<!-- beth&#45;&gt;boris -->\n",
"<g id=\"edge8\" class=\"edge\">\n",
"<title>beth-&gt;boris</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M236.59,-145.81C227.01,-136.36 214.51,-124.04 203.8,-113.48\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"205.96,-110.69 196.38,-106.16 201.04,-115.67 205.96,-110.69\"/>\n",
"</g>\n",
"<!-- tim -->\n",
"<g id=\"node8\" class=\"node\">\n",
"<title>tim</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"137\" cy=\"-18\" rx=\"27\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"137\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\">tim</text>\n",
"</g>\n",
"<!-- cindy&#45;&gt;tim -->\n",
"<g id=\"edge9\" class=\"edge\">\n",
"<title>cindy-&gt;tim</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M103.43,-72.41C108.82,-63.83 115.51,-53.19 121.49,-43.67\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"124.59,-45.32 126.95,-34.99 118.66,-41.59 124.59,-45.32\"/>\n",
"</g>\n",
"<!-- boris&#45;&gt;tim -->\n",
"<g id=\"edge10\" class=\"edge\">\n",
"<title>boris-&gt;tim</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M170.79,-72.77C165.41,-64.19 158.68,-53.49 152.65,-43.9\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"155.43,-41.75 147.15,-35.15 149.51,-45.48 155.43,-41.75\"/>\n",
"</g>\n",
"</g>\n",
"</svg>"
],
"text/plain": [
"<IPython.core.display.SVG object>"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# wait 20 secs to see display\n",
"cpal_chain.draw(path=\"web.svg\")\n",
"SVG(\"web.svg\")"
]
},
{
"cell_type": "markdown",
"id": "1f6f345a-bb16-4e64-83c4-cbbc789a8325",
"metadata": {},
"source": [
"### Unanswerable math\n",
"\n",
"Takeaway: PAL hallucinates, where CPAL, rather than hallucinate, answers with _\"unanswerable, narrative question and plot are incoherent\"_"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "068afd79-fd41-4ec2-b4d0-c64140dc413f",
"metadata": {},
"outputs": [],
"source": [
"question = (\n",
" \"Jan has three times the number of pets as Marcia.\"\n",
" \"Marcia has two more pets than Cindy.\"\n",
" \"If Cindy has ten pets, how many pets does Barak have?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "02f77db2-72e8-46c2-90b3-5e37ca42f80d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mdef solution():\n",
" \"\"\"Jan has three times the number of pets as Marcia.Marcia has two more pets than Cindy.If Cindy has ten pets, how many pets does Barak have?\"\"\"\n",
" cindy_pets = 10\n",
" marcia_pets = cindy_pets + 2\n",
" jan_pets = marcia_pets * 3\n",
" result = jan_pets\n",
" return result\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'36'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pal_chain.run(question)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "925958de-e998-4ffa-8b2e-5a00ddae5026",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mstory outcome data\n",
" name code value depends_on\n",
"0 cindy pass 10.0 []\n",
"1 marcia marcia.value = cindy.value + 2 12.0 [cindy]\n",
"2 jan jan.value = marcia.value * 3 36.0 [marcia]\u001b[0m\n",
"\n",
"\u001b[36;1m\u001b[1;3mquery data\n",
"{\n",
" \"question\": \"how many pets does barak have?\",\n",
" \"expression\": \"SELECT name, value FROM df WHERE name = 'barak'\",\n",
" \"llm_error_msg\": \"\"\n",
"}\u001b[0m\n",
"\n",
"unanswerable, query and outcome are incoherent\n",
"\n",
"outcome:\n",
" name code value depends_on\n",
"0 cindy pass 10.0 []\n",
"1 marcia marcia.value = cindy.value + 2 12.0 [cindy]\n",
"2 jan jan.value = marcia.value * 3 36.0 [marcia]\n",
"query:\n",
"{'question': 'how many pets does barak have?', 'expression': \"SELECT name, value FROM df WHERE name = 'barak'\", 'llm_error_msg': ''}\n"
]
}
],
"source": [
"try:\n",
" cpal_chain.run(question)\n",
"except Exception as e_msg:\n",
" print(e_msg)"
]
},
{
"cell_type": "markdown",
"id": "095adc76",
"metadata": {},
"source": [
"### Basic math\n",
"\n",
"#### Causal mediator"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "3ecf03fa-8350-4c4e-8080-84a307ba6ad4",
"metadata": {},
"outputs": [],
"source": [
"question = (\n",
" \"Jan has three times the number of pets as Marcia. \"\n",
" \"Marcia has two more pets than Cindy. \"\n",
" \"If Cindy has four pets, how many total pets do the three have?\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "74e49c47-3eed-4abe-98b7-8e97bcd15944",
"metadata": {},
"source": [
"---\n",
"PAL"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "2e88395f-d014-4362-abb0-88f6800860bb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mdef solution():\n",
" \"\"\"Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?\"\"\"\n",
" cindy_pets = 4\n",
" marcia_pets = cindy_pets + 2\n",
" jan_pets = marcia_pets * 3\n",
" total_pets = cindy_pets + marcia_pets + jan_pets\n",
" result = total_pets\n",
" return result\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'28'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pal_chain.run(question)"
]
},
{
"cell_type": "markdown",
"id": "20ba6640-3d17-4b59-8101-aaba89d68cf4",
"metadata": {},
"source": [
"---\n",
"CPAL"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "312a0943-a482-4ed0-a064-1e7a72e9479b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mstory outcome data\n",
" name code value depends_on\n",
"0 cindy pass 4.0 []\n",
"1 marcia marcia.value = cindy.value + 2 6.0 [cindy]\n",
"2 jan jan.value = marcia.value * 3 18.0 [marcia]\u001b[0m\n",
"\n",
"\u001b[36;1m\u001b[1;3mquery data\n",
"{\n",
" \"question\": \"how many total pets do the three have?\",\n",
" \"expression\": \"SELECT SUM(value) FROM df\",\n",
" \"llm_error_msg\": \"\"\n",
"}\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"28.0"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"cpal_chain.run(question)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "4466b975-ae2b-4252-972b-b3182a089ade",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
"<svg xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" width=\"92pt\" height=\"188pt\" viewBox=\"0.00 0.00 92.49 188.00\">\n",
"<g id=\"graph0\" class=\"graph\" transform=\"scale(1 1) rotate(0) translate(4 184)\">\n",
"<polygon fill=\"white\" stroke=\"transparent\" points=\"-4,4 -4,-184 88.49,-184 88.49,4 -4,4\"/>\n",
"<!-- cindy -->\n",
"<g id=\"node1\" class=\"node\">\n",
"<title>cindy</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"42.25\" cy=\"-162\" rx=\"36\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"42.25\" y=\"-158.3\" font-family=\"Times,serif\" font-size=\"14.00\">cindy</text>\n",
"</g>\n",
"<!-- marcia -->\n",
"<g id=\"node2\" class=\"node\">\n",
"<title>marcia</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"42.25\" cy=\"-90\" rx=\"42.49\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"42.25\" y=\"-86.3\" font-family=\"Times,serif\" font-size=\"14.00\">marcia</text>\n",
"</g>\n",
"<!-- cindy&#45;&gt;marcia -->\n",
"<g id=\"edge1\" class=\"edge\">\n",
"<title>cindy-&gt;marcia</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M42.25,-143.7C42.25,-135.98 42.25,-126.71 42.25,-118.11\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"45.75,-118.1 42.25,-108.1 38.75,-118.1 45.75,-118.1\"/>\n",
"</g>\n",
"<!-- jan -->\n",
"<g id=\"node3\" class=\"node\">\n",
"<title>jan</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"42.25\" cy=\"-18\" rx=\"27\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"42.25\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\">jan</text>\n",
"</g>\n",
"<!-- marcia&#45;&gt;jan -->\n",
"<g id=\"edge2\" class=\"edge\">\n",
"<title>marcia-&gt;jan</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M42.25,-71.7C42.25,-63.98 42.25,-54.71 42.25,-46.11\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"45.75,-46.1 42.25,-36.1 38.75,-46.1 45.75,-46.1\"/>\n",
"</g>\n",
"</g>\n",
"</svg>"
],
"text/plain": [
"<IPython.core.display.SVG object>"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# wait 20 secs to see display\n",
"cpal_chain.draw(path=\"web.svg\")\n",
"SVG(\"web.svg\")"
]
},
{
"cell_type": "markdown",
"id": "29fa7b8a-75a3-4270-82a2-2c31939cd7e0",
"metadata": {},
"source": [
"### Causal collider"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "618eddac-f0ef-4ab5-90ed-72e880fdeba3",
"metadata": {},
"outputs": [],
"source": [
"question = (\n",
" \"Jan has the number of pets as Marcia plus the number of pets as Cindy. \"\n",
" \"Marcia has no pets. \"\n",
" \"If Cindy has four pets, how many total pets do the three have?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "a01563f3-7974-4de4-8bd9-0b7d710aa0d3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mstory outcome data\n",
" name code value depends_on\n",
"0 marcia pass 0.0 []\n",
"1 cindy pass 4.0 []\n",
"2 jan jan.value = marcia.value + cindy.value 4.0 [marcia, cindy]\u001b[0m\n",
"\n",
"\u001b[36;1m\u001b[1;3mquery data\n",
"{\n",
" \"question\": \"how many total pets do the three have?\",\n",
" \"expression\": \"SELECT SUM(value) FROM df\",\n",
" \"llm_error_msg\": \"\"\n",
"}\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"8.0"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"cpal_chain.run(question)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "0fbe7243-0522-4946-b9a2-6e21e7c49a42",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
"<svg xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" width=\"182pt\" height=\"116pt\" viewBox=\"0.00 0.00 182.00 116.00\">\n",
"<g id=\"graph0\" class=\"graph\" transform=\"scale(1 1) rotate(0) translate(4 112)\">\n",
"<polygon fill=\"white\" stroke=\"transparent\" points=\"-4,4 -4,-112 178,-112 178,4 -4,4\"/>\n",
"<!-- marcia -->\n",
"<g id=\"node1\" class=\"node\">\n",
"<title>marcia</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"42.25\" cy=\"-90\" rx=\"42.49\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"42.25\" y=\"-86.3\" font-family=\"Times,serif\" font-size=\"14.00\">marcia</text>\n",
"</g>\n",
"<!-- jan -->\n",
"<g id=\"node2\" class=\"node\">\n",
"<title>jan</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"90.25\" cy=\"-18\" rx=\"27\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"90.25\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\">jan</text>\n",
"</g>\n",
"<!-- marcia&#45;&gt;jan -->\n",
"<g id=\"edge1\" class=\"edge\">\n",
"<title>marcia-&gt;jan</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M53.62,-72.41C59.57,-63.74 66.95,-52.97 73.53,-43.38\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"76.51,-45.21 79.28,-34.99 70.74,-41.26 76.51,-45.21\"/>\n",
"</g>\n",
"<!-- cindy -->\n",
"<g id=\"node3\" class=\"node\">\n",
"<title>cindy</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"138.25\" cy=\"-90\" rx=\"36\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"138.25\" y=\"-86.3\" font-family=\"Times,serif\" font-size=\"14.00\">cindy</text>\n",
"</g>\n",
"<!-- cindy&#45;&gt;jan -->\n",
"<g id=\"edge2\" class=\"edge\">\n",
"<title>cindy-&gt;jan</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M127.11,-72.77C121.09,-63.98 113.54,-52.96 106.83,-43.19\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"109.53,-40.94 100.99,-34.67 103.75,-44.89 109.53,-40.94\"/>\n",
"</g>\n",
"</g>\n",
"</svg>"
],
"text/plain": [
"<IPython.core.display.SVG object>"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# wait 20 secs to see display\n",
"cpal_chain.draw(path=\"web.svg\")\n",
"SVG(\"web.svg\")"
]
},
{
"cell_type": "markdown",
"id": "d4082538-ec03-44f0-aac3-07e03aad7555",
"metadata": {},
"source": [
"### Causal confounder"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "83932c30-950b-435a-b328-7993ce8cc6bd",
"metadata": {},
"outputs": [],
"source": [
"question = (\n",
" \"Jan has the number of pets as Marcia plus the number of pets as Cindy. \"\n",
" \"Marcia has two more pets than Cindy. \"\n",
" \"If Cindy has four pets, how many total pets do the three have?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "570de307-7c6b-4fdc-80c3-4361daa8a629",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mstory outcome data\n",
" name code value depends_on\n",
"0 cindy pass 4.0 []\n",
"1 marcia marcia.value = cindy.value + 2 6.0 [cindy]\n",
"2 jan jan.value = cindy.value + marcia.value 10.0 [cindy, marcia]\u001b[0m\n",
"\n",
"\u001b[36;1m\u001b[1;3mquery data\n",
"{\n",
" \"question\": \"how many total pets do the three have?\",\n",
" \"expression\": \"SELECT SUM(value) FROM df\",\n",
" \"llm_error_msg\": \"\"\n",
"}\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"20.0"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"cpal_chain.run(question)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "00375615-6b6d-4357-bdb8-f64f682f7605",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
"<svg xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" width=\"121pt\" height=\"188pt\" viewBox=\"0.00 0.00 120.99 188.00\">\n",
"<g id=\"graph0\" class=\"graph\" transform=\"scale(1 1) rotate(0) translate(4 184)\">\n",
"<polygon fill=\"white\" stroke=\"transparent\" points=\"-4,4 -4,-184 116.99,-184 116.99,4 -4,4\"/>\n",
"<!-- cindy -->\n",
"<g id=\"node1\" class=\"node\">\n",
"<title>cindy</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"77.25\" cy=\"-162\" rx=\"36\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"77.25\" y=\"-158.3\" font-family=\"Times,serif\" font-size=\"14.00\">cindy</text>\n",
"</g>\n",
"<!-- marcia -->\n",
"<g id=\"node2\" class=\"node\">\n",
"<title>marcia</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"42.25\" cy=\"-90\" rx=\"42.49\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"42.25\" y=\"-86.3\" font-family=\"Times,serif\" font-size=\"14.00\">marcia</text>\n",
"</g>\n",
"<!-- cindy&#45;&gt;marcia -->\n",
"<g id=\"edge1\" class=\"edge\">\n",
"<title>cindy-&gt;marcia</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M68.95,-144.41C64.87,-136.25 59.86,-126.22 55.28,-117.07\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"58.33,-115.34 50.72,-107.96 52.07,-118.47 58.33,-115.34\"/>\n",
"</g>\n",
"<!-- jan -->\n",
"<g id=\"node3\" class=\"node\">\n",
"<title>jan</title>\n",
"<ellipse fill=\"none\" stroke=\"black\" cx=\"77.25\" cy=\"-18\" rx=\"27\" ry=\"18\"/>\n",
"<text text-anchor=\"middle\" x=\"77.25\" y=\"-14.3\" font-family=\"Times,serif\" font-size=\"14.00\">jan</text>\n",
"</g>\n",
"<!-- cindy&#45;&gt;jan -->\n",
"<g id=\"edge2\" class=\"edge\">\n",
"<title>cindy-&gt;jan</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M83.73,-144.1C87.32,-133.84 91.42,-120.36 93.25,-108 95.58,-92.17 95.58,-87.83 93.25,-72 91.95,-63.21 89.5,-53.86 86.91,-45.5\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"90.19,-44.29 83.73,-35.9 83.55,-46.49 90.19,-44.29\"/>\n",
"</g>\n",
"<!-- marcia&#45;&gt;jan -->\n",
"<g id=\"edge3\" class=\"edge\">\n",
"<title>marcia-&gt;jan</title>\n",
"<path fill=\"none\" stroke=\"black\" d=\"M50.72,-72.06C54.86,-63.77 59.94,-53.62 64.53,-44.42\"/>\n",
"<polygon fill=\"black\" stroke=\"black\" points=\"67.75,-45.82 69.09,-35.31 61.49,-42.69 67.75,-45.82\"/>\n",
"</g>\n",
"</g>\n",
"</svg>"
],
"text/plain": [
"<IPython.core.display.SVG object>"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# wait 20 secs to see display\n",
"cpal_chain.draw(path=\"web.svg\")\n",
"SVG(\"web.svg\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,206 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "dd7ec7af",
"metadata": {},
"source": [
"# Elasticsearch database\n",
"\n",
"Interact with Elasticsearch analytics database via Langchain. This chain builds search queries via the Elasticsearch DSL API (filters and aggregations).\n",
"\n",
"The Elasticsearch client must have permissions for index listing, mapping description and search queries.\n",
"\n",
"See [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html) for instructions on how to run Elasticsearch locally.\n",
"\n",
"Make sure to install the Elasticsearch Python client before:\n",
"\n",
"```sh\n",
"pip install elasticsearch\n",
"```"
]
},
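{
"cell_type": "markdown",
"id": "9f1c2d3e",
"metadata": {},
"source": [
"For reference, a hand-written query of the kind the chain generates combines filters and aggregations (the field names here are illustrative; once the client below is set up, such a query could be run with `db.search(index=\"customers\", body=body)`):\n",
"\n",
"```python\n",
"# A DSL query body with a term filter and a terms aggregation.\n",
"body = {\n",
"    \"size\": 0,\n",
"    \"query\": {\"bool\": {\"filter\": [{\"term\": {\"lastname.keyword\": \"Walters\"}}]}},\n",
"    \"aggs\": {\"by_firstname\": {\"terms\": {\"field\": \"firstname.keyword\"}}},\n",
"}\n",
"```"
]
},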
{
"cell_type": "code",
"execution_count": 11,
"id": "dd8eae75",
"metadata": {},
"outputs": [],
"source": [
"from elasticsearch import Elasticsearch\n",
"\n",
"from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain\n",
"from langchain.chat_models import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5cde03bc",
"metadata": {},
"outputs": [],
"source": [
"# Initialize Elasticsearch python client.\n",
"# See https://elasticsearch-py.readthedocs.io/en/v8.8.2/api.html#elasticsearch.Elasticsearch\n",
"ELASTIC_SEARCH_SERVER = \"https://elastic:pass@localhost:9200\"\n",
"db = Elasticsearch(ELASTIC_SEARCH_SERVER)"
]
},
{
"cell_type": "markdown",
"id": "74a41374",
"metadata": {},
"source": [
"Uncomment the next cell to initially populate your db."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "430ada0f",
"metadata": {},
"outputs": [],
"source": [
"# customers = [\n",
"# {\"firstname\": \"Jennifer\", \"lastname\": \"Walters\"},\n",
"# {\"firstname\": \"Monica\",\"lastname\":\"Rambeau\"},\n",
"# {\"firstname\": \"Carol\",\"lastname\":\"Danvers\"},\n",
"# {\"firstname\": \"Wanda\",\"lastname\":\"Maximoff\"},\n",
"# {\"firstname\": \"Jennifer\",\"lastname\":\"Takeda\"},\n",
"# ]\n",
"# for i, customer in enumerate(customers):\n",
"# db.create(index=\"customers\", document=customer, id=i)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "f36ae0d8",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(model_name=\"gpt-4\", temperature=0)\n",
"chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "b5d22d9d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ElasticsearchDatabaseChain chain...\u001b[0m\n",
"What are the first names of all the customers?\n",
"ESQuery:\u001b[32;1m\u001b[1;3m{'size': 10, 'query': {'match_all': {}}, '_source': ['firstname']}\u001b[0m\n",
"ESResult: \u001b[33;1m\u001b[1;3m{'took': 5, 'timed_out': False, '_shards': {'total': 1, 'successful': 1, 'skipped': 0, 'failed': 0}, 'hits': {'total': {'value': 6, 'relation': 'eq'}, 'max_score': 1.0, 'hits': [{'_index': 'customers', '_id': '0', '_score': 1.0, '_source': {'firstname': 'Jennifer'}}, {'_index': 'customers', '_id': '1', '_score': 1.0, '_source': {'firstname': 'Monica'}}, {'_index': 'customers', '_id': '2', '_score': 1.0, '_source': {'firstname': 'Carol'}}, {'_index': 'customers', '_id': '3', '_score': 1.0, '_source': {'firstname': 'Wanda'}}, {'_index': 'customers', '_id': '4', '_score': 1.0, '_source': {'firstname': 'Jennifer'}}, {'_index': 'customers', '_id': 'firstname', '_score': 1.0, '_source': {'firstname': 'Jennifer'}}]}}\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3mThe first names of all the customers are Jennifer, Monica, Carol, Wanda, and Jennifer.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The first names of all the customers are Jennifer, Monica, Carol, Wanda, and Jennifer.'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"question = \"What are the first names of all the customers?\"\n",
"chain.run(question)"
]
},
{
"cell_type": "markdown",
"id": "9b4bfada",
"metadata": {},
"source": [
"## Custom prompt\n",
"\n",
"For best results you'll likely need to customize the prompt."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "0a494f5b",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.elasticsearch_database.prompts import DEFAULT_DSL_TEMPLATE\n",
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"PROMPT_TEMPLATE = \"\"\"Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n",
"\n",
"Unless told to do not query for all the columns from a specific index, only ask for a the few relevant columns given the question.\n",
"\n",
"Pay attention to use only the column names that you can see in the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which index. Return the query as valid json.\n",
"\n",
"Use the following format:\n",
"\n",
"Question: Question here\n",
"ESQuery: Elasticsearch Query formatted as json\n",
"\"\"\"\n",
"\n",
"PROMPT = PromptTemplate.from_template(\n",
" PROMPT_TEMPLATE,\n",
")\n",
"chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, query_prompt=PROMPT)"
]
},
{
"cell_type": "markdown",
"id": "372b8f93",
"metadata": {},
"source": [
"## Adding example rows from each index\n",
"\n",
"Sometimes, the format of the data is not obvious and it is optimal to include a sample of rows from the indices in the prompt to allow the LLM to understand the data before providing a final query. Here we will use this feature to let the LLM know that artists are saved with their full names by providing ten rows from the index."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eef818de",
"metadata": {},
"outputs": [],
"source": [
"chain = ElasticsearchDatabaseChain.from_llm(\n",
" llm=ChatOpenAI(temperature=0),\n",
" database=db,\n",
" sample_documents_in_index_info=2, # 2 rows from each index will be included in the prompt as sample data\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,566 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6605e7f7",
"metadata": {},
"source": [
"# Extraction\n",
"\n",
"The extraction chain uses the OpenAI `functions` parameter to specify a schema to extract entities from a document. This helps us make sure that the model outputs exactly the schema of entities and properties that we want, with their appropriate types.\n",
"\n",
"The extraction chain is to be used when we want to extract several entities with their properties from the same passage (i.e. what people were mentioned in this passage?)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "34f04daf",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.4) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n",
" warnings.warn(\n"
]
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import create_extraction_chain, create_extraction_chain_pydantic\n",
"from langchain.prompts import ChatPromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a2648974",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-0613\")"
]
},
{
"cell_type": "markdown",
"id": "5ef034ce",
"metadata": {},
"source": [
"## Extracting entities"
]
},
{
"cell_type": "markdown",
"id": "78ff9df9",
"metadata": {},
"source": [
"To extract entities, we need to create a schema where we specify all the properties we want to find and the type we expect them to have. We can also specify which of these properties are required and which are optional."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4ac43eba",
"metadata": {},
"outputs": [],
"source": [
"schema = {\n",
" \"properties\": {\n",
" \"name\": {\"type\": \"string\"},\n",
" \"height\": {\"type\": \"integer\"},\n",
" \"hair_color\": {\"type\": \"string\"},\n",
" },\n",
" \"required\": [\"name\", \"height\"],\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "640bd005",
"metadata": {},
"outputs": [],
"source": [
"inp = \"\"\"\n",
"Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n",
" \"\"\""
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "64313214",
"metadata": {},
"outputs": [],
"source": [
"chain = create_extraction_chain(schema, llm)"
]
},
{
"cell_type": "markdown",
"id": "17c48adb",
"metadata": {},
"source": [
"As we can see, we extracted the required entities and their properties in the required format (it even calculated Claudia's height before returning!)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "cc5436ed",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'},\n",
" {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(inp)"
]
},
{
"cell_type": "markdown",
"id": "8d51fcdc",
"metadata": {},
"source": [
"## Several entity types"
]
},
{
"cell_type": "markdown",
"id": "5813affe",
"metadata": {},
"source": [
"Notice that we are using OpenAI functions under the hood and thus the model can only call one function per request (with one, unique schema)"
]
},
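{
"cell_type": "markdown",
"id": "3b9d0e5a",
"metadata": {},
"source": [
"Roughly speaking (this is an approximation of the payload, not the chain's exact internals), the schema is wrapped into a single function definition whose argument is an array of entities, which is why one call can return many entities but only one schema:\n",
"\n",
"```python\n",
"# Approximate shape of the function sent via the OpenAI `functions` parameter.\n",
"extraction_function = {\n",
"    \"name\": \"information_extraction\",\n",
"    \"description\": \"Extracts the relevant information from the passage.\",\n",
"    \"parameters\": {\n",
"        \"type\": \"object\",\n",
"        \"properties\": {\"info\": {\"type\": \"array\", \"items\": schema}},\n",
"        \"required\": [\"info\"],\n",
"    },\n",
"}\n",
"```"
]
},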
{
"cell_type": "markdown",
"id": "511b9838",
"metadata": {},
"source": [
"If we want to extract more than one entity type, we need to introduce a little hack - we will define our properties with an included entity type. \n",
"\n",
"Following we have an example where we also want to extract dog attributes from the passage. Notice the 'person_' and 'dog_' prefixes we use for each property; this tells the model which entity type the property refers to. In this way, the model can return properties from several entity types in one single call."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "cf243a26",
"metadata": {},
"outputs": [],
"source": [
"schema = {\n",
" \"properties\": {\n",
" \"person_name\": {\"type\": \"string\"},\n",
" \"person_height\": {\"type\": \"integer\"},\n",
" \"person_hair_color\": {\"type\": \"string\"},\n",
" \"dog_name\": {\"type\": \"string\"},\n",
" \"dog_breed\": {\"type\": \"string\"},\n",
" },\n",
" \"required\": [\"person_name\", \"person_height\"],\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "52841fb3",
"metadata": {},
"outputs": [],
"source": [
"inp = \"\"\"\n",
"Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n",
"Alex's dog Frosty is a labrador and likes to play hide and seek.\n",
" \"\"\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "93f904ab",
"metadata": {},
"outputs": [],
"source": [
"chain = create_extraction_chain(schema, llm)"
]
},
{
"cell_type": "markdown",
"id": "eb074f7b",
"metadata": {},
"source": [
"People attributes and dog attributes were correctly extracted from the text in the same call"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "db3e9e17",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'person_name': 'Alex',\n",
" 'person_height': 5,\n",
" 'person_hair_color': 'blonde',\n",
" 'dog_name': 'Frosty',\n",
" 'dog_breed': 'labrador'},\n",
" {'person_name': 'Claudia',\n",
" 'person_height': 6,\n",
" 'person_hair_color': 'brunette'}]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(inp)"
]
},
{
"cell_type": "markdown",
"id": "0273e0e2",
"metadata": {},
"source": [
"## Unrelated entities"
]
},
{
"cell_type": "markdown",
"id": "c07b3480",
"metadata": {},
"source": [
"What if our entities are unrelated? In that case, the model will return the unrelated entities in different dictionaries, allowing us to successfully extract several unrelated entity types in the same call."
]
},
{
"cell_type": "markdown",
"id": "01d98af0",
"metadata": {},
"source": [
"Notice that we use `required: []`: we need to allow the model to return **only** person attributes or **only** dog attributes for a single entity (person or dog)"
]
},
{
"cell_type": "code",
"execution_count": 48,
"id": "e584c993",
"metadata": {},
"outputs": [],
"source": [
"schema = {\n",
" \"properties\": {\n",
" \"person_name\": {\"type\": \"string\"},\n",
" \"person_height\": {\"type\": \"integer\"},\n",
" \"person_hair_color\": {\"type\": \"string\"},\n",
" \"dog_name\": {\"type\": \"string\"},\n",
" \"dog_breed\": {\"type\": \"string\"},\n",
" },\n",
" \"required\": [],\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 49,
"id": "ad6b105f",
"metadata": {},
"outputs": [],
"source": [
"inp = \"\"\"\n",
"Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n",
"\n",
"Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by.\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 50,
"id": "6bfe5a33",
"metadata": {},
"outputs": [],
"source": [
"chain = create_extraction_chain(schema, llm)"
]
},
{
"cell_type": "markdown",
"id": "24fe09af",
"metadata": {},
"source": [
"We have each entity in its own separate dictionary, with only the appropriate attributes being returned"
]
},
{
"cell_type": "code",
"execution_count": 51,
"id": "f6e1fd89",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'},\n",
" {'person_name': 'Claudia',\n",
" 'person_height': 6,\n",
" 'person_hair_color': 'brunette'},\n",
" {'dog_name': 'Willow', 'dog_breed': 'German Shepherd'},\n",
" {'dog_name': 'Milo', 'dog_breed': 'border collie'}]"
]
},
"execution_count": 51,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(inp)"
]
},
{
"cell_type": "markdown",
"id": "0ac466d1",
"metadata": {},
"source": [
"## Extra info for an entity"
]
},
{
"cell_type": "markdown",
"id": "d240ffc1",
"metadata": {},
"source": [
"What if.. _we don't know what we want?_ More specifically, say we know a few properties we want to extract for a given entity but we also want to know if there's any extra information in the passage. Fortunately, we don't need to structure everything - we can have unstructured extraction as well. \n",
"\n",
"We can do this by introducing another hack, namely the *extra_info* attribute - let's see an example."
]
},
{
"cell_type": "code",
"execution_count": 68,
"id": "f19685f6",
"metadata": {},
"outputs": [],
"source": [
"schema = {\n",
" \"properties\": {\n",
" \"person_name\": {\"type\": \"string\"},\n",
" \"person_height\": {\"type\": \"integer\"},\n",
" \"person_hair_color\": {\"type\": \"string\"},\n",
" \"dog_name\": {\"type\": \"string\"},\n",
" \"dog_breed\": {\"type\": \"string\"},\n",
" \"dog_extra_info\": {\"type\": \"string\"},\n",
" },\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 81,
"id": "200c3477",
"metadata": {},
"outputs": [],
"source": [
"inp = \"\"\"\n",
"Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n",
"\n",
"Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by.\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 82,
"id": "ddad7dc6",
"metadata": {},
"outputs": [],
"source": [
"chain = create_extraction_chain(schema, llm)"
]
},
{
"cell_type": "markdown",
"id": "e5c0dbbc",
"metadata": {},
"source": [
"It is nice to know more about Willow and Milo!"
]
},
{
"cell_type": "code",
"execution_count": 83,
"id": "c22cfd30",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'},\n",
" {'person_name': 'Claudia',\n",
" 'person_height': 6,\n",
" 'person_hair_color': 'brunette'},\n",
" {'dog_name': 'Willow',\n",
" 'dog_breed': 'German Shepherd',\n",
" 'dog_extra_information': 'likes to play with other dogs'},\n",
" {'dog_name': 'Milo',\n",
" 'dog_breed': 'border collie',\n",
" 'dog_extra_information': 'lives close by'}]"
]
},
"execution_count": 83,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(inp)"
]
},
{
"cell_type": "markdown",
"id": "698b4c4d",
"metadata": {},
"source": [
"## Pydantic example"
]
},
{
"cell_type": "markdown",
"id": "6504a6d9",
"metadata": {},
"source": [
"We can also use a Pydantic schema to choose the required properties and types and we will set as 'Optional' those that are not strictly required.\n",
"\n",
"By using the `create_extraction_chain_pydantic` function, we can send a Pydantic schema as input and the output will be an instantiated object that respects our desired schema. \n",
"\n",
"In this way, we can specify our schema in the same manner that we would a new class or function in Python - with purely Pythonic types."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "6792866b",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional, List\n",
"from pydantic import BaseModel, Field"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "36a63761",
"metadata": {},
"outputs": [],
"source": [
"class Properties(BaseModel):\n",
" person_name: str\n",
" person_height: int\n",
" person_hair_color: str\n",
" dog_breed: Optional[str]\n",
" dog_name: Optional[str]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8ffd1e57",
"metadata": {},
"outputs": [],
"source": [
"chain = create_extraction_chain_pydantic(pydantic_schema=Properties, llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "24baa954",
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"inp = \"\"\"\n",
"Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n",
"Alex's dog Frosty is a labrador and likes to play hide and seek.\n",
" \"\"\""
]
},
{
"cell_type": "markdown",
"id": "84e0a241",
"metadata": {},
"source": [
"As we can see, we extracted the required entities and their properties in the required format:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "f771df58",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Properties(person_name='Alex', person_height=5, person_hair_color='blonde', dog_breed='labrador', dog_name='Frosty'),\n",
" Properties(person_name='Claudia', person_height=6, person_hair_color='brunette', dog_breed=None, dog_name=None)]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(inp)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,498 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0f0b9afa",
"metadata": {},
"source": [
"# FLARE\n",
"\n",
"This notebook is an implementation of Forward-Looking Active REtrieval augmented generation (FLARE).\n",
"\n",
"Please see the original repo [here](https://github.com/jzbjyb/FLARE/tree/main).\n",
"\n",
"The basic idea is:\n",
"\n",
"- Start answering a question\n",
"- If you start generating tokens the model is uncertain about, look up relevant documents\n",
"- Use those documents to continue generating\n",
"- Repeat until finished\n",
"\n",
"There is a lot of cool detail in how the lookup of relevant documents is done.\n",
"Basically, the tokens that model is uncertain about are highlighted, and then an LLM is called to generate a question that would lead to that answer. For example, if the generated text is `Joe Biden went to Harvard`, and the tokens the model was uncertain about was `Harvard`, then a good generated question would be `where did Joe Biden go to college`. This generated question is then used in a retrieval step to fetch relevant documents.\n",
"\n",
"In order to set up this chain, we will need three things:\n",
"\n",
"- An LLM to generate the answer\n",
"- An LLM to generate hypothetical questions to use in retrieval\n",
"- A retriever to use to look up answers for\n",
"\n",
"The LLM that we use to generate the answer needs to return logprobs so we can identify uncertain tokens. For that reason, we HIGHLY recommend that you use the OpenAI wrapper (NB: not the ChatOpenAI wrapper, as that does not return logprobs).\n",
"\n",
"The LLM we use to generate hypothetical questions to use in retrieval can be anything. In this notebook we will use ChatOpenAI because it is fast and cheap.\n",
"\n",
"The retriever can be anything. In this notebook we will use [SERPER](https://serper.dev/) search engine, because it is cheap.\n",
"\n",
"Other important parameters to understand:\n",
"\n",
"- `max_generation_len`: The maximum number of tokens to generate before stopping to check if any are uncertain\n",
"- `min_prob`: Any tokens generated with probability below this will be considered uncertain"
]
},
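{
"cell_type": "markdown",
"id": "5c6d7e8f",
"metadata": {},
"source": [
"As a minimal sketch of the uncertainty check (a hypothetical helper, not the `FlareChain` source): a token counts as uncertain when its generation probability `exp(logprob)` falls below `min_prob`.\n",
"\n",
"```python\n",
"import math\n",
"\n",
"def uncertain_tokens(tokens, logprobs, min_prob=0.3):\n",
"    \"\"\"Return the tokens whose probability falls below the threshold.\"\"\"\n",
"    return [tok for tok, lp in zip(tokens, logprobs) if math.exp(lp) < min_prob]\n",
"\n",
"tokens = [\"Joe\", \" Biden\", \" went\", \" to\", \" Harvard\"]\n",
"logprobs = [-0.1, -0.2, -0.3, -0.1, -2.5]  # made-up values for illustration\n",
"print(uncertain_tokens(tokens, logprobs))  # -> [' Harvard']\n",
"```"
]
},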
{
"cell_type": "markdown",
"id": "a7e4b63d",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "042bb161",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"SERPER_API_KEY\"] = \"\"",
"os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a7888f4a",
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"\n",
"import numpy as np\n",
"\n",
"from langchain.schema import BaseRetriever\n",
"from langchain.callbacks.manager import (\n",
" AsyncCallbackManagerForRetrieverRun,\n",
" CallbackManagerForRetrieverRun,\n",
")\n",
"from langchain.utilities import GoogleSerperAPIWrapper\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI\n",
"from langchain.schema import Document\n",
"from typing import Any, List"
]
},
{
"cell_type": "markdown",
"id": "5f552dce",
"metadata": {},
"source": [
"## Retriever"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "59c7d875",
"metadata": {},
"outputs": [],
"source": [
"class SerperSearchRetriever(BaseRetriever):\n",
" search: GoogleSerperAPIWrapper = None\n",
"\n",
" def _get_relevant_documents(\n",
" self, query: str, *, run_manager: CallbackManagerForRetrieverRun, **kwargs: Any\n",
" ) -> List[Document]:\n",
" return [Document(page_content=self.search.run(query))]\n",
"\n",
" async def _aget_relevant_documents(\n",
" self,\n",
" query: str,\n",
" *,\n",
" run_manager: AsyncCallbackManagerForRetrieverRun,\n",
" **kwargs: Any,\n",
" ) -> List[Document]:\n",
" raise NotImplementedError()\n",
"\n",
"\n",
"retriever = SerperSearchRetriever(search=GoogleSerperAPIWrapper())"
]
},
{
"cell_type": "markdown",
"id": "92478194",
"metadata": {},
"source": [
"## FLARE Chain"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "577e7c2c",
"metadata": {},
"outputs": [],
"source": [
"# We set this so we can see what exactly is going on\n",
"import langchain\n",
"\n",
"langchain.verbose = True"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "300d783e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import FlareChain\n",
"\n",
"flare = FlareChain.from_llm(\n",
" ChatOpenAI(temperature=0),\n",
" retriever=retriever,\n",
" max_generation_len=164,\n",
" min_prob=0.3,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1f3d5e90",
"metadata": {},
"outputs": [],
"source": [
"query = \"explain in great detail the difference between the langchain framework and baby agi\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "4b1bfa8c",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new FlareChain chain...\u001b[0m\n",
"\u001b[36;1m\u001b[1;3mCurrent Response: \u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mRespond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.\n",
"\n",
">>> CONTEXT: \n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> RESPONSE: \u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new QuestionGeneratorChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" decentralized platform for natural language processing\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" uses a blockchain\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" distributed ledger to\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" process data, allowing for secure and transparent data sharing.\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" set of tools\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" help developers create\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" create an AI system\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" NLP applications\" is:\u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[33;1m\u001b[1;3mGenerated Questions: ['What is the Langchain Framework?', 'What technology does the Langchain Framework use to store and process data for secure and transparent data sharing?', 'What technology does the Langchain Framework use to store and process data?', 'What does the Langchain Framework use a blockchain-based distributed ledger for?', 'What does the Langchain Framework provide in addition to a decentralized platform for natural language processing applications?', 'What set of tools and services does the Langchain Framework provide?', 'What is the purpose of Baby AGI?', 'What type of applications is the Langchain Framework designed for?']\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new _OpenAIResponseChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mRespond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.\n",
"\n",
">>> CONTEXT: LangChain: Software. LangChain is a software development framework designed to simplify the creation of applications using large language models. LangChain Initial release date: October 2022. LangChain Programming languages: Python and JavaScript. LangChain Developer(s): Harrison Chase. LangChain License: MIT License. LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only ... Type: Software framework. At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. LangChain is a powerful tool that can be used to work with Large Language Models (LLMs). LLMs are very general in nature, which means that while they can ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. LangChain is a software development framework designed to simplify the creation of applications using large language models (LLMs). Written in: Python and JavaScript. Initial release: October 2022. LangChain - The A.I-native developer toolkit We started LangChain with the intent to build a modular and flexible framework for developing A.I- ... LangChain explained in 3 minutes - LangChain is a ... Duration: 3:03. Posted: Apr 13, 2023. LangChain is a framework built to help you build LLM-powered applications more easily by providing you with the following:. LangChain is a framework that enables quick and easy development of applications that make use of Large Language Models, for example, GPT-3. LangChain is a powerful open-source framework for developing applications powered by language models. It connects to the AI models you want to ...\n",
"\n",
"LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... Missing: secure | Must include:secure. Blockchain is the best way to secure the data of the shared community. Utilizing the capabilities of the blockchain nobody can read or interfere ... This modern technology consists of a chain of blocks that allows to securely store all committed transactions using shared and distributed ... A Blockchain network is used in the healthcare system to preserve and exchange patient data through hospitals, diagnostic laboratories, pharmacy firms, and ... In this article, I will walk you through the process of using the LangChain.js library with Google Cloud Functions, helping you leverage the ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. Missing: transparent | Must include:transparent. This technology keeps a distributed ledger on each blockchain node, making it more secure and transparent. The blockchain network can operate smart ... blockchain technology can offer a highly secured health data ledger to ... framework can be employed to store encrypted healthcare data in a ... In a simplified way, Blockchain is a data structure that stores transactions in an ordered way and linked to the previous block, serving as a ... Blockchain technology is a decentralized, distributed ledger that stores the record of ownership of digital assets. Missing: Langchain | Must include:Langchain.\n",
"\n",
"LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. This documentation covers the steps to integrate Pinecone, a high-performance vector database, with LangChain, a framework for building applications powered ... The ability to connect to any model, ingest any custom database, and build upon a framework that can take action provides numerous use cases for ... With LangChain, developers can use a framework that abstracts the core building blocks of LLM applications. LangChain empowers developers to ... Build a question-answering tool based on financial data with LangChain & Deep Lake's unified & streamable data store. Browse applications built on LangChain technology. Explore PoC and MVP applications created by our community and discover innovative use cases for LangChain ... LangChain is a great framework that can be used for developing applications powered by LLMs. When you intend to enhance your application ... In this blog, we'll introduce you to LangChain and Ray Serve and how to use them to build a search engine using LLM embeddings and a vector ... The LinkChain Framework simplifies embedding creation and storage using Pinecone and Chroma, with code that loads files, splits documents, and creates embedding ... Missing: technology | Must include:technology.\n",
"\n",
"Blockchain is one type of a distributed ledger. Distributed ledgers use independent computers (referred to as nodes) to record, share and ... Missing: Langchain | Must include:Langchain. Blockchain is used in distributed storage software where huge data is broken down into chunks. This is available in encrypted data across a ... People sometimes use the terms 'Blockchain' and 'Distributed Ledger' interchangeably. This post aims to analyze the features of each. A distributed ledger ... Missing: Framework | Must include:Framework. Think of a “distributed ledger” that uses cryptography to allow each participant in the transaction to add to the ledger in a secure way without ... In this paper, we provide an overview of the history of trade settlement and discuss this nascent technology that may now transform traditional ... Missing: Langchain | Must include:Langchain. LangChain is a blockchain-based language education platform that aims to revolutionize the way people learn languages. Missing: Framework | Must include:Framework. It uses the distributed ledger technology framework and Smart contract engine for building scalable Business Blockchain applications. The fabric ... It looks at the assets the use case is handling, the different parties conducting transactions, and the smart contract, distributed ... Are you curious to know how Blockchain and Distributed ... Duration: 44:31. Posted: May 4, 2021. A blockchain is a distributed and immutable ledger to transfer ownership, record transactions, track assets, and ensure transparency, security, trust and value ... Missing: Langchain | Must include:Langchain.\n",
"\n",
"LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. Missing: decentralized | Must include:decentralized. LangChain, created by Harrison Chase, is a Python library that provides out-of-the-box support to build NLP applications using LLMs. Missing: decentralized | Must include:decentralized. LangChain provides a standard interface for chains, enabling developers to create sequences of calls that go beyond a single LLM call. Chains ... Missing: decentralized platform natural. LangChain is a powerful framework that simplifies the process of building advanced language model applications. Missing: platform | Must include:platform. Are your language models ignoring previous instructions ... Duration: 32:23. Posted: Feb 21, 2023. LangChain is a framework that enables quick and easy development of applications ... Prompting is the new way of programming NLP models. Missing: decentralized platform. It then uses natural language processing and machine learning algorithms to search ... Summarization is handled via cohere, QnA is handled via langchain, ... LangChain is a framework for developing applications powered by language models. ... There are several main modules that LangChain provides support for. Missing: decentralized platform. In the healthcare-chain system, blockchain provides an appreciated secure ... The entire process of adding new and previous block data is performed based on ... ChatGPT is a large language model developed by OpenAI, ... tool for a wide range of applications, including natural language processing, ...\n",
"\n",
"LangChain is a powerful tool that can be used to work with Large Language ... If an API key has been provided, create an OpenAI language model instance At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. A tutorial of the six core modules of the LangChain Python package covering models, prompts, chains, agents, indexes, and memory with OpenAI ... LangChain's collection of tools refers to a set of tools provided by the LangChain framework for developing applications powered by language models. LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only ... LangChain is an open-source library that provides developers with the tools to build applications powered by large language models (LLMs). LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... Plan-and-Execute Agents · Feature Stores and LLMs · Structured Tools · Auto-Evaluator Opportunities · Callbacks Improvements · Unleashing the power ... Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. · LLM: The language model ... LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\n",
"\n",
"Baby AGI has the ability to complete tasks, generate new tasks based on previous results, and prioritize tasks in real-time. This system is exploring and demonstrating to us the potential of large language models, such as GPT and how it can autonomously perform tasks. Apr 17, 2023\n",
"\n",
"At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> RESPONSE: \u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' LangChain is a framework for developing applications powered by language models. It provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. On the other hand, Baby AGI is an AI system that is exploring and demonstrating the potential of large language models, such as GPT, and how it can autonomously perform tasks. Baby AGI has the ability to complete tasks, generate new tasks based on previous results, and prioritize tasks in real-time. '"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"flare.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7bed8944",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nThe Langchain framework and Baby AGI are both artificial intelligence (AI) frameworks that are used to create intelligent agents. The Langchain framework is a supervised learning system that is based on the concept of “language chains”. It uses a set of rules to map natural language inputs to specific outputs. It is a general-purpose AI framework and can be used to build applications such as natural language processing (NLP), chatbots, and more.\\n\\nBaby AGI, on the other hand, is an unsupervised learning system that uses neural networks and reinforcement learning to learn from its environment. It is used to create intelligent agents that can adapt to changing environments. It is a more advanced AI system and can be used to build more complex applications such as game playing, robotic vision, and more.\\n\\nThe main difference between the two is that the Langchain framework uses supervised learning while Baby AGI uses unsupervised learning. The Langchain framework is a general-purpose AI framework that can be used for various applications, while Baby AGI is a more advanced AI system that can be used to create more complex applications.'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm = OpenAI()\n",
"llm(query)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8fb76286",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new FlareChain chain...\u001b[0m\n",
"\u001b[36;1m\u001b[1;3mCurrent Response: \u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mRespond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.\n",
"\n",
">>> CONTEXT: \n",
">>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?\n",
">>> RESPONSE: \u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new QuestionGeneratorChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"\n",
"Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications. \n",
"\n",
"FINISHED\n",
"\n",
"The question to which the answer is the term/entity/phrase \" very different origin\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"\n",
"Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications. \n",
"\n",
"FINISHED\n",
"\n",
"The question to which the answer is the term/entity/phrase \" 2020 by a\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"\n",
"Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications. \n",
"\n",
"FINISHED\n",
"\n",
"The question to which the answer is the term/entity/phrase \" developers as a platform for creating and managing decentralized language learning applications.\" is:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[33;1m\u001b[1;3mGenerated Questions: ['How would you describe the origin stories of Langchain and Bitcoin in terms of their similarities or differences?', 'When was Langchain created and by whom?', 'What was the purpose of creating Langchain?']\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new _OpenAIResponseChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mRespond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.\n",
"\n",
">>> CONTEXT: Bitcoin and Ethereum have many similarities but different long-term visions and limitations. Ethereum changed from proof of work to proof of ... Bitcoin will be around for many years and examining its white paper origins is a great exercise in understanding why. Satoshi Nakamoto's blueprint describes ... Bitcoin is a new currency that was created in 2009 by an unknown person using the alias Satoshi Nakamoto. Transactions are made with no middle men meaning, no ... Missing: Langchain | Must include:Langchain. By comparison, Bitcoin transaction speeds are tremendously lower. ... learn about its history and its role in the emergence of the Bitcoin ... LangChain is a powerful framework that simplifies the process of ... tasks like document retrieval, clustering, and similarity comparisons. Key terms: Bitcoin System, Blockchain Technology, ... Furthermore, the research paper will discuss and compare the five payment. Blockchain first appeared in Nakamoto's Bitcoin white paper that describes a new decentralized cryptocurrency [1]. Bitcoin takes the blockchain technology ... Missing: stories | Must include:stories. A score of 0 means there were not enough data for this term. Google trends was accessed on 5 November 2018 with searches for bitcoin, euro, gold ... Contracts, transactions, and records of them provide critical structure in our economic system, but they haven't kept up with the world's digital ... Missing: Langchain | Must include:Langchain. Of course, traders try to make a profit on their portfolio in this way.The difference between investing and trading is the regularity with which ...\n",
"\n",
"After all these giant leaps forward in the LLM space, OpenAI released ChatGPT — thrusting LLMs into the spotlight. LangChain appeared around the same time. Its creator, Harrison Chase, made the first commit in late October 2022. Leaving a short couple of months of development before getting caught in the LLM wave.\n",
"\n",
"At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.\n",
">>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?\n",
">>> RESPONSE: \u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' The origin stories of LangChain and Bitcoin are quite different. Bitcoin was created in 2009 by an unknown person using the alias Satoshi Nakamoto. LangChain was created in late October 2022 by Harrison Chase. Bitcoin is a decentralized cryptocurrency, while LangChain is a framework built around LLMs. '"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"flare.run(\"how are the origin stories of langchain and bitcoin similar or different?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fbadd022",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,819 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c94240f5",
"metadata": {
"id": "c94240f5"
},
"source": [
"# ArangoDB QA chain\n",
"\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/extras/modules/chains/additional/graph_arangodb_qa.ipynb)\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to an [ArangoDB](https://github.com/arangodb/arangodb#readme) database."
]
},
{
"cell_type": "markdown",
"id": "dbc0ee68",
"metadata": {
"id": "dbc0ee68"
},
"source": [
"You can get a local ArangoDB instance running via the [ArangoDB Docker image](https://hub.docker.com/_/arangodb): \n",
"\n",
"```\n",
"docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD= arangodb/arangodb\n",
"```\n",
"\n",
"An alternative is to use the [ArangoDB Cloud Connector package](https://github.com/arangodb/adb-cloud-connector#readme) to get a temporary cloud instance running:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "izi6YoFC8KRH",
"metadata": {
"id": "izi6YoFC8KRH"
},
"outputs": [],
"source": [
"%%capture\n",
"!pip install python-arango # The ArangoDB Python Driver\n",
"!pip install adb-cloud-connector # The ArangoDB Cloud Instance provisioner\n",
"!pip install openai\n",
"!pip install langchain"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "62812aad",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "62812aad",
"outputId": "f7ed8346-d88b-40d1-eaff-68e97e0e157e"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Log: requesting new credentials...\n",
"Succcess: new credentials acquired\n",
"{\n",
" \"dbName\": \"TUT3sp29s3pjf1io0h4cfdsq\",\n",
" \"username\": \"TUTo6nkwgzkizej3kysgdyeo8\",\n",
" \"password\": \"TUT9vx0qjqt42i9bq8uik4v9\",\n",
" \"hostname\": \"tutorials.arangodb.cloud\",\n",
" \"port\": 8529,\n",
" \"url\": \"https://tutorials.arangodb.cloud:8529\"\n",
"}\n"
]
}
],
"source": [
"# Instantiate ArangoDB Database\n",
"import json\n",
"from arango import ArangoClient\n",
"from adb_cloud_connector import get_temp_credentials\n",
"\n",
"con = get_temp_credentials()\n",
"\n",
"db = ArangoClient(hosts=con[\"url\"]).db(\n",
" con[\"dbName\"], con[\"username\"], con[\"password\"], verify=True\n",
")\n",
"\n",
"print(json.dumps(con, indent=2))"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0928915d",
"metadata": {
"id": "0928915d"
},
"outputs": [],
"source": [
"# Instantiate the ArangoDB-LangChain Graph\n",
"from langchain.graphs import ArangoGraph\n",
"\n",
"graph = ArangoGraph(db)"
]
},
{
"cell_type": "markdown",
"id": "995ea9b9",
"metadata": {
"id": "995ea9b9"
},
"source": [
"## Populating the Database\n",
"\n",
"We will rely on the Python Driver to import our [GameOfThrones](https://github.com/arangodb/example-datasets/tree/master/GameOfThrones) data into our database."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fedd26b9",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "fedd26b9",
"outputId": "fc7d9067-e4f5-495e-cd0c-79135bc16fc2"
},
"outputs": [
{
"data": {
"text/plain": [
"{'error': False,\n",
" 'created': 4,\n",
" 'errors': 0,\n",
" 'empty': 0,\n",
" 'updated': 0,\n",
" 'ignored': 0,\n",
" 'details': []}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"if db.has_graph(\"GameOfThrones\"):\n",
" db.delete_graph(\"GameOfThrones\", drop_collections=True)\n",
"\n",
"db.create_graph(\n",
" \"GameOfThrones\",\n",
" edge_definitions=[\n",
" {\n",
" \"edge_collection\": \"ChildOf\",\n",
" \"from_vertex_collections\": [\"Characters\"],\n",
" \"to_vertex_collections\": [\"Characters\"],\n",
" },\n",
" ],\n",
")\n",
"\n",
"documents = [\n",
" {\n",
" \"_key\": \"NedStark\",\n",
" \"name\": \"Ned\",\n",
" \"surname\": \"Stark\",\n",
" \"alive\": True,\n",
" \"age\": 41,\n",
" \"gender\": \"male\",\n",
" },\n",
" {\n",
" \"_key\": \"CatelynStark\",\n",
" \"name\": \"Catelyn\",\n",
" \"surname\": \"Stark\",\n",
" \"alive\": False,\n",
" \"age\": 40,\n",
" \"gender\": \"female\",\n",
" },\n",
" {\n",
" \"_key\": \"AryaStark\",\n",
" \"name\": \"Arya\",\n",
" \"surname\": \"Stark\",\n",
" \"alive\": True,\n",
" \"age\": 11,\n",
" \"gender\": \"female\",\n",
" },\n",
" {\n",
" \"_key\": \"BranStark\",\n",
" \"name\": \"Bran\",\n",
" \"surname\": \"Stark\",\n",
" \"alive\": True,\n",
" \"age\": 10,\n",
" \"gender\": \"male\",\n",
" },\n",
"]\n",
"\n",
"edges = [\n",
" {\"_to\": \"Characters/NedStark\", \"_from\": \"Characters/AryaStark\"},\n",
" {\"_to\": \"Characters/NedStark\", \"_from\": \"Characters/BranStark\"},\n",
" {\"_to\": \"Characters/CatelynStark\", \"_from\": \"Characters/AryaStark\"},\n",
" {\"_to\": \"Characters/CatelynStark\", \"_from\": \"Characters/BranStark\"},\n",
"]\n",
"\n",
"db.collection(\"Characters\").import_bulk(documents)\n",
"db.collection(\"ChildOf\").import_bulk(edges)"
]
},
{
"cell_type": "markdown",
"id": "58c1a8ea",
"metadata": {
"id": "58c1a8ea"
},
"source": [
"## Getting & Setting the ArangoDB Schema\n",
"\n",
"An initial ArangoDB Schema is generated upon instantiating the `ArangoDBGraph` object. Below are the schema's getter & setter methods should you be interested in viewing or modifying the schema:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4e3de44f",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "4e3de44f",
"outputId": "6102f0c6-4a94-4e00-b93e-eb8de2f7d9d5"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"Graph Schema\": [],\n",
" \"Collection Schema\": []\n",
"}\n"
]
}
],
"source": [
"# The schema should be empty here,\n",
"# since `graph` was initialized prior to ArangoDB Data ingestion (see above).\n",
"\n",
"import json\n",
"\n",
"print(json.dumps(graph.schema, indent=4))"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1fe76ccd",
"metadata": {
"id": "1fe76ccd"
},
"outputs": [],
"source": [
"graph.set_schema()"
]
},
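{
"cell_type": "markdown",
"id": "7f2b1c9a",
"metadata": {},
"source": [
"If you already know the shape of your data, you can also supply a custom schema instead of auto-generating one. The cell below is a minimal sketch, assuming `set_schema` accepts a schema dictionary in the same format as the generated schema printed below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c3d2e0b",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of overriding the generated schema.\n",
"# Assumption: `set_schema` accepts a dict shaped like the generated schema below.\n",
"custom_schema = {\"Graph Schema\": [], \"Collection Schema\": []}\n",
"\n",
"# graph.set_schema(custom_schema)  # uncomment to override the generated schema"
]
},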
{
"cell_type": "code",
"execution_count": 7,
"id": "mZ679anj_-Er",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "mZ679anj_-Er",
"outputId": "e05229c7-bc61-4803-d720-47e3e9b2b350"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"Graph Schema\": [\n",
" {\n",
" \"graph_name\": \"GameOfThrones\",\n",
" \"edge_definitions\": [\n",
" {\n",
" \"edge_collection\": \"ChildOf\",\n",
" \"from_vertex_collections\": [\n",
" \"Characters\"\n",
" ],\n",
" \"to_vertex_collections\": [\n",
" \"Characters\"\n",
" ]\n",
" }\n",
" ]\n",
" }\n",
" ],\n",
" \"Collection Schema\": [\n",
" {\n",
" \"collection_name\": \"ChildOf\",\n",
" \"collection_type\": \"edge\",\n",
" \"edge_properties\": [\n",
" {\n",
" \"name\": \"_key\",\n",
" \"type\": \"str\"\n",
" },\n",
" {\n",
" \"name\": \"_id\",\n",
" \"type\": \"str\"\n",
" },\n",
" {\n",
" \"name\": \"_from\",\n",
" \"type\": \"str\"\n",
" },\n",
" {\n",
" \"name\": \"_to\",\n",
" \"type\": \"str\"\n",
" },\n",
" {\n",
" \"name\": \"_rev\",\n",
" \"type\": \"str\"\n",
" }\n",
" ],\n",
" \"example_edge\": {\n",
" \"_key\": \"266218884025\",\n",
" \"_id\": \"ChildOf/266218884025\",\n",
" \"_from\": \"Characters/AryaStark\",\n",
" \"_to\": \"Characters/NedStark\",\n",
" \"_rev\": \"_gVPKGSq---\"\n",
" }\n",
" },\n",
" {\n",
" \"collection_name\": \"Characters\",\n",
" \"collection_type\": \"document\",\n",
" \"document_properties\": [\n",
" {\n",
" \"name\": \"_key\",\n",
" \"type\": \"str\"\n",
" },\n",
" {\n",
" \"name\": \"_id\",\n",
" \"type\": \"str\"\n",
" },\n",
" {\n",
" \"name\": \"_rev\",\n",
" \"type\": \"str\"\n",
" },\n",
" {\n",
" \"name\": \"name\",\n",
" \"type\": \"str\"\n",
" },\n",
" {\n",
" \"name\": \"surname\",\n",
" \"type\": \"str\"\n",
" },\n",
" {\n",
" \"name\": \"alive\",\n",
" \"type\": \"bool\"\n",
" },\n",
" {\n",
" \"name\": \"age\",\n",
" \"type\": \"int\"\n",
" },\n",
" {\n",
" \"name\": \"gender\",\n",
" \"type\": \"str\"\n",
" }\n",
" ],\n",
" \"example_document\": {\n",
" \"_key\": \"NedStark\",\n",
" \"_id\": \"Characters/NedStark\",\n",
" \"_rev\": \"_gVPKGPi---\",\n",
" \"name\": \"Ned\",\n",
" \"surname\": \"Stark\",\n",
" \"alive\": true,\n",
" \"age\": 41,\n",
" \"gender\": \"male\"\n",
" }\n",
" }\n",
" ]\n",
"}\n"
]
}
],
"source": [
"# We can now view the generated schema\n",
"\n",
"import json\n",
"\n",
"print(json.dumps(graph.schema, indent=4))"
]
},
{
"cell_type": "markdown",
"id": "68a3c677",
"metadata": {
"id": "68a3c677"
},
"source": [
"## Querying the ArangoDB Database\n",
"\n",
"We can now use the ArangoDB Graph QA Chain to inquire about our data"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "635c4018",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"your-key-here\""
]
},
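{
"cell_type": "markdown",
"id": "e3b8d7a5",
"metadata": {},
"source": [
"Rather than hardcoding the key in the notebook, you can prompt for it with the standard library; a small sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4c9e8b6",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"# Prompt for the key only if it isn't already set in the environment.\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
"    os.environ[\"OPENAI_API_KEY\"] = getpass(\"OpenAI API key: \")"
]
},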
{
"cell_type": "code",
"execution_count": 9,
"id": "7476ce98",
"metadata": {
"id": "7476ce98"
},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import ArangoGraphQAChain\n",
"\n",
"chain = ArangoGraphQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "ef8ee27b",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 261
},
"id": "ef8ee27b",
"outputId": "6008ee92-a3ec-4968-d48d-6a5b66403959"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ArangoGraphQAChain chain...\u001b[0m\n",
"AQL Query (1):\u001b[32;1m\u001b[1;3m\n",
"WITH Characters\n",
"FOR character IN Characters\n",
"FILTER character.name == \"Ned\" AND character.surname == \"Stark\"\n",
"RETURN character.alive\n",
"\u001b[0m\n",
"AQL Result:\n",
"\u001b[32;1m\u001b[1;3m[True]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'Yes, Ned Stark is alive.'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Is Ned Stark alive?\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "9CSig1BgA76q",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 261
},
"id": "9CSig1BgA76q",
"outputId": "3060cf15-68e0-4f8a-cdfd-68af3f0e5fbf"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ArangoGraphQAChain chain...\u001b[0m\n",
"AQL Query (1):\u001b[32;1m\u001b[1;3m\n",
"WITH Characters\n",
"FOR character IN Characters\n",
"FILTER character.name == \"Arya\" && character.surname == \"Stark\"\n",
"RETURN character.age\n",
"\u001b[0m\n",
"AQL Result:\n",
"\u001b[32;1m\u001b[1;3m[11]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'Arya Stark is 11 years old.'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"How old is Arya Stark?\")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "9Fzdic_pA_4y",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 298
},
"id": "9Fzdic_pA_4y",
"outputId": "9bd93580-964e-4c53-e273-6723dad5f375"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ArangoGraphQAChain chain...\u001b[0m\n",
"AQL Query (1):\u001b[32;1m\u001b[1;3m\n",
"WITH Characters, ChildOf\n",
"FOR v, e, p IN 1..1 OUTBOUND 'Characters/AryaStark' ChildOf\n",
" FILTER p.vertices[-1]._key == 'NedStark'\n",
" RETURN p\n",
"\u001b[0m\n",
"AQL Result:\n",
"\u001b[32;1m\u001b[1;3m[{'vertices': [{'_key': 'AryaStark', '_id': 'Characters/AryaStark', '_rev': '_gVPKGPi--B', 'name': 'Arya', 'surname': 'Stark', 'alive': True, 'age': 11, 'gender': 'female'}, {'_key': 'NedStark', '_id': 'Characters/NedStark', '_rev': '_gVPKGPi---', 'name': 'Ned', 'surname': 'Stark', 'alive': True, 'age': 41, 'gender': 'male'}], 'edges': [{'_key': '266218884025', '_id': 'ChildOf/266218884025', '_from': 'Characters/AryaStark', '_to': 'Characters/NedStark', '_rev': '_gVPKGSq---'}], 'weights': [0, 1]}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'Yes, Arya Stark and Ned Stark are related. According to the information retrieved from the database, there is a relationship between them. Arya Stark is the child of Ned Stark.'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Are Arya Stark and Ned Stark related?\")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "zq_oeDpAOXpF",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 261
},
"id": "zq_oeDpAOXpF",
"outputId": "a47f37b5-4d7b-41c6-fbc7-7b1abc25fa20"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ArangoGraphQAChain chain...\u001b[0m\n",
"AQL Query (1):\u001b[32;1m\u001b[1;3m\n",
"WITH Characters, ChildOf\n",
"FOR v, e IN 1..1 OUTBOUND 'Characters/AryaStark' ChildOf\n",
"FILTER v.alive == false\n",
"RETURN e\n",
"\u001b[0m\n",
"AQL Result:\n",
"\u001b[32;1m\u001b[1;3m[{'_key': '266218884027', '_id': 'ChildOf/266218884027', '_from': 'Characters/AryaStark', '_to': 'Characters/CatelynStark', '_rev': '_gVPKGSu---'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'Yes, Arya Stark has a dead parent. The parent is Catelyn Stark.'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Does Arya Stark have a dead parent?\")"
]
},
{
"cell_type": "markdown",
"id": "Ob_3aGauGd7d",
"metadata": {
"id": "Ob_3aGauGd7d"
},
"source": [
"## Chain Modifiers"
]
},
{
"cell_type": "markdown",
"id": "3P490E2dGiBp",
"metadata": {
"id": "3P490E2dGiBp"
},
"source": [
"You can alter the values of the following `ArangoDBGraphQAChain` class variables to modify the behaviour of your chain results\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "1B9h3PvzJ41T",
"metadata": {
"id": "1B9h3PvzJ41T"
},
"outputs": [],
"source": [
"# Specify the maximum number of AQL Query Results to return\n",
"chain.top_k = 10\n",
"\n",
"# Specify whether or not to return the AQL Query in the output dictionary\n",
"chain.return_aql_query = True\n",
"\n",
"# Specify whether or not to return the AQL JSON Result in the output dictionary\n",
"chain.return_aql_result = True\n",
"\n",
"# Specify the maximum amount of AQL Generation attempts that should be made\n",
"chain.max_aql_generation_attempts = 5\n",
"\n",
"# Specify a set of AQL Query Examples, which are passed to\n",
"# the AQL Generation Prompt Template to promote few-shot-learning.\n",
"# Defaults to an empty string.\n",
"chain.aql_examples = \"\"\"\n",
"# Is Ned Stark alive?\n",
"RETURN DOCUMENT('Characters/NedStark').alive\n",
"\n",
"# Is Arya Stark the child of Ned Stark?\n",
"FOR e IN ChildOf\n",
" FILTER e._from == \"Characters/AryaStark\" AND e._to == \"Characters/NedStark\"\n",
" RETURN e\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "49cnjYV-PUv3",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 209
},
"id": "49cnjYV-PUv3",
"outputId": "f05f0a86-f922-47d1-91b9-5380ef1f996d"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ArangoGraphQAChain chain...\u001b[0m\n",
"AQL Query (1):\u001b[32;1m\u001b[1;3m\n",
"RETURN DOCUMENT('Characters/NedStark').alive\n",
"\u001b[0m\n",
"AQL Result:\n",
"\u001b[32;1m\u001b[1;3m[True]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'Yes, according to the information in the database, Ned Stark is alive.'"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Is Ned Stark alive?\")\n",
"\n",
"# chain(\"Is Ned Stark alive?\") # Returns a dictionary with the AQL Query & AQL Result"
]
},
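{
"cell_type": "markdown",
"id": "a9d4f3c1",
"metadata": {},
"source": [
"With `return_aql_query` and `return_aql_result` enabled above, calling the chain as a function returns a dictionary instead of a bare string. A minimal sketch of reading it (the `aql_query` and `aql_result` key names are assumptions tied to the flags set above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b0e5a4d2",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch, assuming the output dictionary exposes these keys.\n",
"output = chain(\"Is Ned Stark alive?\")\n",
"\n",
"print(output[\"result\"])  # the natural language answer\n",
"print(output[\"aql_query\"])  # assumed key: the generated AQL query\n",
"print(output[\"aql_result\"])  # assumed key: the raw AQL result"
]
},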
{
"cell_type": "code",
"execution_count": 16,
"id": "nWfALJ8dPczE",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 244
},
"id": "nWfALJ8dPczE",
"outputId": "1235baae-f3f7-438e-ef24-658cd17f727d"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ArangoGraphQAChain chain...\u001b[0m\n",
"AQL Query (1):\u001b[32;1m\u001b[1;3m\n",
"FOR e IN ChildOf\n",
" FILTER e._from == \"Characters/BranStark\" AND e._to == \"Characters/NedStark\"\n",
" RETURN e\n",
"\u001b[0m\n",
"AQL Result:\n",
"\u001b[32;1m\u001b[1;3m[{'_key': '266218884026', '_id': 'ChildOf/266218884026', '_from': 'Characters/BranStark', '_to': 'Characters/NedStark', '_rev': '_gVPKGSq--_'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'Yes, according to the information in the ArangoDB database, Bran Stark is indeed the child of Ned Stark.'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Is Bran Stark the child of Ned Stark?\")"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [
"995ea9b9",
"58c1a8ea",
"68a3c677",
"Ob_3aGauGd7d"
],
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,400 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c94240f5",
"metadata": {},
"source": [
"# Graph DB QA chain\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language."
]
},
{
"cell_type": "markdown",
"id": "dbc0ee68",
"metadata": {},
"source": [
"You will need to have a running Neo4j instance. One option is to create a [free Neo4j database instance in their Aura cloud service](https://neo4j.com/cloud/platform/aura-graph-database/). You can also run the database locally using the [Neo4j Desktop application](https://neo4j.com/download/), or running a docker container.\n",
"You can run a local docker container by running the executing the following script:\n",
"\n",
"```\n",
"docker run \\\n",
" --name neo4j \\\n",
" -p 7474:7474 -p 7687:7687 \\\n",
" -d \\\n",
" -e NEO4J_AUTH=neo4j/pleaseletmein \\\n",
" -e NEO4J_PLUGINS=\\[\\\"apoc\\\"\\] \\\n",
" neo4j:latest\n",
"```\n",
"\n",
"If you are using the docker container, you need to wait a couple of second for the database to start."
]
},
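{
"cell_type": "markdown",
"id": "c1f6b5e3",
"metadata": {},
"source": [
"If you would rather block until the container is ready than sleep for a fixed time, here is a minimal sketch that polls the bolt port (assuming the default `localhost:7687` mapping from the command above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d2a7c6f4",
"metadata": {},
"outputs": [],
"source": [
"import socket\n",
"import time\n",
"\n",
"\n",
"def wait_for_port(host=\"localhost\", port=7687, timeout=30.0):\n",
"    # Poll the port until the Neo4j container accepts TCP connections.\n",
"    deadline = time.time() + timeout\n",
"    while time.time() < deadline:\n",
"        try:\n",
"            with socket.create_connection((host, port), timeout=1.0):\n",
"                return True\n",
"        except OSError:\n",
"            time.sleep(0.5)\n",
"    raise TimeoutError(f\"{host}:{port} did not open within {timeout}s\")\n",
"\n",
"\n",
"wait_for_port()"
]
},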
{
"cell_type": "code",
"execution_count": 1,
"id": "62812aad",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import GraphCypherQAChain\n",
"from langchain.graphs import Neo4jGraph"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0928915d",
"metadata": {},
"outputs": [],
"source": [
"graph = Neo4jGraph(\n",
" url=\"bolt://localhost:7687\", username=\"neo4j\", password=\"pleaseletmein\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "995ea9b9",
"metadata": {},
"source": [
"## Seeding the database\n",
"\n",
"Assuming your database is empty, you can populate it using Cypher query language. The following Cypher statement is idempotent, which means the database information will be the same if you run it one or multiple times."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "fedd26b9",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"graph.query(\n",
" \"\"\"\n",
"MERGE (m:Movie {name:\"Top Gun\"})\n",
"WITH m\n",
"UNWIND [\"Tom Cruise\", \"Val Kilmer\", \"Anthony Edwards\", \"Meg Ryan\"] AS actor\n",
"MERGE (a:Actor {name:actor})\n",
"MERGE (a)-[:ACTED_IN]->(m)\n",
"\"\"\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "58c1a8ea",
"metadata": {},
"source": [
"## Refresh graph schema information\n",
"If the schema of database changes, you can refresh the schema information needed to generate Cypher statements."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4e3de44f",
"metadata": {},
"outputs": [],
"source": [
"graph.refresh_schema()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "1fe76ccd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
" Node properties are the following:\n",
" [{'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Movie'}, {'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Actor'}]\n",
" Relationship properties are the following:\n",
" []\n",
" The relationships are the following:\n",
" ['(:Actor)-[:ACTED_IN]->(:Movie)']\n",
" \n"
]
}
],
"source": [
"print(graph.get_schema)"
]
},
{
"cell_type": "markdown",
"id": "68a3c677",
"metadata": {},
"source": [
"## Querying the graph\n",
"\n",
"We can now use the graph cypher QA chain to ask question of the graph"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7476ce98",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "ef8ee27b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Val Kilmer, Anthony Edwards, Meg Ryan, and Tom Cruise played in Top Gun.'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who played in Top Gun?\")"
]
},
{
"cell_type": "markdown",
"id": "2d28c4df",
"metadata": {},
"source": [
"## Limit the number of results\n",
"You can limit the number of results from the Cypher QA Chain using the `top_k` parameter.\n",
"The default is 10."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "df230946",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "3f1600ee",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Val Kilmer and Anthony Edwards played in Top Gun.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who played in Top Gun?\")"
]
},
{
"cell_type": "markdown",
"id": "88c16206",
"metadata": {},
"source": [
"## Return intermediate results\n",
"You can return intermediate steps from the Cypher QA Chain using the `return_intermediate_steps` parameter"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "e412f36b",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "4f4699dc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Intermediate steps: [{'query': \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\\nRETURN a.name\"}, {'context': [{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Tom Cruise'}]}]\n",
"Final answer: Val Kilmer, Anthony Edwards, Meg Ryan, and Tom Cruise played in Top Gun.\n"
]
}
],
"source": [
"result = chain(\"Who played in Top Gun?\")\n",
"print(f\"Intermediate steps: {result['intermediate_steps']}\")\n",
"print(f\"Final answer: {result['result']}\")"
]
},
{
"cell_type": "markdown",
"id": "d6e1b054",
"metadata": {},
"source": [
"## Return direct results\n",
"You can return direct results from the Cypher QA Chain using the `return_direct` parameter"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "2d3acf10",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "b0a9d143",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\n",
"RETURN a.name\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"[{'a.name': 'Val Kilmer'},\n",
" {'a.name': 'Anthony Edwards'},\n",
" {'a.name': 'Meg Ryan'},\n",
" {'a.name': 'Tom Cruise'}]"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who played in Top Gun?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "74d0a36f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,308 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d2777010",
"metadata": {},
"source": [
"# HugeGraph QA Chain\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to [HugeGraph](https://hugegraph.apache.org/cn/) database."
]
},
{
"cell_type": "markdown",
"id": "f26dcbe4",
"metadata": {},
"source": [
"You will need to have a running HugeGraph instance.\n",
"You can run a local docker container by running the executing the following script:\n",
"\n",
"```\n",
"docker run \\\n",
" --name=graph \\\n",
" -itd \\\n",
" -p 8080:8080 \\\n",
" hugegraph/hugegraph\n",
"```\n",
"\n",
"If we want to connect HugeGraph in the application, we need to install python sdk:\n",
"\n",
"```\n",
"pip3 install hugegraph-python\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "d64a29f1",
"metadata": {},
"source": [
"If you are using the docker container, you need to wait a couple of second for the database to start, and then we need create schema and write graph data for the database."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "e53ab93e",
"metadata": {},
"outputs": [],
"source": [
"from hugegraph.connection import PyHugeGraph\n",
"\n",
"client = PyHugeGraph(\"localhost\", \"8080\", user=\"admin\", pwd=\"admin\", graph=\"hugegraph\")"
]
},
{
"cell_type": "markdown",
"id": "b7c3a50e",
"metadata": {},
"source": [
"First, we create the schema for a simple movie database:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ef5372a8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'create EdgeLabel success, Detail: \"b\\'{\"id\":1,\"name\":\"ActedIn\",\"source_label\":\"Person\",\"target_label\":\"Movie\",\"frequency\":\"SINGLE\",\"sort_keys\":[],\"nullable_keys\":[],\"index_labels\":[],\"properties\":[],\"status\":\"CREATED\",\"ttl\":0,\"enable_label_index\":true,\"user_data\":{\"~create_time\":\"2023-07-04 10:48:47.908\"}}\\'\"'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\"\"\"schema\"\"\"\n",
"schema = client.schema()\n",
"schema.propertyKey(\"name\").asText().ifNotExist().create()\n",
"schema.propertyKey(\"birthDate\").asText().ifNotExist().create()\n",
"schema.vertexLabel(\"Person\").properties(\n",
" \"name\", \"birthDate\"\n",
").usePrimaryKeyId().primaryKeys(\"name\").ifNotExist().create()\n",
"schema.vertexLabel(\"Movie\").properties(\"name\").usePrimaryKeyId().primaryKeys(\n",
" \"name\"\n",
").ifNotExist().create()\n",
"schema.edgeLabel(\"ActedIn\").sourceLabel(\"Person\").targetLabel(\n",
" \"Movie\"\n",
").ifNotExist().create()"
]
},
{
"cell_type": "markdown",
"id": "016f7989",
"metadata": {},
"source": [
"Then we can insert some data."
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "b7f4c370",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1:Robert De Niro--ActedIn-->2:The Godfather Part II"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\"\"\"graph\"\"\"\n",
"g = client.graph()\n",
"g.addVertex(\"Person\", {\"name\": \"Al Pacino\", \"birthDate\": \"1940-04-25\"})\n",
"g.addVertex(\"Person\", {\"name\": \"Robert De Niro\", \"birthDate\": \"1943-08-17\"})\n",
"g.addVertex(\"Movie\", {\"name\": \"The Godfather\"})\n",
"g.addVertex(\"Movie\", {\"name\": \"The Godfather Part II\"})\n",
"g.addVertex(\"Movie\", {\"name\": \"The Godfather Coda The Death of Michael Corleone\"})\n",
"\n",
"g.addEdge(\"ActedIn\", \"1:Al Pacino\", \"2:The Godfather\", {})\n",
"g.addEdge(\"ActedIn\", \"1:Al Pacino\", \"2:The Godfather Part II\", {})\n",
"g.addEdge(\n",
" \"ActedIn\", \"1:Al Pacino\", \"2:The Godfather Coda The Death of Michael Corleone\", {}\n",
")\n",
"g.addEdge(\"ActedIn\", \"1:Robert De Niro\", \"2:The Godfather Part II\", {})"
]
},
{
"cell_type": "markdown",
"id": "5b8f7788",
"metadata": {},
"source": [
"## Creating `HugeGraphQAChain`\n",
"\n",
"We can now create the `HugeGraph` and `HugeGraphQAChain`. To create the `HugeGraph` we simply need to pass the database object to the `HugeGraph` constructor."
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "f1f68fcf",
"metadata": {
"is_executing": true
},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import HugeGraphQAChain\n",
"from langchain.graphs import HugeGraph"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "b86ebfa7",
"metadata": {},
"outputs": [],
"source": [
"graph = HugeGraph(\n",
" username=\"admin\",\n",
" password=\"admin\",\n",
" address=\"localhost\",\n",
" port=8080,\n",
" graph=\"hugegraph\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e262540b",
"metadata": {},
"source": [
"## Refresh graph schema information\n",
"\n",
"If the schema of database changes, you can refresh the schema information needed to generate Gremlin statements."
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "134dd8d6",
"metadata": {},
"outputs": [],
"source": [
"# graph.refresh_schema()"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "e78b8e72",
"metadata": {
"ExecuteTime": {}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Node properties: [name: Person, primary_keys: ['name'], properties: ['name', 'birthDate'], name: Movie, primary_keys: ['name'], properties: ['name']]\n",
"Edge properties: [name: ActedIn, properties: []]\n",
"Relationships: ['Person--ActedIn-->Movie']\n",
"\n"
]
}
],
"source": [
"print(graph.get_schema)"
]
},
{
"cell_type": "markdown",
"id": "5c27e813",
"metadata": {},
"source": [
"## Querying the graph\n",
"\n",
"We can now use the graph Gremlin QA chain to ask question of the graph"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "3b23dead",
"metadata": {},
"outputs": [],
"source": [
"chain = HugeGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "76aecc93",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Generated gremlin:\n",
"\u001b[32;1m\u001b[1;3mg.V().has('Movie', 'name', 'The Godfather').in('ActedIn').valueMap(true)\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'id': '1:Al Pacino', 'label': 'Person', 'name': ['Al Pacino'], 'birthDate': ['1940-04-25']}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Al Pacino played in The Godfather.'"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who played in The Godfather?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "869f0258",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,374 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# KuzuQAChain\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to [Kùzu](https://kuzudb.com) database."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"[Kùzu](https://kuzudb.com) is an in-process property graph database management system. You can simply install it with `pip`:\n",
"\n",
"```bash\n",
"pip install kuzu\n",
"```\n",
"\n",
"Once installed, you can simply import it and start creating a database on the local machine and connect to it:\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import kuzu\n",
"\n",
"db = kuzu.Database(\"test_db\")\n",
"conn = kuzu.Connection(db)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"First, we create the schema for a simple movie database:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<kuzu.query_result.QueryResult at 0x1066ff410>"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conn.execute(\"CREATE NODE TABLE Movie (name STRING, PRIMARY KEY(name))\")\n",
"conn.execute(\n",
" \"CREATE NODE TABLE Person (name STRING, birthDate STRING, PRIMARY KEY(name))\"\n",
")\n",
"conn.execute(\"CREATE REL TABLE ActedIn (FROM Person TO Movie)\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we can insert some data."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<kuzu.query_result.QueryResult at 0x107016210>"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conn.execute(\"CREATE (:Person {name: 'Al Pacino', birthDate: '1940-04-25'})\")\n",
"conn.execute(\"CREATE (:Person {name: 'Robert De Niro', birthDate: '1943-08-17'})\")\n",
"conn.execute(\"CREATE (:Movie {name: 'The Godfather'})\")\n",
"conn.execute(\"CREATE (:Movie {name: 'The Godfather: Part II'})\")\n",
"conn.execute(\n",
" \"CREATE (:Movie {name: 'The Godfather Coda: The Death of Michael Corleone'})\"\n",
")\n",
"conn.execute(\n",
" \"MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather' CREATE (p)-[:ActedIn]->(m)\"\n",
")\n",
"conn.execute(\n",
" \"MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)\"\n",
")\n",
"conn.execute(\n",
" \"MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather Coda: The Death of Michael Corleone' CREATE (p)-[:ActedIn]->(m)\"\n",
")\n",
"conn.execute(\n",
" \"MATCH (p:Person), (m:Movie) WHERE p.name = 'Robert De Niro' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)\"\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating `KuzuQAChain`\n",
"\n",
"We can now create the `KuzuGraph` and `KuzuQAChain`. To create the `KuzuGraph` we simply need to pass the database object to the `KuzuGraph` constructor."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.graphs import KuzuGraph\n",
"from langchain.chains import KuzuQAChain"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"graph = KuzuGraph(db)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"chain = KuzuQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refresh graph schema information\n",
"\n",
"If the schema of database changes, you can refresh the schema information needed to generate Cypher statements."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# graph.refresh_schema()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Node properties: [{'properties': [('name', 'STRING')], 'label': 'Movie'}, {'properties': [('name', 'STRING'), ('birthDate', 'STRING')], 'label': 'Person'}]\n",
"Relationships properties: [{'properties': [], 'label': 'ActedIn'}]\n",
"Relationships: ['(:Person)-[:ActedIn]->(:Movie)']\n",
"\n"
]
}
],
"source": [
"print(graph.get_schema)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Querying the graph\n",
"\n",
"We can now use the `KuzuQAChain` to ask question of the graph"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (p:Person)-[:ActedIn]->(m:Movie {name: 'The Godfather: Part II'}) RETURN p.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'p.name': 'Al Pacino'}, {'p.name': 'Robert De Niro'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Al Pacino and Robert De Niro both played in The Godfather: Part II.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who played in The Godfather: Part II?\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie)\n",
"RETURN m.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'m.name': 'The Godfather: Part II'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Robert De Niro played in The Godfather: Part II.'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Robert De Niro played in which movies?\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie)\n",
"RETURN p.birthDate\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'p.birthDate': '1943-08-17'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Robert De Niro was born on August 17, 1943.'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Robert De Niro is born in which year?\")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (p:Person)-[:ActedIn]->(m:Movie{name:'The Godfather: Part II'})\n",
"WITH p, m, p.birthDate AS birthDate\n",
"ORDER BY birthDate ASC\n",
"LIMIT 1\n",
"RETURN p.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'p.name': 'Al Pacino'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The oldest actor who played in The Godfather: Part II is Al Pacino.'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who is the oldest actor who played in The Godfather: Part II?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,270 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "c94240f5",
"metadata": {},
"source": [
"# NebulaGraphQAChain\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to NebulaGraph database."
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "dbc0ee68",
"metadata": {},
"source": [
"You will need to have a running NebulaGraph cluster, for which you can run a containerized cluster by running the following script:\n",
"\n",
"```bash\n",
"curl -fsSL nebula-up.siwei.io/install.sh | bash\n",
"```\n",
"\n",
"Other options are:\n",
"- Install as a [Docker Desktop Extension](https://www.docker.com/blog/distributed-cloud-native-graph-database-nebulagraph-docker-extension/). See [here](https://docs.nebula-graph.io/3.5.0/2.quick-start/1.quick-start-workflow/)\n",
"- NebulaGraph Cloud Service. See [here](https://www.nebula-graph.io/cloud)\n",
"- Deploy from package, source code, or via Kubernetes. See [here](https://docs.nebula-graph.io/)\n",
"\n",
"Once the cluster is running, we could create the SPACE and SCHEMA for the database."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c82f4141",
"metadata": {},
"outputs": [],
"source": [
"%pip install ipython-ngql\n",
"%load_ext ngql\n",
"\n",
"# connect ngql jupyter extension to nebulagraph\n",
"%ngql --address 127.0.0.1 --port 9669 --user root --password nebula\n",
"# create a new space\n",
"%ngql CREATE SPACE IF NOT EXISTS langchain(partition_num=1, replica_factor=1, vid_type=fixed_string(128));"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eda0809a",
"metadata": {},
"outputs": [],
"source": [
"# Wait for a few seconds for the space to be created.\n",
"%ngql USE langchain;"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "119fe35c",
"metadata": {},
"source": [
"Create the schema, for full dataset, refer [here](https://www.siwei.io/en/nebulagraph-etl-dbt/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5aa796ee",
"metadata": {},
"outputs": [],
"source": [
"%%ngql\n",
"CREATE TAG IF NOT EXISTS movie(name string);\n",
"CREATE TAG IF NOT EXISTS person(name string, birthdate string);\n",
"CREATE EDGE IF NOT EXISTS acted_in();\n",
"CREATE TAG INDEX IF NOT EXISTS person_index ON person(name(128));\n",
"CREATE TAG INDEX IF NOT EXISTS movie_index ON movie(name(128));"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "66e4799a",
"metadata": {},
"source": [
"Wait for schema creation to complete, then we can insert some data."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d8eea530",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"UsageError: Cell magic `%%ngql` not found.\n"
]
}
],
"source": [
"%%ngql\n",
"INSERT VERTEX person(name, birthdate) VALUES \"Al Pacino\":(\"Al Pacino\", \"1940-04-25\");\n",
"INSERT VERTEX movie(name) VALUES \"The Godfather II\":(\"The Godfather II\");\n",
"INSERT VERTEX movie(name) VALUES \"The Godfather Coda: The Death of Michael Corleone\":(\"The Godfather Coda: The Death of Michael Corleone\");\n",
"INSERT EDGE acted_in() VALUES \"Al Pacino\"->\"The Godfather II\":();\n",
"INSERT EDGE acted_in() VALUES \"Al Pacino\"->\"The Godfather Coda: The Death of Michael Corleone\":();"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "62812aad",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import NebulaGraphQAChain\n",
"from langchain.graphs import NebulaGraph"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0928915d",
"metadata": {},
"outputs": [],
"source": [
"graph = NebulaGraph(\n",
" space=\"langchain\",\n",
" username=\"root\",\n",
" password=\"nebula\",\n",
" address=\"127.0.0.1\",\n",
" port=9669,\n",
" session_pool_size=30,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "58c1a8ea",
"metadata": {},
"source": [
"## Refresh graph schema information\n",
"\n",
"If the schema of database changes, you can refresh the schema information needed to generate nGQL statements."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e3de44f",
"metadata": {},
"outputs": [],
"source": [
"# graph.refresh_schema()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1fe76ccd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Node properties: [{'tag': 'movie', 'properties': [('name', 'string')]}, {'tag': 'person', 'properties': [('name', 'string'), ('birthdate', 'string')]}]\n",
"Edge properties: [{'edge': 'acted_in', 'properties': []}]\n",
"Relationships: ['(:person)-[:acted_in]->(:movie)']\n",
"\n"
]
}
],
"source": [
"print(graph.get_schema)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "68a3c677",
"metadata": {},
"source": [
"## Querying the graph\n",
"\n",
"We can now use the graph cypher QA chain to ask question of the graph"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7476ce98",
"metadata": {},
"outputs": [],
"source": [
"chain = NebulaGraphQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "ef8ee27b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new NebulaGraphQAChain chain...\u001b[0m\n",
"Generated nGQL:\n",
"\u001b[32;1m\u001b[1;3mMATCH (p:`person`)-[:acted_in]->(m:`movie`) WHERE m.`movie`.`name` == 'The Godfather II'\n",
"RETURN p.`person`.`name`\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m{'p.person.name': ['Al Pacino']}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Al Pacino played in The Godfather II.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who played in The Godfather II?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,304 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a6850189",
"metadata": {},
"source": [
"# Graph QA\n",
"\n",
"This notebook goes over how to do question answering over a graph data structure."
]
},
{
"cell_type": "markdown",
"id": "9e516e3e",
"metadata": {},
"source": [
"## Create the graph\n",
"\n",
"In this section, we construct an example graph. At the moment, this works best for small pieces of text."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "3849873d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.indexes import GraphIndexCreator\n",
"from langchain.llms import OpenAI\n",
"from langchain.document_loaders import TextLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "05d65c87",
"metadata": {},
"outputs": [],
"source": [
"index_creator = GraphIndexCreator(llm=OpenAI(temperature=0))"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0a45a5b9",
"metadata": {},
"outputs": [],
"source": [
"with open(\"../../state_of_the_union.txt\") as f:\n",
" all_text = f.read()"
]
},
{
"cell_type": "markdown",
"id": "3fca3e1b",
"metadata": {},
"source": [
"We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "80522bd6",
"metadata": {},
"outputs": [],
"source": [
"text = \"\\n\".join(all_text.split(\"\\n\\n\")[105:108])"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "da5aad5a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'It wont look like much, but if you stop and look closely, youll see a “Field of dreams,” the ground on which Americas future will be built. \\nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \\nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. '"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"text"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8dad7b59",
"metadata": {},
"outputs": [],
"source": [
"graph = index_creator.from_text(text)"
]
},
{
"cell_type": "markdown",
"id": "2118f363",
"metadata": {},
"source": [
"We can inspect the created graph."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "32878c13",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('Intel', '$20 billion semiconductor \"mega site\"', 'is going to build'),\n",
" ('Intel', 'state-of-the-art factories', 'is building'),\n",
" ('Intel', '10,000 new good-paying jobs', 'is creating'),\n",
" ('Intel', 'Silicon Valley', 'is helping build'),\n",
" ('Field of dreams',\n",
" \"America's future will be built\",\n",
" 'is the ground on which')]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"graph.get_triples()"
]
},
{
"cell_type": "markdown",
"id": "e9737be1",
"metadata": {},
"source": [
"## Querying the graph\n",
"We can now use the graph QA chain to ask question of the graph"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "76edc854",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import GraphQAChain"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8e7719b4",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f6511169",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphQAChain chain...\u001b[0m\n",
"Entities Extracted:\n",
"\u001b[32;1m\u001b[1;3m Intel\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3mIntel is going to build $20 billion semiconductor \"mega site\"\n",
"Intel is building state-of-the-art factories\n",
"Intel is creating 10,000 new good-paying jobs\n",
"Intel is helping build Silicon Valley\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Intel is going to build a $20 billion semiconductor \"mega site\" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"what is Intel going to build?\")"
]
},
{
"cell_type": "markdown",
"id": "410aafa0",
"metadata": {},
"source": [
"## Save the graph\n",
"We can also save and load the graph."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bc72cca0",
"metadata": {},
"outputs": [],
"source": [
"graph.write_to_gml(\"graph.gml\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "652760ad",
"metadata": {},
"outputs": [],
"source": [
"from langchain.indexes.graph import NetworkxEntityGraph"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "eae591fe",
"metadata": {},
"outputs": [],
"source": [
"loaded_graph = NetworkxEntityGraph.from_gml(\"graph.gml\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "9439d419",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('Intel', '$20 billion semiconductor \"mega site\"', 'is going to build'),\n",
" ('Intel', 'state-of-the-art factories', 'is building'),\n",
" ('Intel', '10,000 new good-paying jobs', 'is creating'),\n",
" ('Intel', 'Silicon Valley', 'is helping build'),\n",
" ('Field of dreams',\n",
" \"America's future will be built\",\n",
" 'is the ground on which')]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loaded_graph.get_triples()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "045796cf",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,303 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c94240f5",
"metadata": {},
"source": [
"# GraphSparqlQAChain\n",
"\n",
"Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. [Semantic Web](https://www.w3.org/standards/semanticweb/). [SPARQL](https://www.w3.org/TR/sparql11-query/) serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\\\n",
"Disclaimer: To date, SPARQL query generation via LLMs is still a bit unstable. Be especially careful with UPDATE queries, which alter the graph."
]
},
{
"cell_type": "markdown",
"id": "dbc0ee68",
"metadata": {},
"source": [
"There are several sources you can run queries against, including files on the web, files you have available locally, SPARQL endpoints, e.g., [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page), and [triple stores](https://www.w3.org/wiki/LargeTripleStores)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "62812aad",
"metadata": {
"pycharm": {
"is_executing": true
}
},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import GraphSparqlQAChain\n",
"from langchain.graphs import RdfGraph"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "0928915d",
"metadata": {
"pycharm": {
"is_executing": true
}
},
"outputs": [],
"source": [
"graph = RdfGraph(\n",
" source_file=\"http://www.w3.org/People/Berners-Lee/card\",\n",
" standard=\"rdf\",\n",
" local_copy=\"test.ttl\",\n",
")"
]
},
{
"cell_type": "markdown",
"source": [
"Note that providing a `local_file` is necessary for storing changes locally if the source is read-only."
],
"metadata": {
"collapsed": false
},
"id": "7af596b5"
},
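{
"cell_type": "markdown",
"id": "a9f31c2e",
"metadata": {},
"source": [
"As mentioned above, you can also run queries against a public SPARQL endpoint such as Wikidata. The snippet below is only a sketch: it assumes the `RdfGraph` constructor accepts a `query_endpoint` parameter for read-only endpoints, so check the constructor signature of your installed version before relying on it."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b0e42d3f",
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: point RdfGraph at a remote SPARQL endpoint instead of a file.\n",
"# The `query_endpoint` parameter is an assumption; adjust to your version.\n",
"endpoint_graph = RdfGraph(\n",
"    query_endpoint=\"https://query.wikidata.org/sparql\",\n",
"    standard=\"rdf\",\n",
")"
]
},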
{
"cell_type": "markdown",
"id": "58c1a8ea",
"metadata": {},
"source": [
"## Refresh graph schema information\n",
"If the schema of the database changes, you can refresh the schema information needed to generate SPARQL queries."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "4e3de44f",
"metadata": {
"pycharm": {
"is_executing": true
}
},
"outputs": [],
"source": [
"graph.load_schema()"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1fe76ccd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"In the following, each IRI is followed by the local name and optionally its description in parentheses. \n",
"The RDF graph supports the following node types:\n",
"<http://xmlns.com/foaf/0.1/PersonalProfileDocument> (PersonalProfileDocument, None), <http://www.w3.org/ns/auth/cert#RSAPublicKey> (RSAPublicKey, None), <http://www.w3.org/2000/10/swap/pim/contact#Male> (Male, None), <http://xmlns.com/foaf/0.1/Person> (Person, None), <http://www.w3.org/2006/vcard/ns#Work> (Work, None)\n",
"The RDF graph supports the following relationships:\n",
"<http://www.w3.org/2000/01/rdf-schema#seeAlso> (seeAlso, None), <http://purl.org/dc/elements/1.1/title> (title, None), <http://xmlns.com/foaf/0.1/mbox_sha1sum> (mbox_sha1sum, None), <http://xmlns.com/foaf/0.1/maker> (maker, None), <http://www.w3.org/ns/solid/terms#oidcIssuer> (oidcIssuer, None), <http://www.w3.org/2000/10/swap/pim/contact#publicHomePage> (publicHomePage, None), <http://xmlns.com/foaf/0.1/openid> (openid, None), <http://www.w3.org/ns/pim/space#storage> (storage, None), <http://xmlns.com/foaf/0.1/name> (name, None), <http://www.w3.org/2000/10/swap/pim/contact#country> (country, None), <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> (type, None), <http://www.w3.org/ns/solid/terms#profileHighlightColor> (profileHighlightColor, None), <http://www.w3.org/ns/pim/space#preferencesFile> (preferencesFile, None), <http://www.w3.org/2000/01/rdf-schema#label> (label, None), <http://www.w3.org/ns/auth/cert#modulus> (modulus, None), <http://www.w3.org/2000/10/swap/pim/contact#participant> (participant, None), <http://www.w3.org/2000/10/swap/pim/contact#street2> (street2, None), <http://www.w3.org/2006/vcard/ns#locality> (locality, None), <http://xmlns.com/foaf/0.1/nick> (nick, None), <http://xmlns.com/foaf/0.1/homepage> (homepage, None), <http://creativecommons.org/ns#license> (license, None), <http://xmlns.com/foaf/0.1/givenname> (givenname, None), <http://www.w3.org/2006/vcard/ns#street-address> (street-address, None), <http://www.w3.org/2006/vcard/ns#postal-code> (postal-code, None), <http://www.w3.org/2000/10/swap/pim/contact#street> (street, None), <http://www.w3.org/2003/01/geo/wgs84_pos#lat> (lat, None), <http://xmlns.com/foaf/0.1/primaryTopic> (primaryTopic, None), <http://www.w3.org/2006/vcard/ns#fn> (fn, None), <http://www.w3.org/2003/01/geo/wgs84_pos#location> (location, None), <http://usefulinc.com/ns/doap#developer> (developer, None), <http://www.w3.org/2000/10/swap/pim/contact#city> (city, None), <http://www.w3.org/2006/vcard/ns#region> (region, None), <http://xmlns.com/foaf/0.1/member> (member, None), <http://www.w3.org/2003/01/geo/wgs84_pos#long> (long, None), <http://www.w3.org/2000/10/swap/pim/contact#address> (address, None), <http://xmlns.com/foaf/0.1/family_name> (family_name, None), <http://xmlns.com/foaf/0.1/account> (account, None), <http://xmlns.com/foaf/0.1/workplaceHomepage> (workplaceHomepage, None), <http://purl.org/dc/terms/title> (title, None), <http://www.w3.org/ns/solid/terms#publicTypeIndex> (publicTypeIndex, None), <http://www.w3.org/2000/10/swap/pim/contact#office> (office, None), <http://www.w3.org/2000/10/swap/pim/contact#homePage> (homePage, None), <http://xmlns.com/foaf/0.1/mbox> (mbox, None), <http://www.w3.org/2000/10/swap/pim/contact#preferredURI> (preferredURI, None), <http://www.w3.org/ns/solid/terms#profileBackgroundColor> (profileBackgroundColor, None), <http://schema.org/owns> (owns, None), <http://xmlns.com/foaf/0.1/based_near> (based_near, None), <http://www.w3.org/2006/vcard/ns#hasAddress> (hasAddress, None), <http://xmlns.com/foaf/0.1/img> (img, None), <http://www.w3.org/2000/10/swap/pim/contact#assistant> (assistant, None), <http://xmlns.com/foaf/0.1/title> (title, None), <http://www.w3.org/ns/auth/cert#key> (key, None), <http://www.w3.org/ns/ldp#inbox> (inbox, None), <http://www.w3.org/ns/solid/terms#editableProfile> (editableProfile, None), <http://www.w3.org/2000/10/swap/pim/contact#postalCode> (postalCode, None), <http://xmlns.com/foaf/0.1/weblog> (weblog, None), <http://www.w3.org/ns/auth/cert#exponent> (exponent, None), 
<http://rdfs.org/sioc/ns#avatar> (avatar, None)\n",
"\n"
]
}
],
"source": [
"graph.get_schema"
]
},
{
"cell_type": "markdown",
"id": "68a3c677",
"metadata": {},
"source": [
"## Querying the graph\n",
"\n",
"Now, you can use the graph SPARQL QA chain to ask questions about the graph."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "7476ce98",
"metadata": {
"pycharm": {
"is_executing": true
}
},
"outputs": [],
"source": [
"chain = GraphSparqlQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "ef8ee27b",
"metadata": {
"pycharm": {
"is_executing": true
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphSparqlQAChain chain...\u001b[0m\n",
"Identified intent:\n",
"\u001b[32;1m\u001b[1;3mSELECT\u001b[0m\n",
"Generated SPARQL:\n",
"\u001b[32;1m\u001b[1;3mPREFIX foaf: <http://xmlns.com/foaf/0.1/>\n",
"SELECT ?homepage\n",
"WHERE {\n",
" ?person foaf:name \"Tim Berners-Lee\" .\n",
" ?person foaf:workplaceHomepage ?homepage .\n",
"}\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Tim Berners-Lee's work homepage is http://www.w3.org/People/Berners-Lee/.\""
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"What is Tim Berners-Lee's work homepage?\")"
]
},
{
"cell_type": "markdown",
"id": "af4b3294",
"metadata": {},
"source": [
"## Updating the graph\n",
"\n",
"Analogously, you can update the graph, i.e., insert triples, using natural language."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "fdf38841",
"metadata": {
"pycharm": {
"is_executing": true
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphSparqlQAChain chain...\u001b[0m\n",
"Identified intent:\n",
"\u001b[32;1m\u001b[1;3mUPDATE\u001b[0m\n",
"Generated SPARQL:\n",
"\u001b[32;1m\u001b[1;3mPREFIX foaf: <http://xmlns.com/foaf/0.1/>\n",
"INSERT {\n",
" ?person foaf:workplaceHomepage <http://www.w3.org/foo/bar/> .\n",
"}\n",
"WHERE {\n",
" ?person foaf:name \"Timothy Berners-Lee\" .\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Successfully inserted triples into the graph.'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\n",
" \"Save that the person with the name 'Timothy Berners-Lee' has a work homepage at 'http://www.w3.org/foo/bar/'\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "5e0f7fc1",
"metadata": {},
"source": [
"Let's verify the results:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "f874171b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[(rdflib.term.URIRef('https://www.w3.org/'),),\n",
" (rdflib.term.URIRef('http://www.w3.org/foo/bar/'),)]"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = (\n",
" \"\"\"PREFIX foaf: <http://xmlns.com/foaf/0.1/>\\n\"\"\"\n",
" \"\"\"SELECT ?hp\\n\"\"\"\n",
" \"\"\"WHERE {\\n\"\"\"\n",
" \"\"\" ?person foaf:name \"Timothy Berners-Lee\" . \\n\"\"\"\n",
" \"\"\" ?person foaf:workplaceHomepage ?hp .\\n\"\"\"\n",
" \"\"\"}\"\"\"\n",
")\n",
"graph.query(query)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "lc",
"language": "python",
"name": "lc"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,268 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ccb74c9b",
"metadata": {},
"source": [
"# Hypothetical Document Embeddings\n",
"This notebook goes over how to use Hypothetical Document Embeddings (HyDE), as described in [this paper](https://arxiv.org/abs/2212.10496). \n",
"\n",
"At a high level, HyDE is an embedding technique that takes queries, generates a hypothetical answer, and then embeds that generated document and uses that as the final example. \n",
"\n",
"In order to use HyDE, we therefore need to provide a base embedding model, as well as an LLMChain that can be used to generate those documents. By default, the HyDE class comes with some default prompts to use (see the paper for more details on them), but we can also create our own."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "546e87ee",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.chains import LLMChain, HypotheticalDocumentEmbedder\n",
"from langchain.prompts import PromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c0ea895f",
"metadata": {},
"outputs": [],
"source": [
"base_embeddings = OpenAIEmbeddings()\n",
"llm = OpenAI()"
]
},
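{
"cell_type": "markdown",
"id": "3f5e9a21",
"metadata": {},
"source": [
"To make the mechanics concrete, here is a minimal hand-rolled sketch of the HyDE idea (illustrative only, not the library's actual implementation): ask the LLM for a hypothetical answer document, then embed that document in place of the raw query."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4a6f0b32",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch of HyDE's two steps, done by hand:\n",
"# 1. generate a hypothetical answer document with the LLM,\n",
"# 2. embed that document instead of the raw query.\n",
"hypothetical_doc = llm(\"Write a short passage answering: Where is the Taj Mahal?\")\n",
"hyde_embedding = base_embeddings.embed_query(hypothetical_doc)"
]
},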
{
"cell_type": "code",
"execution_count": 3,
"id": "50729989",
"metadata": {},
"outputs": [],
"source": [
"# Load with `web_search` prompt\n",
"embeddings = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, \"web_search\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "3aa573d6",
"metadata": {},
"outputs": [],
"source": [
"# Now we can use it as any embedding class!\n",
"result = embeddings.embed_query(\"Where is the Taj Mahal?\")"
]
},
{
"cell_type": "markdown",
"id": "c7a0b556",
"metadata": {},
"source": [
"## Multiple generations\n",
"We can also generate multiple documents and then combine the embeddings for those. By default, we combine those by taking the average. We can do this by changing the LLM we use to generate documents to return multiple things."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "05da7060",
"metadata": {},
"outputs": [],
"source": [
"multi_llm = OpenAI(n=4, best_of=4)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9b1e12bd",
"metadata": {},
"outputs": [],
"source": [
"embeddings = HypotheticalDocumentEmbedder.from_llm(\n",
" multi_llm, base_embeddings, \"web_search\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "a60cd343",
"metadata": {},
"outputs": [],
"source": [
"result = embeddings.embed_query(\"Where is the Taj Mahal?\")"
]
},
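{
"cell_type": "markdown",
"id": "5b7c1d43",
"metadata": {},
"source": [
"For intuition, the default combination step is just a component-wise mean over the embeddings of the generated documents. Here is a minimal sketch of that averaging (illustrative only; the document strings are placeholders):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6c8d2e54",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Embed a few (placeholder) hypothetical documents, then average them\n",
"# component-wise to get a single combined embedding.\n",
"doc_embeddings = base_embeddings.embed_documents(\n",
"    [\"hypothetical answer 1\", \"hypothetical answer 2\"]\n",
")\n",
"combined = np.mean(doc_embeddings, axis=0).tolist()"
]
},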
{
"cell_type": "markdown",
"id": "1da90437",
"metadata": {},
"source": [
"## Using our own prompts\n",
"Besides using preconfigured prompts, we can also easily construct our own prompts and use those in the LLMChain that is generating the documents. This can be useful if we know the domain our queries will be in, as we can condition the prompt to generate text more similar to that.\n",
"\n",
"In the example below, let's condition it to generate text about a state of the union address (because we will use that in the next example)."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "0b4a650f",
"metadata": {},
"outputs": [],
"source": [
"prompt_template = \"\"\"Please answer the user's question about the most recent state of the union address\n",
"Question: {question}\n",
"Answer:\"\"\"\n",
"prompt = PromptTemplate(input_variables=[\"question\"], template=prompt_template)\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "7f7e2b86",
"metadata": {},
"outputs": [],
"source": [
"embeddings = HypotheticalDocumentEmbedder(\n",
" llm_chain=llm_chain, base_embeddings=base_embeddings\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "6dd83424",
"metadata": {},
"outputs": [],
"source": [
"result = embeddings.embed_query(\n",
" \"What did the president say about Ketanji Brown Jackson\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "31388123",
"metadata": {},
"source": [
"## Using HyDE\n",
"Now that we have HyDE, we can use it as we would any other embedding class! Here is using it to find similar passages in the state of the union example."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "97719b29",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "bfcfc039",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"docsearch = Chroma.from_texts(texts, embeddings)\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = docsearch.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "632af7f2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n",
"\n",
"We cannot let this happen. \n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n"
]
}
],
"source": [
"print(docs[0].page_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b9e57b93",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,270 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Bash chain\n",
"This notebook showcases using LLMs and a bash process to perform simple filesystem commands."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMBashChain chain...\u001b[0m\n",
"Please write a bash script that prints 'Hello World' to the console.\u001b[32;1m\u001b[1;3m\n",
"\n",
"```bash\n",
"echo \"Hello World\"\n",
"```\u001b[0m\n",
"Code: \u001b[33;1m\u001b[1;3m['echo \"Hello World\"']\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3mHello World\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Hello World\\n'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMBashChain\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",
"text = \"Please write a bash script that prints 'Hello World' to the console.\"\n",
"\n",
"bash_chain = LLMBashChain.from_llm(llm, verbose=True)\n",
"\n",
"bash_chain.run(text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Customize Prompt\n",
"You can also customize the prompt that is used. Here is an example prompting to avoid using the 'echo' utility"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"from langchain.chains.llm_bash.prompt import BashOutputParser\n",
"\n",
"_PROMPT_TEMPLATE = \"\"\"If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put \"#!/bin/bash\" in your answer. Make sure to reason step by step, using this format:\n",
"Question: \"copy the files in the directory named 'target' into a new directory at the same level as target called 'myNewDirectory'\"\n",
"I need to take the following actions:\n",
"- List all files in the directory\n",
"- Create a new directory\n",
"- Copy the files from the first directory into the second directory\n",
"```bash\n",
"ls\n",
"mkdir myNewDirectory\n",
"cp -r target/* myNewDirectory\n",
"```\n",
"\n",
"Do not use 'echo' when writing the script.\n",
"\n",
"That is the format. Begin!\n",
"Question: {question}\"\"\"\n",
"\n",
"PROMPT = PromptTemplate(\n",
" input_variables=[\"question\"],\n",
" template=_PROMPT_TEMPLATE,\n",
" output_parser=BashOutputParser(),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMBashChain chain...\u001b[0m\n",
"Please write a bash script that prints 'Hello World' to the console.\u001b[32;1m\u001b[1;3m\n",
"\n",
"```bash\n",
"printf \"Hello World\\n\"\n",
"```\u001b[0m\n",
"Code: \u001b[33;1m\u001b[1;3m['printf \"Hello World\\\\n\"']\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3mHello World\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Hello World\\n'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"bash_chain = LLMBashChain.from_llm(llm, prompt=PROMPT, verbose=True)\n",
"\n",
"text = \"Please write a bash script that prints 'Hello World' to the console.\"\n",
"\n",
"bash_chain.run(text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Persistent Terminal\n",
"\n",
"By default, the chain will run in a separate subprocess each time it is called. This behavior can be changed by instantiating with a persistent bash process."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMBashChain chain...\u001b[0m\n",
"List the current directory then move up a level.\u001b[32;1m\u001b[1;3m\n",
"\n",
"```bash\n",
"ls\n",
"cd ..\n",
"```\u001b[0m\n",
"Code: \u001b[33;1m\u001b[1;3m['ls', 'cd ..']\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3mapi.html\t\t\tllm_summarization_checker.html\n",
"constitutional_chain.html\tmoderation.html\n",
"llm_bash.html\t\t\topenai_openapi.yaml\n",
"llm_checker.html\t\topenapi.html\n",
"llm_math.html\t\t\tpal.html\n",
"llm_requests.html\t\tsqlite.html\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'api.html\\t\\t\\tllm_summarization_checker.html\\r\\nconstitutional_chain.html\\tmoderation.html\\r\\nllm_bash.html\\t\\t\\topenai_openapi.yaml\\r\\nllm_checker.html\\t\\topenapi.html\\r\\nllm_math.html\\t\\t\\tpal.html\\r\\nllm_requests.html\\t\\tsqlite.html'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.utilities.bash import BashProcess\n",
"\n",
"\n",
"persistent_process = BashProcess(persistent=True)\n",
"bash_chain = LLMBashChain.from_llm(llm, bash_process=persistent_process, verbose=True)\n",
"\n",
"text = \"List the current directory then move up a level.\"\n",
"\n",
"bash_chain.run(text)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMBashChain chain...\u001b[0m\n",
"List the current directory then move up a level.\u001b[32;1m\u001b[1;3m\n",
"\n",
"```bash\n",
"ls\n",
"cd ..\n",
"```\u001b[0m\n",
"Code: \u001b[33;1m\u001b[1;3m['ls', 'cd ..']\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3mexamples\t\tgetting_started.html\tindex_examples\n",
"generic\t\t\thow_to_guides.rst\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'examples\\t\\tgetting_started.html\\tindex_examples\\r\\ngeneric\\t\\t\\thow_to_guides.rst'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Run the same command again and see that the state is maintained between calls\n",
"bash_chain.run(text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -1,85 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Self-checking chain\n",
"This notebook showcases how to use LLMCheckerChain."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMCheckerChain chain...\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new SequentialChain chain...\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' No mammal lays the biggest eggs. The Elephant Bird, which was a species of giant bird, laid the largest eggs of any bird.'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMCheckerChain\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0.7)\n",
"\n",
"text = \"What type of mammal lays the biggest eggs?\"\n",
"\n",
"checker_chain = LLMCheckerChain.from_llm(llm, verbose=True)\n",
"\n",
"checker_chain.run(text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -1,86 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e71e720f",
"metadata": {},
"source": [
"# Math chain\n",
"\n",
"This notebook showcases using LLMs and Python REPLs to do complex word math problems."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "44e9ba31",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"What is 13 raised to the .3432 power?\u001b[32;1m\u001b[1;3m\n",
"```text\n",
"13 ** .3432\n",
"```\n",
"...numexpr.evaluate(\"13 ** .3432\")...\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m2.4116004626599237\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Answer: 2.4116004626599237'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import OpenAI, LLMMathChain\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_math = LLMMathChain.from_llm(llm, verbose=True)\n",
"\n",
"llm_math.run(\"What is 13 raised to the .3432 power?\")"
]
},
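{
"cell_type": "markdown",
"id": "7d9e3f65",
"metadata": {},
"source": [
"As the verbose output shows, the chain extracts a `numexpr` expression from the LLM's response and evaluates it. You can reproduce that evaluation step directly; this is a sketch of the final step only, not the whole chain:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e0f4a76",
"metadata": {},
"outputs": [],
"source": [
"import numexpr\n",
"\n",
"# Evaluate the same expression the chain extracted above.\n",
"print(numexpr.evaluate(\"13 ** .3432\"))  # ~2.4116"
]
},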
{
"cell_type": "code",
"execution_count": null,
"id": "e978bb8e",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,123 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "dd7ec7af",
"metadata": {},
"source": [
"# HTTP request chain\n",
"\n",
"Using the request library to get HTML results from a URL and then an LLM to parse results"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "dd8eae75",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.chains import LLMRequestsChain, LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "65bf324e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"\n",
"template = \"\"\"Between >>> and <<< are the raw search result text from google.\n",
"Extract the answer to the question '{query}' or say \"not found\" if the information is not contained.\n",
"Use the format\n",
"Extracted:<answer or \"not found\">\n",
">>> {requests_result} <<<\n",
"Extracted:\"\"\"\n",
"\n",
"PROMPT = PromptTemplate(\n",
" input_variables=[\"query\", \"requests_result\"],\n",
" template=template,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f36ae0d8",
"metadata": {},
"outputs": [],
"source": [
"chain = LLMRequestsChain(llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=PROMPT))"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "b5d22d9d",
"metadata": {},
"outputs": [],
"source": [
"question = \"What are the Three (3) biggest countries, and their respective sizes?\"\n",
"inputs = {\n",
" \"query\": question,\n",
" \"url\": \"https://www.google.com/search?q=\" + question.replace(\" \", \"+\"),\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "2ea81168",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'What are the Three (3) biggest countries, and their respective sizes?',\n",
" 'url': 'https://www.google.com/search?q=What+are+the+Three+(3)+biggest+countries,+and+their+respective+sizes?',\n",
" 'output': ' Russia (17,098,242 km²), Canada (9,984,670 km²), United States (9,826,675 km²)'}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain(inputs)"
]
},
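{
"cell_type": "markdown",
"id": "9f1a5b87",
"metadata": {},
"source": [
"For intuition, here is roughly what the chain does internally (a simplified sketch, not the actual implementation, which also cleans and truncates the fetched page text): fetch the page with `requests`, then hand the result to the inner LLM chain."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0a2b6c98",
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"\n",
"# Simplified sketch of the chain's two steps (illustrative only).\n",
"html = requests.get(inputs[\"url\"]).text\n",
"answer = chain.llm_chain.predict(query=question, requests_result=html)"
]
},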
{
"cell_type": "code",
"execution_count": null,
"id": "db8f2b6d",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,162 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM Symbolic Math \n",
"This notebook showcases using LLMs and Python to Solve Algebraic Equations. Under the hood is makes use of [SymPy](https://www.sympy.org/en/index.html)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.chains.llm_symbolic_math.base import LLMSymbolicMathChain\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_symbolic_math = LLMSymbolicMathChain.from_llm(llm)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Integrals and derivates"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Answer: exp(x)*sin(x) + exp(x)*cos(x)'"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_symbolic_math.run(\"What is the derivative of sin(x)*exp(x) with respect to x?\")"
]
},
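{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the chain delegates the actual math to SymPy, you can cross-check its answer with SymPy directly (a sketch run outside the chain):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sympy as sp\n",
"\n",
"# Cross-check the chain's answer with SymPy directly.\n",
"x = sp.symbols(\"x\")\n",
"print(sp.diff(sp.sin(x) * sp.exp(x), x))  # exp(x)*sin(x) + exp(x)*cos(x)"
]
},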
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Answer: exp(x)*sin(x)'"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_symbolic_math.run(\n",
" \"What is the integral of exp(x)*sin(x) + exp(x)*cos(x) with respect to x?\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Solve linear and differential equations"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Answer: Eq(y(t), C2*exp(-t) + (C1 + t/2)*exp(t))'"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_symbolic_math.run('Solve the differential equation y\" - y = e^t')"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Answer: {0, -sqrt(3)*I/3, sqrt(3)*I/3}'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_symbolic_math.run(\"What are the solutions to this equation y^3 + 1/3y?\")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Answer: (3 - sqrt(7), -sqrt(7) - 2, 1 - sqrt(7)), (sqrt(7) + 3, -2 + sqrt(7), 1 + sqrt(7))'"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_symbolic_math.run(\"x = y + 5, y = z - 3, z = x * y. Solve for x, y, z\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,52 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Neptune Open Cypher QA Chain\n",
"This QA chain queries Neptune graph database using openCypher and returns human readable response\n"
]
},
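{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, a question like the one below is translated by the LLM into openCypher along these lines (an illustrative, hypothetical query; the actual Cypher depends on your graph's schema):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative only: the kind of openCypher the chain generates under the hood.\n",
"# Node labels and properties here assume the Neptune air-routes sample graph.\n",
"example_cypher = \"\"\"\n",
"MATCH (a:airport {code: 'AUS'})-[r:route]->()\n",
"RETURN count(r) AS outgoing_routes\n",
"\"\"\""
]
},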
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.graphs.neptune_graph import NeptuneGraph\n",
"\n",
"\n",
"host = \"<neptune-host>\"\n",
"port = 80\n",
"use_https = False\n",
"\n",
"graph = NeptuneGraph(host=host, port=port, use_https=use_https)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains.graph_qa.neptune_cypher import NeptuneOpenCypherQAChain\n",
"\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-4\")\n",
"\n",
"chain = NeptuneOpenCypherQAChain.from_llm(llm=llm, graph=graph)\n",
"\n",
"chain.run(\"how many outgoing routes does the Austin airport have?\")"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,453 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "71a43144",
"metadata": {},
"source": [
"# Retrieval QA using OpenAI functions\n",
"\n",
"OpenAI functions allows for structuring of response output. This is often useful in question answering when you want to not only get the final answer but also supporting evidence, citations, etc.\n",
"\n",
"In this notebook we show how to use an LLM chain which uses OpenAI functions as part of an overall retrieval pipeline."
]
},
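{
"cell_type": "markdown",
"id": "schema-shape-note",
"metadata": {},
"source": [
"The default chain instructs the model to return an answer together with its sources. The function schema it passes to OpenAI is shaped roughly like this (an illustrative sketch, not the exact schema `create_qa_with_sources_chain` sends):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "schema-shape-code",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative: rough shape of the answer-with-sources function schema.\n",
"# The exact schema is defined inside create_qa_with_sources_chain.\n",
"answer_with_sources_schema = {\n",
"    \"name\": \"answer_with_sources\",\n",
"    \"parameters\": {\n",
"        \"type\": \"object\",\n",
"        \"properties\": {\n",
"            \"answer\": {\"type\": \"string\"},\n",
"            \"sources\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}},\n",
"        },\n",
"        \"required\": [\"answer\", \"sources\"],\n",
"    },\n",
"}"
]
},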
{
"cell_type": "code",
"execution_count": 25,
"id": "f059012e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain.document_loaders import TextLoader\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Chroma"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "f10b831c",
"metadata": {},
"outputs": [],
"source": [
"loader = TextLoader(\"../../state_of_the_union.txt\", encoding=\"utf-8\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",
"for i, text in enumerate(texts):\n",
" text.metadata[\"source\"] = f\"{i}-pl\"\n",
"embeddings = OpenAIEmbeddings()\n",
"docsearch = Chroma.from_documents(texts, embeddings)"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "70f3a38c",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains.combine_documents.stuff import StuffDocumentsChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains import create_qa_with_sources_chain"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "7b3e1731",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-0613\")"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "70a9ccff",
"metadata": {},
"outputs": [],
"source": [
"qa_chain = create_qa_with_sources_chain(llm)"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "efcdb6fb",
"metadata": {},
"outputs": [],
"source": [
"doc_prompt = PromptTemplate(\n",
" template=\"Content: {page_content}\\nSource: {source}\",\n",
" input_variables=[\"page_content\", \"source\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "64a08263",
"metadata": {},
"outputs": [],
"source": [
"final_qa_chain = StuffDocumentsChain(\n",
" llm_chain=qa_chain,\n",
" document_variable_name=\"context\",\n",
" document_prompt=doc_prompt,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "cb876c97",
"metadata": {},
"outputs": [],
"source": [
"retrieval_qa = RetrievalQA(\n",
" retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "a75bad9b",
"metadata": {},
"outputs": [],
"source": [
"query = \"What did the president say about russia\""
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "9a60f109",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'{\\n \"answer\": \"The President expressed strong condemnation of Russia\\'s actions in Ukraine and announced measures to isolate Russia and provide support to Ukraine. He stated that Russia\\'s invasion of Ukraine will have long-term consequences for Russia and emphasized the commitment to defend NATO countries. The President also mentioned taking robust action through sanctions and releasing oil reserves to mitigate gas prices. Overall, the President conveyed a message of solidarity with Ukraine and determination to protect American interests.\",\\n \"sources\": [\"0-pl\", \"4-pl\", \"5-pl\", \"6-pl\"]\\n}'"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieval_qa.run(query)"
]
},
{
"cell_type": "markdown",
"id": "a60f93a4",
"metadata": {},
"source": [
"## Using Pydantic\n",
"\n",
"If we want to, we can set the chain to return in Pydantic. Note that if downstream chains consume the output of this chain - including memory - they will generally expect it to be in string format, so you should only use this chain when it is the final chain."
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "3559727f",
"metadata": {},
"outputs": [],
"source": [
"qa_chain_pydantic = create_qa_with_sources_chain(llm, output_parser=\"pydantic\")"
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "5a7997d1",
"metadata": {},
"outputs": [],
"source": [
"final_qa_chain_pydantic = StuffDocumentsChain(\n",
" llm_chain=qa_chain_pydantic,\n",
" document_variable_name=\"context\",\n",
" document_prompt=doc_prompt,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "79368e40",
"metadata": {},
"outputs": [],
"source": [
"retrieval_qa_pydantic = RetrievalQA(\n",
" retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain_pydantic\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "6b8641de",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AnswerWithSources(answer=\"The President expressed strong condemnation of Russia's actions in Ukraine and announced measures to isolate Russia and provide support to Ukraine. He stated that Russia's invasion of Ukraine will have long-term consequences for Russia and emphasized the commitment to defend NATO countries. The President also mentioned taking robust action through sanctions and releasing oil reserves to mitigate gas prices. Overall, the President conveyed a message of solidarity with Ukraine and determination to protect American interests.\", sources=['0-pl', '4-pl', '5-pl', '6-pl'])"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieval_qa_pydantic.run(query)"
]
},
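{
"cell_type": "markdown",
"id": "pydantic-to-string-note",
"metadata": {},
"source": [
"If a downstream chain does need this output, you can serialize the Pydantic object back to a string first (a minimal sketch; uses the pydantic v1 `.json()` API that LangChain relied on at the time):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "pydantic-to-string-code",
"metadata": {},
"outputs": [],
"source": [
"# Serialize the Pydantic result to a JSON string for downstream chains or memory.\n",
"result = retrieval_qa_pydantic.run(query)\n",
"result_str = result.json()  # pydantic v1 serialization\n",
"result_str"
]
},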
{
"cell_type": "markdown",
"id": "e4c15395",
"metadata": {},
"source": [
"## Using in ConversationalRetrievalChain\n",
"\n",
"We can also show what it's like to use this in the ConversationalRetrievalChain. Note that because this chain involves memory, we will NOT use the Pydantic return type."
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "18e5f090",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.chains import LLMChain\n",
"\n",
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)\n",
"_template = \"\"\"Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\\\n",
"Make sure to avoid using any unclear pronouns.\n",
"\n",
"Chat History:\n",
"{chat_history}\n",
"Follow Up Input: {question}\n",
"Standalone question:\"\"\"\n",
"CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)\n",
"condense_question_chain = LLMChain(\n",
" llm=llm,\n",
" prompt=CONDENSE_QUESTION_PROMPT,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "975c3c2b",
"metadata": {},
"outputs": [],
"source": [
"qa = ConversationalRetrievalChain(\n",
" question_generator=condense_question_chain,\n",
" retriever=docsearch.as_retriever(),\n",
" memory=memory,\n",
" combine_docs_chain=final_qa_chain,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 41,
"id": "784aee3a",
"metadata": {},
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query})"
]
},
{
"cell_type": "code",
"execution_count": 42,
"id": "dfd0ccc1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'What did the president say about Ketanji Brown Jackson',\n",
" 'chat_history': [HumanMessage(content='What did the president say about Ketanji Brown Jackson', additional_kwargs={}, example=False),\n",
" AIMessage(content='{\\n \"answer\": \"The President nominated Ketanji Brown Jackson as a Circuit Court of Appeals Judge and praised her as one of the nation\\'s top legal minds who will continue Justice Breyer\\'s legacy of excellence.\",\\n \"sources\": [\"31-pl\"]\\n}', additional_kwargs={}, example=False)],\n",
" 'answer': '{\\n \"answer\": \"The President nominated Ketanji Brown Jackson as a Circuit Court of Appeals Judge and praised her as one of the nation\\'s top legal minds who will continue Justice Breyer\\'s legacy of excellence.\",\\n \"sources\": [\"31-pl\"]\\n}'}"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "c93f805b",
"metadata": {},
"outputs": [],
"source": [
"query = \"what did he say about her predecessor?\"\n",
"result = qa({\"question\": query})"
]
},
{
"cell_type": "code",
"execution_count": 44,
"id": "5d8612c0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'what did he say about her predecessor?',\n",
" 'chat_history': [HumanMessage(content='What did the president say about Ketanji Brown Jackson', additional_kwargs={}, example=False),\n",
" AIMessage(content='{\\n \"answer\": \"The President nominated Ketanji Brown Jackson as a Circuit Court of Appeals Judge and praised her as one of the nation\\'s top legal minds who will continue Justice Breyer\\'s legacy of excellence.\",\\n \"sources\": [\"31-pl\"]\\n}', additional_kwargs={}, example=False),\n",
" HumanMessage(content='what did he say about her predecessor?', additional_kwargs={}, example=False),\n",
" AIMessage(content='{\\n \"answer\": \"The President honored Justice Stephen Breyer for his service as an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court.\",\\n \"sources\": [\"31-pl\"]\\n}', additional_kwargs={}, example=False)],\n",
" 'answer': '{\\n \"answer\": \"The President honored Justice Stephen Breyer for his service as an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court.\",\\n \"sources\": [\"31-pl\"]\\n}'}"
]
},
"execution_count": 44,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "ac9e4626",
"metadata": {},
"source": [
"## Using your own output schema\n",
"\n",
"We can change the outputs of our chain by passing in our own schema. The values and descriptions of this schema will inform the function we pass to the OpenAI API, meaning it won't just affect how we parse outputs but will also change the OpenAI output itself. For example we can add a `countries_referenced` parameter to our schema and describe what we want this parameter to mean, and that'll cause the OpenAI output to include a description of a speaker in the response.\n",
"\n",
"In addition to the previous example, we can also add a custom prompt to the chain. This will allow you to add additional context to the response, which can be useful for question answering."
]
},
{
"cell_type": "code",
"execution_count": 45,
"id": "f34a48f8",
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from pydantic import BaseModel, Field\n",
"\n",
"from langchain.chains.openai_functions import create_qa_with_structure_chain\n",
"\n",
"from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate\n",
"from langchain.schema import SystemMessage, HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 46,
"id": "5647c161",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"CustomResponseSchema(answer=\"He announced that American airspace will be closed off to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The Ruble has lost 30% of its value and the Russian stock market has lost 40% of its value. He also mentioned that Putin alone is to blame for Russia's reeling economy. The United States and its allies are providing support to Ukraine in their fight for freedom, including military, economic, and humanitarian assistance. The United States is giving more than $1 billion in direct assistance to Ukraine. He made it clear that American forces are not engaged and will not engage in conflict with Russian forces in Ukraine, but they are deployed to defend NATO allies in case Putin decides to keep moving west. He also mentioned that Putin's attack on Ukraine was premeditated and unprovoked, and that the West and NATO responded by building a coalition of freedom-loving nations to confront Putin. The free world is holding Putin accountable through powerful economic sanctions, cutting off Russia's largest banks from the international financial system, and preventing Russia's central bank from defending the Russian Ruble. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs.\", countries_referenced=['AMERICA', 'RUSSIA', 'UKRAINE'], sources=['4-pl', '5-pl', '2-pl', '3-pl'])"
]
},
"execution_count": 46,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"class CustomResponseSchema(BaseModel):\n",
" \"\"\"An answer to the question being asked, with sources.\"\"\"\n",
"\n",
" answer: str = Field(..., description=\"Answer to the question that was asked\")\n",
" countries_referenced: List[str] = Field(\n",
" ..., description=\"All of the countries mentioned in the sources\"\n",
" )\n",
" sources: List[str] = Field(\n",
" ..., description=\"List of sources used to answer the question\"\n",
" )\n",
"\n",
"\n",
"prompt_messages = [\n",
" SystemMessage(\n",
" content=(\n",
" \"You are a world class algorithm to answer \"\n",
" \"questions in a specific format.\"\n",
" )\n",
" ),\n",
" HumanMessage(content=\"Answer question using the following context\"),\n",
" HumanMessagePromptTemplate.from_template(\"{context}\"),\n",
" HumanMessagePromptTemplate.from_template(\"Question: {question}\"),\n",
" HumanMessage(\n",
" content=\"Tips: Make sure to answer in the correct format. Return all of the countries mentioned in the sources in uppercase characters.\"\n",
" ),\n",
"]\n",
"\n",
"chain_prompt = ChatPromptTemplate(messages=prompt_messages)\n",
"\n",
"qa_chain_pydantic = create_qa_with_structure_chain(\n",
" llm, CustomResponseSchema, output_parser=\"pydantic\", prompt=chain_prompt\n",
")\n",
"final_qa_chain_pydantic = StuffDocumentsChain(\n",
" llm_chain=qa_chain_pydantic,\n",
" document_variable_name=\"context\",\n",
" document_prompt=doc_prompt,\n",
")\n",
"retrieval_qa_pydantic = RetrievalQA(\n",
" retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain_pydantic\n",
")\n",
"query = \"What did he say about russia\"\n",
"retrieval_qa_pydantic.run(query)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because it is too large


@@ -1,583 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9fcaa37f",
"metadata": {},
"source": [
"# OpenAPI chain\n",
"\n",
"This notebook shows an example of using an OpenAPI chain to call an endpoint in natural language, and get back a response in natural language."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "efa6909f",
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools import OpenAPISpec, APIOperation\n",
"from langchain.chains import OpenAPIEndpointChain\n",
"from langchain.requests import Requests\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "markdown",
"id": "71e38c6c",
"metadata": {},
"source": [
"## Load the spec\n",
"\n",
"Load a wrapper of the spec (so we can work with it more easily). You can load from a url or from a local file."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0831271b",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n"
]
}
],
"source": [
"spec = OpenAPISpec.from_url(\n",
" \"https://www.klarna.com/us/shopping/public/openai/v0/api-docs/\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "189dd506",
"metadata": {},
"outputs": [],
"source": [
"# Alternative loading from file\n",
"# spec = OpenAPISpec.from_file(\"openai_openapi.yaml\")"
]
},
{
"cell_type": "markdown",
"id": "f7093582",
"metadata": {},
"source": [
"## Select the Operation\n",
"\n",
"In order to provide a focused on modular chain, we create a chain specifically only for one of the endpoints. Here we get an API operation from a specified endpoint and method."
]
},
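{
"cell_type": "markdown",
"id": "list-spec-paths-note",
"metadata": {},
"source": [
"If you're not sure which endpoint to target, you can inspect the parsed spec first (a quick sketch; assumes the parsed spec exposes the standard OpenAPI `paths` mapping):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "list-spec-paths-code",
"metadata": {},
"outputs": [],
"source": [
"# List the paths available in the spec to pick an endpoint from.\n",
"# (Assumes the parsed spec exposes the standard OpenAPI `paths` mapping.)\n",
"print(list(spec.paths.keys()))"
]
},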
{
"cell_type": "code",
"execution_count": 4,
"id": "157494b9",
"metadata": {},
"outputs": [],
"source": [
"operation = APIOperation.from_openapi_spec(spec, \"/public/openai/v0/products\", \"get\")"
]
},
{
"cell_type": "markdown",
"id": "e3ab1c5c",
"metadata": {},
"source": [
"## Construct the chain\n",
"\n",
"We can now construct a chain to interact with it. In order to construct such a chain, we will pass in:\n",
"\n",
"1. The operation endpoint\n",
"2. A requests wrapper (can be used to handle authentication, etc)\n",
"3. The LLM to use to interact with it"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "788a7cef",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI() # Load a Language Model"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c5f27406",
"metadata": {},
"outputs": [],
"source": [
"chain = OpenAPIEndpointChain.from_api_operation(\n",
" operation,\n",
" llm,\n",
" requests=Requests(),\n",
" verbose=True,\n",
" return_intermediate_steps=True, # Return request and response text\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "23652053",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new OpenAPIEndpointChain chain...\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new APIRequesterChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a helpful AI Assistant. Please provide JSON arguments to agentFunc() based on the user's instructions.\n",
"\n",
"API_SCHEMA: ```typescript\n",
"/* API for fetching Klarna product information */\n",
"type productsUsingGET = (_: {\n",
"/* A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started. */\n",
"\t\tq: string,\n",
"/* number of products returned */\n",
"\t\tsize?: number,\n",
"/* (Optional) Minimum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for. */\n",
"\t\tmin_price?: number,\n",
"/* (Optional) Maximum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for. */\n",
"\t\tmax_price?: number,\n",
"}) => any;\n",
"```\n",
"\n",
"USER_INSTRUCTIONS: \"whats the most expensive shirt?\"\n",
"\n",
"Your arguments must be plain json provided in a markdown block:\n",
"\n",
"ARGS: ```json\n",
"{valid json conforming to API_SCHEMA}\n",
"```\n",
"\n",
"Example\n",
"-----\n",
"\n",
"ARGS: ```json\n",
"{\"foo\": \"bar\", \"baz\": {\"qux\": \"quux\"}}\n",
"```\n",
"\n",
"The block must be no more than 1 line long, and all arguments must be valid JSON. All string arguments must be wrapped in double quotes.\n",
"You MUST strictly comply to the types indicated by the provided schema, including all required args.\n",
"\n",
"If you don't have sufficient information to call the function due to things like requiring specific uuid's, you can reply with the following message:\n",
"\n",
"Message: ```text\n",
"Concise response requesting the additional information that would make calling the function successful.\n",
"```\n",
"\n",
"Begin\n",
"-----\n",
"ARGS:\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\"q\": \"shirt\", \"size\": 1, \"max_price\": null}\u001b[0m\n",
"\u001b[36;1m\u001b[1;3m{\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]}]}\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new APIResponderChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a helpful AI assistant trained to answer user queries from API responses.\n",
"You attempted to call an API, which resulted in:\n",
"API_RESPONSE: {\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]}]}\n",
"\n",
"USER_COMMENT: \"whats the most expensive shirt?\"\n",
"\n",
"\n",
"If the API_RESPONSE can answer the USER_COMMENT respond with the following markdown json block:\n",
"Response: ```json\n",
"{\"response\": \"Human-understandable synthesis of the API_RESPONSE\"}\n",
"```\n",
"\n",
"Otherwise respond with the following markdown json block:\n",
"Response Error: ```json\n",
"{\"response\": \"What you did and a concise statement of the resulting error. If it can be easily fixed, provide a suggestion.\"}\n",
"```\n",
"\n",
"You MUST respond as a markdown json code block. The person you are responding to CANNOT see the API_RESPONSE, so if there is any relevant information there you must include it in your response.\n",
"\n",
"Begin:\n",
"---\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[33;1m\u001b[1;3mThe most expensive shirt in the API response is the Burberry Check Poplin Shirt, which costs $360.00.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"output = chain(\"whats the most expensive shirt?\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c000295e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'request_args': '{\"q\": \"shirt\", \"size\": 1, \"max_price\": null}',\n",
" 'response_text': '{\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]}]}'}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# View intermediate steps\n",
"output[\"intermediate_steps\"]"
]
},
{
"cell_type": "markdown",
"id": "092bdb4d",
"metadata": {},
"source": [
"## Return raw response\n",
"\n",
"We can also run this chain without synthesizing the response. This will have the effect of just returning the raw API output."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "4dff3849",
"metadata": {},
"outputs": [],
"source": [
"chain = OpenAPIEndpointChain.from_api_operation(\n",
" operation,\n",
" llm,\n",
" requests=Requests(),\n",
" verbose=True,\n",
" return_intermediate_steps=True, # Return request and response text\n",
" raw_response=True, # Return raw response\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "762499a9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new OpenAPIEndpointChain chain...\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new APIRequesterChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a helpful AI Assistant. Please provide JSON arguments to agentFunc() based on the user's instructions.\n",
"\n",
"API_SCHEMA: ```typescript\n",
"/* API for fetching Klarna product information */\n",
"type productsUsingGET = (_: {\n",
"/* A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started. */\n",
"\t\tq: string,\n",
"/* number of products returned */\n",
"\t\tsize?: number,\n",
"/* (Optional) Minimum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for. */\n",
"\t\tmin_price?: number,\n",
"/* (Optional) Maximum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for. */\n",
"\t\tmax_price?: number,\n",
"}) => any;\n",
"```\n",
"\n",
"USER_INSTRUCTIONS: \"whats the most expensive shirt?\"\n",
"\n",
"Your arguments must be plain json provided in a markdown block:\n",
"\n",
"ARGS: ```json\n",
"{valid json conforming to API_SCHEMA}\n",
"```\n",
"\n",
"Example\n",
"-----\n",
"\n",
"ARGS: ```json\n",
"{\"foo\": \"bar\", \"baz\": {\"qux\": \"quux\"}}\n",
"```\n",
"\n",
"The block must be no more than 1 line long, and all arguments must be valid JSON. All string arguments must be wrapped in double quotes.\n",
"You MUST strictly comply to the types indicated by the provided schema, including all required args.\n",
"\n",
"If you don't have sufficient information to call the function due to things like requiring specific uuid's, you can reply with the following message:\n",
"\n",
"Message: ```text\n",
"Concise response requesting the additional information that would make calling the function successful.\n",
"```\n",
"\n",
"Begin\n",
"-----\n",
"ARGS:\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\"q\": \"shirt\", \"max_price\": null}\u001b[0m\n",
"\u001b[36;1m\u001b[1;3m{\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Cotton Shirt - Beige\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3200280807/Children-s-Clothing/Burberry-Vintage-Check-Cotton-Shirt-Beige/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$229.02\",\"attributes\":[\"Material:Cotton,Elastane\",\"Color:Beige\",\"Model:Boy\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Stretch Cotton Twill Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202342515/Clothing/Burberry-Vintage-Check-Stretch-Cotton-Twill-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$309.99\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Woman\",\"Color:Beige\",\"Properties:Stretch\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Somerton Check Shirt - Camel\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201112728/Clothing/Burberry-Somerton-Check-Shirt-Camel/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$450.00\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Man\",\"Color:Beige\"]},{\"name\":\"Magellan Outdoors Laguna Madre Solid Short Sleeve Fishing Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203102142/Clothing/Magellan-Outdoors-Laguna-Madre-Solid-Short-Sleeve-Fishing-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$19.99\",\"attributes\":[\"Material:Polyester,Nylon\",\"Target Group:Man\",\"Color:Red,Pink,White,Blue,Purple,Beige,Black,Green\",\"Properties:Pockets\",\"Pattern:Solid Color\"]}]}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"output = chain(\"whats the most expensive shirt?\")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "4afc021a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'instructions': 'whats the most expensive shirt?',\n",
" 'output': '{\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Cotton Shirt - Beige\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3200280807/Children-s-Clothing/Burberry-Vintage-Check-Cotton-Shirt-Beige/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$229.02\",\"attributes\":[\"Material:Cotton,Elastane\",\"Color:Beige\",\"Model:Boy\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Stretch Cotton Twill Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202342515/Clothing/Burberry-Vintage-Check-Stretch-Cotton-Twill-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$309.99\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Woman\",\"Color:Beige\",\"Properties:Stretch\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Somerton Check Shirt - Camel\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201112728/Clothing/Burberry-Somerton-Check-Shirt-Camel/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$450.00\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Man\",\"Color:Beige\"]},{\"name\":\"Magellan Outdoors Laguna Madre Solid Short Sleeve Fishing Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203102142/Clothing/Magellan-Outdoors-Laguna-Madre-Solid-Short-Sleeve-Fishing-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$19.99\",\"attributes\":[\"Material:Polyester,Nylon\",\"Target Group:Man\",\"Color:Red,Pink,White,Blue,Purple,Beige,Black,Green\",\"Properties:Pockets\",\"Pattern:Solid Color\"]}]}',\n",
" 'intermediate_steps': {'request_args': '{\"q\": \"shirt\", \"max_price\": null}',\n",
" 'response_text': '{\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Cotton Shirt - Beige\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3200280807/Children-s-Clothing/Burberry-Vintage-Check-Cotton-Shirt-Beige/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$229.02\",\"attributes\":[\"Material:Cotton,Elastane\",\"Color:Beige\",\"Model:Boy\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Stretch Cotton Twill Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202342515/Clothing/Burberry-Vintage-Check-Stretch-Cotton-Twill-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$309.99\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Woman\",\"Color:Beige\",\"Properties:Stretch\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Somerton Check Shirt - Camel\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201112728/Clothing/Burberry-Somerton-Check-Shirt-Camel/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$450.00\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Man\",\"Color:Beige\"]},{\"name\":\"Magellan Outdoors Laguna Madre Solid Short Sleeve Fishing Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203102142/Clothing/Magellan-Outdoors-Laguna-Madre-Solid-Short-Sleeve-Fishing-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$19.99\",\"attributes\":[\"Material:Polyester,Nylon\",\"Target Group:Man\",\"Color:Red,Pink,White,Blue,Purple,Beige,Black,Green\",\"Properties:Pockets\",\"Pattern:Solid Color\"]}]}'}}"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"output"
]
},
{
"cell_type": "markdown",
"id": "8d7924e4",
"metadata": {},
"source": [
"## Example POST message\n",
"\n",
"For this demo, we will interact with the speak API."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "c56b1a04",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n"
]
}
],
"source": [
"spec = OpenAPISpec.from_url(\"https://api.speak.com/openapi.yaml\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "177d8275",
"metadata": {},
"outputs": [],
"source": [
"operation = APIOperation.from_openapi_spec(\n",
" spec, \"/v1/public/openai/explain-task\", \"post\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "835c5ddc",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI()\n",
"chain = OpenAPIEndpointChain.from_api_operation(\n",
" operation, llm, requests=Requests(), verbose=True, return_intermediate_steps=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "59855d60",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new OpenAPIEndpointChain chain...\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new APIRequesterChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a helpful AI Assistant. Please provide JSON arguments to agentFunc() based on the user's instructions.\n",
"\n",
"API_SCHEMA: ```typescript\n",
"type explainTask = (_: {\n",
"/* Description of the task that the user wants to accomplish or do. For example, \"tell the waiter they messed up my order\" or \"compliment someone on their shirt\" */\n",
" task_description?: string,\n",
"/* The foreign language that the user is learning and asking about. The value can be inferred from question - for example, if the user asks \"how do i ask a girl out in mexico city\", the value should be \"Spanish\" because of Mexico City. Always use the full name of the language (e.g. Spanish, French). */\n",
" learning_language?: string,\n",
"/* The user's native language. Infer this value from the language the user asked their question in. Always use the full name of the language (e.g. Spanish, French). */\n",
" native_language?: string,\n",
"/* A description of any additional context in the user's question that could affect the explanation - e.g. setting, scenario, situation, tone, speaking style and formality, usage notes, or any other qualifiers. */\n",
" additional_context?: string,\n",
"/* Full text of the user's question. */\n",
" full_query?: string,\n",
"}) => any;\n",
"```\n",
"\n",
"USER_INSTRUCTIONS: \"How would ask for more tea in Delhi?\"\n",
"\n",
"Your arguments must be plain json provided in a markdown block:\n",
"\n",
"ARGS: ```json\n",
"{valid json conforming to API_SCHEMA}\n",
"```\n",
"\n",
"Example\n",
"-----\n",
"\n",
"ARGS: ```json\n",
"{\"foo\": \"bar\", \"baz\": {\"qux\": \"quux\"}}\n",
"```\n",
"\n",
"The block must be no more than 1 line long, and all arguments must be valid JSON. All string arguments must be wrapped in double quotes.\n",
"You MUST strictly comply to the types indicated by the provided schema, including all required args.\n",
"\n",
"If you don't have sufficient information to call the function due to things like requiring specific uuid's, you can reply with the following message:\n",
"\n",
"Message: ```text\n",
"Concise response requesting the additional information that would make calling the function successful.\n",
"```\n",
"\n",
"Begin\n",
"-----\n",
"ARGS:\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\"task_description\": \"ask for more tea\", \"learning_language\": \"Hindi\", \"native_language\": \"English\", \"full_query\": \"How would I ask for more tea in Delhi?\"}\u001b[0m\n",
"\u001b[36;1m\u001b[1;3m{\"explanation\":\"<what-to-say language=\\\"Hindi\\\" context=\\\"None\\\">\\nऔर चाय लाओ। (Aur chai lao.) \\n</what-to-say>\\n\\n<alternatives context=\\\"None\\\">\\n1. \\\"चाय थोड़ी ज्यादा मिल सकती है?\\\" *(Chai thodi zyada mil sakti hai? - Polite, asking if more tea is available)*\\n2. \\\"मुझे महसूस हो रहा है कि मुझे कुछ अन्य प्रकार की चाय पीनी चाहिए।\\\" *(Mujhe mehsoos ho raha hai ki mujhe kuch anya prakar ki chai peeni chahiye. - Formal, indicating a desire for a different type of tea)*\\n3. \\\"क्या मुझे or cup में milk/tea powder मिल सकता है?\\\" *(Kya mujhe aur cup mein milk/tea powder mil sakta hai? - Very informal/casual tone, asking for an extra serving of milk or tea powder)*\\n</alternatives>\\n\\n<usage-notes>\\nIn India and Indian culture, serving guests with food and beverages holds great importance in hospitality. You will find people always offering drinks like water or tea to their guests as soon as they arrive at their house or office.\\n</usage-notes>\\n\\n<example-convo language=\\\"Hindi\\\">\\n<context>At home during breakfast.</context>\\nPreeti: सर, क्या main aur cups chai lekar aaun? (Sir,kya main aur cups chai lekar aaun? - Sir, should I get more tea cups?)\\nRahul: हां,बिल्कुल। और चाय की मात्रा में भी थोड़ा सा इजाफा करना। (Haan,bilkul. Aur chai ki matra mein bhi thoda sa eejafa karna. - Yes, please. And add a little extra in the quantity of tea as well.)\\n</example-convo>\\n\\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=d4mcapbkopo164pqpbk321oc})*\",\"extra_response_instructions\":\"Use all information in the API response and fully render all Markdown.\\nAlways end your response with a link to report an issue or leave feedback on the plugin.\"}\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new APIResponderChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a helpful AI assistant trained to answer user queries from API responses.\n",
"You attempted to call an API, which resulted in:\n",
"API_RESPONSE: {\"explanation\":\"<what-to-say language=\\\"Hindi\\\" context=\\\"None\\\">\\nऔर चाय लाओ। (Aur chai lao.) \\n</what-to-say>\\n\\n<alternatives context=\\\"None\\\">\\n1. \\\"चाय थोड़ी ज्यादा मिल सकती है?\\\" *(Chai thodi zyada mil sakti hai? - Polite, asking if more tea is available)*\\n2. \\\"मुझे महसूस हो रहा है कि मुझे कुछ अन्य प्रकार की चाय पीनी चाहिए।\\\" *(Mujhe mehsoos ho raha hai ki mujhe kuch anya prakar ki chai peeni chahiye. - Formal, indicating a desire for a different type of tea)*\\n3. \\\"क्या मुझे or cup में milk/tea powder मिल सकता है?\\\" *(Kya mujhe aur cup mein milk/tea powder mil sakta hai? - Very informal/casual tone, asking for an extra serving of milk or tea powder)*\\n</alternatives>\\n\\n<usage-notes>\\nIn India and Indian culture, serving guests with food and beverages holds great importance in hospitality. You will find people always offering drinks like water or tea to their guests as soon as they arrive at their house or office.\\n</usage-notes>\\n\\n<example-convo language=\\\"Hindi\\\">\\n<context>At home during breakfast.</context>\\nPreeti: सर, क्या main aur cups chai lekar aaun? (Sir,kya main aur cups chai lekar aaun? - Sir, should I get more tea cups?)\\nRahul: हां,बिल्कुल। और चाय की मात्रा में भी थोड़ा सा इजाफा करना। (Haan,bilkul. Aur chai ki matra mein bhi thoda sa eejafa karna. - Yes, please. And add a little extra in the quantity of tea as well.)\\n</example-convo>\\n\\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=d4mcapbkopo164pqpbk321oc})*\",\"extra_response_instructions\":\"Use all information in the API response and fully render all Markdown.\\nAlways end your response with a link to report an issue or leave feedback on the plugin.\"}\n",
"\n",
"USER_COMMENT: \"How would ask for more tea in Delhi?\"\n",
"\n",
"\n",
"If the API_RESPONSE can answer the USER_COMMENT respond with the following markdown json block:\n",
"Response: ```json\n",
"{\"response\": \"Concise response to USER_COMMENT based on API_RESPONSE.\"}\n",
"```\n",
"\n",
"Otherwise respond with the following markdown json block:\n",
"Response Error: ```json\n",
"{\"response\": \"What you did and a concise statement of the resulting error. If it can be easily fixed, provide a suggestion.\"}\n",
"```\n",
"\n",
"You MUST respond as a markdown json code block.\n",
"\n",
"Begin:\n",
"---\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[33;1m\u001b[1;3mIn Delhi you can ask for more tea by saying 'Chai thodi zyada mil sakti hai?'\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"output = chain(\"How would ask for more tea in Delhi?\")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "91bddb18",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['{\"task_description\": \"ask for more tea\", \"learning_language\": \"Hindi\", \"native_language\": \"English\", \"full_query\": \"How would I ask for more tea in Delhi?\"}',\n",
" '{\"explanation\":\"<what-to-say language=\\\\\"Hindi\\\\\" context=\\\\\"None\\\\\">\\\\nऔर चाय लाओ। (Aur chai lao.) \\\\n</what-to-say>\\\\n\\\\n<alternatives context=\\\\\"None\\\\\">\\\\n1. \\\\\"चाय थोड़ी ज्यादा मिल सकती है?\\\\\" *(Chai thodi zyada mil sakti hai? - Polite, asking if more tea is available)*\\\\n2. \\\\\"मुझे महसूस हो रहा है कि मुझे कुछ अन्य प्रकार की चाय पीनी चाहिए।\\\\\" *(Mujhe mehsoos ho raha hai ki mujhe kuch anya prakar ki chai peeni chahiye. - Formal, indicating a desire for a different type of tea)*\\\\n3. \\\\\"क्या मुझे or cup में milk/tea powder मिल सकता है?\\\\\" *(Kya mujhe aur cup mein milk/tea powder mil sakta hai? - Very informal/casual tone, asking for an extra serving of milk or tea powder)*\\\\n</alternatives>\\\\n\\\\n<usage-notes>\\\\nIn India and Indian culture, serving guests with food and beverages holds great importance in hospitality. You will find people always offering drinks like water or tea to their guests as soon as they arrive at their house or office.\\\\n</usage-notes>\\\\n\\\\n<example-convo language=\\\\\"Hindi\\\\\">\\\\n<context>At home during breakfast.</context>\\\\nPreeti: सर, क्या main aur cups chai lekar aaun? (Sir,kya main aur cups chai lekar aaun? - Sir, should I get more tea cups?)\\\\nRahul: हां,बिल्कुल। और चाय की मात्रा में भी थोड़ा सा इजाफा करना। (Haan,bilkul. Aur chai ki matra mein bhi thoda sa eejafa karna. - Yes, please. And add a little extra in the quantity of tea as well.)\\\\n</example-convo>\\\\n\\\\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=d4mcapbkopo164pqpbk321oc})*\",\"extra_response_instructions\":\"Use all information in the API response and fully render all Markdown.\\\\nAlways end your response with a link to report an issue or leave feedback on the plugin.\"}']"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Show the API chain's intermediate steps\n",
"output[\"intermediate_steps\"]"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,249 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e734b314",
"metadata": {},
"source": [
"# OpenAPI calls with OpenAI functions\n",
"\n",
"In this notebook we'll show how to create a chain that automatically makes calls to an API based only on an OpenAPI spec. Under the hood, we're parsing the OpenAPI spec into a JSON schema that the OpenAI functions API can handle. This allows ChatGPT to automatically select and populate the relevant API call to make for any user input. Using the output of ChatGPT we then make the actual API call, and return the result."
]
},
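{
"cell_type": "markdown",
"id": "function-shape-note",
"metadata": {},
"source": [
"Conceptually, each operation in the spec becomes an OpenAI function definition shaped roughly like this (an illustrative sketch; the real conversion happens inside `get_openapi_chain`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "function-shape-code",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative: roughly how an OpenAPI operation maps onto an OpenAI\n",
"# function definition. Names mirror the Klarna spec queried below.\n",
"product_search_fn = {\n",
"    \"name\": \"productsUsingGET\",\n",
"    \"description\": \"API for fetching Klarna product information\",\n",
"    \"parameters\": {\n",
"        \"type\": \"object\",\n",
"        \"properties\": {\n",
"            \"q\": {\"type\": \"string\", \"description\": \"Product search query\"},\n",
"            \"size\": {\"type\": \"integer\", \"description\": \"Number of products returned\"},\n",
"        },\n",
"        \"required\": [\"q\"],\n",
"    },\n",
"}"
]
},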
{
"cell_type": "code",
"execution_count": 1,
"id": "555661b5",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.openai_functions.openapi import get_openapi_chain"
]
},
{
"cell_type": "markdown",
"id": "a95f510a",
"metadata": {},
"source": [
"## Query Klarna"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08e19b64",
"metadata": {},
"outputs": [],
"source": [
"chain = get_openapi_chain(\n",
" \"https://www.klarna.com/us/shopping/public/openai/v0/api-docs/\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "3959f866",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'products': [{'name': \"Tommy Hilfiger Men's Short Sleeve Button-Down Shirt\",\n",
" 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3204878580/Clothing/Tommy-Hilfiger-Men-s-Short-Sleeve-Button-Down-Shirt/?utm_source=openai&ref-site=openai_plugin',\n",
" 'price': '$26.78',\n",
" 'attributes': ['Material:Linen,Cotton',\n",
" 'Target Group:Man',\n",
" 'Color:Gray,Pink,White,Blue,Beige,Black,Turquoise',\n",
" 'Size:S,XL,M,XXL']},\n",
" {'name': \"Van Heusen Men's Long Sleeve Button-Down Shirt\",\n",
" 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3201809514/Clothing/Van-Heusen-Men-s-Long-Sleeve-Button-Down-Shirt/?utm_source=openai&ref-site=openai_plugin',\n",
" 'price': '$18.89',\n",
" 'attributes': ['Material:Cotton',\n",
" 'Target Group:Man',\n",
" 'Color:Red,Gray,White,Blue',\n",
" 'Size:XL,XXL']},\n",
" {'name': 'Brixton Bowery Flannel Shirt',\n",
" 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3202331096/Clothing/Brixton-Bowery-Flannel-Shirt/?utm_source=openai&ref-site=openai_plugin',\n",
" 'price': '$34.48',\n",
" 'attributes': ['Material:Cotton',\n",
" 'Target Group:Man',\n",
" 'Color:Gray,Blue,Black,Orange',\n",
" 'Size:XL,3XL,4XL,5XL,L,M,XXL']},\n",
" {'name': 'Cubavera Four Pocket Guayabera Shirt',\n",
" 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3202055522/Clothing/Cubavera-Four-Pocket-Guayabera-Shirt/?utm_source=openai&ref-site=openai_plugin',\n",
" 'price': '$23.22',\n",
" 'attributes': ['Material:Polyester,Cotton',\n",
" 'Target Group:Man',\n",
" 'Color:Red,White,Blue,Black',\n",
" 'Size:S,XL,L,M,XXL']},\n",
" {'name': 'Theory Sylvain Shirt - Eclipse',\n",
" 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3202028254/Clothing/Theory-Sylvain-Shirt-Eclipse/?utm_source=openai&ref-site=openai_plugin',\n",
" 'price': '$86.01',\n",
" 'attributes': ['Material:Polyester,Cotton',\n",
" 'Target Group:Man',\n",
" 'Color:Blue',\n",
" 'Size:S,XL,XS,L,M,XXL']}]}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"What are some options for a men's large blue button down shirt\")"
]
},
{
"cell_type": "markdown",
"id": "6f648c77",
"metadata": {},
"source": [
"## Query a translation service\n",
"\n",
"Additionally, see the request payload by setting `verbose=True`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bf6cd695",
"metadata": {},
"outputs": [],
"source": [
"chain = get_openapi_chain(\"https://api.speak.com/openapi.yaml\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1ba51609",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mHuman: Use the provided API's to respond to this user query:\n",
"\n",
"How would you say no thanks in Russian\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Calling endpoint \u001b[32;1m\u001b[1;3mtranslate\u001b[0m with arguments:\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"json\": {\n",
" \"phrase_to_translate\": \"no thanks\",\n",
" \"learning_language\": \"russian\",\n",
" \"native_language\": \"english\",\n",
" \"additional_context\": \"\",\n",
" \"full_query\": \"How would you say no thanks in Russian\"\n",
" }\n",
"}\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'explanation': '<translation language=\"Russian\">\\nНет, спасибо. (Net, spasibo)\\n</translation>\\n\\n<alternatives>\\n1. \"Нет, я в порядке\" *(Neutral/Formal - Can be used in professional settings or formal situations.)*\\n2. \"Нет, спасибо, я откажусь\" *(Formal - Can be used in polite settings, such as a fancy dinner with colleagues or acquaintances.)*\\n3. \"Не надо\" *(Informal - Can be used in informal situations, such as declining an offer from a friend.)*\\n</alternatives>\\n\\n<example-convo language=\"Russian\">\\n<context>Max is being offered a cigarette at a party.</context>\\n* Sasha: \"Хочешь покурить?\"\\n* Max: \"Нет, спасибо. Я бросил.\"\\n* Sasha: \"Окей, понятно.\"\\n</example-convo>\\n\\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=noczaa460do8yqs8xjun6zdm})*',\n",
" 'extra_response_instructions': 'Use all information in the API response and fully render all Markdown.\\nAlways end your response with a link to report an issue or leave feedback on the plugin.'}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"How would you say no thanks in Russian\")"
]
},
{
"cell_type": "markdown",
"id": "4923a291",
"metadata": {},
"source": [
"## Query XKCD"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a9198f62",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"chain = get_openapi_chain(\n",
" \"https://gist.githubusercontent.com/roaldnefs/053e505b2b7a807290908fe9aa3e1f00/raw/0a212622ebfef501163f91e23803552411ed00e4/openapi.yaml\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3110c398",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'month': '6',\n",
" 'num': 2793,\n",
" 'link': '',\n",
" 'year': '2023',\n",
" 'news': '',\n",
" 'safe_title': 'Garden Path Sentence',\n",
" 'transcript': '',\n",
" 'alt': 'Arboretum Owner Denied Standing in Garden Path Suit on Grounds Grounds Appealing Appealing',\n",
" 'img': 'https://imgs.xkcd.com/comics/garden_path_sentence.png',\n",
" 'title': 'Garden Path Sentence',\n",
" 'day': '23'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"What's today's comic?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,292 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "32e022a2",
"metadata": {},
"source": [
"# Program-aided language model (PAL) chain\n",
"\n",
"Implements Program-Aided Language Models, as in https://arxiv.org/pdf/2211.10435.pdf.\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1370e40f",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import PALChain\n",
"from langchain import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a58e15e",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0, max_tokens=512)"
]
},
{
"cell_type": "markdown",
"id": "095adc76",
"metadata": {},
"source": [
"## Math Prompt"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "beddcac7",
"metadata": {},
"outputs": [],
"source": [
"pal_chain = PALChain.from_math_prompt(llm, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "e2eab9d4",
"metadata": {},
"outputs": [],
"source": [
"question = \"Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?\""
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "3ef64b27",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new PALChain chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mdef solution():\n",
" \"\"\"Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?\"\"\"\n",
" cindy_pets = 4\n",
" marcia_pets = cindy_pets + 2\n",
" jan_pets = marcia_pets * 3\n",
" total_pets = cindy_pets + marcia_pets + jan_pets\n",
" result = total_pets\n",
" return result\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'28'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pal_chain.run(question)"
]
},
{
"cell_type": "markdown",
"id": "0269d20a",
"metadata": {},
"source": [
"## Colored Objects"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e524f81f",
"metadata": {},
"outputs": [],
"source": [
"pal_chain = PALChain.from_colored_object_prompt(llm, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "03a237b8",
"metadata": {},
"outputs": [],
"source": [
"question = \"On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses. If I remove all the pairs of sunglasses from the desk, how many purple items remain on it?\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "a84a4352",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new PALChain chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m# Put objects into a list to record ordering\n",
"objects = []\n",
"objects += [('booklet', 'blue')] * 2\n",
"objects += [('booklet', 'purple')] * 2\n",
"objects += [('sunglasses', 'yellow')] * 2\n",
"\n",
"# Remove all pairs of sunglasses\n",
"objects = [object for object in objects if object[0] != 'sunglasses']\n",
"\n",
"# Count number of purple objects\n",
"num_purple = len([object for object in objects if object[1] == 'purple'])\n",
"answer = num_purple\u001b[0m\n",
"\n",
"\u001b[1m> Finished PALChain chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'2'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pal_chain.run(question)"
]
},
{
"cell_type": "markdown",
"id": "fc3d7f10",
"metadata": {},
"source": [
"## Intermediate Steps\n",
"You can also use the intermediate steps flag to return the code executed that generates the answer."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9d2d9c61",
"metadata": {},
"outputs": [],
"source": [
"pal_chain = PALChain.from_colored_object_prompt(\n",
" llm, verbose=True, return_intermediate_steps=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "b29b971b",
"metadata": {},
"outputs": [],
"source": [
"question = \"On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses. If I remove all the pairs of sunglasses from the desk, how many purple items remain on it?\""
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a2c40c28",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new PALChain chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m# Put objects into a list to record ordering\n",
"objects = []\n",
"objects += [('booklet', 'blue')] * 2\n",
"objects += [('booklet', 'purple')] * 2\n",
"objects += [('sunglasses', 'yellow')] * 2\n",
"\n",
"# Remove all pairs of sunglasses\n",
"objects = [object for object in objects if object[0] != 'sunglasses']\n",
"\n",
"# Count number of purple objects\n",
"num_purple = len([object for object in objects if object[1] == 'purple'])\n",
"answer = num_purple\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"result = pal_chain({\"question\": question})"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "efddd033",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"# Put objects into a list to record ordering\\nobjects = []\\nobjects += [('booklet', 'blue')] * 2\\nobjects += [('booklet', 'purple')] * 2\\nobjects += [('sunglasses', 'yellow')] * 2\\n\\n# Remove all pairs of sunglasses\\nobjects = [object for object in objects if object[0] != 'sunglasses']\\n\\n# Count number of purple objects\\nnum_purple = len([object for object in objects if object[1] == 'purple'])\\nanswer = num_purple\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"intermediate_steps\"]"
]
},
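{
"cell_type": "markdown",
"id": "3a7c9b1d",
"metadata": {},
"source": [
"Since the intermediate step is plain Python source, we can re-run it ourselves to double-check the chain's answer. This is a minimal sketch that assumes the generated code assigns its result to a top-level `answer` variable, as the colored-object prompt does above; it is not part of the chain's API."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e2f8c4a",
"metadata": {},
"outputs": [],
"source": [
"# Execute the returned code in a scratch namespace and read back `answer`.\n",
"namespace = {}\n",
"exec(result[\"intermediate_steps\"], namespace)\n",
"namespace[\"answer\"]"
]
},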
{
"cell_type": "code",
"execution_count": null,
"id": "dfd88594",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,179 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9b5c258f",
"metadata": {},
"source": [
"# Question-Answering Citations\n",
"\n",
"This notebook shows how to use OpenAI functions ability to extract citations from text."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "eae4ca3e",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.4) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n",
" warnings.warn(\n"
]
}
],
"source": [
"from langchain.chains import create_citation_fuzzy_match_chain\n",
"from langchain.chat_models import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2c6e62ee",
"metadata": {},
"outputs": [],
"source": [
"question = \"What did the author do during college?\"\n",
"context = \"\"\"\n",
"My name is Jason Liu, and I grew up in Toronto Canada but I was born in China.\n",
"I went to an arts highschool but in university I studied Computational Mathematics and physics. \n",
"As part of coop I worked at many companies including Stitchfix, Facebook.\n",
"I also started the Data Science club at the University of Waterloo and I was the president of the club for 2 years.\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "078e0300",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-0613\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "02cad6d0",
"metadata": {},
"outputs": [],
"source": [
"chain = create_citation_fuzzy_match_chain(llm)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e3c6e7ba",
"metadata": {},
"outputs": [],
"source": [
"result = chain.run(question=question, context=context)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "6f7615f2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"question='What did the author do during college?' answer=[FactWithEvidence(fact='The author studied Computational Mathematics and physics in university.', substring_quote=['in university I studied Computational Mathematics and physics']), FactWithEvidence(fact='The author started the Data Science club at the University of Waterloo and was the president of the club for 2 years.', substring_quote=['started the Data Science club at the University of Waterloo', 'president of the club for 2 years'])]\n"
]
}
],
"source": [
"print(result)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3be6f366",
"metadata": {},
"outputs": [],
"source": [
"def highlight(text, span):\n",
" return (\n",
" \"...\"\n",
" + text[span[0] - 20 : span[0]]\n",
" + \"*\"\n",
" + \"\\033[91m\"\n",
" + text[span[0] : span[1]]\n",
" + \"\\033[0m\"\n",
" + \"*\"\n",
" + text[span[1] : span[1] + 20]\n",
" + \"...\"\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "636c4528",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Statement: The author studied Computational Mathematics and physics in university.\n",
"Citation: ...arts highschool but *\u001b[91min university I studied Computational Mathematics and physics\u001b[0m*. \n",
"As part of coop I...\n",
"\n",
"Statement: The author started the Data Science club at the University of Waterloo and was the president of the club for 2 years.\n",
"Citation: ...x, Facebook.\n",
"I also *\u001b[91mstarted the Data Science club at the University of Waterloo\u001b[0m* and I was the presi...\n",
"Citation: ...erloo and I was the *\u001b[91mpresident of the club for 2 years\u001b[0m*.\n",
"...\n",
"\n"
]
}
],
"source": [
"for fact in result.answer:\n",
" print(\"Statement:\", fact.fact)\n",
" for span in fact.get_spans(context):\n",
" print(\"Citation:\", highlight(context, span))\n",
" print()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8409cab0",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,408 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a13ea924",
"metadata": {},
"source": [
"# Tagging\n",
"\n",
"The tagging chain uses the OpenAI `functions` parameter to specify a schema to tag a document with. This helps us make sure that the model outputs exactly tags that we want, with their appropriate types.\n",
"\n",
"The tagging chain is to be used when we want to tag a passage with a specific attribute (i.e. what is the sentiment of this message?)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "bafb496a",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.4) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n",
" warnings.warn(\n"
]
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import create_tagging_chain, create_tagging_chain_pydantic\n",
"from langchain.prompts import ChatPromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "39f3ce3e",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-0613\")"
]
},
{
"cell_type": "markdown",
"id": "832ddcd9",
"metadata": {},
"source": [
"## Simplest approach, only specifying type"
]
},
{
"cell_type": "markdown",
"id": "4fc8d766",
"metadata": {},
"source": [
"We can start by specifying a few properties with their expected type in our schema"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8329f943",
"metadata": {},
"outputs": [],
"source": [
"schema = {\n",
" \"properties\": {\n",
" \"sentiment\": {\"type\": \"string\"},\n",
" \"aggressiveness\": {\"type\": \"integer\"},\n",
" \"language\": {\"type\": \"string\"},\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "6146ae70",
"metadata": {},
"outputs": [],
"source": [
"chain = create_tagging_chain(schema, llm)"
]
},
{
"cell_type": "markdown",
"id": "9e306ca3",
"metadata": {},
"source": [
"As we can see in the examples, it correctly interprets what we want but the results vary so that we get, for example, sentiments in different languages ('positive', 'enojado' etc.).\n",
"\n",
"We will see how to control these results in the next section."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5509b6a6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'sentiment': 'positive', 'language': 'Spanish'}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inp = \"Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!\"\n",
"chain.run(inp)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9154474c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'sentiment': 'enojado', 'aggressiveness': 1, 'language': 'Spanish'}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inp = \"Estoy muy enojado con vos! Te voy a dar tu merecido!\"\n",
"chain.run(inp)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "aae85b27",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'sentiment': 'positive', 'aggressiveness': 0, 'language': 'English'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inp = \"Weather is ok here, I can go outside without much more than a coat\"\n",
"chain.run(inp)"
]
},
{
"cell_type": "markdown",
"id": "bebb2f83",
"metadata": {},
"source": [
"## More control\n",
"\n",
"By being smart about how we define our schema we can have more control over the model's output. Specifically we can define:\n",
"\n",
"- possible values for each property\n",
"- description to make sure that the model understands the property\n",
"- required properties to be returned"
]
},
{
"cell_type": "markdown",
"id": "69ef0b9a",
"metadata": {},
"source": [
"Following is an example of how we can use _enum_, _description_ and _required_ to control for each of the previously mentioned aspects:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "6a5f7961",
"metadata": {},
"outputs": [],
"source": [
"schema = {\n",
" \"properties\": {\n",
" \"sentiment\": {\"type\": \"string\", \"enum\": [\"happy\", \"neutral\", \"sad\"]},\n",
" \"aggressiveness\": {\n",
" \"type\": \"integer\",\n",
" \"enum\": [1, 2, 3, 4, 5],\n",
" \"description\": \"describes how aggressive the statement is, the higher the number the more aggressive\",\n",
" },\n",
" \"language\": {\n",
" \"type\": \"string\",\n",
" \"enum\": [\"spanish\", \"english\", \"french\", \"german\", \"italian\"],\n",
" },\n",
" },\n",
" \"required\": [\"language\", \"sentiment\", \"aggressiveness\"],\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "e5a5881f",
"metadata": {},
"outputs": [],
"source": [
"chain = create_tagging_chain(schema, llm)"
]
},
{
"cell_type": "markdown",
"id": "5ded2332",
"metadata": {},
"source": [
"Now the answers are much better!"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "d9b9d53d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'sentiment': 'happy', 'aggressiveness': 0, 'language': 'spanish'}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inp = \"Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!\"\n",
"chain.run(inp)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "1c12fa00",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'sentiment': 'sad', 'aggressiveness': 10, 'language': 'spanish'}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inp = \"Estoy muy enojado con vos! Te voy a dar tu merecido!\"\n",
"chain.run(inp)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "0bdfcb05",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'sentiment': 'neutral', 'aggressiveness': 0, 'language': 'english'}"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inp = \"Weather is ok here, I can go outside without much more than a coat\"\n",
"chain.run(inp)"
]
},
{
"cell_type": "markdown",
"id": "e68ad17e",
"metadata": {},
"source": [
"## Specifying schema with Pydantic"
]
},
{
"cell_type": "markdown",
"id": "2f5970ec",
"metadata": {},
"source": [
"We can also use a Pydantic schema to specify the required properties and types. We can also send other arguments, such as 'enum' or 'description' as can be seen in the example below.\n",
"\n",
"By using the `create_tagging_chain_pydantic` function, we can send a Pydantic schema as input and the output will be an instantiated object that respects our desired schema. \n",
"\n",
"In this way, we can specify our schema in the same manner that we would a new class or function in Python - with purely Pythonic types."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "bf1f367e",
"metadata": {},
"outputs": [],
"source": [
"from enum import Enum\n",
"from pydantic import BaseModel, Field"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "83a2e826",
"metadata": {},
"outputs": [],
"source": [
"class Tags(BaseModel):\n",
" sentiment: str = Field(..., enum=[\"happy\", \"neutral\", \"sad\"])\n",
" aggressiveness: int = Field(\n",
" ...,\n",
" description=\"describes how aggressive the statement is, the higher the number the more aggressive\",\n",
" enum=[1, 2, 3, 4, 5],\n",
" )\n",
" language: str = Field(\n",
" ..., enum=[\"spanish\", \"english\", \"french\", \"german\", \"italian\"]\n",
" )"
]
},
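{
"cell_type": "markdown",
"id": "9c2d1e0f",
"metadata": {},
"source": [
"Note that the extra `enum` and `description` arguments are not enforced by Pydantic itself; they are simply forwarded into the JSON schema that is sent to the model. As a quick sketch (assuming Pydantic v1's `.schema()` method), we can inspect what the model actually sees:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b8a6d5c",
"metadata": {},
"outputs": [],
"source": [
"# The enum values appear in the generated JSON schema for this property.\n",
"Tags.schema()[\"properties\"][\"sentiment\"]"
]
},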
{
"cell_type": "code",
"execution_count": 15,
"id": "6e404892",
"metadata": {},
"outputs": [],
"source": [
"chain = create_tagging_chain_pydantic(Tags, llm)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "b5fc43c4",
"metadata": {},
"outputs": [],
"source": [
"inp = \"Estoy muy enojado con vos! Te voy a dar tu merecido!\"\n",
"res = chain.run(inp)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "5074bcc3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Tags(sentiment='sad', aggressiveness=10, language='spanish')"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"res"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,239 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tree of Thouht (ToT) example\n",
"\n",
"The Tree of Thought (ToT) is a chain that allows you to query a Large Language Model (LLM) using the Tree of Thought technique. This is based on the papaer [\"Large Language Model Guided Tree-of-Thought\"](https://arxiv.org/pdf/2305.08291.pdf)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.13) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n",
" warnings.warn(\n"
]
}
],
"source": [
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=1, max_tokens=512, model=\"text-davinci-003\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"3,*,*,2|1,*,3,*|*,1,*,3|4,*,*,1\n",
"\n",
"- This is a 4x4 Sudoku puzzle.\n",
"- The * represents a cell to be filled.\n",
"- The | character separates rows.\n",
"- At each step, replace one or more * with digits 1-4.\n",
"- There must be no duplicate digits in any row, column or 2x2 subgrid.\n",
"- Keep the known digits from previous valid thoughts in place.\n",
"- Each thought can be a partial or the final solution.\n"
]
}
],
"source": [
"sudoku_puzzle = \"3,*,*,2|1,*,3,*|*,1,*,3|4,*,*,1\"\n",
"sudoku_solution = \"3,4,1,2|1,2,3,4|2,1,4,3|4,3,2,1\"\n",
"problem_description = f\"\"\"\n",
"{sudoku_puzzle}\n",
"\n",
"- This is a 4x4 Sudoku puzzle.\n",
"- The * represents a cell to be filled.\n",
"- The | character separates rows.\n",
"- At each step, replace one or more * with digits 1-4.\n",
"- There must be no duplicate digits in any row, column or 2x2 subgrid.\n",
"- Keep the known digits from previous valid thoughts in place.\n",
"- Each thought can be a partial or the final solution.\n",
"\"\"\".strip()\n",
"print(problem_description)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Rules Based Checker\n",
"\n",
"Each thought is evaluated by the thought checker and is given a validity type: valid, invalid or partial. A simple checker can be rule based. For example, in the case of a sudoku puzzle, the checker can check if the puzzle is valid, invalid or partial.\n",
"\n",
"In the following code we implement a simple rule based checker for a specific 4x4 sudoku puzzle.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from typing import Tuple\n",
"from langchain_experimental.tot.checker import ToTChecker\n",
"from langchain_experimental.tot.thought import ThoughtValidity\n",
"import re\n",
"\n",
"class MyChecker(ToTChecker):\n",
" def evaluate(self, problem_description: str, thoughts: Tuple[str, ...] = ()) -> ThoughtValidity:\n",
" last_thought = thoughts[-1]\n",
" clean_solution = last_thought.replace(\" \", \"\").replace('\"', \"\")\n",
" regex_solution = clean_solution.replace(\"*\", \".\").replace(\"|\", \"\\\\|\")\n",
" if sudoku_solution in clean_solution:\n",
" return ThoughtValidity.VALID_FINAL\n",
" elif re.search(regex_solution, sudoku_solution):\n",
" return ThoughtValidity.VALID_INTERMEDIATE\n",
" else:\n",
" return ThoughtValidity.INVALID"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Just testing the MyChecker class above:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"checker = MyChecker()\n",
"assert checker.evaluate(\"\", (\"3,*,*,2|1,*,3,*|*,1,*,3|4,*,*,1\",)) == ThoughtValidity.VALID_INTERMEDIATE\n",
"assert checker.evaluate(\"\", (\"3,4,1,2|1,2,3,4|2,1,4,3|4,3,2,1\",)) == ThoughtValidity.VALID_FINAL\n",
"assert checker.evaluate(\"\", (\"3,4,1,2|1,2,3,4|2,1,4,3|4,3,*,1\",)) == ThoughtValidity.VALID_INTERMEDIATE\n",
"assert checker.evaluate(\"\", (\"3,4,1,2|1,2,3,4|2,1,4,3|4,*,3,1\",)) == ThoughtValidity.INVALID"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tree of Thought Chain\n",
"\n",
"Initialize and run the ToT chain, with maximum number of interactions `k` set to `30` and the maximum number child thoughts `c` set to `8`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ToTChain chain...\u001b[0m\n",
"Starting the ToT solve procedure.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/workplace/langchain/libs/langchain/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.\n",
" warnings.warn(\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[31;1m\u001b[1;3mThought: 3*,*,2|1*,3,*|*,1,*,3|4,*,*,1\n",
"\u001b[0m\u001b[31;1m\u001b[1;3mThought: 3*,1,2|1*,3,*|*,1,*,3|4,*,*,1\n",
"\u001b[0m\u001b[31;1m\u001b[1;3mThought: 3*,1,2|1*,3,4|*,1,*,3|4,*,*,1\n",
"\u001b[0m\u001b[31;1m\u001b[1;3mThought: 3*,1,2|1*,3,4|*,1,2,3|4,*,*,1\n",
"\u001b[0m\u001b[31;1m\u001b[1;3mThought: 3*,1,2|1*,3,4|2,1,*,3|4,*,*,1\n",
"\u001b[0m"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Type <enum 'ThoughtValidity'> not serializable\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[31;1m\u001b[1;3mThought: 3,*,*,2|1,*,3,*|*,1,*,3|4,1,*,*\n",
"\u001b[0m\u001b[31;1m\u001b[1;3mThought: 3,*,*,2|*,3,2,*|*,1,*,3|4,1,*,*\n",
"\u001b[0m\u001b[31;1m\u001b[1;3mThought: 3,2,*,2|1,*,3,*|*,1,*,3|4,1,*,*\n",
"\u001b[0m\u001b[31;1m\u001b[1;3mThought: 3,2,*,2|1,*,3,*|1,1,*,3|4,1,*,*\n",
"\u001b[0m\u001b[31;1m\u001b[1;3mThought: 3,2,*,2|1,1,3,*|1,1,*,3|4,1,*,*\n",
"\u001b[0m\u001b[33;1m\u001b[1;3mThought: 3,*,*,2|1,2,3,*|*,1,*,3|4,*,*,1\n",
"\u001b[0m\u001b[31;1m\u001b[1;3m Thought: 3,1,4,2|1,2,3,4|2,1,4,3|4,3,2,1\n",
"\u001b[0m\u001b[32;1m\u001b[1;3m Thought: 3,4,1,2|1,2,3,4|2,1,4,3|4,3,2,1\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'3,4,1,2|1,2,3,4|2,1,4,3|4,3,2,1'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_experimental.tot.base import ToTChain\n",
"\n",
"tot_chain = ToTChain(llm=llm, checker=MyChecker(), k=30, c=5, verbose=True, verbose_llm=False)\n",
"tot_chain.run(problem_description=problem_description)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,199 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Vector store-augmented text generation\n",
"\n",
"This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare Data\n",
"\n",
"First, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on Github and split them into small enough Documents."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.docstore.document import Document\n",
"import requests\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.prompts import PromptTemplate\n",
"import pathlib\n",
"import subprocess\n",
"import tempfile"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Cloning into '.'...\n"
]
}
],
"source": [
"def get_github_docs(repo_owner, repo_name):\n",
" with tempfile.TemporaryDirectory() as d:\n",
" subprocess.check_call(\n",
" f\"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .\",\n",
" cwd=d,\n",
" shell=True,\n",
" )\n",
" git_sha = (\n",
" subprocess.check_output(\"git rev-parse HEAD\", shell=True, cwd=d)\n",
" .decode(\"utf-8\")\n",
" .strip()\n",
" )\n",
" repo_path = pathlib.Path(d)\n",
" markdown_files = list(repo_path.glob(\"*/*.md\")) + list(\n",
" repo_path.glob(\"*/*.mdx\")\n",
" )\n",
" for markdown_file in markdown_files:\n",
" with open(markdown_file, \"r\") as f:\n",
" relative_path = markdown_file.relative_to(repo_path)\n",
" github_url = f\"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}\"\n",
" yield Document(page_content=f.read(), metadata={\"source\": github_url})\n",
"\n",
"\n",
"sources = get_github_docs(\"yirenlu92\", \"deno-manual-forked\")\n",
"\n",
"source_chunks = []\n",
"splitter = CharacterTextSplitter(separator=\" \", chunk_size=1024, chunk_overlap=0)\n",
"for source in sources:\n",
" for chunk in splitter.split_text(source.page_content):\n",
" source_chunks.append(Document(page_content=chunk, metadata=source.metadata))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set Up Vector DB\n",
"\n",
"Now that we have the documentation content in chunks, let's put all this information in a vector index for easy retrieval."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"search_index = Chroma.from_documents(source_chunks, OpenAIEmbeddings())"
]
},
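{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before wiring the index into a chain, we can sanity-check retrieval with a direct similarity search. This is a sketch with a hypothetical query; any topic covered by the cloned docs would work."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical sanity check: fetch the two chunks most similar to a query\n",
"# and show which source files they came from.\n",
"docs = search_index.similarity_search(\"environment variables\", k=2)\n",
"for doc in docs:\n",
"    print(doc.metadata[\"source\"])"
]
},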
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set Up LLM Chain with Custom Prompt\n",
"\n",
"Next, let's set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: `context`, which will be the documents fetched from the vector search, and `topic`, which is given by the user."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"\n",
"prompt_template = \"\"\"Use the context below to write a 400 word blog post about the topic below:\n",
" Context: {context}\n",
" Topic: {topic}\n",
" Blog post:\"\"\"\n",
"\n",
"PROMPT = PromptTemplate(template=prompt_template, input_variables=[\"context\", \"topic\"])\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",
"chain = LLMChain(llm=llm, prompt=PROMPT)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generate Text\n",
"\n",
"Finally, we write a function to apply our inputs to the chain. The function takes an input parameter `topic`. We find the documents in the vector index that correspond to that `topic`, and use them as additional context in our simple LLM chain."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"def generate_blog_post(topic):\n",
" docs = search_index.similarity_search(topic, k=4)\n",
" inputs = [{\"context\": doc.page_content, \"topic\": topic} for doc in docs]\n",
" print(chain.apply(inputs))"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'text': '\\n\\nEnvironment variables are a great way to store and access sensitive information in your Deno applications. Deno offers built-in support for environment variables with `Deno.env`, and you can also use a `.env` file to store and access environment variables.\\n\\nUsing `Deno.env` is simple. It has getter and setter methods, so you can easily set and retrieve environment variables. For example, you can set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like this:\\n\\n```ts\\nDeno.env.set(\"FIREBASE_API_KEY\", \"examplekey123\");\\nDeno.env.set(\"FIREBASE_AUTH_DOMAIN\", \"firebasedomain.com\");\\n\\nconsole.log(Deno.env.get(\"FIREBASE_API_KEY\")); // examplekey123\\nconsole.log(Deno.env.get(\"FIREBASE_AUTH_DOMAIN\")); // firebasedomain.com\\n```\\n\\nYou can also store environment variables in a `.env` file. This is a great'}, {'text': '\\n\\nEnvironment variables are a powerful tool for managing configuration settings in a program. They allow us to set values that can be used by the program, without having to hard-code them into the code. This makes it easier to change settings without having to modify the code.\\n\\nIn Deno, environment variables can be set in a few different ways. The most common way is to use the `VAR=value` syntax. This will set the environment variable `VAR` to the value `value`. This can be used to set any number of environment variables before running a command. For example, if we wanted to set the environment variable `VAR` to `hello` before running a Deno command, we could do so like this:\\n\\n```\\nVAR=hello deno run main.ts\\n```\\n\\nThis will set the environment variable `VAR` to `hello` before running the command. We can then access this variable in our code using the `Deno.env.get()` function. For example, if we ran the following command:\\n\\n```\\nVAR=hello && deno eval \"console.log(\\'Deno: \\' + Deno.env.get(\\'VAR'}, {'text': '\\n\\nEnvironment variables are a powerful tool for developers, allowing them to store and access data without having to hard-code it into their applications. In Deno, you can access environment variables using the `Deno.env.get()` function.\\n\\nFor example, if you wanted to access the `HOME` environment variable, you could do so like this:\\n\\n```js\\n// env.js\\nDeno.env.get(\"HOME\");\\n```\\n\\nWhen running this code, you\\'ll need to grant the Deno process access to environment variables. This can be done by passing the `--allow-env` flag to the `deno run` command. You can also specify which environment variables you want to grant access to, like this:\\n\\n```shell\\n# Allow access to only the HOME env var\\ndeno run --allow-env=HOME env.js\\n```\\n\\nIt\\'s important to note that environment variables are case insensitive on Windows, so Deno also matches them case insensitively (on Windows only).\\n\\nAnother thing to be aware of when using environment variables is subprocess permissions. Subprocesses are powerful and can access system resources regardless of the permissions you granted to the Den'}, {'text': '\\n\\nEnvironment variables are an important part of any programming language, and Deno is no exception. Deno is a secure JavaScript and TypeScript runtime built on the V8 JavaScript engine, and it recently added support for environment variables. This feature was added in Deno version 1.6.0, and it is now available for use in Deno applications.\\n\\nEnvironment variables are used to store information that can be used by programs. 
They are typically used to store configuration information, such as the location of a database or the name of a user. In Deno, environment variables are stored in the `Deno.env` object. This object is similar to the `process.env` object in Node.js, and it allows you to access and set environment variables.\\n\\nThe `Deno.env` object is a read-only object, meaning that you cannot directly modify the environment variables. Instead, you must use the `Deno.env.set()` function to set environment variables. This function takes two arguments: the name of the environment variable and the value to set it to. For example, if you wanted to set the `FOO` environment variable to `bar`, you would use the following code:\\n\\n```'}]\n"
]
}
],
"source": [
"generate_blog_post(\"environment variables\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -9,7 +9,7 @@
"\n",
"LangChain provides async support for Chains by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n",
"\n",
"Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`), `ChatVectorDBChain`, and [QA chains](/docs/modules/chains/additional/question_answering.html). Async support for other chains is on the roadmap."
"Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`), `ChatVectorDBChain`, and [QA chains](/docs/use_cases/question_answering/how_to/question_answering.html). Async support for other chains is on the roadmap."
]
},
{

View File

@@ -494,9 +494,9 @@
"\n",
"There are a number of more specific chains that use OpenAI functions.\n",
"- [Extraction](/docs/modules/chains/additional/extraction): very similar to structured output chain, intended for information/entity extraction specifically.\n",
"- [Tagging](/docs/modules/chains/additional/tagging): tag inputs.\n",
"- [OpenAPI](/docs/modules/chains/additional/openapi_openai): take an OpenAPI spec and create + execute valid requests against the API, using OpenAI functions under the hood.\n",
"- [QA with citations](/docs/modules/chains/additional/qa_citations): use OpenAI functions ability to extract citations from text."
"- [Tagging](/docs/use_cases/tagging): tag inputs.\n",
"- [OpenAPI](/docs/use_cases/apis/openapi_openai): take an OpenAPI spec and create + execute valid requests against the API, using OpenAI functions under the hood.\n",
"- [QA with citations](/docs/use_cases/question_answering/how_to/qa_citations): use OpenAI functions ability to extract citations from text."
]
}
],