Mirror of https://github.com/hwchase17/langchain.git (synced 2025-11-03 17:54:10 +00:00)
This patch updates the function call from "run" to "invoke" in smart_llm.ipynb.
Without this patch, you see the following warning:

LangChainDeprecationWarning: The function `run` was deprecated in
LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "9e9b7651",
   "metadata": {},
   "source": [
    "# How to use a SmartLLMChain\n",
    "\n",
    "A SmartLLMChain is a form of self-critique chain that can help you answer particularly complex questions. Instead of doing a single LLM pass, it performs these 3 steps:\n",
    "1. Ideation: Pass the user prompt n times through the LLM to get n output proposals (called \"ideas\"), where n is a parameter you can set\n",
    "2. Critique: The LLM critiques all ideas to find possible flaws and picks the best one\n",
    "3. Resolve: The LLM tries to improve upon the best idea (as chosen in the critique step) and outputs it. This is then the final output.\n",
    "\n",
    "SmartLLMChains are based on the SmartGPT workflow proposed in https://youtu.be/wVzuvf9D9BU.\n",
    "\n",
    "Note that SmartLLMChains\n",
    "- use more LLM passes (i.e., n+2 instead of just 1)\n",
    "- only work when the underlying LLM has the capability for reflection, which smaller models often don't\n",
    "- only work with underlying models that return exactly 1 output, not multiple\n",
    "\n",
    "This notebook demonstrates how to use a SmartLLMChain."
   ]
  },
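  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "9e9b7652",
   "metadata": {},
   "source": [
    "To make the \"n+2 passes\" concrete, here is a minimal hand-rolled sketch of the same ideation/critique/resolve flow using plain chat-model calls. It is only an illustration: the prompts are simplified and are not the exact prompts SmartLLMChain uses internally, and it assumes an `OPENAI_API_KEY` is already set in your environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3c2f1a77",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch only (not SmartLLMChain's actual prompts):\n",
    "# n ideation passes, one critique pass, one resolve pass = n + 2 LLM calls.\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "sketch_llm = ChatOpenAI(temperature=0.7, model_name=\"gpt-4\")\n",
    "question = \"I have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it?\"\n",
    "n = 3\n",
    "\n",
    "# 1. Ideation: n independent proposals for the same prompt\n",
    "ideas = [sketch_llm.invoke(question).content for _ in range(n)]\n",
    "\n",
    "# 2. Critique: a single pass that looks for flaws in all ideas and picks the best one\n",
    "critique = sketch_llm.invoke(\n",
    "    f\"Question: {question}\\n\\n\"\n",
    "    + \"\\n\\n\".join(f\"Idea {i + 1}:\\n{idea}\" for i, idea in enumerate(ideas))\n",
    "    + \"\\n\\nList the flaws of each idea and state which idea is best.\"\n",
    ").content\n",
    "\n",
    "# 3. Resolve: a single pass that improves the best idea into the final answer\n",
    "resolution = sketch_llm.invoke(\n",
    "    f\"Question: {question}\\n\\nCritique of the candidate answers:\\n{critique}\\n\\n\"\n",
    "    \"Based on the critique, give one corrected, final answer.\"\n",
    ").content\n",
    "print(resolution)"
   ]
  },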
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "714dede0",
   "metadata": {},
   "source": [
    "##### Same LLM for all steps"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "d3f7fb22",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"...\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "10e5ece6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "from langchain_experimental.smart_llm import SmartLLMChain\n",
    "from langchain_openai import ChatOpenAI"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "1780da51",
   "metadata": {},
   "source": [
    "As an example question, we will use \"I have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it?\". This is an example from the original SmartGPT video (https://youtu.be/wVzuvf9D9BU?t=384). While this seems like a very easy question, LLMs struggle with these kinds of questions, which involve numbers and physical reasoning.\n",
    "\n",
    "As we will see, all 3 initial ideas are completely wrong - even though we're using GPT-4! Only when using self-reflection do we get a correct answer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "054af6b1",
   "metadata": {},
   "outputs": [],
   "source": [
    "hard_question = \"I have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it?\""
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "8049cecd",
   "metadata": {},
   "source": [
    "So, we first create an LLM and a prompt template."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "811ea8e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = PromptTemplate.from_template(hard_question)\n",
    "llm = ChatOpenAI(temperature=0, model_name=\"gpt-4\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "50b602e4",
   "metadata": {},
   "source": [
    "Now we can create a SmartLLMChain."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "8cd49199",
   "metadata": {},
   "outputs": [],
   "source": [
    "chain = SmartLLMChain(llm=llm, prompt=prompt, n_ideas=3, verbose=True)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "6a72f276",
   "metadata": {},
   "source": [
    "Now we can use the SmartLLMChain as a drop-in replacement for our LLM. For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "074e5e75",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new SmartLLMChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mI have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it?\u001b[0m\n",
      "Idea 1:\n",
      "\u001b[36;1m\u001b[1;3m1. Fill the 6-liter jug completely.\n",
      "2. Pour the water from the 6-liter jug into the 12-liter jug.\n",
      "3. Fill the 6-liter jug again.\n",
      "4. Carefully pour the water from the 6-liter jug into the 12-liter jug until the 12-liter jug is full.\n",
      "5. The amount of water left in the 6-liter jug will be exactly 6 liters.\u001b[0m\n",
      "Idea 2:\n",
      "\u001b[36;1m\u001b[1;3m1. Fill the 6-liter jug completely.\n",
      "2. Pour the water from the 6-liter jug into the 12-liter jug.\n",
      "3. Fill the 6-liter jug again.\n",
      "4. Carefully pour the water from the 6-liter jug into the 12-liter jug until the 12-liter jug is full.\n",
      "5. Since the 12-liter jug is now full, there will be 2 liters of water left in the 6-liter jug.\n",
      "6. Empty the 12-liter jug.\n",
      "7. Pour the 2 liters of water from the 6-liter jug into the 12-liter jug.\n",
      "8. Fill the 6-liter jug completely again.\n",
      "9. Pour the water from the 6-liter jug into the 12-liter jug, which already has 2 liters in it.\n",
      "10. Now, the 12-liter jug will have exactly 6 liters of water (2 liters from before + 4 liters from the 6-liter jug).\u001b[0m\n",
      "Idea 3:\n",
      "\u001b[36;1m\u001b[1;3m1. Fill the 6-liter jug completely.\n",
      "2. Pour the water from the 6-liter jug into the 12-liter jug.\n",
      "3. Fill the 6-liter jug again.\n",
      "4. Carefully pour the water from the 6-liter jug into the 12-liter jug until the 12-liter jug is full.\n",
      "5. The amount of water left in the 6-liter jug will be exactly 6 liters.\u001b[0m\n",
      "Critique:\n",
      "\u001b[33;1m\u001b[1;3mIdea 1:\n",
      "1. Fill the 6-liter jug completely. (No flaw)\n",
      "2. Pour the water from the 6-liter jug into the 12-liter jug. (No flaw)\n",
      "3. Fill the 6-liter jug again. (No flaw)\n",
      "4. Carefully pour the water from the 6-liter jug into the 12-liter jug until the 12-liter jug is full. (Flaw: The 12-liter jug will never be full in this step, as it can hold 12 liters and we are only pouring 6 liters into it.)\n",
      "5. The amount of water left in the 6-liter jug will be exactly 6 liters. (Flaw: This statement is incorrect, as there will be no water left in the 6-liter jug after pouring it into the 12-liter jug.)\n",
      "\n",
      "Idea 2:\n",
      "1. Fill the 6-liter jug completely. (No flaw)\n",
      "2. Pour the water from the 6-liter jug into the 12-liter jug. (No flaw)\n",
      "3. Fill the 6-liter jug again. (No flaw)\n",
      "4. Carefully pour the water from the 6-liter jug into the 12-liter jug until the 12-liter jug is full. (Flaw: The 12-liter jug will never be full in this step, as it can hold 12 liters and we are only pouring 6 liters into it.)\n",
      "5. Since the 12-liter jug is now full, there will be 2 liters of water left in the 6-liter jug. (Flaw: This statement is incorrect, as the 12-liter jug will not be full and there will be no water left in the 6-liter jug.)\n",
      "6. Empty the 12-liter jug. (No flaw)\n",
      "7. Pour the 2 liters of water from the 6-liter jug into the 12-liter jug. (Flaw: This step is based on the incorrect assumption that there are 2 liters of water left in the 6-liter jug.)\n",
      "8. Fill the 6-liter jug completely again. (No flaw)\n",
      "9. Pour the water from the 6-liter jug into the 12-liter jug, which already has 2 liters in it. (Flaw: This step is based on the incorrect assumption that there are 2 liters of water in the 12-liter jug.)\n",
      "10. Now, the 12-liter jug will have exactly 6 liters of water (2 liters from before + 4 liters from the 6-liter jug). (Flaw: This conclusion is based on the incorrect assumptions made in the previous steps.)\n",
      "\n",
      "Idea 3:\n",
      "1. Fill the 6-liter jug completely. (No flaw)\n",
      "2. Pour the water from the 6-liter jug into the 12-liter jug. (No flaw)\n",
      "3. Fill the 6-liter jug again. (No flaw)\n",
      "4. Carefully pour the water from the 6-liter jug into the 12-liter jug until the 12-liter jug is full. (Flaw: The 12-liter jug will never be full in this step, as it can hold 12 liters and we are only pouring 6 liters into it.)\n",
      "5. The amount of water left in the 6-liter jug will be exactly 6 liters. (Flaw: This statement is incorrect, as there will be no water left in the 6-liter jug after pouring it into the 12-liter jug.)\u001b[0m\n",
      "Resolution:\n",
      "\u001b[32;1m\u001b[1;3m1. Fill the 12-liter jug completely.\n",
      "2. Pour the water from the 12-liter jug into the 6-liter jug until the 6-liter jug is full.\n",
      "3. The amount of water left in the 12-liter jug will be exactly 6 liters.\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'1. Fill the 12-liter jug completely.\\n2. Pour the water from the 12-liter jug into the 6-liter jug until the 6-liter jug is full.\\n3. The amount of water left in the 12-liter jug will be exactly 6 liters.'"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.invoke({})"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "bbfebea1",
   "metadata": {},
   "source": [
    "##### Different LLM for different steps"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "5be6ec08",
   "metadata": {},
   "source": [
    "You can also use different LLMs for the different steps by passing `ideation_llm`, `critique_llm` and `resolve_llm`. You might want to do this to use a more creative (i.e., high-temperature) model for ideation and a stricter (i.e., low-temperature) model for critique and resolution."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "9c33fa19",
   "metadata": {},
   "outputs": [],
   "source": [
    "chain = SmartLLMChain(\n",
    "    ideation_llm=ChatOpenAI(temperature=0.9, model_name=\"gpt-4\"),\n",
    "    llm=ChatOpenAI(\n",
    "        temperature=0, model_name=\"gpt-4\"\n",
    "    ),  # will be used for critique and resolution as no specific llms are given\n",
    "    prompt=prompt,\n",
    "    n_ideas=3,\n",
    "    verbose=True,\n",
    ")"
   ]
  },
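  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "5be6ec09",
   "metadata": {},
   "source": [
    "Running this chain works exactly the same way as before. (If you also want to pin the critique and resolution models explicitly, pass `critique_llm` and `resolve_llm` in the same way as `ideation_llm` above.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7d0e912",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run the multi-LLM chain just like the single-LLM one above\n",
    "chain.invoke({})"
   ]
  },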
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "886c1cc1",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}