Mirror of https://github.com/hwchase17/langchain.git (synced 2025-10-31 16:08:59 +00:00)
In the section `Get Message Completions from a Chat Model` of the quick start guide, the HumanMessage doesn't need to include `Translate this sentence from English to French.` when there is a system message. Simplifying the HumanMessages in these examples would further demonstrate the power of the LLM.
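For example, a sketch of the suggested simplification (it mirrors the system-message example already in the notebook):

```python
from langchain.schema import HumanMessage, SystemMessage

# Instead of packing the instruction into the human turn:
#   HumanMessage(content="Translate this sentence from English to French. I love programming.")
# let the system message carry the instruction and keep the human turn minimal:
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming."),
]
```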
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "e49f1e0d",
   "metadata": {},
   "source": [
    "# Getting Started\n",
    "\n",
    "This notebook covers how to get started with chat models. The interface is based around messages rather than raw text."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "522686de",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain import PromptTemplate, LLMChain\n",
    "from langchain.prompts.chat import (\n",
    "    ChatPromptTemplate,\n",
    "    SystemMessagePromptTemplate,\n",
    "    AIMessagePromptTemplate,\n",
    "    HumanMessagePromptTemplate,\n",
    ")\n",
    "from langchain.schema import (\n",
    "    AIMessage,\n",
    "    HumanMessage,\n",
    "    SystemMessage\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "62e0dbc3",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "chat = ChatOpenAI(temperature=0)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bbaec18e-3684-4eef-955f-c1cec8bf765d",
   "metadata": {},
   "source": [
    "You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, and `ChatMessage` -- `ChatMessage` takes in an arbitrary role parameter. Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "76a6e7b0-e927-4bfb-a414-1332a4149106",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"J'aime programmer.\", additional_kwargs={})"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat([HumanMessage(content=\"Translate this sentence from English to French. I love programming.\")])"
   ]
  },
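  {
   "cell_type": "markdown",
   "id": "chatmessage-note",
   "metadata": {},
   "source": [
    "`ChatMessage` is the one message type not demonstrated above. A minimal sketch of constructing one, assuming `ChatMessage` from `langchain.schema` -- the role string is arbitrary (here it mirrors what `AIMessage` would mean), and whether a given provider accepts custom roles varies:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "chatmessage-example",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.schema import ChatMessage\n",
    "\n",
    "# A ChatMessage pairs an arbitrary role string with the content,\n",
    "# instead of the fixed roles implied by HumanMessage/AIMessage/SystemMessage.\n",
    "ChatMessage(role=\"assistant\", content=\"J'aime programmer.\")"
   ]
  },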
  {
   "cell_type": "markdown",
   "id": "a62153d4-1211-411b-a493-3febfe446ae0",
   "metadata": {},
   "source": [
    "OpenAI's chat model supports multiple messages as input. See [here](https://platform.openai.com/docs/guides/chat/chat-vs-completions) for more information. Here is an example of sending a system and user message to the chat model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "ce16ad78-8e6f-48cd-954e-98be75eb5836",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"J'aime programmer.\", additional_kwargs={})"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages = [\n",
    "    SystemMessage(content=\"You are a helpful assistant that translates English to French.\"),\n",
    "    HumanMessage(content=\"I love programming.\")\n",
    "]\n",
    "chat(messages)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "36dc8d7e-bd25-47ac-8c1b-60e3422603d3",
   "metadata": {},
   "source": [
    "You can go one step further and generate completions for multiple sets of messages using `generate`. This returns an `LLMResult` with an additional `message` parameter."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "2b21fc52-74b6-4950-ab78-45d12c68fb4d",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "LLMResult(generations=[[ChatGeneration(text=\"J'aime programmer.\", generation_info=None, message=AIMessage(content=\"J'aime programmer.\", additional_kwargs={}))], [ChatGeneration(text=\"J'aime l'intelligence artificielle.\", generation_info=None, message=AIMessage(content=\"J'aime l'intelligence artificielle.\", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "batch_messages = [\n",
    "    [\n",
    "        SystemMessage(content=\"You are a helpful assistant that translates English to French.\"),\n",
    "        HumanMessage(content=\"I love programming.\")\n",
    "    ],\n",
    "    [\n",
    "        SystemMessage(content=\"You are a helpful assistant that translates English to French.\"),\n",
    "        HumanMessage(content=\"I love artificial intelligence.\")\n",
    "    ],\n",
    "]\n",
    "result = chat.generate(batch_messages)\n",
    "result"
   ]
  },
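  {
   "cell_type": "markdown",
   "id": "generations-note",
   "metadata": {},
   "source": [
    "`generations` is a list of lists of `ChatGeneration` objects, one inner list per input message set. Each `ChatGeneration` carries both the raw `text` and the corresponding `message`, so you can pull a completion back out like this (a minimal sketch against the batch above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "generations-example",
   "metadata": {},
   "outputs": [],
   "source": [
    "# generations[i][j]: the j-th candidate completion for the i-th message set\n",
    "first = result.generations[0][0]\n",
    "first.text     # \"J'aime programmer.\"\n",
    "first.message  # AIMessage(content=\"J'aime programmer.\", additional_kwargs={})"
   ]
  },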
  {
   "cell_type": "markdown",
   "id": "2960f50f",
   "metadata": {},
   "source": [
    "You can recover things like token usage from this `LLMResult`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "a6186bee",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'token_usage': {'prompt_tokens': 57,\n",
       "  'completion_tokens': 20,\n",
       "  'total_tokens': 77}}"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "result.llm_output"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b10b00ef-f373-4bc3-8302-2dfc28033734",
   "metadata": {},
   "source": [
    "## PromptTemplates"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "778f912a-66ea-4a5d-b3de-6c7db4baba26",
   "metadata": {},
   "source": [
    "You can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplates`. You can use `ChatPromptTemplate`'s `format_prompt` -- this returns a `PromptValue`, which you can convert to a string or a `Message` object, depending on whether you want to use the formatted value as input to an LLM or a chat model.\n",
    "\n",
    "For convenience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "180c5cc8",
   "metadata": {},
   "outputs": [],
   "source": [
    "template = \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
    "system_message_prompt = SystemMessagePromptTemplate.from_template(template)\n",
    "human_template = \"{text}\"\n",
    "human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "fbb043e6",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"J'adore la programmation.\", additional_kwargs={})"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])\n",
    "\n",
    "# get a chat completion from the formatted messages\n",
    "chat(chat_prompt.format_prompt(input_language=\"English\", output_language=\"French\", text=\"I love programming.\").to_messages())"
   ]
  },
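  {
   "cell_type": "markdown",
   "id": "promptvalue-note",
   "metadata": {},
   "source": [
    "The same `PromptValue` can also be flattened into a single string with `to_string` -- the form a plain (non-chat) LLM expects. A minimal sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "promptvalue-example",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt_value = chat_prompt.format_prompt(input_language=\"English\", output_language=\"French\", text=\"I love programming.\")\n",
    "\n",
    "# to_messages() yields chat messages; to_string() renders one flat string\n",
    "prompt_value.to_string()"
   ]
  },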
  {
   "cell_type": "markdown",
   "id": "e28b98da",
   "metadata": {},
   "source": [
    "If you wanted to construct the `MessagePromptTemplate` more directly, you could create a `PromptTemplate` outside and then pass it in, e.g.:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "d5b1ab1c",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = PromptTemplate(\n",
    "    template=\"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
    "    input_variables=[\"input_language\", \"output_language\"],\n",
    ")\n",
    "system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "92af0bba",
   "metadata": {},
   "source": [
    "## LLMChain\n",
    "You can use the existing `LLMChain` in a very similar way to before -- provide a prompt and a model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "f2cbfe3d",
   "metadata": {},
   "outputs": [],
   "source": [
    "chain = LLMChain(llm=chat, prompt=chat_prompt)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "268543b1",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"J'adore la programmation.\""
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.run(input_language=\"English\", output_language=\"French\", text=\"I love programming.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eb779f3f",
   "metadata": {},
   "source": [
    "## Streaming\n",
    "\n",
    "Streaming is supported for `ChatOpenAI` through callback handling."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "509181be",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "Verse 1:\n",
      "Bubbles rising to the top\n",
      "A refreshing drink that never stops\n",
      "Clear and crisp, it's pure delight\n",
      "A taste that's sure to excite\n",
      "\n",
      "Chorus:\n",
      "Sparkling water, oh so fine\n",
      "A drink that's always on my mind\n",
      "With every sip, I feel alive\n",
      "Sparkling water, you're my vibe\n",
      "\n",
      "Verse 2:\n",
      "No sugar, no calories, just pure bliss\n",
      "A drink that's hard to resist\n",
      "It's the perfect way to quench my thirst\n",
      "A drink that always comes first\n",
      "\n",
      "Chorus:\n",
      "Sparkling water, oh so fine\n",
      "A drink that's always on my mind\n",
      "With every sip, I feel alive\n",
      "Sparkling water, you're my vibe\n",
      "\n",
      "Bridge:\n",
      "From the mountains to the sea\n",
      "Sparkling water, you're the key\n",
      "To a healthy life, a happy soul\n",
      "A drink that makes me feel whole\n",
      "\n",
      "Chorus:\n",
      "Sparkling water, oh so fine\n",
      "A drink that's always on my mind\n",
      "With every sip, I feel alive\n",
      "Sparkling water, you're my vibe\n",
      "\n",
      "Outro:\n",
      "Sparkling water, you're the one\n",
      "A drink that's always so much fun\n",
      "I'll never let you go, my friend\n",
      "Sparkling"
     ]
    }
   ],
   "source": [
    "from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
    "chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)\n",
    "resp = chat([HumanMessage(content=\"Write me a song about sparkling water.\")])\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c095285d",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}