{
"cells": [
{
"cell_type": "raw",
"id": "7320f16b",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Llama 2 Chat\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "90a1faf2",
"metadata": {},
"source": [
"# Llama2Chat\n",
"\n",
"This notebook shows how to augment Llama-2 `LLM`s with the `Llama2Chat` wrapper to support the [Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Several `LLM` implementations in LangChain can be used as an interface to Llama-2 chat models. These include [ChatHuggingFace](/docs/integrations/chat/huggingface), [LlamaCpp](/docs/use_cases/question_answering/local_retrieval_qa), and [GPT4All](/docs/integrations/llms/gpt4all), to name a few.\n",
"\n",
"`Llama2Chat` is a generic wrapper that implements `BaseChatModel` and can therefore be used in applications as a [chat model](/docs/modules/model_io/chat/). `Llama2Chat` converts a list of Messages into the [required chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and forwards the formatted prompt as a `str` to the wrapped `LLM`."
]
},
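{
"cell_type": "markdown",
"id": "a1b2c3d4",
"metadata": {},
"source": [
"To make the conversion concrete, here is a minimal, hand-rolled sketch of the [Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). The `format_llama2_prompt` helper is hypothetical and for illustration only; `Llama2Chat` performs the equivalent conversion for you."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2c3d4e5",
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical helper -- illustrates the prompt string Llama2Chat builds.\n",
"def format_llama2_prompt(system, turns, next_user_msg):\n",
"    \"\"\"Format a conversation into the Llama-2 chat prompt format.\n",
"\n",
"    `turns` is a list of (user_message, assistant_answer) pairs.\n",
"    \"\"\"\n",
"    prompt = f\"<s>[INST] <<SYS>>\\n{system}\\n<</SYS>>\\n\\n\"\n",
"    for i, (user, assistant) in enumerate(turns):\n",
"        if i > 0:\n",
"            prompt += \"<s>[INST] \"\n",
"        prompt += f\"{user} [/INST] {assistant} </s>\"\n",
"    if turns:\n",
"        prompt += \"<s>[INST] \"\n",
"    prompt += f\"{next_user_msg} [/INST]\"\n",
"    return prompt\n",
"\n",
"\n",
"print(format_llama2_prompt(\"You are a helpful assistant.\", [], \"Hi!\"))"
]
},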
{
"cell_type": "code",
"execution_count": 1,
"id": "36c03540",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_experimental.chat_models import Llama2Chat"
]
},
{
"cell_type": "markdown",
"id": "5c76910f",
"metadata": {},
"source": [
"For the chat application examples below, we'll use the following chat `prompt_template`:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9bbfaf3a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.chat import (\n",
"    ChatPromptTemplate,\n",
"    HumanMessagePromptTemplate,\n",
"    MessagesPlaceholder,\n",
")\n",
"from langchain_core.messages import SystemMessage\n",
"\n",
"template_messages = [\n",
"    SystemMessage(content=\"You are a helpful assistant.\"),\n",
"    MessagesPlaceholder(variable_name=\"chat_history\"),\n",
"    HumanMessagePromptTemplate.from_template(\"{text}\"),\n",
"]\n",
"prompt_template = ChatPromptTemplate.from_messages(template_messages)"
]
},
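{
"cell_type": "markdown",
"id": "c3d4e5f6",
"metadata": {},
"source": [
"To inspect what this template produces, you can optionally render it with sample values; `format_messages` fills the placeholders and returns the resulting message list:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4e5f6a7",
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check: render the template with an empty chat history.\n",
"prompt_template.format_messages(chat_history=[], text=\"What can I see in Vienna?\")"
]
},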
{
"cell_type": "markdown",
"id": "2f3343b7",
"metadata": {},
"source": [
"## Chat with Llama-2 via `HuggingFaceTextGenInference` LLM"
]
},
{
"cell_type": "markdown",
"id": "2ff99380",
"metadata": {},
"source": [
"A `HuggingFaceTextGenInference` LLM encapsulates access to a [text-generation-inference](https://github.com/huggingface/text-generation-inference) server. In the following example, the inference server serves a [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) model. It can be started locally with:\n",
"\n",
"```bash\n",
"docker run \\\n",
"    --rm \\\n",
"    --gpus all \\\n",
"    --ipc=host \\\n",
"    -p 8080:80 \\\n",
"    -v ~/.cache/huggingface/hub:/data \\\n",
"    -e HF_API_TOKEN=${HF_API_TOKEN} \\\n",
"    ghcr.io/huggingface/text-generation-inference:0.9 \\\n",
"    --hostname 0.0.0.0 \\\n",
"    --model-id meta-llama/Llama-2-13b-chat-hf \\\n",
"    --quantize bitsandbytes \\\n",
"    --num-shard 4\n",
"```\n",
"\n",
"This works on a machine with 4 x RTX 3080 Ti cards, for example. Adjust the `--num-shard` value to the number of available GPUs. The `HF_API_TOKEN` environment variable holds the Hugging Face API token."
]
},
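{
"cell_type": "markdown",
"id": "e5f6a7b8",
"metadata": {},
"source": [
"Once the container is up, you can optionally verify that the server responds before wiring it into LangChain (this assumes the port mapping from the command above):\n",
"\n",
"```bash\n",
"curl 127.0.0.1:8080/generate \\\n",
"    -X POST \\\n",
"    -H 'Content-Type: application/json' \\\n",
"    -d '{\"inputs\": \"What is Deep Learning?\", \"parameters\": {\"max_new_tokens\": 20}}'\n",
"```"
]
},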
{
"cell_type": "code",
"execution_count": null,
"id": "238095fd",
"metadata": {},
"outputs": [],
"source": [
"# !pip3 install text-generation"
]
},
{
"cell_type": "markdown",
"id": "79c4ace9",
"metadata": {},
"source": [
"Create a `HuggingFaceTextGenInference` instance that connects to the local inference server and wrap it into `Llama2Chat`."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "7a9f6de2",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import HuggingFaceTextGenInference\n",
"\n",
"llm = HuggingFaceTextGenInference(\n",
"    inference_server_url=\"http://127.0.0.1:8080/\",\n",
"    max_new_tokens=512,\n",
"    top_k=50,\n",
"    temperature=0.1,\n",
"    repetition_penalty=1.03,\n",
")\n",
"\n",
"model = Llama2Chat(llm=llm)"
]
},
{
"cell_type": "markdown",
"id": "4f646a2b",
"metadata": {},
"source": [
"Then you are ready to use the chat `model` together with `prompt_template` and conversation `memory` in an `LLMChain`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "54b5d1d1",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)\n",
"chain = LLMChain(llm=model, prompt=prompt_template, memory=memory)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e6717947",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Sure, I'd be happy to help! Here are a few popular locations to consider visiting in Vienna:\n",
"\n",
"1. Schönbrunn Palace\n",
"2. St. Stephen's Cathedral\n",
"3. Hofburg Palace\n",
"4. Belvedere Palace\n",
"5. Prater Park\n",
"6. Vienna State Opera\n",
"7. Albertina Museum\n",
"8. Museum of Natural History\n",
"9. Kunsthistorisches Museum\n",
"10. Ringstrasse\n"
]
}
],
"source": [
"print(\n",
"    chain.run(\n",
"        text=\"What can I see in Vienna? Propose a few locations. Names only, no details.\"\n",
"    )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "17bf10d5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Certainly! St. Stephen's Cathedral (Stephansdom) is one of the most recognizable landmarks in Vienna and a must-see attraction for visitors. This stunning Gothic cathedral is located in the heart of the city and is known for its intricate stone carvings, colorful stained glass windows, and impressive dome.\n",
"\n",
"The cathedral was built in the 12th century and has been the site of many important events throughout history, including the coronation of Holy Roman emperors and the funeral of Mozart. Today, it is still an active place of worship and offers guided tours, concerts, and special events. Visitors can climb up the south tower for panoramic views of the city or attend a service to experience the beautiful music and chanting.\n"
]
}
],
"source": [
"print(chain.run(text=\"Tell me more about #2.\"))"
]
},
{
"cell_type": "markdown",
"id": "2a297e09",
"metadata": {},
"source": [
"## Chat with Llama-2 via `LlamaCpp` LLM"
]
},
{
"cell_type": "markdown",
"id": "52c1a0b9",
"metadata": {},
"source": [
"To use a Llama-2 chat model with a [LlamaCpp](/docs/integrations/llms/llamacpp) `LLM`, install the `llama-cpp-python` library using [these installation instructions](/docs/integrations/llms/llamacpp#installation). The following example uses a quantized [llama-2-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_0.gguf) model stored locally at `~/Models/llama-2-7b-chat.Q4_0.gguf`, fetched as shown below."
]
},
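{
"cell_type": "markdown",
"id": "f6a7b8c9",
"metadata": {},
"source": [
"If the model file is not present yet, one way to fetch it is with the `huggingface_hub` client (`pip install huggingface_hub`). This step is optional and illustrative; the target directory is an assumption, so adjust it as needed."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7b8c9d0",
"metadata": {},
"outputs": [],
"source": [
"# Optional: download the quantized model into ~/Models (assumed location).\n",
"from os.path import expanduser\n",
"\n",
"from huggingface_hub import hf_hub_download\n",
"\n",
"hf_hub_download(\n",
"    repo_id=\"TheBloke/Llama-2-7b-Chat-GGUF\",\n",
"    filename=\"llama-2-7b-chat.Q4_0.gguf\",\n",
"    local_dir=expanduser(\"~/Models\"),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b8c9d0e1",
"metadata": {},
"source": [
"After creating a `LlamaCpp` instance, the `llm` is again wrapped into `Llama2Chat`"
]
},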
{
"cell_type": "code",
"execution_count": null,
"id": "18d10bc3-ede6-4410-a867-7c623a0efdb8",
"metadata": {},
"outputs": [],
"source": [
"from os.path import expanduser\n",
"\n",
"from langchain_community.llms import LlamaCpp\n",
"\n",
"model_path = expanduser(\"~/Models/llama-2-7b-chat.Q4_0.gguf\")\n",
"\n",
"llm = LlamaCpp(\n",
"    model_path=model_path,\n",
"    streaming=False,\n",
")\n",
"model = Llama2Chat(llm=llm)"
]
},
{
"cell_type": "markdown",
"id": "50498d96",
"metadata": {},
"source": [
"and used in the same way as in the previous example."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "90782b96",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)\n",
"chain = LLMChain(llm=model, prompt=prompt_template, memory=memory)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "2160b26d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Of course! Vienna is a beautiful city with a rich history and culture. Here are some of the top tourist attractions you might want to consider visiting:\n",
"1. Schönbrunn Palace\n",
"2. St. Stephen's Cathedral\n",
"3. Hofburg Palace\n",
"4. Belvedere Palace\n",
"5. Prater Park\n",
"6. MuseumsQuartier\n",
"7. Ringstrasse\n",
"8. Vienna State Opera\n",
"9. Kunsthistorisches Museum\n",
"10. Imperial Palace\n",
"\n",
"These are just a few of the many amazing places to see in Vienna. Each one has its own unique history and charm, so I hope you enjoy exploring this beautiful city!\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 250.46 ms\n",
"llama_print_timings: sample time = 56.40 ms / 144 runs ( 0.39 ms per token, 2553.37 tokens per second)\n",
"llama_print_timings: prompt eval time = 1444.25 ms / 47 tokens ( 30.73 ms per token, 32.54 tokens per second)\n",
"llama_print_timings: eval time = 8832.02 ms / 143 runs ( 61.76 ms per token, 16.19 tokens per second)\n",
"llama_print_timings: total time = 10645.94 ms\n"
]
}
],
"source": [
"print(\n",
"    chain.run(\n",
"        text=\"What can I see in Vienna? Propose a few locations. Names only, no details.\"\n",
"    )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "d9ce06e3",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Of course! St. Stephen's Cathedral (also known as Stephansdom) is a stunning Gothic-style cathedral located in the heart of Vienna, Austria. It is one of the most recognizable landmarks in the city and is considered a symbol of Vienna.\n",
"Here are some interesting facts about St. Stephen's Cathedral:\n",
"1. History: The construction of St. Stephen's Cathedral began in the 12th century on the site of a former Romanesque church, and it took over 600 years to complete. The cathedral has been renovated and expanded several times throughout its history, with the most significant renovation taking place in the 19th century.\n",
"2. Architecture: St. Stephen's Cathedral is built in the Gothic style, characterized by its tall spires, pointed arches, and intricate stone carvings. The cathedral features a mix of Romanesque, Gothic, and Baroque elements, making it a unique blend of styles.\n",
"3. Design: The cathedral's design is based on the plan of a cross with a long nave and two shorter arms extending from it. The main altar is\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 250.46 ms\n",
"llama_print_timings: sample time = 100.60 ms / 256 runs ( 0.39 ms per token, 2544.73 tokens per second)\n",
"llama_print_timings: prompt eval time = 5128.71 ms / 160 tokens ( 32.05 ms per token, 31.20 tokens per second)\n",
"llama_print_timings: eval time = 16193.02 ms / 255 runs ( 63.50 ms per token, 15.75 tokens per second)\n",
"llama_print_timings: total time = 21988.57 ms\n"
]
}
],
"source": [
"print(chain.run(text=\"Tell me more about #2.\"))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}