{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "GDDVue_1cq6d"
},
"source": [
"# NVIDIA NIMs\n",
"\n",
"The `langchain-nvidia-ai-endpoints` package contains LangChain integrations for building applications with models on \n",
"NVIDIA NIM inference microservices. NIM supports models across domains like chat, embedding, and re-ranking models \n",
"from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA \n",
"accelerated infrastructure and deployed as a NIM, an easy-to-use, prebuilt container that deploys anywhere using a single \n",
"command on NVIDIA accelerated infrastructure.\n",
"\n",
"NVIDIA-hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing, \n",
"NIMs can be exported from NVIDIA’s API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, \n",
"giving enterprises ownership and full control of their IP and AI application.\n",
"\n",
"NIMs are packaged as container images on a per-model basis and are distributed as NGC container images through the NVIDIA NGC Catalog. \n",
"At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.\n",
"\n",
"This example goes over how to use LangChain to interact with the supported [NVIDIA Retrieval QA Embedding Model](https://build.nvidia.com/nvidia/embed-qa-4) for [retrieval-augmented generation](https://developer.nvidia.com/blog/build-enterprise-retrieval-augmented-generation-apps-with-nvidia-retrieval-qa-embedding-model/) via the `NVIDIAEmbeddings` class.\n",
"\n",
"For more information on accessing the chat models through this API, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-nvidia-ai-endpoints"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qKcxQMFTcwWi"
},
"source": [
"## Setup\n",
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
"\n",
"2. Select the `Retrieval` tab, then select your model of choice.\n",
"\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
"4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "hoF41-tNczS3",
"outputId": "7f2833dc-191c-4d73-b823-7b2745a93a2f"
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# del os.environ['NVIDIA_API_KEY'] ## delete key and reset\n",
"if os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n",
"    print(\"Valid NVIDIA_API_KEY already in environment. Delete to reset\")\n",
"else:\n",
"    nvapi_key = getpass.getpass(\"NVAPI Key (starts with nvapi-): \")\n",
"    assert nvapi_key.startswith(\"nvapi-\"), f\"{nvapi_key[:5]}... is not a valid key\"\n",
"    os.environ[\"NVIDIA_API_KEY\"] = nvapi_key"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "l185et2kc8pS"
},
"source": [
"The catalog's `Retrieval` tab lists embedding models that can be used in conjunction with an LLM for effective RAG solutions. We can interface with these, as well as other embedding models supported by NIM, through the `NVIDIAEmbeddings` class."
]
},
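{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can list the embedding models the endpoint advertises. This assumes a recent `langchain-nvidia-ai-endpoints` version that provides the `get_available_models()` helper; skip this cell if your installed version does not have it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings\n",
"\n",
"# Assumption: recent package versions expose get_available_models(),\n",
"# and each returned entry carries a model id.\n",
"for model in NVIDIAEmbeddings.get_available_models():\n",
"    print(model.id)"
]
},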
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Working with NIMs on the NVIDIA API Catalog\n",
"\n",
"When initializing an embedding model, you can select a model by passing it, e.g. `NV-Embed-QA` below, or use the default by not passing any arguments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "hbXmJssPdIPX"
},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings\n",
"\n",
"embedder = NVIDIAEmbeddings(model=\"NV-Embed-QA\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SvQijbCwdLXB"
},
"source": [
"This model is a fine-tuned E5-large model that supports the expected `Embeddings` methods, including:\n",
"\n",
"- `embed_query`: Generate a query embedding for a query sample.\n",
"\n",
"- `embed_documents`: Generate passage embeddings for a list of documents which you would like to search over.\n",
"\n",
"- `aembed_query`/`aembed_documents`: Asynchronous versions of the above."
]
},
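{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal, illustrative smoke test of these methods. The sample strings are arbitrary, the reported vector length depends on the model, and top-level `await` works in Jupyter."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative check of the methods above; sample strings are arbitrary.\n",
"vector = embedder.embed_query(\"How do I deploy a NIM container?\")\n",
"print(\"embed_query dimensions:\", len(vector))\n",
"\n",
"vectors = embedder.embed_documents([\"NIMs can be deployed with a single command.\"])\n",
"print(\"embed_documents shape:\", (len(vectors), len(vectors[0])))\n",
"\n",
"# The async variants mirror the sync ones (top-level await works in Jupyter).\n",
"avector = await embedder.aembed_query(\"How do I deploy a NIM container?\")\n",
"print(\"aembed_query dimensions:\", len(avector))"
]
},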
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Working with self-hosted NVIDIA NIMs\n",
"\n",
"When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.\n",
"\n",
"[Learn more about NIMs](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings\n",
"\n",
"# connect to an embedding NIM running at localhost:8080\n",
"embedder = NVIDIAEmbeddings(base_url=\"http://localhost:8080/v1\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pcDu3v4CbmWk"
},
"source": [
"### **Similarity**\n",
"\n",
"The following is a quick test of the similarity for these data points:\n",
"\n",
"**Queries:**\n",
"\n",
"- What's the weather like in Kamchatka?\n",
"\n",
"- What kinds of food is Italy known for?\n",
"\n",
"- What's my name? I bet you don't remember...\n",
"\n",
"- What's the point of life anyways?\n",
"\n",
"- The point of life is to have fun :D\n",
"\n",
"**Documents:**\n",
"\n",
"- Kamchatka's weather is cold, with long, severe winters.\n",
"\n",
"- Italy is famous for pasta, pizza, gelato, and espresso.\n",
"\n",
"- I can't recall personal names, only provide information.\n",
"\n",
"- Life's purpose varies, often seen as personal fulfillment.\n",
"\n",
"- Enjoying life's moments is indeed a wonderful approach."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xrmtRzgXdhMF"
},
"source": [
"### Embedding Runtimes"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "bUQM6OoObM_C",
"outputId": "afbb1ea0-4f14-46b0-da42-25c5ae8eab2e"
},
"outputs": [],
"source": [
"print(\"\\nSequential Embedding: \")\n",
"q_embeddings = [\n",
"    embedder.embed_query(\"What's the weather like in Kamchatka?\"),\n",
"    embedder.embed_query(\"What kinds of food is Italy known for?\"),\n",
"    embedder.embed_query(\"What's my name? I bet you don't remember...\"),\n",
"    embedder.embed_query(\"What's the point of life anyways?\"),\n",
"    embedder.embed_query(\"The point of life is to have fun :D\"),\n",
"]\n",
"print(\"Shape:\", (len(q_embeddings), len(q_embeddings[0])))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SfX00xRdbKDw"
},
"source": [
"### Document Embedding"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "o1vKyTx-O_vZ",
"outputId": "a8d864a8-01e8-4431-ee8a-b466d8348bef"
},
"outputs": [],
"source": [
"print(\"\\nBatch Document Embedding: \")\n",
"d_embeddings = embedder.embed_documents(\n",
"    [\n",
"        \"Kamchatka's weather is cold, with long, severe winters.\",\n",
"        \"Italy is famous for pasta, pizza, gelato, and espresso.\",\n",
"        \"I can't recall personal names, only provide information.\",\n",
"        \"Life's purpose varies, often seen as personal fulfillment.\",\n",
"        \"Enjoying life's moments is indeed a wonderful approach.\",\n",
"    ]\n",
")\n",
"print(\"Shape:\", (len(d_embeddings), len(d_embeddings[0])))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "E6AilXxjdm1I"
},
"source": [
"Now that we've generated our embeddings, we can do a simple similarity check on the results to see which documents would have triggered as reasonable answers in a retrieval task:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet matplotlib scikit-learn"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 564
},
"id": "7szaiBBYCHQ-",
"outputId": "86b6d2c4-6bee-4324-f7b1-3fcf2b940763"
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"from sklearn.metrics.pairwise import cosine_similarity\n",
"\n",
"# Compute the similarity matrix between q_embeddings and d_embeddings\n",
"cross_similarity_matrix = cosine_similarity(\n",
"    np.array(q_embeddings),\n",
"    np.array(d_embeddings),\n",
")\n",
"\n",
"# Plotting the cross-similarity matrix: rows are queries, columns are documents\n",
"plt.figure(figsize=(8, 6))\n",
"plt.imshow(cross_similarity_matrix, cmap=\"Greens\", interpolation=\"nearest\")\n",
"plt.colorbar()\n",
"plt.title(\"Cross-Similarity Matrix\")\n",
"plt.xlabel(\"Document Embeddings\")\n",
"plt.ylabel(\"Query Embeddings\")\n",
"plt.grid(True)\n",
"plt.show()"
]
},
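{
"cell_type": "markdown",
"metadata": {},
"source": [
"To read the matrix numerically rather than visually, this small optional follow-up reuses the arrays computed above and prints the best-matching document for each query:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# For each query (row), report the document (column) with the highest\n",
"# cosine similarity; indices follow the query/document lists above.\n",
"for i, row in enumerate(cross_similarity_matrix):\n",
"    best = int(np.argmax(row))\n",
"    print(f\"Query {i} best matches document {best} (score {row[best]:.3f})\")"
]
},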
{
"cell_type": "markdown",
"metadata": {
"id": "K5sLkHWZcRF2"
},
"source": [
"As a reminder, the queries and documents sent to our system were:\n",
"\n",
"**Queries:**\n",
"\n",
"- What's the weather like in Kamchatka?\n",
"\n",
"- What kinds of food is Italy known for?\n",
"\n",
"- What's my name? I bet you don't remember...\n",
"\n",
"- What's the point of life anyways?\n",
"\n",
"- The point of life is to have fun :D\n",
"\n",
"**Documents:**\n",
"\n",
"- Kamchatka's weather is cold, with long, severe winters.\n",
"\n",
"- Italy is famous for pasta, pizza, gelato, and espresso.\n",
"\n",
"- I can't recall personal names, only provide information.\n",
"\n",
"- Life's purpose varies, often seen as personal fulfillment.\n",
"\n",
"- Enjoying life's moments is indeed a wonderful approach."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Truncation\n",
"\n",
"Embedding models typically have a fixed context window that determines the maximum number of input tokens that can be embedded. This limit could be a hard limit, equal to the model's maximum input token length, or an effective limit, beyond which the accuracy of the embedding decreases.\n",
"\n",
"Since models operate on tokens and applications usually work with text, it can be challenging for an application to ensure that its input stays within the model's token limits. By default, an exception is thrown if the input is too large.\n",
"\n",
"To assist with this, NVIDIA's NIMs (API Catalog or local) provide a `truncate` parameter that truncates the input on the server side if it's too large.\n",
"\n",
"The `truncate` parameter has three options:\n",
" - \"NONE\": The default option. An exception is thrown if the input is too large.\n",
" - \"START\": The server truncates the input from the start (left), discarding tokens as necessary.\n",
" - \"END\": The server truncates the input from the end (right), discarding tokens as necessary."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"long_text = \"AI is amazing, amazing is \" * 100"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"strict_embedder = NVIDIAEmbeddings()\n",
"try:\n",
"    strict_embedder.embed_query(long_text)\n",
"except Exception as e:\n",
"    print(\"Error:\", e)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"truncating_embedder = NVIDIAEmbeddings(truncate=\"END\")\n",
"truncating_embedder.embed_query(long_text)[:5]"
]
},
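{
"cell_type": "markdown",
"metadata": {},
"source": [
"For completeness, the \"START\" option described above can be exercised the same way; it keeps the tail of the input instead of the head:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# \"START\" truncation discards leading tokens and keeps the end of the input\n",
"start_truncating_embedder = NVIDIAEmbeddings(truncate=\"START\")\n",
"start_truncating_embedder.embed_query(long_text)[:5]"
]
},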
{
"cell_type": "markdown",
"metadata": {
"id": "RNIeY4N96v3B"
},
"source": [
"## RAG Retrieval\n",
"\n",
"The following is a repurposing of the initial example of the [LangChain Expression Language Retrieval Cookbook entry](\n",
"https://python.langchain.com/docs/expression_language/cookbook/retrieval), but executed with the AI Foundation Models' [Mixtral 8x7B Instruct](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/mixtral-8x7b) and [NVIDIA Retrieval QA Embedding](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/nvolve-40k) models available in their playground environments. The subsequent examples in the cookbook also run as expected, and we encourage you to explore these options.\n",
"\n",
"**TIP:** We recommend using Mixtral for internal reasoning (i.e., instruction following for data extraction, tool selection, etc.) and Llama-Chat for a single final wrap-up response tailored to the user based on the history and context."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zn_zeRGP64DJ"
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain faiss-cpu tiktoken langchain_community\n",
"\n",
"from operator import itemgetter\n",
"\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 36
},
"id": "zIXyr9Vd7CED",
"outputId": "a8d36812-c3e0-4fd4-804a-4b5ba43948e5"
},
"outputs": [],
"source": [
"vectorstore = FAISS.from_texts(\n",
"    [\"harrison worked at kensho\"],\n",
"    embedding=NVIDIAEmbeddings(model=\"NV-Embed-QA\"),\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\n",
"            \"system\",\n",
"            \"Answer solely based on the following context:\\n<Documents>\\n{context}\\n</Documents>\",\n",
"        ),\n",
"        (\"user\", \"{question}\"),\n",
"    ]\n",
")\n",
"\n",
"model = ChatNVIDIA(model=\"ai-mixtral-8x7b-instruct\")\n",
"\n",
"chain = (\n",
"    {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
"    | prompt\n",
"    | model\n",
"    | StrOutputParser()\n",
")\n",
"\n",
"chain.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 36
},
"id": "OuY62kJ28oNK",
"outputId": "672ff6df-64d8-442b-9143-f69dbc09f763"
},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\n",
"            \"system\",\n",
"            \"Answer using information solely based on the following context:\\n<Documents>\\n{context}\\n</Documents>\"\n",
"            \"\\nSpeak only in the following language: {language}\",\n",
"        ),\n",
"        (\"user\", \"{question}\"),\n",
"    ]\n",
")\n",
"\n",
"chain = (\n",
"    {\n",
"        \"context\": itemgetter(\"question\") | retriever,\n",
"        \"question\": itemgetter(\"question\"),\n",
"        \"language\": itemgetter(\"language\"),\n",
"    }\n",
"    | prompt\n",
"    | model\n",
"    | StrOutputParser()\n",
")\n",
"\n",
"chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}