langchain/docs/docs/integrations/chat/nvidia_ai_endpoints.ipynb


{
"cells": [
{
"cell_type": "markdown",
"id": "1f666798-8635-4bc0-a515-04d318588d67",
"metadata": {},
"source": [
"---\n",
"sidebar_label: NVIDIA AI Endpoints\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "fa8eb20e-4db8-45e3-9e79-c595f4f274da",
"metadata": {},
"source": [
"# ChatNVIDIA\n",
"\n",
"This will help you getting started with NVIDIA [chat models](/docs/concepts/chat_models). For detailed documentation of all `ChatNVIDIA` features and configurations head to the [API reference](https://python.langchain.com/api_reference/nvidia_ai_endpoints/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html).\n",
"\n",
"## Overview\n",
"The `langchain-nvidia-ai-endpoints` package contains LangChain integrations building applications with models on \n",
"NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking models \n",
"from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA \n",
"accelerated infrastructure and deployed as a NIM, an easy-to-use, prebuilt containers that deploy anywhere using a single \n",
"command on NVIDIA accelerated infrastructure.\n",
"\n",
"NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing, \n",
"NIMs can be exported from NVIDIAs API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, \n",
"giving enterprises ownership and full control of their IP and AI application.\n",
"\n",
"NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog. \n",
"At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.\n",
"\n",
"This example goes over how to use LangChain to interact with NVIDIA supported via the `ChatNVIDIA` class.\n",
"\n",
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/) documentation.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatNVIDIA](https://python.langchain.com/api_reference/nvidia_ai_endpoints/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html) | [langchain_nvidia_ai_endpoints](https://python.langchain.com/api_reference/nvidia_ai_endpoints/index.html) | ✅ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_nvidia_ai_endpoints?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_nvidia_ai_endpoints?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.\n",
"\n",
"2. Click on your model of choice.\n",
"\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
"4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.\n",
"\n",
"### Credentials\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "208b72da-1535-4249-bbd3-2500028e25e9",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"NVIDIA_API_KEY\"):\n",
" # Note: the API key should start with \"nvapi-\"\n",
" os.environ[\"NVIDIA_API_KEY\"] = getpass.getpass(\"Enter your NVIDIA API key: \")"
]
},
{
"cell_type": "markdown",
"id": "52dc8dcb-0a48-4a4e-9947-764116d2ffd4",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2cd9cb12-6ca5-432a-9e42-8a57da073c7e",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "f2be90a9",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain NVIDIA AI Endpoints integration lives in the `langchain_nvidia_ai_endpoints` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e13eb331",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-nvidia-ai-endpoints"
]
},
{
"cell_type": "markdown",
"id": "af0ce26b",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can access models in the NVIDIA API Catalog:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "Jdl2NUfMhi4J",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Jdl2NUfMhi4J",
"outputId": "e9c4cc72-8db6-414b-d8e9-95de93fc5db4"
},
"outputs": [],
"source": [
"## Core LC Chat Interface\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"llm = ChatNVIDIA(model=\"mistralai/mixtral-8x7b-instruct-v0.1\")"
]
},
{
"cell_type": "markdown",
"id": "469c8c7f-de62-457f-a30f-674763a8b717",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9512c81b-1f3a-4eca-9470-f52cedff5c74",
"metadata": {},
"outputs": [],
"source": [
"result = llm.invoke(\"Write a ballad about LangChain.\")\n",
"print(result.content)"
]
},
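{
"cell_type": "markdown",
"id": "usage-metadata-note",
"metadata": {},
"source": [
"Token usage tracking is listed as supported in the model features table above. As a minimal sketch, assuming the endpoint reports token counts on the response, you can inspect the standard LangChain `usage_metadata` attribute on the returned message:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "usage-metadata-example",
"metadata": {},
"outputs": [],
"source": [
"# Inspect token counts reported by the endpoint (standard AIMessage attribute;\n",
"# may be None if the endpoint does not return usage information)\n",
"result.usage_metadata"
]
},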
{
"cell_type": "markdown",
"id": "9d35686b",
"metadata": {},
"source": [
"## Working with NVIDIA NIMs\n",
"When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.\n",
"\n",
"[Learn more about NIMs](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "49838930",
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"# connect to an embedding NIM running at localhost:8000, specifying a specific model\n",
"llm = ChatNVIDIA(base_url=\"http://localhost:8000/v1\", model=\"meta/llama3-8b-instruct\")"
]
},
{
"cell_type": "markdown",
"id": "71d37987-d568-4a73-9d2a-8bd86323f8bf",
"metadata": {},
"source": [
"## Stream, Batch, and Async\n",
"\n",
"These models natively support streaming, and as is the case with all LangChain LLMs they expose a batch method to handle concurrent requests, as well as async methods for invoke, stream, and batch. Below are a few examples."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "01fa5095-be72-47b0-8247-e9fac799435d",
"metadata": {},
"outputs": [],
"source": [
"print(llm.batch([\"What's 2*3?\", \"What's 2*6?\"]))\n",
"# Or via the async API\n",
"# await llm.abatch([\"What's 2*3?\", \"What's 2*6?\"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75189ac6-e13f-414f-9064-075c77d6e754",
"metadata": {},
"outputs": [],
"source": [
"for chunk in llm.stream(\"How far can a seagull fly in one day?\"):\n",
" # Show the token separations\n",
" print(chunk.content, end=\"|\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8a9a4122-7a10-40c0-a979-82a769ce7f6a",
"metadata": {},
"outputs": [],
"source": [
"async for chunk in llm.astream(\n",
" \"How long does it take for monarch butterflies to migrate?\"\n",
"):\n",
" print(chunk.content, end=\"|\")"
]
},
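{
"cell_type": "markdown",
"id": "ainvoke-note",
"metadata": {},
"source": [
"Single invocations are also available asynchronously. A minimal sketch using the standard LangChain `ainvoke` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ainvoke-example",
"metadata": {},
"outputs": [],
"source": [
"# Async counterpart of invoke; returns a single AIMessage\n",
"result = await llm.ainvoke(\"How far can a seagull fly in one day?\")\n",
"print(result.content)"
]
},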
{
"cell_type": "markdown",
"id": "6RrXHC_XqWc1",
"metadata": {
"id": "6RrXHC_XqWc1"
},
"source": [
"## Supported models\n",
"\n",
"Querying `available_models` will still give you all of the other models offered by your API credentials.\n",
"\n",
"The `playground_` prefix is optional."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5b8a312d-38e9-4528-843e-59451bdadbac",
"metadata": {},
"outputs": [],
"source": [
"ChatNVIDIA.get_available_models()\n",
"# llm.get_available_models()"
]
},
{
"cell_type": "markdown",
"id": "d8a407c6-e38b-4cfc-9a33-bcafadc18cf2",
"metadata": {},
"source": [
"## Model types"
]
},
{
"cell_type": "markdown",
"id": "WMW79Iegqj4e",
"metadata": {
"id": "WMW79Iegqj4e"
},
"source": [
"All of these models above are supported and can be accessed via `ChatNVIDIA`. \n",
"\n",
"Some model types support unique prompting techniques and chat messages. We will review a few important ones below.\n",
"\n",
"**To find out more about a specific model, please navigate to the API section of an AI Foundation model [as linked here](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/codellama-13b/api).**"
]
},
{
"cell_type": "markdown",
"id": "03d65053-59fe-40cf-a2d0-55d3dbb13585",
"metadata": {},
"source": [
"### General Chat\n",
"\n",
"Models such as `meta/llama3-8b-instruct` and `mistralai/mixtral-8x22b-instruct-v0.1` are good all-around models that you can use for with any LangChain chat messages. Example below."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f5f7aee8-e90c-4d5a-ac97-0dd3d45c3f4c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", \"You are a helpful AI assistant named Fred.\"), (\"user\", \"{input}\")]\n",
")\n",
"chain = prompt | ChatNVIDIA(model=\"meta/llama3-8b-instruct\") | StrOutputParser()\n",
"\n",
"for txt in chain.stream({\"input\": \"What's your name?\"}):\n",
" print(txt, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "04146118-281b-42ef-b781-2fadeeeea6c8",
"metadata": {},
"source": [
"### Code Generation\n",
"\n",
"These models accept the same arguments and input structure as regular chat models, but they tend to perform better on code-genreation and structured code tasks. An example of this is `meta/codellama-70b`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "49aa569b-5f33-47b3-9edc-df58313eb038",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are an expert coding AI. Respond only in valid python; no narration whatsoever.\",\n",
" ),\n",
" (\"user\", \"{input}\"),\n",
" ]\n",
")\n",
"chain = prompt | ChatNVIDIA(model=\"meta/codellama-70b\") | StrOutputParser()\n",
"\n",
"for txt in chain.stream({\"input\": \"How do I solve this fizz buzz problem?\"}):\n",
" print(txt, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "7f465ff6-5922-41d8-8abb-1d1e4095cc27",
"metadata": {},
"source": [
"## Multimodal\n",
"\n",
"NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is `nvidia/neva-22b`.\n",
"\n",
"Below is an example use:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26625437-1695-440f-b792-b85e6add9a90",
"metadata": {},
"outputs": [],
"source": [
"import IPython\n",
"import requests\n",
"\n",
"image_url = \"https://www.nvidia.com/content/dam/en-zz/Solutions/research/ai-playground/nvidia-picasso-3c33-p@2x.jpg\" ## Large Image\n",
"image_content = requests.get(image_url).content\n",
"\n",
"IPython.display.Image(image_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dfbbe57c-27a5-4cbb-b967-19c4e7d29fd0",
"metadata": {},
"outputs": [],
"source": [
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"llm = ChatNVIDIA(model=\"nvidia/neva-22b\")"
]
},
{
"cell_type": "markdown",
"id": "7ddcb8f1-9cd8-4376-963d-af61c29b2a3c",
"metadata": {},
"source": [
"#### Passing an image as a URL"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "432ea2a2-4d39-43f8-a236-041294171f14",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"llm.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"Describe this image:\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ]\n",
" )\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "0573dd1f-9a17-4c99-ab2a-8d930b89d283",
"metadata": {},
"source": [
"#### Passing an image as a base64 encoded string"
]
},
{
"cell_type": "markdown",
"id": "be1688a5",
"metadata": {},
"source": [
"At the moment, some extra processing happens client-side to support larger images like the one above. But for smaller images (and to better illustrate the process going on under the hood), we can directly pass in the image as shown below: "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c58f1dd0",
"metadata": {},
"outputs": [],
"source": [
"import IPython\n",
"import requests\n",
"\n",
"image_url = \"https://picsum.photos/seed/kitten/300/200\"\n",
"image_content = requests.get(image_url).content\n",
"\n",
"IPython.display.Image(image_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c721629-42eb-4006-bf68-0296f7925ebc",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"\n",
"## Works for simpler images. For larger images, see actual implementation\n",
"b64_string = base64.b64encode(image_content).decode(\"utf-8\")\n",
"\n",
"llm.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"Describe this image:\"},\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": f\"data:image/png;base64,{b64_string}\"},\n",
" },\n",
" ]\n",
" )\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ba958424-28d7-4bc2-9c8e-bd571066853f",
"metadata": {},
"source": [
"#### Directly within the string\n",
"\n",
"The NVIDIA API uniquely accepts images as base64 images inlined within `<img/>` HTML tags. While this isn't interoperable with other LLMs, you can directly prompt the model accordingly."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "00c06a9a-497b-4192-a842-b075e27401aa",
"metadata": {},
"outputs": [],
"source": [
"base64_with_mime_type = f\"data:image/png;base64,{b64_string}\"\n",
"llm.invoke(f'What\\'s in this image?\\n<img src=\"{base64_with_mime_type}\" />')"
]
},
{
"cell_type": "markdown",
"id": "137662a6",
"metadata": {
"id": "137662a6"
},
"source": [
"## Example usage within a RunnableWithMessageHistory"
]
},
{
"cell_type": "markdown",
"id": "79efa62d",
"metadata": {
"id": "79efa62d"
},
"source": [
"Like any other integration, ChatNVIDIA is fine to support chat utilities like RunnableWithMessageHistory which is analogous to using `ConversationChain`. Below, we show the [LangChain RunnableWithMessageHistory](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) example applied to the `mistralai/mixtral-8x22b-instruct-v0.1` model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "082ccb21-91e1-4e71-a9ba-4bff1e89f105",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fd2c6bc1",
"metadata": {
"id": "fd2c6bc1"
},
"outputs": [],
"source": [
"from langchain_core.chat_history import InMemoryChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"# store is a dictionary that maps session IDs to their corresponding chat histories.\n",
"store = {} # memory is maintained outside the chain\n",
"\n",
"\n",
"# A function that returns the chat history for a given session ID.\n",
"def get_session_history(session_id: str) -> InMemoryChatMessageHistory:\n",
" if session_id not in store:\n",
" store[session_id] = InMemoryChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
"\n",
"chat = ChatNVIDIA(\n",
" model=\"mistralai/mixtral-8x22b-instruct-v0.1\",\n",
" temperature=0.1,\n",
" max_tokens=100,\n",
" top_p=1.0,\n",
")\n",
"\n",
"# Define a RunnableConfig object, with a `configurable` key. session_id determines thread\n",
"config = {\"configurable\": {\"session_id\": \"1\"}}\n",
"\n",
"conversation = RunnableWithMessageHistory(\n",
" chat,\n",
" get_session_history,\n",
")\n",
"\n",
"conversation.invoke(\n",
" \"Hi I'm Srijan Dubey.\", # input or query\n",
" config=config,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "uHIMZxVSVNBC",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 284
},
"id": "uHIMZxVSVNBC",
"outputId": "79acc89d-a820-4f2c-bac2-afe99da95580"
},
"outputs": [],
"source": [
"conversation.invoke(\n",
" \"I'm doing well! Just having a conversation with an AI.\",\n",
" config=config,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "LyD1xVKmVSs4",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 350
},
"id": "LyD1xVKmVSs4",
"outputId": "a1714513-a8fd-4d14-f974-233e39d5c4f5"
},
"outputs": [],
"source": [
"conversation.invoke(\n",
" \"Tell me about yourself.\",\n",
" config=config,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f3cbbba0",
"metadata": {},
"source": [
"## Tool calling\n",
"\n",
"Starting in v0.2, `ChatNVIDIA` supports [bind_tools](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html#langchain_core.language_models.chat_models.BaseChatModel.bind_tools).\n",
"\n",
"`ChatNVIDIA` provides integration with the variety of models on [build.nvidia.com](https://build.nvidia.com) as well as local NIMs. Not all these models are trained for tool calling. Be sure to select a model that does have tool calling for your experimention and applications."
]
},
{
"cell_type": "markdown",
"id": "6f7b535e",
"metadata": {},
"source": [
"You can get a list of models that are known to support tool calling with,"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e36c8911",
"metadata": {},
"outputs": [],
"source": [
"tool_models = [\n",
" model for model in ChatNVIDIA.get_available_models() if model.supports_tools\n",
"]\n",
"tool_models"
]
},
{
"cell_type": "markdown",
"id": "b01d75a7",
"metadata": {},
"source": [
"With a tool capable model,"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bd54f174",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
"from pydantic import Field\n",
"\n",
"\n",
"@tool\n",
"def get_current_weather(\n",
" location: str = Field(..., description=\"The location to get the weather for.\"),\n",
"):\n",
" \"\"\"Get the current weather for a location.\"\"\"\n",
" ...\n",
"\n",
"\n",
"llm = ChatNVIDIA(model=tool_models[0].id).bind_tools(tools=[get_current_weather])\n",
"response = llm.invoke(\"What is the weather in Boston?\")\n",
"response.tool_calls"
]
},
{
"cell_type": "markdown",
"id": "e08df68c",
"metadata": {},
"source": [
"See [How to use chat models to call tools](https://python.langchain.com/docs/how_to/tool_calling/) for additional examples."
]
},
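{
"cell_type": "markdown",
"id": "structured-output-note",
"metadata": {},
"source": [
"## Structured output\n",
"\n",
"The model features table above also lists structured output as supported. Below is a minimal sketch using LangChain's standard `with_structured_output` method with a Pydantic schema. Support varies by model; this sketch assumes the selected model (here, the first tool-calling model from the list above) can also produce structured output:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "structured-output-example",
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
"    \"\"\"A joke with a setup and a punchline.\"\"\"\n",
"\n",
"    setup: str = Field(description=\"The setup of the joke\")\n",
"    punchline: str = Field(description=\"The punchline of the joke\")\n",
"\n",
"\n",
"# with_structured_output is the standard LangChain interface; we assume the\n",
"# chosen model supports it, and reuse a tool-capable model from above.\n",
"structured_llm = ChatNVIDIA(model=tool_models[0].id).with_structured_output(Joke)\n",
"structured_llm.invoke(\"Tell me a joke about NASA\")"
]
},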
{
"cell_type": "markdown",
"id": "a9a3c438-121d-46eb-8fb5-b8d5a13cd4a4",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "af585c6b-fe0a-4833-9860-a4209a71b3c6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f2f25dd3-0b4a-465f-a53e-95521cdc253c",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ChatNVIDIA` features and configurations head to the API reference: https://python.langchain.com/api_reference/nvidia_ai_endpoints/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}