Update Chat and Embedding guides (#31017)

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Philipp Schmid 2025-04-27 20:06:59 +02:00 committed by GitHub
parent ba2518995d
commit 79a537d308
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
4 changed files with 705 additions and 173 deletions

View File

@ -107,7 +107,7 @@ outputs will appear as part of the [AIMessage](/docs/concepts/messages/#aimessag
response object. See for example: response object. See for example:
- Generating [audio outputs](/docs/integrations/chat/openai/#audio-generation-preview) with OpenAI; - Generating [audio outputs](/docs/integrations/chat/openai/#audio-generation-preview) with OpenAI;
- Generating [image outputs](/docs/integrations/chat/google_generative_ai/#image-generation) with Google Gemini. - Generating [image outputs](/docs/integrations/chat/google_generative_ai/#multimodal-usage) with Google Gemini.
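
For example, a minimal sketch of reading image output blocks from an `AIMessage` (a hedged illustration: it assumes an image-capable Gemini model and passes `response_modalities` as shown in the Google Gemini integration guide; exact content-block shapes vary by provider):

```python
# Hedged sketch: reading multimodal output blocks from an AIMessage.
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash-exp")  # assumed image-capable model

response = llm.invoke(
    "Generate an image of a cat.",
    generation_config=dict(response_modalities=["TEXT", "IMAGE"]),
)

# Image outputs arrive as content blocks carrying a base64 data URL.
for block in response.content:
    if isinstance(block, dict) and block.get("image_url"):
        base64_png = block["image_url"]["url"].split(",")[-1]
        print(f"Received an image ({len(base64_png)} base64 chars)")
```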
#### Tools #### Tools

View File

@ -1,35 +1,26 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "raw", "cell_type": "markdown",
"id": "afaf8039", "id": "d982c99f",
"metadata": {}, "metadata": {},
"source": [ "source": [
"---\n", "---\n",
"sidebar_label: Google AI\n", "sidebar_label: Google Gemini\n",
"---" "---"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "e49f1e0d", "id": "56a6d990",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# ChatGoogleGenerativeAI\n", "# ChatGoogleGenerativeAI\n",
"\n", "\n",
"This docs will help you get started with Google AI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatGoogleGenerativeAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/google_genai/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html).\n", "Access Google's Generative AI models, including the Gemini family, directly via the Gemini API or experiment rapidly using Google AI Studio. The `langchain-google-genai` package provides the LangChain integration for these models. This is often the best starting point for individual developers.\n",
"\n", "\n",
"Google AI offers a number of different chat models. For information on the latest models, their features, context windows, etc. head to the [Google AI docs](https://ai.google.dev/gemini-api/docs/models/gemini).\n", "For information on the latest models, their features, context windows, etc. head to the [Google AI docs](https://ai.google.dev/gemini-api/docs/models/gemini). All examples use the `gemini-2.0-flash` model. Gemini 2.5 Pro and 2.5 Flash can be used via `gemini-2.5-pro-preview-03-25` and `gemini-2.5-flash-preview-04-17`. All model ids can be found in the [Gemini API docs](https://ai.google.dev/gemini-api/docs/models).\n",
"\n", "\n",
":::info Google AI vs Google Cloud Vertex AI\n",
"\n",
"Google's Gemini models are accessible through Google AI and through Google Cloud Vertex AI. Using Google AI just requires a Google account and an API key. Using Google Cloud Vertex AI requires a Google Cloud account (with term agreements and billing) but offers enterprise features like customer encryption key, virtual private cloud, and more.\n",
"\n",
"To learn more about the key features of the two APIs see the [Google docs](https://cloud.google.com/vertex-ai/generative-ai/docs/migrate/migrate-google-ai#google-ai).\n",
"\n",
":::\n",
"\n",
"## Overview\n",
"### Integration details\n", "### Integration details\n",
"\n", "\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/google_generativeai) | Package downloads | Package latest |\n", "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/google_generativeai) | Package downloads | Package latest |\n",
@ -37,23 +28,46 @@
"| [ChatGoogleGenerativeAI](https://python.langchain.com/api_reference/google_genai/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html) | [langchain-google-genai](https://python.langchain.com/api_reference/google_genai/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-google-genai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-google-genai?style=flat-square&label=%20) |\n", "| [ChatGoogleGenerativeAI](https://python.langchain.com/api_reference/google_genai/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html) | [langchain-google-genai](https://python.langchain.com/api_reference/google_genai/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-google-genai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-google-genai?style=flat-square&label=%20) |\n",
"\n", "\n",
"### Model features\n", "### Model features\n",
"\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n", "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n", "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |\n", "| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |\n",
"\n", "\n",
"## Setup\n", "### Setup\n",
"\n", "\n",
"To access Google AI models you'll need to create a Google Acount account, get a Google AI API key, and install the `langchain-google-genai` integration package.\n", "To access Google AI models you'll need to create a Google Account, get a Google AI API key, and install the `langchain-google-genai` integration package.\n",
"\n", "\n",
"### Credentials\n", "**1. Installation:**"
"\n",
"Head to https://ai.google.dev/gemini-api/docs/api-key to generate a Google AI API key. Once you've done this set the GOOGLE_API_KEY environment variable:"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94", "id": "8d12ce35",
"metadata": {},
"outputs": [],
"source": [
"%pip install -U langchain-google-genai"
]
},
{
"cell_type": "markdown",
"id": "60be0b38",
"metadata": {},
"source": [
"**2. Credentials:**\n",
"\n",
"Head to [https://ai.google.dev/gemini-api/docs/api-key](https://ai.google.dev/gemini-api/docs/api-key) (or via Google AI Studio) to generate a Google AI API key.\n",
"\n",
"### Chat Models\n",
"\n",
"Use the `ChatGoogleGenerativeAI` class to interact with Google's chat models. See the [API reference](https://python.langchain.com/api_reference/google_genai/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html) for full details.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "fb18c875",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
@ -66,7 +80,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210", "id": "f050e8db",
"metadata": {}, "metadata": {},
"source": [ "source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:" "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
@ -75,7 +89,7 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de", "id": "82cb346f",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
@ -85,27 +99,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d", "id": "273cefa0",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Google AI integration lives in the `langchain-google-genai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-google-genai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Instantiation\n", "## Instantiation\n",
@ -115,15 +109,15 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 2, "execution_count": 4,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae", "id": "7d3dc0b3",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from langchain_google_genai import ChatGoogleGenerativeAI\n", "from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n", "\n",
"llm = ChatGoogleGenerativeAI(\n", "llm = ChatGoogleGenerativeAI(\n",
" model=\"gemini-2.0-flash-001\",\n", " model=\"gemini-2.0-flash\",\n",
" temperature=0,\n", " temperature=0,\n",
" max_tokens=None,\n", " max_tokens=None,\n",
" timeout=None,\n", " timeout=None,\n",
@ -134,7 +128,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "2b4f3e15", "id": "343a8c13",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Invocation" "## Invocation"
@ -142,19 +136,17 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 3, "execution_count": 5,
"id": "62e0dbc3", "id": "82c5708c",
"metadata": { "metadata": {},
"tags": []
},
"outputs": [ "outputs": [
{ {
"data": { "data": {
"text/plain": [ "text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash-001', 'safety_ratings': []}, id='run-61cff164-40be-4f88-a2df-cca58297502f-0', usage_metadata={'input_tokens': 20, 'output_tokens': 7, 'total_tokens': 27, 'input_token_details': {'cache_read': 0}})" "AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash', 'safety_ratings': []}, id='run-3b28d4b8-8a62-4e6c-ad4e-b53e6e825749-0', usage_metadata={'input_tokens': 20, 'output_tokens': 7, 'total_tokens': 27, 'input_token_details': {'cache_read': 0}})"
] ]
}, },
"execution_count": 3, "execution_count": 5,
"metadata": {}, "metadata": {},
"output_type": "execute_result" "output_type": "execute_result"
} }
@ -173,8 +165,8 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 4, "execution_count": 6,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705", "id": "49d2d0c2",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
@ -191,7 +183,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8", "id": "ee3f6e1d",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Chaining\n", "## Chaining\n",
@ -201,17 +193,17 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 5, "execution_count": 7,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b", "id": "3c8407ee",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
"data": { "data": {
"text/plain": [ "text/plain": [
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash-001', 'safety_ratings': []}, id='run-dd2f8fb9-62d9-4b84-9c97-ed9c34cda313-0', usage_metadata={'input_tokens': 15, 'output_tokens': 7, 'total_tokens': 22, 'input_token_details': {'cache_read': 0}})" "AIMessage(content='Ich liebe Programmieren.', additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash', 'safety_ratings': []}, id='run-e5561c6b-2beb-4411-9210-4796b576a7cd-0', usage_metadata={'input_tokens': 15, 'output_tokens': 7, 'total_tokens': 22, 'input_token_details': {'cache_read': 0}})"
] ]
}, },
"execution_count": 5, "execution_count": 7,
"metadata": {}, "metadata": {},
"output_type": "execute_result" "output_type": "execute_result"
} }
@ -241,22 +233,164 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "41c2ff10-a3ba-4f40-b3aa-7a395854849e", "id": "bdae9742",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Image generation\n", "## Multimodal Usage\n",
"\n", "\n",
"Some Gemini models (specifically `gemini-2.0-flash-exp`) support image generation capabilities.\n", "Gemini models can accept multimodal inputs (text, images, audio, video) and, for some models, generate multimodal outputs.\n",
"\n", "\n",
"### Text to image\n", "### Image Input\n",
"\n", "\n",
"See a simple usage example below:" "Provide image inputs along with text using a `HumanMessage` with a list content format. The `gemini-2.0-flash` model can handle images."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 2, "execution_count": null,
"id": "7589e14d-8d1b-4c82-965f-5558d80cb677", "id": "6833fe5d",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"# Example using a public URL (remains the same)\n",
"message_url = HumanMessage(\n",
" content=[\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the image at the URL.\",\n",
" },\n",
" {\"type\": \"image_url\", \"image_url\": \"https://picsum.photos/seed/picsum/200/300\"},\n",
" ]\n",
")\n",
"result_url = llm.invoke([message_url])\n",
"print(f\"Response for URL image: {result_url.content}\")\n",
"\n",
"# Example using a local image file encoded in base64\n",
"image_file_path = \"/Users/philschmid/projects/google-gemini/langchain/docs/static/img/agents_vs_chains.png\"\n",
"\n",
"with open(image_file_path, \"rb\") as image_file:\n",
" encoded_image = base64.b64encode(image_file.read()).decode(\"utf-8\")\n",
"\n",
"message_local = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"Describe the local image.\"},\n",
" {\"type\": \"image_url\", \"image_url\": f\"data:image/png;base64,{encoded_image}\"},\n",
" ]\n",
")\n",
"result_local = llm.invoke([message_local])\n",
"print(f\"Response for local image: {result_local.content}\")"
]
},
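
A minimal sketch of the PIL option noted in the next cell (hedged: it assumes Pillow is installed and that the integration accepts PIL images, and it reuses the `llm` instance from above; the file name is a placeholder):

```python
# Hedged sketch: pass a PIL Image object directly as the image_url value.
from PIL import Image

from langchain_core.messages import HumanMessage

pil_image = Image.open("example_image.png")  # placeholder path

message_pil = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": pil_image},
    ]
)
# result_pil = llm.invoke([message_pil])
# print(result_pil.content)
```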
{
"cell_type": "markdown",
"id": "1b422382",
"metadata": {},
"source": [
"Other supported `image_url` formats:\n",
"- A Google Cloud Storage URI (`gs://...`). Ensure the service account has access.\n",
"- A PIL Image object (the library handles encoding).\n",
"\n",
"### Audio Input\n",
"\n",
"Provide audio file inputs along with text. Use a model like `gemini-2.0-flash`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a3461836",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"\n",
"# Ensure you have an audio file named 'example_audio.mp3' or provide the correct path.\n",
"audio_file_path = \"example_audio.mp3\"\n",
"audio_mime_type = \"audio/mpeg\"\n",
"\n",
"\n",
"with open(audio_file_path, \"rb\") as audio_file:\n",
" encoded_audio = base64.b64encode(audio_file.read()).decode(\"utf-8\")\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"Transcribe the audio.\"},\n",
" {\n",
" \"type\": \"media\",\n",
" \"data\": encoded_audio, # Use base64 string directly\n",
" \"mime_type\": audio_mime_type,\n",
" },\n",
" ]\n",
")\n",
"response = llm.invoke([message]) # Uncomment to run\n",
"print(f\"Response for audio: {response.content}\")"
]
},
{
"cell_type": "markdown",
"id": "0d898e27",
"metadata": {},
"source": [
"### Video Input\n",
"\n",
"Provide video file inputs along with text. Use a model like `gemini-2.0-flash`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3046e74b",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"# Ensure you have a video file named 'example_video.mp4' or provide the correct path.\n",
"video_file_path = \"example_video.mp4\"\n",
"video_mime_type = \"video/mp4\"\n",
"\n",
"\n",
"with open(video_file_path, \"rb\") as video_file:\n",
" encoded_video = base64.b64encode(video_file.read()).decode(\"utf-8\")\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"Describe the first few frames of the video.\"},\n",
" {\n",
" \"type\": \"media\",\n",
" \"data\": encoded_video, # Use base64 string directly\n",
" \"mime_type\": video_mime_type,\n",
" },\n",
" ]\n",
")\n",
"response = llm.invoke([message]) # Uncomment to run\n",
"print(f\"Response for video: {response.content}\")"
]
},
{
"cell_type": "markdown",
"id": "2df11d89",
"metadata": {},
"source": [
"### Image Generation (Multimodal Output)\n",
"\n",
"The `gemini-2.0-flash` model can generate text and images inline (image generation is experimental). You need to specify the desired `response_modalities`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c0b7180f",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
@ -266,17 +400,12 @@
"<IPython.core.display.Image object>" "<IPython.core.display.Image object>"
] ]
}, },
"metadata": { "metadata": {},
"image/png": {
"width": 300
}
},
"output_type": "display_data" "output_type": "display_data"
} }
], ],
"source": [ "source": [
"import base64\n", "import base64\n",
"from io import BytesIO\n",
"\n", "\n",
"from IPython.display import Image, display\n", "from IPython.display import Image, display\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n", "from langchain_google_genai import ChatGoogleGenerativeAI\n",
@ -301,7 +430,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "b14c0d87-cf7e-4d88-bda1-2ab40ec0350a", "id": "14bf00f1",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Image and text to image\n", "### Image and text to image\n",
@ -311,8 +440,8 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 3, "execution_count": null,
"id": "0f4ed7a5-980c-4b54-b743-0b988909744c", "id": "d65e195c",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
@ -322,11 +451,7 @@
"<IPython.core.display.Image object>" "<IPython.core.display.Image object>"
] ]
}, },
"metadata": { "metadata": {},
"image/png": {
"width": 300
}
},
"output_type": "display_data" "output_type": "display_data"
} }
], ],
@ -349,7 +474,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "a62669d8-becd-495f-8f4a-82d7c5d87969", "id": "43b54d3f",
"metadata": {}, "metadata": {},
"source": [ "source": [
"You can also represent an input image and query in a single message by encoding the base64 data in the [data URI scheme](https://en.wikipedia.org/wiki/Data_URI_scheme):" "You can also represent an input image and query in a single message by encoding the base64 data in the [data URI scheme](https://en.wikipedia.org/wiki/Data_URI_scheme):"
@ -357,8 +482,8 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 9, "execution_count": null,
"id": "6241da43-e210-43bc-89af-b3c480ea06e9", "id": "0dfc7e1e",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
@ -368,11 +493,7 @@
"<IPython.core.display.Image object>" "<IPython.core.display.Image object>"
] ]
}, },
"metadata": { "metadata": {},
"image/png": {
"width": 300
}
},
"output_type": "display_data" "output_type": "display_data"
} }
], ],
@ -403,7 +524,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "cfe228d3-6773-4283-9788-87bdf6912b1c", "id": "789818d7",
"metadata": {}, "metadata": {},
"source": [ "source": [
"You can also use LangGraph to manage the conversation history for you as in [this tutorial](/docs/tutorials/chatbot/)." "You can also use LangGraph to manage the conversation history for you as in [this tutorial](/docs/tutorials/chatbot/)."
@ -411,7 +532,313 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd", "id": "b037e2dc",
"metadata": {},
"source": [
"## Tool Calling\n",
"\n",
"You can equip the model with tools to call."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b0d759f9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'a6248087-74c5-4b7c-9250-f335e642927c', 'type': 'tool_call'}]\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"OK. It's sunny in San Francisco.\", additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash', 'safety_ratings': []}, id='run-ac5bb52c-e244-4c72-9fbc-fb2a9cd7a72e-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40, 'input_token_details': {'cache_read': 0}})"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.tools import tool\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"\n",
"# Define the tool\n",
"@tool(description=\"Get the current weather in a given location\")\n",
"def get_weather(location: str) -> str:\n",
" return \"It's sunny.\"\n",
"\n",
"\n",
"# Initialize the model and bind the tool\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\")\n",
"llm_with_tools = llm.bind_tools([get_weather])\n",
"\n",
"# Invoke the model with a query that should trigger the tool\n",
"query = \"What's the weather in San Francisco?\"\n",
"ai_msg = llm_with_tools.invoke(query)\n",
"\n",
"# Check the tool calls in the response\n",
"print(ai_msg.tool_calls)\n",
"\n",
"# Example tool call message would be needed here if you were actually running the tool\n",
"from langchain_core.messages import ToolMessage\n",
"\n",
"tool_message = ToolMessage(\n",
" content=get_weather(*ai_msg.tool_calls[0][\"args\"]),\n",
" tool_call_id=ai_msg.tool_calls[0][\"id\"],\n",
")\n",
"llm_with_tools.invoke([ai_msg, tool_message]) # Example of passing tool result back"
]
},
{
"cell_type": "markdown",
"id": "91d42b86",
"metadata": {},
"source": [
"## Structured Output\n",
"\n",
"Force the model to respond with a specific structure using Pydantic models."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "7457dbe4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"name='Abraham Lincoln' height_m=1.93\n"
]
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"\n",
"# Define the desired structure\n",
"class Person(BaseModel):\n",
" \"\"\"Information about a person.\"\"\"\n",
"\n",
" name: str = Field(..., description=\"The person's name\")\n",
" height_m: float = Field(..., description=\"The person's height in meters\")\n",
"\n",
"\n",
"# Initialize the model\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Person)\n",
"\n",
"# Invoke the model with a query asking for structured information\n",
"result = structured_llm.invoke(\n",
" \"Who was the 16th president of the USA, and how tall was he in meters?\"\n",
")\n",
"print(result)"
]
},
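
If you also want the raw model response (for debugging, or to inspect token usage), `with_structured_output` accepts `include_raw=True`. A short sketch, reusing `Person` and `llm` from above:

```python
# Hedged sketch: return the raw AIMessage alongside the parsed object.
structured_llm_raw = llm.with_structured_output(Person, include_raw=True)

out = structured_llm_raw.invoke(
    "Who was the 16th president of the USA, and how tall was he in meters?"
)
print(out["parsed"])         # Person instance (or None if parsing failed)
print(out["parsing_error"])  # None on success
print(out["raw"].usage_metadata)
```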
{
"cell_type": "markdown",
"id": "90d4725e",
"metadata": {},
"source": [
"\n",
"\n",
"## Token Usage Tracking\n",
"\n",
"Access token usage information from the response metadata."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "edcc003e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt engineering is the art and science of crafting effective text prompts to elicit desired and accurate responses from large language models.\n",
"\n",
"Usage Metadata:\n",
"{'input_tokens': 10, 'output_tokens': 24, 'total_tokens': 34, 'input_token_details': {'cache_read': 0}}\n"
]
}
],
"source": [
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\")\n",
"\n",
"result = llm.invoke(\"Explain the concept of prompt engineering in one sentence.\")\n",
"\n",
"print(result.content)\n",
"print(\"\\nUsage Metadata:\")\n",
"print(result.usage_metadata)"
]
},
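
Since `usage_metadata` is a plain dict, totals across several calls can be accumulated directly. A brief sketch, reusing the `llm` instance from above:

```python
# Hedged sketch: aggregate token usage over multiple invocations.
totals = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}

for question in ["What is LangChain?", "What is Gemini?"]:
    msg = llm.invoke(question)
    for key in totals:
        totals[key] += msg.usage_metadata.get(key, 0)

print(totals)
```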
{
"cell_type": "markdown",
"id": "28950dbc",
"metadata": {},
"source": [
"## Built-in tools\n",
"\n",
"Google Gemini supports a variety of built-in tools ([google search](https://ai.google.dev/gemini-api/docs/grounding/search-suggestions), [code execution](https://ai.google.dev/gemini-api/docs/code-execution?lang=python)), which can be bound to the model in the usual way."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dd074816",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The next total solar eclipse visible in the United States will occur on August 23, 2044. However, the path of totality will only pass through Montana, North Dakota, and South Dakota.\n",
"\n",
"For a total solar eclipse that crosses a significant portion of the continental U.S., you'll have to wait until August 12, 2045. This eclipse will start in California and end in Florida.\n"
]
}
],
"source": [
"from google.ai.generativelanguage_v1beta.types import Tool as GenAITool\n",
"\n",
"resp = llm.invoke(\n",
" \"When is the next total solar eclipse in US?\",\n",
" tools=[GenAITool(google_search={})],\n",
")\n",
"\n",
"print(resp.content)"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "6964be2d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Executable code: print(2*2)\n",
"\n",
"Code execution result: 4\n",
"\n",
"2*2 is 4.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/philschmid/projects/google-gemini/langchain/.venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py:580: UserWarning: \n",
" ⚠️ Warning: Output may vary each run. \n",
" - 'executable_code': Always present. \n",
" - 'execution_result' & 'image_url': May be absent for some queries. \n",
"\n",
" Validate before using in production.\n",
"\n",
" warnings.warn(\n"
]
}
],
"source": [
"from google.ai.generativelanguage_v1beta.types import Tool as GenAITool\n",
"\n",
"resp = llm.invoke(\n",
" \"What is 2*2, use python\",\n",
" tools=[GenAITool(code_execution={})],\n",
")\n",
"\n",
"for c in resp.content:\n",
" if isinstance(c, dict):\n",
" if c[\"type\"] == \"code_execution_result\":\n",
" print(f\"Code execution result: {c['code_execution_result']}\")\n",
" elif c[\"type\"] == \"executable_code\":\n",
" print(f\"Executable code: {c['executable_code']}\")\n",
" else:\n",
" print(c)"
]
},
{
"cell_type": "markdown",
"id": "a27e6ff4",
"metadata": {},
"source": [
"## Native Async\n",
"\n",
"Use asynchronous methods for non-blocking calls."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "c6803e57",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Async Invoke Result: The sky is blue due to a phenomenon called **Rayle...\n",
"\n",
"Async Stream Result:\n",
"The thread is free, it does not wait,\n",
"For answers slow, or tasks of fate.\n",
"A promise made, a future bright,\n",
"It moves ahead, with all its might.\n",
"\n",
"A callback waits, a signal sent,\n",
"When data's read, or job is spent.\n",
"Non-blocking code, a graceful dance,\n",
"Responsive apps, a fleeting glance.\n",
"\n",
"Async Batch Results: ['1 + 1 = 2', '2 + 2 = 4']\n"
]
}
],
"source": [
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\")\n",
"\n",
"\n",
"async def run_async_calls():\n",
" # Async invoke\n",
" result_ainvoke = await llm.ainvoke(\"Why is the sky blue?\")\n",
" print(\"Async Invoke Result:\", result_ainvoke.content[:50] + \"...\")\n",
"\n",
" # Async stream\n",
" print(\"\\nAsync Stream Result:\")\n",
" async for chunk in llm.astream(\n",
" \"Write a short poem about asynchronous programming.\"\n",
" ):\n",
" print(chunk.content, end=\"\", flush=True)\n",
" print(\"\\n\")\n",
"\n",
" # Async batch\n",
" results_abatch = await llm.abatch([\"What is 1+1?\", \"What is 2+2?\"])\n",
" print(\"Async Batch Results:\", [res.content for res in results_abatch])\n",
"\n",
"\n",
"await run_async_calls()"
]
},
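
The cell above relies on the notebook's running event loop (top-level `await`). In a plain Python script, the same coroutine would be driven with `asyncio`:

```python
# Outside a notebook, start an event loop explicitly.
import asyncio

asyncio.run(run_async_calls())
```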
{
"cell_type": "markdown",
"id": "99204b32",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Safety Settings\n", "## Safety Settings\n",
@ -421,8 +848,8 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 14, "execution_count": null,
"id": "238b2f96-e573-4fac-bbf2-7e52ad926833", "id": "d4c14039",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
@ -442,7 +869,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "5805d40c-deb8-4924-8e72-a294a0482fc9", "id": "dea38fb1",
"metadata": {}, "metadata": {},
"source": [ "source": [
"For an enumeration of the categories and thresholds available, see Google's [safety setting types](https://ai.google.dev/api/python/google/generativeai/types/SafetySettingDict)." "For an enumeration of the categories and thresholds available, see Google's [safety setting types](https://ai.google.dev/api/python/google/generativeai/types/SafetySettingDict)."
@ -450,7 +877,7 @@
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3", "id": "d6d0e853",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## API reference\n", "## API reference\n",
@ -461,7 +888,7 @@
], ],
"metadata": { "metadata": {
"kernelspec": { "kernelspec": {
"display_name": "Python 3 (ipykernel)", "display_name": ".venv",
"language": "python", "language": "python",
"name": "python3" "name": "python3"
}, },
@ -475,7 +902,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.10.4" "version": "3.9.6"
} }
}, },
"nbformat": 4, "nbformat": 4,

View File

@ -1,13 +1,76 @@
{ {
"cells": [ "cells": [
{
"cell_type": "markdown",
"id": "8543d632",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Google Gemini\n",
"keywords: [google gemini embeddings]\n",
"---"
]
},
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "afab8b36-10bb-4795-bc98-75ab2d2081bb", "id": "afab8b36-10bb-4795-bc98-75ab2d2081bb",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Google Generative AI Embeddings\n", "# Google Generative AI Embeddings (AI Studio & Gemini API)\n",
"\n", "\n",
"Connect to Google's generative AI embeddings service using the `GoogleGenerativeAIEmbeddings` class, found in the [langchain-google-genai](https://pypi.org/project/langchain-google-genai/) package." "Connect to Google's generative AI embeddings service using the `GoogleGenerativeAIEmbeddings` class, found in the [langchain-google-genai](https://pypi.org/project/langchain-google-genai/) package.\n",
"\n",
"This will help you get started with Google's Generative AI embedding models (like Gemini) using LangChain. For detailed documentation on `GoogleGenerativeAIEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/v0.2/api_reference/google_genai/embeddings/langchain_google_genai.embeddings.GoogleGenerativeAIEmbeddings.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"import { ItemTable } from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"text_embedding\" item=\"Google Gemini\" />\n",
"\n",
"## Setup\n",
"\n",
"To access Google Generative AI embedding models you'll need to create a Google Cloud project, enable the Generative Language API, get an API key, and install the `langchain-google-genai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"To use Google Generative AI models, you must have an API key. You can create one in Google AI Studio. See the [Google documentation](https://ai.google.dev/gemini-api/docs/api-key) for instructions.\n",
"\n",
"Once you have a key, set it as an environment variable `GOOGLE_API_KEY`:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "47652620",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"GOOGLE_API_KEY\"):\n",
" os.environ[\"GOOGLE_API_KEY\"] = getpass.getpass(\"Enter your Google API key: \")"
]
},
{
"cell_type": "markdown",
"id": "67283790",
"metadata": {},
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eccf1968",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
] ]
}, },
{ {
@ -28,28 +91,6 @@
"%pip install --upgrade --quiet langchain-google-genai" "%pip install --upgrade --quiet langchain-google-genai"
] ]
}, },
{
"cell_type": "markdown",
"id": "25f3f88e-164e-400d-b371-9fa488baba19",
"metadata": {},
"source": [
"## Credentials"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ec89153f-8999-4aab-a21b-0bfba1cc3893",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"GOOGLE_API_KEY\" not in os.environ:\n",
" os.environ[\"GOOGLE_API_KEY\"] = getpass.getpass(\"Provide your Google API key here\")"
]
},
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "f2437b22-e364-418a-8c13-490a026cb7b5", "id": "f2437b22-e364-418a-8c13-490a026cb7b5",
@ -60,17 +101,21 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 6, "execution_count": 20,
"id": "eedc551e-a1f3-4fd8-8d65-4e0784c4441b", "id": "eedc551e-a1f3-4fd8-8d65-4e0784c4441b",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
"data": { "data": {
"text/plain": [ "text/plain": [
"[0.05636945, 0.0048285457, -0.0762591, -0.023642512, 0.05329321]" "[-0.024917153641581535,\n",
" 0.012005362659692764,\n",
" -0.003886754624545574,\n",
" -0.05774897709488869,\n",
" 0.0020742062479257584]"
] ]
}, },
"execution_count": 6, "execution_count": 20,
"metadata": {}, "metadata": {},
"output_type": "execute_result" "output_type": "execute_result"
} }
@ -78,7 +123,7 @@
"source": [ "source": [
"from langchain_google_genai import GoogleGenerativeAIEmbeddings\n", "from langchain_google_genai import GoogleGenerativeAIEmbeddings\n",
"\n", "\n",
"embeddings = GoogleGenerativeAIEmbeddings(model=\"models/text-embedding-004\")\n", "embeddings = GoogleGenerativeAIEmbeddings(model=\"models/gemini-embedding-exp-03-07\")\n",
"vector = embeddings.embed_query(\"hello, world!\")\n", "vector = embeddings.embed_query(\"hello, world!\")\n",
"vector[:5]" "vector[:5]"
] ]
@ -95,17 +140,17 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 7, "execution_count": 5,
"id": "6ec53aba-404f-4778-acd9-5d6664e79ed2", "id": "6ec53aba-404f-4778-acd9-5d6664e79ed2",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
"data": { "data": {
"text/plain": [ "text/plain": [
"(3, 768)" "(3, 3072)"
] ]
}, },
"execution_count": 7, "execution_count": 5,
"metadata": {}, "metadata": {},
"output_type": "execute_result" "output_type": "execute_result"
} }
@ -121,6 +166,56 @@
"len(vectors), len(vectors[0])" "len(vectors), len(vectors[0])"
] ]
}, },
{
"cell_type": "markdown",
"id": "c362bfbf",
"metadata": {},
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "606a7f65",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'LangChain is the framework for building context-aware reasoning applications'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create a vector store with a sample text\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"\n",
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
"\n",
"vectorstore = InMemoryVectorStore.from_texts(\n",
" [text],\n",
" embedding=embeddings,\n",
")\n",
"\n",
"# Use the vectorstore as a retriever\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"# Retrieve the most similar text\n",
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
"\n",
"# show the retrieved document's content\n",
"retrieved_documents[0].page_content"
]
},
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "1482486f-5617-498a-8a44-1974d3212dda", "id": "1482486f-5617-498a-8a44-1974d3212dda",
@ -129,70 +224,74 @@
"## Task type\n", "## Task type\n",
"`GoogleGenerativeAIEmbeddings` optionally support a `task_type`, which currently must be one of:\n", "`GoogleGenerativeAIEmbeddings` optionally support a `task_type`, which currently must be one of:\n",
"\n", "\n",
"- task_type_unspecified\n", "- `SEMANTIC_SIMILARITY`: Used to generate embeddings that are optimized to assess text similarity.\n",
"- retrieval_query\n", "- `CLASSIFICATION`: Used to generate embeddings that are optimized to classify texts according to preset labels.\n",
"- retrieval_document\n", "- `CLUSTERING`: Used to generate embeddings that are optimized to cluster texts based on their similarities.\n",
"- semantic_similarity\n", "- `RETRIEVAL_DOCUMENT`, `RETRIEVAL_QUERY`, `QUESTION_ANSWERING`, and `FACT_VERIFICATION`: Used to generate embeddings that are optimized for document search or information retrieval.\n",
"- classification\n", "- `CODE_RETRIEVAL_QUERY`: Used to retrieve a code block based on a natural language query, such as sort an array or reverse a linked list. Embeddings of the code blocks are computed using `RETRIEVAL_DOCUMENT`.\n",
"- clustering\n",
"\n", "\n",
"By default, we use `retrieval_document` in the `embed_documents` method and `retrieval_query` in the `embed_query` method. If you provide a task type, we will use that for all methods." "By default, we use `RETRIEVAL_DOCUMENT` in the `embed_documents` method and `RETRIEVAL_QUERY` in the `embed_query` method. If you provide a task type, we will use that for all methods."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 15, "execution_count": null,
"id": "a223bb25-2b1b-418e-a570-2f543083132e", "id": "b7acc5c2",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [],
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [ "source": [
"%pip install --upgrade --quiet matplotlib scikit-learn" "%pip install --upgrade --quiet matplotlib scikit-learn"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 33, "execution_count": 19,
"id": "f1f077db-8eb4-49f7-8866-471a8528dcdb", "id": "f1f077db-8eb4-49f7-8866-471a8528dcdb",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1\n",
"Cosine similarity with query: 0.7892893360164779\n",
"---\n",
"Document 2\n",
"Cosine similarity with query: 0.5438283285204146\n",
"---\n"
]
}
],
"source": [ "source": [
"from langchain_google_genai import GoogleGenerativeAIEmbeddings\n",
"from sklearn.metrics.pairwise import cosine_similarity\n",
"\n",
"query_embeddings = GoogleGenerativeAIEmbeddings(\n", "query_embeddings = GoogleGenerativeAIEmbeddings(\n",
" model=\"models/embedding-001\", task_type=\"retrieval_query\"\n", " model=\"models/gemini-embedding-exp-03-07\", task_type=\"RETRIEVAL_QUERY\"\n",
")\n", ")\n",
"doc_embeddings = GoogleGenerativeAIEmbeddings(\n", "doc_embeddings = GoogleGenerativeAIEmbeddings(\n",
" model=\"models/embedding-001\", task_type=\"retrieval_document\"\n", " model=\"models/gemini-embedding-exp-03-07\", task_type=\"RETRIEVAL_DOCUMENT\"\n",
")" ")\n",
"\n",
"q_embed = query_embeddings.embed_query(\"What is the capital of France?\")\n",
"d_embed = doc_embeddings.embed_documents(\n",
" [\"The capital of France is Paris.\", \"Philipp is likes to eat pizza.\"]\n",
")\n",
"\n",
"for i, d in enumerate(d_embed):\n",
" print(f\"Document {i+1}:\")\n",
" print(f\"Cosine similarity with query: {cosine_similarity([q_embed], [d])[0][0]}\")\n",
" print(\"---\")"
] ]
}, },
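
A brief sketch of one of the non-retrieval task types (`SEMANTIC_SIMILARITY`), reusing the experimental embedding model from above; similarity-optimized embeddings should score paraphrases close together:

```python
# Hedged sketch: compare two paraphrases with similarity-optimized embeddings.
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from sklearn.metrics.pairwise import cosine_similarity

sim_embeddings = GoogleGenerativeAIEmbeddings(
    model="models/gemini-embedding-exp-03-07", task_type="SEMANTIC_SIMILARITY"
)

v1, v2 = sim_embeddings.embed_documents(
    ["The cat sat on the mat.", "A feline rested on the rug."]
)
print(f"Similarity: {cosine_similarity([v1], [v2])[0][0]:.3f}")
```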
{ {
"cell_type": "markdown", "cell_type": "markdown",
"id": "79bd4a5e-75ba-413c-befa-86167c938caf", "id": "f45ea7b1",
"metadata": {}, "metadata": {},
"source": [ "source": [
"All of these will be embedded with the 'retrieval_query' task set\n", "## API Reference\n",
"```python\n", "\n",
"query_vecs = [query_embeddings.embed_query(q) for q in [query, query_2, answer_1]]\n", "For detailed documentation on `GoogleGenerativeAIEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/google_genai/embeddings/langchain_google_genai.embeddings.GoogleGenerativeAIEmbeddings.html).\n"
"```\n",
"All of these will be embedded with the 'retrieval_document' task set\n",
"```python\n",
"doc_vecs = [doc_embeddings.embed_query(q) for q in [query, query_2, answer_1]]\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "9e1fae5e-0f84-4812-89f5-7d4d71affbc1",
"metadata": {},
"source": [
"In retrieval, relative distance matters. In the image above, you can see the difference in similarity scores between the \"relevant doc\" and \"simil stronger delta between the similar query and relevant doc on the latter case."
] ]
}, },
{ {
@ -211,7 +310,7 @@
], ],
"metadata": { "metadata": {
"kernelspec": { "kernelspec": {
"display_name": "Python 3 (ipykernel)", "display_name": ".venv",
"language": "python", "language": "python",
"name": "python3" "name": "python3"
}, },
@ -225,7 +324,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.9.1" "version": "3.9.6"
} }
}, },
"nbformat": 4, "nbformat": 4,

View File

@ -366,6 +366,12 @@ const FEATURE_TABLES = {
package: "langchain-openai", package: "langchain-openai",
apiLink: "https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html" apiLink: "https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html"
}, },
{
name: "Google Gemini",
link: "google-generative-ai",
package: "langchain-google-genai",
apiLink: "https://python.langchain.com/api_reference/google_genai/embeddings/langchain_google_genai.embeddings.GoogleGenerativeAIEmbeddings.html"
},
{ {
name: "Together", name: "Together",
link: "together", link: "together",