From c8b1c3a7e7dd36ba821e0b578b0b012ba9f5e661 Mon Sep 17 00:00:00 2001
From: Aditya <31382824+Adi8885@users.noreply.github.com>
Date: Sat, 31 Aug 2024 02:28:21 +0530
Subject: [PATCH] docs: update documentation for Vertex Embeddings Models
 (#25745)

- **Description:** Update documentation for Vertex Embeddings Models
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** N/A

---------

Co-authored-by: adityarane@google.com
---
 .../google_vertex_ai_palm.ipynb               | 285 +++++++++++++++---
 1 file changed, 249 insertions(+), 36 deletions(-)

diff --git a/docs/docs/integrations/text_embedding/google_vertex_ai_palm.ipynb b/docs/docs/integrations/text_embedding/google_vertex_ai_palm.ipynb
index ca632a4bcd6..42a40cf6fe9 100644
--- a/docs/docs/integrations/text_embedding/google_vertex_ai_palm.ipynb
+++ b/docs/docs/integrations/text_embedding/google_vertex_ai_palm.ipynb
@@ -1,89 +1,307 @@
 {
  "cells": [
   {
-   "cell_type": "markdown",
+   "cell_type": "raw",
+   "id": "afaf8039",
    "metadata": {},
    "source": [
-    "# Google Vertex AI PaLM \n",
+    "---\n",
+    "sidebar_label: Google Vertex AI\n",
+    "keywords: [Vertex AI, vertexai, Google Cloud, embeddings]\n",
+    "---"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "9a3d6f34",
+   "metadata": {},
+   "source": [
+    "# Google Vertex AI Embeddings\n",
     "\n",
-    ">[Vertex AI PaLM API](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/overview) is a service on Google Cloud exposing the embedding models. \n",
+    "This page will help you get started with Google Vertex AI Embeddings models using LangChain. For detailed documentation on `Google Vertex AI Embeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/v0.2/api_reference/google_vertexai/embeddings/langchain_google_vertexai.embeddings.VertexAIEmbeddings.html).\n",
     "\n",
-    "Note: This integration is separate from the Google PaLM integration.\n",
+    "## Overview\n",
+    "### Integration details\n",
     "\n",
-    "By default, Google Cloud [does not use](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance#foundation_model_development) Customer Data to train its foundation models as part of Google Cloud`s AI/ML Privacy Commitment. More details about how Google processes data can also be found in [Google's Customer Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).\n",
+    "| Provider | Package |\n",
+    "|:--------:|:-------:|\n",
+    "| [Google](https://python.langchain.com/v0.2/docs/integrations/platforms/google/) | [langchain-google-vertexai](https://python.langchain.com/v0.2/api_reference/google_vertexai/embeddings/langchain_google_vertexai.embeddings.VertexAIEmbeddings.html) |\n",
     "\n",
-    "To use Vertex AI PaLM you must have the `langchain-google-vertexai` Python package installed and either:\n",
-    "- Have credentials configured for your environment (gcloud, workload identity, etc...)\n",
-    "- Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable\n",
+    "## Setup\n",
     "\n",
-    "This codebase uses the `google.auth` library which first looks for the application credentials variable mentioned above, and then looks for system-level auth.\n",
+    "To access Google Vertex AI Embeddings models you'll need to:\n",
+    "- Create a Google Cloud account\n",
+    "- Install the `langchain-google-vertexai` integration package.\n",
     "\n",
-    "For more information, see: \n",
-    "- https://cloud.google.com/docs/authentication/application-default-credentials#GAC\n",
-    "- https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth\n",
-    "\n"
+    "### Credentials\n",
+    "\n",
+    "Head to [Google Cloud](https://cloud.google.com/free/) to sign up for an account. Once you've done this, set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of a service account JSON file (a minimal sketch of this is shown after the Colab cell below).\n",
+    "\n",
+    "For more information, see:\n",
+    "\n",
+    "- https://cloud.google.com/docs/authentication/application-default-credentials#GAC\n",
+    "- https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "caaba519-3476-423b-a5e4-d99a10929506",
+   "metadata": {},
+   "source": [
+    "**Optional: Authenticate your notebook environment (Colab only)**\n",
+    "\n",
+    "If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "tags": []
-   },
+   "id": "b0770000-3667-439b-8c46-acc5af7c8e40",
+   "metadata": {},
    "outputs": [],
    "source": [
-    "%pip install --upgrade --quiet langchain langchain-google-vertexai"
+    "import sys\n",
+    "\n",
+    "if \"google.colab\" in sys.modules:\n",
+    "    from google.colab import auth\n",
+    "\n",
+    "    auth.authenticate_user()"
    ]
   },
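+  {
+   "cell_type": "markdown",
+   "id": "1f3a5c77",
+   "metadata": {},
+   "source": [
+    "**Optional: Authenticate outside Colab**\n",
+    "\n",
+    "If you're running this notebook somewhere other than Colab, one way to supply credentials is to point `GOOGLE_APPLICATION_CREDENTIALS` at a service account key file, as mentioned above. The cell below is a minimal sketch: the file path is a placeholder, and you can skip this step entirely if your environment already has credentials configured (for example, via `gcloud auth application-default login`)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "2e9d8b10",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "\n",
+    "# Placeholder path -- replace with the location of your service account key,\n",
+    "# or skip this cell if credentials are already configured in your environment.\n",
+    "os.environ.setdefault(\"GOOGLE_APPLICATION_CREDENTIALS\", \"/path/to/service-account.json\")"
+   ]
+  },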
+  {
+   "cell_type": "markdown",
+   "id": "9fbd4a33-2480-4ad1-8d56-aec730b3662b",
+   "metadata": {},
+   "source": [
+    "**Set Google Cloud project information and initialize Vertex AI SDK**\n",
+    "\n",
+    "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
+    "\n",
+    "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
+   ]
+  },
   {
    "cell_type": "code",
-   "execution_count": 1,
+   "execution_count": null,
+   "id": "36521c2a",
    "metadata": {},
    "outputs": [],
    "source": [
-    "from langchain_google_vertexai import VertexAIEmbeddings"
+    "PROJECT_ID = \"[your-project-id]\"  # @param {type:\"string\"}\n",
+    "LOCATION = \"us-central1\"  # @param {type:\"string\"}\n",
+    "\n",
+    "import vertexai\n",
+    "\n",
+    "vertexai.init(project=PROJECT_ID, location=LOCATION)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "c84fb993",
+   "metadata": {},
+   "source": [
+    "If you want automated tracing of your model calls, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting the cell below:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "39a4953b",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# import getpass\n",
+    "# import os\n",
+    "\n",
+    "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
+    "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "d9664366",
+   "metadata": {},
+   "source": [
+    "### Installation\n",
+    "\n",
+    "The LangChain Google Vertex AI Embeddings integration lives in the `langchain-google-vertexai` package:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "id": "64853226",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%pip install -qU langchain-google-vertexai"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "45dd1724",
+   "metadata": {},
+   "source": [
+    "## Instantiation\n",
+    "\n",
+    "Now we can instantiate our model object and generate embeddings:\n",
+    "\n",
+    "> Check the list of [Supported Models](https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings#supported-models)."
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 2,
+   "id": "9ea7a09b",
    "metadata": {},
    "outputs": [],
    "source": [
-    "embeddings = VertexAIEmbeddings()"
+    "from langchain_google_vertexai import VertexAIEmbeddings\n",
+    "\n",
+    "# Initialize a specific embeddings model version\n",
+    "embeddings = VertexAIEmbeddings(model_name=\"text-embedding-004\")"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "77d271b6",
+   "metadata": {},
+   "source": [
+    "## Indexing and Retrieval\n",
+    "\n",
+    "Embedding models are often used in retrieval-augmented generation (RAG) flows, both for indexing data and for retrieving it later. For more detailed instructions, please see our RAG tutorials under the [working with external knowledge tutorials](/docs/tutorials/#working-with-external-knowledge).\n",
+    "\n",
+    "Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 3,
+   "id": "d817716b",
    "metadata": {},
-   "outputs": [],
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'LangChain is the framework for building context-aware reasoning applications'"
+      ]
+     },
+     "execution_count": 3,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
    "source": [
-    "text = \"This is a test document.\""
+    "# Create a vector store with a sample text\n",
+    "from langchain_core.vectorstores import InMemoryVectorStore\n",
+    "\n",
+    "text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
+    "\n",
+    "vectorstore = InMemoryVectorStore.from_texts(\n",
+    "    [text],\n",
+    "    embedding=embeddings,\n",
+    ")\n",
+    "\n",
+    "# Use the vector store as a retriever\n",
+    "retriever = vectorstore.as_retriever()\n",
+    "\n",
+    "# Retrieve the most similar text\n",
+    "retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
+    "\n",
+    "# Show the retrieved document's content\n",
+    "retrieved_documents[0].page_content"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "e02b9855",
+   "metadata": {},
+   "source": [
+    "## Direct Usage\n",
+    "\n",
+    "Under the hood, the vector store and retriever implementations call `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
+    "\n",
+    "You can call these methods directly to get embeddings for your own use cases.\n",
+    "\n",
+    "### Embed single texts\n",
+    "\n",
+    "You can embed single texts or documents with `embed_query`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "id": "0d2befcd",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[-0.02831101417541504, 0.022063178941607475, -0.07454229146242142, 0.006448323838412762, 0.001955120\n"
+     ]
+    }
+   ],
+   "source": [
+    "single_vector = embeddings.embed_query(text)\n",
+    "print(str(single_vector)[:100])  # Show the first 100 characters of the vector"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "1b5a7d03",
+   "metadata": {},
+   "source": [
+    "### Embed multiple texts\n",
+    "\n",
+    "You can embed multiple texts with `embed_documents`:"
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 5,
+   "id": "2f4d6e97",
    "metadata": {},
-   "outputs": [],
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[-0.01092718355357647, 0.01213780976831913, -0.05650627985596657, 0.006737854331731796, 0.0085973171\n",
+      "[0.010135706514120102, 0.01234869472682476, -0.07284046709537506, 0.00027134662377648056, 0.01546290\n"
+     ]
+    }
+   ],
    "source": [
-    "query_result = embeddings.embed_query(text)"
+    "text2 = (\n",
+    "    \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
+    ")\n",
+    "two_vectors = embeddings.embed_documents([text, text2])\n",
+    "for vector in two_vectors:\n",
+    "    print(str(vector)[:100])  # Show the first 100 characters of the vector"
    ]
   },
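+  {
+   "cell_type": "markdown",
+   "id": "6c1b7a42",
+   "metadata": {},
+   "source": [
+    "As a quick illustration of what you can do with the raw vectors, the cell below compares the two document embeddings with cosine similarity (values closer to 1 indicate more semantically similar texts). This is a sketch rather than part of the Vertex AI API, and it assumes `numpy` is installed:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "7d4e5f88",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "\n",
+    "# Cosine similarity between the two vectors embedded above\n",
+    "v1, v2 = (np.array(v) for v in two_vectors)\n",
+    "cosine = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))\n",
+    "print(f\"Cosine similarity between the two texts: {cosine:.4f}\")"
+   ]
+  },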
"environment": { + "kernel": "python310", + "name": "tf2-gpu.2-6.m104", + "type": "gcloud", + "uri": "gcr.io/deeplearning-platform-release/tf2-gpu.2-6:m104" + }, "kernelspec": { - "display_name": "Python 3 (ipykernel)", + "display_name": ".venv", "language": "python", "name": "python3" }, @@ -97,14 +315,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.12" - }, - "vscode": { - "interpreter": { - "hash": "cc99336516f23363341912c6723b01ace86f02e26b4290be1efc0677e2e2ec24" - } + "version": "3.11.6" } }, "nbformat": 4, - "nbformat_minor": 4 + "nbformat_minor": 5 }