diff --git a/docs/docs/integrations/chat/google_generative_ai.ipynb b/docs/docs/integrations/chat/google_generative_ai.ipynb
index 8620197c9d8..fb6029577d1 100644
--- a/docs/docs/integrations/chat/google_generative_ai.ipynb
+++ b/docs/docs/integrations/chat/google_generative_ai.ipynb
@@ -31,7 +31,7 @@
     "\n",
     "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
     "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
-    "| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |\n",
+    "| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |\n",
     "\n",
     "### Setup\n",
     "\n",
@@ -653,15 +653,35 @@
     "\n",
     "# Initialize the model\n",
     "llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\", temperature=0)\n",
-    "structured_llm = llm.with_structured_output(Person)\n",
+    "\n",
+    "# Method 1: Default function calling approach\n",
+    "structured_llm_default = llm.with_structured_output(Person)\n",
+    "\n",
+    "# Method 2: Native JSON mode\n",
+    "structured_llm_json = llm.with_structured_output(Person, method=\"json_mode\")\n",
     "\n",
     "# Invoke the model with a query asking for structured information\n",
-    "result = structured_llm.invoke(\n",
+    "result = structured_llm_json.invoke(\n",
     "    \"Who was the 16th president of the USA, and how tall was he in meters?\"\n",
     ")\n",
     "print(result)"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "g9w06ld1ggq",
+   "metadata": {},
+   "source": [
+    "### Structured Output Methods\n",
+    "\n",
+    "Two methods are supported for structured output:\n",
+    "\n",
+    "- **`method=\"function_calling\"` (default)**: Uses tool calling to extract structured data. Compatible with all Gemini models.\n",
+    "- **`method=\"json_mode\"`**: Uses Gemini's native structured output with `responseSchema`. More reliable but requires Gemini 1.5+ models.\n",
+    "\n",
+    "The `json_mode` method is **recommended for better reliability** as it constrains the model's generation process directly rather than relying on post-processing tool calls."
+   ]
+  },
   {
    "cell_type": "markdown",
    "id": "90d4725e",
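
For reference, below is a minimal, self-contained sketch of the flow the second hunk modifies. It is not part of the diff: the `Person` model is defined in an earlier notebook cell not shown in this hunk, so the field names used here (`name`, `height_m`) are illustrative, and the sketch assumes `langchain-google-genai` is installed and a `GOOGLE_API_KEY` is set in the environment.

```python
# Illustrative sketch only -- the Person schema below is assumed; the notebook
# defines its own version in an earlier cell that this hunk does not show.
from pydantic import BaseModel, Field

from langchain_google_genai import ChatGoogleGenerativeAI


class Person(BaseModel):
    """Information about a person."""

    name: str = Field(..., description="The person's name")
    height_m: float = Field(..., description="The person's height in meters")


llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash", temperature=0)

# Method 1 (default): structured output via tool/function calling
structured_llm_default = llm.with_structured_output(Person)

# Method 2: native JSON mode backed by Gemini's responseSchema
structured_llm_json = llm.with_structured_output(Person, method="json_mode")

result = structured_llm_json.invoke(
    "Who was the 16th president of the USA, and how tall was he in meters?"
)
print(result)  # e.g. name='Abraham Lincoln' height_m=1.93 (output illustrative)
```

Both wrapped models return a `Person` instance; per the new markdown cell, only the mechanism Gemini uses to produce the structured JSON differs.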