diff --git a/docs/docs/integrations/chat/anthropic.ipynb b/docs/docs/integrations/chat/anthropic.ipynb index f055d408593..36ab35e4e70 100644 --- a/docs/docs/integrations/chat/anthropic.ipynb +++ b/docs/docs/integrations/chat/anthropic.ipynb @@ -1,991 +1,1030 @@ { - "cells": [ - { - "cell_type": "raw", - "id": "afaf8039", - "metadata": {}, - "source": [ - "---\n", - "sidebar_label: Anthropic\n", - "---" - ] - }, - { - "cell_type": "markdown", - "id": "e49f1e0d", - "metadata": {}, - "source": [ - "# ChatAnthropic\n", - "\n", - "This notebook provides a quick overview for getting started with Anthropic [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatAnthropic features and configurations head to the [API reference](https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html).\n", - "\n", - "Anthropic has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Anthropic docs](https://docs.anthropic.com/en/docs/models-overview).\n", - "\n", - "\n", - ":::info AWS Bedrock and Google VertexAI\n", - "\n", - "Note that certain Anthropic models can also be accessed via AWS Bedrock and Google VertexAI. See the [ChatBedrock](/docs/integrations/chat/bedrock/) and [ChatVertexAI](/docs/integrations/chat/google_vertex_ai_palm/) integrations to use Anthropic models via these services.\n", - "\n", - ":::\n", - "\n", - "## Overview\n", - "### Integration details\n", - "\n", - "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/anthropic) | Package downloads | Package latest |\n", - "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n", - "| [ChatAnthropic](https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [langchain-anthropic](https://python.langchain.com/api_reference/anthropic/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-anthropic?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-anthropic?style=flat-square&label=%20) |\n", - "\n", - "### Model features\n", - "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n", - "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n", - "| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n", - "\n", - "## Setup\n", - "\n", - "To access Anthropic models you'll need to create an Anthropic account, get an API key, and install the `langchain-anthropic` integration package.\n", - "\n", - "### Credentials\n", - "\n", - "Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. 
Once you've done this set the ANTHROPIC_API_KEY environment variable:" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94", - "metadata": {}, - "outputs": [], - "source": [ - "import getpass\n", - "import os\n", - "\n", - "if \"ANTHROPIC_API_KEY\" not in os.environ:\n", - " os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass(\"Enter your Anthropic API key: \")" - ] - }, - { - "cell_type": "markdown", - "id": "72ee0c4b-9764-423a-9dbf-95129e185210", - "metadata": {}, - "source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:" - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "a15d341e-3e26-4ca3-830b-5aab30ed66de", - "metadata": {}, - "outputs": [], - "source": [ - "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n", - "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"" - ] - }, - { - "cell_type": "markdown", - "id": "0730d6a1-c893-4840-9817-5e5251676d5d", - "metadata": {}, - "source": [ - "### Installation\n", - "\n", - "The LangChain Anthropic integration lives in the `langchain-anthropic` package:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "652d6238-1f87-422a-b135-f5abbb8652fc", - "metadata": {}, - "outputs": [], - "source": [ - "%pip install -qU langchain-anthropic" - ] - }, - { - "cell_type": "markdown", - "id": "fe4993ad-4a9b-4021-8ebd-f0fbbc739f49", - "metadata": {}, - "source": [ - ":::info This guide requires ``langchain-anthropic>=0.3.10``\n", - "\n", - ":::" - ] - }, - { - "cell_type": "markdown", - "id": "a38cde65-254d-4219-a441-068766c0d4b5", - "metadata": {}, - "source": [ - "## Instantiation\n", - "\n", - "Now we can instantiate our model object and generate chat completions:" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae", - "metadata": {}, - "outputs": [], - "source": [ - "from langchain_anthropic import ChatAnthropic\n", - "\n", - "llm = ChatAnthropic(\n", - " model=\"claude-3-5-sonnet-20240620\",\n", - " temperature=0,\n", - " max_tokens=1024,\n", - " timeout=None,\n", - " max_retries=2,\n", - " # other params...\n", - ")" - ] - }, - { - "cell_type": "markdown", - "id": "2b4f3e15", - "metadata": {}, - "source": [ - "## Invocation\n" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "62e0dbc3", - "metadata": { - "tags": [] - }, - "outputs": [ - { - "data": { - "text/plain": [ - "AIMessage(content=\"J'adore la programmation.\", response_metadata={'id': 'msg_018Nnu76krRPq8HvgKLW4F8T', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 29, 'output_tokens': 11}}, id='run-57e9295f-db8a-48dc-9619-babd2bedd891-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40})" - ] - }, - "execution_count": 5, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "messages = [\n", - " (\n", - " \"system\",\n", - " \"You are a helpful assistant that translates English to French. 
Translate the user sentence.\",\n", - " ),\n", - " (\"human\", \"I love programming.\"),\n", - "]\n", - "ai_msg = llm.invoke(messages)\n", - "ai_msg" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "J'adore la programmation.\n" - ] - } - ], - "source": [ - "print(ai_msg.content)" - ] - }, - { - "cell_type": "markdown", - "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8", - "metadata": {}, - "source": [ - "## Chaining\n", - "\n", - "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "AIMessage(content=\"Here's the German translation:\\n\\nIch liebe Programmieren.\", response_metadata={'id': 'msg_01GhkRtQZUkA5Ge9hqmD8HGY', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 23, 'output_tokens': 18}}, id='run-da5906b4-b200-4e08-b81a-64d4453643b6-0', usage_metadata={'input_tokens': 23, 'output_tokens': 18, 'total_tokens': 41})" - ] - }, - "execution_count": 7, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from langchain_core.prompts import ChatPromptTemplate\n", - "\n", - "prompt = ChatPromptTemplate.from_messages(\n", - " [\n", - " (\n", - " \"system\",\n", - " \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n", - " ),\n", - " (\"human\", \"{input}\"),\n", - " ]\n", - ")\n", - "\n", - "chain = prompt | llm\n", - "chain.invoke(\n", - " {\n", - " \"input_language\": \"English\",\n", - " \"output_language\": \"German\",\n", - " \"input\": \"I love programming.\",\n", - " }\n", - ")" - ] - }, - { - "cell_type": "markdown", - "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd", - "metadata": {}, - "source": [ - "## Content blocks\n", - "\n", - "Content from a single Anthropic AI message can either be a single string or a **list of content blocks**. For example when an Anthropic model invokes a tool, the tool invocation is part of the message content (as well as being exposed in the standardized `AIMessage.tool_calls`):" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "4a374a24-2534-4e6f-825b-30fab7bbe0cb", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[{'text': \"To answer this question, we'll need to check the current weather in both Los Angeles (LA) and New York (NY). I'll use the GetWeather function to retrieve this information for both cities.\",\n", - " 'type': 'text'},\n", - " {'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A',\n", - " 'input': {'location': 'Los Angeles, CA'},\n", - " 'name': 'GetWeather',\n", - " 'type': 'tool_use'},\n", - " {'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP',\n", - " 'input': {'location': 'New York, NY'},\n", - " 'name': 'GetWeather',\n", - " 'type': 'tool_use'}]" - ] - }, - "execution_count": 8, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from pydantic import BaseModel, Field\n", - "\n", - "\n", - "class GetWeather(BaseModel):\n", - " \"\"\"Get the current weather in a given location\"\"\"\n", - "\n", - " location: str = Field(..., description=\"The city and state, e.g. 
San Francisco, CA\")\n", - "\n", - "\n", - "llm_with_tools = llm.bind_tools([GetWeather])\n", - "ai_msg = llm_with_tools.invoke(\"Which city is hotter today: LA or NY?\")\n", - "ai_msg.content" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "id": "6b4a1ead-952c-489f-a8d4-355d3fb55f3f", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[{'name': 'GetWeather',\n", - " 'args': {'location': 'Los Angeles, CA'},\n", - " 'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A'},\n", - " {'name': 'GetWeather',\n", - " 'args': {'location': 'New York, NY'},\n", - " 'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP'}]" - ] - }, - "execution_count": 9, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "ai_msg.tool_calls" - ] - }, - { - "cell_type": "markdown", - "id": "6e36d25c-f358-49e5-aefa-b99fbd3fec6b", - "metadata": {}, - "source": [ - "## Extended thinking\n", - "\n", - "Claude 3.7 Sonnet supports an [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking) feature, which will output the step-by-step reasoning process that led to its final answer.\n", - "\n", - "To use it, specify the `thinking` parameter when initializing `ChatAnthropic`. It can also be passed in as a kwarg during invocation.\n", - "\n", - "You will need to specify a token budget to use this feature. See usage example below:" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "a34cf93b-8522-43a6-a3f3-8a189ddf54a7", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[\n", - " {\n", - " \"signature\": \"ErUBCkYIARgCIkCx7bIPj35jGPHpoVOB2y5hvPF8MN4lVK75CYGftmVNlI4axz2+bBbSexofWsN1O/prwNv8yPXnIXQmwT6zrJsKEgwJzvks0yVRZtaGBScaDOm9xcpOxbuhku1zViIw9WDgil/KZL8DsqWrhVpC6TzM0RQNCcsHcmgmyxbgG9g8PR0eJGLxCcGoEw8zMQu1Kh1hQ1/03hZ2JCOgigpByR9aNPTwwpl64fQUe6WwIw==\",\n", - " \"thinking\": \"To find the cube root of 50.653, I need to find the value of $x$ such that $x^3 = 50.653$.\\n\\nI can try to estimate this first. \\n$3^3 = 27$\\n$4^3 = 64$\\n\\nSo the cube root of 50.653 will be somewhere between 3 and 4, but closer to 4.\\n\\nLet me try to compute this more precisely. 
I can use the cube root function:\\n\\ncube root of 50.653 = 50.653^(1/3)\\n\\nLet me calculate this:\\n50.653^(1/3) \\u2248 3.6998\\n\\nLet me verify:\\n3.6998^3 \\u2248 50.6533\\n\\nThat's very close to 50.653, so I'm confident that the cube root of 50.653 is approximately 3.6998.\\n\\nActually, let me compute this more precisely:\\n50.653^(1/3) \\u2248 3.69981\\n\\nLet me verify once more:\\n3.69981^3 \\u2248 50.652998\\n\\nThat's extremely close to 50.653, so I'll say that the cube root of 50.653 is approximately 3.69981.\",\n", - " \"type\": \"thinking\"\n", - " },\n", - " {\n", - " \"text\": \"The cube root of 50.653 is approximately 3.6998.\\n\\nTo verify: 3.6998\\u00b3 = 50.6530, which is very close to our original number.\",\n", - " \"type\": \"text\"\n", - " }\n", - "]\n" - ] - } - ], - "source": [ - "import json\n", - "\n", - "from langchain_anthropic import ChatAnthropic\n", - "\n", - "llm = ChatAnthropic(\n", - " model=\"claude-3-7-sonnet-latest\",\n", - " max_tokens=5000,\n", - " thinking={\"type\": \"enabled\", \"budget_tokens\": 2000},\n", - ")\n", - "\n", - "response = llm.invoke(\"What is the cube root of 50.653?\")\n", - "print(json.dumps(response.content, indent=2))" - ] - }, - { - "cell_type": "markdown", - "id": "34349dfe-5d81-4887-a4f4-cd01e9587cdc", - "metadata": {}, - "source": [ - "## Prompt caching\n", - "\n", - "Anthropic supports [caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) of [elements of your prompts](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#what-can-be-cached), including messages, tool definitions, tool results, images and documents. This allows you to re-use large documents, instructions, [few-shot documents](/docs/concepts/few_shot_prompting/), and other data to reduce latency and costs.\n", - "\n", - "To enable caching on an element of a prompt, mark its associated content block using the `cache_control` key. 
See examples below:\n", - "\n", - "### Messages" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "babb44a5-33f7-4200-9dfc-be867cf2c217", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "First invocation:\n", - "{'cache_read': 0, 'cache_creation': 1458}\n", - "\n", - "Second:\n", - "{'cache_read': 1458, 'cache_creation': 0}\n" - ] - } - ], - "source": [ - "import requests\n", - "from langchain_anthropic import ChatAnthropic\n", - "\n", - "llm = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\")\n", - "\n", - "# Pull LangChain readme\n", - "get_response = requests.get(\n", - " \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n", - ")\n", - "readme = get_response.text\n", - "\n", - "messages = [\n", - " {\n", - " \"role\": \"system\",\n", - " \"content\": [\n", - " {\n", - " \"type\": \"text\",\n", - " \"text\": \"You are a technology expert.\",\n", - " },\n", - " {\n", - " \"type\": \"text\",\n", - " \"text\": f\"{readme}\",\n", - " # highlight-next-line\n", - " \"cache_control\": {\"type\": \"ephemeral\"},\n", - " },\n", - " ],\n", - " },\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": \"What's LangChain, according to its README?\",\n", - " },\n", - "]\n", - "\n", - "response_1 = llm.invoke(messages)\n", - "response_2 = llm.invoke(messages)\n", - "\n", - "usage_1 = response_1.usage_metadata[\"input_token_details\"]\n", - "usage_2 = response_2.usage_metadata[\"input_token_details\"]\n", - "\n", - "print(f\"First invocation:\\n{usage_1}\")\n", - "print(f\"\\nSecond:\\n{usage_2}\")" - ] - }, - { - "cell_type": "markdown", - "id": "141ce9c5-012d-4502-9d61-4a413b5d959a", - "metadata": {}, - "source": [ - "### Tools" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "1de82015-810f-4ed4-a08b-9866ea8746ce", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "First invocation:\n", - "{'cache_read': 0, 'cache_creation': 1809}\n", - "\n", - "Second:\n", - "{'cache_read': 1809, 'cache_creation': 0}\n" - ] - } - ], - "source": [ - "from langchain_anthropic import convert_to_anthropic_tool\n", - "from langchain_core.tools import tool\n", - "\n", - "# For demonstration purposes, we artificially expand the\n", - "# tool description.\n", - "description = (\n", - " f\"Get the weather at a location. 
By the way, check out this readme: {readme}\"\n", - ")\n", - "\n", - "\n", - "@tool(description=description)\n", - "def get_weather(location: str) -> str:\n", - " return \"It's sunny.\"\n", - "\n", - "\n", - "# Enable caching on the tool\n", - "# highlight-start\n", - "weather_tool = convert_to_anthropic_tool(get_weather)\n", - "weather_tool[\"cache_control\"] = {\"type\": \"ephemeral\"}\n", - "# highlight-end\n", - "\n", - "llm = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\")\n", - "llm_with_tools = llm.bind_tools([weather_tool])\n", - "query = \"What's the weather in San Francisco?\"\n", - "\n", - "response_1 = llm_with_tools.invoke(query)\n", - "response_2 = llm_with_tools.invoke(query)\n", - "\n", - "usage_1 = response_1.usage_metadata[\"input_token_details\"]\n", - "usage_2 = response_2.usage_metadata[\"input_token_details\"]\n", - "\n", - "print(f\"First invocation:\\n{usage_1}\")\n", - "print(f\"\\nSecond:\\n{usage_2}\")" - ] - }, - { - "cell_type": "markdown", - "id": "a763830d-82cb-448a-ab30-f561522791b9", - "metadata": {}, - "source": [ - "### Incremental caching in conversational applications\n", - "\n", - "Prompt caching can be used in [multi-turn conversations](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#continuing-a-multi-turn-conversation) to maintain context from earlier messages without redundant processing.\n", - "\n", - "We can enable incremental caching by marking the final message with `cache_control`. Claude will automatically use the longest previously-cached prefix for follow-up messages.\n", - "\n", - "Below, we implement a simple chatbot that incorporates this feature. We follow the LangChain [chatbot tutorial](/docs/tutorials/chatbot/), but add a custom [reducer](https://langchain-ai.github.io/langgraph/concepts/low_level/#reducers) that automatically marks the last content block in each user message with `cache_control`. 
See below:" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "07fde4db-344c-49bc-a5b4-99e2d20fb394", - "metadata": {}, - "outputs": [], - "source": [ - "import requests\n", - "from langchain_anthropic import ChatAnthropic\n", - "from langgraph.checkpoint.memory import MemorySaver\n", - "from langgraph.graph import START, StateGraph, add_messages\n", - "from typing_extensions import Annotated, TypedDict\n", - "\n", - "llm = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\")\n", - "\n", - "# Pull LangChain readme\n", - "get_response = requests.get(\n", - " \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n", - ")\n", - "readme = get_response.text\n", - "\n", - "\n", - "def messages_reducer(left: list, right: list) -> list:\n", - " # Update last user message\n", - " for i in range(len(right) - 1, -1, -1):\n", - " if right[i].type == \"human\":\n", - " right[i].content[-1][\"cache_control\"] = {\"type\": \"ephemeral\"}\n", - " break\n", - "\n", - " return add_messages(left, right)\n", - "\n", - "\n", - "class State(TypedDict):\n", - " messages: Annotated[list, messages_reducer]\n", - "\n", - "\n", - "workflow = StateGraph(state_schema=State)\n", - "\n", - "\n", - "# Define the function that calls the model\n", - "def call_model(state: State):\n", - " response = llm.invoke(state[\"messages\"])\n", - " return {\"messages\": [response]}\n", - "\n", - "\n", - "# Define the (single) node in the graph\n", - "workflow.add_edge(START, \"model\")\n", - "workflow.add_node(\"model\", call_model)\n", - "\n", - "# Add memory\n", - "memory = MemorySaver()\n", - "app = workflow.compile(checkpointer=memory)" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "40013035-eb22-4327-8aaf-1ee974d9ff46", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "Hello, Bob! It's nice to meet you. How are you doing today? Is there something I can help you with?\n", - "\n", - "{'cache_read': 0, 'cache_creation': 0}\n" - ] - } - ], - "source": [ - "from langchain_core.messages import HumanMessage\n", - "\n", - "config = {\"configurable\": {\"thread_id\": \"abc123\"}}\n", - "\n", - "query = \"Hi! I'm Bob.\"\n", - "\n", - "input_message = HumanMessage([{\"type\": \"text\", \"text\": query}])\n", - "output = app.invoke({\"messages\": [input_message]}, config)\n", - "output[\"messages\"][-1].pretty_print()\n", - "print(f'\\n{output[\"messages\"][-1].usage_metadata[\"input_token_details\"]}')" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "22371f68-7913-4c4f-ab4a-2b4265095469", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "I can see you've shared the README from the LangChain GitHub repository. This is the documentation for LangChain, which is a popular framework for building applications powered by Large Language Models (LLMs). 
Here's a summary of what the README contains:\n", - "\n", - "LangChain is:\n", - "- A framework for developing LLM-powered applications\n", - "- Helps chain together components and integrations to simplify AI application development\n", - "- Provides a standard interface for models, embeddings, vector stores, etc.\n", - "\n", - "Key features/benefits:\n", - "- Real-time data augmentation (connect LLMs to diverse data sources)\n", - "- Model interoperability (swap models easily as needed)\n", - "- Large ecosystem of integrations\n", - "\n", - "The LangChain ecosystem includes:\n", - "- LangSmith - For evaluations and observability\n", - "- LangGraph - For building complex agents with customizable architecture\n", - "- LangGraph Platform - For deployment and scaling of agents\n", - "\n", - "The README also mentions installation instructions (`pip install -U langchain`) and links to various resources including tutorials, how-to guides, conceptual guides, and API references.\n", - "\n", - "Is there anything specific about LangChain you'd like to know more about, Bob?\n", - "\n", - "{'cache_read': 0, 'cache_creation': 1498}\n" - ] - } - ], - "source": [ - "query = f\"Check out this readme: {readme}\"\n", - "\n", - "input_message = HumanMessage([{\"type\": \"text\", \"text\": query}])\n", - "output = app.invoke({\"messages\": [input_message]}, config)\n", - "output[\"messages\"][-1].pretty_print()\n", - "print(f'\\n{output[\"messages\"][-1].usage_metadata[\"input_token_details\"]}')" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "0e6798fc-8a80-4324-b4e3-f18706256c61", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "==================================\u001b[1m Ai Message \u001b[0m==================================\n", - "\n", - "Your name is Bob. You introduced yourself at the beginning of our conversation.\n", - "\n", - "{'cache_read': 1498, 'cache_creation': 269}\n" - ] - } - ], - "source": [ - "query = \"What was my name again?\"\n", - "\n", - "input_message = HumanMessage([{\"type\": \"text\", \"text\": query}])\n", - "output = app.invoke({\"messages\": [input_message]}, config)\n", - "output[\"messages\"][-1].pretty_print()\n", - "print(f'\\n{output[\"messages\"][-1].usage_metadata[\"input_token_details\"]}')" - ] - }, - { - "cell_type": "markdown", - "id": "aa4b3647-c672-4782-a88c-a55fd3bf969f", - "metadata": {}, - "source": [ - "In the [LangSmith trace](https://smith.langchain.com/public/4d0584d8-5f9e-4b91-8704-93ba2ccf416a/r), toggling \"raw output\" will show exactly what messages are sent to the chat model, including `cache_control` keys." - ] - }, - { - "cell_type": "markdown", - "id": "029009f2-2795-418b-b5fc-fb996c6fe99e", - "metadata": {}, - "source": [ - "## Token-efficient tool use\n", - "\n", - "Anthropic supports a (beta) [token-efficient tool use](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/token-efficient-tool-use) feature. To use it, specify the relevant beta-headers when instantiating the model." 
- ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "206cff65-33b8-4a88-9b1a-050b4d57772a", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'toolu_01EoeE1qYaePcmNbUvMsWtmA', 'type': 'tool_call'}]\n", - "\n", - "Total tokens: 408\n" - ] - } - ], - "source": [ - "from langchain_anthropic import ChatAnthropic\n", - "from langchain_core.tools import tool\n", - "\n", - "llm = ChatAnthropic(\n", - " model=\"claude-3-7-sonnet-20250219\",\n", - " temperature=0,\n", - " # highlight-start\n", - " model_kwargs={\n", - " \"extra_headers\": {\"anthropic-beta\": \"token-efficient-tools-2025-02-19\"}\n", - " },\n", - " # highlight-end\n", - ")\n", - "\n", - "\n", - "@tool\n", - "def get_weather(location: str) -> str:\n", - " \"\"\"Get the weather at a location.\"\"\"\n", - " return \"It's sunny.\"\n", - "\n", - "\n", - "llm_with_tools = llm.bind_tools([get_weather])\n", - "response = llm_with_tools.invoke(\"What's the weather in San Francisco?\")\n", - "print(response.tool_calls)\n", - "print(f'\\nTotal tokens: {response.usage_metadata[\"total_tokens\"]}')" - ] - }, - { - "cell_type": "markdown", - "id": "301d372f-4dec-43e6-b58c-eee25633e1a6", - "metadata": {}, - "source": [ - "## Citations\n", - "\n", - "Anthropic supports a [citations](https://docs.anthropic.com/en/docs/build-with-claude/citations) feature that lets Claude attach context to its answers based on source documents supplied by the user. When [document content blocks](https://docs.anthropic.com/en/docs/build-with-claude/citations#document-types) with `\"citations\": {\"enabled\": True}` are included in a query, Claude may generate citations in its response.\n", - "\n", - "### Simple example\n", - "\n", - "In this example we pass a [plain text document](https://docs.anthropic.com/en/docs/build-with-claude/citations#plain-text-documents). In the background, Claude [automatically chunks](https://docs.anthropic.com/en/docs/build-with-claude/citations#plain-text-documents) the input text into sentences, which are used when generating citations." - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "e5370e6e-5a9a-4546-848b-5f5bf313c3e7", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[{'text': 'Based on the document, ', 'type': 'text'},\n", - " {'text': 'the grass is green',\n", - " 'type': 'text',\n", - " 'citations': [{'type': 'char_location',\n", - " 'cited_text': 'The grass is green. ',\n", - " 'document_index': 0,\n", - " 'document_title': 'My Document',\n", - " 'start_char_index': 0,\n", - " 'end_char_index': 20}]},\n", - " {'text': ', and ', 'type': 'text'},\n", - " {'text': 'the sky is blue',\n", - " 'type': 'text',\n", - " 'citations': [{'type': 'char_location',\n", - " 'cited_text': 'The sky is blue.',\n", - " 'document_index': 0,\n", - " 'document_title': 'My Document',\n", - " 'start_char_index': 20,\n", - " 'end_char_index': 36}]},\n", - " {'text': '.', 'type': 'text'}]" - ] - }, - "execution_count": 2, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from langchain_anthropic import ChatAnthropic\n", - "\n", - "llm = ChatAnthropic(model=\"claude-3-5-haiku-latest\")\n", - "\n", - "messages = [\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": [\n", - " {\n", - " \"type\": \"document\",\n", - " \"source\": {\n", - " \"type\": \"text\",\n", - " \"media_type\": \"text/plain\",\n", - " \"data\": \"The grass is green. 
The sky is blue.\",\n", - " },\n", - " \"title\": \"My Document\",\n", - " \"context\": \"This is a trustworthy document.\",\n", - " \"citations\": {\"enabled\": True},\n", - " },\n", - " {\"type\": \"text\", \"text\": \"What color is the grass and sky?\"},\n", - " ],\n", - " }\n", - "]\n", - "response = llm.invoke(messages)\n", - "response.content" - ] - }, - { - "cell_type": "markdown", - "id": "69956596-0e6c-492b-934d-c08ed3c9de9a", - "metadata": {}, - "source": [ - "### Using with text splitters\n", - "\n", - "Anthropic also lets you specify your own splits using [custom document](https://docs.anthropic.com/en/docs/build-with-claude/citations#custom-content-documents) types. LangChain [text splitters](/docs/concepts/text_splitters/) can be used to generate meaningful splits for this purpose. See the below example, where we split the LangChain README (a markdown document) and pass it to Claude as context:" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "04cc2841-7987-47a5-906c-09ea7fa28323", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[{'text': \"You can find LangChain's tutorials at https://python.langchain.com/docs/tutorials/\\n\\nThe tutorials section is recommended for those looking to build something specific or who prefer a hands-on learning approach. It's considered the best place to get started with LangChain.\",\n", - " 'type': 'text',\n", - " 'citations': [{'type': 'content_block_location',\n", - " 'cited_text': \"[Tutorials](https://python.langchain.com/docs/tutorials/):If you're looking to build something specific orare more of a hands-on learner, check out ourtutorials. This is the best place to get started.\",\n", - " 'document_index': 0,\n", - " 'document_title': None,\n", - " 'start_block_index': 243,\n", - " 'end_block_index': 248}]}]" - ] - }, - "execution_count": 3, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "import requests\n", - "from langchain_anthropic import ChatAnthropic\n", - "from langchain_text_splitters import MarkdownTextSplitter\n", - "\n", - "\n", - "def format_to_anthropic_documents(documents: list[str]):\n", - " return {\n", - " \"type\": \"document\",\n", - " \"source\": {\n", - " \"type\": \"content\",\n", - " \"content\": [{\"type\": \"text\", \"text\": document} for document in documents],\n", - " },\n", - " \"citations\": {\"enabled\": True},\n", - " }\n", - "\n", - "\n", - "# Pull readme\n", - "get_response = requests.get(\n", - " \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n", - ")\n", - "readme = get_response.text\n", - "\n", - "# Split into chunks\n", - "splitter = MarkdownTextSplitter(\n", - " chunk_overlap=0,\n", - " chunk_size=50,\n", - ")\n", - "documents = splitter.split_text(readme)\n", - "\n", - "# Construct message\n", - "message = {\n", - " \"role\": \"user\",\n", - " \"content\": [\n", - " format_to_anthropic_documents(documents),\n", - " {\"type\": \"text\", \"text\": \"Give me a link to LangChain's tutorials.\"},\n", - " ],\n", - "}\n", - "\n", - "# Query LLM\n", - "llm = ChatAnthropic(model=\"claude-3-5-haiku-latest\")\n", - "response = llm.invoke([message])\n", - "\n", - "response.content" - ] - }, - { - "cell_type": "markdown", - "id": "cbfec7a9-d9df-4d12-844e-d922456dd9bf", - "metadata": {}, - "source": [ - "## Built-in tools\n", - "\n", - "Anthropic supports a variety of [built-in tools](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/text-editor-tool), which can be bound to the model in the [usual 
way](/docs/how_to/tool_calling/). Claude will generate tool calls adhering to its internal schema for the tool:" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "30a0af36-2327-4b1d-9ba5-e47cb72db0be", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "I'd be happy to help you fix the syntax error in your primes.py file. First, let's look at the current content of the file to identify the error.\n" - ] - }, - { - "data": { - "text/plain": [ - "[{'name': 'str_replace_editor',\n", - " 'args': {'command': 'view', 'path': '/repo/primes.py'},\n", - " 'id': 'toolu_01VdNgt1YV7kGfj9LFLm6HyQ',\n", - " 'type': 'tool_call'}]" - ] - }, - "execution_count": 1, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from langchain_anthropic import ChatAnthropic\n", - "\n", - "llm = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\")\n", - "\n", - "tool = {\"type\": \"text_editor_20250124\", \"name\": \"str_replace_editor\"}\n", - "llm_with_tools = llm.bind_tools([tool])\n", - "\n", - "response = llm_with_tools.invoke(\n", - " \"There's a syntax error in my primes.py file. Can you help me fix it?\"\n", - ")\n", - "print(response.text())\n", - "response.tool_calls" - ] - }, - { - "cell_type": "markdown", - "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3", - "metadata": {}, - "source": [ - "## API reference\n", - "\n", - "For detailed documentation of all ChatAnthropic features and configurations head to the API reference: https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.4" - } + "cells": [ + { + "cell_type": "raw", + "id": "afaf8039", + "metadata": {}, + "source": [ + "---\n", + "sidebar_label: Anthropic\n", + "---" + ] }, - "nbformat": 4, - "nbformat_minor": 5 + { + "cell_type": "markdown", + "id": "e49f1e0d", + "metadata": {}, + "source": [ + "# ChatAnthropic\n", + "\n", + "This notebook provides a quick overview for getting started with Anthropic [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatAnthropic features and configurations head to the [API reference](https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html).\n", + "\n", + "Anthropic has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Anthropic docs](https://docs.anthropic.com/en/docs/models-overview).\n", + "\n", + "\n", + ":::info AWS Bedrock and Google VertexAI\n", + "\n", + "Note that certain Anthropic models can also be accessed via AWS Bedrock and Google VertexAI. 
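For instance, a minimal Bedrock sketch (assuming the `langchain-aws` package is installed; the model ID is illustrative) would be `ChatBedrock(model_id=\"anthropic.claude-3-5-sonnet-20240620-v1:0\")`. 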
See the [ChatBedrock](/docs/integrations/chat/bedrock/) and [ChatVertexAI](/docs/integrations/chat/google_vertex_ai_palm/) integrations to use Anthropic models via these services.\n", + "\n", + ":::\n", + "\n", + "## Overview\n", + "### Integration details\n", + "\n", + "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/anthropic) | Package downloads | Package latest |\n", + "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n", + "| [ChatAnthropic](https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html) | [langchain-anthropic](https://python.langchain.com/api_reference/anthropic/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-anthropic?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-anthropic?style=flat-square&label=%20) |\n", + "\n", + "### Model features\n", + "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n", + "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n", + "| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n", + "\n", + "## Setup\n", + "\n", + "To access Anthropic models you'll need to create an Anthropic account, get an API key, and install the `langchain-anthropic` integration package.\n", + "\n", + "### Credentials\n", + "\n", + "Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. 
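If you prefer to manage the key outside the notebook, you can also export it in your shell before launching Jupyter, e.g. `export ANTHROPIC_API_KEY=\"...\"`. 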
Once you've done this, set the `ANTHROPIC_API_KEY` environment variable:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94", + "metadata": {}, + "outputs": [], + "source": [ + "import getpass\n", + "import os\n", + "\n", + "if \"ANTHROPIC_API_KEY\" not in os.environ:\n", + " os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass(\"Enter your Anthropic API key: \")" + ] + }, + { + "cell_type": "markdown", + "id": "72ee0c4b-9764-423a-9dbf-95129e185210", + "metadata": {}, + "source": [ + "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "a15d341e-3e26-4ca3-830b-5aab30ed66de", + "metadata": {}, + "outputs": [], + "source": [ + "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n", + "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"" + ] + }, + { + "cell_type": "markdown", + "id": "0730d6a1-c893-4840-9817-5e5251676d5d", + "metadata": {}, + "source": [ + "### Installation\n", + "\n", + "The LangChain Anthropic integration lives in the `langchain-anthropic` package:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "652d6238-1f87-422a-b135-f5abbb8652fc", + "metadata": {}, + "outputs": [], + "source": [ + "%pip install -qU langchain-anthropic" + ] + }, + { + "cell_type": "markdown", + "id": "fe4993ad-4a9b-4021-8ebd-f0fbbc739f49", + "metadata": {}, + "source": [ + ":::info This guide requires ``langchain-anthropic>=0.3.10``\n", + "\n", + ":::" + ] + }, + { + "cell_type": "markdown", + "id": "a38cde65-254d-4219-a441-068766c0d4b5", + "metadata": {}, + "source": [ + "## Instantiation\n", + "\n", + "Now we can instantiate our model object and generate chat completions:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae", + "metadata": {}, + "outputs": [], + "source": [ + "from langchain_anthropic import ChatAnthropic\n", + "\n", + "llm = ChatAnthropic(\n", + " model=\"claude-3-5-sonnet-20240620\",\n", + " temperature=0,\n", + " max_tokens=1024,\n", + " timeout=None,\n", + " max_retries=2,\n", + " # other params...\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "2b4f3e15", + "metadata": {}, + "source": [ + "## Invocation\n" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "62e0dbc3", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "data": { + "text/plain": [ + "AIMessage(content=\"J'adore la programmation.\", response_metadata={'id': 'msg_018Nnu76krRPq8HvgKLW4F8T', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 29, 'output_tokens': 11}}, id='run-57e9295f-db8a-48dc-9619-babd2bedd891-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40})" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "messages = [\n", + " (\n", + " \"system\",\n", + " \"You are a helpful assistant that translates English to French. 
Translate the user sentence.\",\n", + " ),\n", + " (\"human\", \"I love programming.\"),\n", + "]\n", + "ai_msg = llm.invoke(messages)\n", + "ai_msg" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "J'adore la programmation.\n" + ] + } + ], + "source": [ + "print(ai_msg.content)" + ] + }, + { + "cell_type": "markdown", + "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8", + "metadata": {}, + "source": [ + "## Chaining\n", + "\n", + "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "AIMessage(content=\"Here's the German translation:\\n\\nIch liebe Programmieren.\", response_metadata={'id': 'msg_01GhkRtQZUkA5Ge9hqmD8HGY', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 23, 'output_tokens': 18}}, id='run-da5906b4-b200-4e08-b81a-64d4453643b6-0', usage_metadata={'input_tokens': 23, 'output_tokens': 18, 'total_tokens': 41})" + ] + }, + "execution_count": 7, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from langchain_core.prompts import ChatPromptTemplate\n", + "\n", + "prompt = ChatPromptTemplate.from_messages(\n", + " [\n", + " (\n", + " \"system\",\n", + " \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n", + " ),\n", + " (\"human\", \"{input}\"),\n", + " ]\n", + ")\n", + "\n", + "chain = prompt | llm\n", + "chain.invoke(\n", + " {\n", + " \"input_language\": \"English\",\n", + " \"output_language\": \"German\",\n", + " \"input\": \"I love programming.\",\n", + " }\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd", + "metadata": {}, + "source": [ + "## Content blocks\n", + "\n", + "Content from a single Anthropic AI message can either be a single string or a **list of content blocks**. For example when an Anthropic model invokes a tool, the tool invocation is part of the message content (as well as being exposed in the standardized `AIMessage.tool_calls`):" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "4a374a24-2534-4e6f-825b-30fab7bbe0cb", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[{'text': \"To answer this question, we'll need to check the current weather in both Los Angeles (LA) and New York (NY). I'll use the GetWeather function to retrieve this information for both cities.\",\n", + " 'type': 'text'},\n", + " {'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A',\n", + " 'input': {'location': 'Los Angeles, CA'},\n", + " 'name': 'GetWeather',\n", + " 'type': 'tool_use'},\n", + " {'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP',\n", + " 'input': {'location': 'New York, NY'},\n", + " 'name': 'GetWeather',\n", + " 'type': 'tool_use'}]" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from pydantic import BaseModel, Field\n", + "\n", + "\n", + "class GetWeather(BaseModel):\n", + " \"\"\"Get the current weather in a given location\"\"\"\n", + "\n", + " location: str = Field(..., description=\"The city and state, e.g. 
San Francisco, CA\")\n", + "\n", + "\n", + "llm_with_tools = llm.bind_tools([GetWeather])\n", + "ai_msg = llm_with_tools.invoke(\"Which city is hotter today: LA or NY?\")\n", + "ai_msg.content" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "6b4a1ead-952c-489f-a8d4-355d3fb55f3f", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[{'name': 'GetWeather',\n", + " 'args': {'location': 'Los Angeles, CA'},\n", + " 'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A'},\n", + " {'name': 'GetWeather',\n", + " 'args': {'location': 'New York, NY'},\n", + " 'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP'}]" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "ai_msg.tool_calls" + ] + }, + { + "cell_type": "markdown", + "id": "6e36d25c-f358-49e5-aefa-b99fbd3fec6b", + "metadata": {}, + "source": [ + "## Extended thinking\n", + "\n", + "Claude 3.7 Sonnet supports an [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking) feature, which will output the step-by-step reasoning process that led to its final answer.\n", + "\n", + "To use it, specify the `thinking` parameter when initializing `ChatAnthropic`. It can also be passed in as a kwarg during invocation.\n", + "\n", + "You will need to specify a token budget to use this feature. See usage example below:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "a34cf93b-8522-43a6-a3f3-8a189ddf54a7", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[\n", + " {\n", + " \"signature\": \"ErUBCkYIARgCIkCx7bIPj35jGPHpoVOB2y5hvPF8MN4lVK75CYGftmVNlI4axz2+bBbSexofWsN1O/prwNv8yPXnIXQmwT6zrJsKEgwJzvks0yVRZtaGBScaDOm9xcpOxbuhku1zViIw9WDgil/KZL8DsqWrhVpC6TzM0RQNCcsHcmgmyxbgG9g8PR0eJGLxCcGoEw8zMQu1Kh1hQ1/03hZ2JCOgigpByR9aNPTwwpl64fQUe6WwIw==\",\n", + " \"thinking\": \"To find the cube root of 50.653, I need to find the value of $x$ such that $x^3 = 50.653$.\\n\\nI can try to estimate this first. \\n$3^3 = 27$\\n$4^3 = 64$\\n\\nSo the cube root of 50.653 will be somewhere between 3 and 4, but closer to 4.\\n\\nLet me try to compute this more precisely. 
I can use the cube root function:\\n\\ncube root of 50.653 = 50.653^(1/3)\\n\\nLet me calculate this:\\n50.653^(1/3) \\u2248 3.6998\\n\\nLet me verify:\\n3.6998^3 \\u2248 50.6533\\n\\nThat's very close to 50.653, so I'm confident that the cube root of 50.653 is approximately 3.6998.\\n\\nActually, let me compute this more precisely:\\n50.653^(1/3) \\u2248 3.69981\\n\\nLet me verify once more:\\n3.69981^3 \\u2248 50.652998\\n\\nThat's extremely close to 50.653, so I'll say that the cube root of 50.653 is approximately 3.69981.\",\n", + " \"type\": \"thinking\"\n", + " },\n", + " {\n", + " \"text\": \"The cube root of 50.653 is approximately 3.6998.\\n\\nTo verify: 3.6998\\u00b3 = 50.6530, which is very close to our original number.\",\n", + " \"type\": \"text\"\n", + " }\n", + "]\n" + ] + } + ], + "source": [ + "import json\n", + "\n", + "from langchain_anthropic import ChatAnthropic\n", + "\n", + "llm = ChatAnthropic(\n", + " model=\"claude-3-7-sonnet-latest\",\n", + " max_tokens=5000,\n", + " thinking={\"type\": \"enabled\", \"budget_tokens\": 2000},\n", + ")\n", + "\n", + "response = llm.invoke(\"What is the cube root of 50.653?\")\n", + "print(json.dumps(response.content, indent=2))" + ] + }, + { + "cell_type": "markdown", + "id": "34349dfe-5d81-4887-a4f4-cd01e9587cdc", + "metadata": {}, + "source": [ + "## Prompt caching\n", + "\n", + "Anthropic supports [caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) of [elements of your prompts](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#what-can-be-cached), including messages, tool definitions, tool results, images and documents. This allows you to re-use large documents, instructions, [few-shot documents](/docs/concepts/few_shot_prompting/), and other data to reduce latency and costs.\n", + "\n", + "To enable caching on an element of a prompt, mark its associated content block using the `cache_control` key. 
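For instance, a cacheable block is an ordinary content block with one extra key (the text here is a placeholder):\n\n```python\n{\"type\": \"text\", \"text\": \"<large document>\", \"cache_control\": {\"type\": \"ephemeral\"}}\n```\n\n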
See examples below:\n", + "\n", + "### Messages" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "babb44a5-33f7-4200-9dfc-be867cf2c217", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "First invocation:\n", + "{'cache_read': 0, 'cache_creation': 1458}\n", + "\n", + "Second:\n", + "{'cache_read': 1458, 'cache_creation': 0}\n" + ] + } + ], + "source": [ + "import requests\n", + "from langchain_anthropic import ChatAnthropic\n", + "\n", + "llm = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\")\n", + "\n", + "# Pull LangChain readme\n", + "get_response = requests.get(\n", + " \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n", + ")\n", + "readme = get_response.text\n", + "\n", + "messages = [\n", + " {\n", + " \"role\": \"system\",\n", + " \"content\": [\n", + " {\n", + " \"type\": \"text\",\n", + " \"text\": \"You are a technology expert.\",\n", + " },\n", + " {\n", + " \"type\": \"text\",\n", + " \"text\": f\"{readme}\",\n", + " # highlight-next-line\n", + " \"cache_control\": {\"type\": \"ephemeral\"},\n", + " },\n", + " ],\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"What's LangChain, according to its README?\",\n", + " },\n", + "]\n", + "\n", + "response_1 = llm.invoke(messages)\n", + "response_2 = llm.invoke(messages)\n", + "\n", + "usage_1 = response_1.usage_metadata[\"input_token_details\"]\n", + "usage_2 = response_2.usage_metadata[\"input_token_details\"]\n", + "\n", + "print(f\"First invocation:\\n{usage_1}\")\n", + "print(f\"\\nSecond:\\n{usage_2}\")" + ] + }, + { + "cell_type": "markdown", + "id": "141ce9c5-012d-4502-9d61-4a413b5d959a", + "metadata": {}, + "source": [ + "### Tools" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "1de82015-810f-4ed4-a08b-9866ea8746ce", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "First invocation:\n", + "{'cache_read': 0, 'cache_creation': 1809}\n", + "\n", + "Second:\n", + "{'cache_read': 1809, 'cache_creation': 0}\n" + ] + } + ], + "source": [ + "from langchain_anthropic import convert_to_anthropic_tool\n", + "from langchain_core.tools import tool\n", + "\n", + "# For demonstration purposes, we artificially expand the\n", + "# tool description.\n", + "description = (\n", + " f\"Get the weather at a location. 
By the way, check out this readme: {readme}\"\n", + ")\n", + "\n", + "\n", + "@tool(description=description)\n", + "def get_weather(location: str) -> str:\n", + " return \"It's sunny.\"\n", + "\n", + "\n", + "# Enable caching on the tool\n", + "# highlight-start\n", + "weather_tool = convert_to_anthropic_tool(get_weather)\n", + "weather_tool[\"cache_control\"] = {\"type\": \"ephemeral\"}\n", + "# highlight-end\n", + "\n", + "llm = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\")\n", + "llm_with_tools = llm.bind_tools([weather_tool])\n", + "query = \"What's the weather in San Francisco?\"\n", + "\n", + "response_1 = llm_with_tools.invoke(query)\n", + "response_2 = llm_with_tools.invoke(query)\n", + "\n", + "usage_1 = response_1.usage_metadata[\"input_token_details\"]\n", + "usage_2 = response_2.usage_metadata[\"input_token_details\"]\n", + "\n", + "print(f\"First invocation:\\n{usage_1}\")\n", + "print(f\"\\nSecond:\\n{usage_2}\")" + ] + }, + { + "cell_type": "markdown", + "id": "a763830d-82cb-448a-ab30-f561522791b9", + "metadata": {}, + "source": [ + "### Incremental caching in conversational applications\n", + "\n", + "Prompt caching can be used in [multi-turn conversations](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#continuing-a-multi-turn-conversation) to maintain context from earlier messages without redundant processing.\n", + "\n", + "We can enable incremental caching by marking the final message with `cache_control`. Claude will automatically use the longest previously-cached prefix for follow-up messages.\n", + "\n", + "Below, we implement a simple chatbot that incorporates this feature. We follow the LangChain [chatbot tutorial](/docs/tutorials/chatbot/), but add a custom [reducer](https://langchain-ai.github.io/langgraph/concepts/low_level/#reducers) that automatically marks the last content block in each user message with `cache_control`. 
See below:" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "07fde4db-344c-49bc-a5b4-99e2d20fb394", + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "from langchain_anthropic import ChatAnthropic\n", + "from langgraph.checkpoint.memory import MemorySaver\n", + "from langgraph.graph import START, StateGraph, add_messages\n", + "from typing_extensions import Annotated, TypedDict\n", + "\n", + "llm = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\")\n", + "\n", + "# Pull LangChain readme\n", + "get_response = requests.get(\n", + " \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n", + ")\n", + "readme = get_response.text\n", + "\n", + "\n", + "def messages_reducer(left: list, right: list) -> list:\n", + " # Update last user message\n", + " for i in range(len(right) - 1, -1, -1):\n", + " if right[i].type == \"human\":\n", + " right[i].content[-1][\"cache_control\"] = {\"type\": \"ephemeral\"}\n", + " break\n", + "\n", + " return add_messages(left, right)\n", + "\n", + "\n", + "class State(TypedDict):\n", + " messages: Annotated[list, messages_reducer]\n", + "\n", + "\n", + "workflow = StateGraph(state_schema=State)\n", + "\n", + "\n", + "# Define the function that calls the model\n", + "def call_model(state: State):\n", + " response = llm.invoke(state[\"messages\"])\n", + " return {\"messages\": [response]}\n", + "\n", + "\n", + "# Define the (single) node in the graph\n", + "workflow.add_edge(START, \"model\")\n", + "workflow.add_node(\"model\", call_model)\n", + "\n", + "# Add memory\n", + "memory = MemorySaver()\n", + "app = workflow.compile(checkpointer=memory)" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "40013035-eb22-4327-8aaf-1ee974d9ff46", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "==================================\u001b[1m Ai Message \u001b[0m==================================\n", + "\n", + "Hello, Bob! It's nice to meet you. How are you doing today? Is there something I can help you with?\n", + "\n", + "{'cache_read': 0, 'cache_creation': 0}\n" + ] + } + ], + "source": [ + "from langchain_core.messages import HumanMessage\n", + "\n", + "config = {\"configurable\": {\"thread_id\": \"abc123\"}}\n", + "\n", + "query = \"Hi! I'm Bob.\"\n", + "\n", + "input_message = HumanMessage([{\"type\": \"text\", \"text\": query}])\n", + "output = app.invoke({\"messages\": [input_message]}, config)\n", + "output[\"messages\"][-1].pretty_print()\n", + "print(f'\\n{output[\"messages\"][-1].usage_metadata[\"input_token_details\"]}')" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "22371f68-7913-4c4f-ab4a-2b4265095469", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "==================================\u001b[1m Ai Message \u001b[0m==================================\n", + "\n", + "I can see you've shared the README from the LangChain GitHub repository. This is the documentation for LangChain, which is a popular framework for building applications powered by Large Language Models (LLMs). 
Here's a summary of what the README contains:\n", + "\n", + "LangChain is:\n", + "- A framework for developing LLM-powered applications\n", + "- Helps chain together components and integrations to simplify AI application development\n", + "- Provides a standard interface for models, embeddings, vector stores, etc.\n", + "\n", + "Key features/benefits:\n", + "- Real-time data augmentation (connect LLMs to diverse data sources)\n", + "- Model interoperability (swap models easily as needed)\n", + "- Large ecosystem of integrations\n", + "\n", + "The LangChain ecosystem includes:\n", + "- LangSmith - For evaluations and observability\n", + "- LangGraph - For building complex agents with customizable architecture\n", + "- LangGraph Platform - For deployment and scaling of agents\n", + "\n", + "The README also mentions installation instructions (`pip install -U langchain`) and links to various resources including tutorials, how-to guides, conceptual guides, and API references.\n", + "\n", + "Is there anything specific about LangChain you'd like to know more about, Bob?\n", + "\n", + "{'cache_read': 0, 'cache_creation': 1498}\n" + ] + } + ], + "source": [ + "query = f\"Check out this readme: {readme}\"\n", + "\n", + "input_message = HumanMessage([{\"type\": \"text\", \"text\": query}])\n", + "output = app.invoke({\"messages\": [input_message]}, config)\n", + "output[\"messages\"][-1].pretty_print()\n", + "print(f'\\n{output[\"messages\"][-1].usage_metadata[\"input_token_details\"]}')" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "0e6798fc-8a80-4324-b4e3-f18706256c61", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "==================================\u001b[1m Ai Message \u001b[0m==================================\n", + "\n", + "Your name is Bob. You introduced yourself at the beginning of our conversation.\n", + "\n", + "{'cache_read': 1498, 'cache_creation': 269}\n" + ] + } + ], + "source": [ + "query = \"What was my name again?\"\n", + "\n", + "input_message = HumanMessage([{\"type\": \"text\", \"text\": query}])\n", + "output = app.invoke({\"messages\": [input_message]}, config)\n", + "output[\"messages\"][-1].pretty_print()\n", + "print(f'\\n{output[\"messages\"][-1].usage_metadata[\"input_token_details\"]}')" + ] + }, + { + "cell_type": "markdown", + "id": "aa4b3647-c672-4782-a88c-a55fd3bf969f", + "metadata": {}, + "source": [ + "In the [LangSmith trace](https://smith.langchain.com/public/4d0584d8-5f9e-4b91-8704-93ba2ccf416a/r), toggling \"raw output\" will show exactly what messages are sent to the chat model, including `cache_control` keys." + ] + }, + { + "cell_type": "markdown", + "id": "029009f2-2795-418b-b5fc-fb996c6fe99e", + "metadata": {}, + "source": [ + "## Token-efficient tool use\n", + "\n", + "Anthropic supports a (beta) [token-efficient tool use](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/token-efficient-tool-use) feature. To use it, specify the relevant beta-headers when instantiating the model." 
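, + "\n", + "The essential configuration is just the beta header; here is a minimal sketch (the dated value below is the identifier current at the time of writing and may change):\n", + "\n", + "```python\n", + "llm = ChatAnthropic(\n", + " model=\"claude-3-7-sonnet-20250219\",\n", + " model_kwargs={\"extra_headers\": {\"anthropic-beta\": \"token-efficient-tools-2025-02-19\"}},\n", + ")\n", + "```"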
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "206cff65-33b8-4a88-9b1a-050b4d57772a",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'toolu_01EoeE1qYaePcmNbUvMsWtmA', 'type': 'tool_call'}]\n",
+      "\n",
+      "Total tokens: 408\n"
+     ]
+    }
+   ],
+   "source": [
+    "from langchain_anthropic import ChatAnthropic\n",
+    "from langchain_core.tools import tool\n",
+    "\n",
+    "llm = ChatAnthropic(\n",
+    "    model=\"claude-3-7-sonnet-20250219\",\n",
+    "    temperature=0,\n",
+    "    # highlight-start\n",
+    "    model_kwargs={\n",
+    "        \"extra_headers\": {\"anthropic-beta\": \"token-efficient-tools-2025-02-19\"}\n",
+    "    },\n",
+    "    # highlight-end\n",
+    ")\n",
+    "\n",
+    "\n",
+    "@tool\n",
+    "def get_weather(location: str) -> str:\n",
+    "    \"\"\"Get the weather at a location.\"\"\"\n",
+    "    return \"It's sunny.\"\n",
+    "\n",
+    "\n",
+    "llm_with_tools = llm.bind_tools([get_weather])\n",
+    "response = llm_with_tools.invoke(\"What's the weather in San Francisco?\")\n",
+    "print(response.tool_calls)\n",
+    "print(f'\\nTotal tokens: {response.usage_metadata[\"total_tokens\"]}')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "301d372f-4dec-43e6-b58c-eee25633e1a6",
+   "metadata": {},
+   "source": [
+    "## Citations\n",
+    "\n",
+    "Anthropic supports a [citations](https://docs.anthropic.com/en/docs/build-with-claude/citations) feature that lets Claude attach context to its answers based on source documents supplied by the user. When [document content blocks](https://docs.anthropic.com/en/docs/build-with-claude/citations#document-types) with `\"citations\": {\"enabled\": True}` are included in a query, Claude may generate citations in its response.\n",
+    "\n",
+    "### Simple example\n",
+    "\n",
+    "In this example we pass a [plain text document](https://docs.anthropic.com/en/docs/build-with-claude/citations#plain-text-documents). In the background, Claude [automatically chunks](https://docs.anthropic.com/en/docs/build-with-claude/citations#plain-text-documents) the input text into sentences, which are used when generating citations."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "id": "e5370e6e-5a9a-4546-848b-5f5bf313c3e7",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "[{'text': 'Based on the document, ', 'type': 'text'},\n",
+       " {'text': 'the grass is green',\n",
+       "  'type': 'text',\n",
+       "  'citations': [{'type': 'char_location',\n",
+       "    'cited_text': 'The grass is green. ',\n",
+       "    'document_index': 0,\n",
+       "    'document_title': 'My Document',\n",
+       "    'start_char_index': 0,\n",
+       "    'end_char_index': 20}]},\n",
+       " {'text': ', and ', 'type': 'text'},\n",
+       " {'text': 'the sky is blue',\n",
+       "  'type': 'text',\n",
+       "  'citations': [{'type': 'char_location',\n",
+       "    'cited_text': 'The sky is blue.',\n",
+       "    'document_index': 0,\n",
+       "    'document_title': 'My Document',\n",
+       "    'start_char_index': 20,\n",
+       "    'end_char_index': 36}]},\n",
+       " {'text': '.', 'type': 'text'}]"
+      ]
+     },
+     "execution_count": 2,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "from langchain_anthropic import ChatAnthropic\n",
+    "\n",
+    "llm = ChatAnthropic(model=\"claude-3-5-haiku-latest\")\n",
+    "\n",
+    "messages = [\n",
+    "    {\n",
+    "        \"role\": \"user\",\n",
+    "        \"content\": [\n",
+    "            {\n",
+    "                \"type\": \"document\",\n",
+    "                \"source\": {\n",
+    "                    \"type\": \"text\",\n",
+    "                    \"media_type\": \"text/plain\",\n",
+    "                    \"data\": \"The grass is green. The sky is blue.\",\n",
+    "                },\n",
+    "                \"title\": \"My Document\",\n",
+    "                \"context\": \"This is a trustworthy document.\",\n",
+    "                \"citations\": {\"enabled\": True},\n",
+    "            },\n",
+    "            {\"type\": \"text\", \"text\": \"What color is the grass and sky?\"},\n",
+    "        ],\n",
+    "    }\n",
+    "]\n",
+    "response = llm.invoke(messages)\n",
+    "response.content"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "69956596-0e6c-492b-934d-c08ed3c9de9a",
+   "metadata": {},
+   "source": [
+    "### Using with text splitters\n",
+    "\n",
+    "Anthropic also lets you specify your own splits using [custom document](https://docs.anthropic.com/en/docs/build-with-claude/citations#custom-content-documents) types. LangChain [text splitters](/docs/concepts/text_splitters/) can be used to generate meaningful splits for this purpose. See the example below, where we split the LangChain README (a markdown document) and pass it to Claude as context:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "id": "04cc2841-7987-47a5-906c-09ea7fa28323",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "[{'text': \"You can find LangChain's tutorials at https://python.langchain.com/docs/tutorials/\\n\\nThe tutorials section is recommended for those looking to build something specific or who prefer a hands-on learning approach. It's considered the best place to get started with LangChain.\",\n",
+       "  'type': 'text',\n",
+       "  'citations': [{'type': 'content_block_location',\n",
+       "    'cited_text': \"[Tutorials](https://python.langchain.com/docs/tutorials/):If you're looking to build something specific orare more of a hands-on learner, check out ourtutorials. This is the best place to get started.\",\n",
+       "    'document_index': 0,\n",
+       "    'document_title': None,\n",
+       "    'start_block_index': 243,\n",
+       "    'end_block_index': 248}]}]"
+      ]
+     },
+     "execution_count": 3,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "import requests\n",
+    "from langchain_anthropic import ChatAnthropic\n",
+    "from langchain_text_splitters import MarkdownTextSplitter\n",
+    "\n",
+    "\n",
+    "def format_to_anthropic_documents(documents: list[str]) -> dict:\n",
+    "    return {\n",
+    "        \"type\": \"document\",\n",
+    "        \"source\": {\n",
+    "            \"type\": \"content\",\n",
+    "            \"content\": [{\"type\": \"text\", \"text\": document} for document in documents],\n",
+    "        },\n",
+    "        \"citations\": {\"enabled\": True},\n",
+    "    }\n",
+    "\n",
+    "\n",
+    "# Pull readme\n",
+    "get_response = requests.get(\n",
+    "    \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n",
+    ")\n",
+    "readme = get_response.text\n",
+    "\n",
+    "# Split into chunks\n",
+    "splitter = MarkdownTextSplitter(\n",
+    "    chunk_overlap=0,\n",
+    "    chunk_size=50,\n",
+    ")\n",
+    "documents = splitter.split_text(readme)\n",
+    "\n",
+    "# Construct message\n",
+    "message = {\n",
+    "    \"role\": \"user\",\n",
+    "    \"content\": [\n",
+    "        format_to_anthropic_documents(documents),\n",
+    "        {\"type\": \"text\", \"text\": \"Give me a link to LangChain's tutorials.\"},\n",
+    "    ],\n",
+    "}\n",
+    "\n",
+    "# Query LLM\n",
+    "llm = ChatAnthropic(model=\"claude-3-5-haiku-latest\")\n",
+    "response = llm.invoke([message])\n",
+    "\n",
+    "response.content"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "cbfec7a9-d9df-4d12-844e-d922456dd9bf",
+   "metadata": {},
+   "source": [
+    "## Built-in tools\n",
+    "\n",
+    "Anthropic supports a variety of [built-in tools](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/text-editor-tool), which can be bound to the model in the [usual way](/docs/how_to/tool_calling/). Claude will generate tool calls adhering to its internal schema for the tool:"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "74988918-f740-4cf6-a02c-7ea8f1927740",
+   "metadata": {},
+   "source": [
+    "### Web search\n",
+    "\n",
+    "Claude can use a [web search tool](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/web-search-tool) to run searches and ground its responses with citations."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "a83d4c91-9b7c-49ad-86d6-11a8fdc781b6",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from langchain_anthropic import ChatAnthropic\n",
+    "\n",
+    "llm = ChatAnthropic(model=\"claude-3-5-sonnet-latest\")\n",
+    "\n",
+    "tool = {\"type\": \"web_search_20250305\", \"name\": \"web_search\", \"max_uses\": 3}\n",
+    "llm_with_tools = llm.bind_tools([tool])\n",
+    "\n",
+    "response = llm_with_tools.invoke(\"How do I update a web app to TypeScript 5.5?\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "2fd5d545-a40d-42b1-ad0c-0a79e2536c9b",
+   "metadata": {},
+   "source": [
+    "### Text editor\n",
+    "\n",
+    "The text editor tool can be used to view and modify text files. See docs [here](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/text-editor-tool) for details."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "30a0af36-2327-4b1d-9ba5-e47cb72db0be",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "I'd be happy to help you fix the syntax error in your primes.py file. First, let's look at the current content of the file to identify the error.\n"
+     ]
+    },
+    {
+     "data": {
+      "text/plain": [
+       "[{'name': 'str_replace_editor',\n",
+       " 'args': {'command': 'view', 'path': '/repo/primes.py'},\n",
+       " 'id': 'toolu_01VdNgt1YV7kGfj9LFLm6HyQ',\n",
+       " 'type': 'tool_call'}]"
+      ]
+     },
+     "execution_count": 1,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "from langchain_anthropic import ChatAnthropic\n",
+    "\n",
+    "llm = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\")\n",
+    "\n",
+    "tool = {\"type\": \"text_editor_20250124\", \"name\": \"str_replace_editor\"}\n",
+    "llm_with_tools = llm.bind_tools([tool])\n",
+    "\n",
+    "response = llm_with_tools.invoke(\n",
+    "    \"There's a syntax error in my primes.py file. Can you help me fix it?\"\n",
+    ")\n",
+    "print(response.text())\n",
+    "response.tool_calls"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
+   "metadata": {},
+   "source": [
+    "## API reference\n",
+    "\n",
+    "For detailed documentation of all ChatAnthropic features and configurations head to the API reference: https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.4"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
 }
diff --git a/libs/partners/anthropic/langchain_anthropic/chat_models.py b/libs/partners/anthropic/langchain_anthropic/chat_models.py
index e3e1bebc40d..661ef286c2b 100644
--- a/libs/partners/anthropic/langchain_anthropic/chat_models.py
+++ b/libs/partners/anthropic/langchain_anthropic/chat_models.py
@@ -866,29 +866,46 @@ class ChatAnthropic(BaseChatModel):
         See LangChain `docs `_ for more detail.
-        .. code-block:: python
+    Web search:
-            from langchain_anthropic import ChatAnthropic
+        .. code-block:: python
-            llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")
+            from langchain_anthropic import ChatAnthropic
-            tool = {"type": "text_editor_20250124", "name": "str_replace_editor"}
-            llm_with_tools = llm.bind_tools([tool])
+            llm = ChatAnthropic(model="claude-3-5-sonnet-latest")
-            response = llm_with_tools.invoke(
-                "There's a syntax error in my primes.py file. Can you help me fix it?"
-            )
-            print(response.text())
-            response.tool_calls
+            tool = {"type": "web_search_20250305", "name": "web_search", "max_uses": 3}
+            llm_with_tools = llm.bind_tools([tool])
-        .. code-block:: none
+            response = llm_with_tools.invoke(
+                "How do I update a web app to TypeScript 5.5?"
+            )
-            I'd be happy to help you fix the syntax error in your primes.py file. First, let's look at the current content of the file to identify the error.
+    Text editor:
-            [{'name': 'str_replace_editor',
-             'args': {'command': 'view', 'path': '/repo/primes.py'},
-             'id': 'toolu_01VdNgt1YV7kGfj9LFLm6HyQ',
-             'type': 'tool_call'}]
+        .. code-block:: python
+
+            from langchain_anthropic import ChatAnthropic
+
+            llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")
+
+            tool = {"type": "text_editor_20250124", "name": "str_replace_editor"}
+            llm_with_tools = llm.bind_tools([tool])
+
+            response = llm_with_tools.invoke(
+                "There's a syntax error in my primes.py file. Can you help me fix it?"
+            )
+            print(response.text())
+            response.tool_calls
+
+        .. code-block:: none
+
+            I'd be happy to help you fix the syntax error in your primes.py file. First, let's look at the current content of the file to identify the error.
+
+            [{'name': 'str_replace_editor',
+             'args': {'command': 'view', 'path': '/repo/primes.py'},
+             'id': 'toolu_01VdNgt1YV7kGfj9LFLm6HyQ',
+             'type': 'tool_call'}]
     Response metadata
         .. code-block:: python