# langchain-anthropic
This package contains the LangChain integration for Anthropic's generative models.
## Installation

```bash
pip install -U langchain-anthropic
```
## Chat Models
Anthropic recommends using their chat models over text completions; see Anthropic's documentation for their recommended models.
To use, you should have an Anthropic API key configured (for example, via the `ANTHROPIC_API_KEY` environment variable). Initialize the model as:

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import AIMessage, HumanMessage

model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0, max_tokens=1024)

# Define the input message
message = HumanMessage(content="What is the capital of France?")

# Generate a response using the model
response = model.invoke([message])
```
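`ChatAnthropic` implements the standard LangChain Runnable interface, so the same model object also supports streaming and batching. A minimal sketch (the prompts are illustrative):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0, max_tokens=1024)
message = HumanMessage(content="What is the capital of France?")

# Stream the response chunk by chunk instead of waiting for the full reply
for chunk in model.stream([message]):
    print(chunk.content, end="", flush=True)

# Run several prompts in one call; returns one AIMessage per input
responses = model.batch(
    [
        [HumanMessage(content="What is the capital of France?")],
        [HumanMessage(content="What is the capital of Japan?")],
    ]
)
```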
For a more detailed walkthrough, see the LangChain documentation for `ChatAnthropic`.
## LLMs (Legacy)
You can use the Claude 2 models for text completions.
```python
from langchain_anthropic import AnthropicLLM

model = AnthropicLLM(model="claude-2.1", temperature=0, max_tokens=1024)
response = model.invoke("The best restaurant in San Francisco is: ")
```
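Like the chat models, `AnthropicLLM` is a Runnable, so it can be composed with a prompt template. A minimal sketch, assuming a hypothetical one-variable prompt:

```python
from langchain_anthropic import AnthropicLLM
from langchain_core.prompts import PromptTemplate

model = AnthropicLLM(model="claude-2.1", temperature=0, max_tokens=1024)

# Compose a prompt template with the model using the pipe operator
prompt = PromptTemplate.from_template("The best restaurant in {city} is: ")
chain = prompt | model

# The chain formats the prompt, calls the model, and returns the completion string
response = chain.invoke({"city": "San Francisco"})
```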