langchain/docs/docs/integrations/llms/bedrock.ipynb
Supreet Takkar ae33979813
community[patch]: Allow adding ARNs as model_id to support Amazon Bedrock custom models (#16800)
- **Description:** Adds an additional class variable to `BedrockBase`
called `provider` that allows specifying a model provider such as amazon,
cohere, ai21, etc.
Until now, the model provider has been extracted from the `model_id` as
the first part before the `.`, such as `amazon` for
`amazon.titan-text-express-v1` (see the [supported list of Bedrock model IDs
here](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids-arns.html)).
But for custom Bedrock models, where the ARN of the provisioned
throughput must be supplied, the `model_id` looks like
`arn:aws:bedrock:...`, so the provider cannot be extracted from it. The
LangChain Bedrock class requires a model `provider` to perform
provider-specific processing. Passing this `provider` argument allows the
same processing to be performed for custom models built on a specific
base model type.
The alternative considered was a combined ID of the form
`provider.arn:aws:bedrock:...`, which would then require the ARN to be
extracted and passed separately when invoking the model. The proposed
solution is simpler and does not affect models already using the
Bedrock class.
  - **Issue:** N/A
  - **Dependencies:** N/A

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
2024-02-05 14:28:03 -08:00


{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Bedrock"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that offers a choice of \n",
"> high-performing foundation models (FMs) from leading AI companies like `AI21 Labs`, `Anthropic`, `Cohere`, \n",
"> `Meta`, `Stability AI`, and `Amazon` via a single API, along with a broad set of capabilities you need to \n",
"> build generative AI applications with security, privacy, and responsible AI. Using `Amazon Bedrock`, \n",
"> you can easily experiment with and evaluate top FMs for your use case, privately customize them with \n",
"> your data using techniques such as fine-tuning and `Retrieval Augmented Generation` (`RAG`), and build \n",
"> agents that execute tasks using your enterprise systems and data sources. Since `Amazon Bedrock` is \n",
"> serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy \n",
"> generative AI capabilities into your applications using the AWS services you are already familiar with.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet boto3"
]
},
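{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `credentials_profile_name` argument below refers to a named profile in your `~/.aws/credentials` or `~/.aws/config` file; if it is omitted, boto3's default credential resolution is used."
]
},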
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.llms import Bedrock\n",
"\n",
"llm = Bedrock(\n",
" credentials_profile_name=\"bedrock-admin\", model_id=\"amazon.titan-text-express-v1\"\n",
")"
]
},
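{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick smoke test, you can call the model directly (a minimal sketch; it assumes the profile above has permission to invoke the model):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Single-turn completion; returns the generated text as a string.\n",
"llm.invoke(\"What is the capital of France?\")"
]
},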
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using in a conversation chain"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"\n",
"conversation = ConversationChain(\n",
" llm=llm, verbose=True, memory=ConversationBufferMemory()\n",
")\n",
"\n",
"conversation.predict(input=\"Hi there!\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Conversation Chain With Streaming"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain_community.llms import Bedrock\n",
"\n",
"llm = Bedrock(\n",
" credentials_profile_name=\"bedrock-admin\",\n",
" model_id=\"amazon.titan-text-express-v1\",\n",
" streaming=True,\n",
" callbacks=[StreamingStdOutCallbackHandler()],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"conversation = ConversationChain(\n",
" llm=llm, verbose=True, memory=ConversationBufferMemory()\n",
")\n",
"\n",
"conversation.predict(input=\"Hi there!\")"
]
},
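{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, you can consume tokens as they are generated through the `stream` interface instead of a callback handler (a minimal sketch; it assumes the chosen model supports streaming):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Print each chunk as it arrives rather than waiting for the full response.\n",
"for chunk in llm.stream(\"Write a haiku about rivers.\"):\n",
" print(chunk, end=\"\", flush=True)"
]
},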
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"custom_llm = Bedrock(\n",
" credentials_profile_name=\"bedrock-admin\",\n",
" provider=\"cohere\",\n",
" model_id=\"<Custom model ARN>\", # ARN like 'arn:aws:bedrock:...' obtained via provisioning the custom model\n",
" model_kwargs={\"temperature\": 1},\n",
" streaming=True,\n",
" callbacks=[StreamingStdOutCallbackHandler()],\n",
")\n",
"\n",
"conversation = ConversationChain(\n",
" llm=custom_llm, verbose=True, memory=ConversationBufferMemory()\n",
")\n",
"conversation.predict(input=\"What is the recipe of mayonnaise?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Guardrails for Amazon Bedrock example \n",
"\n",
"## Guardrails for Amazon Bedrock (Preview) \n",
"[Guardrails for Amazon Bedrock](https://aws.amazon.com/bedrock/guardrails/) evaluates user inputs and model responses based on use case specific policies, and provides an additional layer of safeguards regardless of the underlying model. Guardrails can be applied across models, including Anthropic Claude, Meta Llama 2, Cohere Command, AI21 Labs Jurassic, and Amazon Titan Text, as well as fine-tuned models.\n",
"**Note**: Guardrails for Amazon Bedrock is currently in preview and not generally available. Reach out through your usual AWS Support contacts if youd like access to this feature.\n",
"In this section, we are going to set up a Bedrock language model with specific guardrails that include tracing capabilities. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import Any\n",
"\n",
"from langchain_core.callbacks import AsyncCallbackHandler\n",
"\n",
"\n",
"class BedrockAsyncCallbackHandler(AsyncCallbackHandler):\n",
" # Async callback handler that can be used to handle callbacks from langchain.\n",
"\n",
" async def on_llm_error(self, error: BaseException, **kwargs: Any) -> Any:\n",
" reason = kwargs.get(\"reason\")\n",
" if reason == \"GUARDRAIL_INTERVENED\":\n",
" print(f\"Guardrails: {kwargs}\")\n",
"\n",
"\n",
"# Guardrails for Amazon Bedrock with trace\n",
"llm = Bedrock(\n",
" credentials_profile_name=\"bedrock-admin\",\n",
" model_id=\"<Model_ID>\",\n",
" model_kwargs={},\n",
" guardrails={\"id\": \"<Guardrail_ID>\", \"version\": \"<Version>\", \"trace\": True},\n",
" callbacks=[BedrockAsyncCallbackHandler()],\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 4
}