@@ -2,7 +2,7 @@
  "cells": [
   {
    "cell_type": "raw",
-   "id": "641f8cb0",
+   "id": "afaf8039",
    "metadata": {},
    "source": [
     "---\n",
@@ -12,20 +12,89 @@
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "38f26d7a",
+   "id": "e49f1e0d",
    "metadata": {},
    "source": [
     "# AzureChatOpenAI\n",
     "\n",
-    ">[Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview) provides REST API access to OpenAI's powerful language models including the GPT-4, GPT-3.5-Turbo, and Embeddings model series. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or a web-based interface in the Azure OpenAI Studio.\n",
+    "This guide will help you get started with AzureOpenAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all AzureChatOpenAI features and configurations, head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html).\n",
     "\n",
-    "This notebook goes over how to connect to an Azure-hosted OpenAI endpoint. First, we need to install the `langchain-openai` package."
+    "Azure OpenAI has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Azure docs](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).\n",
+    "\n",
+    ":::info Azure OpenAI vs OpenAI\n",
+    "\n",
+    "Azure OpenAI refers to OpenAI models hosted on the [Microsoft Azure platform](https://azure.microsoft.com/en-us/products/ai-services/openai-service). OpenAI also provides its own model APIs. To access OpenAI services directly, use the [ChatOpenAI integration](/docs/integrations/chat/openai/).\n",
+    "\n",
+    ":::\n",
+    "\n",
+    "## Overview\n",
+    "### Integration details\n",
+    "\n",
+    "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/azure) | Package downloads | Package latest |\n",
+    "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
+    "| [AzureChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html) | [langchain-openai](https://api.python.langchain.com/en/latest/openai_api_reference.html) | ❌ | beta | ✅ |  |  |\n",
+    "\n",
+    "### Model features\n",
+    "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
+    "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
+    "| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |\n",
+    "\n",
+    "## Setup\n",
+    "\n",
+    "To access AzureOpenAI models, you'll need to create an Azure account, create a deployment of an Azure OpenAI model, get the name and endpoint for your deployment, get an Azure OpenAI API key, and install the `langchain-openai` integration package.\n",
+    "\n",
+    "### Credentials\n",
+    "\n",
+    "Head to the [Azure docs](https://learn.microsoft.com/en-us/azure/ai-services/openai/chatgpt-quickstart?tabs=command-line%2Cpython-new&pivots=programming-language-python) to create your deployment and generate an API key. Once you've done this, set the `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT` environment variables:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "d83ba7de",
+   "id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
    "metadata": {},
    "outputs": [],
    "source": [
+    "import getpass\n",
+    "import os\n",
+    "\n",
+    "os.environ[\"AZURE_OPENAI_API_KEY\"] = getpass.getpass(\"Enter your AzureOpenAI API key: \")\n",
+    "os.environ[\"AZURE_OPENAI_ENDPOINT\"] = \"https://YOUR-ENDPOINT.openai.azure.com/\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "72ee0c4b-9764-423a-9dbf-95129e185210",
+   "metadata": {},
+   "source": [
+    "If you want to get automated tracing of your model calls, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
+    "# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "0730d6a1-c893-4840-9817-5e5251676d5d",
+   "metadata": {},
+   "source": [
+    "### Installation\n",
+    "\n",
+    "The LangChain AzureOpenAI integration lives in the `langchain-openai` package:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "652d6238-1f87-422a-b135-f5abbb8652fc",
+   "metadata": {},
+   "outputs": [],
+   "source": [
@@ -34,65 +103,56 @@
   },
   {
    "cell_type": "markdown",
-   "id": "e39133c8",
-   "metadata": {
-    "vscode": {
-     "languageId": "raw"
-    }
-   },
+   "id": "a38cde65-254d-4219-a441-068766c0d4b5",
+   "metadata": {},
    "source": [
-    "Next, let's set some environment variables to help us connect to the Azure OpenAI service. You can find these values in the Azure portal."
+    "## Instantiation\n",
+    "\n",
+    "Now we can instantiate our model object and generate chat completions.\n",
+    "- Replace `azure_deployment` with the name of your deployment.\n",
+    "- You can find the latest supported `api_version` here: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": null,
-   "id": "1d8d73bd",
+   "execution_count": 1,
+   "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
    "metadata": {},
    "outputs": [],
    "source": [
-    "import os\n",
     "from langchain_openai import AzureChatOpenAI\n",
     "\n",
-    "os.environ[\"AZURE_OPENAI_API_KEY\"] = \"...\"\n",
-    "os.environ[\"AZURE_OPENAI_ENDPOINT\"] = \"https://<your-endpoint>.openai.azure.com/\"\n",
-    "os.environ[\"AZURE_OPENAI_API_VERSION\"] = \"2023-06-01-preview\"\n",
-    "os.environ[\"AZURE_OPENAI_CHAT_DEPLOYMENT_NAME\"] = \"chat\""
+    "llm = AzureChatOpenAI(\n",
+    "    azure_deployment=\"YOUR-DEPLOYMENT\",\n",
+    "    api_version=\"2024-05-01-preview\",\n",
+    "    temperature=0,\n",
+    "    max_tokens=None,\n",
+    "    timeout=None,\n",
+    "    max_retries=2,\n",
+    "    # other params...\n",
+    ")"
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "e7b160f8",
+   "id": "2b4f3e15",
    "metadata": {},
    "source": [
-    "Next, let's construct our model and chat with it:"
+    "## Invocation"
    ]
   },
-  {
-   "cell_type": "code",
-   "execution_count": 3,
-   "id": "cbe4bb58-ba13-4355-8af9-cd990dc47a64",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "from langchain_core.messages import HumanMessage\n",
-    "from langchain_openai import AzureChatOpenAI\n",
-    "\n",
-    "model = AzureChatOpenAI(\n",
-    "    openai_api_version=os.environ[\"AZURE_OPENAI_API_VERSION\"],\n",
-    "    azure_deployment=os.environ[\"AZURE_OPENAI_CHAT_DEPLOYMENT_NAME\"],\n",
-    ")"
-   ]
-  },
   {
    "cell_type": "code",
    "execution_count": 4,
-   "id": "99509140",
-   "metadata": {},
+   "id": "62e0dbc3",
+   "metadata": {
+    "tags": []
+   },
    "outputs": [
     {
      "data": {
       "text/plain": [
-       "AIMessage(content=\"J'adore programmer.\", response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 19, 'total_tokens': 25}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-25ed88db-38f2-4b0c-a943-a03f217711a9-0')"
+       "AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 31, 'total_tokens': 39}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-a6a732c2-cb02-4e50-9a9c-ab30eab034fc-0', usage_metadata={'input_tokens': 31, 'output_tokens': 8, 'total_tokens': 39})"
       ]
     },
     "execution_count": 4,
@@ -101,95 +161,165 @@
     }
    ],
    "source": [
-    "message = HumanMessage(\n",
-    "    content=\"Translate this sentence from English to French. I love programming.\"\n",
-    ")\n",
-    "model.invoke([message])"
+    "messages = [\n",
+    "    (\n",
+    "        \"system\",\n",
+    "        \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
+    "    ),\n",
+    "    (\"human\", \"I love programming.\"),\n",
+    "]\n",
+    "ai_msg = llm.invoke(messages)\n",
+    "ai_msg"
    ]
   },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "J'adore la programmation.\n"
+     ]
+    }
+   ],
+   "source": [
+    "print(ai_msg.content)"
+   ]
+  },
   {
    "cell_type": "markdown",
-   "id": "f27fa24d",
+   "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
    "metadata": {},
    "source": [
-    "## Model Version\n",
-    "Azure OpenAI responses contain `model` property, which is name of the model used to generate the response. However unlike native OpenAI responses, it does not contain the version of the model, which is set on the deployment in Azure. This makes it tricky to know which version of the model was used to generate the response, which as result can lead to e.g. wrong total cost calculation with `OpenAICallbackHandler`.\n",
+    "## Chaining\n",
+    "\n",
+    "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
    ]
   },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 26, 'total_tokens': 32}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-084967d7-06f2-441f-b5c1-477e2a9e9d03-0', usage_metadata={'input_tokens': 26, 'output_tokens': 6, 'total_tokens': 32})"
+      ]
+     },
+     "execution_count": 12,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "from langchain_core.prompts import ChatPromptTemplate\n",
+    "\n",
+    "prompt = ChatPromptTemplate.from_messages(\n",
+    "    [\n",
+    "        (\n",
+    "            \"system\",\n",
+    "            \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
+    "        ),\n",
+    "        (\"human\", \"{input}\"),\n",
+    "    ]\n",
+    ")\n",
+    "\n",
+    "chain = prompt | llm\n",
+    "chain.invoke(\n",
+    "    {\n",
+    "        \"input_language\": \"English\",\n",
+    "        \"output_language\": \"German\",\n",
+    "        \"input\": \"I love programming.\",\n",
+    "    }\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
+   "metadata": {},
+   "source": [
+    "## Specifying model version\n",
+    "\n",
+    "Azure OpenAI responses contain a `model_name` response metadata property, which is the name of the model used to generate the response. However, unlike native OpenAI responses, it does not contain the specific version of the model, which is set on the deployment in Azure. E.g., it does not distinguish between `gpt-35-turbo-0125` and `gpt-35-turbo-0301`. This makes it tricky to know which version of the model was used to generate the response, which in turn can lead to, e.g., an incorrect total cost calculation with `OpenAICallbackHandler`.\n",
+    "\n",
+    "To solve this problem, you can pass the `model_version` parameter to the `AzureChatOpenAI` class; it will be appended to the model name in the LLM output, letting you easily distinguish between different versions of the model."
+   ]
+  },
   {
    "cell_type": "code",
-   "execution_count": 5,
-   "id": "0531798a",
+   "execution_count": null,
+   "id": "04b36e75-e8b7-4721-899e-76301ac2ecd9",
    "metadata": {},
    "outputs": [],
    "source": [
-    "from langchain_community.callbacks import get_openai_callback"
+    "%pip install -qU langchain-community"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 7,
-   "id": "aceddb72",
-   "metadata": {
-    "scrolled": true
-   },
+   "execution_count": 5,
+   "id": "84c411b0-1790-4798-8bb7-47d8ece4c2dc",
+   "metadata": {},
    "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
-     "Total Cost (USD): $0.000041\n"
+     "Total Cost (USD): $0.000063\n"
     ]
    }
   ],
    "source": [
-    "model = AzureChatOpenAI(\n",
-    "    openai_api_version=os.environ[\"AZURE_OPENAI_API_VERSION\"],\n",
-    "    azure_deployment=os.environ[\n",
-    "        \"AZURE_OPENAI_CHAT_DEPLOYMENT_NAME\"\n",
-    "    ],  # in Azure, this deployment has version 0613 - input and output tokens are counted separately\n",
-    ")\n",
+    "from langchain_community.callbacks import get_openai_callback\n",
     "\n",
     "with get_openai_callback() as cb:\n",
-    "    model.invoke([message])\n",
+    "    llm.invoke(messages)\n",
     "    print(\n",
     "        f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\"\n",
     "    )  # without specifying the model version, flat-rate 0.002 USD per 1k input and output tokens is used"
    ]
   },
   {
    "cell_type": "markdown",
    "id": "2e61eefd",
    "metadata": {},
    "source": [
     "We can provide the model version to the `AzureChatOpenAI` constructor. It will get appended to the model name returned by Azure OpenAI, and the cost will be computed correctly."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 11,
-   "id": "8d5e54e9",
+   "execution_count": 6,
+   "id": "21234693-d92b-4d69-8a7f-55aa062084bf",
    "metadata": {},
    "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
-     "Total Cost (USD): $0.000044\n"
+     "Total Cost (USD): $0.000078\n"
     ]
    }
   ],
    "source": [
-    "model0301 = AzureChatOpenAI(\n",
-    "    openai_api_version=os.environ[\"AZURE_OPENAI_API_VERSION\"],\n",
-    "    azure_deployment=os.environ[\"AZURE_OPENAI_CHAT_DEPLOYMENT_NAME\"],\n",
+    "llm_0301 = AzureChatOpenAI(\n",
+    "    azure_deployment=\"YOUR-DEPLOYMENT\",\n",
+    "    api_version=\"2024-05-01-preview\",\n",
     "    model_version=\"0301\",\n",
     ")\n",
     "with get_openai_callback() as cb:\n",
-    "    model0301.invoke([message])\n",
+    "    llm_0301.invoke(messages)\n",
     "    print(f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\")"
    ]
-  }
+  },
+  {
+   "cell_type": "markdown",
+   "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
+   "metadata": {},
+   "source": [
+    "## API reference\n",
+    "\n",
+    "For detailed documentation of all AzureChatOpenAI features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html"
+   ]
+  }
  ],
  "metadata": {
@@ -208,7 +338,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.11.4"
+   "version": "3.11.9"
   }
  },
  "nbformat": 4,