diff --git a/docs/extras/modules/model_io/models/llms/integrations/azure_openai_example.ipynb b/docs/extras/modules/model_io/models/llms/integrations/azure_openai_example.ipynb
index f11aba76dc0..2f3aa1c2ae4 100644
--- a/docs/extras/modules/model_io/models/llms/integrations/azure_openai_example.ipynb
+++ b/docs/extras/modules/model_io/models/llms/integrations/azure_openai_example.ipynb
@@ -36,6 +36,8 @@
     "## Deployments\n",
     "With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.\n",
     "\n",
+    "_**Note**: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface, and can be accessed via the `AzureChatOpenAI` class. For docs on Azure chat see [Azure Chat OpenAI documentation](/docs/modules/model_io/models/chat/integrations/azure_chat_openai)._\n",
+    "\n",
     "Let's say your deployment name is `text-davinci-002-prod`. In the `openai` Python API, you can specify this deployment with the `engine` parameter. For example:\n",
     "\n",
     "```python\n",
@@ -176,7 +178,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.1"
+   "version": "3.11.3"
   },
   "vscode": {
    "interpreter": {
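
The notebook text in the hunk above explains that, with Azure OpenAI, requests are routed to a named deployment, which the legacy `openai` Python SDK passes in the `engine` field. A minimal sketch of that mapping, assuming the deployment name `text-davinci-002-prod` from the doc (the `build_completion_request` helper is hypothetical, for illustration only, and no API call is made):

```python
# Hypothetical helper illustrating how an Azure OpenAI deployment name
# ends up as the `engine` field of a completion request payload.
def build_completion_request(deployment_name: str, prompt: str) -> dict:
    # Azure routes the request by deployment, so the deployment name
    # (not a model name) goes in `engine` in the legacy openai SDK.
    return {"engine": deployment_name, "prompt": prompt}

req = build_completion_request("text-davinci-002-prod", "Tell me a joke")
```

This is only the request shape; actually sending it requires Azure credentials and endpoint configuration, which the notebook covers separately.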