Mirror of https://github.com/hwchase17/langchain.git, synced 2026-04-04 19:35:08 +00:00
OpenAI recently added a `stream_options` parameter to its chat completions API (see [release notes](https://platform.openai.com/docs/changelog/added-chat-completions-stream-usage)). When this parameter is set to `{"include_usage": True}`, an extra "empty" message containing token usage is appended to the end of a stream. Here we propagate that token usage to `AIMessage.usage_metadata`.

We enable this feature by default. Streams now include an extra chunk at the end, **after** the chunk with `response_metadata={'finish_reason': 'stop'}`.

New behavior:
```
[AIMessageChunk(content='', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='Hello', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='!', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde', usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17})]
```

Old behavior (accessible by passing `stream_options={"include_usage": False}` into `(a)stream`):
```
[AIMessageChunk(content='', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='Hello', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='!', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-1312b971-c5ea-4d92-9015-e6604535f339')]
```

From what I can tell this is not yet implemented in Azure, so we enable it only for `ChatOpenAI`.
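The aggregation a consumer of this stream might do can be sketched as follows. This is an illustrative stand-in only: plain dicts play the role of `AIMessageChunk` objects, and `aggregate_stream` is a hypothetical helper, not part of langchain-openai.

```python
# Illustrative sketch: fold a streamed chunk sequence -- including the
# trailing usage chunk -- into the full text plus usage totals.
# Plain dicts stand in for AIMessageChunk objects.

def aggregate_stream(chunks):
    """Concatenate chunk content and pick up usage_metadata if present."""
    text = ""
    usage = None
    for chunk in chunks:
        text += chunk.get("content", "")
        if "usage_metadata" in chunk:
            usage = chunk["usage_metadata"]
    return text, usage

# Chunk shapes mirror the "new behavior" example above.
stream = [
    {"content": ""},
    {"content": "Hello"},
    {"content": "!"},
    {"content": "", "response_metadata": {"finish_reason": "stop"}},
    {"content": "", "usage_metadata": {"input_tokens": 8, "output_tokens": 9, "total_tokens": 17}},
]

text, usage = aggregate_stream(stream)
# text == "Hello!"; usage carries the token counts from the final chunk
```

Because the usage chunk has empty `content`, code that only concatenates content is unaffected by the extra chunk.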
# langchain-openai

This package contains the LangChain integrations for OpenAI through their `openai` SDK.
## Installation and Setup

- Install the LangChain partner package:

```bash
pip install langchain-openai
```

- Get an OpenAI API key and set it as an environment variable (`OPENAI_API_KEY`).
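One way to set the variable from a Python session (illustrative; the placeholder key is not real, and `getpass.getpass()` is a common alternative to hard-coding it):

```python
import os

# Set the API key for the current process if it isn't already configured.
# Replace the placeholder with your actual key.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")
```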
## LLM

See a usage example.

```python
from langchain_openai import OpenAI
```

If you are using a model hosted on Azure, you should use a different wrapper for that:

```python
from langchain_openai import AzureOpenAI
```

For a more detailed walkthrough of the Azure wrapper, see here.
## Chat model

See a usage example.

```python
from langchain_openai import ChatOpenAI
```

If you are using a model hosted on Azure, you should use a different wrapper for that:

```python
from langchain_openai import AzureChatOpenAI
```

For a more detailed walkthrough of the Azure wrapper, see here.
## Text Embedding Model

See a usage example.

```python
from langchain_openai import OpenAIEmbeddings
```

If you are using a model hosted on Azure, you should use a different wrapper for that:

```python
from langchain_openai import AzureOpenAIEmbeddings
```

For a more detailed walkthrough of the Azure wrapper, see here.