Fix token_usage None issue in ChatOpenAI with local Chatglm2-6B (#14493)
When using a local Chatglm2-6B model by pointing OPENAI_BASE_URL at localhost, token_usage in ChatOpenAI can be None, which raises an AttributeError when token_usage.items() is accessed. This commit adds a check that token_usage is not None before iterating over its items, preventing the AttributeError and allowing ChatOpenAI to work with a local Chatglm2-6B model the same way it does against the OpenAI API.

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
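For context, the aggregation the fix guards can be sketched as a small standalone helper. This is a minimal illustration, not LangChain code: the function name `combine_token_usage` and the sample dicts are made up for the example.

```python
from typing import List, Optional


def combine_token_usage(outputs: List[Optional[dict]]) -> dict:
    """Illustrative helper (not LangChain API): merge per-generation
    token_usage dicts, skipping a token_usage that is None, as a local
    OpenAI-compatible server such as Chatglm2-6B may return."""
    overall_token_usage: dict = {}
    for output in outputs:
        if output is None:
            # Happens in streaming
            continue
        token_usage = output.get("token_usage")
        # The guard this commit adds: only iterate when token_usage is not None.
        if token_usage is not None:
            for k, v in token_usage.items():
                overall_token_usage[k] = overall_token_usage.get(k, 0) + v
    return overall_token_usage


# The second output carries token_usage=None and is skipped instead of
# raising AttributeError on .items().
print(combine_token_usage([
    {"token_usage": {"prompt_tokens": 10, "completion_tokens": 5}},
    {"token_usage": None},
]))
# -> {'prompt_tokens': 10, 'completion_tokens': 5}
```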
This commit is contained in:
parent: 6080c98108
commit: e780433f6b
@@ -367,11 +367,12 @@ class ChatOpenAI(BaseChatModel):
                 # Happens in streaming
                 continue
             token_usage = output["token_usage"]
-            for k, v in token_usage.items():
-                if k in overall_token_usage:
-                    overall_token_usage[k] += v
-                else:
-                    overall_token_usage[k] = v
+            if token_usage is not None:
+                for k, v in token_usage.items():
+                    if k in overall_token_usage:
+                        overall_token_usage[k] += v
+                    else:
+                        overall_token_usage[k] = v
             if system_fingerprint is None:
                 system_fingerprint = output.get("system_fingerprint")
         combined = {"token_usage": overall_token_usage, "model_name": self.model_name}
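A minimal usage sketch of the scenario described in the commit message, assuming a Chatglm2-6B server that exposes an OpenAI-compatible API on localhost; the endpoint URL, API key, and model name below are placeholders, not values from this commit.

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Hypothetical local endpoint: a Chatglm2-6B server speaking the OpenAI API.
chat = ChatOpenAI(
    openai_api_base="http://localhost:8000/v1",
    openai_api_key="EMPTY",
    model_name="chatglm2-6b",
)

# Before this fix, an llm_output whose token_usage was None raised
# AttributeError during token-usage aggregation; with the guard in place,
# generation completes and such outputs simply contribute no usage counts.
result = chat.generate([[HumanMessage(content="Hello!")]])
print(result.llm_output)
```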