standard-tests[patch]: require model_name in response_metadata if returns_usage_metadata (#30497)

We are implementing a token-counting callback handler in
`langchain-core` that is intended to work with all chat models
that support usage metadata. The callback will aggregate usage
metadata by model, which requires each response to include the
model name in its metadata.
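The aggregation described above can be sketched as follows. This is a hypothetical illustration of grouping token counts by `model_name`, not the actual callback implementation in `langchain-core`:

```python
from collections import defaultdict


def aggregate_usage(responses: list[dict]) -> dict:
    """Sum input/output token counts per model, keyed by the
    "model_name" entry in each response's response_metadata."""
    totals = defaultdict(lambda: {"input_tokens": 0, "output_tokens": 0})
    for response in responses:
        # Requires a string model name in response_metadata (hypothetical shape)
        model = response["response_metadata"]["model_name"]
        usage = response["usage_metadata"]
        totals[model]["input_tokens"] += usage["input_tokens"]
        totals[model]["output_tokens"] += usage["output_tokens"]
    return dict(totals)
```

Without a `"model_name"` key, usage from different models cannot be attributed, which is why the standard test now requires it.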

To support this, if a model `returns_usage_metadata`, we check that it
includes a string model name in its `response_metadata` in the
`"model_name"` key.
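A minimal sketch of the check described above; the actual assertion lives in the `langchain-tests` standard tests and may differ in detail:

```python
from types import SimpleNamespace


def check_model_name(result, returns_usage_metadata: bool) -> None:
    """If the model reports usage metadata, require a string model name
    under the "model_name" key of response_metadata (illustrative only)."""
    if returns_usage_metadata:
        model_name = result.response_metadata.get("model_name")
        assert isinstance(model_name, str), (
            "response_metadata must include a string 'model_name'"
        )


# Example: a response carrying a model name passes the check.
ok = SimpleNamespace(response_metadata={"model_name": "my-model"})
check_model_name(ok, returns_usage_metadata=True)
```

Models that do not return usage metadata are unaffected by the new requirement.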

More context: https://github.com/langchain-ai/langchain/pull/30487
Author: ccurme
Date: 2025-03-26 12:20:53 -04:00
Committed by: GitHub
Parent: 20f82502e5
Commit: 22d1a7d7b6
9 changed files with 75 additions and 12 deletions


@@ -329,6 +329,7 @@ class Chat__ModuleName__(BaseChatModel):
     additional_kwargs={},  # Used to add additional payload to the message
     response_metadata={  # Use for response metadata
         "time_in_seconds": 3,
+        "model_name": self.model_name,
     },
     usage_metadata={
         "input_tokens": ct_input_tokens,
@@ -391,7 +392,10 @@ class Chat__ModuleName__(BaseChatModel):
     # Let's add some other information (e.g., response metadata)
     chunk = ChatGenerationChunk(
-        message=AIMessageChunk(content="", response_metadata={"time_in_sec": 3})
+        message=AIMessageChunk(
+            content="",
+            response_metadata={"time_in_sec": 3, "model_name": self.model_name},
+        )
     )
     if run_manager:
         # This is optional in newer versions of LangChain