standard-tests[patch]: require model_name in response_metadata if returns_usage_metadata (#30497)

We are implementing a token-counting callback handler in
`langchain-core` that is intended to work with all chat models
supporting usage metadata. The callback will aggregate usage metadata by
model. This requires each response to include the model name in its
metadata.
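The aggregation pattern the planned callback is meant to support can be sketched as follows. This is a simplified illustration using plain dicts shaped like LangChain's `UsageMetadata` keys, not the actual callback handler API:

```python
from collections import defaultdict

# Running per-model totals; keyed by the model name from response_metadata.
totals = defaultdict(
    lambda: {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
)

def record(response_metadata, usage_metadata):
    # Aggregation keys on the model name, which is why the standard tests
    # now require "model_name" to be present in response_metadata.
    model = response_metadata["model_name"]
    for key, value in usage_metadata.items():
        totals[model][key] += value

record(
    {"model_name": "model-a"},
    {"input_tokens": 5, "output_tokens": 2, "total_tokens": 7},
)
record(
    {"model_name": "model-a"},
    {"input_tokens": 3, "output_tokens": 1, "total_tokens": 4},
)
print(totals["model-a"]["total_tokens"])  # → 11
```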

To support this, if a model `returns_usage_metadata`, we check that it
includes a string model name under the `"model_name"` key of its
`response_metadata`.
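A minimal sketch of the kind of check this adds. Plain dicts stand in for the `usage_metadata` and `response_metadata` attributes of LangChain's `AIMessage`; this mirrors the intent of the test, not its exact code:

```python
def check_model_name(usage_metadata, response_metadata):
    # Only models that report usage metadata are held to the requirement.
    if usage_metadata is not None:
        model_name = response_metadata.get("model_name")
        assert isinstance(model_name, str) and model_name, (
            "models returning usage_metadata must set a string "
            '"model_name" in response_metadata'
        )

# A response that satisfies the check:
check_model_name(
    {"input_tokens": 2, "output_tokens": 1, "total_tokens": 3},
    {"model_name": "test-model-001"},
)

# A response with no usage metadata is exempt:
check_model_name(None, {})
```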

More context: https://github.com/langchain-ai/langchain/pull/30487
Author: ccurme
Date: 2025-03-26 12:20:53 -04:00 (committed via GitHub)
Commit: 22d1a7d7b6 (parent 20f82502e5)
9 changed files with 75 additions and 12 deletions

@@ -412,6 +412,9 @@ class ChatModelUnitTests(ChatModelTests):
     def returns_usage_metadata(self) -> bool:
         return False

+    Models supporting ``usage_metadata`` should also return the name of the
+    underlying model in the ``response_metadata`` of the AIMessage.
.. dropdown:: supports_anthropic_inputs
Boolean property indicating whether the chat model supports Anthropic-style