together, standard-tests: specify tool_choice in standard tests (#25548)

Here we allow standard tests to specify a value for `tool_choice` via a
`tool_choice_value` property, which defaults to `None`.
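
For illustration, here is a minimal sketch of how such a property can flow into a tool-calling test. The base class below is a simplified stand-in for `ChatModelIntegrationTests`, and the tool and test body are assumptions for the example, not the suite's actual code:

```python
from typing import Optional

from langchain_core.language_models import BaseChatModel
from langchain_core.tools import tool


@tool
def magic_function(input: int) -> int:
    """Apply a magic function to an input."""
    return input + 2


class ChatModelIntegrationTests:  # simplified stand-in for the real base class
    @property
    def tool_choice_value(self) -> Optional[str]:
        """Value to use for tool choice when used in tests (defaults to None)."""
        return None

    def test_tool_calling(self, model: BaseChatModel) -> None:
        # Subclasses opt in by overriding tool_choice_value; leaving it as
        # None keeps the provider's default tool-choice behavior.
        model_with_tools = model.bind_tools(
            [magic_function], tool_choice=self.tool_choice_value
        )
        result = model_with_tools.invoke("What is magic_function(3)?")
        assert result.tool_calls  # the model should emit at least one tool call
```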

Chat models [available in
Together](https://docs.together.ai/docs/chat-models) have issues passing
standard tool calling tests:
- llama 3.1 models currently [appear to rely on user-side
parsing](https://docs.together.ai/docs/llama-3-function-calling) in
Together;
- Mixtral-8x7B and Mistral-7B (the models currently tested) consistently
fail to call tools in some tests.

Specifying `tool_choice` also lets us remove an existing `xfail` and use a
smaller model in Groq tests.
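
For example, a Groq standard-test class can now opt in along these lines (a hedged sketch: the model name and the `"any"` value are assumptions for illustration, not taken from this commit):

```python
from typing import Optional, Type

from langchain_core.language_models import BaseChatModel
from langchain_groq import ChatGroq
from langchain_standard_tests.integration_tests import (  # type: ignore[import-not-found]
    ChatModelIntegrationTests,
)


class TestGroqStandard(ChatModelIntegrationTests):
    @property
    def chat_model_class(self) -> Type[BaseChatModel]:
        return ChatGroq

    @property
    def chat_model_params(self) -> dict:
        # Illustrative smaller model; the commit does not name one here.
        return {"model": "llama-3.1-8b-instant", "temperature": 0}

    @property
    def tool_choice_value(self) -> Optional[str]:
        """Value to use for tool choice when used in tests."""
        return "any"  # assumption: forces the model to call some tool
```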
ccurme committed on 2024-08-19 16:37:36 -04:00 (committed by GitHub)
parent 015ab91b83, commit c5bf114c0f
6 changed files with 83 additions and 10 deletions


@@ -1,6 +1,6 @@
 """Standard LangChain interface tests"""
 
-from typing import Type
+from typing import Optional, Type
 
 from langchain_core.language_models import BaseChatModel
 from langchain_standard_tests.integration_tests import (  # type: ignore[import-not-found]
@@ -18,3 +18,8 @@ class TestMistralStandard(ChatModelIntegrationTests):
     @property
     def chat_model_params(self) -> dict:
         return {"model": "mistral-large-latest", "temperature": 0}
+
+    @property
+    def tool_choice_value(self) -> Optional[str]:
+        """Value to use for tool choice when used in tests."""
+        return "any"