langchain-openai: Support token counting for o-series models in ChatOpenAI (#30542)
Related to #30344.

Add support for token counting for o-series models in `test_token_counts.py`.

* **Update `_CHAT_MODELS`** - Add "o1", "o3", and "gpt-4o" to the `_CHAT_MODELS` list.
* **Update token counts** - Add expected token counts for "o1" (12), "o3" (12), and "gpt-4o" (11) to `_EXPECTED_NUM_TOKENS`.

---

For more details, open the [Copilot Workspace session](https://copilot-workspace.githubnext.com/langchain-ai/langchain/pull/30542?shareId=ab208bf7-80a3-4b8d-80c4-2287486fedae).
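For context, a minimal sketch of how this kind of token counting is invoked through `ChatOpenAI`; the model name `"o1"` stands in for any o-series model and the sample text is a placeholder, so this is not the PR's test code:

```python
# Minimal sketch (not the PR's test code): counting tokens for an o-series
# model with ChatOpenAI. Assumes langchain-openai is installed; get_num_tokens
# tokenizes locally with tiktoken, so no API request is made, though the
# constructor may still expect an OPENAI_API_KEY to be configured.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="o1")  # "o1" is a placeholder for any o-series model
num_tokens = llm.get_num_tokens("Hello, world!")  # sample text; count will vary
print(num_tokens)
```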
Parent: d075ad21a0
Commit: e7883d5b9f
Diff in `test_token_counts.py`:

```diff
@@ -10,10 +10,13 @@ _EXPECTED_NUM_TOKENS = {
     "gpt-4": 12,
     "gpt-4-32k": 12,
     "gpt-3.5-turbo": 12,
+    "o1": 12,
+    "o3": 12,
+    "gpt-4o": 11,
 }
 
 _MODELS = models = ["ada", "babbage", "curie", "davinci"]
-_CHAT_MODELS = ["gpt-4", "gpt-4-32k", "gpt-3.5-turbo"]
+_CHAT_MODELS = ["gpt-4", "gpt-4-32k", "gpt-3.5-turbo", "o1", "o3", "gpt-4o"]
 
 
 @pytest.mark.xfail(reason="Old models require different tiktoken cached file")
```
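A minimal sketch of how fixtures like these are typically consumed by a parametrized test; the test name, the inline copies of the fixtures, and the probe string below are placeholders, not the actual contents of `test_token_counts.py`:

```python
import pytest
from langchain_openai import ChatOpenAI

# Placeholder copies of the fixtures shown in the diff above.
_EXPECTED_NUM_TOKENS = {
    "gpt-4": 12,
    "gpt-4-32k": 12,
    "gpt-3.5-turbo": 12,
    "o1": 12,
    "o3": 12,
    "gpt-4o": 11,
}
_CHAT_MODELS = ["gpt-4", "gpt-4-32k", "gpt-3.5-turbo", "o1", "o3", "gpt-4o"]


@pytest.mark.parametrize("model", _CHAT_MODELS)
def test_chat_model_token_counts(model: str) -> None:
    # get_num_tokens tokenizes locally via tiktoken, so no API call is made.
    llm = ChatOpenAI(model=model)
    # "<fixed probe text>" is a placeholder; the real test uses a fixed string
    # whose tokenization produces the counts in _EXPECTED_NUM_TOKENS.
    assert llm.get_num_tokens("<fixed probe text>") == _EXPECTED_NUM_TOKENS[model]
```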