Support "usage_metadata" for LiteLLM streaming calls.
This is a follow-up to
https://github.com/langchain-ai/langchain/pull/30625, which tackled
non-streaming calls.
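A minimal sketch (not from the PR; model name is a placeholder) of where `usage_metadata` should now appear when streaming, assuming usage is surfaced on the streamed chunks the same way it is for non-streaming calls (depending on the provider, it may only arrive on the final chunk):

```python
from langchain_community.chat_models import ChatLiteLLM

llm = ChatLiteLLM(model="gpt-4o-mini")  # placeholder model name

aggregate = None
for chunk in llm.stream("Hello, how are you?"):
    # AIMessageChunk supports addition, so the chunks can be merged.
    aggregate = chunk if aggregate is None else aggregate + chunk

# After this PR, the aggregated chunk should carry token counts.
print(aggregate.usage_metadata)  # e.g. {'input_tokens': ..., 'output_tokens': ..., 'total_tokens': ...}
```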
Support "usage_metadata" for LiteLLM.
**Description:**
Since `ChatLiteLLM` forwards most parameters to
`litellm.completion(...)`, there is no reason to set default values
other than the ones defined by `litellm`.
In the case of the parameter `n`, it also causes an issue when calling a
serverless endpoint on Azure, where it is treated as an extra parameter.
So we need to keep it optional.
We can debate the backward compatibility of this change: in my opinion,
it should not cause big issues, since in my experience calling
`litellm.completion()` without these parameters works fine (see the sketch below).
**Issue:**
- #29679
**Dependencies:** None
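A minimal sketch of the resulting behaviour (model names are placeholders): unset parameters such as `n` are no longer forwarded to `litellm.completion`, so endpoints that reject them can be called.

```python
from langchain_community.chat_models import ChatLiteLLM

# `n` left unset: nothing extra is forwarded, so an Azure serverless
# endpoint that rejects unknown parameters can be called.
llm = ChatLiteLLM(model="azure_ai/my-serverless-deployment")

# `n` forwarded only because it is set explicitly.
llm_multi = ChatLiteLLM(model="gpt-4o-mini", n=2)
```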
- [x] **PR title**: docs: (community) update ChatLiteLLM
- [x] **PR message**:
- **Description:** updated the description of the `model_kwargs` parameter, which
wrongly described `temperature` instead (see the sketch below).
- **Issue:** #29862
- **Dependencies:** N/A
- [x] **Add tests and docs**: N/A
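A minimal sketch (not from the PR; model name and extra parameter are illustrative) of what `model_kwargs` is actually for, as opposed to `temperature`, which is a dedicated field:

```python
from langchain_community.chat_models import ChatLiteLLM

llm = ChatLiteLLM(
    model="gpt-4o-mini",                    # placeholder model name
    temperature=0.2,                        # dedicated field, not part of model_kwargs
    model_kwargs={"user": "example-user"},  # extra keyword arguments forwarded to litellm.completion
)
```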
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
- **PR title**: "community: Remove all other keys in ChatLiteLLM and add
api_key"
- **PR message**: Currently, no `api_key` is passed to LiteLLM, and
LiteLLM only takes one `api_key` parameter. Therefore I removed all the current
`*_api_key` attributes (they are not used) and added an `api_key` attribute on
`ChatLiteLLM` that is passed through to LiteLLM (see the sketch below).
- Should fix issue #27826
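A minimal usage sketch (key value and model name are placeholders): after this change, a single `api_key` on `ChatLiteLLM` is forwarded to `litellm.completion` instead of the removed provider-specific `*_api_key` fields.

```python
from langchain_community.chat_models import ChatLiteLLM

llm = ChatLiteLLM(model="gpt-4o-mini", api_key="sk-...")  # placeholder key
print(llm.invoke("Hello").content)
```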
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
- **Description:** An invalid `tool_choice` is passed to `ChatLiteLLM`'s
`bind_tools` because its parent class's default value is passed through
by `with_structured_output` (see the sketch after the example below).
- **Issue:** #28176
It's a follow-up to https://github.com/langchain-ai/langchain/pull/23765
Now the tools can be bound by calling `bind_tools`:
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_community.chat_models import ChatLiteLLM


class GetWeather(BaseModel):
    '''Get the current weather in a given location'''
    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


class GetPopulation(BaseModel):
    '''Get the current population in a given location'''
    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


prompt = "Which city is hotter today and which is bigger: LA or NY?"
# tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]
tools = [GetWeather, GetPopulation]
llm = ChatLiteLLM(model="claude-3-sonnet-20240229").bind_tools(tools)
ai_msg = llm.invoke(prompt)
print(ai_msg.tool_calls)
```
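Reusing the schemas above, a hedged sketch of the `with_structured_output` path the fix addresses: the parent class's default `tool_choice` is no longer forwarded as an invalid value to `bind_tools`.

```python
structured_llm = ChatLiteLLM(model="claude-3-sonnet-20240229").with_structured_output(GetWeather)
result = structured_llm.invoke("What's the weather in San Francisco, CA?")
print(result)  # expected to be a GetWeather instance
```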
Co-authored-by: Igor Drozdov <idrozdov@gitlab.com>
When `model_kwargs={"tools": tools}` is passed to `ChatLiteLLM`, the tools
are executed, but the tool calls in the response are not recognized correctly.
Let's add `tool_calls` to the `additional_kwargs`.
## ChatAnthropic
I used the following example to verify the output of llm with tools:
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_anthropic import ChatAnthropic


class GetWeather(BaseModel):
    '''Get the current weather in a given location'''
    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


class GetPopulation(BaseModel):
    '''Get the current population in a given location'''
    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm = ChatAnthropic(model="claude-3-sonnet-20240229")
llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
print(ai_msg.tool_calls)
```
I get the following response:
```json
[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_01UfDA89knrhw3vFV9X47neT'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01NrYVRYae7m7z7tBgyPb3Gd'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_01EPFEpDgzL6vV2dTpD9SVP5'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01B5J6tPJXgwwfhQX9BHP2dt'}]
```
## LiteLLM
Based on https://litellm.vercel.app/docs/completion/function_call
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
import litellm


class GetWeather(BaseModel):
    '''Get the current weather in a given location'''
    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


class GetPopulation(BaseModel):
    '''Get the current population in a given location'''
    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


prompt = "Which city is hotter today and which is bigger: LA or NY?"
tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]
response = litellm.completion(model="claude-3-sonnet-20240229", messages=[{"role": "user", "content": prompt}], tools=tools)
print(response.choices[0].message.tool_calls)
```
```python
[ChatCompletionMessageToolCall(function=Function(arguments='{"location": "Los Angeles, CA"}', name='GetWeather'), id='toolu_01HeDWV5vP7BDFfytH5FJsja', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "New York, NY"}', name='GetWeather'), id='toolu_01EiLesUSEr3YK1DaE2jxsQv', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "Los Angeles, CA"}', name='GetPopulation'), id='toolu_01Xz26zvkBDRxEUEWm9pX6xa', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "New York, NY"}', name='GetPopulation'), id='toolu_01SDqKnsLjvUXuBsgAZdEEpp', type='function')]
```
## ChatLiteLLM
When I try the following
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_community.chat_models import ChatLiteLLM


class GetWeather(BaseModel):
    '''Get the current weather in a given location'''
    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


class GetPopulation(BaseModel):
    '''Get the current population in a given location'''
    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


prompt = "Which city is hotter today and which is bigger: LA or NY?"
tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]
llm = ChatLiteLLM(model="claude-3-sonnet-20240229", model_kwargs={"tools": tools})
ai_msg = llm.invoke(prompt)
print(ai_msg)
print(ai_msg.tool_calls)
```
```python
content="Okay, let's find out the current weather and populations for Los Angeles and New York City:" response_metadata={'token_usage': Usage(prompt_tokens=329, completion_tokens=193, total_tokens=522), 'model': 'claude-3-sonnet-20240229', 'finish_reason': 'tool_calls'} id='run-748b7a84-84f4-497e-bba1-320bd4823937-0'
[]
```
---
When I apply the changes in this PR, the output is:
```json
[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_017D2tGjiaiakB1HadsEFZ4e'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01WrDpJfVqLkPejWzonPCbLW'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_016UKyYrVAV9Pz99iZGgGU7V'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01Sgv1imExFX1oiR1Cw88zKy'}]
```
Co-authored-by: Igor Drozdov <idrozdov@gitlab.com>
This PR fixes an issue where unlimited/infinite tokens from the
respective provider could not be used with the LiteLLM provider.
This matters in an agent environment, where token usage can grow
drastically beyond the initially set value, causing unexpected behavior.
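A hedged sketch, assuming the fix makes `max_tokens` optional (model name is a placeholder): leaving it unset defers to the provider's own limit instead of a hard-coded client-side default.

```python
from langchain_community.chat_models import ChatLiteLLM

llm = ChatLiteLLM(model="gpt-4o-mini", max_tokens=None)  # no client-side cap
```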
- Description: Call the `on_llm_new_token` callback before yielding the chunk in
`_stream`/`_astream` for some chat models, so that all chat models behave
consistently.
- Issue: N/A
- Dependencies: N/A
- Description: Add the missing `chunk` parameter to `on_llm_new_token` calls in
`_stream`/`_astream` for some chat models, so that all chat models behave
consistently (a rough sketch of the pattern follows this list).
- Issue: N/A
- Dependencies: N/A
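A rough sketch of the consistent pattern both items describe, not the exact implementation (`_iter_deltas` is a hypothetical stand-in for the provider-specific streaming call): the chunk is built, the callback receives it via `chunk=`, and only then is it yielded.

```python
from langchain_core.messages import AIMessageChunk
from langchain_core.outputs import ChatGenerationChunk


def _stream(self, messages, stop=None, run_manager=None, **kwargs):
    for delta in self._iter_deltas(messages, stop=stop, **kwargs):  # hypothetical helper
        chunk = ChatGenerationChunk(message=AIMessageChunk(content=delta))
        if run_manager:
            # Callback fires before the chunk is yielded and is passed the chunk itself.
            run_manager.on_llm_new_token(delta, chunk=chunk)
        yield chunk
```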
- **Description:** This PR adds `api_base` to `_client_params` in LiteLLM's
chat model to ensure it is included in API calls.
Previously, `api_base` was set on the client but was not included in the
parameters passed to the completion function. This change ensures that
`api_base` is correctly passed to all API calls (see the sketch below).
- **Issue:** #14338
- **Tag maintainer:** @hwchase17 @agola11
- **Twitter handle:** @LMS_David_RS
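A minimal sketch (model name and URL are placeholders): `api_base` set on `ChatLiteLLM` is now included in the parameters forwarded to `litellm.completion`.

```python
from langchain_community.chat_models import ChatLiteLLM

llm = ChatLiteLLM(
    model="ollama/llama3",              # any LiteLLM-routed model (placeholder)
    api_base="http://localhost:11434",  # forwarded via _client_params after this PR
)
print(llm.invoke("Hello").content)
```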