community[patch]: Update model client to support vision model in Tong… (#21474)

- **Description:** Tongyi uses different clients for its chat and vision models. This PR selects the proper client based on the model name, so both chat and vision models are supported. See the [Tongyi
documentation](https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-qianwen-vl-plus-api?spm=a2c4g.11186623.0.0.27404c9a7upm11)
for details.

```python
from langchain_core.messages import HumanMessage
from langchain_community.chat_models import ChatTongyi

llm = ChatTongyi(model_name='qwen-vl-max')
image_message = {
    "image": "https://lilianweng.github.io/posts/2023-06-23-agent/agent-overview.png"
}
text_message = {
    "text": "summarize this picture",
}
message = HumanMessage(content=[text_message, image_message])
llm.invoke([message])
```
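For context, the vision client consumes `content` as a list of single-key dicts, as in the snippet above. A minimal helper (hypothetical, not part of this PR) that assembles such a list:

```python
# Hypothetical helper (not part of this PR) that builds the content list
# ChatTongyi's vision client consumes: a {"text": ...} dict followed by
# one {"image": url} dict per image, matching the example above.
def vision_content(text: str, image_urls: list) -> list:
    parts = [{"text": text}]
    parts.extend({"image": url} for url in image_urls)
    return parts


parts = vision_content(
    "summarize this picture",
    ["https://lilianweng.github.io/posts/2023-06-23-agent/agent-overview.png"],
)
```

The resulting list can be passed directly as `HumanMessage(content=parts)`.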

- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** None
Pengcheng Liu
2024-05-22 02:58:27 +08:00
committed by GitHub
parent 98b64f3ae3
commit 4cf523949a
3 changed files with 67 additions and 1 deletions


```diff
@@ -281,7 +281,10 @@ class ChatTongyi(BaseChatModel):
                 "Please install it with `pip install dashscope --upgrade`."
             )
         try:
-            values["client"] = dashscope.Generation
+            if "vl" in values["model_name"]:
+                values["client"] = dashscope.MultiModalConversation
+            else:
+                values["client"] = dashscope.Generation
         except AttributeError:
             raise ValueError(
                 "`dashscope` has no `Generation` attribute, this is likely "
```
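The selection rule itself is a simple substring check on the model name. A standalone sketch of the behavior the patch adds, with client names represented as plain strings instead of `dashscope` attributes:

```python
# Standalone sketch of the patched selection rule: model names containing
# "vl" (vision-language, e.g. qwen-vl-max) route to the multimodal client,
# everything else to the plain text-generation client.
def select_client_name(model_name: str) -> str:
    if "vl" in model_name:
        return "MultiModalConversation"
    return "Generation"


print(select_client_name("qwen-vl-max"))  # MultiModalConversation
print(select_client_name("qwen-turbo"))   # Generation
```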