Mirror of https://github.com/hwchase17/langchain.git, synced 2026-01-25 14:35:49 +00:00
## Description

The models in DashScope support multiple SystemMessages. Here is the [Doc](https://bailian.console.aliyun.com/model_experience_center/text#/model-market/detail/qwen-long?tabKey=sdk), and the example code from the document page:

```python
import os

from openai import OpenAI

client = OpenAI(
    # If you have not set the environment variable, replace this with your API key.
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    # The DashScope service base_url.
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

# Initialize the messages list and create a streaming completion.
completion = client.chat.completions.create(
    model="qwen-long",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        # Replace 'file-fe-xxx' with the file-id used in your own conversation scenario.
        {"role": "system", "content": "fileid://file-fe-xxx"},
        {"role": "user", "content": "What is this article about?"},
    ],
    stream=True,
    stream_options={"include_usage": True},
)

full_content = ""
for chunk in completion:
    if chunk.choices and chunk.choices[0].delta.content:
        # Concatenate the streamed output.
        full_content += chunk.choices[0].delta.content
        print(chunk.model_dump())

print(full_content)
```

Tip: The example code uses the OpenAI SDK, but the documentation states that the DashScope API also supports this, and I tested it and it works:

```
Is the DashScope SDK invocation method compatible? Yes, the DashScope SDK remains
compatible for model invocation. However, file uploads and file-ID retrieval are
currently only supported via the OpenAI SDK. The file-ID obtained through this
method is also compatible with DashScope for model invocation.
```
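For readers without DashScope credentials, the streaming-accumulation loop above can be sketched offline. This is a minimal sketch that drives the same loop with hand-built fake chunk objects instead of a live API call; the `Delta`/`Choice`/`Chunk` dataclasses below are stand-ins for the OpenAI SDK's response types, not the real classes:

```python
from dataclasses import dataclass
from typing import List, Optional


# Stand-in types mimicking the shape of the OpenAI SDK's streaming chunks
# (hypothetical simplifications, for illustration only).
@dataclass
class Delta:
    content: Optional[str]


@dataclass
class Choice:
    delta: Delta


@dataclass
class Chunk:
    choices: List[Choice]


# Fake stream: two content chunks, then a final usage-only chunk. With
# stream_options={"include_usage": True}, the last chunk carries usage
# statistics and has an empty choices list, which the guard below skips.
stream = [
    Chunk(choices=[Choice(delta=Delta(content="Hello, "))]),
    Chunk(choices=[Choice(delta=Delta(content="world."))]),
    Chunk(choices=[]),
]

full_content = ""
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        # Concatenate the streamed output, exactly as in the example above.
        full_content += chunk.choices[0].delta.content

print(full_content)  # Hello, world.
```

The `chunk.choices and ...` guard matters: without it, the trailing usage-only chunk would raise an `IndexError` on `chunk.choices[0]`.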
# 🦜️🧑‍🤝‍🧑 LangChain Community
## Quick Install

```bash
pip install langchain-community
```
## What is it?
LangChain Community contains third-party integrations that implement the base interfaces defined in LangChain Core, making them ready-to-use in any LangChain application.
For full documentation see the API reference.
## 📕 Releases & Versioning

`langchain-community` is currently on version `0.0.x`.
All changes will be accompanied by a patch version increase.
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the Contributing Guide.