docs: update docs for yuan2 in LLMs and Chat models integration. (#19028)

Update the Yuan2.0 notebooks in the LLMs and Chat models integrations.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
wulixuan 2024-03-16 07:03:18 +08:00 committed by GitHub
parent eec023766e
commit f79d0cb9fb
2 changed files with 9 additions and 10 deletions

@@ -4,7 +4,7 @@
 "cell_type": "raw",
 "source": [
 "---\n",
-"sidebar_label: YUAN2\n",
+"sidebar_label: Yuan2.0\n",
 "---"
 ],
 "metadata": {
@@ -22,7 +22,7 @@
 }
 },
 "source": [
-"# YUAN2.0\n",
+"# Yuan2.0\n",
 "\n",
 "This notebook shows how to use [YUAN2 API](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/inference_server.md) in LangChain with the langchain.chat_models.ChatYuan2.\n",
 "\n",
@@ -96,9 +96,9 @@
 },
 "source": [
 "### Setting Up Your API server\n",
-"Setting up your OpenAI compatible API server following [yuan2 openai api server](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/README-EN.md).\n",
-"If you deployed api server locally, you can simply set `api_key=\"EMPTY\"` or anything you want.\n",
-"Just make sure, the `api_base` is set correctly."
+"Setting up your OpenAI compatible API server following [yuan2 openai api server](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/Yuan2_fastchat.md).\n",
+"If you deployed api server locally, you can simply set `yuan2_api_key=\"EMPTY\"` or anything you want.\n",
+"Just make sure, the `yuan2_api_base` is set correctly."
 ]
 },
 {
@@ -187,7 +187,7 @@
 },
 "outputs": [],
 "source": [
-"print(chat(messages))"
+"print(chat.invoke(messages))"
 ]
 },
 {
@@ -247,7 +247,7 @@
 },
 "outputs": [],
 "source": [
-"chat(messages)"
+"chat.invoke(messages)"
 ]
 },
 {

@@ -45,7 +45,7 @@
 "outputs": [],
 "source": [
 "# default infer_api for a local deployed Yuan2.0 inference server\n",
-"infer_api = \"http://127.0.0.1:8000\"\n",
+"infer_api = \"http://127.0.0.1:8000/yuan\"\n",
 "\n",
 "# direct access endpoint in a proxied environment\n",
 "# import os\n",
@@ -56,7 +56,6 @@
 " max_tokens=2048,\n",
 " temp=1.0,\n",
 " top_p=0.9,\n",
-" top_k=40,\n",
 " use_history=False,\n",
 ")\n",
 "\n",
@@ -89,7 +88,7 @@
 },
 "outputs": [],
 "source": [
-"print(yuan_llm(question))"
+"print(yuan_llm.invoke(question))"
 ]
 }
 ],