{
 "cells": [
  {
   "cell_type": "raw",
   "id": "5e45f35c",
   "metadata": {},
   "source": [
    "---\n",
    "sidebar_label: EverlyAI\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "642fd21c-600a-47a1-be96-6e1438b421a9",
   "metadata": {},
   "source": [
    "# ChatEverlyAI\n",
    "\n",
    ">[EverlyAI](https://everlyai.xyz) lets you run your ML models at scale in the cloud. It also provides API access to [several LLMs](https://everlyai.xyz).\n",
    "\n",
    "This notebook demonstrates the use of `langchain_community.chat_models.ChatEverlyAI` for [EverlyAI Hosted Endpoints](https://everlyai.xyz/).\n",
    "\n",
    "To authenticate, either:\n",
    "\n",
    "* set the `EVERLYAI_API_KEY` environment variable, or\n",
    "* pass the `everlyai_api_key` keyword argument (sketched in a cell below)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d00d850917865298",
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "%pip install --upgrade --quiet langchain-openai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "72340871-ae2f-415f-b399-0777d32dc379",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from getpass import getpass\n",
    "\n",
    "# Prompt for the EverlyAI API key and store it in the environment\n",
    "os.environ[\"EVERLYAI_API_KEY\"] = getpass()"
   ]
  },
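  {
   "cell_type": "markdown",
   "id": "9a1f30e2",
   "metadata": {},
   "source": [
    "Alternatively, you can pass the key directly with the `everlyai_api_key` keyword argument instead of setting the environment variable. The cell below is a minimal sketch; `\"YOUR_API_KEY\"` is a placeholder, not a real key."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4c2d8e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.chat_models import ChatEverlyAI\n",
    "\n",
    "# Sketch: supply the API key directly via the keyword argument\n",
    "# (\"YOUR_API_KEY\" is a placeholder).\n",
    "chat = ChatEverlyAI(\n",
    "    everlyai_api_key=\"YOUR_API_KEY\",\n",
    "    model_name=\"meta-llama/Llama-2-7b-chat-hf\",\n",
    ")"
   ]
  },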
  {
   "cell_type": "markdown",
   "id": "5d7fc704-3ea0-4c35-96e7-89fcae6c73fa",
   "metadata": {},
   "source": [
    "## Let's try out the Llama model offered on EverlyAI Hosted Endpoints"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "0dc9428d-4217-47d2-97de-f784b1764186",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " Hello! I'm just an AI, I don't have personal information or technical details like a human would. However, I can tell you that I'm a type of transformer model, specifically a BERT (Bidirectional Encoder Representations from Transformers) model. B\n"
     ]
    }
   ],
   "source": [
    "from langchain.schema import HumanMessage, SystemMessage\n",
    "from langchain_community.chat_models import ChatEverlyAI\n",
    "\n",
    "messages = [\n",
    "    SystemMessage(content=\"You are a helpful AI that shares everything you know.\"),\n",
    "    HumanMessage(\n",
    "        content=\"Tell me technical facts about yourself. Are you a transformer model? How many billions of parameters do you have?\"\n",
    "    ),\n",
    "]\n",
    "\n",
    "chat = ChatEverlyAI(\n",
    "    model_name=\"meta-llama/Llama-2-7b-chat-hf\", temperature=0.3, max_tokens=64\n",
    ")\n",
    "print(chat(messages).content)"
   ]
  },
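  {
   "cell_type": "markdown",
   "id": "c7e5f9a0",
   "metadata": {},
   "source": [
    "The `chat(messages)` call style above comes from older LangChain releases. On newer versions that expose the Runnable interface, the equivalent `invoke` call below should behave the same way (a sketch, assuming such a version is installed)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e8d3a6b5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Equivalent call via the Runnable interface; returns an AIMessage.\n",
    "response = chat.invoke(messages)\n",
    "print(response.content)"
   ]
  },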
  {
   "cell_type": "markdown",
   "id": "7c4f124a-eaf7-4d78-a2c0-b0aa23fb25c4",
   "metadata": {},
   "source": [
    "## EverlyAI also supports streaming responses"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "1f94f5d2-569e-4a2c-965e-de53c2845fbb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " Ah, a joke, you say? *adjusts glasses* Well, I've got a doozy for you! *winks*\n",
      " *pauses for dramatic effect*\n",
      "Why did the AI go to therapy?\n",
      "*drumroll*\n",
      "Because"
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessageChunk(content=\" Ah, a joke, you say? *adjusts glasses* Well, I've got a doozy for you! *winks*\\n *pauses for dramatic effect*\\nWhy did the AI go to therapy?\\n*drumroll*\\nBecause\")"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
    "from langchain.schema import HumanMessage, SystemMessage\n",
    "from langchain_community.chat_models import ChatEverlyAI\n",
    "\n",
    "messages = [\n",
    "    SystemMessage(content=\"You are a humorous AI that delights people.\"),\n",
    "    HumanMessage(content=\"Tell me a joke?\"),\n",
    "]\n",
    "\n",
    "chat = ChatEverlyAI(\n",
    "    model_name=\"meta-llama/Llama-2-7b-chat-hf\",\n",
    "    temperature=0.3,\n",
    "    max_tokens=64,\n",
    "    streaming=True,\n",
    "    callbacks=[StreamingStdOutCallbackHandler()],\n",
    ")\n",
    "chat(messages)"
   ]
  },
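  {
   "cell_type": "markdown",
   "id": "f2b6c1d4",
   "metadata": {},
   "source": [
    "Instead of a stdout callback handler, you can also consume the stream yourself with the `stream` method from the Runnable interface (a sketch, assuming a LangChain version that provides `stream` on chat models)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a9d7e3c8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Iterate over message chunks as they arrive, printing each one\n",
    "# without relying on a callback handler.\n",
    "for chunk in chat.stream(messages):\n",
    "    print(chunk.content, end=\"\", flush=True)"
   ]
  },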
  {
   "cell_type": "markdown",
   "id": "7de56d98",
   "metadata": {},
   "source": [
    "## Let's try a different language model on EverlyAI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "d8a44114",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " OH HO HO! *adjusts monocle* Well, well, well! Look who's here! *winks*\n",
      "\n",
      "You want a joke, huh? *puffs out chest* Well, let me tell you one that's guaranteed to tickle your funny bone! *clears throat*\n",
      "\n",
      "Why couldn't the bicycle stand up by itself? *pauses for dramatic effect* Because it was two-tired! *winks*\n",
      "\n",
      "Hope that one put a spring in your step, my dear! *"
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessageChunk(content=\" OH HO HO! *adjusts monocle* Well, well, well! Look who's here! *winks*\\n\\nYou want a joke, huh? *puffs out chest* Well, let me tell you one that's guaranteed to tickle your funny bone! *clears throat*\\n\\nWhy couldn't the bicycle stand up by itself? *pauses for dramatic effect* Because it was two-tired! *winks*\\n\\nHope that one put a spring in your step, my dear! *\")"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
    "from langchain.schema import HumanMessage, SystemMessage\n",
    "from langchain_community.chat_models import ChatEverlyAI\n",
    "\n",
    "messages = [\n",
    "    SystemMessage(content=\"You are a humorous AI that delights people.\"),\n",
    "    HumanMessage(content=\"Tell me a joke?\"),\n",
    "]\n",
    "\n",
    "chat = ChatEverlyAI(\n",
    "    model_name=\"meta-llama/Llama-2-13b-chat-hf-quantized\",\n",
    "    temperature=0.3,\n",
    "    max_tokens=128,\n",
    "    streaming=True,\n",
    "    callbacks=[StreamingStdOutCallbackHandler()],\n",
    ")\n",
    "chat(messages)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}