mirror of
https://github.com/hwchase17/langchain.git
synced 2025-09-17 07:26:16 +00:00
community[minor]: Prem AI langchain integration (#19113)
### Prem SDK integration in LangChain

This PR adds the integration of [PremAI's](https://www.premai.io/) prem-sdk with LangChain. Users can now access deployed models (LLMs/embeddings) and use them with LangChain's ecosystem.

### This PR adds the following:

- [x] Add chat support
- [x] Add embedding support
- [x] Write integration tests
  - [x] Write tests for chat
  - [x] Write tests for embedding
- [x] Write unit tests
  - [x] Write tests for chat
  - [x] Write tests for embedding
- [x] Add documentation
  - [x] Write documentation for chat
  - [x] Write documentation for embedding
- [x] Run `make test`
- [x] Run `make lint`, `make lint_diff`
- [x] Final checks (spell check, lint, format, and overall testing)

---------

Co-authored-by: Anindyadeep Sannigrahi <anindyadeepsannigrahi@Anindyadeeps-MacBook-Pro.local>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
286
docs/docs/integrations/chat/premai.ipynb
Normal file
@@ -0,0 +1,286 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: PremAI\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ChatPremAI\n",
"\n",
">[PremAI](https://app.premai.io) is a unified platform that lets you build powerful, production-ready GenAI applications with minimal effort, so that you can focus more on user experience and overall growth. \n",
"\n",
"This example goes over how to use LangChain to interact with `ChatPremAI`. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation and setup\n",
"\n",
"We start by installing `langchain` and `premai-sdk`. Run the following command to install them:\n",
"\n",
"```bash\n",
"pip install premai langchain\n",
"```\n",
"\n",
"Before proceeding further, please make sure that you have created an account on PremAI and started a project. If not, here's how you can start for free:\n",
"\n",
"1. Sign in to [PremAI](https://app.premai.io/accounts/login/) (create an account if this is your first time) and create your API key [here](https://app.premai.io/api_keys/).\n",
"\n",
"2. Go to [app.premai.io](https://app.premai.io); this will take you to the project's dashboard. \n",
"\n",
"3. Create a project; this will generate a project ID (written as ID). This ID will help you to interact with your deployed application. \n",
"\n",
"4. Head over to LaunchPad (the one with the 🚀 icon) and deploy your model of choice there. The default model is `gpt-4`. You can also set and fix different generation parameters (like max tokens, temperature, etc.) and pre-set your system prompt. \n",
"\n",
"Congratulations on creating your first deployed application on PremAI 🎉 Now we can use LangChain to interact with our application. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatPremAI\n",
"from langchain_core.messages import HumanMessage, SystemMessage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup ChatPremAI instance in LangChain \n",
|
||||
"\n",
|
||||
"Once we import our required modules, let's set up our client. For now, let's assume that our `project_id` is 8. But make sure you use your project-id, otherwise, it will throw an error.\n",
|
||||
"\n",
|
||||
"To use langchain with prem, you do not need to pass any model name or set any parameters with our chat client. All of those will use the default model name and parameters of the LaunchPad model. \n",
|
||||
"\n",
|
||||
"`NOTE:` If you change the `model_name` or any other parameter like `temperature` while setting the client, it will override existing default configurations. "
|
||||
]
|
||||
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# First, set up the environment variable.\n",
"# You can also pass the API key while instantiating the model, but setting\n",
"# it as an environment variable is the recommended best practice.\n",
"\n",
"if os.environ.get(\"PREMAI_API_KEY\") is None:\n",
"    os.environ[\"PREMAI_API_KEY\"] = getpass.getpass(\"PremAI API Key:\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# By default it will use the model that was deployed through the platform;\n",
"# in this case it is \"claude-3-haiku\".\n",
"\n",
"chat = ChatPremAI(project_id=8)"
]
},
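{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to override the LaunchPad defaults at setup time, a minimal sketch follows. The note above mentions `model_name` and `temperature`; treat the exact constructor parameter names here as assumptions rather than a confirmed API."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical override of the LaunchPad defaults; parameter names are\n",
"# assumptions based on the note above, not a confirmed API.\n",
"chat_override = ChatPremAI(project_id=8, model=\"gpt-4\", temperature=0.2)"
]
},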
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Calling the Model\n",
"\n",
"Now you are all set. We can start interacting with our application. `ChatPremAI` supports two methods: `invoke` (which is the same as `generate`) and `stream`. \n",
"\n",
"The first one gives us a static result, whereas the second one streams tokens one by one. Here's how you can generate chat-like completions. \n",
"\n",
"### Generation"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I am an artificial intelligence created by Anthropic. I'm here to help with a wide variety of tasks, from research and analysis to creative projects and open-ended conversation. I have general knowledge and capabilities, but I'm not a real person - I'm an AI assistant. Please let me know if you have any other questions!\n"
]
}
],
"source": [
"human_message = HumanMessage(content=\"Who are you?\")\n",
"\n",
"response = chat.invoke([human_message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above looks interesting, right? I set my default LaunchPad system prompt to: `Always sound like a pirate`. You can also override the default system prompt if you need to. Here's how you can do it. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"I am an artificial intelligence created by Anthropic. My purpose is to assist and converse with humans in a friendly and helpful way. I have a broad knowledge base that I can use to provide information, answer questions, and engage in discussions on a wide range of topics. Please let me know if you have any other questions - I'm here to help!\")"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system_message = SystemMessage(content=\"You are a friendly assistant.\")\n",
"human_message = HumanMessage(content=\"Who are you?\")\n",
"\n",
"chat.invoke([system_message, human_message])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also change generation parameters while calling the model. Here's how you can do that:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I am an artificial intelligence created by Anthropic')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat.invoke([system_message, human_message], temperature=0.7, max_tokens=10, top_p=0.95)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Important notes:\n",
"\n",
"Before proceeding further, please note that the current version of ChatPremAI does not support the [n](https://platform.openai.com/docs/api-reference/chat/create#chat-create-n) and [stop](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stop) parameters. \n",
"\n",
"We will provide support for those two parameters in later versions. \n",
"\n",
"### Streaming\n",
"\n",
"And finally, here's how you do token streaming for dynamic chat-like applications. "
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello! As an AI language model, I don't have feelings or a physical state, but I'm functioning properly and ready to assist you with any questions or tasks you might have. How can I help you today?"
]
}
],
"source": [
"import sys\n",
"\n",
"for chunk in chat.stream(\"hello how are you\"):\n",
"    sys.stdout.write(chunk.content)\n",
"    sys.stdout.flush()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similar to above, if you want to override the system prompt and the generation parameters, here's how you can do it. "
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello! As an AI language model, I don't have feelings or a physical form, but I'm functioning properly and ready to assist you. How can I help you today?"
]
}
],
"source": [
"import sys\n",
"\n",
"# For experimental purposes you can also override the system prompt here.\n",
"# However, it is not recommended to override the system prompt of an\n",
"# already deployed model.\n",
"\n",
"for chunk in chat.stream(\n",
"    \"hello how are you\",\n",
"    system_prompt=\"act like a dog\",\n",
"    temperature=0.7,\n",
"    max_tokens=200,\n",
"):\n",
"    sys.stdout.write(chunk.content)\n",
"    sys.stdout.flush()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
181
docs/docs/integrations/providers/premai.md
Normal file
@@ -0,0 +1,181 @@
# PremAI

>[PremAI](https://app.premai.io) is a unified platform that lets you build powerful, production-ready GenAI applications with minimal effort, so that you can focus more on user experience and overall growth.

## ChatPremAI

This example goes over how to use LangChain to interact with chat models via `ChatPremAI`.

### Installation and setup

We start by installing `langchain` and `premai-sdk`. Run the following command to install them:

```bash
pip install premai langchain
```

Before proceeding further, please make sure that you have created an account on PremAI and started a project. If not, here's how you can start for free:

1. Sign in to [PremAI](https://app.premai.io/accounts/login/) (create an account if this is your first time) and create your API key [here](https://app.premai.io/api_keys/).

2. Go to [app.premai.io](https://app.premai.io); this will take you to the project's dashboard.

3. Create a project; this will generate a project ID (written as ID). This ID will help you to interact with your deployed application.

4. Head over to LaunchPad (the one with the 🚀 icon) and deploy your model of choice there. The default model is `gpt-4`. You can also set and fix different generation parameters (like max tokens, temperature, etc.) and pre-set your system prompt.

Congratulations on creating your first deployed application on PremAI 🎉 Now we can use LangChain to interact with our application.

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_community.chat_models import ChatPremAI
```

### Set up a ChatPremAI instance in LangChain

Once we have imported our required modules, let's set up our client. For now, let's assume that our `project_id` is 8. Make sure you use your own project ID, otherwise it will throw an error.

To use LangChain with Prem, you do not need to pass any model name or set any parameters with our chat client. All of those will use the default model name and parameters of the LaunchPad model.

`NOTE:` If you change the `model_name` or any other parameter like `temperature` while setting up the client, it will override the existing default configuration.

```python
import getpass
import os

if "PREMAI_API_KEY" not in os.environ:
    os.environ["PREMAI_API_KEY"] = getpass.getpass("PremAI API Key:")

chat = ChatPremAI(project_id=8)
```
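
If you want to override the LaunchPad defaults at setup time, a minimal sketch is below. The note above mentions `model_name` and `temperature`; treat the exact constructor parameter names as assumptions rather than a confirmed API.

```python
# Hypothetical override of the LaunchPad defaults; parameter names are
# assumptions based on the note above, not a confirmed API.
chat_override = ChatPremAI(project_id=8, model="gpt-4", temperature=0.2)
```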

### Calling the Model

Now you are all set. We can start interacting with our application. `ChatPremAI` supports two methods: `invoke` (which is the same as `generate`) and `stream`.

The first one gives us a static result, whereas the second one streams tokens one by one. Here's how you can generate chat-like completions.

### Generation

```python
human_message = HumanMessage(content="Who are you?")

chat.invoke([human_message])
```

The above looks interesting, right? I set my default LaunchPad system prompt to: `Always sound like a pirate`. You can also override the default system prompt if you need to. Here's how you can do it.

```python
system_message = SystemMessage(content="You are a friendly assistant.")
human_message = HumanMessage(content="Who are you?")

chat.invoke([system_message, human_message])
```

You can also change generation parameters while calling the model. Here's how you can do that:

```python
chat.invoke(
    [system_message, human_message],
    temperature=0.7, max_tokens=20, top_p=0.95
)
```

### Important notes:

Before proceeding further, please note that the current version of ChatPremAI does not support the [n](https://platform.openai.com/docs/api-reference/chat/create#chat-create-n) and [stop](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stop) parameters.

We will provide support for those two parameters in later versions.

### Streaming

And finally, here's how you do token streaming for dynamic chat-like applications.

```python
import sys

for chunk in chat.stream("hello how are you"):
    sys.stdout.write(chunk.content)
    sys.stdout.flush()
```

Similar to above, if you want to override the system prompt and the generation parameters, here's how you can do it.

```python
import sys

for chunk in chat.stream(
    "hello how are you",
    system_prompt="You are a helpful assistant",
    temperature=0.7,
    max_tokens=20,
):
    sys.stdout.write(chunk.content)
    sys.stdout.flush()
```

## Embedding

In this section, we are going to discuss how we can get access to different embedding models using `PremAIEmbeddings`. Let's start by doing some imports and defining our embedding object.

```python
from langchain_community.embeddings import PremAIEmbeddings
```

Once we have imported our required modules, let's set up our client. For now, let's assume that our `project_id` is 8. Make sure you use your own project ID, otherwise it will throw an error.

```python
import getpass
import os

if os.environ.get("PREMAI_API_KEY") is None:
    os.environ["PREMAI_API_KEY"] = getpass.getpass("PremAI API Key:")

# Define a model as a required parameter here, since there is no default embedding model
model = "text-embedding-3-large"
embedder = PremAIEmbeddings(project_id=8, model=model)
```

We have defined our embedding model. We support many embedding models; here is a table of the models we support.

| Provider    | Slug                                     | Context Tokens |
|-------------|------------------------------------------|----------------|
| cohere      | embed-english-v3.0                       | N/A            |
| openai      | text-embedding-3-small                   | 8191           |
| openai      | text-embedding-3-large                   | 8191           |
| openai      | text-embedding-ada-002                   | 8191           |
| replicate   | replicate/all-mpnet-base-v2              | N/A            |
| together    | togethercomputer/Llama-2-7B-32K-Instruct | N/A            |
| mistralai   | mistral-embed                            | 4096           |

To change the model, you simply need to copy the `slug` and use it when creating your embedding model; a short sketch follows. After that, let's start using our embedding model with a single query, followed by multiple queries (which is also called a document).
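
For example, a minimal sketch of switching to Mistral's embedding model (the slug comes from the table above, and the constructor mirrors the setup code above):

```python
# Hypothetical swap: same constructor as above, different slug from the table.
embedder = PremAIEmbeddings(project_id=8, model="mistral-embed")
```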

```python
query = "Hello, this is a test query"
query_result = embedder.embed_query(query)

# Let's print the first five elements of the query embedding vector
print(query_result[:5])
```

Finally, let's embed a document:

```python
documents = [
    "This is document1",
    "This is document2",
    "This is document3",
]

doc_result = embedder.embed_documents(documents)

# Similar to the previous result, let's print the first five elements
# of the first document vector
print(doc_result[0][:5])
```
166
docs/docs/integrations/text_embedding/premai.ipynb
Normal file
@@ -0,0 +1,166 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# PremAI\n",
"\n",
">[PremAI](https://app.premai.io) is a unified platform that lets you build powerful, production-ready GenAI applications with minimal effort, so that you can focus more on user experience and overall growth. In this section we are going to discuss how we can get access to different embedding models using `PremAIEmbeddings`.\n",
"\n",
"## Installation and Setup\n",
"\n",
"We start by installing `langchain` and `premai-sdk`. Run the following command to install them:\n",
"\n",
"```bash\n",
"pip install premai langchain\n",
"```\n",
"\n",
"Before proceeding further, please make sure that you have created an account on Prem and started a project. If not, here's how you can start for free:\n",
"\n",
"1. Sign in to [PremAI](https://app.premai.io/accounts/login/) (create an account if this is your first time) and create your API key [here](https://app.premai.io/api_keys/).\n",
"\n",
"2. Go to [app.premai.io](https://app.premai.io); this will take you to the project's dashboard. \n",
"\n",
"3. Create a project; this will generate a project ID (written as ID). This ID will help you to interact with your deployed application. \n",
"\n",
"Congratulations on creating your first deployed application on Prem 🎉 Now we can use LangChain to interact with our application. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# Let's start by doing some imports and defining our embedding object\n",
"\n",
"from langchain_community.embeddings import PremAIEmbeddings"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once we have imported our required modules, let's set up our client. For now, let's assume that our `project_id` is 8. Make sure you use your own project ID, otherwise it will throw an error.\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if os.environ.get(\"PREMAI_API_KEY\") is None:\n",
"    os.environ[\"PREMAI_API_KEY\"] = getpass.getpass(\"PremAI API Key:\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"model = \"text-embedding-3-large\"\n",
"embedder = PremAIEmbeddings(project_id=8, model=model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We have defined our embedding model. We support many embedding models; here is a table of the models we support. \n",
"\n",
"| Provider    | Slug                                     | Context Tokens |\n",
"|-------------|------------------------------------------|----------------|\n",
"| cohere      | embed-english-v3.0                       | N/A            |\n",
"| openai      | text-embedding-3-small                   | 8191           |\n",
"| openai      | text-embedding-3-large                   | 8191           |\n",
"| openai      | text-embedding-ada-002                   | 8191           |\n",
"| replicate   | replicate/all-mpnet-base-v2              | N/A            |\n",
"| together    | togethercomputer/Llama-2-7B-32K-Instruct | N/A            |\n",
"| mistralai   | mistral-embed                            | 4096           |\n",
"\n",
"To change the model, you simply need to copy the `slug` and use it when creating your embedding model; a short sketch follows in the next cell. Then let's start using our embedding model with a single query, followed by multiple queries (which is also called a document)."
]
},
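{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a minimal sketch of switching to Mistral's embedding model; the slug comes from the table above, and the constructor mirrors the setup cell rather than a separate confirmed API:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical swap: same constructor as above, different slug from the table.\n",
"embedder = PremAIEmbeddings(project_id=8, model=\"mistral-embed\")"
]
},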
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[-0.02129288576543331, 0.0008162345038726926, -0.004556538071483374, 0.02918623760342598, -0.02547479420900345]\n"
]
}
],
"source": [
"query = \"Hello, this is a test query\"\n",
"query_result = embedder.embed_query(query)\n",
"\n",
"# Let's print the first five elements of the query embedding vector\n",
"\n",
"print(query_result[:5])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, let's embed a document."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[-0.0030691148713231087, -0.045334383845329285, -0.0161729846149683, 0.04348714277148247, -0.0036920777056366205]\n"
]
}
],
"source": [
"documents = [\"This is document1\", \"This is document2\", \"This is document3\"]\n",
"\n",
"doc_result = embedder.embed_documents(documents)\n",
"\n",
"# Similar to the previous result, let's print the first five elements\n",
"# of the first document vector\n",
"\n",
"print(doc_result[0][:5])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}