{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# IPEX-LLM: Local BGE Embeddings on Intel GPU\n",
|
|
"\n",
|
|
"> [IPEX-LLM](https://github.com/intel-analytics/ipex-llm) is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency.\n",
|
|
"\n",
|
|
"This example goes over how to use LangChain to conduct embedding tasks with `ipex-llm` optimizations on Intel GPU. This would be helpful in applications such as RAG, document QA, etc.\n",
|
|
"\n",
|
|
"> **Note**\n",
|
|
">\n",
|
|
"> It is recommended that only Windows users with Intel Arc A-Series GPU (except for Intel Arc A300-Series or Pro A60) run this Jupyter notebook directly. For other cases (e.g. Linux users, Intel iGPU, etc.), it is recommended to run the code with Python scripts in terminal for best experiences.\n",
|
|
"\n",
|
|
"## Install Prerequisites\n",
|
|
"To benefit from IPEX-LLM on Intel GPUs, there are several prerequisite steps for tools installation and environment preparation.\n",
|
|
"\n",
|
|
"If you are a Windows user, visit the [Install IPEX-LLM on Windows with Intel GPU Guide](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/install_windows_gpu.md), and follow [Install Prerequisites](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/install_windows_gpu.md#install-prerequisites) to update GPU driver (optional) and install Conda.\n",
|
|
"\n",
|
|
"If you are a Linux user, visit the [Install IPEX-LLM on Linux with Intel GPU](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/install_linux_gpu.md), and follow [**Install Prerequisites**](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/install_linux_gpu.md#install-prerequisites) to install GPU driver, Intel® oneAPI Base Toolkit 2024.0, and Conda.\n",
|
|
"\n",
|
|
"## Setup\n",
|
|
"\n",
|
|
"After the prerequisites installation, you should have created a conda environment with all prerequisites installed. **Start the jupyter service in this conda environment**:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"%pip install -qU langchain langchain-community"
|
|
]
|
|
},
|
|
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install IPEX-LLM for optimizations on Intel GPU, as well as `sentence-transformers`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/\n",
"%pip install sentence-transformers"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Note**\n",
">\n",
"> You can also use `https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/` as the extra-index-url.\n",
"\n",
"## Runtime Configuration\n",
"\n",
"For optimal performance, it is recommended to set several environment variables based on your device:\n",
"\n",
"### For Windows Users with Intel Core Ultra integrated GPU"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"SYCL_CACHE_PERSISTENT\"] = \"1\"\n",
"os.environ[\"BIGDL_LLM_XMX_DISABLED\"] = \"1\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### For Windows Users with Intel Arc A-Series GPU"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"SYCL_CACHE_PERSISTENT\"] = \"1\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Note**\n",
">\n",
"> The first time each model runs on an Intel iGPU, Intel Arc A300-Series, or Pro A60 GPU, it may take several minutes to compile.\n",
">\n",
"> For other GPU types, refer to [here](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Overview/install_gpu.md#runtime-configuration) for Windows users, and [here](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Overview/install_gpu.md#runtime-configuration-1) for Linux users.\n",
"\n",
"## Basic Usage\n",
"\n",
"Setting `device` to `\"xpu\"` in `model_kwargs` when initializing `IpexLLMBgeEmbeddings` will put the embedding model on Intel GPU and benefit from IPEX-LLM optimizations:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import IpexLLMBgeEmbeddings\n",
"\n",
"embedding_model = IpexLLMBgeEmbeddings(\n",
"    model_name=\"BAAI/bge-large-en-v1.5\",\n",
"    model_kwargs={\"device\": \"xpu\"},\n",
"    encode_kwargs={\"normalize_embeddings\": True},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"API Reference:\n",
"- [IpexLLMBgeEmbeddings](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.ipex_llm.IpexLLMBgeEmbeddings.html)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sentence = \"IPEX-LLM is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency.\"\n",
"query = \"What is IPEX-LLM?\"\n",
"\n",
"text_embeddings = embedding_model.embed_documents([sentence, query])\n",
"print(f\"text_embeddings[0][:10]: {text_embeddings[0][:10]}\")\n",
"print(f\"text_embeddings[1][:10]: {text_embeddings[1][:10]}\")\n",
"\n",
"query_embedding = embedding_model.embed_query(query)\n",
"print(f\"query_embedding[:10]: {query_embedding[:10]}\")"
]
}
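,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the embedding model was initialized with `normalize_embeddings: True`, the returned vectors are unit-length, so a plain dot product between two embeddings equals their cosine similarity. The small helper below is an illustrative sketch (not part of the `IpexLLMBgeEmbeddings` API) for scoring the query against the embedded texts:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# With normalize_embeddings=True the embedding vectors are unit-length,\n",
"# so a plain dot product gives the cosine similarity.\n",
"def dot(a, b):\n",
"    return sum(x * y for x, y in zip(a, b))\n",
"\n",
"\n",
"print(f\"similarity(sentence, query): {dot(text_embeddings[0], query_embedding):.4f}\")\n",
"print(f\"similarity(query, query): {dot(text_embeddings[1], query_embedding):.4f}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the RAG and document QA use cases mentioned above, the embedding model can be plugged into any LangChain vector store. The cell below is a minimal retrieval sketch, assuming the `faiss-cpu` package is installed (`%pip install -qU faiss-cpu`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import FAISS\n",
"\n",
"# Index the example sentence and retrieve the best match for the query;\n",
"# both embedding calls run on the Intel GPU via IPEX-LLM.\n",
"vectorstore = FAISS.from_texts([sentence], embedding=embedding_model)\n",
"retrieved = vectorstore.similarity_search(query, k=1)\n",
"print(retrieved[0].page_content)"
]
}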
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}