**Description:**
- Adding quantized embedders using optimum-intel and intel-extension-for-pytorch.
- Added mdx documentation and example notebooks.
- Added embedding import testing.

**Dependencies:**
optimum = {extras = ["neural-compressor"], version = "^1.14.0", optional = true}
intel_extension_for_pytorch = {version = "^2.2.0", optional = true}

Dependencies have been added to pyproject.toml for the community lib.

**Twitter handle:** @peter_izsak

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "ae6f9d9d-fe44-489c-9661-dac69683dcd2",
   "metadata": {},
   "source": [
    "# Embedding Documents using Optimized and Quantized Embedders\n",
    "\n",
    "This notebook demonstrates how to embed documents using quantized embedders.\n",
    "\n",
    "The embedders are based on optimized models, created by using [optimum-intel](https://github.com/huggingface/optimum-intel.git) and [IPEX](https://github.com/intel/intel-extension-for-pytorch).\n",
    "\n",
    "Example text is based on [SBERT](https://www.sbert.net/docs/pretrained_cross-encoders.html)."
   ]
  },
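  {
   "cell_type": "markdown",
   "id": "setup-deps-markdown",
   "metadata": {},
   "source": [
    "As a minimal setup sketch (the version pins follow the optional dependencies declared for the community lib; adjust them for your environment), install `optimum` with the `neural-compressor` extra together with `intel-extension-for-pytorch`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "setup-deps-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional dependencies for the quantized embedders\n",
    "# (pins mirror the ones in pyproject.toml; adjust as needed)\n",
    "%pip install --quiet \"optimum[neural-compressor]>=1.14.0\" \"intel-extension-for-pytorch>=2.2.0\""
   ]
  },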
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "b9d1a3bb-83b1-4029-ad8d-411db1fba034",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "loading configuration file inc_config.json from cache at \n",
      "INCConfig {\n",
      "  \"distillation\": {},\n",
      "  \"neural_compressor_version\": \"2.4.1\",\n",
      "  \"optimum_version\": \"1.16.2\",\n",
      "  \"pruning\": {},\n",
      "  \"quantization\": {\n",
      "    \"dataset_num_samples\": 50,\n",
      "    \"is_static\": true\n",
      "  },\n",
      "  \"save_onnx_model\": false,\n",
      "  \"torch_version\": \"2.2.0\",\n",
      "  \"transformers_version\": \"4.37.2\"\n",
      "}\n",
      "\n",
      "Using `INCModel` to load a TorchScript model will be deprecated in v1.15.0, to load your model please use `IPEXModel` instead.\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.embeddings import QuantizedBiEncoderEmbeddings\n",
    "\n",
    "model_name = \"Intel/bge-small-en-v1.5-rag-int8-static\"\n",
    "encode_kwargs = {\"normalize_embeddings\": True}  # set True to compute cosine similarity\n",
    "\n",
    "model = QuantizedBiEncoderEmbeddings(\n",
    "    model_name=model_name,\n",
    "    encode_kwargs=encode_kwargs,\n",
    "    query_instruction=\"Represent this sentence for searching relevant passages: \",\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "34318164-7a6f-47b6-8690-3b1d71e1fcfc",
   "metadata": {},
   "source": [
    "Let's ask a question and compare it against two documents. The first contains the answer to the question; the second does not.\n",
    "\n",
    "We can check which one better suits our query."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "55ff07ca-fb44-4dcf-b2d3-dde021a53983",
   "metadata": {},
   "outputs": [],
   "source": [
    "question = \"How many people live in Berlin?\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "aebef832-5534-440c-a4a8-4bf56ccd8ad4",
   "metadata": {},
   "outputs": [],
   "source": [
    "documents = [\n",
    "    \"Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.\",\n",
    "    \"Berlin is well known for its museums.\",\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "4eec7eda-0d9b-4488-a0e8-3eedd28ab0b1",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Batches: 100%|██████████| 1/1 [00:00<00:00, 4.18it/s]\n"
     ]
    }
   ],
   "source": [
    "doc_vecs = model.embed_documents(documents)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "8e6dac72-5a0b-4421-9454-aa0a49b20c66",
   "metadata": {},
   "outputs": [],
   "source": [
    "query_vec = model.embed_query(question)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "ec26eb7a-a259-4bb9-b9d8-9ff345a8c798",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Convert the embedding lists to tensors for the similarity computation\n",
    "doc_vecs_torch = torch.tensor(doc_vecs)\n",
    "query_vec_torch = torch.tensor(query_vec)"
   ]
  },
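  {
   "cell_type": "markdown",
   "id": "dot-product-note",
   "metadata": {},
   "source": [
    "Because the embeddings were normalized (`normalize_embeddings=True` above), the dot product between the query vector and each document vector equals their cosine similarity, so a single matrix product scores both documents at once:"
   ]
  },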
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "2b49446e-1336-46b3-b9ef-af56b4870876",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([0.7980, 0.6529])"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "query_vec_torch @ doc_vecs_torch.T"
   ]
  },
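  {
   "cell_type": "markdown",
   "id": "cosine-check-markdown",
   "metadata": {},
   "source": [
    "As a sanity-check sketch (not part of the original example), `torch.nn.functional.cosine_similarity` should reproduce the same scores, since dot products of normalized vectors and cosine similarity coincide:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cosine-check-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch.nn.functional as F\n",
    "\n",
    "# Broadcast the single query vector against both document vectors;\n",
    "# the result should match the dot products computed above.\n",
    "F.cosine_similarity(query_vec_torch.unsqueeze(0), doc_vecs_torch, dim=1)"
   ]
  },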
  {
   "cell_type": "markdown",
   "id": "6cc1ac2a-9641-408e-a373-736d121fc3c7",
   "metadata": {},
   "source": [
    "We can see that the first document indeed ranks higher."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}