docs: Update integration docs for OllamaEmbeddingsModel (#25314)

Issue: https://github.com/langchain-ai/langchain/issues/24856

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>

commit 8645a49f31 (parent a4ef830480)
@@ -12,34 +12,20 @@
   },
   {
    "cell_type": "markdown",
-   "id": "e49f1e0d",
+   "id": "9a3d6f34",
    "metadata": {},
    "source": [
     "# OllamaEmbeddings\n",
     "\n",
-    "This notebook covers how to get started with Ollama embedding models.\n",
+    "This will help you get started with Ollama embedding models using LangChain. For detailed documentation on `OllamaEmbeddings` features and configuration options, please refer to the [API reference](https://api.python.langchain.com/en/latest/embeddings/langchain_ollama.embeddings.OllamaEmbeddings.html).\n",
+    "\n",
+    "## Overview\n",
+    "### Integration details\n",
+    "\n",
+    "import { ItemTable } from \"@theme/FeatureTables\";\n",
+    "\n",
+    "<ItemTable category=\"text_embedding\" item=\"Ollama\" />\n",
     "\n",
-    "## Installation"
-   ]
-  },
-  {
-   "cell_type": "raw",
-   "id": "57f50aa5",
-   "metadata": {
-    "vscode": {
-     "languageId": "raw"
-    }
-   },
-   "source": [
-    "# install package\n",
-    "%pip install langchain_ollama"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "id": "2b4f3e15",
-   "metadata": {},
-   "source": [
     "## Setup\n",
     "\n",
     "First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:\n",
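The rewritten intro above still assumes a local Ollama instance. As a quick sanity check that the server is actually reachable before running the notebook's cells (a hypothetical helper, not part of the diff, assuming Ollama's default address http://localhost:11434):

# Hypothetical sanity check, not part of the notebook: confirm a local
# Ollama instance is reachable at its default address before embedding.
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:11434", timeout=5) as resp:
        print(resp.read().decode())  # the server replies "Ollama is running"
except OSError as exc:
    print(f"Ollama does not appear to be running: {exc}")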
@@ -60,86 +46,209 @@
     "* View the [Ollama documentation](https://github.com/jmorganca/ollama) for more commands. Run `ollama help` in the terminal to see available commands too.\n",
     "\n",
     "\n",
-    "## Usage"
+    "### Credentials\n",
+    "\n",
+    "There is no built-in auth mechanism for Ollama."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "c84fb993",
+   "metadata": {},
+   "source": [
+    "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "39a4953b",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
+    "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "d9664366",
+   "metadata": {},
+   "source": [
+    "### Installation\n",
+    "\n",
+    "The LangChain Ollama integration lives in the `langchain-ollama` package:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "id": "64853226",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Note: you may need to restart the kernel to use updated packages.\n"
+     ]
+    }
+   ],
+   "source": [
+    "%pip install -qU langchain-ollama"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "45dd1724",
+   "metadata": {},
+   "source": [
+    "## Instantiation\n",
+    "\n",
+    "Now we can instantiate our model object and generate embeddings:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "id": "9ea7a09b",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from langchain_ollama import OllamaEmbeddings\n",
+    "\n",
+    "embeddings = OllamaEmbeddings(\n",
+    "    model=\"llama3\",\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "77d271b6",
+   "metadata": {},
+   "source": [
+    "## Indexing and Retrieval\n",
+    "\n",
+    "Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our RAG tutorials under the [working with external knowledge tutorials](/docs/tutorials/#working-with-external-knowledge).\n",
+    "\n",
+    "Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": 4,
-   "id": "62e0dbc3",
-   "metadata": {
-    "tags": []
-   },
-   "outputs": [],
-   "source": [
-    "from langchain_ollama import OllamaEmbeddings\n",
-    "\n",
-    "embeddings = OllamaEmbeddings(model=\"llama3\")"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 5,
-   "id": "12fcfb4b",
+   "id": "d817716b",
    "metadata": {},
    "outputs": [
     {
      "data": {
       "text/plain": [
-       "[1.1588108539581299,\n",
-       " -3.3943021297454834,\n",
-       " 0.8108075261116028,\n",
-       " 0.48006290197372437,\n",
-       " -1.8064439296722412,\n",
-       " -0.5782400965690613,\n",
-       " 1.8570188283920288,\n",
-       " 2.2842330932617188,\n",
-       " -2.836144208908081,\n",
-       " -0.6422690153121948,\n",
-       " ...]"
+       "'LangChain is the framework for building context-aware reasoning applications'"
       ]
     },
-    "execution_count": 5,
+    "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
-    "embeddings.embed_query(\"My query to look up\")"
+    "# Create a vector store with a sample text\n",
+    "from langchain_core.vectorstores import InMemoryVectorStore\n",
+    "\n",
+    "text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
+    "\n",
+    "vectorstore = InMemoryVectorStore.from_texts(\n",
+    "    [text],\n",
+    "    embedding=embeddings,\n",
+    ")\n",
+    "\n",
+    "# Use the vectorstore as a retriever\n",
+    "retriever = vectorstore.as_retriever()\n",
+    "\n",
+    "# Retrieve the most similar text\n",
+    "retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
+    "\n",
+    "# show the retrieved document's content\n",
+    "retrieved_documents[0].page_content"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "e02b9855",
+   "metadata": {},
+   "source": [
+    "## Direct Usage\n",
+    "\n",
+    "Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
+    "\n",
+    "You can directly call these methods to get embeddings for your own use cases.\n",
+    "\n",
+    "### Embed single texts\n",
+    "\n",
+    "You can embed single texts or documents with `embed_query`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "id": "0d2befcd",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[-0.001288981, 0.006547121, 0.018376578, 0.025603496, 0.009599175, -0.0042578303, -0.023250086, -0.0\n"
+     ]
+    }
+   ],
+   "source": [
+    "single_vector = embeddings.embed_query(text)\n",
+    "print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "1b5a7d03",
+   "metadata": {},
+   "source": [
+    "### Embed multiple texts\n",
+    "\n",
+    "You can embed multiple texts with `embed_documents`:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": 6,
-   "id": "1f2e6104",
+   "id": "2f4d6e97",
    "metadata": {},
    "outputs": [
     {
-     "data": {
-      "text/plain": [
-       "[[0.026717308908700943,\n",
-       " -3.073253870010376,\n",
-       " -0.983579158782959,\n",
-       " -1.3976373672485352,\n",
-       " 0.3153868317604065,\n",
-       " -0.9198529124259949,\n",
-       " -0.5000395178794861,\n",
-       " -2.8302183151245117,\n",
-       " 0.48412731289863586,\n",
-       " -1.3201743364334106,\n",
-       " ...]]"
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[-0.0013138362, 0.006438795, 0.018304596, 0.025530428, 0.009717592, -0.004225636, -0.023363983, -0.0\n",
+      "[-0.010317663, 0.01632489, 0.0070348927, 0.017076202, 0.008924255, 0.007399284, -0.023064945, -0.003\n"
      ]
-    },
-    "execution_count": 8,
-    "metadata": {},
-    "output_type": "execute_result"
    }
   ],
   "source": [
-    "# async embed documents\n",
-    "await embeddings.aembed_documents(\n",
-    "    [\"This is a content of the document\", \"This is another document\"]\n",
-    ")"
+    "text2 = (\n",
+    "    \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
+    ")\n",
+    "two_vectors = embeddings.embed_documents([text, text2])\n",
+    "for vector in two_vectors:\n",
+    "    print(str(vector)[:100]) # Show the first 100 characters of the vector"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "98785c12",
+   "metadata": {},
+   "source": [
+    "## API Reference\n",
+    "\n",
+    "For detailed documentation on `OllamaEmbeddings` features and configuration options, please refer to the [API reference](https://api.python.langchain.com/en/latest/embeddings/langchain_ollama.embeddings.OllamaEmbeddings.html).\n"
    ]
   }
  ],
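Note that the hunk above drops the old `await embeddings.aembed_documents(...)` cell in favor of the synchronous `embed_documents` walkthrough. A minimal sketch of the removed async pattern, assuming a local `llama3` model and the `aembed_documents` coroutine that LangChain embedding classes expose:

# Minimal async sketch of the pattern removed in this commit.
# Assumes Ollama is running locally with the llama3 model pulled.
import asyncio

from langchain_ollama import OllamaEmbeddings


async def main() -> None:
    embeddings = OllamaEmbeddings(model="llama3")
    vectors = await embeddings.aembed_documents(
        ["This is a content of the document", "This is another document"]
    )
    # One embedding vector per input document
    print(len(vectors), len(vectors[0]))


asyncio.run(main())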
@@ -159,7 +268,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.12.3"
+  "version": "3.9.6"
  }
 },
 "nbformat": 4,
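Taken together, the updated notebook documents installation, instantiation, indexing and retrieval, and direct embedding calls. Condensed into one script under the same assumptions (`langchain-ollama` and `langchain-core` installed, Ollama running locally with `llama3` pulled), the documented flow looks roughly like:

# Condensed sketch of the updated notebook's full flow.
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="llama3")

text = "LangChain is the framework for building context-aware reasoning applications"
text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs"

# Indexing and retrieval: store one document, then query it back
vectorstore = InMemoryVectorStore.from_texts([text], embedding=embeddings)
retriever = vectorstore.as_retriever()
print(retriever.invoke("What is LangChain?")[0].page_content)

# Direct usage: embed a single query and multiple documents
single_vector = embeddings.embed_query(text)
print(str(single_vector)[:100])

two_vectors = embeddings.embed_documents([text, text2])
for vector in two_vectors:
    print(str(vector)[:100])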