
# langchain-ollama

This package contains the LangChain integration with Ollama.

## Installation

```bash
pip install -U langchain-ollama
```

For the package to work, you will need to install and run the Ollama server locally ([download](https://ollama.com/download)).

To run integration tests (`make integration_tests`), you will need the following models installed in your Ollama server:

- `llama3.1`
- `deepseek-r1:1.5b`

Install these models by running `ollama pull <name-of-model>` for each one:

```bash
ollama pull llama3.1
ollama pull deepseek-r1:1.5b
```

## Chat Models

The `ChatOllama` class exposes chat models from Ollama.

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")
llm.invoke("Sing a ballad of LangChain.")
```

## Embeddings

The `OllamaEmbeddings` class exposes embeddings from Ollama.

```python
from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="llama3.1")
embeddings.embed_query("What is the meaning of life?")
```

## LLMs

The `OllamaLLM` class exposes traditional LLMs from Ollama.

```python
from langchain_ollama import OllamaLLM

llm = OllamaLLM(model="llama3.1")
llm.invoke("The meaning of life is")
```