langchain-ollama

This package contains the LangChain integration with Ollama.

Installation

pip install -U langchain-ollama

For the package to work, you will need to install and run the Ollama server locally (see https://ollama.com/download).

To run integration tests (make integration_tests), you will need the following models installed in your Ollama server:

  • llama3.1
  • deepseek-r1:1.5b

Install these models by running:

ollama pull <name-of-model>
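
For example, to pull the two models listed above:

ollama pull llama3.1
ollama pull deepseek-r1:1.5b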

Chat Models

The ChatOllama class exposes chat models from Ollama.

from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")
llm.invoke("Sing a ballad of LangChain.")
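
ChatOllama implements the standard LangChain Runnable interface, so you can also pass a list of role/content messages instead of a bare string and stream the response. A minimal sketch:

from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")

# A system message plus a human message, as (role, content) tuples
messages = [
    ("system", "You are a helpful assistant that answers in verse."),
    ("human", "Sing a ballad of LangChain."),
]

# Stream the reply chunk by chunk instead of waiting for the full message
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)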

v1 Chat Models

For v1 chat models, import ChatOllama from the v1 namespace:

from langchain_ollama.v1.chat_models import ChatOllama
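
A minimal usage sketch, assuming the v1 class takes the same model parameter and supports the same invoke call as the stable ChatOllama shown above:

from langchain_ollama.v1.chat_models import ChatOllama

# Assumes the v1 class mirrors the stable constructor and invoke interface
llm = ChatOllama(model="llama3.1")
llm.invoke("Sing a ballad of LangChain.")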

Embeddings

The OllamaEmbeddings class exposes embedding models from Ollama.

from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="llama3.1")
embeddings.embed_query("What is the meaning of life?")
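
To embed several texts in one call, use embed_documents, which is part of the standard LangChain Embeddings interface and returns one vector per input text:

from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="llama3.1")

# Embed a batch of documents; the result is a list of float vectors
vectors = embeddings.embed_documents([
    "LangChain is a framework for building LLM applications.",
    "Ollama runs open models locally.",
])
print(len(vectors), len(vectors[0]))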

LLMs

The OllamaLLM class exposes traditional (completion-style) LLMs from Ollama.

from langchain_ollama import OllamaLLM

llm = OllamaLLM(model="llama3.1")
llm.invoke("The meaning of life is")
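
Like the other classes, OllamaLLM implements the Runnable interface, so streaming works the same way; for the completion-style LLM each chunk is a plain string:

from langchain_ollama import OllamaLLM

llm = OllamaLLM(model="llama3.1")

# Stream the completion token by token; each chunk is a string
for chunk in llm.stream("The meaning of life is"):
    print(chunk, end="", flush=True)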