* Ensure access to local model during `ChatOllama` instantiation
(#27720). This adds a new param `validate_model_on_init` (default:
`True`); see the example below.
* Catch a few more errors from the Ollama client to assist users
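A quick sketch of the new flag in use (the model name is just an example):

```python
from langchain_ollama import ChatOllama

# Validates at construction time that the model is available locally,
# instead of failing later on the first call.
llm = ChatOllama(model="llama3.1", validate_model_on_init=True)
```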
- Updated some ambiguous wording to clarify the functionality of
`reasoning_format` in ChatGroq.
- Added support for `reasoning_effort`
- Added links to see models capable of `reasoning_format` and
`reasoning_effort`
- Other minor nits
- docs: for the Ollama notebooks, improve the specificity of some links,
add `homebrew` install info, update some wording
- tests: halve the number of local models needed to run the tests, from
4 to 2 (shedding 8 GB of required installs)
- bump deps (non-breaking) in anticipation of upcoming "thinking" PR
**Description**:
Add an `async_client_kwargs` field to the Ollama chat/llm/embeddings
adapters that is passed to the async httpx client constructor.
**Motivation:**
In my use-case:
- chat/embedding model adapters may be created frequently, sometimes to
be called just once or never called at all
- they may be used in both sync and async mode (not known at the moment
they are created)
So, I want to keep a static transport instance maintaining a connection
pool, so model adapters can be created and destroyed freely. But that
doesn't work when both sync and async functions are in use: I can only
pass one transport instance for both the sync and async client, while
the transport types must differ between them. So with the current model
adapter interfaces I can't make both sync and async calls use a shared
transport.
In this PR I add a separate `async_client_kwargs` that gets passed to
the async client constructor, so it is possible to pass a separate
transport instance. For the sake of backwards compatibility, it is
merged with `client_kwargs`, so nothing changes when it is not set.
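A minimal sketch of the intended usage, assuming `client_kwargs` configures the sync httpx client, `async_client_kwargs` configures the async one, and async-specific keys take precedence when the two are merged:

```python
import httpx

from langchain_ollama import ChatOllama

# Long-lived transports owning the connection pools; sync and async httpx
# clients require different transport types, hence the two kwargs.
sync_transport = httpx.HTTPTransport(retries=2)
async_transport = httpx.AsyncHTTPTransport(retries=2)

llm = ChatOllama(
    model="llama3.1",
    client_kwargs={"transport": sync_transport},         # sync httpx client
    async_client_kwargs={"transport": async_transport},  # async httpx client
)
```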
I am unable to run the linter right now, but the changes look OK.
Follow up to https://github.com/langchain-ai/langsmith-sdk/pull/1696,
I've bumped the `langsmith` version where applicable in `uv.lock`.
Type-checking problems arise here because deps have been updated in
`pyproject.toml` but `uv lock` hasn't been run. We should enforce that
in the future; it goes with the other dependabot todos :).
Replacement for PR #30191 (@ccurme)
**Description**: currently, ChatOllama [will raise a ValueError if a
ChatMessage is passed to
it](https://github.com/langchain-ai/langchain/blob/master/libs/partners/ollama/langchain_ollama/chat_models.py#L514),
as described in
https://github.com/langchain-ai/langchain/pull/30147#issuecomment-2708932481.
Furthermore, ollama-python is removing the limitations on valid roles
that can be passed through chat messages to a model in ollama -
https://github.com/ollama/ollama-python/pull/462#event-16917810634.
This PR removes the role limitations imposed by langchain and enables
passing langchain ChatMessages with arbitrary 'role' values through the
langchain ChatOllama class to the underlying ollama-python Client.
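For illustration only, a hedged sketch of what this enables (the role string here is arbitrary):

```python
from langchain_core.messages import ChatMessage
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")

# Previously this raised a ValueError inside ChatOllama; with the restriction
# removed, the message and its custom role are forwarded to the ollama client.
response = llm.invoke([ChatMessage(role="control", content="thinking")])
```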
As this PR relies on [merged but unreleased functionality in
ollama-python](
https://github.com/ollama/ollama-python/pull/462#event-16917810634), I
have temporarily pointed the ollama package source to the main branch of
the ollama-python github repo.
Format, lint, and tests of the new functionality are passing. Need to
resolve an issue with recently added ChatOllama tests. (Now resolved.)
**Issue**: resolves #30122 (related to ollama issue
https://github.com/ollama/ollama/issues/8955)
**Dependencies**: no new dependencies
- [x] PR title
- [x] PR message
- [x] Lint and test: format, lint, and test all running successfully and
passing
---------
Co-authored-by: Ryan Stewart <ryanstewart@Ryans-MacBook-Pro.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
https://github.com/langchain-ai/langchain/pull/30778 (not released)
broke all invocation modes of ChatOllama (the intent was to remove
`"message"` from `generation_info`, but we instead turned
`generation_info` into `stream_resp["message"]`), resulting in
validation errors.
# Description
This PR adds reasoning model support for `langchain-ollama` by
extracting reasoning token blocks, like those emitted by DeepSeek
models. It was
inspired by
[ollama-deep-researcher](https://github.com/langchain-ai/ollama-deep-researcher),
specifically the parsing of [thinking
blocks](6d1aaf2139/src/assistant/graph.py (L91)):
```python
# TODO: This is a hack to remove the <think> tags w/ Deepseek models
# It appears very challenging to prompt them out of the responses
while "<think>" in running_summary and "</think>" in running_summary:
start = running_summary.find("<think>")
end = running_summary.find("</think>") + len("</think>")
running_summary = running_summary[:start] + running_summary[end:]
```
This notes that it is very hard to prompt the reasoning block out of the
responses, but we actually want the model to reason in order to improve
performance. This implementation extracts the thinking block, so the
client can still expect a proper message to be returned by `ChatOllama`
(and use the reasoning content separately when desired).
This implementation takes the same approach as
[ChatDeepseek](5d581ba22c/libs/partners/deepseek/langchain_deepseek/chat_models.py (L215)),
which adds the reasoning content to
`chunk.additional_kwargs["reasoning_content"]`:
```python
if hasattr(response.choices[0].message, "reasoning_content"): # type: ignore
rtn.generations[0].message.additional_kwargs["reasoning_content"] = (
response.choices[0].message.reasoning_content # type: ignore
)
```
This should probably be handled upstream in ollama + ollama-python, but
this seems like a reasonably effective solution. Here is a standalone
example of what is happening:
```python
from collections.abc import AsyncIterator
from typing import Any

from langchain_core.language_models import BaseChatModel
from langchain_core.messages import BaseMessage, BaseMessageChunk
from langchain_core.runnables import RunnableConfig


async def deepseek_message_astream(
    llm: BaseChatModel,
    messages: list[BaseMessage],
    config: RunnableConfig | None = None,
    *,
    model_target: str = "deepseek-r1",
    **kwargs: Any,
) -> AsyncIterator[BaseMessageChunk]:
    """Stream responses from Deepseek models, filtering out <think> tags.

    Args:
        llm: The language model to stream from
        messages: The messages to send to the model

    Yields:
        Filtered chunks from the model response
    """
    # Check if the model is deepseek-based; if not, pass chunks through unchanged
    model_name = getattr(llm, "model", None) or llm.name or ""
    if model_target not in model_name:
        async for chunk in llm.astream(messages, config=config, **kwargs):
            yield chunk
        return

    # Yield with a buffer: while inside a <think></think> block, move content
    # into additional_kwargs["reasoning_content"]; upon completing the tags,
    # reset the buffer and start over
    buffer = ""
    async for chunk in llm.astream(messages, config=config, **kwargs):
        content = chunk.content if isinstance(chunk.content, str) else ""
        buffer += content

        # Process buffer to remove <think> tags from the visible content
        if "<think>" in buffer or "</think>" in buffer:
            if hasattr(chunk, "tool_calls") and chunk.tool_calls:
                raise NotImplementedError(
                    "tool calls during reasoning should be removed?"
                )
            # Upon block completion, reset the buffer before handling this chunk
            block_complete = "<think>" in buffer and "</think>" in buffer
            if block_complete:
                buffer = ""
            if "<think>" in content or "</think>" in content:
                # Drop the chunks that carry the tags themselves
                continue
            if not block_complete:
                chunk.additional_kwargs["reasoning_content"] = content
                chunk.content = ""
        yield chunk
```
# Issue
Integrating reasoning models (e.g. deepseek-r1) into existing
LangChain-based workflows is hard because of the thinking blocks
included in the message contents. To avoid this, we could match the
`ChatOllama` integration with `ChatDeepseek` and return the reasoning
content inside `message.additional_kwargs["reasoning_content"]` instead.
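For context, a hypothetical consumer-side sketch; the constructor flag name used here (`extract_reasoning`) is an assumption for illustration, the point is where the reasoning text ends up:

```python
from langchain_ollama import ChatOllama

# `extract_reasoning` is a hypothetical flag name for this illustration.
llm = ChatOllama(model="deepseek-r1:8b", extract_reasoning=True)
msg = llm.invoke("What is 17 * 23?")

print(msg.content)                                     # answer, <think> block removed
print(msg.additional_kwargs.get("reasoning_content"))  # extracted reasoning text
```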
# Dependencies
None
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Switch from function calling to Ollama's [dedicated structured output
feature](https://ollama.com/blog/structured-outputs).
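A hedged sketch of what the switch looks like from the caller's side, assuming `with_structured_output` exposes a `method="json_schema"` option that rides on Ollama's `format` field:

```python
from pydantic import BaseModel

from langchain_ollama import ChatOllama


class Joke(BaseModel):
    setup: str
    punchline: str


llm = ChatOllama(model="llama3.1")
# With the dedicated feature, the JSON schema is passed on the request's
# `format` field rather than being emulated through a function-calling tool.
structured_llm = llm.with_structured_output(Joke, method="json_schema")
joke = structured_llm.invoke("Tell me a joke about cats")
```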
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
These are set in GitHub workflows, but we forgot to add them to most
makefiles for convenience when developing locally.
`uv run` will automatically sync the lock file. Because many of our
development dependencies are local installs, it will pick up version
changes and update the lock file. Passing `--frozen` or setting this
environment variable disables the behavior.
- [feat] **Added backwards compatibility for OllamaEmbeddings
initialization (migration from `langchain_community.embeddings` to
`langchain_ollama.embeddings`)**: "langchain_ollama"
- **Description:** Given that `OllamaEmbeddings` from
`langchain_community.embeddings` is deprecated, code is being shifted to
`langchain_ollama.embeddings`. However, the new package does not offer
backward compatibility for initializing the `OllamaEmbeddings` object
with the same parameters.
- **Issue:** #29294
- **Dependencies:** None
- **Twitter handle:** @BaqarAbbas2001
## Additional Information
Previously, `OllamaEmbeddings` from `langchain_community.embeddings`
supported the following options:
e9abe583b2/libs/community/langchain_community/embeddings/ollama.py (L125-L139)
However, in the new package (`from langchain_ollama import
OllamaEmbeddings`) there is no way to set these options. I have added
these parameters to resolve this issue.
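For clarity, a hedged sketch of the restored initialization; the specific option names mirror Ollama client options and are assumptions here:

```python
from langchain_ollama import OllamaEmbeddings

# Options the deprecated community class accepted and the new class should
# now accept again (names are illustrative Ollama client options).
embeddings = OllamaEmbeddings(
    model="nomic-embed-text",
    num_ctx=2048,
    temperature=0.0,
    top_k=40,
    top_p=0.9,
)
vector = embeddings.embed_query("hello world")
```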
This issue was also discussed in
https://github.com/langchain-ai/langchain/discussions/29113
## Description
This PR addresses the following:
**Fixes Issue #25343:**
- Adds additional logic to parse shallowly nested JSON-encoded strings
in tool call arguments, allowing proper parsing of responses like those
of Llama 3.1 and 3.2 with nested schemas (see the sketch after this
list).
**Adds Integration Test for Fix:**
- Adds a Ollama specific integration test to ensure the issue is
resolved and to prevent regressions in the future.
**Fixes Failing Integration Tests:**
- Fixes failing integration tests (failing even prior to these changes)
caused by the `llama3-groq-tool-use` model. Previously, the tests
`test_structured_output_async` and `test_structured_output_optional_param`
failed because the model did not issue a tool call in the response.
Resolved by switching to `llama3.1`.
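To illustrate the failure mode (this is a standalone sketch, not the implementation), here is the kind of shallowly nested, JSON-encoded tool-call argument Llama 3.x can emit and the extra decoding pass applied to it:

```python
import json

# The model JSON-encodes the nested object as a string inside the arguments.
raw_arguments = '{"person": "{\\"name\\": \\"Alice\\", \\"age\\": 30}"}'

args = json.loads(raw_arguments)
# Without the fix, args["person"] remains a string; a second, shallow pass
# decodes JSON-encoded string values back into objects.
for key, value in args.items():
    if isinstance(value, str):
        try:
            args[key] = json.loads(value)
        except json.JSONDecodeError:
            pass

print(args)  # {'person': {'name': 'Alice', 'age': 30}}
```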
## Issue
Fixes #25343.
## Dependencies
No dependencies.
____
Done in collaboration with @ishaan-upadhyay @mirajismail @ZackSteine.
v0.4 of the Python SDK is already installed via the lock file in CI, but
our current implementation is not compatible with it.
This also addresses an issue introduced in
https://github.com/langchain-ai/langchain/pull/28299. @RyanMagnuson
would you mind explaining the motivation for that change? From what I
can tell the Ollama SDK [does not support
kwargs](6c44bb2729/ollama/_client.py (L286)).
Previously, unsupported kwargs were ignored, but they currently raise
`TypeError`.
Some of LangChain's standard test suite expects `tool_choice` to be
supported, so here we catch it in `bind_tools` so that it is ignored and
not passed through to the client.
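As a hedged sketch (not the library's actual code) of what catching `tool_choice` in `bind_tools` looks like:

```python
from typing import Any

from langchain_core.language_models import LanguageModelInput
from langchain_core.messages import BaseMessage
from langchain_core.runnables import Runnable
from langchain_ollama import ChatOllama


class SketchChatOllama(ChatOllama):
    """Illustrative subclass only: accept and drop `tool_choice`."""

    def bind_tools(
        self,
        tools: Any,
        *,
        tool_choice: Any = None,  # accepted for test-suite parity, then ignored
        **kwargs: Any,
    ) -> Runnable[LanguageModelInput, BaseMessage]:
        # The ollama client has no tool_choice equivalent; forwarding it would
        # raise a TypeError, so it is swallowed here instead.
        return super().bind_tools(tools, **kwargs)
```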
**Description:** The issue concerns unexpected behavior observed when
using the `bind_tools` method in LangChain's ChatOllama. When tools are
not bound, the `llm.stream()` method works as expected, returning
incremental chunks of content, which is crucial for real-time
applications such as conversational agents and live feedback systems.
However, when `bind_tools([])` is used, the streaming behavior changes,
causing the output to be delivered in full chunks rather than
incrementally. This change negatively impacts the user experience by
breaking the real-time nature of the streaming mechanism.
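A minimal reproduction sketch of the reported behavior (the model name is just an example):

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")

# Without bound tools, chunks arrive incrementally as expected.
for chunk in llm.stream("Tell me a short story"):
    print(chunk.content, end="", flush=True)

# With bind_tools([]) the output was reported to arrive as one large chunk
# instead of streaming incrementally.
llm_with_tools = llm.bind_tools([])
for chunk in llm_with_tools.stream("Tell me a short story"):
    print(chunk.content, end="", flush=True)
```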
**Issue:** #26971
---------
Co-authored-by: 4meyDam1e <amey.damle@mail.utoronto.ca>
Co-authored-by: Chester Curme <chester.curme@gmail.com>