1428 Commits

Author SHA1 Message Date
Mason Daugherty
2aed07efb6 release(deepseek): 0.1.4 (#32178) 2025-07-22 13:01:54 -04:00
Mason Daugherty
64dac1faf7 release(huggingface): 0.3.1 (#32177) 2025-07-22 13:01:34 -04:00
Mason Daugherty
58768d8aef release(xai): 0.2.5 (#32174) 2025-07-22 13:01:26 -04:00
Mason Daugherty
d65da13299 docs(ollama): add validate_model_on_init note, bump lock (#32172) 2025-07-22 10:58:45 -04:00
diego-coder
8e4396bb32 fix(ollama): robustly parse single-quoted JSON in tool calls (#32109)
**Description:**
This PR makes argument parsing for Ollama tool calls more robust. Some
models served via Ollama may return arguments as Python-style
dictionaries with single quotes (e.g., `{'a': 1}`), which are not valid
JSON and previously caused parsing to fail.
The updated `_parse_json_string` method in
`langchain_ollama.chat_models` now attempts standard JSON parsing and,
if that fails, falls back to `ast.literal_eval` for safe evaluation of
Python-style dictionaries. This improves interoperability with LLMs and
fixes a common usability issue for tool-based agents.
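
A simplified sketch of the fallback logic (the actual
`_parse_json_string` in `langchain_ollama.chat_models` takes extra
arguments for error handling; the standalone function name here is
illustrative):

```python
import ast
import json
from typing import Any


def parse_tool_arguments(raw: str) -> Any:
    """Try strict JSON first, then fall back to ast.literal_eval
    for Python-style dicts like {'a': 1}."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # literal_eval safely evaluates Python literals without executing code
        return ast.literal_eval(raw)
```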

**Issue:**
Closes #30910

**Dependencies:**
None

**Tests:**
- Added new unit tests for double-quoted JSON, single-quoted dicts,
mixed quoting, and malformed/failure cases.
- All tests pass locally, including new coverage for single-quoted
inputs.

**Notes:**
- No breaking changes.
- No new dependencies introduced.
- Code is formatted and linted (`ruff format`, `ruff check`).
- If maintainers have suggestions for further improvements, I’m happy to
revise!

Thank you for maintaining LangChain! Looking forward to your feedback.
2025-07-21 12:11:22 -04:00
ccurme
2ef9465893 fix(anthropic): fix test (#32145) 2025-07-21 14:49:40 +00:00
ccurme
cc076ed891 fix(huggingface): update model used in standard tests (#32116) 2025-07-20 01:50:31 +00:00
Mason Daugherty
491f63ca82 release(ollama): release 0.3.5 (#32076) 2025-07-16 18:45:32 -04:00
Mason Daugherty
587c213760 bump lock 2025-07-16 18:44:56 -04:00
Copilot
98c3bbbaf0 fix(ollama): num_gpu parameter not working in async OllamaEmbeddings method (#32074)
The `num_gpu` parameter in `OllamaEmbeddings` was not being passed to
the Ollama client in the async embedding method, causing GPU
acceleration settings to be ignored when using async operations.

## Problem

The issue was in the `aembed_documents` method where the `options`
parameter (containing `num_gpu` and other configuration) was missing:

```python
# Sync method (working correctly)
return self._client.embed(
    self.model, texts, options=self._default_params, keep_alive=self.keep_alive
)["embeddings"]

# Async method (missing options parameter)
return (
    await self._async_client.embed(
        self.model, texts, keep_alive=self.keep_alive  # No options!
    )
)["embeddings"]
```

This meant that when users specified `num_gpu=4` (or any other GPU
configuration), it would work with sync calls but be ignored with async
calls.

## Solution

Added the missing `options=self._default_params` parameter to the async
embed call to match the sync version:

```python
# Fixed async method
return (
    await self._async_client.embed(
        self.model,
        texts,
        options=self._default_params,  # Now includes num_gpu!
        keep_alive=self.keep_alive,
    )
)["embeddings"]
```
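
For illustration, a minimal async usage sketch (assuming a locally
pulled `nomic-embed-text` model; model name is illustrative):

```python
import asyncio

from langchain_ollama import OllamaEmbeddings


async def main() -> None:
    # num_gpu (along with the other options) is now forwarded in the async path too
    embeddings = OllamaEmbeddings(model="nomic-embed-text", num_gpu=4)
    vectors = await embeddings.aembed_documents(["hello", "world"])
    print(len(vectors), len(vectors[0]))


asyncio.run(main())
```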

## Validation

- Added unit test to verify options are correctly passed in both sync
and async methods
- All existing tests continue to pass
- Manual testing confirms `num_gpu` parameter now works correctly
- Code passes linting and formatting checks

The fix ensures that GPU configuration works consistently across both
synchronous and asynchronous embedding operations.

Fixes #32059.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-16 18:42:52 -04:00
Inácio Nery
ea8f2a05ba feat(perplexity): expose search_results in chat model (#31468)
Description
The Perplexity chat model already returns a `search_results` field, but
LangChain dropped it when mapping Perplexity responses to
`additional_kwargs`.
This patch adds `"search_results"` to the allowed attribute lists in
both `_stream` and `_generate`, so downstream code can access it just
like `images`, `citations`, or `related_questions`.
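
For illustration, a minimal usage sketch (model name assumed; access
pattern follows the description above):

```python
from langchain_perplexity import ChatPerplexity

llm = ChatPerplexity(model="sonar")
msg = llm.invoke("What is LangChain?")
# search_results now survives the mapping into additional_kwargs
for result in msg.additional_kwargs.get("search_results", []):
    print(result)
```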

Dependencies
None. The change is purely internal; no new imports or optional
dependencies required.


https://community.perplexity.ai/t/new-feature-search-results-field-with-richer-metadata/398

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-16 15:16:35 -04:00
nikk0o046
b1c7de98f5 fix(deepseek): convert tool output arrays to strings (#31913)
## Description
When ChatDeepSeek invokes a tool that returns a list, it results in an
openai.UnprocessableEntityError due to a failure in deserializing the
JSON body.

The root of the problem is that ChatDeepSeek uses BaseChatOpenAI
internally, but the APIs are not identical: OpenAI's v1/chat/completions
accepts arrays as tool results, but the DeepSeek API does not.

As a solution, I added a `_get_request_payload` override to ChatDeepSeek
that inherits the behavior from BaseChatOpenAI but adds a step to
stringify tool message content when the content is an array. I also
added a unit test for this.
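
A simplified sketch of such an override (class name is hypothetical; the
real method also threads `stop` and other kwargs through explicitly, and
the exact serialization call is an implementation detail):

```python
import json
from typing import Any

from langchain_openai.chat_models.base import BaseChatOpenAI


class PatchedChatDeepSeek(BaseChatOpenAI):
    """Illustrative subclass; the real ChatDeepSeek defines much more."""

    def _get_request_payload(self, input_: Any, **kwargs: Any) -> dict:
        payload = super()._get_request_payload(input_, **kwargs)
        for message in payload.get("messages", []):
            # The DeepSeek API rejects array content on tool messages,
            # so serialize it to a string before sending.
            if message.get("role") == "tool" and isinstance(
                message.get("content"), list
            ):
                message["content"] = json.dumps(message["content"])
        return payload
```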

The linked issue contains the full reproducible example provided by the
reporter; after these changes it works as expected.

Source: [Deepseek
docs](https://api-docs.deepseek.com/api/create-chat-completion/)


![image](https://github.com/user-attachments/assets/a59ed3e7-6444-46d1-9dcf-97e40e4e8952)

Source: [OpenAI
docs](https://platform.openai.com/docs/api-reference/chat/create)


![image](https://github.com/user-attachments/assets/728f4fc6-e1a3-4897-b39f-6f1ade07d3dc)


## Issue
Fixes #31394

## Dependencies:
No new dependencies.

## Twitter handle:
Don't have one.

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 12:19:44 -04:00
Mason Daugherty
3b9dd1eba0 docs(groq): cleanup (#32043) 2025-07-15 10:37:37 -04:00
ccurme
4b89483fe6 release(openai): 0.3.28 (#32015) 2025-07-14 10:38:11 +00:00
ccurme
de13f6ae4f fix(openai): support acknowledged safety checks in computer use (#31984) 2025-07-14 07:33:37 -03:00
Mason Daugherty
56d6d69ce9 release(groq): 0.3.6 (#31975) 2025-07-11 10:55:26 -04:00
ccurme
612ccf847a chore: [openai] bump sdk (#31958) 2025-07-10 15:53:41 -04:00
Mason Daugherty
6594eb8cc1 docs(xai): update for Grok 4 (#31953) 2025-07-10 11:06:37 -04:00
Mason Daugherty
8064d3bdc4 ollama: bump core (#31929) 2025-07-08 16:53:18 -04:00
Mason Daugherty
791c0e2e8f ollama: release 0.3.4 (#31928) 2025-07-08 16:44:36 -04:00
Mason Daugherty
0002b1dafa ollama[patch]: fix model validation, ensure per-call reasoning can be set, tests (#31927)
* update model validation due to change in [Ollama
client](https://github.com/ollama/ollama) - ensure you are running the
latest version (0.9.6) to use `validate_model_on_init`
* add code example and fix formatting for ChatOllama reasoning
* ensure that setting `reasoning` in invocation kwargs overrides the
class-level setting (see the sketch below)
* tests
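
A minimal sketch of the invocation-level override (assuming a local
reasoning-capable model; the model name is illustrative):

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(model="deepseek-r1:8b", reasoning=False)
# The per-call kwarg overrides the class-level reasoning=False
msg = llm.invoke("Why is the sky blue?", reasoning=True)
print(msg.additional_kwargs.get("reasoning_content"))
```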
2025-07-08 16:39:41 -04:00
Mason Daugherty
f33a25773e chroma[nit]: descriptions in pyproject.toml (#31925) 2025-07-08 13:52:12 -04:00
Mason Daugherty
1f829aacf4 ollama[patch]: ruff fixes and rules (#31924)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 13:42:19 -04:00
Mason Daugherty
4d9eefecab fix: bump lockfiles (#31923)
* bump lockfiles after upgrading ruff
* resolve resulting linting fixes
2025-07-08 13:27:55 -04:00
Mason Daugherty
e7f1ceee67 nomic[patch]: ruff fixes and rules (#31922)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
* fix makefile targets
2025-07-08 13:25:18 -04:00
Mason Daugherty
71b361936d ruff: restore stacklevels, disable autofixing (#31919) 2025-07-08 12:55:47 -04:00
Mason Daugherty
cbb418b4bf mistralai[patch]: ruff fixes and rules (#31918)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 12:44:42 -04:00
Mason Daugherty
ae210c1590 ruff: add bugbear across packages (#31917)
WIP; the remaining packages will be covered in follow-up PRs
2025-07-08 12:22:55 -04:00
Mason Daugherty
dd76209bbd groq[patch]: ruff fixes and rules (#31904)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 10:25:46 -04:00
Mason Daugherty
750721b4c3 huggingface[patch]: ruff fixes and rules (#31912)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 10:07:57 -04:00
Mason Daugherty
06ab2972e3 fireworks[patch]: ruff fixes and rules (#31903)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 02:14:59 +00:00
Mason Daugherty
63e3f2dea6 exa[patch]: ruff fixes and rules (#31902)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 02:02:42 +00:00
Mason Daugherty
231e8d0f43 deepseek[patch]: ruff fixes and rules (#31901)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-07 21:54:44 -04:00
Mason Daugherty
38bd1abb8c chroma[patch]: ruff fixes and rules (#31900)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-07 21:45:19 -04:00
Mason Daugherty
2a7645300c anthropic[patch]: ruff fixes and rules (#31899)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-07 18:32:27 -04:00
Mason Daugherty
e7eac27241 ruff: more rules across the board & fixes (#31898)
* standardizes ruff dep version across all `pyproject.toml` files
* cli: ruff rules and corrections
* langchain: rules and corrections
2025-07-07 17:48:01 -04:00
Mason Daugherty
706a66eccd fix: automatically fix issues with ruff (#31897)
* Perform safe automatic fixes instead of only selecting
[isort](https://docs.astral.sh/ruff/rules/#isort-i)
2025-07-07 14:13:10 -04:00
Mason Daugherty
e686a70ee0 ollama: thinking, tool streaming, docs, tests (#31772)
* New `reasoning` (bool) param to support toggling [Ollama
thinking](https://ollama.com/blog/thinking) (#31573, #31700). If
`reasoning=True`, Ollama's `thinking` content will be placed in the
model responses' `additional_kwargs.reasoning_content` (see the
streaming sketch below).
  * Supported by:
    * ChatOllama (class level, invocation level TODO)
    * OllamaLLM (TODO)
* Added tests to ensure streaming tool calls is successful (#29129)
* Refactored tests that relied on `extract_reasoning()`
* Myriad docs additions and consistency/typo fixes
* Improved type safety in some spots

Closes #29129
Addresses #31573 and #31700
Supersedes #31701
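
A brief streaming sketch of where the thinking content lands (model
name illustrative):

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(model="deepseek-r1:8b", reasoning=True)
for chunk in llm.stream("Why is the sky blue?"):
    # Thinking tokens arrive in additional_kwargs rather than in content
    print(chunk.additional_kwargs.get("reasoning_content", ""), end="", flush=True)
```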
2025-07-07 13:56:41 -04:00
ccurme
0eb10f31c1 mistralai: release 0.2.11 (#31896) 2025-07-07 17:47:36 +00:00
m27315
013ce2c47f huggingface: fix HuggingFaceEndpoint._astream() got multiple values for argument 'stop' (#31385) 2025-07-06 15:18:53 +00:00
ccurme
3f4b355eef anthropic[patch]: pass back in citations in multi-turn conversations (#31882)
Also adds VCR cassettes for some heavy tests.
2025-07-05 17:33:22 -04:00
ccurme
ade642b7c5 Revert "infra: temporarily skip tests" (#31854)
Reverts langchain-ai/langchain#31853
2025-07-03 13:55:29 -04:00
ccurme
c9f45dc323 infra: temporarily skip tests (#31853)
Tests failed twice with different timeout errors.
2025-07-03 13:39:14 -04:00
ccurme
f88fff0b8a anthropic: release 0.3.17 (#31852) 2025-07-03 13:18:43 -04:00
Mason Daugherty
572020c4d8 ollama: add validate_model_on_init, catch more errors (#31784)
* Ensure the local model is accessible during `ChatOllama`
instantiation (#27720). This adds a new param `validate_model_on_init`
(default: `true`); see the sketch below
* Catch a few more errors from the Ollama client to assist users
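
A minimal sketch of the fail-fast behavior (model name illustrative):

```python
from langchain_ollama import ChatOllama

# With validation enabled, instantiation raises immediately if the model
# hasn't been pulled locally, instead of failing later at invoke time.
llm = ChatOllama(model="llama3.1", validate_model_on_init=True)
```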
2025-07-03 11:07:11 -04:00
Mason Daugherty
1a3a8db3c9 docs: anthropic formatting cleanup (#31847)
inline URLs, capitalization, code blocks
2025-07-03 14:50:23 +00:00
Mason Daugherty
911b0b69ea groq: Add service tier option to ChatGroq (#31801)
- Allows users to select a [flex
processing](https://console.groq.com/docs/flex-processing) service tier
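
A hedged usage sketch (model name illustrative; tier values follow
Groq's flex-processing docs):

```python
from langchain_groq import ChatGroq

# service_tier="flex" opts into Groq's flex processing tier
llm = ChatGroq(model="llama-3.3-70b-versatile", service_tier="flex")
print(llm.invoke("Hello!").content)
```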
2025-07-03 10:11:18 -04:00
Mason Daugherty
eb12294583 langchain-xai[patch]: Add ruff bandit rules to linter (#31816)
- Add ruff bandit rules
- Some formatting
2025-07-01 18:59:06 +00:00
Mason Daugherty
86a698d1b6 langchain-qdrant[patch]: Add ruff bandit rules to linter (#31815)
- Add ruff bandit rules
- Address a few s101s
- Some formatting
2025-07-01 18:42:55 +00:00
Mason Daugherty
b03e326231 langchain-prompty[patch]: Add ruff bandit rules to linter (#31814)
- Add ruff bandit rules
- Address some s101 assertion warnings
- Address s506 by using `yaml.safe_load()`
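
For context on the s506 fix, a one-line sketch of the safer call:

```python
import yaml

# yaml.safe_load only constructs plain Python objects, unlike yaml.load,
# which can instantiate arbitrary types from tagged nodes
config = yaml.safe_load("name: example\nversion: 1")
print(config)
```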
2025-07-01 18:32:02 +00:00