Commit Graph

13898 Commits

Author SHA1 Message Date
Chester Curme
2b51bfe080 add ChatOpenAIV1 to init 2025-07-28 18:23:17 -04:00
Chester Curme
929e9a56e9 fix issue with streaming tool calls 2025-07-28 18:22:54 -04:00
Chester Curme
e3febdd2ef add InvalidToolCall to content block types 2025-07-28 18:22:28 -04:00
Chester Curme
c26f2447eb x 2025-07-28 18:22:11 -04:00
Chester Curme
5d7fbedb21 make deprecated methods not abstract 2025-07-28 18:21:14 -04:00
Chester Curme
abaf0c5828 revert changes to v0 BaseChatModel 2025-07-28 18:15:23 -04:00
Chester Curme
61e329637b lint 2025-07-28 11:02:37 -04:00
Chester Curme
b8fed06409 move get_num_tokens_from_messages to BaseChatModel and BaseChatModelV1 2025-07-28 10:58:57 -04:00
Chester Curme
c409f723a2 Merge branch 'standard_outputs' into cc/openai_v1
# Conflicts:
#	libs/core/langchain_core/messages/utils.py
2025-07-28 10:19:50 -04:00
ccurme
3d9e694f73
feat(core): start on v1 chat model (#32276)
Co-authored-by: Nuno Campos <nuno@langchain.dev>
2025-07-28 10:17:06 -04:00
Mason Daugherty
c921d08b18
feat(docs): add docstring to _convert_from_v1_message() 2025-07-25 11:01:48 -04:00
Mason Daugherty
3f653011e6
nit: use block instead of content_block for consistency in convert_to_openai_image_block() 2025-07-25 10:57:22 -04:00
Mason Daugherty
ee13a3b6fa
nit: rearrange index to be grouped with other always-present fields 2025-07-25 10:16:35 -04:00
Chester Curme
61129557c0 x 2025-07-24 17:17:33 -04:00
Chester Curme
4899857042 start on openai 2025-07-24 17:12:22 -04:00
Chester Curme
041b196145 Revert "copy BaseChatModel to language_models.v1"
This reverts commit 2d031031e3.
2025-07-24 13:33:41 -04:00
Chester Curme
dd8057a034 remove type ignores for eugene 2025-07-24 13:31:50 -04:00
Chester Curme
b94f23883f move best-effort v1 conversion 2025-07-24 13:31:27 -04:00
Chester Curme
2d031031e3 copy BaseChatModel to language_models.v1 2025-07-24 09:56:45 -04:00
Chester Curme
0bb7a823c5 x 2025-07-23 15:17:46 -04:00
Chester Curme
df0a8562a9 openai: lint 2025-07-23 13:47:24 -04:00
ccurme
e9b0b84675
feat: new message formats (v0.4) (#32208)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-07-23 13:30:21 -04:00
Chester Curme
79bc8259e5 openai: format 2025-07-23 11:52:50 -04:00
Chester Curme
7c0d1cb324 openai: fix lint and tests 2025-07-23 09:53:46 -04:00
Chester Curme
eb8d32aff2 output_version -> str 2025-07-23 09:38:01 -04:00
Chester Curme
78d036a093 Merge branch 'wip-v0.4' into standard_outputs 2025-07-23 09:34:20 -04:00
Christophe Bornet
3496e1739e
feat(langchain): add ruff rules PL (#32079)
See https://docs.astral.sh/ruff/rules/#pylint-pl
2025-07-22 23:55:32 -04:00
Chester Curme
6572656cd2 core: support both old and new data content blocks 2025-07-22 18:19:09 -04:00
Jacob Lee
0f39155f62
docs: Specify environment variables for BedrockConverse (#32194) 2025-07-22 17:37:47 -04:00
Chester Curme
e1f034c795 openai: support web search and code interpreter content blocks 2025-07-22 16:58:43 -04:00
ccurme
6aeda24a07
docs(chroma): update feature table (#32193)
Supports multi-tenancy.
2025-07-22 20:55:07 +00:00
Chester Curme
b1a02f971b fix tests 2025-07-22 16:45:19 -04:00
Mason Daugherty
3ed804a5f3
fix(perplexity): undo xfails (#32192) 2025-07-22 16:29:37 -04:00
Mason Daugherty
ca137bfe62
. 2025-07-22 16:25:02 -04:00
Mason Daugherty
fa487fb62d
fix(perplexity): temp xfail int tests (#32191)
It appears the API has changed since the 2025-04-15 release, leading to
failed integration tests.
2025-07-22 16:20:51 -04:00
ccurme
053fb16a05
revert: drop anthropic from core test matrix (#32190)
Reverts langchain-ai/langchain#32185
2025-07-22 20:13:02 +00:00
ccurme
3672bbc71e
fix(anthropic): update integration test models (#32189)
Multiple models were
[retired](https://docs.anthropic.com/en/docs/about-claude/model-deprecations#model-status)
yesterday.

Tests remain broken until we figure out what to do with the legacy
Anthropic LLM integration, which currently uses their (legacy) text
completions API, for which there appear to be no remaining supported
models.
2025-07-22 19:51:39 +00:00
Mason Daugherty
a02ad3d192
docs: formatting cleanup (#32188)
* formatting cleanup
* make `init_chat_model` more prominent in list of guides
2025-07-22 15:46:15 -04:00
ccurme
0c4054a7fc
release(core): 0.3.71 (#32186) 2025-07-22 15:44:36 -04:00
ccurme
75517c3ea9
chore(infra): drop anthropic from core test matrix (#32185) 2025-07-22 19:38:58 +00:00
ccurme
ebf2e11bcb
fix(core): exclude api_key from tracing metadata (#32184)
(standard param)
2025-07-22 15:32:12 -04:00
ccurme
e41e6ec6aa
release(chroma): 0.2.5 (#32183) 2025-07-22 15:24:03 -04:00
itaismith
09769373b3
feat(chroma): Add Chroma Cloud support (#32125)
* Adding support for more Chroma client options (`HttpClient` and
`CloudClient`). This includes adding the arguments necessary for
instantiating these clients.
* Adding support for Chroma's new persisted collection configuration (we
moved index configuration into this new construct).
* Delegate `Settings` configuration to Chroma's client constructors.
2025-07-22 15:14:15 -04:00
ccurme
3fc27e7a95
docs: update feature table for Chroma (#32182) 2025-07-22 18:21:17 +00:00
ccurme
8acfd677bc
fix(core): add type key when tracing in some cases (#31825) 2025-07-22 18:08:16 +00:00
Mason Daugherty
af3789b9ed
fix(deepseek): release openai version (#32181)
used sdk version instead of langchain by accident
2025-07-22 13:29:52 -04:00
Mason Daugherty
a6896794ca
release(ollama): 0.3.6 (#32180) 2025-07-22 13:24:17 -04:00
Copilot
d40fd5a3ce
feat(ollama): warn on empty load responses (#32161)
## Problem

When using `ChatOllama` with `create_react_agent`, agents would
sometimes terminate prematurely with empty responses when Ollama
returned `done_reason: 'load'` responses with no content. This caused
agents to return empty `AIMessage` objects instead of actual generated
text.

```python
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent
from langchain_core.messages import HumanMessage

llm = ChatOllama(model='qwen2.5:7b', temperature=0)
agent = create_react_agent(model=llm, tools=[])

result = agent.invoke(
    {"messages": [HumanMessage("Hello")]},
    {"configurable": {"thread_id": "1"}},
)
# Before fix: AIMessage(content='', response_metadata={'done_reason': 'load'})
# Expected: AIMessage with actual generated content
```

## Root Cause

The `_iterate_over_stream` and `_aiterate_over_stream` methods treated
any response with `done: True` as final, regardless of `done_reason`.
When Ollama returns `done_reason: 'load'` with empty content, it
indicates the model was loaded but no actual generation occurred - this
should not be considered a complete response.

## Solution

Modified the streaming logic to skip responses when:
- `done: True`
- `done_reason: 'load'` 
- Content is empty or contains only whitespace

This ensures agents only receive actual generated content while
preserving backward compatibility for load responses that do contain
content.
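The skip condition described above can be sketched as a small predicate. This is a hypothetical illustration of the logic, not the actual `langchain-ollama` source; the field names mirror Ollama's streaming response format:

```python
from typing import Any


def is_empty_load_response(part: dict[str, Any]) -> bool:
    """Return True for stream parts that only signal a model load.

    Mirrors the three conditions above: the part is final (done=True),
    its done_reason is 'load', and the message content is empty or
    whitespace-only.
    """
    content = part.get("message", {}).get("content", "")
    return (
        part.get("done") is True
        and part.get("done_reason") == "load"
        and not content.strip()
    )


# Empty load response: skipped
assert is_empty_load_response(
    {"done": True, "done_reason": "load", "message": {"content": ""}}
)
# Load response with actual content: preserved (backward compatibility)
assert not is_empty_load_response(
    {"done": True, "done_reason": "load", "message": {"content": "Hi"}}
)
# Normal stop response: unchanged
assert not is_empty_load_response(
    {"done": True, "done_reason": "stop", "message": {"content": "Hello!"}}
)
```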

## Changes

- **`_iterate_over_stream`**: Skip empty load responses instead of
yielding them
- **`_aiterate_over_stream`**: Apply same fix to async streaming
- **Tests**: Added comprehensive test cases covering all edge cases

## Testing

All scenarios now work correctly:
- Empty load responses are skipped (fixes original issue)
- Load responses with actual content are preserved (backward compatibility)
- Normal stop responses work unchanged
- Streaming behavior preserved
- `create_react_agent` integration fixed

Fixes #31482.

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-22 13:21:11 -04:00
Mason Daugherty
116b758498
fix: bump deps for release (#32179)
forgot to bump the `pyproject.toml` files
2025-07-22 13:12:14 -04:00
Mason Daugherty
10996a2821
release(perplexity): 0.1.2 (#32176) 2025-07-22 13:02:19 -04:00