Commit Graph

7545 Commits

Author SHA1 Message Date
ccurme
8c702397cb
fix(langchain): warn on unsupported models and add test (#32413) 2025-08-05 14:34:19 -04:00
ccurme
56ee00cb1d
fix(core): rename output_version to message_version (#32412) 2025-08-05 14:23:58 -04:00
ccurme
757bae0263
feat(langchain): support v1 chat models in init_chat_model (#32410) 2025-08-05 13:50:19 -04:00
ccurme
4559997e35
refactor(openai): move chat model to v1 namespace (#32407) 2025-08-05 13:23:29 -04:00
Mason Daugherty
e393f512fc
update README.md 2025-08-05 11:05:26 -04:00
Mason Daugherty
b94fb8c086
update README_V1.md 2025-08-05 11:03:28 -04:00
Mason Daugherty
485b0b36ab
more dumps() tests 2025-08-05 10:50:07 -04:00
Mason Daugherty
eab89965de
add type hint 2025-08-05 10:49:53 -04:00
Mason Daugherty
c9c4776fb7
linting fix? 2025-08-05 10:49:46 -04:00
Mason Daugherty
551663d0b7
namespace refactor 2025-08-05 10:28:07 -04:00
Mason Daugherty
4651457c7e
Merge remote-tracking branch 'origin/standard_outputs_copy' into mdrxy/ollama_v1 2025-08-05 09:56:17 -04:00
Mason Daugherty
c709f85c27
snapshots 2025-08-05 09:55:13 -04:00
ccurme
71f0138885
refactor(core): rename BaseChatModelV1 -> BaseChatModel in v1 namespace (#32404) 2025-08-05 09:35:33 -04:00
ccurme
c36b123c8c
fix(core): refactor new types into top-level v1 namespace (#32403) 2025-08-05 09:21:31 -04:00
ccurme
deae8cc164
feat(core): support returning v1 ToolMessage in tools (#32397) 2025-08-05 08:50:02 -04:00
Mason Daugherty
c7c47f81c4
add type hints to messages 2025-08-05 00:04:30 -04:00
Mason Daugherty
5c9ce7fd2b
remove outdated test 2025-08-04 23:47:17 -04:00
Mason Daugherty
f3c863447f
fix: core imports tests 2025-08-04 23:24:47 -04:00
Mason Daugherty
f67456f90f
capture response context 2025-08-04 23:08:47 -04:00
Mason Daugherty
2eca8240e2
final(?) unit tests 2025-08-04 17:57:45 -04:00
Mason Daugherty
c9a38be6df
add ollama unit test props 2025-08-04 17:53:17 -04:00
Mason Daugherty
308e734c78
expand unit tests v1 2025-08-04 17:44:14 -04:00
Mason Daugherty
16729de369
add some notes 2025-08-04 17:38:02 -04:00
Mason Daugherty
db41bfe1f8
more tests 2025-08-04 17:31:47 -04:00
Mason Daugherty
c91d681c83
update comment 2025-08-04 17:23:18 -04:00
Mason Daugherty
06f240b328
add test 2025-08-04 16:21:18 -04:00
Mason Daugherty
a3799b2caf
docstring for test__parse_arguments_from_tool_call 2025-08-04 16:18:51 -04:00
Mason Daugherty
38a25bfefc
add note about test_arbitrary_roles_accepted_in_chatmessages for v1 2025-08-04 16:18:40 -04:00
Mason Daugherty
63ade97a5f
Merge branch 'mdrxy/ollama_v1' of github.com:langchain-ai/langchain into mdrxy/ollama_v1 2025-08-04 16:06:51 -04:00
Mason Daugherty
3c3158d379
Merge branch 'standard_outputs_copy' into mdrxy/ollama_v1 2025-08-04 16:06:05 -04:00
Mason Daugherty
5b60e9362e
continue to match v0 2025-08-04 16:02:39 -04:00
Mason Daugherty
99687ce626
rename 2025-08-04 15:54:11 -04:00
Mason Daugherty
2290984cfa
refactoring 2025-08-04 15:54:07 -04:00
Mason Daugherty
2f8470d7f2
formatting 2025-08-04 15:53:32 -04:00
Mason Daugherty
7b89cabd0b
ollama: update _convert_content_blocks_to_ollama_format for robustness 2025-08-04 15:52:25 -04:00
Mason Daugherty
5b7df921df
add reasoning guard; make mime type not required alongside base64 2025-08-04 15:52:06 -04:00
Mason Daugherty
7bc4512ab7
add type guards to export 2025-08-04 15:51:42 -04:00
ccurme
b06dc7954e
fix(openai): use extras for v1 messages (#32394) 2025-08-04 14:49:20 -04:00
Mason Daugherty
3f11b041df
docs nits 2025-08-04 14:46:10 -04:00
Mason Daugherty
ade2155aee
fixes 2025-08-04 14:33:02 -04:00
Mason Daugherty
9c4e6124b6
updates 2025-08-04 14:18:43 -04:00
Mason Daugherty
08cb07e1a6
nit 2025-08-04 14:01:21 -04:00
Mason Daugherty
332215ed41
nit 2025-08-04 13:44:31 -04:00
Mason Daugherty
d4ac8ff5f7
lint 2025-08-04 13:14:04 -04:00
Mason Daugherty
719c9dfaaa
fix: handle TypeGuard import for compatibility with older Python versions 2025-08-04 13:06:42 -04:00
Mason Daugherty
a3929c71e4
Merge branch 'standard_outputs_copy' of github.com:langchain-ai/langchain into mdrxy/ollama_v1 2025-08-04 12:57:55 -04:00
Mason Daugherty
cc56b8dbd3
Merge branch 'standard_outputs_copy' into mdrxy/ollama_v1 + updates 2025-08-04 12:57:38 -04:00
Chester Curme
bbbc11edf9
Merge branch 'wip-v0.4' into standard_outputs_copy 2025-08-04 11:55:54 -04:00
ccurme
2a268f1e24
fix(openai): use empty list in v1 messages instead of empty string for chat completions tool calls (#32392) 2025-08-04 11:54:03 -04:00
Mason Daugherty
6de1535785
fix: remove type ignores 2025-08-04 11:36:50 -04:00
ccurme
ff3153c04d
feat(core): move tool call chunks to content (v1) (#32358) 2025-08-04 11:32:11 -04:00
Mason Daugherty
8d8a61ab3b
fix: create_plaintext_block 2025-08-04 11:31:29 -04:00
Mason Daugherty
a6686d7c4f
type guards to remove casting 2025-08-04 11:29:36 -04:00
Mason Daugherty
822dd5075c
del impl plan 2025-08-04 11:15:40 -04:00
Narasimha Badrinath
dd9f5d7cde
feat(docs): add langchain-gradientai as provider (#32202)
langchain-gradientai is DigitalOcean's integration with LangChain. It
helps users build LangChain applications on DigitalOcean's
GradientAI platform.

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-04 14:57:59 +00:00
Mason Daugherty
82492d6d89
export v1 tests 2025-08-01 12:53:37 -04:00
Mason Daugherty
eae4d1db43
integration tests 2025-08-01 12:47:08 -04:00
Mason Daugherty
6820723177
update dep in lock 2025-07-31 22:53:36 -04:00
Mason Daugherty
bc5c6751dc
fix test 2025-07-31 17:37:42 -04:00
Mason Daugherty
05d075a37f
messages.v1 mypy fixes 2025-07-31 17:32:59 -04:00
Mason Daugherty
7a0c3e0482
fix: update snapshots 2025-07-31 17:27:37 -04:00
Mason Daugherty
1b2a677eed
first pass (w/o integration) 2025-07-31 17:24:19 -04:00
Mason Daugherty
02d02ccf1b
feat: enhance serialization for v1 message classes in dump and load modules 2025-07-31 17:08:39 -04:00
Mason Daugherty
e1b8ae2e6c
content_blocks: add missing blocks to aliases and formatting 2025-07-31 17:08:10 -04:00
Mason Daugherty
2cb48b685f
messages v1: some nits, and use create_text_block() 2025-07-31 17:07:42 -04:00
Mason Daugherty
45533fc875
fix: correct syrupy import 2025-07-31 13:16:55 -04:00
Mason Daugherty
e9bb40f221
nit: lint standard-tests files 2025-07-31 13:15:39 -04:00
Mason Daugherty
4f4e831a44
Merge branch 'standard_outputs_copy' of github.com:langchain-ai/langchain into mdrxy/ollama_v1 2025-07-31 12:23:29 -04:00
Mason Daugherty
525fa453be
fix: revert pydantic bump (#32355) 2025-07-31 12:22:23 -04:00
Mason Daugherty
48b02fead9
Merge branch 'standard_outputs_copy' of github.com:langchain-ai/langchain into mdrxy/ollama_v1 2025-07-31 11:21:52 -04:00
Mason Daugherty
c88adfad70
fix: update snapshots 2025-07-31 11:21:40 -04:00
Mason Daugherty
1a04543791
Merge branch 'standard_outputs_copy' of github.com:langchain-ai/langchain into mdrxy/ollama_v1 2025-07-31 11:19:18 -04:00
Mason Daugherty
7f1e98c7fb
fix: add index to InvalidToolCall 2025-07-31 11:19:01 -04:00
Mason Daugherty
9a4d878b2d
bump pydantic 2025-07-31 11:16:02 -04:00
Mason Daugherty
3dcfc6c389
Merge branch 'standard_outputs_copy' of github.com:langchain-ai/langchain into mdrxy/ollama_v1 2025-07-31 11:15:27 -04:00
Mason Daugherty
44bd6fe837
feat(core): content block factories + ids + docs + tests (#32316)
## Benefits

1. **Type Safety**: Compile-time validation of required fields and
proper type setting
2. **Less Boilerplate**: No need to manually set the `type` field or
generate IDs
3. **Input Validation**: Runtime validation prevents common errors
(e.g., base64 without MIME type)
4. **Consistent Patterns**: Standardized creation patterns across all
block types
5. **Better Developer Experience**: Cleaner, more intuitive API than
manual TypedDict construction. Also follows other established patterns
(e.g. `create_react_agent`, `init_chat_model`)
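As a rough illustration of the factory pattern described above (a hypothetical minimal version; the real factories in langchain-core may differ in names, signatures, and fields):

```python
import uuid
from typing import Optional, TypedDict

# Illustrative stand-in for a content block factory; not the actual
# langchain-core implementation.
class TextContentBlock(TypedDict):
    type: str
    id: str
    text: str

def create_text_block(text: str, *, id: Optional[str] = None) -> TextContentBlock:
    """Build a text content block: sets `type` and generates an id automatically."""
    return TextContentBlock(
        type="text",                        # No manual `type` bookkeeping.
        id=id or f"lc_{uuid.uuid4().hex}",  # ID generated when omitted.
        text=text,
    )

block = create_text_block("Hello, world!")
```

Compared with hand-writing the TypedDict literal, the factory centralizes the `type` tag and id generation, which is where the boilerplate and validation benefits listed above come from.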
2025-07-31 11:12:00 -04:00
ccurme
740d9d3e7e
fix(core): fix tracing for new message types in case of multiple messages (#32352) 2025-07-31 10:47:23 -04:00
Mason Daugherty
588fe46601
Merge branch 'standard_outputs_copy' into mdrxy/ollama_v1 2025-07-30 17:41:16 -04:00
ccurme
642262f6fe
feat(core): widen input type for output parsers (#32332) 2025-07-30 16:52:34 -04:00
Mason Daugherty
7a902c1bf5
Merge branch 'standard_outputs_copy' of github.com:langchain-ai/langchain into mdrxy/ollama_v1 2025-07-30 16:29:14 -04:00
ccurme
3afcd6132e
fix(openai): fix test on standard outputs branch (#32329) 2025-07-30 13:33:42 -04:00
Chester Curme
a0abb79f6d
Merge branch 'wip-v0.4' into standard_outputs_copy 2025-07-30 13:17:08 -04:00
ccurme
309d1a232a
fix(openai): fix tracing and typing on standard outputs branch (#32326) 2025-07-30 13:02:15 -04:00
ccurme
a9e52ca605
chore(openai): bump openai sdk (#32322) 2025-07-30 10:58:18 -04:00
Mason Daugherty
502dba4af8
tweaks 2025-07-29 15:43:32 -04:00
ccurme
8cf97e838c
fix(core): lint standard outputs branch (#32311) 2025-07-29 15:38:45 -04:00
Mason Daugherty
27347cdcf5
move file 2025-07-29 14:59:33 -04:00
Mason Daugherty
589ee059f2
updates 2025-07-29 14:57:52 -04:00
Mason Daugherty
80971b69d0
implement 2025-07-29 14:34:22 -04:00
Mason Daugherty
fee695ce6d
tests updates before implementation 2025-07-29 14:27:39 -04:00
Mason Daugherty
735f264654
bump lock 2025-07-29 13:57:17 -04:00
Mason Daugherty
fbd5a238d8
fix(core): revert "fix: tool call streaming bug with inconsistent indices from Qwen3" (#32307)
Reverts langchain-ai/langchain#32160

Original issue stems from using `ChatOpenAI` to interact with a `qwen`
model. Recommended to use
[langchain-qwq](https://python.langchain.com/docs/integrations/chat/qwq/)
which is built for Qwen
2025-07-29 10:26:38 -04:00
Chester Curme
9507d0f21c ? 2025-07-29 09:12:11 -04:00
Mason Daugherty
0e287763cd
fix: lint 2025-07-28 18:49:43 -04:00
ccurme
c15e55b33c
feat(openai): v1 message format support (#32296) 2025-07-28 18:42:26 -04:00
Copilot
0b56c1bc4b
fix: tool call streaming bug with inconsistent indices from Qwen3 (#32160)
Fixes a streaming bug where models like Qwen3 (using OpenAI interface)
send tool call chunks with inconsistent indices, resulting in
duplicate/erroneous tool calls instead of a single merged tool call.

## Problem

When Qwen3 streams tool calls, it sends chunks with inconsistent `index`
values:
- First chunk: `index=1` with tool name and partial arguments  
- Subsequent chunks: `index=0` with `name=None`, `id=None` and argument
continuation

The existing `merge_lists` function only merges chunks when their
`index` values match exactly, causing these logically related chunks to
remain separate, resulting in multiple incomplete tool calls instead of
one complete tool call.

```python
# Before fix: Results in 1 valid + 1 invalid tool call
chunk1 = AIMessageChunk(tool_call_chunks=[
    {"name": "search", "args": '{"query":', "id": "call_123", "index": 1}
])
chunk2 = AIMessageChunk(tool_call_chunks=[
    {"name": None, "args": ' "test"}', "id": None, "index": 0}  
])
merged = chunk1 + chunk2  # Creates 2 separate tool calls

# After fix: Results in 1 complete tool call
merged = chunk1 + chunk2  # Creates 1 merged tool call: search({"query": "test"})
```

## Solution

Enhanced the `merge_lists` function in `langchain_core/utils/_merge.py`
with intelligent tool call chunk merging:

1. **Preserves existing behavior**: Same-index chunks still merge as
before
2. **Adds special handling**: Tool call chunks with
`name=None`/`id=None` that don't match any existing index are now merged
with the most recent complete tool call chunk
3. **Maintains backward compatibility**: All existing functionality
works unchanged
4. **Targeted fix**: Only affects tool call chunks, doesn't change
behavior for other list items

The fix specifically handles the pattern where:
- A continuation chunk has `name=None` and `id=None` (indicating it's
part of an ongoing tool call)
- No matching index is found in existing chunks
- There exists a recent tool call chunk with a valid name or ID to merge
with
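That heuristic can be sketched roughly as follows (a simplified stand-alone sketch over plain dicts; the actual implementation in `langchain_core/utils/_merge.py` handles more cases):

```python
# Simplified sketch of the continuation-merging heuristic; hypothetical
# helper, not the real merge_lists implementation.
def merge_tool_call_chunks(chunks: list[dict]) -> list[dict]:
    merged: list[dict] = []
    for chunk in chunks:
        # Preserved behavior: chunks with a matching index merge as before.
        match = next(
            (m for m in merged if m.get("index") == chunk.get("index")), None
        )
        if match is not None:
            match["args"] = (match.get("args") or "") + (chunk.get("args") or "")
            continue
        # New behavior: a continuation chunk (name=None, id=None) whose index
        # matches nothing is folded into the most recent tool call chunk.
        if chunk.get("name") is None and chunk.get("id") is None and merged:
            merged[-1]["args"] = (merged[-1].get("args") or "") + (
                chunk.get("args") or ""
            )
        else:
            merged.append(dict(chunk))
    return merged

chunk1 = {"name": "search", "args": '{"query":', "id": "call_123", "index": 1}
chunk2 = {"name": None, "args": ' "test"}', "id": None, "index": 0}
result = merge_tool_call_chunks([chunk1, chunk2])
# A single merged tool call with args '{"query": "test"}'
```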

## Testing

Added comprehensive test coverage including:
- Qwen3-style chunks with different indices now merge correctly
- Existing same-index behavior preserved
- Multiple distinct tool calls remain separate
- Edge cases handled (empty chunks, orphaned continuations)
- Backward compatibility maintained

Fixes #31511.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-28 22:31:41 +00:00
Copilot
ad88e5aaec
fix(core): resolve cache validation error by safely converting Generation to ChatGeneration objects (#32156)
## Problem

ChatLiteLLM encounters a `ValidationError` when using cache on
subsequent calls, causing the following error:

```
ValidationError(model='ChatResult', errors=[{'loc': ('generations', 0, 'type'), 'msg': "unexpected value; permitted: 'ChatGeneration'", 'type': 'value_error.const', 'ctx': {'given': 'Generation', 'permitted': ('ChatGeneration',)}}])
```

This occurs because:
1. The cache stores `Generation` objects (with `type="Generation"`)
2. But `ChatResult` expects `ChatGeneration` objects (with
`type="ChatGeneration"` and a required `message` field)
3. When cached values are retrieved, validation fails due to the type
mismatch

## Solution

Added graceful handling in both sync (`_generate_with_cache`) and async
(`_agenerate_with_cache`) cache methods to:

1. **Detect** when cached values contain `Generation` objects instead of
expected `ChatGeneration` objects
2. **Convert** them to `ChatGeneration` objects by wrapping the text
content in an `AIMessage`
3. **Preserve** all original metadata (`generation_info`)
4. **Allow** `ChatResult` creation to succeed without validation errors
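Sketched with minimal stand-in classes (illustrative placeholders, not the real langchain-core types), the conversion looks roughly like:

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in classes mirroring the relevant shape of the real langchain-core
# output types; this is an illustrative sketch, not the actual fix.
@dataclass
class Generation:
    text: str
    generation_info: Optional[dict] = None

@dataclass
class ChatGeneration(Generation):
    message: Optional[dict] = None  # The real type wraps an AIMessage.

def ensure_chat_generation(gen: Generation) -> ChatGeneration:
    """Convert a cached Generation into a ChatGeneration, preserving metadata."""
    if isinstance(gen, ChatGeneration):
        return gen  # Already the expected type; pass through unchanged.
    return ChatGeneration(
        text=gen.text,
        generation_info=gen.generation_info,       # Metadata survives conversion.
        message={"role": "ai", "content": gen.text},  # Wrap text as an AI message.
    )

cached = Generation(text="hello", generation_info={"finish_reason": "stop"})
fixed = ensure_chat_generation(cached)
```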

## Example

```python
# Before: This would fail with ValidationError
from langchain_community.chat_models import ChatLiteLLM
from langchain_community.cache import SQLiteCache
from langchain.globals import set_llm_cache

set_llm_cache(SQLiteCache(database_path="cache.db"))
llm = ChatLiteLLM(model_name="openai/gpt-4o", cache=True, temperature=0)

print(llm.predict("test"))  # Works fine (cache empty)
print(llm.predict("test"))  # Now works instead of ValidationError

# After: Seamlessly handles both Generation and ChatGeneration objects
```

## Changes

- **`libs/core/langchain_core/language_models/chat_models.py`**:
  - Added `Generation` import from `langchain_core.outputs`
  - Enhanced cache retrieval logic in `_generate_with_cache` and
    `_agenerate_with_cache` methods
  - Added conversion from `Generation` to `ChatGeneration` objects when
    needed

- **`libs/core/tests/unit_tests/language_models/chat_models/test_cache.py`**:
  - Added test case to validate the conversion logic handles mixed object
    types

## Impact

- **Backward Compatible**: Existing code continues to work unchanged
- **Minimal Change**: Only affects cache retrieval path, no API changes
- **Robust**: Handles both legacy cached `Generation` objects and new
`ChatGeneration` objects
- **Preserves Data**: All original content and metadata is maintained
during conversion

Fixes #22389.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-28 22:28:16 +00:00
Mason Daugherty
b7e4797e8b
release(anthropic): 0.3.18 (#32292) 2025-07-28 17:07:11 -04:00
Mason Daugherty
3a487bf720
refactor(anthropic): AnthropicLLM to use Messages API (#32290)
re: #32189
2025-07-28 16:22:58 -04:00
Mason Daugherty
8db16b5633
fix: use new Google model names in examples (#32288) 2025-07-28 19:03:42 +00:00