Commit Graph

98 Commits

Author SHA1 Message Date
Vadym Barda
37190881d3
core[patch]: add util for approximate token counting (#30373) 2025-03-19 17:48:38 +00:00
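A hedged sketch of the utility added above, assuming it is exported as `count_tokens_approximately` from `langchain_core.messages` (the exact export path and heuristics are assumptions based on this PR's title):
```python
from langchain_core.messages import HumanMessage, count_tokens_approximately

# Character-based heuristic count; no model tokenizer required, so the
# result is approximate rather than exact.
messages = [HumanMessage("How many tokens is this, roughly?")]
print(count_tokens_approximately(messages))
```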
ccurme
cd1ea8e94d
openai[patch]: support Responses API (#30231)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-03-12 12:25:46 -04:00
ccurme
52b0570bec
core, openai, standard-tests: improve OpenAI compatibility with Anthropic content blocks (#30128)
- Support thinking blocks in core's `convert_to_openai_messages` (pass
through instead of error)
- Ignore thinking blocks in ChatOpenAI (instead of error)
- Support Anthropic-style image blocks in ChatOpenAI

---

Standard integration tests include a `supports_anthropic_inputs`
property which is currently enabled only for tests on `ChatAnthropic`.
This test enforces compatibility with message histories of the form:
```
- system message
- human message
- AI message with tool calls specified only through `tool_use` content blocks
- human message containing `tool_result` and an additional `text` block
```
It additionally checks support for Anthropic-style image inputs if
`supports_image_inputs` is enabled.

Here we change this test, such that if you enable
`supports_anthropic_inputs`:
- You support AI messages with text and `tool_use` content blocks
- You support Anthropic-style image inputs (if `supports_image_inputs`
is enabled)
- You support thinking content blocks.

That is, we add a test case for thinking content blocks, but we also
remove the requirement of handling tool results within HumanMessages
(motivated by existing agent abstractions, which should all return
ToolMessage). We move that requirement to a ChatAnthropic-specific test.
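A minimal sketch of the new pass-through behavior (the exact shape of the thinking block is an assumption based on Anthropic's content-block format):
```python
from langchain_core.messages import AIMessage, convert_to_openai_messages

# An Anthropic-style AI turn: a thinking block followed by text.
ai_turn = AIMessage(
    content=[
        {"type": "thinking", "thinking": "Work through the problem...", "signature": "sig"},
        {"type": "text", "text": "The answer is 27."},
    ]
)

# Previously this raised; now the thinking block is passed through
# (and ChatOpenAI ignores such blocks instead of erroring).
print(convert_to_openai_messages([ai_turn]))
```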
2025-03-06 09:53:14 -05:00
ZhangShenao
8575d7491f
[Doc] Improve api doc (#30073)
- Update api_doc for `BaseMessage`
- add static method decorator for `retry_runnable`
2025-03-04 09:39:07 -05:00
Christophe Bornet
b3885c124f
core: Add ruff rules TC (#29268)
See https://docs.astral.sh/ruff/rules/#flake8-type-checking-tc
Some fixes were made for TC001, TC002, and TC003, but these rules are excluded
since they don't play well with Pydantic.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 19:39:05 +00:00
Bagatur
979a991dc2
core[patch]: dont deep copy merge_message_runs (#28454)
AFAICT there's no need to deep copy here; if we merge messages, we convert
them to chunks first anyway.
2025-02-22 21:56:45 +00:00
Erick Friis
6c1e21d128
core: basemessage.text() (#29078) 2025-02-18 17:45:44 -08:00
Nuno Campos
fe59f2cc88
core: Fix output of convert_messages when called with BaseMessage.model_dump() (#29763)
- additional_kwargs was being nested twice
- for example, response_metadata was placed inside additional_kwargs
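A quick round-trip check of the fix, as a sketch:
```python
from langchain_core.messages import AIMessage, convert_to_messages

msg = AIMessage("hi", response_metadata={"model_name": "test"})
[restored] = convert_to_messages([msg.model_dump()])

# response_metadata must survive at the top level rather than being
# nested inside additional_kwargs.
assert restored.response_metadata == msg.response_metadata
```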
2025-02-12 10:05:33 -08:00
Aaron V
dcfaae85d2
Core: Fix __add__ for concatting two BaseMessageChunk's (#29531)
Description:

The change allows you to use the overloaded `+` operator correctly when
`+`ing two BaseMessageChunk subclasses. Without this you *must*
instantiate a subclass for it to work.

Which feels... wrong. Base classes should be decoupled from subclasses
and should in no way depend on them.

Issue:

You can't `+` a BaseMessageChunk with a BaseMessageChunk

e.g. this will explode

```py
from langchain_core.outputs import (
    ChatGenerationChunk,
)
from langchain_core.messages import BaseMessageChunk


chunk1 = ChatGenerationChunk(
    message=BaseMessageChunk(
        type="customChunk",
        content="HI",
    ),
)

chunk2 = ChatGenerationChunk(
    message=BaseMessageChunk(
        type="customChunk",
        content="HI",
    ),
)

# this will throw
new_chunk = chunk1 + chunk2
```

In case anyone runs into this issue themselves, it's probably best to use
AIMessageChunk, à la:

```py
from langchain_core.outputs import (
    ChatGenerationChunk,
)
from langchain_core.messages import AIMessageChunk


chunk1 = ChatGenerationChunk(
    message=AIMessageChunk(
        content="HI",
    ),
)

chunk2 = ChatGenerationChunk(
    message=AIMessageChunk(
        content="HI",
    ),
)

# No explosion!
new_chunk = chunk1 + chunk2
```

Dependencies:

None!

Twitter handle: 
`aaron_vogler`


Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 00:43:36 +00:00
Isaac Francisco
2bb2c9bfe8
change behavior for converting a string to openai messages (#29446) 2025-01-27 18:18:54 -08:00
Christophe Bornet
dbb6b7b103
core: Add ruff rules TRY (tryceratops) (#29388)
Existing TRY004 violations ("use TypeError rather than ValueError") are
marked as ignored to preserve backward compatibility.
LMK if you'd prefer to fix some of them.

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-24 05:01:40 +00:00
Christophe Bornet
e4a78dfc2a
core: Bump ruff version to 0.9 (#29201)
Also ran some preview autofixes and formatting.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 00:20:09 +00:00
Christophe Bornet
1c4ce7b42b
core: Auto-fix some docstrings (#29337) 2025-01-21 13:29:53 -05:00
Christophe Bornet
e5d62c6ce7
core: Add ruff rule W293 (whitespaces) (#29272) 2025-01-20 15:16:12 -05:00
Bagatur
4a531437bb
core[patch], openai[patch]: Handle OpenAI developer msg (#28794)
- Convert OpenAI developer messages to SystemMessage
- store additional_kwargs={"__openai_role__": "developer"} so that the
correct role can be reconstructed if needed
- update ChatOpenAI to read in the stored role
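A sketch of the resulting behavior (the `__openai_role__` key is named in this PR; accepting OpenAI-style dicts via `convert_to_messages` is assumed):
```python
from langchain_core.messages import SystemMessage, convert_to_messages

[msg] = convert_to_messages([{"role": "developer", "content": "Be terse."}])

# Developer messages become SystemMessages, with the original role
# stashed so it can be reconstructed when sent back to OpenAI.
assert isinstance(msg, SystemMessage)
assert msg.additional_kwargs["__openai_role__"] == "developer"
```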

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-18 21:54:07 +00:00
Bagatur
e24f86e55f
core[patch]: return ToolMessage from tool (#28605) 2024-12-10 09:59:38 +00:00
William FH
ecee41ab72
fix: Handle response metadata in merge_message_runs (#28453) 2024-12-02 13:56:23 -08:00
Bagatur
60123bef67
docs: fix trim_messages docstring (#27948) 2024-11-06 22:25:13 +00:00
William FH
5a2cfb49e0
Support message trimming on single messages (#27729)
Permit trimming message lists of length 1
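A minimal sketch (the trivial `token_counter` here is hypothetical, counting one unit per message):
```python
from langchain_core.messages import HumanMessage, trim_messages

# Lists of length 1 are now accepted.
trimmed = trim_messages(
    [HumanMessage("hello")],
    max_tokens=10,
    token_counter=len,  # hypothetical counter: one "token" per message
    strategy="last",
)
assert len(trimmed) == 1
```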
2024-10-30 04:27:52 +00:00
Bagatur
968dccee04
core[patch]: convert_to_openai_tool Anthropic support (#27591) 2024-10-23 12:27:06 -07:00
Erick Friis
0ebddabf7d
docs, core: error messaging [wip] (#27397) 2024-10-17 03:39:36 +00:00
Bagatur
a4392b070d
core[patch]: add convert_to_openai_messages util (#27263)
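A minimal usage sketch of the new util (the exact output shape is an assumption):
```python
from langchain_core.messages import HumanMessage, convert_to_openai_messages

# LangChain messages -> OpenAI chat-completions-style dicts,
# e.g. [{"role": "user", "content": "hello"}].
payload = convert_to_openai_messages([HumanMessage("hello")])
```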
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-16 17:10:10 +00:00
Bagatur
e3e9ee8398
core[patch]: utils for adding/subtracting usage metadata (#27203) 2024-10-08 13:15:33 -07:00
Christophe Bornet
d31ec8810a
core: Add ruff rules for error messages (EM) (#26965)
All auto-fixes

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-07 22:12:28 +00:00
Bagatur
546dc44da5
core[patch]: add UsageMetadata details (#27072) 2024-10-03 20:36:17 +00:00
Christophe Bornet
db8845a62a
core: Add ruff rules for pycodestyle Warning (W) (#26964)
All auto-fixes.
2024-09-30 09:31:43 -04:00
Christophe Bornet
7809b31b95
core[patch]: Add ruff rules for flake8-simplify (SIM) (#26848)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-27 20:13:23 +00:00
Eugene Yurtsev
de0b48c41a
docs: Upgrade examples with RunnableWithMessageHistory to langgraph memory (#26855)
This PR updates the documentation examples that used
RunnableWithMessageHistory to show how to achieve the same
implementation with langgraph memory.

Some of the underlying PRs (not all of them):

- docs[patch]: update chatbot tutorial and migration guide (#26780)
- docs[patch]: update chatbot memory how-to (#26790)
- docs[patch]: update chatbot tools how-to (#26816)
- docs: update chat history in rag how-to (#26821)
- docs: update trim messages notebook (#26793)
- docs: clean up imports in how to guide for rag qa with chat history
(#26825)
- docs[patch]: update conversational rag tutorial (#26814)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Vadym Barda <vadym@langchain.dev>
Co-authored-by: mercyspirit <ziying.qiu@gmail.com>
Co-authored-by: aqiu7 <aqiu7@gatech.edu>
Co-authored-by: John <43506685+Coniferish@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Subhrajyoty Roy <subhrajyotyroy@gmail.com>
Co-authored-by: Rajendra Kadam <raj.725@outlook.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Devin Gaffney <itsme@devingaffney.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-09-27 20:04:30 +00:00
Julius Stopforth
121e79b1f0
core: Fix IndexError when trim_messages invoked with empty list (#26896)
This prevents `trim_messages` from raising an `IndexError` when invoked
with `include_system=True`, `strategy="last"`, and an empty message
list.
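A sketch of the fixed edge case (trivial `token_counter` for illustration only):
```python
from langchain_core.messages import trim_messages

# Previously raised IndexError; now simply returns an empty list.
assert trim_messages(
    [],
    max_tokens=10,
    token_counter=len,  # hypothetical: one "token" per message
    strategy="last",
    include_system=True,
) == []
```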

Fixes #26895

Dependencies: none
2024-09-26 11:29:58 -04:00
Christophe Bornet
a47b332841
core: Put Python version as a project requirement so it is considered by ruff (#26608)
Ruff doesn't know about the python version in
`[tool.poetry.dependencies]`. It can get it from
`project.requires-python`.

Notes:
* Poetry seems to have issues getting the Python constraints from
`requires-python` and using `python` in per-dependency constraints, so I
had to duplicate the info. I will open an issue on poetry.
* `inspect.isclass()` doesn't work correctly with `GenericAlias`
(`list[...]`, `dict[..., ...]`) on Python <3.11, so I added some `not
isinstance(type, GenericAlias)` checks:

Python 3.11
```pycon
>>> import inspect
>>> inspect.isclass(list)
True
>>> inspect.isclass(list[str])
False
```

Python 3.9
```pycon
>>> import inspect
>>> inspect.isclass(list)
True
>>> inspect.isclass(list[str])
True
```
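A minimal sketch of the kind of guard described above (the helper name is hypothetical; `types.GenericAlias` is the runtime type of `list[str]` and friends):
```python
import inspect
from types import GenericAlias


def _is_real_class(obj: object) -> bool:
    # On Python < 3.11, inspect.isclass(list[str]) wrongly returns True,
    # so parameterized generics are excluded explicitly.
    return inspect.isclass(obj) and not isinstance(obj, GenericAlias)
```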

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-18 14:37:57 +00:00
Christophe Bornet
3a99467ccb
core[patch]: Add ruff rule UP006(use PEP585 annotations) (#26574)
* Added rule `UP006` now that Pydantic is v2+

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-17 21:22:50 +00:00
Isaac Francisco
06cde06a20
core[minor]: remove beta from RemoveMessage (#26579) 2024-09-17 17:09:58 +00:00
Erick Friis
c2a3021bb0
multiple: pydantic 2 compatibility, v0.3 (#26443)
Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Dan O'Donovan <dan.odonovan@gmail.com>
Co-authored-by: Tom Daniel Grande <tomdgrande@gmail.com>
Co-authored-by: Grande <Tom.Daniel.Grande@statsbygg.no>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: ZhangShenao <15201440436@163.com>
Co-authored-by: Friso H. Kingma <fhkingma@gmail.com>
Co-authored-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Nuno Campos <nuno@langchain.dev>
Co-authored-by: Morgante Pell <morgantep@google.com>
2024-09-13 14:38:45 -07:00
Christophe Bornet
ff0df5ea15
core[patch]: Add B(bugbear) ruff rules (#25520)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-28 07:09:29 +00:00
CastaChick
7d13a2f958
core[patch]: add option to specify the chunk separator in merge_message_runs (#24783)
**Description:**
An LLM will stop generating text, even mid-sentence, if
`finish_reason` is `length` (for OpenAI) or `stop_reason` is
`max_tokens` (for Anthropic).
To obtain longer outputs, we can call the message generation
API multiple times and merge the results, circumventing the
API's output token limit.
The extra line breaks that `merge_message_runs` forces when
seamlessly merging messages can be annoying, so I added an option to
specify the chunk separator.
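A minimal sketch of the new option (the keyword name `chunk_separator` is assumed from this PR; the default remains a newline):
```python
from langchain_core.messages import AIMessage, merge_message_runs

messages = [AIMessage("It was the best of times, "), AIMessage("it was the worst of times.")]

# Join consecutive same-type messages without forcing a line break.
merged = merge_message_runs(messages, chunk_separator="")
assert merged[0].content == "It was the best of times, it was the worst of times."
```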

**Issue:**
No corresponding issues.

**Dependencies:**
No dependencies required.

**Twitter handle:**
@hanama_chem
https://x.com/hanama_chem

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-22 19:46:25 +00:00
Bagatur
0bc3845e1e
core[patch]: support oai dicts as messages (#25621)
and update LangSmith example selector docs
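A sketch of the dict support (dicts in OpenAI's chat format):
```python
from langchain_core.messages import convert_to_messages

# OpenAI-style dicts are now accepted wherever messages are.
msgs = convert_to_messages(
    [
        {"role": "system", "content": "Be brief."},
        {"role": "user", "content": "hi"},
    ]
)
```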
2024-08-21 16:13:15 +00:00
Bagatur
4bd005adb6
core[patch]: Allow bound models as token_counter in trim_messages (#25563) 2024-08-20 00:21:22 -07:00
Bagatur
e81ddb32a6
docs: fix kwargs docstring (#25010)
Fix:
![Screenshot 2024-08-02 at 5 33 37
PM](https://github.com/user-attachments/assets/7c56cdeb-ee81-454c-b3eb-86aa8a9bdc8d)
2024-08-02 19:54:54 -07:00
Bagatur
0de0cd2d31
core[patch]: merge message runs nit (#24997)
Only add separator if both chunks are non-empty
2024-08-02 20:25:43 +00:00
Bagatur
a6d1fb4275
core[patch]: introduce ToolMessage.status (#24628)
Anthropic models (including via Bedrock and other cloud platforms)
accept a status/is_error attribute on tool messages/results
(specifically in `tool_result` content blocks for the Anthropic API). Adding
a ToolMessage.status attribute so that users can set it when
using those models.
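A minimal sketch of the new attribute (allowed values assumed to be `"success"` and `"error"`):
```python
from langchain_core.messages import ToolMessage

# Mark a tool result as failed; Anthropic-style APIs can map this to
# is_error on the tool_result content block.
msg = ToolMessage("lookup failed", tool_call_id="toolu_01", status="error")
```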
2024-07-29 14:01:53 -07:00
Bagatur
70c71efcab
core[patch]: merge_content fix (#24526) 2024-07-22 22:20:22 -07:00
JP-Ellis
f77659463a
core[patch]: allow message utils to work with lcel (#23743)
The function `convert_to_messages` has had the arguments it can take
expanded:

1. Previously, it could only take a `Sequence` in order to iterate over
it. This has been broadened slightly to an `Iterable` (which should have
no other impact).
2. Support for `PromptValue` and `BaseChatPromptTemplate` has been
added. These are generated when combining messages using the overloaded
`+` operator.

Functions which rely on `convert_to_messages` (namely `filter_messages`,
`merge_message_runs` and `trim_messages`) have had the type of their
arguments similarly expanded.

Resolves #23706.
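A minimal sketch of the broadened inputs (per the description above, `+` on messages produces a chat prompt template, which these utils now accept):
```python
from langchain_core.messages import HumanMessage, SystemMessage, convert_to_messages

# Combining messages with `+` yields a chat prompt template;
# convert_to_messages (and filter/merge/trim) now accept it directly.
combined = SystemMessage("Be brief.") + HumanMessage("Hello!")
messages = convert_to_messages(combined)
```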


---------

Signed-off-by: JP-Ellis <josh@jpellis.me>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-07-15 08:58:05 -07:00
Bagatur
65321bf975
core[patch]: fix ToolCall "type" when streaming (#24218) 2024-07-13 08:59:03 -07:00
Bagatur
6166ea67a8
core[minor]: rename ToolMessage.raw_output -> artifact (#24185) 2024-07-12 09:52:44 -07:00
Bagatur
5fd1e67808
core[minor], integrations...[patch]: Support ToolCall as Tool input and ToolMessage as Tool output (#24038)
Changes:
- ToolCall, InvalidToolCall and ToolCallChunk can all accept a "type"
parameter now
- LLM integration packages add "type" to all the above
- Tool supports ToolCall inputs that have "type" specified
- Tool outputs ToolMessage when a ToolCall is passed as input
- Tools can separately specify ToolMessage.content and
ToolMessage.raw_output
- Tools emit events for validation errors (using on_tool_error and
on_tool_end)

Example:
```python
# Imports assumed for a runnable version of this test snippet; the
# ValidationError source may vary with the pydantic version in use.
from typing import Dict, Optional, Tuple

import pytest
from pydantic import ValidationError

from langchain_core.messages import ToolMessage
from langchain_core.tools import tool


@tool("structured_api", response_format="content_and_raw_output")
def _mock_structured_tool_with_raw_output(
    arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> Tuple[str, dict]:
    """A Structured Tool"""
    return f"{arg1} {arg2}", {"arg1": arg1, "arg2": arg2, "arg3": arg3}


def test_tool_call_input_tool_message_with_raw_output() -> None:
    tool_call: Dict = {
        "name": "structured_api",
        "args": {"arg1": 1, "arg2": True, "arg3": {"img": "base64string..."}},
        "id": "123",
        "type": "tool_call",
    }
    expected = ToolMessage("1 True", raw_output=tool_call["args"], tool_call_id="123")
    tool = _mock_structured_tool_with_raw_output
    actual = tool.invoke(tool_call)
    assert actual == expected

    tool_call.pop("type")
    with pytest.raises(ValidationError):
        tool.invoke(tool_call)

    actual_content = tool.invoke(tool_call["args"])
    assert actual_content == expected.content
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-11 14:54:02 -07:00
Bagatur
6928f4c438
core[minor]: Add ToolMessage.raw_output (#23994)
Decisions to discuss:
1.  is a new attr needed or could additional_kwargs be used for this
2. is raw_output a good name for this attr
3. should raw_output default to {} or None
4. should raw_output be included in serialization
5. do we need to update repr/str to exclude raw_output
2024-07-10 20:11:10 +00:00
Nuno Campos
160fc7f246
core: Move json parsing in base chat model / output parser to bg thread (#24031)
- Add a version of AIMessageChunk.__add__ that can add many chunks,
instead of only 2
- In agenerate_from_stream, merge and parse chunks in a background thread
- In output parser base classes, do more work in background threads where
appropriate
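A sketch of the many-chunk merge (assuming the helper is exposed as `add_ai_message_chunks` in `langchain_core.messages.ai`):
```python
from langchain_core.messages.ai import AIMessageChunk, add_ai_message_chunks

# Merge many streamed chunks in a single pass instead of pairwise `+`.
merged = add_ai_message_chunks(
    AIMessageChunk(content="Hel"),
    AIMessageChunk(content="lo"),
    AIMessageChunk(content="!"),
)
assert merged.content == "Hello!"
```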

---------

Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
2024-07-09 12:26:36 -07:00
ccurme
74c7198906
core, anthropic[patch]: support streaming tool calls when function has no arguments (#23915)
resolves https://github.com/langchain-ai/langchain/issues/23911

When an AIMessageChunk is instantiated, we attempt to parse tool calls
off of the tool_call_chunks.

Here we add a special case to this parsing, where `""` will be parsed as
`{}`.

This is a reaction to how Anthropic streams tool calls in the case where
a function has no arguments:
```
{'id': 'toolu_01J8CgKcuUVrMqfTQWPYh64r', 'input': {}, 'name': 'magic_function', 'type': 'tool_use', 'index': 1}
{'partial_json': '', 'type': 'tool_use', 'index': 1}
```
The `partial_json` does not accumulate to a valid JSON string; most
other providers tend to emit `"{}"` in this case.
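A sketch of the fixed parsing (tool call chunks written as plain dicts; the exact keys are assumed):
```python
from langchain_core.messages import AIMessageChunk

# Anthropic streams an empty-string args fragment for zero-arg tools;
# "" is now parsed as {} instead of failing.
chunk = AIMessageChunk(
    content="",
    tool_call_chunks=[
        {"name": "magic_function", "args": "", "id": "toolu_01", "index": 1}
    ],
)
assert chunk.tool_calls[0]["args"] == {}
```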
2024-07-05 18:57:41 +00:00
Vadym Barda
9bb623381b
core[minor]: update conversion utils to handle RemoveMessage (#23840) 2024-07-03 16:13:31 -04:00
Bagatur
a0c2281540
infra: update mypy 1.10, ruff 0.5 (#23721)
```python
"""python scripts/update_mypy_ruff.py"""
import glob
import tomllib
from pathlib import Path

import toml
import subprocess
import re

ROOT_DIR = Path(__file__).parents[1]


def main():
    for path in glob.glob(str(ROOT_DIR / "libs/**/pyproject.toml"), recursive=True):
        print(path)
        with open(path, "rb") as f:
            pyproject = tomllib.load(f)
        try:
            pyproject["tool"]["poetry"]["group"]["typing"]["dependencies"]["mypy"] = (
                "^1.10"
            )
            pyproject["tool"]["poetry"]["group"]["lint"]["dependencies"]["ruff"] = (
                "^0.5"
            )
        except KeyError:
            continue
        with open(path, "w") as f:
            toml.dump(pyproject, f)
        cwd = "/".join(path.split("/")[:-1])
        completed = subprocess.run(
            "poetry lock --no-update; poetry install --with typing; poetry run mypy . --no-color",
            cwd=cwd,
            shell=True,
            capture_output=True,
            text=True,
        )
        logs = completed.stdout.split("\n")

        # Collect mypy errors as {(path, line_no): [error_types]}.
        error_re = re.compile(r"^(.*):(\d+): error:.*\[(.*)\]")
        to_ignore = {}
        for log_line in logs:
            match = error_re.match(log_line)
            if not match:
                continue
            path, line_no, error_type = match.groups()
            if (path, line_no) in to_ignore:
                to_ignore[(path, line_no)].append(error_type)
            else:
                to_ignore[(path, line_no)] = [error_type]
        print(len(to_ignore))
        for (error_path, line_no), error_types in to_ignore.items():
            all_errors = ", ".join(error_types)
            full_path = f"{cwd}/{error_path}"
            try:
                with open(full_path, "r") as f:
                    file_lines = f.readlines()
            except FileNotFoundError:
                continue
            file_lines[int(line_no) - 1] = (
                file_lines[int(line_no) - 1][:-1] + f"  # type: ignore[{all_errors}]\n"
            )
            with open(full_path, "w") as f:
                f.write("".join(file_lines))

        subprocess.run(
            "poetry run ruff format .; poetry run ruff --select I --fix .",
            cwd=cwd,
            shell=True,
            capture_output=True,
            text=True,
        )


if __name__ == "__main__":
    main()

```
2024-07-03 10:33:27 -07:00