Removed:
- `libs/core/langchain_core/chat_history.py`: `add_user_message` and
`add_ai_message` in favor of `add_messages` and `aadd_messages`
- `libs/core/langchain_core/language_models/base.py`: `predict`,
`predict_messages`, and their async versions in favor of `invoke`. Also
removed `_all_required_field_names`, since it was a wrapper around
`get_pydantic_field_names`
- `libs/core/langchain_core/language_models/chat_models.py`:
`callback_manager` param in favor of `callbacks`. `__call__` and
`call_as_llm` methods in favor of `invoke`
- `libs/core/langchain_core/language_models/llms.py`: `callback_manager`
param in favor of `callbacks`. `__call__`, `predict`, `apredict`, and
`apredict_messages` methods in favor of `invoke`
- `libs/core/langchain_core/prompts/chat.py`: `from_role_strings` and
`from_strings` in favor of `from_messages`
- `libs/core/langchain_core/prompts/pipeline.py`: removed
`PipelinePromptTemplate`
- `libs/core/langchain_core/prompts/prompt.py`: `input_variables` param
on `from_file` as it wasn't used
- `libs/core/langchain_core/tools/base.py`: `callback_manager` param in
favor of `callbacks`
- `libs/core/langchain_core/tracers/context.py`: `tracing_enabled` in
favor of `tracing_enabled_v2`
- `libs/core/langchain_core/tracers/langchain_v1.py`: entire module
- `libs/core/langchain_core/utils/loading.py`: entire module, including
`try_load_from_hub`
- `libs/core/langchain_core/vectorstores/in_memory.py`: `upsert` in
favor of `add_documents`
- `libs/standard-tests/langchain_tests/integration_tests/chat_models.py`
and `libs/standard-tests/langchain_tests/unit_tests/chat_models.py`:
`tool_choice_value` as models should accept `tool_choice="any"`
- `langchain` will consequently no longer expose these items if it did
previously
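
As one illustration, a minimal migration sketch for the chat-history removals (the message strings are arbitrary; the other removals follow the same pattern, e.g. `invoke` in place of `predict` and `__call__`):

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage

history = InMemoryChatMessageHistory()

# Before (removed):
#   history.add_user_message("hi")
#   history.add_ai_message("hello")
# After: pass explicit message objects to `add_messages`
history.add_messages([HumanMessage("hi"), AIMessage("hello")])
```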
---------
Co-authored-by: Mohammad Mohtashim <45242107+keenborder786@users.noreply.github.com>
Co-authored-by: Caspar Broekhuizen <caspar@langchain.dev>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Sadra Barikbin <sadraqazvin1@yahoo.com>
Co-authored-by: Vadym Barda <vadim.barda@gmail.com>
Ensures proper reStructuredText formatting by adding the required blank
line before the closing docstring quotes, which resolves the "Block quote
ends without a blank line; unexpected unindent" warning.
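
For reference, a sketch of the kind of docstring this affects (the function itself is hypothetical):

```python
def parse(text: str) -> str:
    """Parse the input.

    Example:
        .. code-block:: python

            parse(" hello ")

    """  # the blank line before the closing quotes is what silences the warning
    return text.strip()
```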
Chat models currently implement support for:
- images in OpenAI Chat Completions format
- other multimodal types (e.g., PDF and audio) in a cross-provider
[standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/)
Here we update core to extend support to PDF and audio input in Chat
Completions format. **If an OAI-format PDF or audio content block is
passed into any chat model, it will be transformed to the LangChain
standard format**. We assume that any chat model supporting OAI-format
PDF or audio has implemented support for the standard format.
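
A rough sketch of what this enables (block shapes follow OpenAI's Chat Completions API; `chat_model`, `pdf_b64`, and `audio_b64` are placeholders):

```python
from langchain_core.messages import HumanMessage

# pdf_b64 / audio_b64: base64-encoded file bytes (placeholders)
message = HumanMessage(
    content=[
        {"type": "text", "text": "Summarize the attached file."},
        # OpenAI Chat Completions-style PDF block
        {
            "type": "file",
            "file": {
                "filename": "report.pdf",
                "file_data": f"data:application/pdf;base64,{pdf_b64}",
            },
        },
        # OpenAI Chat Completions-style audio block
        {"type": "input_audio", "input_audio": {"data": audio_b64, "format": "wav"}},
    ]
)

# `chat_model` is any chat model supporting standard-format PDF/audio inputs;
# core converts the OAI-format blocks to the standard format before they reach it.
response = chat_model.invoke([message])
```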
We are implementing a token-counting callback handler in
`langchain-core` that is intended to work with all chat models
supporting usage metadata. The callback will aggregate usage metadata by
model. This requires responses to include the model name in their
metadata.
To support this, if a model `returns_usage_metadata`, we check that it
includes a string model name in its `response_metadata` in the
`"model_name"` key.
More context: https://github.com/langchain-ai/langchain/pull/30487
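
Roughly, the added check in the standard tests amounts to the following (simplified sketch; `model` stands for whichever chat model the suite is parametrized with):

```python
from langchain_core.messages import AIMessage

result = model.invoke("Hello")  # `model`: the chat model under test (placeholder)
assert isinstance(result, AIMessage)
assert result.usage_metadata is not None
# Usage must be attributable to a model so the callback can aggregate by model name.
assert isinstance(result.response_metadata.get("model_name"), str)
```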
- Test if models support forcing tool calls via `tool_choice`. If they
do, they should support:
- `"any"` to specify any tool
- the tool name as a string to force calling a particular tool
- Add `tool_choice` to signature of `BaseChatModel.bind_tools` in core
- Deprecate `tool_choice_value` in standard tests in favor of a boolean
`has_tool_choice`
Will follow up with PRs in external repos (tested in AWS and Google
already).
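
For example, both of these should now work for a conforming model (`chat_model` and the tool are placeholders):

```python
from langchain_core.tools import tool


@tool
def get_weather(city: str) -> str:
    """Look up current weather for a city."""  # hypothetical example tool
    return f"Sunny in {city}"


# Force the model to call some tool (whichever it chooses):
model_any = chat_model.bind_tools([get_weather], tool_choice="any")

# Force the model to call this specific tool by name:
model_forced = chat_model.bind_tools([get_weather], tool_choice="get_weather")
```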
- Support thinking blocks in core's `convert_to_openai_messages` (pass
through instead of error)
- Ignore thinking blocks in ChatOpenAI (instead of error)
- Support Anthropic-style image blocks in ChatOpenAI
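
As a brief illustration of the first point, a message containing an Anthropic-style thinking block can now be converted without raising (block values are illustrative):

```python
from langchain_core.messages import AIMessage, convert_to_openai_messages

ai_message = AIMessage(
    content=[
        {
            "type": "thinking",
            "thinking": "Reasoning about the request...",  # illustrative values
            "signature": "abc123",
        },
        {"type": "text", "text": "Here is my answer."},
    ]
)

# Previously this raised on the thinking block; it is now passed through.
oai_messages = convert_to_openai_messages([ai_message])
```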
---
Standard integration tests include a `supports_anthropic_inputs`
property which is currently enabled only for tests on `ChatAnthropic`.
This test enforces compatibility with message histories of the form:
```
- system message
- human message
- AI message with tool calls specified only through `tool_use` content blocks
- human message containing `tool_result` and an additional `text` block
```
It additionally checks support for Anthropic-style image inputs if
`supports_image_inputs` is enabled.
Here we change this test, such that if you enable
`supports_anthropic_inputs`:
- You support AI messages with text and `tool_use` content blocks
- You support Anthropic-style image inputs (if `supports_image_inputs`
is enabled)
- You support thinking content blocks.
That is, we add a test case for thinking content blocks, but we also
remove the requirement to handle tool results within HumanMessages
(motivated by existing agent abstractions, which should all return
ToolMessages). We move that requirement to a ChatAnthropic-specific test.
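
A sketch of the kind of history the revised test exercises (tool names, IDs, and block contents are illustrative):

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    ToolMessage,
)

# All values below are illustrative placeholders.
messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("What is 2 + 2?"),
    AIMessage(
        content=[
            {"type": "thinking", "thinking": "Simple arithmetic.", "signature": "sig"},
            {"type": "text", "text": "I'll use the calculator tool."},
            {
                "type": "tool_use",
                "id": "toolu_01",
                "name": "calculator",
                "input": {"expression": "2 + 2"},
            },
        ],
        tool_calls=[
            {"id": "toolu_01", "name": "calculator", "args": {"expression": "2 + 2"}}
        ],
    ),
    # Tool results are expected as ToolMessages, not embedded in a HumanMessage.
    ToolMessage("4", tool_call_id="toolu_01"),
    HumanMessage("Thanks!"),
]
```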
Motivation: dedicated structured output features are becoming more
common, such that integrations can support structured output without
supporting tool calling.
Here we make two changes:
1. Update the `has_structured_output` method to default to True if a
model supports tool calling (in addition to defaulting to True if
`with_structured_output` is overridden).
2. Update structured output tests to engage if `has_structured_output`
is True.
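
For example, a test suite for a tool-calling model might now look like this, with no `has_structured_output` override (the model class is hypothetical):

```python
from langchain_core.language_models import BaseChatModel
from langchain_tests.integration_tests import ChatModelIntegrationTests


class TestMyChatModel(ChatModelIntegrationTests):
    """Hypothetical suite for an imaginary `MyChatModel` integration."""

    @property
    def chat_model_class(self) -> type[BaseChatModel]:
        return MyChatModel  # placeholder for the integration's chat model

    # No `has_structured_output` override needed: because the model supports
    # tool calling (or overrides `with_structured_output`), it defaults to True
    # and the structured output tests run automatically.
```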