- [ ] **PR message**:
- **Description:** including `metadata_field` in
`max_marginal_relevance_search()` would result in an error. The logic was
changed to match how it is handled in `similarity_search`, where the value
can be any field, or simply `"*"` to include every field.
Replacement for PR #30191 (@ccurme)
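A usage sketch of the behaviour described above; the concrete vector store class (`OpenSearchVectorSearch`) and the connection details are illustrative assumptions, but `metadata_field` works as described:

```py
# Hedged sketch: store class and connection details are assumptions.
from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain_openai import OpenAIEmbeddings

vectorstore = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",
    index_name="demo-index",
    embedding_function=OpenAIEmbeddings(),
)

docs = vectorstore.max_marginal_relevance_search(
    "renewable energy policy",
    k=4,
    fetch_k=20,
    metadata_field="*",  # "*" returns every stored field, as in similarity_search
)
```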
**Description**: currently, ChatOllama [will raise a `ValueError` if a
ChatMessage is passed to
it](https://github.com/langchain-ai/langchain/blob/master/libs/partners/ollama/langchain_ollama/chat_models.py#L514),
as described in
https://github.com/langchain-ai/langchain/pull/30147#issuecomment-2708932481.
Furthermore, ollama-python is removing the limitations on valid roles
that can be passed through chat messages to a model in Ollama; see
https://github.com/ollama/ollama-python/pull/462#event-16917810634.
This PR removes the role limitations imposed by LangChain and enables
passing `ChatMessage`s with arbitrary `role` values through the
`ChatOllama` class to the underlying ollama-python `Client`.
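A sketch of what this enables; the model name and the custom role below are illustrative assumptions:

```py
from langchain_core.messages import ChatMessage, HumanMessage
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")  # any locally pulled Ollama model

response = llm.invoke(
    [
        # A non-standard role that previously raised a ValueError in ChatOllama.
        ChatMessage(role="control", content="thinking"),
        HumanMessage(content="What is 2 + 2?"),
    ]
)
print(response.content)
```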
As this PR relies on [merged but unreleased functionality in
ollama-python](
https://github.com/ollama/ollama-python/pull/462#event-16917810634), I
have temporarily pointed the ollama package source to the main branch of
the ollama-python GitHub repo.
Format, lint, and tests of the new functionality are passing. The issue
with the recently added ChatOllama tests has now been resolved.
**Issue**: resolves #30122 (related to ollama issue
https://github.com/ollama/ollama/issues/8955)
**Dependencies**: no new dependencies
- [x] PR title
- [x] PR message
- [x] Lint and test: format, lint, and test all running successfully and
passing
---------
Co-authored-by: Ryan Stewart <ryanstewart@Ryans-MacBook-Pro.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
* Remove unused ignores
* Add type ignore codes
* Add mypy rule `warn_unused_ignores`
* Add ruff rule PGH003
NB: some `type: ignore[unused-ignore]` are added because the ignores are
needed when `extended_testing_deps.txt` deps are installed.
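For illustration (not code from this diff), the difference the two rules enforce looks like this:

```py
# Illustrative sketch only.
name: int = "text"  # type: ignore[assignment]  # specific code: allowed
# name: int = "text"  # type: ignore            # bare ignore: flagged by ruff PGH003

count: int = 1  # type: ignore[assignment]  # suppresses nothing, so mypy's
#                                             warn_unused_ignores reports it
```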
We only need to rebuild model schemas if type annotation information
isn't available during declaration; that shouldn't be the case for the
types corrected here.
We need to do more thorough testing to make sure these structures have
complete schemas, but hopefully this boosts startup / import time.
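A minimal sketch (illustrative, not from this diff) of the case that does require a rebuild: a forward reference that can't be resolved at declaration time.

```py
from pydantic import BaseModel


class Node(BaseModel):
    child: "Child"  # "Child" doesn't exist yet, so the schema is left incomplete


class Child(BaseModel):
    value: int


# Only needed because of the unresolved forward reference above; types whose
# annotations resolve at declaration time can skip this.
Node.model_rebuild()
print(Node(child=Child(value=1)))
```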
- [ ] **PR title**: "docs: adding Smabbler's Galaxia integration"
- [ ] **PR message**: **Twitter handle:** @Galaxia_graph
I'm adding docs here and adding the package to packages.yml. I didn't
add a unit test because this integration is just a thin wrapper on top
of our API; there isn't much left to test if you mock it away.
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
**Description:** This PR adds provider inference logic to
`init_chat_model` for Perplexity models that use the "sonar..." prefix
(`sonar`, `sonar-pro`, `sonar-reasoning`, `sonar-reasoning-pro` or
`sonar-deep-research`).
This allows users to initialize these models by simply passing the model
name, without needing to explicitly set `model_provider="perplexity"`.
The docstring for `init_chat_model` has also been updated to reflect
this new inference rule.
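A usage sketch based on the description (assumes Perplexity credentials are configured in the environment):

```py
from langchain.chat_models import init_chat_model

# The "sonar" prefix is now enough to infer model_provider="perplexity".
llm = init_chat_model("sonar-pro")

# Previously the provider had to be passed explicitly:
# llm = init_chat_model("sonar-pro", model_provider="perplexity")
print(llm.invoke("Summarize the James Webb telescope in one sentence.").content)
```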
https://github.com/langchain-ai/langchain/pull/30778 (not released)
broke all invocation modes of ChatOllama (the intent was to remove
`"message"` from `generation_info`, but we instead turned `generation_info`
into `stream_resp["message"]`), resulting in validation errors.
On core releases, we check out the latest published package for
langchain-openai and langchain-anthropic and run their tests against the
candidate version of langchain-core.
Because these packages have a local install of langchain-tests, we also
need to check out the previous version of langchain-tests.
TL;DR: you can't optimize imports with a lazy `__getattr__` if there is
a namespace conflict between a module name and an attribute name. We
should avoid introducing conflicts like this in the future.
This PR fixes a bug introduced by my lazy imports PR:
https://github.com/langchain-ai/langchain/pull/30769.
In `langchain_core`, we have utilities for loading and dumping data.
Unfortunately, one of those utilities is a `load` function, located in
`langchain_core/load/load.py`. To make this function more visible, we
make it accessible at the top level `langchain_core.load` module via
importing the function in `langchain_core/load/__init__.py`.
So, either of these imports should work:
```py
from langchain_core.load import load
from langchain_core.load.load import load
```
As you can tell, this is already a bit confusing. You'd think that the
first import would produce the module `load`, but because of the
`__init__.py` shortcut, both produce the function `load`.
<details>
<summary>More on why the lazy imports PR broke this support...</summary>
All was well, except when the submodule import
(`from langchain_core.load.load import load`) was run first; see the
last snippet:
```
>>> from langchain_core.load import load
>>> load
<function load at 0x101c320c0>
```
```
>>> from langchain_core.load.load import load
>>> load
<function load at 0x1069360c0>
```
```
>>> from langchain_core.load import load
>>> load
<function load at 0x10692e0c0>
>>> from langchain_core.load.load import load
>>> load
<function load at 0x10692e0c0>
```
```
>>> from langchain_core.load.load import load
>>> load
<function load at 0x101e2e0c0>
>>> from langchain_core.load import load
>>> load
<module 'langchain_core.load.load' from '/Users/sydney_runkle/oss/langchain/libs/core/langchain_core/load/load.py'>
```
In this case, the function `load` wasn't stored in the globals cache for
the `langchain_core.load` module (by the lazy import logic), so Python
fell back to a module import.
</details>
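For reference, a simplified sketch of the lazy-import pattern involved (not the actual `langchain_core` code): a module-level `__getattr__` (PEP 562) imports the attribute on first access and caches it in the module globals.

```py
# Hypothetical package/__init__.py, simplified.
from importlib import import_module

_dynamic_imports = {"load": "load"}  # attribute name -> submodule that defines it


def __getattr__(name: str):
    """PEP 562: called only when `name` isn't already in the module globals."""
    if name in _dynamic_imports:
        module = import_module(f".{_dynamic_imports[name]}", __package__)
        attr = getattr(module, name)
        globals()[name] = attr  # cache so later lookups bypass __getattr__
        return attr
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```

If the submodule `load` is imported first, the import system binds that submodule onto the package, so the name `load` already resolves to the module and this `__getattr__` is never consulted for it.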
New `langchain` tongue twister 😜: we've created a problem for ourselves
because you have to load the load function from the load file in the
load module 😨.
Fix CI to trigger benchmarks on `run-codspeed-benchmarks` label addition
Reduce scope of async benchmark to save time on CI
Waiting to merge this PR until we figure out how to use walltime on
local runners.
Most easily reviewed with the "hide whitespace" option toggled.
Seeing 10-50% speed ups in import time for common structures 🚀
The general purpose of this PR is to lazily import structures within
`langchain_core.XXX_module.__init__.py` so that we're not eagerly
importing expensive dependencies (`pydantic`, `requests`, etc.).
Analysis of flamegraphs generated with `importtime` motivated these
changes. For example, the one below demonstrates that importing
`HumanMessage` accidentally triggered imports for `importlib.metadata`,
`requests`, etc.
There's still much more to do on this front, and we can start digging
into our own internal code for optimizations now that we're less
concerned about external imports.
<img width="1210" alt="Screenshot 2025-04-11 at 1 10 54 PM"
src="https://github.com/user-attachments/assets/112a3fe7-24a9-4294-92c1-d5ae64df839e"
/>
I've tracked the improvements with some local benchmarks:
## `pytest-benchmark` results
| Name | Before (s) | After (s) | Delta (s) | % Change |
|-----------------------------|------------|-----------|-----------|----------|
| Document | 2.8683 | 1.2775 | -1.5908 | -55.46% |
| HumanMessage | 2.2358 | 1.1673 | -1.0685 | -47.79% |
| ChatPromptTemplate | 5.5235 | 2.9709 | -2.5526 | -46.22% |
| Runnable | 2.9423 | 1.7793 | -1.163 | -39.53% |
| InMemoryVectorStore | 3.1180 | 1.8417 | -1.2763 | -40.93% |
| RunnableLambda | 2.7385 | 1.8745 | -0.864 | -31.55% |
| tool | 5.1231 | 4.0771 | -1.046 | -20.42% |
| CallbackManager | 4.2263 | 3.4099 | -0.8164 | -19.32% |
| LangChainTracer | 3.8394 | 3.3101 | -0.5293 | -13.79% |
| BaseChatModel | 4.3317 | 3.8806 | -0.4511 | -10.41% |
| PydanticOutputParser | 3.2036 | 3.2995 | 0.0959 | 2.99% |
| InMemoryRateLimiter | 0.5311 | 0.5995 | 0.0684 | 12.88% |
Note that the apparent regressions for `InMemoryRateLimiter` and
`PydanticOutputParser` are just random noise; I'm getting comparable
numbers locally.
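For context, a hypothetical sketch of how such an import benchmark can be written with `pytest-benchmark` (not necessarily the exact harness used here); each round spawns a fresh interpreter so the import is always cold:

```py
import subprocess
import sys


def test_import_human_message(benchmark) -> None:
    """Cold-import HumanMessage in a fresh interpreter on every round."""
    benchmark(
        subprocess.run,
        [sys.executable, "-c", "from langchain_core.messages import HumanMessage"],
        check=True,
    )
```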
## Local CodSpeed results
We're still working on configuring CodSpeed on CI. The local usage
produced similar results.