Commit Graph

14505 Commits

Author SHA1 Message Date
Mason Daugherty
bea249beff release(core): 1.0.0a6 (#33201) langchain-core==1.0.0a6 2025-10-01 22:49:36 -04:00
Mason Daugherty
2d29959386 Merge branch 'master' into wip-v1.0 2025-10-01 22:34:05 -04:00
Mason Daugherty
48b77752d0 release(ollama): 0.3.9 (#33200) langchain-ollama==0.3.9 2025-10-01 22:31:20 -04:00
Mason Daugherty
6f2d16e6be refactor(ollama): simplify options handling (#33199)
Fixes #32744

Don't restrict options; the client accepts any dict
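For illustration, a minimal sketch of passing an arbitrary options dict to the underlying `ollama` client (model name and option keys are placeholders):

```py
# The `options` mapping is forwarded to the server as-is, so it is not
# restricted to a fixed set of keys on the client side.
import ollama

response = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": "Hello"}],
    options={"num_ctx": 8192, "temperature": 0.2},
)
print(response["message"]["content"])
```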
2025-10-01 21:58:12 -04:00
Mason Daugherty
a9eda18e1e refactor(ollama): clean up tests (#33198) 2025-10-01 21:52:01 -04:00
Mason Daugherty
a89c549cb0 feat(ollama): add basic auth support (#32328)
Adds support for URL authentication in the format
`https://user:password@host:port` for all LangChain Ollama clients.

Related to #32327 and #25055
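A minimal usage sketch, assuming the credentials embedded in `base_url` are turned into basic-auth headers by the updated clients (host, port, and model are placeholders):

```py
from langchain_ollama import ChatOllama

# Credentials in the URL (user:password@) are used for basic auth.
llm = ChatOllama(
    model="llama3.1",
    base_url="https://user:password@ollama.example.com:11434",
)
print(llm.invoke("Hello").content)
```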
2025-10-01 20:46:37 -04:00
Sydney Runkle
a336afaecd feat(langchain): use decorators for jumps instead (#33179)
The old `before_model_jump_to` classvar approach was quite clunky; this
is nicer imo and easier to document. Also moving from `jump_to` to
`can_jump_to`, which is more idiomatic.

Before:

```py
class MyMiddleware(AgentMiddleware):
    before_model_jump_to: ClassVar[list[JumpTo]] = ["end"]

    def before_model(self, state, runtime) -> dict[str, Any]:
        return {"jump_to": "end"}
```

After:

```py
class MyMiddleware(AgentMiddleware):

    @hook_config(can_jump_to=["end"])
    def before_model(self, state, runtime) -> dict[str, Any]:
        return {"jump_to": "end"}
```
2025-10-01 16:49:27 -07:00
Mason Daugherty
bb9b802cda fix(ollama): handle ImageContentBlock 2025-10-01 19:38:21 -04:00
Mason Daugherty
4ff83e92c1 fix locks/make filetype optional dep 2025-10-01 19:33:49 -04:00
Mason Daugherty
738dc79959 feat(core): genai standard content (#32987) 2025-10-01 19:09:15 -04:00
Mason Daugherty
9ad7f7a0cc Merge branch 'master' into wip-v1.0 2025-10-01 18:48:38 -04:00
Lauren Hirata Singh
af07949d13 fix(docs): Redirects (#33190) 2025-10-01 16:28:47 -04:00
Sydney Runkle
a10e880c00 feat(langchain_v1): add async support for create_agent (#33175)
This makes branching **much** simpler internally and helps greatly
w/ type safety for users. It allows for one signature on hooks
instead of multiple.

Opened after https://github.com/langchain-ai/langchain/pull/33164
ballooned more than expected, w/ branching for:
* sync vs async
* runtime vs no runtime (this is self-imposed)

**This also removes support for nodes w/o `runtime` in the signature.**
We can always go back and add support for nodes w/o `runtime`.

I think @christian-bromann's idea to re-export `runtime` from
langchain's agents might make sense due to the abundance of imports
here.

Check out the value of the change based on this diff:
https://github.com/langchain-ai/langchain/pull/33176
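A minimal sketch of driving an agent built with `create_agent` asynchronously (assuming it can be constructed from just a model string; hooks and tools are omitted, and the model and prompt are placeholders):

```py
import asyncio

from langchain_core.messages import HumanMessage

from langchain.agents import create_agent


async def main() -> None:
    agent = create_agent("openai:gpt-4o")
    # With async support, the compiled agent can be awaited directly.
    result = await agent.ainvoke({"messages": [HumanMessage("Summarize our open issues.")]})
    print(result["messages"][-1].content)


asyncio.run(main())
```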
2025-10-01 19:15:39 +00:00
Eugene Yurtsev
7b5e839be3 chore(langchain_v1): use list[str] for modifyModelRequest (#33166)
Update model request to return tools by name. This will decrease the
odds of misusing the API.

We'll need to extend the type for built-in tools later.
2025-10-01 14:46:19 -04:00
ccurme
fa0955ccb1 fix(openai): fix tests on v1 branch (#33189) 2025-10-01 13:25:29 -04:00
Chester Curme
e02acdfe60 Merge branch 'master' into wip-v1.0
# Conflicts:
#	libs/core/langchain_core/version.py
#	libs/core/pyproject.toml
#	libs/core/uv.lock
#	libs/partners/openai/pyproject.toml
#	libs/partners/openai/tests/integration_tests/chat_models/test_responses_standard.py
#	libs/partners/openai/uv.lock
#	libs/standard-tests/pyproject.toml
#	libs/standard-tests/uv.lock
2025-10-01 11:31:45 -04:00
ccurme
740842485c fix(openai): bump min core version (#33188)
Required for new tests added in
https://github.com/langchain-ai/langchain/pull/32541 and
https://github.com/langchain-ai/langchain/pull/33183.
langchain-openai==0.3.34
2025-10-01 11:01:15 -04:00
noeliecherrier
08bb74f148 fix(mistralai): handle HTTP errors in async embed documents (#33187)
The async embed function does not properly handle HTTP errors.

For instance, with large batches Mistral AI returns `Too many inputs in
request, split into more batches.` as a 400 error.

This leads to a `KeyError` on `response.json()["data"]` (line 288).

This PR fixes the issue by:
- calling `response.raise_for_status()` before returning
- adding a retry, similar to what is done in the synchronous
counterpart `embed_documents`

I also added an integration test, but am willing to move it to the unit
tests if that seems more appropriate.
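A simplified sketch of the pattern (not the actual langchain-mistralai code): surface HTTP errors via `raise_for_status()` instead of failing later on the missing `data` key, and retry the batch. The endpoint path and retry policy are illustrative:

```py
import asyncio

import httpx

MAX_RETRIES = 3


async def embed_batch(client: httpx.AsyncClient, texts: list[str]) -> list[list[float]]:
    # Embed one batch, retrying with backoff and surfacing HTTP errors.
    for attempt in range(MAX_RETRIES):
        response = await client.post(
            "/v1/embeddings", json={"model": "mistral-embed", "input": texts}
        )
        if response.status_code == 200:
            return [item["embedding"] for item in response.json()["data"]]
        if attempt == MAX_RETRIES - 1:
            # Raise the 400 (e.g. "Too many inputs in request") instead of
            # hitting a KeyError on response.json()["data"].
            response.raise_for_status()
        await asyncio.sleep(2**attempt)
    raise RuntimeError("unreachable")
```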
2025-10-01 10:57:47 -04:00
ccurme
7d78ed9b53 release(standard-tests): 0.3.22 (#33186) langchain-tests==0.3.22 2025-10-01 10:39:17 -04:00
ccurme
7ccff656eb release(core): 0.3.77 (#33185) langchain-core==0.3.77 2025-10-01 10:24:07 -04:00
ccurme
002d623f2d feat: (core, standard-tests) support PDF inputs in ToolMessages (#33183) 2025-10-01 10:16:16 -04:00
Mohammad Mohtashim
34f8031bd9 feat(langchain): Using Structured Response as Key in Output Schema for Middleware Agent (#33159)
- **Description:** Changes the key from `response` to
`structured_response` for the middleware agent, keeping it in sync with the
agent without middleware. This is a breaking change.
- **Issue:** #33154
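A sketch of the resulting usage, assuming `create_agent` accepts a structured-output schema via a `response_format` parameter (parameter name assumed; schema and prompt are illustrative):

```py
from pydantic import BaseModel

from langchain_core.messages import HumanMessage

from langchain.agents import create_agent


class Weather(BaseModel):
    city: str
    summary: str


# `response_format` is assumed here as the structured-output hook.
agent = create_agent("openai:gpt-4o", response_format=Weather)
result = agent.invoke({"messages": [HumanMessage("What's the weather in Paris?")]})

# The structured output now lives under "structured_response" (previously
# "response"), matching the agent without middleware.
print(result["structured_response"])
```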
2025-10-01 03:24:59 +00:00
Mason Daugherty
87e1bbf3b1 Merge branch 'master' into wip-v1.0 2025-09-30 17:55:40 -04:00
Mason Daugherty
e8f76e506f Merge branch 'master' into wip-v1.0 2025-09-30 17:55:23 -04:00
Mason Daugherty
a541b5bee1 chore(infra): rfc README.md for better presentation (#33172) 2025-09-30 17:44:42 -04:00
Mason Daugherty
3e970506ba chore(core): remove runnable section from README.md (#33171) 2025-09-30 17:15:31 -04:00
Mason Daugherty
d1b0196faa chore(infra): whitespace fix (#33170) 2025-09-30 17:14:55 -04:00
ccurme
aac69839a9 release(openai): 0.3.34 (#33169) 2025-09-30 16:48:39 -04:00
ccurme
64141072a3 feat(openai): support openai sdk 2.0 (#33168) 2025-09-30 16:34:00 -04:00
Mason Daugherty
c49d470e13 Merge branch 'master' into wip-v1.0 2025-09-30 15:53:17 -04:00
Mason Daugherty
208e8e8f07 Merge branch 'master' into wip-v1.0 2025-09-30 15:44:24 -04:00
Mason Daugherty
0795be2a04 docs(core): remove non-existent param from as_tool docstring (#33165) 2025-09-30 19:43:34 +00:00
Mason Daugherty
06301c701e lift langgraph version cap, temp comment out optional deps for langchain_v1 2025-09-30 15:41:16 -04:00
Eugene Yurtsev
9c97597175 chore(langchain_v1): expose middleware decorators and selected messages (#33163)
* Make it easy to improve the middleware shortcuts
* Export the messages that we're confident we'll expose
2025-09-30 14:14:57 -04:00
ccurme
b7223f45cc release(openai): 1.0.0a3 (#33162) langchain-anthropic==1.0.0a2 langchain-openai==1.0.0a3 2025-09-30 12:13:58 -04:00
ccurme
8ba8a5e301 release(anthropic): 1.0.0a2 (#33161) 2025-09-30 12:09:17 -04:00
Chester Curme
16a2d9759b revert changes to release workflow 2025-09-30 11:56:10 -04:00
Chester Curme
0e404adc60 🦍 langchain-core==1.0.0a5 2025-09-30 11:49:40 -04:00
Chester Curme
ffaedf7cfc fix 2025-09-30 11:29:39 -04:00
Chester Curme
b6ecc0b040 infra(fix): temporarily skip some release checks 2025-09-30 11:28:40 -04:00
ccurme
4d581000ad fix(infra): temporarily skip pre-release checks for alpha branch (#33160)
We have deliberately introduced a breaking change in
https://github.com/langchain-ai/langchain/pull/33021.

Will revert when we have compatible pre-releases for tested packages.
2025-09-30 11:10:06 -04:00
ccurme
06ddc57c7a release(core): 1.0.0a5 (#33158) 2025-09-30 10:37:06 -04:00
Chester Curme
49704ffc19 Merge branch 'master' into wip-v1.0
# Conflicts:
#	libs/partners/anthropic/langchain_anthropic/chat_models.py
#	libs/partners/anthropic/pyproject.toml
#	libs/partners/anthropic/uv.lock
2025-09-30 09:25:19 -04:00
Sydney Runkle
eed0f6c289 feat(langchain): todo middleware (#33152)
Porting the [planning middleware](39c0138d0f/src/deepagents/middleware.py (L21)) over from deepagents.

Also adding the ability to configure:
* System prompt
* Tool description

```py
from langchain_core.messages import HumanMessage

from langchain.agents import create_agent
from langchain.agents.middleware.planning import PlanningMiddleware

agent = create_agent("openai:gpt-4o", middleware=[PlanningMiddleware()])

# Run inside an async context; use `agent.invoke(...)` for the sync equivalent.
result = await agent.ainvoke({"messages": [HumanMessage("Help me refactor my codebase")]})

print(result["todos"])  # List of todo items with status tracking
```
2025-09-30 02:23:26 +00:00
Mason Daugherty
8926986483 chore: standardize translator named params 2025-09-29 16:42:59 -04:00
ccurme
729637a347 docs(anthropic): document support for memory tool and context management (#33149) 2025-09-29 16:38:01 -04:00
Mason Daugherty
3325196be1 fix(langchain): handle gpt-5 model name in init_chat_model (#33148)
Expands the matching so any `gpt-*` model name maps to OpenAI.
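A quick sketch (assumes an OpenAI API key is configured): a bare `gpt-*` name now resolves to the OpenAI provider without an explicit `openai:` prefix.

```py
from langchain.chat_models import init_chat_model

# Provider is inferred from the gpt-* prefix, equivalent to "openai:gpt-5".
model = init_chat_model("gpt-5")
print(model.invoke("Hello").content)
```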
2025-09-29 16:16:17 -04:00
Mason Daugherty
f402fdcea3 fix(langchain): add context_management to Anthropic chat model init (#33150) 2025-09-29 16:13:47 -04:00
ccurme
ca9217c02d release(anthropic): 0.3.21 (#33147) langchain-anthropic==0.3.21 2025-09-29 19:56:28 +00:00
ccurme
f9bae40475 feat(anthropic): support memory and context management features (#33146)
https://docs.claude.com/en/docs/build-with-claude/context-editing
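A rough sketch, assuming the `context_management` parameter added in the companion commit above accepts Anthropic's context-editing configuration; the edit type string and model name are examples taken from the linked docs, not guaranteed:

```py
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-sonnet-4-5",  # example model name
    # Example context-editing config; a beta flag may also be required
    # (see the linked docs for the exact schema).
    context_management={"edits": [{"type": "clear_tool_uses_20250919"}]},
)
```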

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-29 15:42:38 -04:00