Compare commits

3994 Commits

Author SHA1 Message Date
Chester Curme
2c938b787f Merge branch 'master' into cc/summarization_patch
# Conflicts:
#	libs/langchain_v1/langchain/agents/middleware/summarization.py
2025-12-02 10:07:51 -05:00
ccurme
c63f23d233 revert(model-profiles): update docs link (#34162) 2025-12-01 17:29:45 +00:00
Mason Daugherty
b7091d391d feat(anthropic): auto append relevant beta headers (#34113) 2025-12-01 12:20:41 -05:00
ccurme
7a2952210e fix(langchain): (SummarizationMiddleware) adjust token counts based on model (#34161) 2025-12-01 16:22:44 +00:00
ccurme
7549845d82 chore(anthropic): vcr integration test (#34160) 2025-12-01 15:28:28 +00:00
Chester Curme
fa18f8eda0 Merge branch 'master' into cc/summarization_patch 2025-12-01 09:39:54 -05:00
Mason Daugherty
878f033ed7 docs(langchain): docstrings for summarization middleware types (#34158)
improving devx :)
2025-12-01 09:39:33 -05:00
Steffen Hausmann
4065106c2e fix(langchain): add types to human_in_the_loop middleware (#34137)
The `HumanInTheLoopMiddleware` is missing a type annotation for the
context schema. Without the fix in this PR, the following code does not
type check:

```python
graph = create_agent(
    "gpt-5",
    tools=[send_email_tool, read_email_tool],
    middleware=[
        HumanInTheLoopMiddleware(
            interrupt_on={
                # Require approval or rejection for sending emails
                "send_email_tool": {
                    "allowed_decisions": ["approve", "reject"],
                },
                # Auto-approve reading emails
                "read_email_tool": False,
            }
        ),
    ],
    context_schema=ContextSchema,
)
```

```
Argument of type "list[HumanInTheLoopMiddleware]" cannot be assigned to parameter "middleware" of type "Sequence[AgentMiddleware[StateT_co@create_agent, ContextT@create_agent]]" in function "create_agent"
  "HumanInTheLoopMiddleware" is not assignable to "AgentMiddleware[AgentState[Unknown], ContextSchema | None]"
    Type parameter "ContextT@AgentMiddleware" is invariant, but "None" is not the same as "ContextSchema | None"
```
2025-12-01 08:46:38 -05:00
Mason Daugherty
12df938ace docs(core): update docstrings in RunnableConfig, dereference_refs (#34131) 2025-11-28 03:55:37 -05:00
Mason Daugherty
65ee43cc10 chore(infra): update agent files, remove top-level pyproject (#34128) 2025-11-27 21:06:43 -05:00
Mason Daugherty
fe7c000fc1 fix(model-profiles): update docs link (#34127) 2025-11-28 00:19:36 +00:00
Mason Daugherty
dad50e5624 chore(infra): updated allowed scopes in PR lint configuration (#34115) 2025-11-27 00:34:15 -05:00
Mason Daugherty
0a6d01e61d docs(anthropic,core,langchain): updates (#34106) 2025-11-25 17:58:09 -05:00
Mason Daugherty
c6f8b0875a style(core,langchain,qdrant): fix some docstrings for refs (#34105) 2025-11-25 13:58:53 -05:00
Mason Daugherty
4c3800d743 chore(infra): update PR template, agent files (#34104) 2025-11-25 13:58:41 -05:00
dependabot[bot]
7fe1c4b78f chore(deps): bump actions/checkout from 5 to 6 (#34083)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to
6.

Release notes (v6.0.0):
- Persist credentials to a separate file under `$RUNNER_TEMP` instead of directly in the local git config (actions/checkout#2286). Docker container action scenarios now require Actions Runner v2.329.0 or newer to access the persisted credentials.
- Update README to include Node.js 24 support details and requirements (actions/checkout#2248)

Full changelog: https://github.com/actions/checkout/compare/v5...v6

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-11-24 19:10:28 -05:00
Bagatur
c375732396 fix(core): handle missing StructuredPrompt schema (#34096)
- **Description:** if you don't pass in `schema=` or `schema_=` to `StructuredPrompt(...)` today you get a confusing `KeyError`. Raise a more readable `ValueError` instead.
- **Issue:** n/a
- **Dependencies:** n/a
2025-11-24 18:39:29 -05:00
Chester Curme
b2db842cd4 treat keep threshold as a hard cap 2025-11-24 11:21:28 -05:00
ccurme
9c21f83e82 release(langchain): 1.1 (#34090) 2025-11-24 10:27:13 -05:00
ccurme
880652b713 release: (integration packages): 1.1 (#34088) 2025-11-24 10:00:06 -05:00
Sydney Runkle
4ab94579ad feat(langchain): support SystemMessage in create_agent's system_prompt (#34055)
* `create_agent`'s `system_prompt` now allows `str | SystemMessage`
* added `system_message: SystemMessage` on `ModelRequest`
* `ModelRequest.system_prompt` is a function of `system_message.text` and is now deprecated
* disallow setting both `system_prompt` and `system_message`
* `ModelRequest.system_prompt` can still be set (with a custom setattr) for backwards compatibility, but the updates just get propagated to `ModelRequest.system_message` (see the sketch below)
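A minimal sketch of the new entry point, assuming the 1.1 `create_agent` API described above (the model id and prompt text are illustrative):

```python
from langchain.agents import create_agent
from langchain_core.messages import SystemMessage

# system_prompt now accepts a SystemMessage in addition to a plain string
agent = create_agent(
    "openai:gpt-4o-mini",
    tools=[],
    system_prompt=SystemMessage("You are a terse assistant."),
)
```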

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-11-24 14:53:57 +00:00
ccurme
eb0545a173 release: (integration packages) 1.1 (#34087) 2025-11-24 09:13:01 -05:00
ccurme
a2e389de9f release(fireworks): 1.1 (#34086) 2025-11-24 09:05:43 -05:00
Alex Kondratev
01573c1375 fix(core): ensure_ascii=False in PydanticOutputParser exception formatting (#34006)
- **Description:** When formatting an error, `PydanticOutputParser` dumps JSON with the default `ensure_ascii=True`
- **Issue:** Fixes #34005
- **Dependencies:** None

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. **We will not consider
a PR unless these three are passing in CI.** See [contribution
guidelines](https://docs.langchain.com/oss/python/contributing) for
more.
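
Why the default matters, shown with the standard library alone (this is not the parser's internals):

```python
import json

# ensure_ascii=True (json.dumps' default) escapes non-ASCII characters,
# making the offending completion unreadable in the error message:
json.dumps({"text": "héllo"})                      # '{"text": "h\\u00e9llo"}'

# ensure_ascii=False keeps the original text intact:
json.dumps({"text": "héllo"}, ensure_ascii=False)  # '{"text": "héllo"}'
```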

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-11-23 20:22:50 -05:00
Abhinav
2ba3ce81a6 fix(openai): make GPT-5 temperature validation case-insensitive (#34012)
Fixed a bug where GPT-5 temperature validation was case-sensitive,
causing issues when users
specified Azure deployment names or model names in uppercase (e.g.,
`"GPT-5-2025-01-01"`, `"GPT-5-NANO"`). The validation now correctly
handles model names regardless of case.

Changes made:
- Updated the `validate_temperature()` method in `BaseChatOpenAI` to perform case-insensitive model name comparisons
- Updated the `_get_encoding_model()` method to use case-insensitive checks for tiktoken encoder selection
- Added comprehensive unit tests to verify case-insensitive behavior with various case combinations

**Issue:** Fixes #34003

**Dependencies:** None

**Test Coverage:**
- All existing tests pass
- New test `test_gpt_5_temperature_case_insensitive` covers uppercase, lowercase, and mixed-case model names
- Tests verify both non-chat GPT-5 models (temperature removed) and chat models (temperature preserved)
- Lint and format checks pass (`make lint`, `make format`)
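
The shape of the check, as a generic sketch (the helper name is illustrative, not the actual `langchain-openai` internals):

```python
def _is_gpt5_model(model: str) -> bool:
    # Lowercase before comparing so "GPT-5-NANO" matches like "gpt-5-nano"
    return model.lower().startswith("gpt-5")

assert _is_gpt5_model("GPT-5-2025-01-01")
assert _is_gpt5_model("gpt-5-nano")
```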

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-23 20:17:03 -05:00
dependabot[bot]
4e4e5d7337 chore(infra): bump actions/github-script from 6 to 8 (#33991)
Bumps [actions/github-script](https://github.com/actions/github-script)
from 6 to 8.

Release notes:
- v8.0.0: updates Node.js version support to 24.x (actions/github-script#637) and adds README guidance for updating from v7 to v8 (actions/github-script#653). Minimum compatible Actions Runner version: v2.327.1.
- v7.1.0: dependency and workflow cleanups, clearer docs on passing inputs to the `script`, and an `octokit` instance exposed alongside `github` to make it easier to copy examples from the GitHub REST API or Octokit documentation (actions/github-script#508).

Full changelog: https://github.com/actions/github-script/compare/v6...v8

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-23 20:00:22 -05:00
Mason Daugherty
2a863727f9 fix(infra,core): nits (#34079)
* Add missing `nits` to allowed PR linting scopes
* Ensure `MAJOR.MINOR.PATCH` consistency in admonitions
* Ensure valid spacing in admonitions
2025-11-23 20:00:07 -05:00
dumko2001
30e2260e26 fix(core): Decouple provider prefix from model name in init_chat_model logic (#34046)

Addresses Issue #34007.
Fixes a bug where aliases like 'mistral:' were inferred correctly as a provider but the prefix was not stripped from the model name, causing API 400 errors. Added logic to strip the prefix when inference succeeds.

**Description**
This PR resolves a logic error in `init_chat_model` where inferred
provider aliases (specifically `mistral:`) were correctly identified but
not stripped from the model string.

**The Problem**
When passing a string like `mistral:ministral-8b-latest`, the factory
logic correctly inferred the provider as `mistralai` but failed to enter
the string-splitting block because the alias `mistral` was not in the
hardcoded `_SUPPORTED_PROVIDERS` list. This caused the raw string
`mistral:ministral-8b-latest` to be passed to the `ChatMistralAI`
constructor, resulting in a 400 API error.

**The Fix**
I updated `_parse_model` in
`libs/langchain/langchain/chat_models/base.py`. The logic now attempts
to infer the provider from the prefix *before* determining whether to
split the string. This ensures that valid aliases trigger the stripping
logic, passing only the clean `model_name` to the integration class.

**Issue**
Fixes #34007

**Dependencies**
None.

**Verification**
Validated locally with a reproduction script:
- Input: `mistral:ministral-8b-latest`
- Result: Successfully instantiates `ChatMistralAI` with
`model="ministral-8b-latest"`.
- Validated that standard inputs (e.g., `gpt-4o`) remain unaffected.
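
A repro sketch matching the verification above (requires `langchain-mistralai` and a Mistral API key):

```python
from langchain.chat_models import init_chat_model

# With the fix, "mistral:" is both inferred as the provider and stripped,
# so ChatMistralAI receives model="ministral-8b-latest"
llm = init_chat_model("mistral:ministral-8b-latest")
print(type(llm).__name__)  # ChatMistralAI
```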

Co-authored-by: ioop <ioop@Sidharths-MacBook-Air.local>
2025-11-23 19:52:24 -05:00
Mason Daugherty
cbaea351b2 style(core,langchain-classic,openai): fix griffe warnings (#34074) 2025-11-23 01:06:46 -05:00
ccurme
f070217c3b release(standard-tests): 1.0.2 (#34071)
Resolves https://github.com/langchain-ai/langchain/issues/34069
2025-11-22 18:35:09 -05:00
ccurme
0915682c12 chore(fireworks): update tested models (#34070) 2025-11-22 16:50:49 -05:00
Sydney Runkle
68ab9a1e56 fix: don't reorder tool calls in HITL middleware (#34023) 2025-11-22 05:10:32 -05:00
Mason Daugherty
47b79c30c0 chore(docs): fix a few refs syntax errors (#34044)
missing whitespace for some admonitions
2025-11-22 00:58:21 -05:00
ccurme
5899f980aa release(model-profiles): 0.0.5 (#34064) 2025-11-21 16:12:00 -05:00
ccurme
b0bf4afe81 release(core): 1.1.0 (#34063) 2025-11-21 15:57:25 -05:00
ccurme
33e5d01f7c feat(model-profiles): distribute data across packages (#34024) 2025-11-21 15:47:05 -05:00
Sydney Runkle
ee3373afc2 chore: add more robust test for runtime injection w/ explicit args_schema (#34051) 2025-11-20 16:51:37 +00:00
Sydney Runkle
b296f103a9 feat: ModelRetryMiddleware (#34027)
Closes https://github.com/langchain-ai/langchain/issues/33983

* Adds `ModelRetryMiddleware` modeled after `ToolRetryMiddleware`
* Uses `on_failure` modes of `error` and `continue` to match the
`exit_behavior` modes of model + tool call limit middleware
* In a backwards compatible manner, aligns the API of
`ToolRetryMiddleware`'s `on_failure` with the above
* Centralize common "retry" utils across these middlewares
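
A usage sketch only: the class name and `on_failure` modes come from this PR, while the import path and argument shape are assumptions.

```python
from langchain.agents.middleware import ModelRetryMiddleware  # assumed path

# Retry failed model calls; on exhaustion, continue rather than raising
retry = ModelRetryMiddleware(on_failure="continue")
```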
2025-11-20 11:42:33 -05:00
Eugene Yurtsev
525d5c0169 release(core): 1.0.7 (#34036)
Release core 1.0.7
2025-11-19 21:17:31 +00:00
Eugene Yurtsev
c4b6ba254e fix(core): fix validation for input variables in f-string templates, restrict functionality supported by jinja2, mustache templates (#34035)
* Fix validation for input variables in f-string templates
* Restrict functionality of features supported by jinja2 and mustache
templates
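
As a point of reference, a self-contained sketch of f-string input-variable extraction, the kind of validation being tightened here (standard library only, not the langchain-core implementation):

```python
from string import Formatter

def input_variables(template: str) -> set[str]:
    # Formatter().parse yields (literal_text, field_name, format_spec, conversion)
    return {field for _, field, _, _ in Formatter().parse(template) if field}

assert input_variables("Hello {name}, today is {day}") == {"name", "day"}
```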
2025-11-19 16:09:46 -05:00
Sydney Runkle
b7d1831f9d fix: deprecate setattr on ModelCallRequest (#34022)
* one alternative considered was setting `frozen=True` on the dataclass,
but this is breaking, so a deprecation is a nicer approach
2025-11-19 11:08:55 -05:00
ccurme
328ba36601 chore(openai): skip Azure text completions tests (#34021) 2025-11-19 09:29:12 -05:00
Sydney Runkle
6f677ef5c1 chore: temporarily skip openai integration tests (#34020)
getting around deprecated Azure model issues blocking the core release
2025-11-19 14:05:22 +00:00
Sydney Runkle
d47d41cbd3 release: langchain-core 1.0.6 (#34018) 2025-11-19 08:16:34 -05:00
William FH
32bbe99efc chore: Support tool runtime injection when custom args schema is prov… (#33999)
Support injection of injected args (like `InjectedToolCallId`,
`ToolRuntime`) when an `args_schema` is specified that doesn't contain
said args.

This allows for pydantic validation of other args while retaining the
ability to inject langchain specific arguments.

fixes https://github.com/langchain-ai/langchain/issues/33646
fixes https://github.com/langchain-ai/langchain/issues/31688

Taking a deep dive here reminded me that we definitely need to revisit
our internal tooling logic, but I don't think we should do that in this
PR.

---------

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
Co-authored-by: Sydney Runkle <sydneymarierunkle@gmail.com>
2025-11-18 17:09:59 +00:00
ccurme
990e346c46 release(anthropic): 1.1 (#33997) 2025-11-17 16:24:29 -05:00
ccurme
9b7792631d feat(anthropic): support native structured output feature and strict tool calling (#33980) 2025-11-17 16:14:20 -05:00
CKLogic
558a8fe25b feat(core): add proxy support for mermaid png rendering (#32400)
### Description

This PR adds support for configuring HTTP/HTTPS proxies when rendering
Mermaid diagrams as PNG images using the remote Mermaid.INK API. This
enhancement allows users in restricted network environments to access
the API via a proxy, making the remote rendering feature more robust and
accessible.

The changes include:
- Added optional `proxies` parameter to `draw_mermaid_png` and
`_render_mermaid_using_api` functions
- Updated `Graph.draw_mermaid_png` method to support and pass through
proxy configuration
- Enhanced docstrings with usage examples for the new parameter
- Maintained full backward compatibility with existing code

### Usage Example

```python
proxies = {
    "http": "http://127.0.0.1:7890",
    "https": "http://127.0.0.1:7890",
}

display(Image(chain.get_graph().draw_mermaid_png(proxies=proxies)))
```

### Dependencies

No new dependencies required. Uses existing `requests` library for HTTP
requests.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-17 12:45:17 -06:00
Mason Daugherty
52b1516d44 style(langchain): fix some middleware ref syntax (#33988) 2025-11-16 00:33:17 -05:00
Mason Daugherty
8a3bb73c05 release(openai): 1.0.3 (#33981)
- Respect 300k token limit for embeddings API requests #33668
- fix create_agent / response_format for Responses API #33939
- fix response.incomplete event is not handled when using
stream_mode=['messages'] #33871
2025-11-14 19:18:50 -05:00
Mason Daugherty
099c042395 refactor(openai): embedding utils and calculations (#33982)
Now returns (`_iter`, `tokens`, `indices`, `token_counts`). The `token_counts` are calculated directly during tokenization, which is more accurate and efficient than splitting strings later.
2025-11-14 19:18:37 -05:00
Kaparthy Reddy
2d4f00a451 fix(openai): Respect 300k token limit for embeddings API requests (#33668)
## Description

Fixes #31227 - Resolves the issue where `OpenAIEmbeddings` exceeds
OpenAI's 300,000 token per request limit, causing 400 BadRequest errors.

## Problem

When embedding large document sets, LangChain would send batches
containing more than 300,000 tokens in a single API request, causing
this error:
```
openai.BadRequestError: Error code: 400 - {'error': {'message': 'Requested 673477 tokens, max 300000 tokens per request'}}
```

The issue occurred because:
- The code chunks texts by `embedding_ctx_length` (8191 tokens per
chunk)
- Then batches chunks by `chunk_size` (default 1000 chunks per request)
- **But didn't check**: Total tokens per batch against OpenAI's 300k
limit
- Result: `1000 chunks × 8191 tokens = 8,191,000 tokens` → Exceeds
limit!

## Solution

This PR implements dynamic batching that respects the 300k token limit:

1. **Added constant**: `MAX_TOKENS_PER_REQUEST = 300000`
2. **Track token counts**: Calculate actual tokens for each chunk
3. **Dynamic batching**: Instead of fixed `chunk_size` batches,
accumulate chunks until approaching the 300k limit
4. **Applied to both sync and async**: Fixed both
`_get_len_safe_embeddings` and `_aget_len_safe_embeddings`
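
A generic sketch of the token-aware batching described above (`MAX_TOKENS_PER_REQUEST` matches the PR; the helper itself is illustrative, not the `langchain-openai` code):

```python
from collections.abc import Iterator

MAX_TOKENS_PER_REQUEST = 300_000

def batch_by_tokens(chunks: list[str], token_counts: list[int]) -> Iterator[list[str]]:
    batch: list[str] = []
    batch_tokens = 0
    for chunk, n_tokens in zip(chunks, token_counts):
        # Flush before this chunk would push the batch past the request limit
        if batch and batch_tokens + n_tokens > MAX_TOKENS_PER_REQUEST:
            yield batch
            batch, batch_tokens = [], 0
        batch.append(chunk)
        batch_tokens += n_tokens
    if batch:
        yield batch
```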

## Changes

- Modified `langchain_openai/embeddings/base.py`:
  - Added `MAX_TOKENS_PER_REQUEST` constant
  - Replaced fixed-size batching with token-aware dynamic batching
  - Applied to both sync (line ~478) and async (line ~527) methods
- Added test in `tests/unit_tests/embeddings/test_base.py`:
- `test_embeddings_respects_token_limit()` - Verifies large document
sets are properly batched

## Testing

All existing tests pass (280 passed, 4 xfailed, 1 xpassed).

New test verifies:
- Large document sets (500 texts × 1000 tokens = 500k tokens) are split
into multiple API calls
- Each API call respects the 300k token limit

## Usage

After this fix, users can embed large document sets without errors:
```python
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma
from langchain_text_splitters import CharacterTextSplitter

# This will now work without exceeding token limits
embeddings = OpenAIEmbeddings()
documents = CharacterTextSplitter().split_documents(large_documents)
Chroma.from_documents(documents, embeddings)
```

Resolves #31227

---------

Co-authored-by: Kaparthy Reddy <kaparthyreddy@Kaparthys-MacBook-Air.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-14 18:12:07 -05:00
Sydney Runkle
9bd401a6d4 fix: resumable shell, works w/ interrupts (#33978)
fixes https://github.com/langchain-ai/langchain/issues/33684

Now able to run this minimal snippet successfully

```py
import os

from langchain.agents import create_agent
from langchain.agents.middleware import (
    HostExecutionPolicy,
    HumanInTheLoopMiddleware,
    ShellToolMiddleware,
)
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import Command


shell_middleware = ShellToolMiddleware(
    workspace_root=os.getcwd(),
    env=os.environ,  # danger
    execution_policy=HostExecutionPolicy()
)

hil_middleware = HumanInTheLoopMiddleware(interrupt_on={"shell": True})

checkpointer = InMemorySaver()

agent = create_agent(
    "openai:gpt-4.1-mini",
    middleware=[shell_middleware, hil_middleware],
    checkpointer=checkpointer,
)

input_message = {"role": "user", "content": "run `which python`"}

config = {"configurable": {"thread_id": "1"}}

result = agent.invoke(
    {"messages": [input_message]},
    config=config,
    durability="exit",
)
```
2025-11-14 15:32:25 -05:00
ccurme
6aa3794b74 feat(langchain): reference model profiles for provider strategy (#33974) 2025-11-14 19:24:18 +00:00
Sydney Runkle
189dcf7295 chore: increase coverage for shell, filesystem, and summarization middleware (#33928)
cc generated; just a start here, but wanted to bump coverage up from ~70%
2025-11-14 13:30:36 -05:00
Sydney Runkle
1bc88028e6 fix(anthropic): execute bash + file tools via tool node (#33960)
* use `override` instead of directly patching things on `ModelRequest`
* rely on `ToolNode` for execution of tools related to said middleware,
using `wrap_model_call` to inject the relevant claude tool specs +
allowing tool node to forward them along to corresponding langchain tool
implementations
* making the same change for the native shell tool middleware
* allowing shell tool middleware to specify a name for the shell tool
(negative diff then for claude bash middleware)


long term I think the solution might be to attach metadata to a tool to map the provider spec to a langchain implementation; we could also take some lessons from the MCP front here.
2025-11-14 13:17:01 -05:00
Mason Daugherty
d2942351ce release(core): 1.0.5 (#33973) 2025-11-14 11:51:27 -05:00
Sydney Runkle
83c078f363 fix: adding missing async hooks (#33957)
* filling in missing async gaps
* using recommended tool runtime injection instead of injected state
  * updating tests to use helper function as well
2025-11-14 09:13:39 -05:00
ZhangShenao
26d39ffc4a docs: Fix doc links (#33964) 2025-11-14 09:07:32 -05:00
Mason Daugherty
421e2ceeee fix(core): don't mask exceptions (#33959) 2025-11-14 09:05:29 -05:00
Mason Daugherty
275dcbf69f docs(core): add clarity to base token counting methods (#33958)
Wasn't immediately obvious that `get_num_tokens_from_messages` adds
additional prefixes to represent user roles in conversation, which adds
to the overall token count.

```python
from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-2.5-flash")
num_tokens = llm.get_num_tokens("Hello, world!")
print(f"Number of tokens: {num_tokens}")
# Number of tokens: 4
```

```python
from langchain.messages import HumanMessage

messages = [HumanMessage(content="Hello, world!")]

num_tokens = llm.get_num_tokens_from_messages(messages)
print(f"Number of tokens: {num_tokens}")
# Number of tokens: 6
```
2025-11-13 17:15:47 -05:00
Sydney Runkle
9f87b27a5b fix: add filesystem middleware in init (#33955) 2025-11-13 15:07:33 -05:00
Mason Daugherty
b2e1196e29 chore(core,infra): nits (#33954) 2025-11-13 14:50:54 -05:00
Sydney Runkle
2dc1396380 chore(langchain): update deps (#33951) 2025-11-13 14:21:25 -05:00
Mason Daugherty
77941ab3ce feat(infra): add automatic issue labeling (#33952) 2025-11-13 14:13:52 -05:00
Mason Daugherty
ee19a30dde fix(groq): bump min ver for core dep (#33949)
Due to an issue with unit tests and the docs URL for exceptions
2025-11-13 11:46:54 -05:00
Mason Daugherty
5d799b3174 release(nomic): 1.0.1 (#33948)
support Python 3.14 #33655
2025-11-13 11:25:39 -05:00
Mason Daugherty
8f33a985a2 release(groq): 1.0.1 (#33947)
- fix: handle tool calls with no args #33896
- add prompt caching token usage details #33708
2025-11-13 11:25:00 -05:00
Mason Daugherty
78eeccef0e release(deepseek): 1.0.1 (#33946)
- support strict beta structured output #32727
2025-11-13 11:24:39 -05:00
ccurme
3d415441e8 fix(langchain, openai): backward compat for response_format (#33945) 2025-11-13 11:11:35 -05:00
ccurme
74385e0ebd fix(langchain, openai): fix create_agent / response_format for Responses API (#33939) 2025-11-13 10:18:15 -05:00
Christophe Bornet
2bfbc29ccc chore(core): fix some ruff TC rules (#33929)
fix some ruff TC rules but still don't enforce them as Pydantic model
fields use type annotations at runtime.
2025-11-12 14:07:19 -05:00
Christophe Bornet
ef79c26f18 chore(cli,standard-tests,text-splitters): fix some ruff TC rules (#33934)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-11-12 14:06:31 -05:00
ccurme
fbe32c8e89 release(anthropic): 1.0.3 (#33935) 2025-11-12 10:55:28 -05:00
Mohammad Mohtashim
2511c28f92 feat(anthropic): support code_execution_20250825 (#33925) 2025-11-12 10:44:51 -05:00
Sydney Runkle
637bb1cbbc feat: refactor tests coverage (#33927)
middleware tests have gotten quite unwieldy; this is a major restructuring that sets the stage for a coverage increase

this is super hard to review -- as proof that we've retained the important tests, I ran coverage on `master` and this branch and confirmed identical coverage.

* moving all middleware related tests to `agents/middleware` folder
* consolidating related test files
* adding coverage utility to makefile
2025-11-11 10:40:12 -05:00
Mason Daugherty
3dfea96ec1 chore: update README.md files (#33919) 2025-11-10 22:51:35 -05:00
ccurme
68643153e5 feat(langchain): support async summarization in SummarizationMiddleware (#33918) 2025-11-10 15:48:51 -05:00
Abbas Syed
462762f75b test(core): add comprehensive tests for groq block translator (#33906) 2025-11-10 15:45:36 -05:00
ccurme
4f3729c004 release(model-profiles): 0.0.4 (#33917) 2025-11-10 12:06:32 -05:00
Mason Daugherty
ba428cdf54 chore(infra): add note to pr linting workflow (#33916) 2025-11-10 11:49:31 -05:00
Mason Daugherty
69c7d1b01b test(groq,openai): add retries for flaky tests (#33914) 2025-11-10 10:36:11 -05:00
Mason Daugherty
733299ec13 revert(core): "applied secrets_map in load to plain string values" (#33913)
Reverts langchain-ai/langchain#33678

Breaking API change
2025-11-10 10:29:30 -05:00
ccurme
e1adf781c6 feat(langchain): (SummarizationMiddleware) support use of model context windows when triggering summarization (#33825) 2025-11-10 10:08:52 -05:00
Shahroz Ahmad
31b5e4810c feat(deepseek): support strict beta structured output (#32727)
**Description:** This PR adds support for DeepSeek's beta strict mode feature for structured outputs and tool calling. It overrides `bind_tools()` and `with_structured_output()` to automatically use DeepSeek's beta endpoint (https://api.deepseek.com/beta) when `strict=True`. Both methods need overriding because they're independent entry points and users can call either directly. When DeepSeek's strict mode graduates from beta, we can just remove both overridden methods. You can read more about the beta feature here: https://api-docs.deepseek.com/guides/function_calling#strict-mode-beta
  
**Issue:** Implements #32670 


**Dependencies:** None


**Sample Code**

```python
from langchain_deepseek import ChatDeepSeek
from pydantic import BaseModel, Field
from typing import Optional
import os


# Enter your DeepSeek API Key here
API_KEY = "YOUR_API_KEY"


# location, temperature, condition are required fields
# humidity is optional field with default value
class WeatherInfo(BaseModel):
    location: str = Field(description="City name")
    temperature: int = Field(description="Temperature in Celsius")
    condition: str = Field(description="Weather condition (sunny, cloudy, rainy)")
    humidity: Optional[int] = Field(default=None, description="Humidity percentage")


llm = ChatDeepSeek(
    model="deepseek-chat",
    api_key=API_KEY,
)

# just to confirm that a new instance will use the default base url (instead of beta)
print(f"Default API base: {llm.api_base}")



# Test 1: bind_tools with strict=True should list all the tool calls
print("\nTest 1: bind_tools with strict=True")
llm_with_tools = llm.bind_tools([WeatherInfo], strict=True)
response = llm_with_tools.invoke("Tell me the weather in New York. It's 22 degrees, sunny.")
print(response.tool_calls)



# Test 2: with_structured_output with strict=True
print("\nTest 2: with_structured_output with strict=True")
structured_llm = llm.with_structured_output(WeatherInfo, strict=True)
result = structured_llm.invoke("Tell me the weather in New York.")
print(f"  Result: {result}")
assert isinstance(result, WeatherInfo), "Result should be a WeatherInfo instance"
```

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-09 22:24:33 -05:00
Mason Daugherty
c6801fe159 chore: fix URL underlining in README.md (#33905) 2025-11-09 22:22:56 -05:00
AmazingcatAndrew
1b563067f8 fix(chroma): resolve OpenCLIP + Chroma image embedding test regression (#33899)
**Description:**  
Fixes the OpenCLIP × Chroma regression that caused nested embedding
errors when adding or searching image data.
The test case `test_openclip_chroma_embed_no_nesting_error` has been
restored and verified to work correctly with the current LangChain core
dependencies.
Functional validation confirms that `similarity_search_by_image` now
returns correct, metadata-preserving results.

**Issue:**  
Fixes #33851

**Dependencies:**  
No new dependencies introduced.  

**Testing:**  
All tests under  
```bash
uv run --group test pytest tests/unit_tests
```  
result:
```
30 passed in 91.26s (0:01:31)
```
have passed successfully using Python 3.13.9 and a uv-managed environment.
This confirms that the regression has been fixed.  

Running  
```bash
make test
```  
still produces a cleanup-time `AttributeError: 'ProactorEventLoop' object
has no attribute '_ssock'` on Windows (Python 3.13+).
This is a benign asyncio teardown message rather than a functional
failure.
`uv run pytest` closes event loops immediately after tests, while `make
test` invokes pytest through a secondary process layer that leaves a
background loop alive at interpreter shutdown.
This difference in teardown behavior explains the extra messages seen
only when using `make test`.

**Summary:**  
- Verified the OpenCLIP + Chroma image pipeline works correctly.  
- `uv run --group test pytest` fully passes; the fix is complete.  
- The residual `_ssock` warnings occur only during
Windows asyncio cleanup and are not related to this code change.

This is my first time contributing code, please contact me with any
questions

---

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-09 21:24:33 -05:00
Mason Daugherty
1996d81d72 chore(langchain): pass on reference docstrings (middleware) (#33904) 2025-11-09 21:18:28 -05:00
Mason Daugherty
ab0677c6f1 fix(groq): handle tool calls with no args (#33896)
When Groq returns tool calls with no arguments, it sends arguments:
`'null'` (JSON null), but LangChain's core parsing expects either a dict
or converts null to Python None, which fails the `isinstance(args_,
dict)` check and incorrectly marks the tool call as invalid.

Related to #32017
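
The failure mode in miniature (standard library only; the actual fix lives in `langchain-groq`'s parsing):

```python
import json

raw_args = "null"            # what Groq sends for a no-argument tool call
args = json.loads(raw_args)  # -> None, which fails the isinstance(args_, dict) check

# Normalizing to an empty dict keeps the tool call valid:
args = args or {}
assert args == {}
```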
2025-11-08 22:30:44 -05:00
artreimus
bdb53c93cc docs(langchain): correct IBM provider link in chat_models docstring (#33897)
**Description**
Fix broken link in the `chat_models` docstring. The **ibm** bullet
incorrectly linked to the DeepSeek provider page; update it to the
canonical IBM provider docs.

This only affects generated API reference content on
`reference.langchain.com`. No runtime behavior changes.

**Issue**
N/A (documentation-only).

**Dependencies**
None.

**Testing & quality**

* Ran `make format`, `make lint`, and `make test` in the package (no
code changes expected to affect tests).
2025-11-08 07:02:33 -06:00
Alazar Genene
94d5271cb5 fix(standard-tests): fix semantic typo in if statement (#33890) 2025-11-07 18:01:59 -05:00
ccurme
e499db4266 release(langchain): 1.0.5 (#33893) 2025-11-07 17:54:43 -05:00
npage902
cc3af82b47 fix(core): applied secrets_map in load to plain string values (#33678)
Replaces #33618 

**Description:** Fixes the bug in the `load()` function where secret
placeholders in plain dicts were not replaced, even if they match a key
in `secrets_map`, and adds a test case.

Example:
```py
obj = {"api_key": "__SECRET_API_KEY__"}
secret_key = "secret_key_1234"
secrets_map = {"__SECRET_API_KEY__": secret_key}
result = load(obj, secrets_map=secrets_map)
```
Before this change, printing `api_key` in `result` would output
`"__SECRET_API_KEY__"`. Now, it will properly output
`"secret_key_1234"`.

**Issue:** Fixes #31804 

**Dependencies:** None

`make format`, `make lint`, and `make test` have all passed on my
machine.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-07 17:14:13 -05:00
Mshari
9383b78be1 feat(groq): add prompt caching token usage details (#33708)
**Description:** 
Adds support for prompt caching usage metadata in ChatGroq. The
integration now captures cached token information from the Groq API
response and includes it in the `input_token_details` field of the
`usage_metadata`.

Changes:
- Created new `_create_usage_metadata()` helper function to centralize
usage metadata creation logic
- Extracts `cached_tokens` from `prompt_tokens_details` in API responses
and maps to `input_token_details.cache_read`
- Integrated the helper function in both streaming
(`_convert_chunk_to_message_chunk`) and non-streaming
(`_create_chat_result`) code paths
- Added comprehensive unit tests to verify caching metadata handling and
backward compatibility

This enables users to monitor prompt caching effectiveness when using
Groq models with prompt caching enabled.
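
A sketch of reading the new details (field names follow LangChain's standard `usage_metadata` schema; the model id is illustrative):

```python
from langchain_groq import ChatGroq

llm = ChatGroq(model="llama-3.3-70b-versatile")
msg = llm.invoke("Hello!")

# cache_read appears under input_token_details once the response reports
# cached prompt tokens; guard the lookups since both keys are optional
details = (msg.usage_metadata or {}).get("input_token_details", {})
print(details.get("cache_read"))
```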

**Issue:** N/A

**Dependencies:** None

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-11-07 17:05:22 -05:00
ccurme
3c492571ab release(anthropic): 1.0.2 (#33888) 2025-11-07 16:47:25 -05:00
ccurme
f2410f7ea7 revert: Support for SystemMessage in create_agent (#33889)
Reverts langchain-ai/langchain#33640

Introduces lint errors into langchain-anthropic

Should incorporate into 1.1 instead of patch release.
2025-11-07 16:44:11 -05:00
Mason Daugherty
91560b6a7a chore(infra): expand PR labeling (#33887) 2025-11-07 16:37:35 -05:00
ccurme
b1dd448233 release(core): 1.0.4 (#33886) 2025-11-07 16:26:44 -05:00
dy93
904daf6f40 feat(core): support draw subgraph using pygraphviz (#32966)
The `draw_png()` method currently does not support drawing subgraphs.
This PR adds the ability to render subgraph outlines, improving
visualization clarity when working with nested structures.
2025-11-07 15:58:35 -05:00
Mohammad Mohtashim
8e31a5d7bd fix(core): Fix tool name check in name_dict for PydanticToolsParser (#33479)
- **Description:** The root cause of this issue is that when a user
defines `model_config` in a `BaseModel`, the `{"type": <tool_name>}`
value is derived from the title specified in `model_config` when the
results are parsed
[here](https://vscode.dev/github/keenborder786/langchain/blob/fix/tool_name_dict/libs/core/langchain_core/output_parsers/openai_tools.py#L199).
However,
[tool.__name__](https://vscode.dev/github/keenborder786/langchain/blob/fix/tool_name_dict/libs/core/langchain_core/output_parsers/openai_tools.py#L331)
uses the class name (in uppercase) of the `BaseModel`, resulting in a
`KeyError` when a custom title is provided in `model_config`.
 

The best solution is to use the title provided in the `model_config` attribute when one is given, since that is what `type` will be parsed to, and fall back to `tool.__name__` otherwise. Note that this applies only to Pydantic v2.
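
A minimal illustration of the mismatch (a sketch; the parser lookup itself lives in `langchain-core`):

```python
from pydantic import BaseModel, ConfigDict

class Weather(BaseModel):
    # A custom title makes the emitted tool name "get_weather", while
    # tool.__name__ is still "Weather" -- the old lookup raised KeyError here.
    model_config = ConfigDict(title="get_weather")
    city: str

print(Weather.model_json_schema()["title"])  # get_weather
```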

  - **Issue:** #27260

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-11-07 15:39:47 -05:00
Sydney Runkle
ee630b4539 fix: bump up default recursion limit (#33881)
Fixes https://github.com/langchain-ai/langchain/issues/33740

We don't want to depend on the recursion limit here; model call limit middleware is more appropriate.
2025-11-07 13:49:12 -06:00
Jacob Lee
46971447df fix(core): Filter empty content blocks from formatted prompts (#32519)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-07 14:39:25 -05:00
Azibek
d8b94007c1 fix(huggingface): pass llm params to ChatHuggingFace (#32368)
This PR fixes #32234 and improves the HuggingFace chat model integration by:

- Ensuring `ChatHuggingFace` inherits key parameters (temperature, max_tokens, top_p, streaming, etc.) from the underlying LLM when not explicitly set.
- Adding and updating unit tests to verify property inheritance.

No breaking changes; these updates enhance reliability and maintainability.
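
A sketch of the inheritance in use (assumed repo id; requires `langchain-huggingface`):

```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

llm = HuggingFaceEndpoint(repo_id="HuggingFaceH4/zephyr-7b-beta", temperature=0.2)

# temperature is now inherited from the wrapped LLM when not set here
chat = ChatHuggingFace(llm=llm)
```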

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-07 14:29:15 -05:00
Mohammad Mohtashim
cf595dcc38 chore(langchain): Support for SystemMessage in create_agent (#33640)
- **Description:** Updated the function signature of `create_agent`: the
system prompt can be either a list or a string. I see no harm in doing
this, since `SystemMessage` accepts both.
- **Issue:** #33630

---------

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-11-07 13:00:38 -06:00
Copilot
d27211cfa7 fix(core): context preservation in shielded async callbacks (#32163)
The `@shielded` decorator in async callback managers was not preserving
context variables, breaking OpenTelemetry instrumentation and other
context-dependent functionality.

## Problem

When using async callbacks with the `@shielded` decorator (applied to
methods like `on_llm_end`, `on_chain_end`, etc.), context variables were
not being preserved across the shield boundary. This caused issues with:

- OpenTelemetry span context propagation
- Other instrumentation that relies on context variables
- Inconsistent context behavior between sync and async execution

The issue was reproducible with:

```python
from contextvars import copy_context
import asyncio
from langgraph.graph import StateGraph
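# NOTE: `graph` is assumed to be a compiled StateGraph; its construction
# was omitted in the original snippet.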

# Sync case: context remains consistent
print("SYNC")
print(copy_context())  # Same object
graph.invoke({"result": "init"})
print(copy_context())  # Same object

# Async case: context was inconsistent (before fix)
print("ASYNC") 
asyncio.run(graph.ainvoke({"result": "init"}))
print(copy_context())  # Different object than expected
```

## Root Cause

The original `shielded` decorator implementation:

```python
async def wrapped(*args: Any, **kwargs: Any) -> Any:
    return await asyncio.shield(func(*args, **kwargs))
```

Used `asyncio.shield()` directly without preserving the current
execution context, causing context variables to be lost.

## Solution

Modified the `shielded` decorator to:

1. Capture the current context using `copy_context()`
2. Create a task with explicit context using `asyncio.create_task(coro,
context=ctx)` for Python 3.11+
3. Shield the context-aware task
4. Fall back to regular task creation for Python < 3.11

```python
async def wrapped(*args: Any, **kwargs: Any) -> Any:
    # Capture the current context to preserve context variables
    ctx = copy_context()
    coro = func(*args, **kwargs)
    
    try:
        # Create a task with the captured context to preserve context variables
        task = asyncio.create_task(coro, context=ctx)
        return await asyncio.shield(task)
    except TypeError:
        # Python < 3.11 fallback
        task = asyncio.create_task(coro)
        return await asyncio.shield(task)
```
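
A standalone sketch of the 3.11+ mechanism the fix relies on (the `request_id` variable is hypothetical):

```python
import asyncio
from contextvars import ContextVar, copy_context

request_id: ContextVar[str] = ContextVar("request_id", default="unset")

async def read_var() -> str:
    return request_id.get()

async def main() -> None:
    request_id.set("abc-123")
    ctx = copy_context()
    # The task runs inside the captured context, so the shielded
    # coroutine still sees request_id = "abc-123".
    task = asyncio.create_task(read_var(), context=ctx)
    print(await asyncio.shield(task))

asyncio.run(main())
```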

## Testing

- Added comprehensive test
`test_shielded_callback_context_preservation()` that validates context
variables are preserved across shielded callback boundaries
- Verified the fix resolves the original LangGraph context consistency
issue
- Confirmed all existing callback manager tests still pass
- Validated OpenTelemetry-like instrumentation scenarios work correctly

The fix is minimal, maintains backward compatibility, and ensures proper
context preservation for both modern Python versions and older ones.

Fixes #31398.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-11-07 13:09:47 -05:00
Swastik-Swarup-Dash
ca1a3fbe88 fix(core): RunnablePick may not return a dict if keys is a string (#31321)
Change made From:
```python
class RunnablePick(RunnableSerializable[dict[str, Any], dict[str, Any]]):
```
To:
```python
class RunnablePick(RunnableSerializable[dict[str, Any], Any]):
```
As suggested by @cbornet 

Fixes #31309
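
For context, a small sketch of why the output type must be `Any` (values are illustrative):

```python
from langchain_core.runnables import RunnablePick

# With a single string key, RunnablePick returns the bare value...
pick_one = RunnablePick("name")
print(pick_one.invoke({"name": "ada", "age": 36}))  # 'ada'

# ...while a list of keys still yields a dict.
pick_many = RunnablePick(["name", "age"])
print(pick_many.invoke({"name": "ada", "age": 36, "extra": 1}))  # {'name': 'ada', 'age': 36}
```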

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-07 13:04:20 -05:00
williamzhu54
c955b53aed fix(core): fix Runnable parallel schema being empty when children runnable input schemas use TypedDict (#28196)
# Description
This submission is a part of a school project from our team of 4
@EminGul @williamzhu54 @annay54 @donttouch22.

Our pull request fixes the issue with the RunnableParallel schema being
empty by returning the correct output schema when children runnables'
input schemas use TypedDicts.

# Issue
Fixes #24326


# Dependencies
No extra dependencies required for this fix.

# Feedback
Any feedback and advice is gladly welcomed. Please feel free to let us
know what we can change or improve upon regarding this issue.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-07 12:01:21 -05:00
Christophe Bornet
2a626d9608 refactor(langchain): use create_importer for HypotheticalDocumentEmbedder (#32078) 2025-11-07 11:16:00 -05:00
Abhinav
0861cba04b fix(chroma): pydantic validation error when using retriever.invoke() (#31377) 2025-11-07 10:59:16 -05:00
Lê Nam Khánh
88246f45b3 docs: fix typos in libs/core/langchain_core/utils/function_calling.py (#33873) 2025-11-07 10:34:28 -05:00
Lê Nam Khánh
1d04514354 docs: fix typos in libs/core/tests/unit_tests/utils/test_strings.py (#33875) 2025-11-07 10:34:12 -05:00
Lê Nam Khánh
c2324b8f3e docs: fix typos in libs/langchain/langchain_classic/chains/summarize/chain.py (#33877) 2025-11-07 10:33:53 -05:00
Lê Nam Khánh
957ea65d12 docs: fix typos in libs/core/tests/unit_tests/indexing/test_hashed_document.py (#33874) 2025-11-07 10:32:20 -05:00
Lê Nam Khánh
00fa38a295 docs: fix typos in libs/core/tests/unit_tests/test_tools.py (#33876) 2025-11-07 10:31:57 -05:00
Lê Nam Khánh
9d98c1b669 docs: fix typos in libs/partners/groq/langchain_groq/chat_models.py (#33878) 2025-11-07 10:31:35 -05:00
Mahmut CAVDAR
00cc9d421f fix(langchain): Update langchain-core dependency version (#33775) 2025-11-07 10:31:06 -05:00
Mohammad Mohtashim
65716cf590 feat(perplexity): Created Dedicated Output Parser to Support Reasoning Model Output for perplexity (#33670) 2025-11-07 10:17:35 -05:00
riunyfir
1b77a191f4 feat: The response.incomplete event is not handled when using stream_mode=['messages'] (#33871) 2025-11-07 09:46:11 -05:00
repeat-Q
ebfde9173c docs: expand "Why use LangChain?" section in README (#33846) 2025-11-07 09:09:05 -05:00
Lê Nam Khánh
2fe0369049 docs: fix typos in some files (#33867) 2025-11-07 09:04:29 -05:00
Mason Daugherty
e023201d42 style: some cleanup (#33857) 2025-11-06 23:50:46 -05:00
Mason Daugherty
d40e340479 chore: attribute package change versions (#33854)
Needed to disambiguate within inherited docs
2025-11-06 16:57:30 -05:00
Sydney Runkle
9a09ed0659 fix: don't trace conditional edges and no todos in input state (#33842)
while experimenting w/ todo middleware

| Before | After |
|--------|-------|
| ![Screenshot 2025-11-05 at 1 56 21 PM](https://github.com/user-attachments/assets/63195ae4-8122-4662-8246-0fbc16cb1e22) | ![Screenshot 2025-11-05 at 1 56 03 PM](https://github.com/user-attachments/assets/255e2fa8-e52d-4d1a-949a-33df52ee6668) |
| Tracing conditional edges (verbose) | Not tracing conditional edges (cleaner) |
| ![Screenshot 2025-11-05 at 1 57 56 PM](https://github.com/user-attachments/assets/449ccfe9-4c21-4c87-8e0e-6e89d7a97611) | ![Screenshot 2025-11-05 at 1 56 58 PM](https://github.com/user-attachments/assets/c5c28d0e-2153-4572-af29-b2528761fec6) |
| Todos in input state (cluttered) | No todos in input state (cleaner) |
2025-11-05 14:25:57 -05:00
Mason Daugherty
5f27b546dd chore: update README.md with deepagents (#33843) 2025-11-05 14:22:20 -05:00
Mason Daugherty
022fdd52c3 fix(core): handle missing dependency version information (#33844)
Follow up to #33347

This continues to make searching issues difficult
2025-11-05 14:19:55 -05:00
Sydney Runkle
7946a8f64e release: langchain v1.0.4 (#33839) 2025-11-05 12:37:58 -05:00
Sydney Runkle
7af79039fc fix: only increment thread count on successful executions (#33837)
* for run count + thread count overflow we should warn model not to call
again
* don't tally mocked tool calls in thread limit -- consider the
following
  * run limit is 1 
  * thread limit is 3
  * first run calls the tool 2 times, 1 executes, 1 is blocked
* we should only count the successful execution above towards the total
thread count
* raise more helpful warnings on invalid config
2025-11-05 10:00:07 -05:00
Sydney Runkle
1755750ca1 fix: more robust tool call limit middleware (#33817)
* improving typing (covariance)
* adding in support for continuing w/ tool calls not yet at threshold,
switching default to continue
* moving all logic into after model

```py
ExitBehavior = Literal["continue", "error", "end"]
"""How to handle execution when tool call limits are exceeded.
- `"continue"`: Block exceeded tools with error messages, let other tools continue (default)
- `"error"`: Raise a `ToolCallLimitExceededError` exception
- `"end"`: Stop execution immediately, injecting a ToolMessage and an AI message
    for the single tool call that exceeded the limit. Raises `NotImplementedError`
    if there are multiple tool calls
"""
```
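
A hypothetical wiring sketch (the middleware and parameter names follow this PR's text; exact signatures may differ):

```python
from langchain.agents import create_agent
from langchain.agents.middleware import ToolCallLimitMiddleware
from langchain.tools import tool

@tool
def search(query: str) -> str:
    """Search for a query (stub)."""
    return f"results for {query}"

agent = create_agent(
    model="openai:gpt-4o",
    tools=[search],
    middleware=[ToolCallLimitMiddleware(thread_limit=3, exit_behavior="continue")],
)
```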
2025-11-05 09:18:21 -05:00
Mason Daugherty
ddb53672e2 chore(infra): remove unused pr-title-labeler.yml (#33831) 2025-11-04 20:06:52 -05:00
Mason Daugherty
eeae34972f chore(infra): drop langchain_v1 pr lint (#33830)
Just use `langchain`
2025-11-04 19:46:05 -05:00
Mason Daugherty
47d89b1e47 fix(langchain): remove Tigris (#33829)
Removing this code as there is no possible way for it to work.

See https://github.com/langchain-ai/langchain-community/pull/159
2025-11-04 19:45:52 -05:00
Mason Daugherty
ee0bdaeb79 chore: correct langchain-community references (#33827)
fix docstrings that referenced community versions of now-native packages
2025-11-04 17:01:35 -05:00
Christophe Bornet
915c446c48 chore(core): add ruff rule PLR2004 (#33706)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-11-04 13:33:37 -05:00
Mason Daugherty
d1e2099408 chore(core): clean pyproject formatting (#33821) 2025-11-04 18:21:15 +00:00
Mason Daugherty
6ea15b9efa docs(model-profiles): fix typo (#33820) 2025-11-04 18:19:55 +00:00
Mason Daugherty
69f33aaff5 chore(infra): remove unused poetry_setup action (#33819) 2025-11-04 13:18:55 -05:00
Mason Daugherty
3f66f102d2 chore: update issue template xref url (#33818) 2025-11-04 13:17:42 -05:00
Mason Daugherty
c6547f58b7 style(standard-tests): refs pass (#33814) 2025-11-04 00:01:16 -05:00
Mason Daugherty
dfb05a7fa0 style: refs pass (#33813) 2025-11-03 22:11:10 -05:00
ccurme
2f67f9ddcb release(huggingface): 1.0.1 (#33803) 2025-11-03 14:49:52 -05:00
Hyejeong Jo
0e36185933 fix(huggingface): add stream_usage support for ChatHuggingFace invoke/stream (#32708) 2025-11-03 14:44:32 -05:00
Michael Li
6617865440 fix(core): add no colors check (#33780)
Patch edge case in get_color_mapping
2025-11-03 13:23:23 -05:00
ccurme
6dba4912be release(model-profiles): 0.0.3 (#33798) 2025-11-03 11:17:08 -05:00
ccurme
7a3827471b fix(model-profiles): fix pdf_inputs field (#33797) 2025-11-03 11:10:33 -05:00
ccurme
f006bc4c7e feat(langchain): add model-profiles as optional dependency (#33794) 2025-11-03 10:13:58 -05:00
Mason Daugherty
0a442644e3 test(anthropic): add vcr to test_search_result_tool_message (#33793)
To fix nondeterministic results causing integration testing to sometimes
fail

Also speeds up the test from 10s to 0.5s

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-11-03 15:13:30 +00:00
repeat-Q
4960663546 docs: add Code of Conduct link to README (#33782)
**Description:** Add link to Code of Conduct in the Additional resources
section to make community guidelines more accessible for all
contributors.

**Rationale:** 
- **Community Health:** Making the Code of Conduct easily discoverable
helps set clear expectations for community behavior and fosters a more
inclusive, respectful environment
- **New Contributor Experience:** Many new contributors look to the
README as the primary source of project information. Having the Code of
Conduct readily available helps onboard them properly
- **Best Practices:** Prominent Code of Conduct links are considered a
best practice in open source projects and improve project accessibility
- **Low Impact:** This is a simple, non-breaking change that
significantly improves documentation completeness

**Issue:** N/A

**Dependencies:** None
2025-11-03 09:50:47 -05:00
ccurme
1381137c37 release(standard-tests): 1.0.1 (#33792) 2025-11-03 09:46:39 -05:00
ccurme
b4a042dfc4 release(core): 1.0.3 (#33768) 2025-11-03 09:19:32 -05:00
ccurme
81c4f21b52 fix(standard-tests): update multimodal tests (#33781) 2025-11-01 16:38:20 -04:00
Mason Daugherty
f2dab562a8 style: misc refs work (#33771) 2025-10-31 18:29:53 -04:00
ccurme
61196a8280 release(openai): 1.0.2 (#33769) 2025-10-31 14:21:32 -04:00
ccurme
7a97c31ac0 release(model-profiles): 0.0.2 (#33767) 2025-10-31 13:58:04 -04:00
ccurme
424214041e feat(model-profiles): support more providers (#33766) 2025-10-31 13:48:56 -04:00
ccurme
b06bd6a913 fix(model-profiles): add typing-extensions as explicit dep (#33762) 2025-10-31 11:21:55 -04:00
ccurme
1c762187e8 fix(model-profiles): remove langchain-core as a dependency (#33761) 2025-10-31 11:04:14 -04:00
Mason Daugherty
90aefc607f docs(core): improve tools module docstrings (#33755)
styling in `base.py`, content updates in
`libs/core/langchain_core/tools/convert.py`
2025-10-31 10:54:30 -04:00
ccurme
2ca73c479b fix(infra): fix release workflow for new packages (#33760) 2025-10-31 10:38:38 -04:00
ccurme
17c7c273b8 fix(infra): fix release workflow for new packages (#33759) 2025-10-31 10:21:12 -04:00
ccurme
493be259c3 feat(core): mint langchain-model-profiles and add profile property to BaseChatModel (#33728) 2025-10-31 09:44:46 -04:00
Mason Daugherty
106c6ac273 revert: "chore: skip anthropic tests while waiting on new anthropic release" (#33753)
Reverts langchain-ai/langchain#33739
2025-10-30 16:37:12 -04:00
Mason Daugherty
7aaaa371e7 release(anthropic): 1.0.1 (#33752) 2025-10-30 16:19:44 -04:00
Mason Daugherty
468dad1780 chore: use model IDs, latest anthropic models (#33747)
- standardize on using model IDs, no more aliases - makes future
maintenance easier
- use latest models in docstrings to highlight support
- remove remaining sonnet 3-7 usage due to deprecation

Depends on #33751
2025-10-30 16:13:28 -04:00
Mason Daugherty
32d294b89a fix(anthropic): clean up tests, update default model to use ID (#33751)
- use latest models in examples to highlight support
- standardize on using IDs in examples - no more aliases to improve
determinism in future tests
- bump lock
- in integration tests, fix stale cassettes and use `MODEL_NAME`
uniformly where possible
- add case for default max tokens for sonnet-4-5 (was missing)
2025-10-30 16:08:18 -04:00
Mason Daugherty
dc5b7dace8 test(openai): mark tests flaky (#33750)
see:
https://github.com/langchain-ai/langchain/actions/runs/18921929210/job/54020065079#step:10:560
2025-10-30 16:07:58 -04:00
Mason Daugherty
e00b7233cf chore(langchain): fix lint_imports paths (#33749) 2025-10-30 16:06:08 -04:00
Mason Daugherty
91f7e73c27 fix(langchain): use system_prompt in integration tests (#33748) 2025-10-30 16:05:57 -04:00
Shagun Gupta
75fff151e8 fix(openai): replace pytest.warns(None) with warnings.catch_warnings in ChatOpenAI test to resolve TypeError . Resolves issue #33705 (#33741) 2025-10-30 09:22:34 -04:00
Sydney Runkle
d05a0cb80d chore: skip anthropic tests while waiting on new anthropic release (#33739)
like https://github.com/langchain-ai/langchain/pull/33312/files

temporarily skip while waiting on new anthropic release

dependent on https://github.com/langchain-ai/langchain/pull/33737
2025-10-29 16:10:42 -07:00
Sydney Runkle
d24aa69ceb chore: don't pick up alphas for testing (#33738)
reverting change made in
eaa6dcce9e
2025-10-29 16:04:57 -07:00
Sydney Runkle
fabcacc3e5 chore: remove mentions of sonnet 3.5 (#33737)
see
https://docs.claude.com/en/docs/about-claude/model-deprecations#2025-08-13%3A-claude-sonnet-3-5-models
2025-10-29 15:49:27 -07:00
Christian Bromann
ac58d75113 fix(langchain_v1): remove thread_model_call_count and run_model_call_count from tool node test (#33725)
While working on ToolRuntime in TS I discovered that Python still uses
`thread_model_call_count` and `run_model_call_count` in ToolNode tests
which afaik we removed.
2025-10-29 15:36:18 -07:00
Sydney Runkle
28564ef94e release: core 1.0.2 and langchain 1.0.3 (#33736) 2025-10-29 15:30:17 -07:00
Christian Bromann
b62a9b57f3 fix(langchain_v1): removed unused functions in tool_call_limit middleware (#33735)
These functions seem unused and can be removed.
2025-10-29 15:21:38 -07:00
Sydney Runkle
76dd656f2a fix: filter out injected args from tracing (#33729)
this is CC generated and I want to do a thorough review + update the
tests. but should be able to ship today.

before eek

<img width="637" height="485" alt="Screenshot 2025-10-29 at 12 34 52 PM"
src="https://github.com/user-attachments/assets/121def87-fb7b-4847-b9e2-74f37b3b4763"
/>

now, woo

<img width="651" height="158" alt="Screenshot 2025-10-29 at 12 36 09 PM"
src="https://github.com/user-attachments/assets/1fc0e19e-a83f-417c-81e2-3aa0028630d6"
/>
2025-10-29 22:20:53 +00:00
ccurme
d218936763 fix(openai): update model used in test (#33733)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-29 17:09:18 -04:00
Mason Daugherty
123e29dc26 style: more refs fixes (#33730) 2025-10-29 16:34:46 -04:00
Sydney Runkle
6a1dca113e chore: move ToolNode improvements back to langgraph (#33634)
Moving all `ToolNode` related improvements back to LangGraph and
importing them in LC!
pairing w/ https://github.com/langchain-ai/langgraph/pull/6321

this fixes a couple of things:
1. `InjectedState`, store etc will continue to work as expected no
matter where the import is from
2. `ToolRuntime` is now usable w/in langgraph, woohoo!
2025-10-29 11:44:23 -07:00
Sydney Runkle
8aea6dd23a feat: support structured output retry middleware (#33663)
* attach the latest `AIMessage` to all `StructuredOutputError`s so that
relevant middleware can use as desired
* raise `StructuredOutputError` from `ProviderStrategy` logic in case of
failed parsing (so that we can retry from middleware)
* added a test suite w/ example custom middleware that retries for tool
+ provider strategy

Long term, we could add our own opinionated structured output retry
middleware, but this at least unblocks folks who want to use custom
retry logic in the short term :)

```py
class StructuredOutputRetryMiddleware(AgentMiddleware):
    """Retries model calls when structured output parsing fails."""

    def __init__(self, max_retries: int) -> None:
        self.max_retries = max_retries

    def wrap_model_call(
        self, request: ModelRequest, handler: Callable[[ModelRequest], ModelResponse]
    ) -> ModelResponse:
        for attempt in range(self.max_retries + 1):
            try:
                return handler(request)
            except StructuredOutputError as exc:
                if attempt == self.max_retries:
                    raise

                ai_content = exc.ai_message.content
                error_message = (
                    f"Your previous response was:\n{ai_content}\n\n"
                    f"Error: {exc}. Please try again with a valid response."
                )
                request.messages.append(HumanMessage(content=error_message))
```
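
A hypothetical wiring sketch for the custom middleware above (the `response_format` usage is an assumption):

```python
from pydantic import BaseModel
from langchain.agents import create_agent

class Weather(BaseModel):
    city: str
    temperature_c: float

agent = create_agent(
    model="openai:gpt-4o",
    tools=[],
    response_format=Weather,
    middleware=[StructuredOutputRetryMiddleware(max_retries=2)],
)
```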
2025-10-29 08:41:44 -07:00
Vincent Koc
78a2f86f70 fix(core): improve JSON get_format_instructions using Opik Agent Optimizer (#33718) 2025-10-29 11:05:24 -04:00
Mason Daugherty
b5e23e5823 fix(langchain_v1): correct ref url (#33715) 2025-10-28 23:29:19 -04:00
Mason Daugherty
7872643910 chore(standard-tests): Update API reference link in README (#33714) 2025-10-28 23:29:02 -04:00
Mason Daugherty
f15391f4fc chore(text-splitters): API reference link in README (#33713) 2025-10-28 23:28:48 -04:00
Mason Daugherty
ca9b81cc2e chore(infra): update README (#33712)
Updated the README to clarify LangChain's focus on building agents and
LLM-powered applications. Added a section for community discussions and
refined the ecosystem description.
2025-10-28 23:22:18 -04:00
Mason Daugherty
a2a9a02ecb style(core): more cleanup all around (#33711) 2025-10-28 22:58:19 -04:00
Mason Daugherty
e5e1d6c705 style: more refs work (#33707) 2025-10-28 14:43:28 -04:00
dependabot[bot]
6ee19473ba chore(infra): bump actions/download-artifact from 5 to 6 (#33682)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 5 to 6.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-28 14:07:16 -04:00
dependabot[bot]
a59551f3b4 chore(infra): bump actions/upload-artifact from 4 to 5 (#33681)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4 to 5.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-28 14:07:03 -04:00
ccurme
3286a98b27 fix(core): translate Google GenAI text blocks to v1 (#33699) 2025-10-28 09:53:01 -04:00
Mason Daugherty
62769a0dac feat(langchain): export UsageMetadata (#33692)
as well as `InputTokenDetails` and `OutputTokenDetails` from
`langchain_core.messages`
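
For example (assuming the v1 re-export path; a sketch, not verified against the final release):

```python
# Previously these lived only in langchain_core.messages.
from langchain.messages import InputTokenDetails, OutputTokenDetails, UsageMetadata
```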
2025-10-27 19:47:41 -04:00
Mason Daugherty
f94108b4bc fix: links (#33691)
* X-ref to new docs
* Formatting updates
2025-10-27 19:04:29 -04:00
ccurme
60a0ff8217 fix(standard-tests): fix tool description in agent loop test (#33690) 2025-10-27 15:02:13 -04:00
Christophe Bornet
b3dffc70e2 fix(core): fix PydanticOutputParser's get_format_instructions for v1 models (#32479) 2025-10-27 13:44:20 -04:00
Arun Prasad
86ac39e11f refactor(core): Minor refactor for code readability (#33674) 2025-10-27 11:39:36 -04:00
John Eismeier
6e036d38b2 fix(infra): add emacs backup files to gitignore (#33675) 2025-10-27 11:26:47 -04:00
Shanto Mathew
2d30ebb53b docs(langchain): clarify create_tool_calling_agent system_prompt formatting and add troubleshooting (#33679) 2025-10-27 11:18:10 -04:00
Arun Prasad
b3934b9580 refactor(anthropic): remove unnecessary url check (#33671)
if "url" in annotation: in Line 15 , already ensures "url" is key in
annotation , so no need to check again to set "url" key in out object
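
A minimal sketch of the redundancy (data shapes are hypothetical):

```python
annotation = {"url": "https://example.com", "title": "Example"}
out: dict[str, str] = {}

# The membership test already guarantees the key exists, so no second
# check is needed before copying it over.
if "url" in annotation:
    out["url"] = annotation["url"]
```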

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-27 11:13:54 -04:00
Mason Daugherty
09102a634a fix: update some links (#33686) 2025-10-27 11:12:11 -04:00
ccurme
95ff5901a1 chore(anthropic): update integration test cassette (#33685) 2025-10-27 10:43:36 -04:00
Mason Daugherty
f3d7152074 style(core): more refs work (#33664) 2025-10-24 16:06:24 -04:00
Christophe Bornet
dff37f6048 fix(nomic): support Python 3.14 (#33655)
Pyarrow just published 3.14 binaries

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-24 13:32:07 -04:00
ccurme
832036ef0f chore(infra): remove openai from langchain-core release test matrix (#33661) 2025-10-24 11:55:33 -04:00
ccurme
f1742954ab fix(core): make handling of schemas more defensive (#33660) 2025-10-24 11:10:06 -04:00
ccurme
6ab0476676 fix(openai): update test (#33659) 2025-10-24 11:04:33 -04:00
ccurme
d36413c821 release(mistralai): 1.0.1 (#33657) 2025-10-24 09:50:23 -04:00
Romi45
99097f799c fix(mistralai): resolve duplicate tool calls when converting to mistral chat message (#33648) 2025-10-24 09:40:31 -04:00
Mohammad Mohtashim
0666571519 chore(perplexity): Added all keys for usage metadata (#33480) 2025-10-24 09:32:35 -04:00
ccurme
ef85161525 release(core): 1.0.1 (#33639) 2025-10-22 14:25:21 -04:00
ccurme
079eb808f8 release(qdrant): 1.1.0 (#33638) 2025-10-22 13:24:36 -04:00
Anush
39fb2d1a3b feat(qdrant): Use Qdrant's built-in MMR search (#32302) 2025-10-22 13:19:32 -04:00
Mason Daugherty
db7f2db1ae feat(infra): langchain docs MCP (#33636) 2025-10-22 11:50:35 -04:00
Yu Zhong
df46c82ae2 feat(core): automatically set required to include all properties in strict mode (#32930) 2025-10-22 11:31:08 -04:00
Eugene Yurtsev
f8adbbc461 chore(langchain_v1): bump version from 1.0.1 to 1.0.2 (#33629)
Release 1.0.2
2025-10-21 17:05:51 -04:00
Eugene Yurtsev
17f0716d6c fix(langchain_v1): remove non llm controllable params from tool message on invocation failure (#33625)
The LLM shouldn't be seeing parameters it cannot control in the
ToolMessage error it gets when it invokes a tool with incorrect args.

This fixes the behavior within langchain to address the immediate issue.

We may want to change the behavior in langchain_core as well to prevent
validation of injected arguments. But this would be done in a separate
change
2025-10-21 15:40:30 -04:00
Ali Ismail
5acd34ae92 feat(openai): add unit test for streaming error in _generate (#33134) 2025-10-21 15:08:37 -04:00
Aaron Sequeira
84dbebac4f fix(langchain): correctly initialize huggingface models in init_chat_model (#33167) 2025-10-21 14:21:46 -04:00
Mohammad Mohtashim
eddfcd2c88 docs(core): Updated docs for mustache_template_vars (#33481) 2025-10-21 13:01:25 -04:00
noeliecherrier
9f470d297f feat(mistralai): remove tenacity retries for embeddings (#33491) 2025-10-21 12:35:10 -04:00
ccurme
2222470f69 release(openai): 1.0.1 (#33624) 2025-10-21 11:37:47 -04:00
Marlene
78175fcb96 feat(openai): add callable support for openai_api_key parameter (#33532) 2025-10-21 11:16:02 -04:00
Mason Daugherty
d9e659ca4f style: even more refs work (#33619) 2025-10-21 01:09:52 -04:00
Mason Daugherty
e731ba1e47 style: more refs work (#33616) 2025-10-20 18:40:19 -04:00
Cole Murray
557fc9a817 fix(infra): harden pydantic test workflow against command injection (#33446) 2025-10-20 10:35:48 -04:00
Christophe Bornet
965dac74e5 chore(infra): test pydantic with python 3.12 (#33421) 2025-10-20 10:28:41 -04:00
Sydney Runkle
7d7a50d4cc release(langchain_v1): 1.0.1 (#33610) 2025-10-20 13:03:16 +00:00
Sydney Runkle
9319eecaba fix(langchain_v1): ToolRuntime default for args (#33606)
Added some noqas; this is a quick patch to fix a bug uncovered in
the quickstart. Will resolve fully depending on where we centralize
ToolNode stuff.
2025-10-20 08:45:50 -04:00
Mason Daugherty
a47386f6dc style: more refs polishing (#33601) 2025-10-20 00:52:52 -04:00
Mason Daugherty
aaf88c157f docs(langchain): update reference documentation to note moved embeddings modules (#33600) 2025-10-19 20:10:25 -04:00
Christophe Bornet
3dcf4ae1e9 fix(cli): support Python 3.14 (#33598)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-19 19:37:34 -04:00
Christophe Bornet
3391168777 ci(infra): test CodSpeed with Python 3.13 (#33599) 2025-10-19 19:33:20 -04:00
repeat-Q
28728dca9f docs: add contributing guide to README (#33490)
**Description:** Added a beginner-friendly tip to the README to help
first-time contributors find a starting point. This is a documentation
improvement aimed at lowering the barrier for newcomers to participate
in open source.

**Issue:** No related issue

**Dependencies:** None

---

## Note to maintainers

I'm new to open source and this is my first PR! If there's anything that
needs improvement, please guide me and I'll be happy to learn and make
changes. Thank you for your patience! 😊

## What does this PR do?
- Added a noticeable beginner tip box after the badges section in README
- Provided specific guidance (Good First Issues link)
- Encourages newcomers to start with documentation fixes

## Why is this change needed?
- Makes it easier for new contributors to get started
- Provides clear direction and reduces confusion
- Creates a more welcoming open source community environment

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-19 00:01:21 -04:00
Christophe Bornet
1ae7fb7694 chore(langchain-classic): remove unused duckdb dependency (#33582)
* The dependency is not used.
* It takes a long time to build in Python 3.14 as there are no prebuilt
binaries yet. This slows down CI a lot.

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-17 18:45:30 -04:00
Mason Daugherty
7aef3388d9 release(xai): 1.0.0 (#33591) 2025-10-17 17:42:29 -04:00
Mason Daugherty
1d056487c7 style(anthropic): use aliases for model names (#33590) 2025-10-17 21:40:22 +00:00
Mason Daugherty
64e6798a39 chore: update pyproject.toml url entries (#33587) 2025-10-17 17:16:55 -04:00
Sydney Runkle
4a65e827f7 release(langchain_v1): v1.0.0 (#33588)
waiting on langgraph bump
2025-10-17 16:49:07 -04:00
Sydney Runkle
35b89b8b10 fix: shell tool middleware (#33589)
the fact that this was broken showcases that we need significantly
better test coverage, this is literally the most minimalistic usage of
this middleware there could be 😿

will document these two gotchas better for custom middleware

```py
from langchain.agents.middleware.shell_tool import ShellToolMiddleware
from langchain.agents import create_agent

agent = create_agent(model="openai:gpt-4", middleware=[ShellToolMiddleware()])
agent.invoke({"messages":[{"role": "user", "content": "hi"}]})
```
2025-10-17 16:48:30 -04:00
Mason Daugherty
8efa75d04c fix(xai): inject model_provider in response_metadata (#33543)
plus tests minor rfc
2025-10-17 16:11:03 -04:00
Sydney Runkle
8fd54f13b5 feat(langchain_v1): Python 3.14 support (#33560)
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
2025-10-17 15:10:01 -04:00
ccurme
952fa8aa99 fix(langchain,langchain_v1): enable huggingface optional dep (#33586) 2025-10-17 18:42:53 +00:00
Mason Daugherty
3948273350 release(prompty): 1.0.0 (#33584) 2025-10-17 14:10:01 -04:00
Eugene Yurtsev
a16307fe84 chore(infra): change scope names (#33580)
Change scope names
2025-10-17 15:55:58 +00:00
Eugene Yurtsev
af6f2cf366 chore(langchain_legacy): bump version 1.0 (#33579)
Bump version for langchain-classic
2025-10-17 11:55:13 -04:00
Mason Daugherty
6997867f0e release(deepseek): 1.0.0 (#33581) 2025-10-17 11:52:08 -04:00
Mason Daugherty
de791bc3ef fix(deepseek): inject model_provider in response_metadata (#33544)
& slight tests rfc
2025-10-17 11:47:59 -04:00
Mason Daugherty
69c6e7de59 release(ollama): 1.0.0 (#33567) 2025-10-17 11:39:24 -04:00
Mason Daugherty
10cee59f2e release(mistralai): 1.0.0 (#33573) 2025-10-17 11:33:17 -04:00
Mason Daugherty
58f521ea4f release(fireworks): 1.0.0 (#33571) 2025-10-17 11:32:57 -04:00
Mason Daugherty
a194ae6959 release(huggingface): 1.0.0 (#33572) 2025-10-17 11:26:48 -04:00
ccurme
4d623133a5 release(openai): 1.0.0 (#33578) 2025-10-17 11:25:25 -04:00
Mason Daugherty
8fbf192c2a release(perplexity): 1.0.0 (#33576) 2025-10-17 11:18:43 -04:00
Mason Daugherty
241a382fba docs: fix Anthropic, OpenAI docstrings (#33566)
minor
2025-10-17 11:18:32 -04:00
Mason Daugherty
c194ee2046 release(exa): 1.0.0 (#33570) 2025-10-17 11:17:43 -04:00
Mason Daugherty
85567f1dc3 release(qdrant): 1.0.0 (#33577) 2025-10-17 11:17:01 -04:00
Mason Daugherty
6f4978041e release(nomic): 1.0.0 (#33574) 2025-10-17 11:16:41 -04:00
Mason Daugherty
f1fca4f46f release(chroma): 1.0.0 (#33569) 2025-10-17 11:16:24 -04:00
Mason Daugherty
2b899fe961 release(groq): 1.0.0 (#33568) 2025-10-17 11:15:57 -04:00
ccurme
3152d25811 fix: support python 3.14 in various projects (#33575)
Co-authored-by: cbornet <cbornet@hotmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-17 11:06:23 -04:00
ccurme
3b8cb3d4b6 release(text-splitters): 1.0.0 (#33565) 2025-10-17 10:30:42 -04:00
ccurme
15047ae28a release(anthropic): 1.0.0 (#33564) 2025-10-17 10:03:04 -04:00
ccurme
888fa3a2fb release(standard-tests): 1.0.0 (#33563) 2025-10-17 09:53:59 -04:00
ccurme
90346b8a35 release(core): 1.0.0 (#33562) 2025-10-17 09:22:45 -04:00
Christophe Bornet
2d5efd7b29 fix(core): support for Python 3.14 (#33461)
* Fix detection of support for context in `asyncio.create_task`
* Fix: in Python 3.14, `asyncio.get_event_loop()` raises an exception if
there's no running loop (see the sketch below the list)
* Bump pydantic to version 2.12
* Skip tests with pydantic v1 models, as they are not supported on
Python 3.14
* Run core tests with Python 3.14 in CI.
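
A minimal sketch of the 3.14 behavior change (the fallback pattern is illustrative):

```python
import asyncio

# On Python 3.14, asyncio.get_event_loop() raises RuntimeError when no
# loop is running, instead of implicitly creating one.
try:
    loop = asyncio.get_event_loop()
except RuntimeError:
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
```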

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-10-17 05:27:34 -04:00
Mason Daugherty
1d2273597a docs: more fixes for refs (#33554) 2025-10-16 22:54:16 -04:00
Sydney Runkle
9dd494ddcd fix(langchain): conditional tools -> end edge when all client side calls return direct (#33550)
mostly #33520 
also tacking on change to make sure we're only looking at client side
calls for the jump to end

---------

Co-authored-by: Nuno Campos <nuno@boringbits.io>
2025-10-17 02:35:47 +00:00
Sydney Runkle
2fa07b19f6 chore(langchain_v1): relax typing on input state (#33552)
so we don't get type errors when invoking w/ dict type (openai format)
messages

would love to have types for these eventually so we can get proper
checking

before
<img width="759" height="257" alt="Screenshot 2025-10-16 at 9 46 08 PM"
src="https://github.com/user-attachments/assets/aabe716f-6d8f-429d-ae47-31dd8617752d"
/>

after
<img width="751" height="228" alt="Screenshot 2025-10-16 at 9 51 09 PM"
src="https://github.com/user-attachments/assets/e74dcf12-874b-43ca-9d5b-5575ef8ced73"
/>
2025-10-16 22:35:28 -04:00
Nuno Campos
a022e3c14d feat(langchain_v1): Add ShellToolMiddleware and ClaudeBashToolMiddleware (#33527)
- Both middleware share the same implementation, the only difference is
one uses Claude's server-side tool definition, whereas the other one
uses a generic tool definition compatible with all models
- Implemented 3 execution policies (responsible for actually running the
shell process)
- HostExecutionPolicy runs the shell as subprocess, appropriate for
already sandboxed environments, eg when run inside a dedicated docker
container
- CodexSandboxExecutionPolicy runs the shell using the sandbox command
from the Codex CLI which implements sandboxing techniques for Linux and
Mac OS.
- DockerExecutionPolicy runs the shell inside a dedicated Docker
container for isolation.
- Implements all behaviours described in
https://docs.claude.com/en/docs/agents-and-tools/tool-use/bash-tool#handle-large-outputs
including timeouts, truncation, output redaction, etc

---------

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
Co-authored-by: Sydney Runkle <sydneymarierunkle@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-10-16 22:32:11 -04:00
Eugene Yurtsev
e0e11423d9 feat(langchain): file-search middleware (#33551)
File search middleware from
https://github.com/langchain-ai/langchain/pull/33527
2025-10-16 21:52:18 -04:00
Eugene Yurtsev
34de8ec1f3 feat(anthropic): add more anthropic middleware (#33510)
Middleware Classes

Text Editor Tools
- StateClaudeTextEditorToolMiddleware: In-memory text editor using agent
state
- FilesystemClaudeTextEditorToolMiddleware: Text editor operating on
real filesystem

Implementing Claude's text editor tools

https://docs.claude.com/en/docs/agents-and-tools/tool-use/text-editor-tool
Operations: view, create, str_replace, insert

Memory Tools
- StateClaudeMemoryToolMiddleware: Memory persistence in agent state
- FilesystemClaudeMemoryToolMiddleware: Memory persistence on filesystem

Implementing Claude's memory tools
https://docs.claude.com/en/docs/agents-and-tools/tool-use/memory-tool
Operations: Same as text editor plus delete and rename

File Search Tools
- StateFileSearchMiddleware: Search state-based files

Provides Glob and Grep tools with same schema as used by Claude Code
(but compatible with any model)
- Glob: Pattern matching (e.g., **/*.py, src/**/*.ts), sorted by
modification time
- Grep: Regex content search with output modes (files_with_matches,
content, count)

Usage

```python
from langchain.agents import create_agent
from langchain.agents.middleware import (
    StateTextEditorToolMiddleware,
    StateFileSearchMiddleware,
)

agent = create_agent(
    model=model,
    tools=[],
    middleware=[
        StateTextEditorToolMiddleware(),
        StateFileSearchMiddleware(),
    ],
)
```

---------

Co-authored-by: Nuno Campos <nuno@boringbits.io>
2025-10-16 21:07:14 -04:00
Sydney Runkle
3d288fd610 release: joint rcs for core + langchain (#33549) 2025-10-17 01:00:47 +00:00
Sydney Runkle
055cccde28 chore(langchain): allow injection of ToolRuntime and generic ToolRuntime[ContextT, StateT] (#33546)
Adds special private helper to allow direct injection of `ToolRuntime`
in tools, plus adding guards for generic annotations w/ `get_origin`.

Went w/ the private helper so that we didn't change behavior for other
injected types.
2025-10-16 20:55:19 -04:00
Mason Daugherty
361514d11d docs(exa): fix documentation link (#33545) 2025-10-16 23:53:52 +00:00
Eugene Yurtsev
90b68059f5 fix(langchain): revert conditional edge from tools to end (#33520) (#33539)
This is causing an issue with one of the middlewares
2025-10-16 17:19:26 -04:00
Mason Daugherty
87ad5276e4 chore: add v1 migration link to MIGRATE.md (#33537) 2025-10-16 20:31:02 +00:00
Mason Daugherty
5489df75d7 release(huggingface): 1.0.0a1 (#33536) 2025-10-16 16:21:38 -04:00
Sydney Runkle
c6b3f5b888 release(langchain): cut rc (#33534) 2025-10-16 19:55:38 +00:00
Mason Daugherty
15db024811 chore: more sweeping (#33533)
more fixes for refs
2025-10-16 15:44:56 -04:00
Jacob Lee
6d73003b17 feat(openai): Populate OpenAI service tier token details (#32721) 2025-10-16 15:14:57 -04:00
ccurme
13259a109a release(standard-tests): 1.0.0rc1 (#33531) 2025-10-16 14:09:41 -04:00
ccurme
aa78be574a release(core): 1.0.0rc2 (#33530) 2025-10-16 13:00:39 -04:00
Mason Daugherty
d0dd1b30d1 docs(langchain_v1): remove absent arg descriptions (#33529) 2025-10-16 12:25:18 -04:00
Mason Daugherty
0338a15192 docs(chroma): remove an extra arg space (#33526) 2025-10-16 16:05:51 +00:00
Sydney Runkle
e10d99b728 fix(langchain): conditional edge from tools to end (#33520) 2025-10-16 11:56:45 -04:00
Mason Daugherty
c9018f81ec docs(anthropic): update extended thinking docs and fix urls (#33525)
new urls

extended thinking isn't just 3.7 anymore
2025-10-16 11:18:47 -04:00
Eugene Yurtsev
31718492c7 fix(langchain_v1): relax tool node validation to allow claude text editing tools (#33512)
Relax tool node validation to allow claude text editing tools
2025-10-16 14:56:41 +00:00
Sydney Runkle
2209878f48 chore(langchain): update state schema doc (#33524) 2025-10-16 10:40:54 -04:00
Sydney Runkle
dd77dbe3ab chore(langchain_v1): adding back state_schema to create_agent (#33519)
To make migration easier, things are more backwards compat

Very minimal footprint here

Will need to update the migration guide and other docs w/ this change
2025-10-16 10:12:34 -04:00
ccurme
eb19e12527 feat(core): support vertexai standard content (#33521) 2025-10-16 10:08:58 -04:00
Sydney Runkle
551e86a517 chore(langchain): use runtime not tool_runtime for injected tool arg (#33522)
fast follow to https://github.com/langchain-ai/langchain/pull/33500
2025-10-16 13:53:54 +00:00
Eugene Yurtsev
8734c05f64 feat(langchain_v1): tool retry middleware (#33503)
Adds `ToolRetryMiddleware` to automatically retry failed tool calls with
configurable exponential backoff, exception filtering, and error
handling.

## Example

```python
from langchain.agents import create_agent
from langchain.agents.middleware import ToolRetryMiddleware
from langchain_openai import ChatOpenAI

# Retry up to 3 times with exponential backoff
retry = ToolRetryMiddleware(
    max_retries=3,
    initial_delay=1.0,
    backoff_factor=2.0,
)

agent = create_agent(
    model=ChatOpenAI(model="gpt-4"),
    tools=[search_tool, database_tool],
    middleware=[retry],
)

# Tool failures are automatically retried
result = agent.invoke({"messages": [{"role": "user", "content": "Search for AI news"}]})
```

For advanced usage with specific exception handling:

```python
from requests.exceptions import Timeout, HTTPError

def should_retry(exc: Exception) -> bool:
    # Only retry on 5xx errors or timeouts
    if isinstance(exc, HTTPError):
        return 500 <= exc.response.status_code < 600
    return isinstance(exc, Timeout)

retry = ToolRetryMiddleware(
    max_retries=4,
    retry_on=should_retry,
    tools=["search_database"],  # Only apply to specific tools
)
```
2025-10-16 09:47:43 -04:00
Sydney Runkle
0c8cbfb7de chore(langchain_v1): switch order of params in ToolRuntime (#33518)
To match `Runtime`
2025-10-16 12:09:05 +00:00
Sydney Runkle
89c3428d85 feat(langchain_v1): injected runtime (#33500)
The goal here is twofold:

1. Improved devx for injecting args into tools
2. Support runtime injection for Python 3.10 async

One consequence of this PR is that `ToolNode` now expects `config`
available with `runtime`, which only happens in LangGraph execution
contexts. Hence the config patch for tests.

Are we ok reserving `tool_runtime`?

before, eek:
```py
from langchain.agents import create_agent
from langchain.tools import tool, InjectedState, InjectedStore
from langgraph.runtime import get_runtime
from typing_extensions import Annotated
from langgraph.store.base import BaseStore

@tool
def do_something(
    arg: int,
    state: Annotated[dict, InjectedState],
    store: Annotated[BaseStore, InjectedStore],
) -> None:
    """does something."""
    print(state)
    print(store)
    print(get_runtime().context)
    ...
```

after, woo!
```py
from langchain.agents import create_agent
from langchain.tools import tool, ToolRuntime

@tool
def do_something_better(
    arg: int,
    tool_runtime: ToolRuntime,
) -> None:
    """does something better."""
    print(tool_runtime.state)
    print(tool_runtime.store)
    print(tool_runtime.context)
    ...
```

```python
@dataclass
class ToolRuntime(InjectedToolArg, Generic[StateT, ContextT]):
    state: StateT
    context: ContextT
    config: RunnableConfig
    tool_call_id: str
    stream_writer: StreamWriter
    store: BaseStore | None
```
2025-10-16 07:41:09 -04:00
Mason Daugherty
707e96c541 style: more sweeping refs work (#33513) 2025-10-15 23:33:39 -04:00
Mason Daugherty
26e0a00c4c style: more work for refs (#33508)
Largely:
- Remove explicit `"Default is x"` since new refs show default inferred
from sig
- Inline code (useful for eventual parsing)
- Fix code block rendering (indentations)
2025-10-15 18:46:55 -04:00
Eugene Yurtsev
d0f8f00e7e release(anthropic): 1.0.0a5 (#33507)
Release anthropic
2025-10-15 21:31:52 +00:00
Eugene Yurtsev
a39132787c feat(anthropic): add async implementation to middleware (#33506)
Add async implementation to middleware
2025-10-15 17:05:39 -04:00
Sydney Runkle
296994ebf0 release(langchain_v1): 1.0.0a15 (#33505) 2025-10-15 20:48:18 +00:00
ccurme
b5b31eec88 feat(core): include original block type in server tool results for google-genai (#33502) 2025-10-15 16:26:54 -04:00
Sydney Runkle
8f6851c349 fix(langchain_v1): keep state to relevant middlewares for tool/model call limits (#33493)
The one risk point that I can see here is that model + tool call
counting now occurs in the `after_model` hook, which introduces order
dependency (what if you have HITL execute before this hook and we jump
early to `model`, for example?).

This is something users can work around at the moment and we can
document. We could also introduce a priority concept to middleware.
2025-10-15 14:24:59 -04:00
Nuno Campos
0788461abd feat(openai): Add openai moderation middleware (#33492) 2025-10-15 13:59:49 -04:00
ccurme
3bfd1f6d8a release(core): 1.0.0rc1 (#33497) 2025-10-15 13:02:35 -04:00
Mason Daugherty
d83c3a12bf chore(core): delete BaseMemory, move to langchain-classic (#33373) 2025-10-15 12:55:23 -04:00
Mason Daugherty
79200cf3c2 docs: update package READMEs (#33488) 2025-10-15 10:49:35 -04:00
ccurme
bcb6789888 fix(anthropic): set langgraph-prebuilt dep explicitly (#33495) 2025-10-15 14:44:37 +00:00
ccurme
89b7933ef1 feat(standard-tests): parametrize tool calling test (#33496) 2025-10-15 14:43:09 +00:00
ccurme
4da5a8081f fix(core): propagate extras when aggregating tool calls in v1 content (#33494) 2025-10-15 10:38:16 -04:00
Mason Daugherty
53e9f00804 chore(core): delete items marked for removal in schemas.py (#33375) 2025-10-15 09:56:27 -04:00
Chenyang Li
6e25e185f6 fix(docs): Fix several typos and grammar (#33487)
Just typo changes

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-14 20:04:14 -04:00
Mason Daugherty
68ceeb64f6 chore(core): delete function_calling.py utils marked for removal (#33376) 2025-10-14 16:13:19 -04:00
Mason Daugherty
edae976b81 chore(core): delete pydantic_v1/ (#33374) 2025-10-14 16:08:24 -04:00
ccurme
9f4366bc9d feat(mistralai): support reasoning feature and v1 content (#33485)
Not yet supported: server-side tool calls
2025-10-14 15:19:44 -04:00
Eugene Yurtsev
99e0a60aab chore(langchain_v1): remove invocation request (#33482)
Remove ToolNode primitives from langchain
2025-10-14 15:07:30 -04:00
Eugene Yurtsev
d38729fbac feat(langchain_v1): add async implementations to wrap_model_call (#33467)
Add async implementations to wrap_model_call for prebuilt middleware
2025-10-14 17:39:38 +00:00
gsmini
ff0d21cfd5 fix(langchain_v1): can not import "wrap_tool_call" from agents.… (#33472)
Fix: cannot import `wrap_tool_call` from `langchain.agents.middleware`.
```python

from langchain.agents import create_agent
from langchain.agents.middleware import wrap_tool_call # here !
from langchain_core.messages import ToolMessage

@wrap_tool_call
def handle_tool_errors(request, handler):
    """Handle tool execution errors with custom messages."""
    try:
        return handler(request)
    except Exception as e:
        # Return a custom error message to the model
        return ToolMessage(
            content=f"Tool error: Please check your input and try again. ({str(e)})",
            tool_call_id=request.tool_call["id"]
        )

agent = create_agent(
    model="openai:gpt-4o",
    tools=[search, calculate],
    middleware=[handle_tool_errors]
)
```
> example code from:
https://docs.langchain.com/oss/python/langchain/agents#tool-error-handling
2025-10-14 13:39:25 -04:00
Eugene Yurtsev
9140a7cb86 feat(langchain_v1): add override to model request and tool call request (#33465)
Add override to model request and tool call request
2025-10-14 10:31:46 -04:00
ccurme
41fe18bc80 chore(groq): fix integration tests (#33478)
- add missing cassette
- update streaming metadata test for v1
2025-10-14 14:16:34 +00:00
Mason Daugherty
9105573cb3 docs: create_agent style and clarify system_prompt (#33470) 2025-10-14 09:56:54 -04:00
Sydney Runkle
fff87e95d1 fix(langchain): rename PlanningMiddleware to TodoListMiddleware (#33476) 2025-10-14 09:06:06 -04:00
ccurme
9beb29a34c chore(mistralai): delete redundant tests (#33468) 2025-10-13 21:28:51 +00:00
ChoYongHo | 조용호
ca00f5aed9 fix(langchain_v1): export ModelResponse from agents.middleware (#33453) (#33454)
## Description

  Fixes #33453

`ModelResponse` was defined in `types.py` and included in its `__all__`
list, but was not exported from the middleware package's `__init__.py`.
This caused `ImportError` when attempting to import it directly
from `langchain.agents.middleware`, despite being documented as a public
export.

  ## Changes

- Added `ModelResponse` to the import statement in
`langchain/agents/middleware/__init__.py`
- Added `ModelResponse` to the `__all__` list in
`langchain/agents/middleware/__init__.py`
- Added comprehensive unit tests in `test_imports.py` to verify the
import works correctly

## Issue

The original issue reported that the following import failed:

```python
from langchain.agents.middleware import ModelResponse
# ImportError: cannot import name 'ModelResponse' from 'langchain.agents.middleware'
```

The workaround was to import from the submodule:

```python
from langchain.agents.middleware.types import ModelResponse  # Workaround
```

## Solution

After this fix, `ModelResponse` can be imported directly as documented:

```python
from langchain.agents.middleware import ModelResponse  # Now works!
```

## Testing

- Added 3 unit tests in `tests/unit_tests/agents/middleware/test_imports.py`
- All tests pass locally: `make format`, `make lint`, `make test`
- Verified `ModelResponse` is properly exported and importable
- Verified `ModelResponse` appears in the `__all__` list

## Dependencies

None. This is a simple export fix with no new dependencies.

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-10-13 16:02:30 -04:00
dependabot[bot]
637777b8e7 chore(infra): bump astral-sh/setup-uv from 6 to 7 (#33457)
Bumps [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) from 6
to 7.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/setup-uv/releases">astral-sh/setup-uv's
releases</a>.</em></p>
<blockquote>
<h2>v7.0.0 🌈 node24 and a lot of bugfixes</h2>
<h2>Changes</h2>
<p>This release comes with a load of bug fixes and a speed up. Because
of switching from node20 to node24 it is also a breaking change. If you
are running on GitHub hosted runners this will just work, if you are
using self-hosted runners make sure, that your runners are up to date.
If you followed the normal installation instructions your self-hosted
runner will keep itself updated.</p>
<p>This release also removes the deprecated input
<code>server-url</code> which was used to download uv releases from a
different server.
The <a
href="https://github.com/astral-sh/setup-uv?tab=readme-ov-file#manifest-file">manifest-file</a>
input supersedes that functionality by adding a flexible way to define
available versions and where they should be downloaded from.</p>
<h3>Fixes</h3>
<ul>
<li>The action now respects when the environment variable
<code>UV_CACHE_DIR</code> is already set and does not overwrite it. It
now also finds <a
href="https://docs.astral.sh/uv/reference/settings/#cache-dir">cache-dir</a>
settings in config files if you set them.</li>
<li>Some users encountered problems that <a
href="https://github.com/astral-sh/setup-uv?tab=readme-ov-file#disable-cache-pruning">cache
pruning</a> took forever because they had some <code>uv</code> processes
running in the background. Starting with uv version <code>0.8.24</code>
this action uses <code>uv cache prune --ci --force</code> to ignore the
running processes</li>
<li>If you just want to install uv but not have it available in path,
this action now respects <code>UV_NO_MODIFY_PATH</code></li>
<li>Some other actions also set the env var <code>UV_CACHE_DIR</code>.
This action can now deal with that but as this could lead to unwanted
behavior in some edgecases a warning is now displayed.</li>
</ul>
<h3>Improvements</h3>
<p>If you are using minimum version specifiers for the version of uv to
install for example</p>
<pre lang="toml"><code>[tool.uv]
required-version = &quot;&gt;=0.8.17&quot;
</code></pre>
<p>This action now detects that and directly uses the latest version.
Previously it would download all available releases from the uv repo
to determine the highest matching candidate for the version specifier,
which took much more time.</p>
<p>If you are using other specifiers like <code>0.8.x</code> this action
still needs to download all available releases because the specifier
defines an upper bound (not 0.9.0 or later) and &quot;latest&quot; would
possibly not satisfy that.</p>
<h2>🚨 Breaking changes</h2>
<ul>
<li>Use node24 instead of node20 <a
href="https://github.com/eifinger"><code>@​eifinger</code></a> (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/608">#608</a>)</li>
<li>Remove deprecated input server-url <a
href="https://github.com/eifinger"><code>@​eifinger</code></a> (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/607">#607</a>)</li>
</ul>
<h2>🐛 Bug fixes</h2>
<ul>
<li>Respect UV_CACHE_DIR and cache-dir <a
href="https://github.com/eifinger"><code>@​eifinger</code></a> (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/612">#612</a>)</li>
<li>Use --force when pruning cache <a
href="https://github.com/eifinger"><code>@​eifinger</code></a> (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/611">#611</a>)</li>
<li>Respect UV_NO_MODIFY_PATH <a
href="https://github.com/eifinger"><code>@​eifinger</code></a> (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/603">#603</a>)</li>
<li>Warn when <code>UV_CACHE_DIR</code> has changed <a
href="https://github.com/jamesbraza"><code>@​jamesbraza</code></a> (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/601">#601</a>)</li>
</ul>
<h2>🚀 Enhancements</h2>
<ul>
<li>Shortcut to latest version for minimum version specifier <a
href="https://github.com/eifinger"><code>@​eifinger</code></a> (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/598">#598</a>)</li>
</ul>
<h2>🧰 Maintenance</h2>
<ul>
<li>Bump dependencies <a
href="https://github.com/eifinger"><code>@​eifinger</code></a> (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/613">#613</a>)</li>
<li>Fix test-uv-no-modify-path <a
href="https://github.com/eifinger"><code>@​eifinger</code></a> (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/604">#604</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="3259c6206f"><code>3259c62</code></a>
Bump deps (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/633">#633</a>)</li>
<li><a
href="bf8e8ed895"><code>bf8e8ed</code></a>
Split up documentation (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/632">#632</a>)</li>
<li><a
href="9c6b5e9fb5"><code>9c6b5e9</code></a>
Add resolution-strategy input to support oldest compatible version
selection ...</li>
<li><a
href="a5129e99f4"><code>a5129e9</code></a>
Add copilot-instructions.md (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/630">#630</a>)</li>
<li><a
href="d18bcc753a"><code>d18bcc7</code></a>
Add value of UV_PYTHON_INSTALL_DIR to path (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/628">#628</a>)</li>
<li><a
href="bd1f875aba"><code>bd1f875</code></a>
Set output venv when activate-environment is used (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/627">#627</a>)</li>
<li><a
href="1a91c3851d"><code>1a91c38</code></a>
chore: update known checksums for 0.9.2 (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/626">#626</a>)</li>
<li><a
href="c79f606987"><code>c79f606</code></a>
chore: update known checksums for 0.9.1 (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/625">#625</a>)</li>
<li><a
href="e0249f1599"><code>e0249f1</code></a>
Fall back to PR for updating known versions (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/623">#623</a>)</li>
<li><a
href="6d2eb15b49"><code>6d2eb15</code></a>
Cache python installs (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/621">#621</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/setup-uv/compare/v6...v7">compare
view</a></li>
</ul>
</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-13 15:21:12 -04:00
Eugene Yurtsev
1cf851e054 chore(langchain_v1,anthropic): migrate anthropic middleware to langchain_anthropic (#33463)
Migrate prompt caching implementation into langchain_anthropic.middleware
2025-10-13 15:12:54 -04:00
ccurme
961f965f0c feat(groq): support built-in tools in message content (#33459) 2025-10-13 15:06:01 -04:00
Sydney Runkle
760fc3bc12 chore(langchain_v1): use args for HITL (#33442) 2025-10-11 07:12:46 -04:00
Eugene Yurtsev
e3fc7d8aa6 chore(langchain_v1): bump release version (#33440)
bump v1 for release
2025-10-10 21:51:00 -04:00
Eugene Yurtsev
2b3b209e40 chore(langchain_v1): improve error message (#33433)
Make error messages actionable for sync / async decorators
2025-10-10 17:18:20 -04:00
ccurme
78903ac285 fix(openai): conditionally skip test (#33431) 2025-10-10 21:04:18 +00:00
ccurme
f361acc11c chore(anthropic): speed up integration tests (#33430) 2025-10-10 20:57:44 +00:00
Eugene Yurtsev
ed185c0026 chore(langchain_v1): remove langchain_text_splitters from test group (#33425)
Remove langchain_text_splitters from test group in langchain_v1
2025-10-10 16:56:14 -04:00
Eugene Yurtsev
6dc34beb71 chore(langchain_v1): stricter handling of sync vs. async for wrap_model_call and wrap_tool_call (#33429)
Wrap model call and wrap tool call
2025-10-10 16:54:42 -04:00
Eugene Yurtsev
c2205f88e6 chore(langchain_v1): further namespace clean up (#33428)
Reduce exposed namespace for now
2025-10-10 20:48:24 +00:00
ccurme
abdbe185c5 release(anthropic): 1.0.0a4 (#33427) 2025-10-10 16:39:58 -04:00
ccurme
c1b816cb7e fix(fireworks): parse standard blocks in input (#33426) 2025-10-10 16:18:37 -04:00
Eugene Yurtsev
0559558715 feat(langchain_v1): add async implementation for wrap_tool_call (#33420)
Add async implementation. No automatic delegation to sync at the moment.
2025-10-10 15:07:19 -04:00
Eugene Yurtsev
75965474fc chore(langchain_v1): tool error exceptions (#33424)
Tool error exceptions
2025-10-10 15:06:40 -04:00
Mason Daugherty
5dc014fdf4 chore(core): delete get_relevant_documents (#33378)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-10-10 14:51:54 -04:00
Mason Daugherty
291a9fcea1 style: llm -> model (#33423) 2025-10-10 13:19:13 -04:00
Christophe Bornet
dd994b9d7f chore(langchain): remove arg types from docstrings (#33413)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-10 11:51:00 -04:00
Christophe Bornet
83901b30e3 chore(text-splitters): remove arg types from docstrings (#33406)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-10 11:37:53 -04:00
Mason Daugherty
bcfa21a6e7 chore(infra): remove Poetry setup and dependencies (#33418)
AWS now uses UV
2025-10-10 11:29:52 -04:00
ccurme
af1da28459 feat(langchain_v1): expand message exports (#33419) 2025-10-10 15:14:51 +00:00
Mason Daugherty
ed2ee4e8cc style: fix tables, capitalization (#33417) 2025-10-10 11:09:59 -04:00
Sydney Runkle
f293c8ffd6 chore(langchain_v1): add RemoveMessage (#33416) 2025-10-10 10:49:18 -04:00
Sydney Runkle
714c370191 release(langchain_v1): v1.0.0a13 (#33415) 2025-10-10 10:42:35 -04:00
Sydney Runkle
a29d4e9c3a fix(langchain_v1): out of date docstring (#33414) 2025-10-10 14:12:07 +00:00
Eugene Yurtsev
74983f8a96 chore(langchain_v1): update on_tool_call to wrap_tool (#33410)
Improve naming on ToolNode for on_tool_call interceptor
2025-10-10 03:19:45 +00:00
Eugene Yurtsev
11c5b86981 chore(langchain_v1): update wrap_on_model return (#33408)
Update wrap on model return to capture the full return type of the model
so we can accommodate dynamic structured outputs.
2025-10-09 23:01:21 -04:00
Mason Daugherty
383f4c0ee9 chore: update docs links in README.md (#33409) 2025-10-10 02:54:48 +00:00
Eugene Yurtsev
045e7ad4a1 feat(langchain_v1): tool emulator (#33357)
This is tool emulation middleware. The idea is to help test an agent
whose tools either take a long time to run or are expensive to set up,
by simulating their behavior instead of executing them.
2025-10-10 01:39:40 +00:00
Anika
0e80291804 fix(core): handle parent/child mustache vars (#33345)
**Description:**

Currently `mustache_schema("{{x.y}} {{x}}")` will error; this PR fixes that.

**Issue:** N/A
**Dependencies:** N/A
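
A quick repro sketch of the failing case (the import path `langchain_core.prompts.string` and the printed output are assumptions, not taken from this PR):

```python
# Hedged repro; the import path is an assumption.
from langchain_core.prompts.string import mustache_schema

schema = mustache_schema("{{x.y}} {{x}}")  # raised an error before this fix
print(schema)
```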

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-10-09 18:45:32 -04:00
Sydney Runkle
c99773b652 feat(langchain_v1): refactoring HITL API (#33397)
Easiest to review side by side (not inline)

* Adding `dict` type requests + responses so that we can ship config w/
interrupts. Also more extensible.
* Keeping things generic in terms of `interrupt_on` rather than
`tool_config`
* Renaming allowed decisions -- approve, edit, reject
* Draws a distinction between actions (requested and performed by the
agent; in this case tool calls, though we generalize beyond that) and
decisions (human feedback on those actions)

New request structure

```py
class Action(TypedDict):
    """Represents an action with a name and arguments."""

    name: str
    """The type or name of action being requested (e.g., "add_numbers")."""

    arguments: dict[str, Any]
    """Key-value pairs of arguments needed for the action (e.g., {"a": 1, "b": 2})."""


DecisionType = Literal["approve", "edit", "reject"]


class ReviewConfig(TypedDict):
    """Policy for reviewing a HITL request."""

    action_name: str
    """Name of the action associated with this review configuration."""

    allowed_decisions: list[DecisionType]
    """The decisions that are allowed for this request."""

    description: NotRequired[str]
    """The description of the action to be reviewed."""

    arguments_schema: NotRequired[dict[str, Any]]
    """JSON schema for the arguments associated with the action, if edits are allowed."""

class HITLRequest(TypedDict):
    """Request for human feedback on a sequence of actions requested by a model."""

    action_requests: list[Action]
    """A list of agent actions for human review."""

    review_configs: list[ReviewConfig]
    """Review configuration for all possible actions."""
```
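
For concreteness, a minimal `HITLRequest` instance under the TypedDicts above (all values illustrative):

```python
# Illustrative instance of the request structure defined above.
request: HITLRequest = {
    "action_requests": [
        {"name": "send_email", "arguments": {"to": "user@example.com"}},
    ],
    "review_configs": [
        {"action_name": "send_email", "allowed_decisions": ["approve", "reject"]},
    ],
}
```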

New response structure

```py
class ApproveDecision(TypedDict):
    """Response when a human approves the action."""

    type: Literal["approve"]
    """The type of response when a human approves the action."""


class EditDecision(TypedDict):
    """Response when a human edits the action."""

    type: Literal["edit"]
    """The type of response when a human edits the action."""

    edited_action: Action
    """Edited action for the agent to perform.

    Ex: for a tool call, a human reviewer can edit the tool name and args.
    """


class RejectDecision(TypedDict):
    """Response when a human rejects the action."""

    type: Literal["reject"]
    """The type of response when a human rejects the action."""

    message: NotRequired[str]
    """The message sent to the model explaining why the action was rejected."""


Decision = ApproveDecision | EditDecision | RejectDecision


class HITLResponse(TypedDict):
    """Response payload for a HITLRequest."""

    decisions: list[Decision]
    """The decisions made by the human."""
```

User facing API:

NEW

```py
HumanInTheLoopMiddleware(interrupt_on={
    'send_email': True,
    # can also use a callable for description that takes tool call, state, and runtime
    'execute_sql': {
        'allowed_decisions': ['approve', 'edit', 'reject'],
        'description': 'please review sensitive tool execution',
    },
})

Command(resume={"decisions": [{"type": "approve"}, {"type": "reject", "message": "db down"}]})
```

OLD

```py
HumanInTheLoopMiddleware(interrupt_on={
    'send_email': True,
    'execute_sql': {
        'allow_accept': True, 
        'allow_edit': True, 
        'allow_respond': True, 
        'description': 'please review sensitive tool execution',
    },
})

Command(resume=[{"type": "approve"}, {"type": "reject", "message": "db down"}])
```
2025-10-09 17:51:28 -04:00
Mason Daugherty
5f9e3e33cd style: remove Defaults to None (#33404) 2025-10-09 17:27:35 -04:00
Mason Daugherty
6fc21afbc9 style: .. code-block:: admonition translations (#33400)
biiiiiiiiiiiiiiiigggggggg pass
2025-10-09 16:52:58 -04:00
ccurme
50445d4a27 fix(standard-tests): update Anthropic inputs test (#33391)
Since 10/7 Anthropic will raise BadRequestError if given an invalid
thinking signature.
2025-10-09 14:13:26 -04:00
ccurme
11a2efe49b fix(anthropic): handle empty AIMessage (#33390) 2025-10-09 13:57:42 -04:00
Mason Daugherty
d8a680ee57 style: address Sphinx double-backtick snippet syntax (#33389) 2025-10-09 13:35:51 -04:00
Christophe Bornet
f405a2c57d chore(core): remove arg types from docstrings (#33388)
* Remove types args
* Remove types from Returns
* Remove types from Yield
* Replace `kwargs` by `**kwargs` when needed
2025-10-09 13:13:23 -04:00
Mason Daugherty
3576e690fa chore: update Sphinx links to markdown (#33386) 2025-10-09 11:54:14 -04:00
Mason Daugherty
057ac361ef chore: delete .claude/settings.local.json (#33387) 2025-10-09 11:44:57 -04:00
Christophe Bornet
d9675a4a20 fix(langchain): improve and fix typing (#32383) 2025-10-09 10:55:31 -04:00
ccurme
c27271f3ae fix(openai): update file index key name (#33350) 2025-10-09 13:15:27 +00:00
ccurme
a3e4f4c2e3 fix(core): override streaming callback if streaming attribute is set (#33351) 2025-10-09 09:04:27 -04:00
Mason Daugherty
b5030badbe refactor(core): clean up sys_info.py (#33372) 2025-10-09 03:31:26 +00:00
Mason Daugherty
b6132fc23e style: remove more Optional syntax (#33371) 2025-10-08 23:28:43 -04:00
Eugene Yurtsev
f33b1b3d77 chore(langchain_v1): rename on_model_call to wrap_model_call (#33370)
rename on_model_call to wrap_model_call
2025-10-08 23:28:14 -04:00
Eugene Yurtsev
c382788342 chore(langchain_v1): update the uv lock file (#33369)
Update the uv lock file.
2025-10-08 23:03:25 -04:00
Eugene Yurtsev
e193a1f273 chore(langchain_v1): replace modify model request with on model call (#33368)
* Replace modify model request with on model call
* Remove modify model request
2025-10-09 02:46:48 +00:00
Eugene Yurtsev
eb70672f4a chore(langchain): add unit tests for wrap_tool_call decorator (#33367)
Add unit tests for wrap_tool_call decorator
2025-10-09 02:30:07 +00:00
Eugene Yurtsev
87df179ca9 chore(langchain_v1): rename on_tool_call to wrap_tool_call (#33366)
Replace on tool call with wrap tool call
2025-10-08 22:10:36 -04:00
Eugene Yurtsev
982a950ccf chore(langchain_v1): add runtime and context to model request (#33365)
Add runtime and context to ModelRequest to make the API more convenient
2025-10-08 21:59:56 -04:00
Eugene Yurtsev
c2435eeca5 chore(langchain_v1): update on_tool_call to regular callbacks (#33364)
Refactor tool call middleware from generator-based to handler-based
pattern

Simplifies on_tool_call middleware by replacing the complex generator
protocol with a straightforward handler pattern. Instead of yielding
requests and receiving results via .send(),
handlers now receive an execute callable that can be invoked multiple
times for retry logic.


Before vs. After

Before (Generator):
```python
class RetryMiddleware(AgentMiddleware):
    def on_tool_call(self, request, state, runtime):
        for attempt in range(3):
            response = yield request  # Yield request, receive result via .send()
            if is_valid(response) or attempt == 2:
                return  # Final result is last value sent to generator
```

After (Handler):

```python
class RetryMiddleware(AgentMiddleware):
    def on_tool_call(self, request, handler):
        for attempt in range(3):
            result = handler(request)  # Direct function call
            if is_valid(result):
                return result
        return result
```
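
For intuition, a conceptual sketch of how handler-style middleware compose (illustrative only; `chain_handlers` is a hypothetical helper, not the library's internal code):

```python
# Each middleware wraps the handler below it, so the first middleware in the
# list ends up outermost; `execute` is the base tool-executing handler.
def chain_handlers(middlewares, execute):
    handler = execute
    for mw in reversed(middlewares):
        handler = (lambda inner, m: lambda request: m.on_tool_call(request, inner))(handler, mw)
    return handler
```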


Follow up after this PR:

* Rename the interceptor to wrap_tool_call
* Fix the async path for the ToolNode
2025-10-08 21:46:03 -04:00
Mason Daugherty
68c56440cf fix(groq): handle content correctly (#33363)
(look at most recent commit; ignore prior)
2025-10-08 21:23:30 -04:00
Mason Daugherty
31eeb50ce0 chore: drop UP045 (#33362)
Python 3.9 EOL
2025-10-08 21:17:53 -04:00
Mason Daugherty
0039b3b046 refactor(core): remove keep-runtime-typing from pyproject.toml following dropping 3.9 (#33360)
https://docs.astral.sh/ruff/rules/non-pep604-annotation-optional/#why-is-this-bad
2025-10-08 21:09:53 -04:00
Mason Daugherty
ffb1a08871 style(infra): use modern Optional typing in script (#33361) 2025-10-08 21:09:43 -04:00
Mason Daugherty
d13823043d style: monorepo pass for refs (#33359)
* Delete some double backticks previously used by Sphinx (not done
everywhere yet)
* Fix some code blocks / dropdowns

Ignoring CLI CI for now
2025-10-08 18:41:39 -04:00
Eugene Yurtsev
b665b81a0e chore(langchain_v1): simplify on model call logic (#33358)
Moving from the generator pattern to the slightly less verbose (but explicit) handler pattern.

This will be more familiar to users.

**Before (Generator Pattern):**
```python
def on_model_call(self, request, state, runtime):
    try:
        result = yield request
    except Exception:
        result = yield request  # Retry
```

**After (Handler Pattern):**
```python
def on_model_call(self, request, state, runtime, handler):
    try:
        return handler(request)
    except Exception:
        return handler(request)  # Retry
```
2025-10-08 17:23:11 -04:00
Mason Daugherty
6b9b177b89 chore(openai): base.py ref pass (#33355) 2025-10-08 16:08:52 -04:00
Mason Daugherty
b1acf8d931 chore: fix dropdown default open admonition in refs (#33354) 2025-10-08 18:50:44 +00:00
Eugene Yurtsev
97f731da7e chore(langchain_v1): remove unused internal namespace (#33352)
Remove unused internal namespace. We'll likely restore a part of it for
lazy loading optimizations later.
2025-10-08 14:08:07 -04:00
Eugene Yurtsev
1bf29da0d6 feat(langchain_v1): add on_tool_call middleware hook (#33329)
Adds generator-based middleware for intercepting tool execution in
agents. Middleware can retry on errors, cache results, modify requests,
or short-circuit execution.

### Implementation

**Middleware Protocol**
```python
class AgentMiddleware:
    def on_tool_call(
        self,
        request: ToolCallRequest,
        state: StateT,
        runtime: Runtime[ContextT],
    ) -> Generator[ToolCallRequest | ToolMessage | Command, ToolMessage | Command, None]:
        """
        Yields: ToolCallRequest (execute), ToolMessage (cached result), or Command (control flow)
        Receives: ToolMessage or Command via .send()
        Returns: None (final result is last value sent to handler)
        """
        yield request  # passthrough
```

**Composition**
Multiple middleware compose automatically (first = outermost), with
`_chain_tool_call_handlers()` stacking them like nested function calls.

### Examples

**Retry on error:**
```python
class RetryMiddleware(AgentMiddleware):
    def on_tool_call(self, request, state, runtime):
        for attempt in range(3):
            response = yield request
            if not isinstance(response, ToolMessage) or response.status != "error":
                return
            if attempt == 2:
                return  # Give up
```

**Cache results:**
```python
class CacheMiddleware(AgentMiddleware):
    def on_tool_call(self, request, state, runtime):
        cache_key = (request.tool_call["name"], tuple(request.tool_call["args"].items()))
        if cached := self.cache.get(cache_key):
            yield ToolMessage(content=cached, tool_call_id=request.tool_call["id"])
        else:
            response = yield request
            self.cache[cache_key] = response.content
```

**Emulate tools with LLM**
```python
class ToolEmulator(AgentMiddleware):
    def on_tool_call(self, request, state, runtime):
        prompt = f"""Emulate: {request.tool_call["name"]}
Description: {request.tool.description}
Args: {request.tool_call["args"]}
Return ONLY the tool's output."""

        response = emulator_model.invoke([HumanMessage(prompt)])
        yield ToolMessage(
            content=response.content,
            tool_call_id=request.tool_call["id"],
            name=request.tool_call["name"],
        )
```

**Modify requests:**
```python
class ScalingMiddleware(AgentMiddleware):
    def on_tool_call(self, request, state, runtime):
        if "value" in request.tool_call["args"]:
            request.tool_call["args"]["value"] *= 2
        yield request
```
2025-10-08 16:43:32 +00:00
Eugene Yurtsev
2c3fec014f feat(langchain_v1): on_model_call middleware (#33328)
Introduces a generator-based `on_model_call` hook that allows middleware
to intercept model calls with support for retry logic, error handling,
response transformation, and request modification.

## Overview

Middleware can now implement `on_model_call()` using a generator
protocol that:
- **Yields** `ModelRequest` to execute the model
- **Receives** `AIMessage` via `.send()` on success, or exception via
`.throw()` on error
- **Yields again** to retry or transform responses
- Uses **implicit last-yield semantics** (no return values from
generators)

## Usage Examples

### Basic Retry on Error

```python
from langchain.agents.middleware.types import AgentMiddleware

class RetryMiddleware(AgentMiddleware):
    def on_model_call(self, request, state, runtime):
        for attempt in range(3):
            try:
                yield request  # Execute model
                break  # Success
            except Exception:
                if attempt == 2:
                    raise  # Max retries exceeded
```

### Response Transformation

```python
class UppercaseMiddleware(AgentMiddleware):
    def on_model_call(self, request, state, runtime):
        result = yield request
        modified = AIMessage(content=result.content.upper())
        yield modified  # Return transformed response
```

### Error Recovery

```python
class FallbackMiddleware(AgentMiddleware):
    def on_model_call(self, request, state, runtime):
        try:
            yield request
        except Exception:
            fallback = AIMessage(content="Service unavailable")
            yield fallback  # Convert error to fallback response
```

### Caching / Short-Circuit

```python
class CacheMiddleware(AgentMiddleware):
    def on_model_call(self, request, state, runtime):
        if cached := get_cache(request):
            yield cached  # Skip model execution
        else:
            result = yield request
            save_cache(request, result)
```

### Request Modification

```python
class SystemPromptMiddleware(AgentMiddleware):
    def on_model_call(self, request, state, runtime):
        modified_request = ModelRequest(
            model=request.model,
            system_prompt="You are a helpful assistant.",
            messages=request.messages,
            tools=request.tools,
        )
        yield modified_request
```

### Function Decorator

```python
from langchain.agents.middleware.types import on_model_call

@on_model_call
def retry_three_times(request, state, runtime):
    for attempt in range(3):
        try:
            yield request
            break
        except Exception:
            if attempt == 2:
                raise

agent = create_agent(model="openai:gpt-4o", middleware=[retry_three_times])
```

## Middleware Composition

Middleware compose with first in list as outermost layer:

```python
agent = create_agent(
    model="openai:gpt-4o",
    middleware=[
        RetryMiddleware(),      # Outer - wraps others
        LoggingMiddleware(),    # Middle
        UppercaseMiddleware(),  # Inner - closest to model
    ]
)
```
2025-10-08 12:34:04 -04:00
Mason Daugherty
4c38157ee0 fix(core): don't print package if no version found (#33347)
This was polluting issues, making it hard to find issues that apply to a
query.
2025-10-07 23:14:17 -04:00
Sydney Runkle
b5f8e87e2f remove runtime where not needed 2025-10-07 21:33:52 -04:00
Eugene Yurtsev
6a2efd060e fix(langchain_v1): injection logic in tool node (#33344)
Fix injection logic in tool node
2025-10-07 21:31:10 -04:00
Mason Daugherty
cda336295f chore: enrich pyproject.toml files with links to new references, others (#33343) 2025-10-07 16:17:14 -04:00
Mason Daugherty
02f4256cb6 chore: remove CLI note in migrations (#33342)
unsure of functionality/we don't plan to spend time on it at the moment
2025-10-07 19:18:33 +00:00
ccurme
492ba3d127 release(core): 1.0.0a8 (#33341) 2025-10-07 14:18:44 -04:00
ccurme
cbf8d46d3e fix(core): add back add_user_message and add_ai_message (#33340) 2025-10-07 13:56:34 -04:00
Mason Daugherty
58598f01b0 chore: add more informative README for libs/ (#33339) 2025-10-07 17:13:45 +00:00
ccurme
89fe7e1ac1 release(langchain): 1.0.0a1 (#33337) 2025-10-07 12:52:32 -04:00
ccurme
a24712f7f7 revert: chore(infra): temporarily skip tests of previous alpha versions on core release (#33333)
Reverts langchain-ai/langchain#33312
2025-10-07 10:51:17 -04:00
Mason Daugherty
8446fef00d fix(infra): v0.3 ref dep (#33336) 2025-10-07 10:49:20 -04:00
Mason Daugherty
8bcdfbb24e chore: clean up pyproject.toml files, use core a7 (#33334) 2025-10-07 10:49:04 -04:00
Mason Daugherty
b8ebc14a23 chore(langchain): clean Makefile (#33335) 2025-10-07 10:48:47 -04:00
ccurme
aa442bc52f release(openai): 1.0.0a4 (#33316) 2025-10-07 09:25:05 -04:00
ccurme
2e024b7ede release(anthropic): 1.0.0a3 (#33317) 2025-10-07 09:24:54 -04:00
Sydney Runkle
c8205ff511 fix(langchain_v1): fix edges when there's no middleware (#33321)
1. Main fix: when we don't have a response format or middleware, don't
draw a conditional edge back to the loop entrypoint (self loop on model)
2. Supplementary fix: when we jump to `end` and there is an
`after_agent` hook, jump there instead of `__end__`

Other improvements -- I can remove these if they're more harmful than
helpful
1. Use keyword only arguments for edge generator functions for clarity
2. Rename args to `model_destination` and `end_destination` for clarity
2025-10-06 18:08:08 -04:00
Mason Daugherty
ea0a25d7fe fix(infra): v0.3 ref build; allow prerelease installations for partner packages (#33326) 2025-10-06 18:06:40 -04:00
Mason Daugherty
29b5df3881 fix(infra): handle special case for langchain-tavily repository checkout during ref build (#33324) 2025-10-06 18:00:24 -04:00
Mason Daugherty
690b620b7f docs(infra): add note about check_diff.py running on seemingly unrelated PRs (#33323) 2025-10-06 17:56:57 -04:00
Mason Daugherty
c55c9785be chore(infra): only build 0.3 ref docs from v0.3 branches (#33322)
Using the `api_doc_build.yml` workflow will now only pull from the
`v0.3` branch for each `langchain-ai` repo used during the build
process. This ensures that upcoming updates to the `master`/`main`
branch for each repo won't affect the v0.3 reference docs if/when they
are re-built or updated.
2025-10-06 21:45:49 +00:00
Christophe Bornet
20e04fc3dd chore(text-splitters): cleanup ruff config (#33247)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-06 17:02:31 -04:00
Mason Daugherty
078137f0ba chore(infra): use different pr title labeler (#33318)
The previous (from Grafana) is archived and doesn't work for community
PRs.
2025-10-06 16:58:52 -04:00
ccurme
d0f5a1cc96 fix(standard-tests,openai): minor fix for Responses API tests (#33315)
Following https://github.com/langchain-ai/langchain/pull/33301
2025-10-06 16:46:41 -04:00
ccurme
e8e41bd7a6 chore(infra): temporarily skip tests of previous alpha versions on core release (#33312)
To accommodate breaking changes (e.g., removal of deprecated params like
`callback_manager`).

Will revert once we have updated releases of anthropic and openai.
2025-10-06 16:31:36 -04:00
Sydney Runkle
7326966566 release(langchain_v1): 1.0.0a12 (#33314) 2025-10-06 16:24:30 -04:00
Mason Daugherty
6eb1c34ba1 fix(infra): pr-title-labeler (#33313)
Wasn't working on `pull_request_target`
2025-10-06 16:20:15 -04:00
Mason Daugherty
d390d2f28f chore: add .claude to .gitignore (#33311) 2025-10-06 16:20:02 -04:00
Sydney Runkle
2fa9741f99 chore(langchain_v1): rename model_request node -> model (#33310) 2025-10-06 16:18:18 -04:00
ccurme
ba35387c9e release(core): 1.0.0a7 (#33309) 2025-10-06 16:03:34 -04:00
ccurme
de48e102c4 fix(core,openai,anthropic): delegate to core implementation on invoke when streaming=True (#33308) 2025-10-06 15:54:55 -04:00
Sydney Runkle
08bf8f3dc9 release(langchain_v1): 1.0.0a11 (#33307)
* Consolidating agents
* Removing remainder of globals
* Removing `ToolNode`
2025-10-06 15:13:26 -04:00
Sydney Runkle
00f4db54c4 chore(langchain_v1): remove support for ToolNode in create_agent (#33306)
Let's add a note to help w/ migration once we add the tool call retry
middleware.
2025-10-06 15:06:20 -04:00
Sydney Runkle
62ccf7e8a4 feat(langchain_v1): simplify to use ONE agent (#33302)
This reduces confusion w/ types like `AgentState`, different arg names,
etc.

Second attempt, following
https://github.com/langchain-ai/langchain/pull/33249

* Ability to pass through `cache` and name in `create_agent` as
compilation args for the agent
* Right now, removing `test_react_agent.py` but we should add these
tests back as implemented w/ the new agent
* Add conditional edge when structured output tools are present to allow
for retries
* Rename `tracking` to `model_call_limit` to be consistent w/ tool call
limits

We need in the future (I'm happy to own):
* Significant test refactor
* Significant test overhaul where we emphasize and enforce coverage
2025-10-06 14:46:29 -04:00
Eugene Yurtsev
0ff2bc890b chore(langchain_v1): remove text splitters from langchain v1 namespace (#33297)
Removing text splitters for now for a lighter dependency footprint. We may re-introduce them later.
2025-10-06 14:42:23 -04:00
ccurme
426b8e2e6a feat(standard-tests): enable parametrization of output_version (#33301) 2025-10-06 14:37:33 -04:00
Eugene Yurtsev
bfed5f67a8 chore(langchain_v1): expose rate_limiters from langchain_core (#33305)
expose rate limiters from langchain core
2025-10-06 14:25:56 -04:00
Mason Daugherty
a4c8baebc5 chore: delete cookbook/ (#33303)
It will continue to be available in the `v0.3` branch
2025-10-06 14:21:53 -04:00
Sydney Runkle
a869f84c62 fix(langchain_v1): tool selector should use last human message (#33294) 2025-10-06 11:32:16 -04:00
Sydney Runkle
0ccc0cbdae feat(langchain_v1): before_agent and after_agent hooks (#33279)
We're adding enough new nodes that I think a refactor in terms of graph
building is warranted here, but not necessarily required for merging.
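
A minimal sketch of the new hooks, assuming they follow the same middleware-subclass pattern as the other hooks in this log (the signatures are illustrative):

```python
# Illustrative only; the exact hook signatures are an assumption.
from langchain.agents.middleware.types import AgentMiddleware

class AuditMiddleware(AgentMiddleware):
    def before_agent(self, state, runtime):
        print("agent run starting")

    def after_agent(self, state, runtime):
        print("agent run finished")
```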
2025-10-06 11:31:52 -04:00
ccurme
7404338786 fix(core): fix string content when streaming output_version="v1" (#33261)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-06 11:03:03 -04:00
Nuno Campos
f308139283 feat(langchain_v1): Implement Context Editing Middleware (#33267)
Brings functionality similar to Anthropic's context editing to all chat
models
https://docs.claude.com/en/docs/build-with-claude/context-editing
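
As a rough illustration of the idea (a pure-Python sketch under an assumed policy and budget, not this middleware's actual API): clear older tool results once the conversation exceeds a token budget.

```python
# Conceptual sketch only; the budget, placeholder, and clearing order are
# illustrative assumptions, not the middleware's actual behavior.
def edit_context(messages, count_tokens, budget=100_000, placeholder="[cleared]"):
    while sum(count_tokens(m) for m in messages) > budget:
        for m in messages:
            if m["role"] == "tool" and m["content"] != placeholder:
                m["content"] = placeholder  # clear the oldest tool result first
                break
        else:
            break  # nothing left to clear
    return messages
```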

---------

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-10-06 10:34:04 -04:00
ccurme
95a451ef2c fix(openai): disable stream_usage in chat completions if OPENAI_BASE_URL is set (#33298)
This env var is used internally by the OpenAI client.
2025-10-06 10:14:43 -04:00
ccurme
c8636a626a chore(openai): (v1) fix sort order of mcp call keys (#33295) 2025-10-06 09:29:41 -04:00
ccurme
4e50ec4b98 feat(openai): enable stream_usage when using default base URL and client (#33205) 2025-10-06 08:56:38 -04:00
Mason Daugherty
90e4d944ac chore(infra): pdm -> hatchling (#33289) 2025-10-05 23:52:52 -04:00
Mason Daugherty
a16342b2bb re-do cli 2025-10-05 23:52:34 -04:00
Mason Daugherty
8e7cd85431 style: drop target-version = "py39" for OpenAI, Anthropic, HuggingFace (#33287) 2025-10-06 03:29:34 +00:00
Mason Daugherty
66889e2804 style(langchain): drop target-version = py39 (#33288) 2025-10-05 23:24:11 -04:00
Mason Daugherty
6ea03ab46c style(core): drop python 39 linting target for 3.10 (#33286) 2025-10-05 23:22:34 -04:00
Mason Daugherty
99d8504731 chore(core): docstring nits (#33285) 2025-10-05 22:40:34 -04:00
Nuno Campos
a9aa3f232d feat(langchain_v1): Add retry_model_request middleware hook, add ModelFallbackMiddleware (#33275)
- retry_model_request hook lets a middleware decide to retry a failed
model request, with full ability to modify as much or as little of the
request before doing so
- ModelFallbackMiddleware tries each fallback model in order until one
succeeds or the fallback list is exhausted
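
A hedged usage sketch (the constructor shape is an assumption; this log only states that fallback models are tried in order):

```python
# Assumed constructor: fallback models listed in priority order.
from langchain.agents import create_agent
from langchain.agents.middleware import ModelFallbackMiddleware

agent = create_agent(
    model="openai:gpt-4o",
    middleware=[ModelFallbackMiddleware("openai:gpt-4o-mini", "openai:gpt-3.5-turbo")],
)
```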

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-10-05 20:32:45 +00:00
Sydney Runkle
20514f5d44 fix(langchain_v1): linting fixes for llm tool selector (#33278)
* Including server side tools by default
* Fixing up typing / linting on `master`
2025-10-05 16:30:27 -04:00
Eugene Yurtsev
df2ecd9448 feat(langchain_v1): add llm selection middleware (#33272)
* Add llm based tool selection middleware.
* Note that we might want some form of caching for when the agent is
inside an active tool calling loop as the tool selection isn't expected
to change during that time.

API:

```python
class LLMToolSelectorMiddleware(AgentMiddleware):
    """Uses an LLM to select relevant tools before calling the main model.

    When an agent has many tools available, this middleware filters them down
    to only the most relevant ones for the user's query. This reduces token usage
    and helps the main model focus on the right tools.

    Examples:
        Limit to 3 tools:
        ```python
        from langchain.agents.middleware import LLMToolSelectorMiddleware

        middleware = LLMToolSelectorMiddleware(max_tools=3)

        agent = create_agent(
            model="openai:gpt-4o",
            tools=[tool1, tool2, tool3, tool4, tool5],
            middleware=[middleware],
        )
        ```

        Use a smaller model for selection:
        ```python
        middleware = LLMToolSelectorMiddleware(model="openai:gpt-4o-mini", max_tools=2)
        ```
    """

    def __init__(
        self,
        *,
        model: str | BaseChatModel | None = None,
        system_prompt: str = DEFAULT_SYSTEM_PROMPT,
        max_tools: int | None = None,
        always_include: list[str] | None = None,
    ) -> None:
        """Initialize the tool selector.

        Args:
            model: Model to use for selection. If not provided, uses the agent's main model.
                Can be a model identifier string or BaseChatModel instance.
            system_prompt: Instructions for the selection model.
            max_tools: Maximum number of tools to select. If the model selects more,
                only the first max_tools will be used. No limit if not specified.
            always_include: Tool names to always include regardless of selection.
                These do not count against the max_tools limit.
        """
```



```python
"""Test script for LLM tool selection middleware."""

from langchain.agents import create_agent
from langchain.agents.middleware import LLMToolSelectorMiddleware
from langchain_core.tools import tool


@tool
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    return f"Weather in {location}: 72°F, sunny"


@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Search results for: {query}"


@tool
def calculate(expression: str) -> str:
    """Perform mathematical calculations."""
    return f"Result of {expression}: 42"


@tool
def send_email(to: str, subject: str) -> str:
    """Send an email to someone."""
    return f"Email sent to {to} with subject: {subject}"


@tool
def get_stock_price(symbol: str) -> str:
    """Get current stock price for a symbol."""
    return f"Stock price for {symbol}: $150.25"


@tool
def translate_text(text: str, target_language: str) -> str:
    """Translate text to another language."""
    return f"Translated '{text}' to {target_language}"


@tool
def set_reminder(task: str, time: str) -> str:
    """Set a reminder for a task."""
    return f"Reminder set: {task} at {time}"


@tool
def get_news(topic: str) -> str:
    """Get latest news about a topic."""
    return f"Latest news about {topic}"


@tool
def book_flight(destination: str, date: str) -> str:
    """Book a flight to a destination."""
    return f"Flight booked to {destination} on {date}"


@tool
def get_restaurant_recommendations(city: str, cuisine: str) -> str:
    """Get restaurant recommendations."""
    return f"Top {cuisine} restaurants in {city}"


# Create agent with tool selection middleware
middleware = LLMToolSelectorMiddleware(
    model="openai:gpt-4o-mini",
    max_tools=3,
)

agent = create_agent(
    model="openai:gpt-4o",
    tools=[
        get_weather,
        search_web,
        calculate,
        send_email,
        get_stock_price,
        translate_text,
        set_reminder,
        get_news,
        book_flight,
        get_restaurant_recommendations,
    ],
    middleware=[middleware],
)

# Test with a query that should select specific tools
response = agent.invoke(
    {"messages": [{"role": "user", "content": "I need to find restaurants"}]}
)

print(response)
```
2025-10-05 15:55:55 -04:00
Eugene Yurtsev
bdb7dbbf16 feat(langchain_v1): represent server side tools in modifyModelRequest and update tool handling (#33274)
* Add server side tools to modifyModelRequest (represented as dicts)
* Update some of the logic in terms of which tools are bound to ToolNode
* We still have a constraint on changing the response format dynamically
when using tool strategy. `structured_output_tools` are being used in
some of the edges. The code now raises an exception to explain that
it's a limitation of the implementation. (We can add support later.)
2025-10-05 15:55:46 -04:00
Nuno Campos
30f7c87b6f feat(langchain_v1): Implement PIIMiddleware (#33271)
- supports five well-known PII types (email, credit_card, ip, mac_address,
url)
- 4 handling strategies (block, redact, mask, hash)
- supports custom PII types with detector functions or regex
- the built-in types were chosen because they are common, and detection
can be reliably implemented with stdlib
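
Since detection is implementable with the stdlib, here is a minimal sketch of one detector plus the four strategies (illustrative only, not the middleware's code; the email regex and mask/hash formats are assumptions):

```python
# Stdlib-only sketch of detect-and-handle for one PII type.
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def handle_emails(text: str, strategy: str) -> str:
    def replace(match: re.Match) -> str:
        value = match.group(0)
        if strategy == "redact":
            return "[REDACTED_EMAIL]"
        if strategy == "mask":
            local, _, domain = value.partition("@")
            return local[0] + "***@" + domain
        if strategy == "hash":
            return hashlib.sha256(value.encode()).hexdigest()[:12]
        raise ValueError(f"blocked: email detected ({value!r})")  # "block" strategy
    return EMAIL_RE.sub(replace, text)

print(handle_emails("reach me at jane@example.com", "mask"))  # reach me at j***@example.com
```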
2025-10-04 22:27:51 -04:00
Eugene Yurtsev
fdf8181f58 fix(langchain_v1): dynamic response format (#33273)
* Preserve Auto type for the response format. cc @sydney-runkle Creating
an extra type was the nicest devx I could find for this (makes it easy
to do `isinstance(thingy, AutoStrategy)`).

Remaining issue to address:
* Going to sort out why we're including tools in the tool node
2025-10-04 16:58:32 -04:00
Eugene Yurtsev
8a95eb1ef7 chore(langchain_v1): remove union return type in init_embeddings (#33062)
Fix the return type of init_embeddings
2025-10-04 16:40:36 -04:00
Eugene Yurtsev
4d1cfa494a chore(langchain,prompty): rename to langchain-classic (#33256)
* Rename to langchain-classic
* After release of community, we should add the [community] option back
into the pyproject.toml file.
2025-10-04 16:04:43 -04:00
Nuno Campos
2286d0d993 feat(langchain_v1): Add ToolCallLimitMiddleware (#33269)
which implements a tool call budget for either all tools, or a specific tool
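
A hedged usage sketch (parameter names are assumptions; the description above only says the budget can apply to all tools or to one):

```python
# Assumed parameters: an optional tool_name plus a per-run budget.
from langchain.agents.middleware import ToolCallLimitMiddleware

global_limit = ToolCallLimitMiddleware(run_limit=10)                          # all tools
search_limit = ToolCallLimitMiddleware(tool_name="search_web", run_limit=3)  # one tool
```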

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-10-04 15:03:45 -04:00
Eugene Yurtsev
46b87e435c chore(langchain_v1): change modifyModelRequest back to tools (#33270)
Seems like a much better devx with fairly little downside (we'll
document that you can't register new tools)
2025-10-04 12:33:54 -04:00
Eugene Yurtsev
905c6d7bad fix(langchain_v1): handle switching response format strategy based on model identity (#33259)
Change response format strategy dynamically based on model.

After this PR there are two remaining issues:

- [ ] Review binding of tools used for output to ToolNode (shouldn't be
required)
- [ ] Update ModelRequest to also support the original schema provided
by the user (to correctly support auto mode)
2025-10-04 11:56:56 -04:00
Sydney Runkle
acd1aa813c feat(langchain_v1): implement nicer devx for dynamic prompt (#33264)
Adding a `dynamic_prompt` decorator to support smoother devx for dynamic
system prompts

```py
from langchain.agents.middleware.types import dynamic_prompt, ModelRequest, AgentState
from langchain.agents.middleware_agent import create_agent
from langgraph.runtime import Runtime
from dataclasses import dataclass
from langchain_core.messages import HumanMessage


@dataclass
class Context:
    user_name: str


@dynamic_prompt
def my_prompt(request: ModelRequest, state: AgentState, runtime: Runtime[Context]) -> str:
    user_name = runtime.context.user_name
    return (
        f"You are a helpful assistant helping {user_name}. Please refer to the user as {user_name}."
    )


agent = create_agent(model="openai:gpt-4o", middleware=[my_prompt]).compile()

result = agent.invoke({"messages": [HumanMessage("Hello")]}, context=Context(user_name="Sydney"))
for msg in result["messages"]:
    msg.pretty_print()

"""
================================ Human Message =================================

Hello
================================== Ai Message ==================================

Hello Sydney! How can I assist you today?
"""

```
2025-10-03 21:06:23 -04:00
Sydney Runkle
2671fee2c6 feat(langchain_v1): description generator for HITL middleware (#33195)
Need to decide - what information should we feed to this description
factory? Right now, feeding:
* state
* runtime
* tool call (so the developer doesn't have to search through the state's
messages for the corresponding tool call)

I can see a case for just passing tool call. But again, this abstraction
is semi-bound to interrupts for tools... though we pretend it's more
abstract than that.

Right now:

```py
def custom_description(state: AgentState, runtime: Runtime, tool_call: ToolCall) -> str:
        """Generate a custom description."""
        return f"Custom: {tool_call['name']} with args {tool_call['args']}"

middleware = HumanInTheLoopMiddleware(
    interrupt_on={
        "tool_with_callable": {"allow_accept": True, "description": custom_description},
        "tool_with_string": {"allow_accept": True, "description": "Static description"},
    }
)
```
2025-10-04 01:01:44 +00:00
ccurme
010ed5d096 fix(anthropic,openai): fix tests (#33257)
following https://github.com/langchain-ai/langchain/pull/33192
2025-10-03 13:41:37 -04:00
Eugene Yurtsev
7f5be6b65c chore(core,langchain,langchain_v1)!: remove globals from langchain-v1, update globals in langchain-classic, langchain-core (#33251)
* Remove globals.py from langchain_v1
* Adjust langchain-core to not inspect langchain namespace
2025-10-03 12:53:33 -04:00
Eugene Yurtsev
1074ce5fe5 chore(langchain_v1)!: Remove ToolNode from agents (#33250)
Remove ToolNode from agents namespace. It should only be present in tools
2025-10-03 10:57:54 -04:00
Sydney Runkle
3d2f13a2f1 feat(langchain): model call limits (#33178)
This PR adds a model call limit middleware that helps to manage:

* number of model calls during a run (helps w/ avoiding tool calling
loops) - implemented w/ `UntrackedValue`
* number of model calls on a thread (helps w/ avoiding lengthy convos) -
standard state

Concern here is w/ other middlewares overwriting the model call count...
we could use a `_` prefixed field?
2025-10-03 08:28:56 -04:00
SN
99361e623a feat(core): add optional include_id param to convert_to_openai_messages function (#33242) 2025-10-03 08:22:43 -04:00
Mason Daugherty
5a016de53f chore: delete deprecated items (#33192)
Removed:
- `libs/core/langchain_core/chat_history.py`: `add_user_message` and
`add_ai_message` in favor of `add_messages` and `aadd_messages`
- `libs/core/langchain_core/language_models/base.py`: `predict`,
`predict_messages`, and async versions in favor of `invoke`. removed
`_all_required_field_names` since it was a wrapper on
`get_pydantic_field_names`
- `libs/core/langchain_core/language_models/chat_models.py`:
`callback_manager` param in favor of `callbacks`. `__call__` and
`call_as_llm` method in favor of `invoke`
- `libs/core/langchain_core/language_models/llms.py`: `callback_manager`
param in favor of `callbacks`. `__call__`, `predict`, `apredict`, and
`apredict_messages` methods in favor of `invoke`
- `libs/core/langchain_core/prompts/chat.py`: `from_role_strings` and
`from_strings` in favor of `from_messages`
- `libs/core/langchain_core/prompts/pipeline.py`: removed
`PipelinePromptTemplate`
- `libs/core/langchain_core/prompts/prompt.py`: `input_variables` param
on `from_file` as it wasn't used
- `libs/core/langchain_core/tools/base.py`: `callback_manager` param in
favor of `callbacks`
- `libs/core/langchain_core/tracers/context.py`: `tracing_enabled` in
favor of `tracing_enabled_v2`
- `libs/core/langchain_core/tracers/langchain_v1.py`: entire module
- `libs/core/langchain_core/utils/loading.py`: entire module,
`try_load_from_hub`
- `libs/core/langchain_core/vectorstores/in_memory.py`: `upsert` in
favor of `add_documents`
- `libs/standard-tests/langchain_tests/integration_tests/chat_models.py`
and `libs/standard-tests/langchain_tests/unit_tests/chat_models.py`:
`tool_choice_value` as models should accept `tool_choice="any"`
- `langchain` will consequently no longer expose these items if it was
previously

---------

Co-authored-by: Mohammad Mohtashim <45242107+keenborder786@users.noreply.github.com>
Co-authored-by: Caspar Broekhuizen <caspar@langchain.dev>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Sadra Barikbin <sadraqazvin1@yahoo.com>
Co-authored-by: Vadym Barda <vadim.barda@gmail.com>
2025-10-03 03:33:24 +00:00
Mason Daugherty
b541a56c66 chore(langchain): uncomment some optional deps (#33243)
remaining:
- azure-ai
- cohere
- huggingface
- community
2025-10-02 23:29:14 -04:00
Mason Daugherty
4a6890a4e5 chore(langchain_v1): uncomment some optional deps (#33244)
remaining:
- azure-ai
- cohere
- huggingface
- community
2025-10-02 23:18:06 -04:00
Mason Daugherty
e2e0327c90 ci: add workflow for manually building API ref for v0.3 (#33241) 2025-10-02 20:33:12 -04:00
Mason Daugherty
bba37bd6be chore: add libs/ note (#33238) 2025-10-02 19:57:50 -04:00
Mason Daugherty
b051ff4a84 chore(infra): remove formatting and linting hook for root (#33237) 2025-10-02 19:43:09 -04:00
Mason Daugherty
13812f0df8 release(qdrant): 1.0.0a1 (#33236) 2025-10-02 19:37:00 -04:00
Mason Daugherty
420dcf5c4a release(prompty): 1.0.0a1 (#33235) 2025-10-02 19:29:55 -04:00
Mason Daugherty
9f75e20d4f release(perplexity): 1.0.0a1 (#33234) 2025-10-02 19:23:22 -04:00
Mason Daugherty
743e9b2ad1 release(nomic): 1.0.0a1 (#33233) 2025-10-02 19:23:06 -04:00
Mason Daugherty
ea438f9e8a release(groq): 1.0.0a1 (#33231) 2025-10-02 19:04:27 -04:00
Mason Daugherty
86cf3fad4d release(chroma): 1.0.0a1 (#33227) 2025-10-02 19:04:14 -04:00
Mason Daugherty
79a12c8f27 release(mistralai): 1.0.0a1 (#33232) 2025-10-02 19:04:03 -04:00
Mason Daugherty
e85b03d5e4 release(fireworks): 1.0.0a1 (#33230) 2025-10-02 19:03:54 -04:00
Mason Daugherty
21ba7adbab release(exa): 1.0.0a1 (#33229) 2025-10-02 19:03:45 -04:00
Mason Daugherty
f9a87971ba release(deepseek): 1.0.0a1 (#33228) 2025-10-02 19:03:39 -04:00
Mason Daugherty
638d1ff912 release(cli): 1.0.0a1 (#33226) 2025-10-02 19:03:29 -04:00
Mason Daugherty
ae5b105d11 docs: v1 docs updates (#33173)
Co-authored-by: Mohammad Mohtashim <45242107+keenborder786@users.noreply.github.com>
Co-authored-by: Caspar Broekhuizen <caspar@langchain.dev>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Sadra Barikbin <sadraqazvin1@yahoo.com>
Co-authored-by: Vadym Barda <vadim.barda@gmail.com>
2025-10-02 18:46:26 -04:00
Mason Daugherty
d07cb63c75 fix(xai): update langchain dependencies to latest alpha versions (#33224) 2025-10-02 17:08:16 -04:00
Mason Daugherty
b8c9b20db4 release(xai): 1.0.0a1 (#33223)
Drop Python 3.9
2025-10-02 17:00:14 -04:00
Mason Daugherty
89b4d7b6c1 fix(infra): _release.yml permissions (#33222) 2025-10-02 16:41:51 -04:00
Mason Daugherty
65cd214f67 chore(infra): more tweaks to PR linting (#33220) 2025-10-02 20:11:05 +00:00
Mason Daugherty
38a971cb3b release(standard-tests): 1.0.0a2 (#33219) 2025-10-02 16:09:57 -04:00
Mason Daugherty
9c9b80c70a docs(standard-tests): add clarity to docstrings (#33218) 2025-10-02 16:09:34 -04:00
Mason Daugherty
5fd4b192bc chore(infra): update integration test workflow (#33216) 2025-10-02 14:49:16 -04:00
Mason Daugherty
ae16392ada release(text-splitters): 1.0.0a1 (#33214) 2025-10-02 13:56:10 -04:00
Mason Daugherty
ccfea37d17 style(infra): update release guidelines for IDE autogen (#33215)
VSCode looks at this file. Should help auto-gen commits for releases.
2025-10-02 17:55:35 +00:00
Mason Daugherty
5e8cb58e6a refactor(text-splitters): drop python 3.9 (#33212) 2025-10-02 13:51:10 -04:00
Mason Daugherty
740ad00d36 chore(infra): add text-splitters labeling (#33213) 2025-10-02 13:50:34 -04:00
Mason Daugherty
9459ab189a docs(openai): use text property instead of method (#33211) 2025-10-02 13:50:25 -04:00
Mason Daugherty
63097db4fc fix(ollama): exclude None parameters from options dictionary (#33208) 2025-10-02 11:25:15 -04:00
Mason Daugherty
eaa6dcce9e release: v1.0.0 (#32567)
Co-authored-by: Mohammad Mohtashim <45242107+keenborder786@users.noreply.github.com>
Co-authored-by: Caspar Broekhuizen <caspar@langchain.dev>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Sadra Barikbin <sadraqazvin1@yahoo.com>
Co-authored-by: Vadym Barda <vadim.barda@gmail.com>
2025-10-02 10:49:42 -04:00
ccurme
d7cce2f469 feat(langchain_v1): update messages namespace (#33207) 2025-10-02 10:35:00 -04:00
Mason Daugherty
48b77752d0 release(ollama): 0.3.9 (#33200) 2025-10-01 22:31:20 -04:00
Mason Daugherty
6f2d16e6be refactor(ollama): simplify options handling (#33199)
Fixes #32744

Don't restrict options; the client accepts any dict
2025-10-01 21:58:12 -04:00
Mason Daugherty
a9eda18e1e refactor(ollama): clean up tests (#33198) 2025-10-01 21:52:01 -04:00
Mason Daugherty
a89c549cb0 feat(ollama): add basic auth support (#32328)
support for URL authentication in the format
`https://user:password@host:port` for all LangChain Ollama clients.

Related to #32327 and #25055
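A minimal sketch of the URL form (host, port, and credentials are
placeholders):

```py
from langchain_ollama import ChatOllama

# URL-embedded credentials (illustrative values) are used for basic auth.
llm = ChatOllama(
    model="llama3.1",
    base_url="https://user:password@ollama.example.com:11434",
)
```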
2025-10-01 20:46:37 -04:00
Sydney Runkle
a336afaecd feat(langchain): use decorators for jumps instead (#33179)
The old `before_model_jump_to` classvar approach was quite clunky; this
is nicer imo and easier to document. Also moving from `jump_to` to
`can_jump_to`, which is more idiomatic.

Before:

```py
class MyMiddleware(AgentMiddleware):
    before_model_jump_to: ClassVar[list[JumpTo]] = ["end"]

    def before_model(self, state, runtime) -> dict[str, Any]:
        return {"jump_to": "end"}
```

After

```py
class MyMiddleware(AgentMiddleware):

    @hook_config(can_jump_to=["end"])
    def before_model(self, state, runtime) -> dict[str, Any]:
        return {"jump_to": "end"}
```
2025-10-01 16:49:27 -07:00
Lauren Hirata Singh
af07949d13 fix(docs): Redirects (#33190) 2025-10-01 16:28:47 -04:00
Sydney Runkle
a10e880c00 feat(langchain_v1): add async support for create_agent (#33175)
This makes branching **much** simpler internally and helps greatly
w/ type safety for users. It just allows for one signature on hooks
instead of multiple.

Opened after https://github.com/langchain-ai/langchain/pull/33164
ballooned more than expected, w/ branching for:
* sync vs async
* runtime vs no runtime (this is self imposed)

**This also removes support for nodes w/o `runtime` in the signature.**
We can always go back and add support for nodes w/o `runtime`.

I think @christian-bromann's idea to re-export `runtime` from
langchain's agents might make sense due to the abundance of imports
here.

Check out the value of the change based on this diff:
https://github.com/langchain-ai/langchain/pull/33176
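For context, a minimal sketch of driving the agent asynchronously (model
name illustrative; assumes the agent exposes the standard Runnable
`ainvoke`):

```py
import asyncio

from langchain.agents import create_agent
from langchain_core.messages import HumanMessage

agent = create_agent("openai:gpt-4o-mini")


async def main() -> None:
    result = await agent.ainvoke({"messages": [HumanMessage("hello")]})
    print(result["messages"][-1].content)


asyncio.run(main())
```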
2025-10-01 19:15:39 +00:00
Eugene Yurtsev
7b5e839be3 chore(langchain_v1): use list[str] for modifyModelRequest (#33166)
Update model request to return tools by name. This will decrease the
odds of misusing the API.

We'll need to extend the type for built-in tools later.
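A sketch of the new shape (alpha-era import paths borrowed from the
decorator examples elsewhere in this log; tool names illustrative):

```py
from langchain.agents.middleware.types import modify_model_request
from langchain.agents.middleware_agent import AgentState, ModelRequest


@modify_model_request
def limit_tools(request: ModelRequest, state: AgentState) -> ModelRequest:
    request.tools = ["search"]  # now a list[str] of tool names, not tool objects
    return request
```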
2025-10-01 14:46:19 -04:00
ccurme
740842485c fix(openai): bump min core version (#33188)
Required for new tests added in
https://github.com/langchain-ai/langchain/pull/32541 and
https://github.com/langchain-ai/langchain/pull/33183.
2025-10-01 11:01:15 -04:00
noeliecherrier
08bb74f148 fix(mistralai): handle HTTP errors in async embed documents (#33187)
The async embed function does not properly handle HTTP errors.

For instance, with large batches, Mistral AI returns `Too many inputs in
request, split into more batches.` in a 400 error.

This leads to a `KeyError` at `response.json()["data"]` (line 288).

This PR fixes the issue by:
- calling `response.raise_for_status()` before returning
- adding a retry similarly to what is done in the synchronous
counterpart `embed_documents`

I also added an integration test, but willing to move it to unit tests
if more relevant.
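The shape of the fix as an illustrative `httpx` sketch (endpoint path and
retry count are assumptions, not the package's actual internals):

```py
import httpx


async def embed_batch(
    client: httpx.AsyncClient, texts: list[str], retries: int = 3
) -> list[list[float]]:
    """Raise HTTP errors eagerly instead of failing later on response.json()["data"]."""
    last_exc: httpx.HTTPStatusError | None = None
    for _ in range(retries):
        response = await client.post(
            "/v1/embeddings", json={"model": "mistral-embed", "input": texts}
        )
        try:
            # Surfaces e.g. the 400 "Too many inputs in request" error directly
            response.raise_for_status()
        except httpx.HTTPStatusError as exc:
            last_exc = exc
            continue
        return [item["embedding"] for item in response.json()["data"]]
    assert last_exc is not None
    raise last_exc
```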
2025-10-01 10:57:47 -04:00
ccurme
7d78ed9b53 release(standard-tests): 0.3.22 (#33186) 2025-10-01 10:39:17 -04:00
ccurme
7ccff656eb release(core): 0.3.77 (#33185) 2025-10-01 10:24:07 -04:00
ccurme
002d623f2d feat: (core, standard-tests) support PDF inputs in ToolMessages (#33183) 2025-10-01 10:16:16 -04:00
Mohammad Mohtashim
34f8031bd9 feat(langchain): Using Structured Response as Key in Output Schema for Middleware Agent (#33159)
- **Description:** Changing the key from `response` to
`structured_response` for the middleware agent to keep it in sync with the
agent without middleware. This is a breaking change.
 - **Issue:** #33154
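A minimal sketch of the new key (model and schema illustrative; assumes
`create_agent`'s `response_format` parameter):

```py
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage
from pydantic import BaseModel


class Weather(BaseModel):
    temperature_c: float


agent = create_agent("openai:gpt-4o-mini", response_format=Weather)
result = agent.invoke({"messages": [HumanMessage("Weather in Paris?")]})
print(result["structured_response"])  # previously result["response"]
```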
2025-10-01 03:24:59 +00:00
Mason Daugherty
a541b5bee1 chore(infra): rfc README.md for better presentation (#33172) 2025-09-30 17:44:42 -04:00
Mason Daugherty
3e970506ba chore(core): remove runnable section from README.md (#33171) 2025-09-30 17:15:31 -04:00
Mason Daugherty
d1b0196faa chore(infra): whitespace fix (#33170) 2025-09-30 17:14:55 -04:00
ccurme
aac69839a9 release(openai): 0.3.34 (#33169) 2025-09-30 16:48:39 -04:00
ccurme
64141072a3 feat(openai): support openai sdk 2.0 (#33168) 2025-09-30 16:34:00 -04:00
Mason Daugherty
0795be2a04 docs(core): remove non-existent param from as_tool docstring (#33165) 2025-09-30 19:43:34 +00:00
Eugene Yurtsev
9c97597175 chore(langchain_v1): expose middleware decorators and selected messages (#33163)
* Make it easy to improve the middleware shortcuts
* Export the messages that we're confident we'll expose
2025-09-30 14:14:57 -04:00
Sydney Runkle
eed0f6c289 feat(langchain): todo middleware (#33152)
Porting the [planning
middleware](39c0138d0f/src/deepagents/middleware.py (L21))
over from deepagents.

Also adding the ability to configure:
* System prompt
* Tool description

```py
from langchain.agents.middleware.planning import PlanningMiddleware
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage

agent = create_agent("openai:gpt-4o", middleware=[PlanningMiddleware()])

result = agent.invoke({"messages": [HumanMessage("Help me refactor my codebase")]})

print(result["todos"])  # Array of todo items with status tracking
```
2025-09-30 02:23:26 +00:00
ccurme
729637a347 docs(anthropic): document support for memory tool and context management (#33149) 2025-09-29 16:38:01 -04:00
Mason Daugherty
3325196be1 fix(langchain): handle gpt-5 model name in init_chat_model (#33148)
expand to match any `gpt-*` model to openai
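For example (a minimal sketch):

```py
from langchain.chat_models import init_chat_model

# Any `gpt-*` model name now infers the openai provider:
model = init_chat_model("gpt-5")  # equivalent to model_provider="openai"
```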
2025-09-29 16:16:17 -04:00
Mason Daugherty
f402fdcea3 fix(langchain): add context_management to Anthropic chat model init (#33150) 2025-09-29 16:13:47 -04:00
ccurme
ca9217c02d release(anthropic): 0.3.21 (#33147) 2025-09-29 19:56:28 +00:00
ccurme
f9bae40475 feat(anthropic): support memory and context management features (#33146)
https://docs.claude.com/en/docs/build-with-claude/context-editing

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-29 15:42:38 -04:00
ccurme
839a18e112 fix(openai): remove __future__.annotations import from test files (#33144)
Breaks schema conversion in places.
2025-09-29 16:23:32 +00:00
Mohammad Mohtashim
33a6def762 fix(core): Support of 'reasoning' type in 'convert_to_openai_messages' (#33050) 2025-09-29 09:17:05 -04:00
nhuang-lc
c456c8ae51 fix(langchain): fix response action for HITL (#33131)
Multiple improvements to HITL flow:

* On a `response` type resume, we should still append the tool call to
the last AIMessage (otherwise we have a ToolResult without a
corresponding ToolCall)
* When all interrupts have `response` types (so there's no pending tool
calls), we should jump back to the first node (instead of end) as we
enforced in the previous `post_model_hook_router`
* Added comments to `model_to_tools` router to clarify all of the
potential exit conditions

Additionally:
* Lockfile update to use latest LG alpha release
* Added test for `jump_to` behaving ephemerally, this was fixed in LG
but surfaced as a bug w/ `jump_to`.
* Bump version to v1.0.0a10 to prep for alpha release

---------

Co-authored-by: Sydney Runkle <sydneymarierunkle@gmail.com>
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-09-29 13:08:18 +00:00
Eugene Yurtsev
54ea62050b chore(langchain_v1): move tool node to tools namespace (#33132)
* Move ToolNode to tools namespace
* Expose injected variable as well in tools namespace
* Update doc-strings throughout
2025-09-26 15:23:57 -04:00
Mason Daugherty
986302322f docs: more standardization (#33124) 2025-09-25 20:46:20 -04:00
Mason Daugherty
a5137b0a3e refactor(langchain): resolve pydantic deprecation warnings (#33125) 2025-09-25 17:33:18 -04:00
Mason Daugherty
5bea28393d docs: standardize .. code-block directive usage (#33122)
and fix typos
2025-09-25 16:49:56 -04:00
Mason Daugherty
c3fed20940 docs: correct ported over directives (#33121)
Match rest of repo
2025-09-25 15:54:54 -04:00
Mason Daugherty
6d418ba983 test(mistralai): add xfail for structured output test (#33119)
In rare cases (difficult to reproduce), Mistral's API fails to return
valid bodies, leading to failures from `PydanticToolsParser`
2025-09-25 13:05:31 -04:00
Mason Daugherty
12daba63ff test(openai): raise token limit for o1 test (#33118)
`test_o1[False-False]` was sometimes failing because the OpenAI o1 model
was hitting a token limit with only 100 tokens
2025-09-25 12:57:33 -04:00
Christophe Bornet
eaf8dce7c2 chore: bump ruff version to 0.13 (#33043)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-25 12:27:39 -04:00
Mason Daugherty
f82de1a8d7 chore: bump locks (#33114) 2025-09-25 01:46:01 -04:00
Mason Daugherty
e3efd1e891 test(text-splitters): capture beta warnings (#33113) 2025-09-25 01:30:20 -04:00
Mason Daugherty
d6769cf032 test(text-splitters): resolve pytest marker warning (#33112)
#33111
2025-09-25 01:29:42 -04:00
Mason Daugherty
7ab2e0dd3b test(core): resolve pytest marker warning (#33111)
Remove redundant/outdated `@pytest.mark.requires("jinja2")` decorator

Pytest marks (like `@pytest.mark.requires(...)`) applied directly to
fixtures have no effect and are deprecated.
2025-09-25 01:08:54 -04:00
Mason Daugherty
81319ad3f0 test(core): resolve pydantic_v1 deprecation warning (#33110)
Excluded pydantic_v1 module from import testing

Acceptable, since pydantic_v1 is explicitly deprecated. Testing its
importability at this stage serves little purpose, since users should
migrate away from it.
2025-09-25 01:08:03 -04:00
Mason Daugherty
e3f3c13b75 refactor(core): use aadd_documents in vectorstores unit tests (#33109)
Don't use the deprecated `upsert()` and `aupsert()`

Instead use the recommended alternatives
2025-09-25 00:57:08 -04:00
Mason Daugherty
c30844fce4 fix(core): use version agnostic get_fields (#33108)
Resolves a warning
2025-09-25 00:54:29 -04:00
Mason Daugherty
c9eb3bdb2d test(core): use secure hash algorithm in indexing test to eliminate SHA-1 warning (#33107)
Finish work from #33101
2025-09-25 00:49:11 -04:00
Mason Daugherty
e97baeb9a6 test(core): suppress pydantic_v1 deprecation warnings during import tests (#33106)
We intentionally import these. Hide warnings to reduce testing noise.
2025-09-25 00:37:40 -04:00
Mason Daugherty
3a6046b157 test(core): don't use deprecated input_variables param in from_file (#33105)
finish #33104
2025-09-25 04:29:31 +00:00
Mason Daugherty
8fdc619f75 refactor(core): don't use deprecated input_variables param in from_file (#33104)
Missed a while back; causes warnings during tests
2025-09-25 00:14:17 -04:00
Ali Ismail
729bfe8369 test(core): enhance stringify_value test coverage for nested structures (#33099)
## Summary
Adds test coverage for the `stringify_value` utility function to handle
complex nested data structures that weren't previously tested.

## Changes
- Added `test_stringify_value_nested_structures()` to `test_strings.py`
- Tests nested dictionaries within lists
- Tests mixed-type lists with various data types
- Verifies proper stringification of complex nested structures

## Why This Matters
- Fills a gap in test coverage for edge cases
- Ensures `stringify_value` handles complex data structures correctly  
- Improves confidence in string utility functions used throughout the
codebase
- Low risk addition that strengthens existing test suite

## Testing
```bash
uv run --group test pytest libs/core/tests/unit_tests/utils/test_strings.py::test_stringify_value_nested_structures -v
```

This test addition follows the project's testing patterns and adds
meaningful coverage without introducing any breaking changes.
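In the spirit of the added coverage, a representative case (values
illustrative; assumes `stringify_value` lives in
`langchain_core.utils.strings`):

```py
from langchain_core.utils.strings import stringify_value

# Nested dict inside a list, mixed value types
nested = {"user": {"name": "ada", "tags": ["x", {"depth": 2}]}}
print(stringify_value(nested))
```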

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-25 00:04:47 -04:00
Mason Daugherty
9b624a79b2 test(core): suppress deprecation warnings in PipelinePromptTemplate (#33102)
We're intentionally testing this still so as not to regress. Reduce
warning noise.
2025-09-25 04:03:27 +00:00
Mason Daugherty
c60c5a91cb fix(core): use secure hash algorithm in indexing test to eliminate SHA-1 warning (#33101)
Use SHA-256 (collision-resistant) instead of the default SHA-1. No
functional changes to test behavior.
2025-09-25 00:02:11 -04:00
Mason Daugherty
d9e0c212e0 chore(infra): add tests to label mapping (#33103) 2025-09-25 00:01:53 -04:00
Sydney Runkle
f015526e42 release(langchain): v1.0.0a9 (#33098) 2025-09-24 21:02:53 +00:00
Sydney Runkle
57d931532f fix(langchain): extra arg for anthropic caching, __end__ -> end for jump_to (#33097)
Also updating `jump_to` to use `end` instead of `__end__`
2025-09-24 17:00:40 -04:00
Mason Daugherty
50012d95e2 chore: update pull_request_target types, harden (#33096)
Enhance the pull request workflows by updating the `pull_request_target`
types and ensuring safety by avoiding checkout of the PR's head. Update
the action to use a specific commit from the archived repository.
2025-09-24 16:37:16 -04:00
Mason Daugherty
33f06875cb fix(langchain_v1): version equality check (#33095) 2025-09-24 16:27:55 -04:00
dependabot[bot]
e5730307e7 chore: bump actions/setup-node from 4 to 5 (#32952)
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 4
to 5.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-24 16:26:05 -04:00
Mason Daugherty
4783a9c18e style: update workflow name for version equality check (#33094) 2025-09-24 20:11:30 +00:00
Mason Daugherty
ee4d84de7c style(core): typo/docs lint pass (#33093) 2025-09-24 16:11:21 -04:00
Mason Daugherty
092dd5e174 chore: update link for monorepo structure (#33091) 2025-09-24 19:55:00 +00:00
Sydney Runkle
dd81e1c3fb release(langchain): 1.0.0a8 (#33090) 2025-09-24 15:31:29 -04:00
Sydney Runkle
135a5b97e6 feat(langchain): improvements to anthropic prompt caching (#33058)
Adding an `unsupported_model_behavior` arg that can be `'ignore'`,
`'warn'`, or `'raise'`. Defaults to `'warn'`.
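A minimal usage sketch (middleware class name per this alpha; model id
illustrative):

```py
from langchain.agents import create_agent
from langchain.agents.middleware import AnthropicPromptCachingMiddleware

agent = create_agent(
    "anthropic:claude-sonnet-4-20250514",  # model id illustrative
    middleware=[AnthropicPromptCachingMiddleware(unsupported_model_behavior="raise")],
)
```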
2025-09-24 15:28:49 -04:00
Mason Daugherty
b92b394804 style: repo linting pass (#33089)
enable docstring-code-format
2025-09-24 15:25:55 -04:00
Sydney Runkle
083bb3cdd7 fix(langchain): need to inject all state for tools registered by middleware (#33087)
Type hints matter for conditional edges!
2025-09-24 15:25:51 -04:00
Mason Daugherty
2e9291cdd7 fix: lift openai version constraints across packages (#33088)
re: #33038 and https://github.com/openai/openai-python/issues/2644
2025-09-24 15:25:10 -04:00
Sydney Runkle
4f8a76b571 chore(langchain): renaming for HITL (#33067) 2025-09-24 07:19:44 -04:00
Mason Daugherty
05ba941230 style(cli): linting pass (#33078) 2025-09-24 01:24:52 -04:00
Mason Daugherty
ae4976896e chore: delete erroneous .readthedocs.yaml (#33079)
From the legacy docs/not needed here
2025-09-24 01:24:42 -04:00
Mason Daugherty
504ef96500 chore: add commit message generation instructions for VSCode (#33077) 2025-09-24 05:06:43 +00:00
Mason Daugherty
d99a02bb27 chore: add AGENTS.md (#33076)
it would be super cool if Anthropic supported this instead of
`CLAUDE.md` :/

https://agents.md/
2025-09-24 05:02:14 +00:00
Mason Daugherty
793de80429 chore: update label mapping in PR title labeler configuration (#33075) 2025-09-24 01:00:14 -04:00
Mason Daugherty
7d4e9d8cda revert(infra): put SECURITY.md at root (#33074) 2025-09-24 00:54:37 -04:00
Mason Daugherty
54dca494cf chore: delete erroneous poetry.toml configuration file (#33073)
- Not used by the current build system
- Potentially confusing for new contributors
- A leftover artifact from the Poetry to uv migration
2025-09-24 04:40:17 +00:00
Mason Daugherty
7b30e58386 chore: delete erroneous yarn.lock in root (#33072)
Appears to have had no purpose/was added by mistake and nobody
questioned it
2025-09-24 04:35:00 +00:00
Mason Daugherty
e62b541dfd chore(infra): move SECURITY.md to .github (#33071)
cleaning up top-level. `.github` folder placement will continue to show
on repo homepage:
https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository#about-security-policies
2025-09-24 00:27:48 -04:00
Mason Daugherty
8699980d09 chore(scripts): remove obsolete release and mypy/ruff update scripts (#33070)
Outdated scripts related to release management and mypy/ruff updates

Cleaning up the root-level
2025-09-24 04:24:38 +00:00
Mason Daugherty
79e536b0d6 chore(infra): further docs build cleanup (#33057)
Reorganize the requirements for better clarity and consistency. Improve
documentation on scripts and workflows.
2025-09-23 17:29:58 -04:00
Sydney Runkle
b5720ff17a chore(langchain): simplifying HITL condition (#33065)
Simplifying condition
2025-09-23 21:24:14 +00:00
nhuang-lc
48b05224ad fix(langchain_v1): only interrupt if at least one ToolConfig value is True (#33064)
**Description:** Right now, we interrupt even if the provided ToolConfig
has all false values. We should ignore ToolConfigs that do not have at
least one value marked as true (just as we would if `tool_name: False` were
passed into the dict).
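Illustrative sketch (tool names and ToolConfig field names are assumptions
about this alpha's shape, not a confirmed API):

```py
from langchain.agents.middleware import HumanInTheLoopMiddleware

middleware = HumanInTheLoopMiddleware(
    tool_configs={
        "delete_rows": {"allow_accept": True},  # has a true value: interrupts
        "read_rows": {"allow_accept": False},   # all false: now skipped
        "list_tables": False,                   # equivalent shorthand
    }
)
```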
2025-09-23 17:20:34 -04:00
Sydney Runkle
89079ad411 feat(langchain): new decorator pattern for dynamically generated middleware (#33053)
# Main Changes

1. Adding decorator utilities for dynamically defining middleware with
single hook functions (see an example below for dynamic system prompt)
2. Adding better conditional edge drawing with jump configuration
attached to middleware. Can be registered w/ the new decorator!

## Decorator Utilities

```py
from langchain.agents.middleware_agent import create_agent, AgentState, ModelRequest
from langchain.agents.middleware.types import modify_model_request
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import InMemorySaver


@modify_model_request
def modify_system_prompt(request: ModelRequest, state: AgentState) -> ModelRequest:
    request.system_prompt = (
        "You are a helpful assistant."
        f"Please record the number of previous messages in your response: {len(state['messages'])}"
    )
    return request

agent = create_agent(
    model="openai:gpt-4o-mini", 
    middleware=[modify_system_prompt]
).compile(checkpointer=InMemorySaver())
```

## Visualization and Routing improvements

We now require that middlewares define the valid jumps for each hook.

If using the new decorator syntax, this can be done with:

```py
@before_model(jump_to=["__end__"])
@after_model(jump_to=["tools", "__end__"])
```

If using the subclassing syntax, you can use these two class vars:

```py
class MyMiddleware(AgentMiddleware):
    before_model_jump_to = ["__end__"]
    after_model_jump_to = ["tools", "__end__"]
```

Open for debate if we want to bundle these in a single jump map / config
for a middleware. Easy to migrate later if we decide to add more hooks.

We will need to **really clearly document** that these must be
explicitly set in order to enable conditional edges.

Notice for the below case, `Middleware2` does actually enable jumps.

<table>
  <thead>
    <tr>
      <th>Before (broken), adding conditional edges unconditionally</th>
      <th>After (fixed), adding conditional edges sparingly</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>
<img width="619" height="508" alt="Screenshot 2025-09-23 at 10 23 23 AM"
src="https://github.com/user-attachments/assets/bba2d098-a839-4335-8e8c-b50dd8090959"
/>
      </td>
      <td>
<img width="469" height="490" alt="Screenshot 2025-09-23 at 10 23 13 AM"
src="https://github.com/user-attachments/assets/717abf0b-fc73-4d5f-9313-b81247d8fe26"
/>
      </td>
    </tr>
  </tbody>
</table>

<details>
<summary>Snippet for the above</summary>

```py
from typing import Any
from langchain.agents.tool_node import InjectedState
from langgraph.runtime import Runtime
from langchain.agents.middleware.types import AgentMiddleware, AgentState
from langchain.agents.middleware_agent import create_agent
from langchain_core.tools import tool
from typing import Annotated
from langchain_core.messages import HumanMessage
from typing_extensions import NotRequired

@tool
def simple_tool(input: str) -> str:
    """A simple tool."""
    return "successful tool call"


class Middleware1(AgentMiddleware):
    """Custom middleware that adds a simple tool."""

    tools = [simple_tool]

    def before_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

    def after_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

class Middleware2(AgentMiddleware):

    before_model_jump_to = ["tools", "__end__"]

    def before_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

    def after_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

class Middleware3(AgentMiddleware):

    def before_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

    def after_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

builder = create_agent(
    model="openai:gpt-4o-mini",
    middleware=[Middleware1(), Middleware2(), Middleware3()],
    system_prompt="You are a helpful assistant.",
)
agent = builder.compile()
```

</details>

## More Examples

### Guardrails `after_model`

<img width="379" height="335" alt="Screenshot 2025-09-23 at 10 40 09 AM"
src="https://github.com/user-attachments/assets/45bac7dd-398e-45d1-ae58-6ecfa27dfc87"
/>

<details>
<summary>Code</summary>

```py
from langchain.agents.middleware_agent import create_agent, AgentState, ModelRequest
from langchain.agents.middleware.types import after_model
from langchain_core.messages import HumanMessage, AIMessage
from langgraph.checkpoint.memory import InMemorySaver
from typing import cast, Any

@after_model(jump_to=["model", "__end__"])
def after_model_hook(state: AgentState) -> dict[str, Any]:
    """Check the last AI message for safety violations."""
    last_message_content = cast(AIMessage, state["messages"][-1]).content.lower()
    print(last_message_content)

    unsafe_keywords = ["pineapple"]
    if any(keyword in last_message_content for keyword in unsafe_keywords):

        # Jump back to model to regenerate response
        return {"jump_to": "model", "messages": [HumanMessage("Please regenerate your response, and don't talk about pineapples. You can talk about apples instead.")]}

    return {"jump_to": "__end__"}

# Create agent with guardrails middleware
agent = create_agent(
    model="openai:gpt-4o-mini",
    middleware=[after_model_hook],
    system_prompt="Keep your responses to one sentence please!"
).compile()

# Test with potentially unsafe input
result = agent.invoke(
    {"messages": [HumanMessage("Tell me something about pineapples")]},
)

for msg in result["messages"]:
    print(msg.pretty_print())

"""
================================ Human Message =================================

Tell me something about pineapples
None
================================== Ai Message ==================================

Pineapples are tropical fruits known for their sweet, tangy flavor and distinctive spiky exterior.
None
================================ Human Message =================================

Please regenerate your response, and don't talk about pineapples. You can talk about apples instead.
None
================================== Ai Message ==================================

Apples are popular fruits that come in various varieties, known for their crisp texture and sweetness, and are often used in cooking and baking.
None
"""
```

</details>
2025-09-23 13:25:55 -04:00
Mason Daugherty
2c95586f2a chore(infra): audit workflows, scripts (#33055)
Mostly adding descriptive frontmatter to workflow files. Also addresses
some formatting and outdated artifacts

No functional changes outside of
[d5457c3](d5457c39ee),
[90708a0](90708a0d99),
and
[338c82d](338c82d21e)
2025-09-23 17:08:19 +00:00
Mason Daugherty
9c1285cf5b chore(infra): fix ping pong pr labeler config (#33054)
The title-based labeler was clearing all pre-existing labels (including
the file-based ones) before adding its semantic labels.
2025-09-22 21:19:53 -04:00
Sydney Runkle
c3be45bf14 fix(langchain): HITL bug causing dupe interrupt (#33052)
Need to find **last** AI msg (not first). Getting too creative w/
generators.
2025-09-22 20:09:12 -04:00
Arman Tsaturian
8f488d62b2 docs: fix stripe toolkit import in the guide (#33044)
**Description:**
Stripe tools integration guide incorrectly referenced the `crewai`
toolkit. Updated the import to use the correct `langchain` toolkit.

Stripe docs reference:
https://docs.stripe.com/agents?framework=langchain&lang=python
2025-09-22 15:17:09 -04:00
Mason Daugherty
cdae9e4942 fix(infra): prevent labeler workflow from adding/removing same labels (#33039)
The file-based and title-based labeler workflows were conflicting,
causing the bot to add and remove identical labels in the same
operation. Hopefully this fixes it.
2025-09-21 04:37:59 +00:00
Mason Daugherty
7ddc798f95 fix(openai): pin upper bound to prevent Pydantic 2.7.0 issues (#33038)
https://github.com/openai/openai-python/issues/2644
2025-09-21 00:27:03 -04:00
Mason Daugherty
7dcf6a515e fix: update method calls from dict to model_dump in Chain (#33035) 2025-09-20 23:47:44 -04:00
Mason Daugherty
043a7560a5 test: use .get() for safe ls_params access (#33034) 2025-09-20 23:46:37 -04:00
Mason Daugherty
5b418d3f26 feat(infra): add PR labeler configurations and workflows (#33031) 2025-09-20 22:33:08 -04:00
Mason Daugherty
6b4054c795 chore(infra): update pre-commit hooks to include linting (#33029) 2025-09-21 02:26:19 +00:00
Mason Daugherty
30fde5af38 chore(infra): remove couchbase formatting hook from pre-commit (#33030)
Should've been done when it was removed from the monorepo
2025-09-20 22:09:57 -04:00
Mason Daugherty
781db9d892 chore: update pyproject.toml files, remove codespell (#33028)
- Removes Codespell from deps, docs, and `Makefile`s
- Python version requirements in all `pyproject.toml` files now use the
`~=` (compatible release) specifier
- All dependency groups and main dependencies now use explicit lower and
upper bounds, reducing potential for breaking changes
2025-09-20 22:09:33 -04:00
Sydney Runkle
f2b0afd0b7 release(langchain): 1.0.0a6 (#33024)
w/ improvements to HITL, state schema merging, dynamic system prompt
2025-09-19 18:47:41 +00:00
Sydney Runkle
c3654202a3 fix(langchain): use state schema as input schema to middleware nodes (#33023)
We want state schema as the input schema to middleware nodes because the
conditional edges after these nodes need access to the full state.

Also, we just generally want all state passed to middleware nodes, so we
should be specifying this explicitly. If we don't, the state annotations
used by users in their node signatures are used (so they might be
missing fields).
2025-09-19 18:43:33 +00:00
Sydney Runkle
4d118777bc feat(langchain): dynamic system prompt middleware (#33006)
# Changes

## Adds support for `DynamicSystemPromptMiddleware`

```py
from langchain.agents.middleware import DynamicSystemPromptMiddleware
from langchain.agents.middleware.types import AgentState
from langgraph.runtime import Runtime
from typing_extensions import TypedDict

class Context(TypedDict):
    user_name: str

def system_prompt(state: AgentState, runtime: Runtime[Context]) -> str:
    user_name = runtime.context.get("user_name", "n/a")
    return f"You are a helpful assistant. Always address the user by their name: {user_name}"

middleware = DynamicSystemPromptMiddleware(system_prompt)
```

## Adds support for `runtime` in middleware hooks

```py
class AgentMiddleware(Generic[StateT, ContextT]):
    def modify_model_request(
        self,
        request: ModelRequest,
        state: StateT,
        runtime: Runtime[ContextT],  # Optional runtime parameter
    ) -> ModelRequest:
        # upgrade model if runtime.context.subscription is `top-tier` or whatever
```

## Adds support for omitting state attributes from input / output
schemas

```py
from typing import Annotated, NotRequired
from langchain.agents.middleware.types import PrivateStateAttr, OmitFromInput, OmitFromOutput

class CustomState(AgentState):
    # Private field - not in input or output schemas
    internal_counter: NotRequired[Annotated[int, PrivateStateAttr]]
    
    # Input-only field - not in output schema
    user_input: NotRequired[Annotated[str, OmitFromOutput]]
    
    # Output-only field - not in input schema  
    computed_result: NotRequired[Annotated[str, OmitFromInput]]
```

## Additionally
* Removes filtering of state before passing into middleware hooks

Typing is not foolproof here; we still need to figure out some of the
generics stuff w/ state and context schema extensions for middleware.

TODO:
* More docs for middleware, should hold off on this until other prios
like MCP and deepagents are met

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-18 16:07:16 -04:00
Mason Daugherty
f158cea1e8 release(mistralai): 0.2.12 (#33008) 2025-09-18 11:42:11 -04:00
Sadiq Khan
90280d1f58 docs(core): fix bugs and improve example code in chat_history.py (#32994)
## Summary

This PR fixes several bugs and improves the example code in
`BaseChatMessageHistory` docstring that would prevent it from working
correctly.

### Bugs Fixed
- **Critical bug**: Fixed `json.dump(messages, f)` →
`json.dump(serialized, f)` - was using wrong variable
- **NameError**: Fixed bare variable references to use
`self.storage_path` and `self.session_id`
- **Missing imports**: Added required imports (`json`, `os`, message
converters) to make example runnable

### Improvements
- Added missing type hints following project standards (`messages() ->
list[BaseMessage]`, `clear() -> None`)
- Added robust error handling with `FileNotFoundError` exception
handling
- Added directory creation with `os.makedirs(exist_ok=True)` to prevent
path errors
- Improved performance: `json.load(f)` instead of `json.loads(f.read())`
- Added explicit UTF-8 encoding to all file operations
- Updated stores.py to use modern union syntax (`int | None` vs
`Optional[int]`)

### Test Plan
- [x] Code passes linting (`ruff check`)
- [x] Example code now has all required imports and proper syntax
- [x] Fixed variable references prevent runtime errors
- [x] Follows project's type annotation standards

The example code in the docstring is now fully functional and follows
LangChain's coding standards.
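Condensed, the corrected pattern looks roughly like this (a sketch, not
the exact docstring text; class name and paths illustrative):

```py
import json
from pathlib import Path

from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage, messages_from_dict, messages_to_dict


class FileChatMessageHistory(BaseChatMessageHistory):
    def __init__(self, storage_path: str, session_id: str) -> None:
        self.file = Path(storage_path) / f"{session_id}.json"
        self.file.parent.mkdir(parents=True, exist_ok=True)  # avoid path errors

    @property
    def messages(self) -> list[BaseMessage]:
        try:
            with self.file.open(encoding="utf-8") as f:
                return messages_from_dict(json.load(f))
        except FileNotFoundError:
            return []

    def add_messages(self, messages: list[BaseMessage]) -> None:
        serialized = messages_to_dict(list(self.messages) + list(messages))
        with self.file.open("w", encoding="utf-8") as f:
            json.dump(serialized, f)  # the original bug dumped `messages` here

    def clear(self) -> None:
        self.file.unlink(missing_ok=True)
```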

---------

Co-authored-by: sadiqkhzn <sadiqkhzn@users.noreply.github.com>
2025-09-18 09:34:19 -04:00
Dushmanta
ee340e0a3b fix(docs): update dead link to docling github and docs (#33001)
- **Description:** Updated the dead/unreachable links to Docling from
the additional resources section of the langchain-docling docs
  - **Issue:** Fixes langchain-ai/docs/issues/574
  - **Dependencies:** None
2025-09-18 09:30:29 -04:00
Sydney Runkle
d5ba5d3511 feat(langchain): improved HITL patterns (#32996)
# Main changes / new features

## Better support for parallel tool calls

1. Support for multiple tool calls requiring human input
2. Support for combination of tool calls requiring human input + those
that are auto-approved
3. Support structured output w/ tool calls requiring human input
4. Support structured output w/ standard tool calls

## Shortcut for allowed actions

Adds a shortcut where tool config can be specified as a `bool`, meaning
"all actions allowed"

```py
HumanInTheLoopMiddleware(tool_configs={"expensive_tool": True})
```

## A few design decisions here
* We only raise one interrupt w/ all `HumanInterrupt`s; currently we
won't be able to execute all tools until all of these are resolved. This
isn't super blocking bc we can't re-invoke the model until all tools
have finished execution. That being said, if you have a long-running
auto-approved tool, this could slow things down.

## TODOs

* Ideally, we would rename `accept` -> `approve`
* Ideally, we would rename `respond` -> `reject`
* Docs update (@sydney-runkle to own)
* In another PR I'd like to refactor testing to have one file for each
prebuilt middleware :)

Fast follow to https://github.com/langchain-ai/langchain/pull/32962
which was deemed as too breaking
2025-09-17 16:53:01 -04:00
Mason Daugherty
76d0758007 fix(docs): json_mode -> json_schema (#32993) 2025-09-17 18:21:34 +00:00
Mason Daugherty
8b3f74012c docs: update GenAI structured output section to include JSON mode details (#32992) 2025-09-17 17:40:34 +00:00
Mason Daugherty
54a9556f5c chore(cli): update lock (#32986) 2025-09-17 02:08:20 +00:00
Mason Daugherty
66041a2778 refactor(cli): target ruff 310 (#32985)
Use union types for optional parameters
2025-09-16 22:04:28 -04:00
Mason Daugherty
ab1b822523 chore: update PR title lint (#32983) 2025-09-16 22:04:19 -04:00
Chase Lean
543d90e108 docs: add langchain-scraperapi (#31973)
Adds documentation for the integration langchain-scraperapi, which
contains 3 tools using the ScraperAPI service.

The tools give AI agents the ability to:

- Scrape the web and return HTML/text/markdown
- Perform a Google search and return JSON output
- Perform an Amazon search and return JSON output

For reference, here is the official repo for langchain_scraperapi:
https://github.com/scraperapi/langchain-scraperapi
2025-09-16 21:46:20 -04:00
Adam Deedman
f8640630d8 docs: fix memory for agents (#32979)
Replaced the `input_message` parameter with an inline tuple, e.g.
`{"messages": [("user", "What is my name?")]}`

Before, the memory function wasn't working with the agent when using the
`input_message` parameter format.

Specifically, on page [Build an
Agent#adding-in-memory](https://python.langchain.com/docs/tutorials/agents/#adding-in-memory)

In the previous code, the query "What's my name?" wasn't working, as the
agent could not recall memory correctly.

<img width="860" height="679" alt="image"
src="https://github.com/user-attachments/assets/dfbca21e-ffe9-4645-a810-3be7a46d81d5"
/>
2025-09-16 15:46:15 -04:00
Mason Daugherty
f9605c7438 chore(infra): update contribution guide link in CONTRIBUTING.md (#32976) 2025-09-16 15:15:53 +00:00
Mason Daugherty
ebd6f7d8a3 chore(infra): update security guidelines formatting (#32975) 2025-09-16 15:12:10 +00:00
ccurme
e63c1d7171 chore(langchain): drop cap on python version (#32974) 2025-09-16 10:44:21 -04:00
Mason Daugherty
8180020b93 chore: restore commented out optional deps (#32971)
langchain & langchain_v1
2025-09-16 10:10:49 -04:00
Username46786
435194acf6 docs: add cross-links between summarization how-to pages (#32968)
This PR improves navigation in the summarization how-to section by
adding
cross-links from the single-call guide to the related map-reduce and
refine
guides. This mirrors the docs style guide’s emphasis on clear
cross-references
and should help readers discover the appropriate pattern for longer
texts.

- Source edited: docs/docs/how_to/summarize_stuff.ipynb
- Links added:
  - /docs/how_to/summarize_map_reduce/
  - /docs/how_to/summarize_refine/

Type: docs-only (no code changes)
2025-09-16 09:59:03 -04:00
Mason Daugherty
244c699551 refactor(cli): drop Python 3.9 (#32964) 2025-09-15 19:22:53 -04:00
Mason Daugherty
369858de19 chore(infra): fix codspeed (#32963)
Related to #32950

CodSpeed v4 runs pytest inside its own runner process, which does not
automatically inherit environment variables from the job
2025-09-15 21:52:46 +00:00
Ali Ismail
4ebce80fbb docs(langchain): add docstring for _load_map_reduce_chain (#32961)
Description:
Add a docstring to _load_map_reduce_chain in chains/summarize/ to
explain the purpose of the prompt argument and document function
parameters. This addresses an existing TODO in the codebase.

Issue:
N/A (documentation improvement only)

Dependencies:
None
2025-09-15 17:19:20 -04:00
Mason Daugherty
8670b24c8e test(groq): xfail tool integration test (#32960)
Groq models have known issues with tool calling consistency,
[particularly with complex tools derived from
runnables](https://github.com/langchain-ai/langchain/discussions/19990).
[(more)](https://github.com/langchain-ai/langchain/discussions/24309)

xfail until we can dedicate time to wrangling their API/model handling
2025-09-15 14:23:22 -04:00
Ademílson Tonato
8d60ddba3a docs: update installation command for firecrawl-py package (#32958) 2025-09-15 14:10:08 -04:00
Mason Daugherty
9f6431924f feat(openai): add max_tokens to AzureChatOpenAI (#32959)
Fixes #32949

This pattern is [present in
`ChatOpenAI`](https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py#L2821)
but wasn't carried over to Azure.


[CI](https://github.com/langchain-ai/langchain/actions/runs/17741751797/job/50417180998)
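Usage sketch (deployment name and API version are placeholders):

```py
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="gpt-4o",          # placeholder deployment
    api_version="2024-08-01-preview",   # placeholder version
    max_tokens=256,                     # now handled like ChatOpenAI
)
```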
2025-09-15 14:09:20 -04:00
Ali Ismail
569a3d9602 docs(langchain): add docstring for _load_stuff_chain (#32932)
**Description:**  
Add a docstring to `_load_stuff_chain` in `chains/summarize/` to explain
the purpose of the `prompt` argument and document function parameters.
This addresses an existing TODO in the codebase.

**Issue:**  
N/A (documentation improvement only)

**Dependencies:**  
None
2025-09-15 10:02:50 -04:00
dependabot[bot]
8ef4df903f chore(infra): bump CodSpeedHQ/action from 3 to 4 (#32950)
Bumps [CodSpeedHQ/action](https://github.com/codspeedhq/action) from 3
to 4.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 13:56:46 +00:00
doubleinfinity
b944bbc766 docs: add ZeusDB vector store integration (#32822)
## Description

This PR adds documentation for the new ZeusDB vector store integration
with LangChain.

## Motivation

ZeusDB is a high-performance vector database (Python/Rust backend)
designed for AI applications that need fast similarity search and
real-time vector ops. This integration brings ZeusDB's capabilities to
the LangChain ecosystem, giving developers another production-oriented
option for vector storage and retrieval.

**Key Features:**
- **User-Friendly Python API**: Intuitive interface that integrates
seamlessly with Python ML workflows
- **High Performance**: Powered by a robust Rust backend for
lightning-fast vector operations
- **Enterprise Logging**: Comprehensive logging capabilities for
monitoring and debugging production systems
- **Advanced Features**: Includes product quantization and persistence
capabilities
- **AI-Optimized**: Purpose-built for modern AI applications and RAG
pipelines

## Changes

- Added provider documentation:
`docs/docs/integrations/providers/zeusdb.mdx` (installation, setup).

- Added vector store documentation:
`docs/docs/integrations/vectorstores/zeusdb.ipynb` (quickstart for
creating/querying a ZeusDBVectorStore).

- Registered langchain-zeusdb in `libs/packages.yml` for discovery.

## Target users

- AI/ML engineers building RAG pipelines

- Data scientists working with large document collections

- Developers needing high-throughput vector search

- Teams requiring near real-time vector operations

## Testing

- Followed LangChain's "How to add standard tests to an integration"
guidance.
- Code passes format, lint, and test checks locally.
- Tested with LangChain Core 0.3.74
- Works with Python 3.10 to 3.13

## Package Information
**PyPI:** https://pypi.org/project/langchain-zeusdb
**Github:** https://github.com/ZeusDB/langchain-zeusdb
2025-09-15 09:55:14 -04:00
Filip Makraduli
0be7515abc docs: add superlinked retriever integration (#32433)
# feat(superlinked): add superlinked retriever integration

**Description:** 
Add Superlinked as a custom retriever with full LangChain compatibility.
This integration enables users to leverage Superlinked's multi-modal
vector search capabilities including text similarity, categorical
similarity, recency, and numerical spaces with flexible weighting
strategies. The implementation provides a `SuperlinkedRetriever` class
that extends LangChain's `BaseRetriever` with comprehensive error
handling, parameter validation, and support for various vector databases
(in-memory, Qdrant, Redis, MongoDB).

**Key Features:**
- Full LangChain `BaseRetriever` compatibility with `k` parameter
support
- Multi-modal search spaces (text, categorical, numerical, recency)
- Flexible weighting strategies for complex search scenarios
- Vector database agnostic implementation
- Comprehensive validation and error handling
- Complete test coverage (unit tests, integration tests)
- Detailed documentation with 6 practical usage examples

**Issue:** N/A (new integration)

**Dependencies:** 
- `superlinked==33.5.1` (peer dependency, imported within functions)
- `pandas^2.2.0` (required by superlinked)

**Linkedin handle:** https://www.linkedin.com/in/filipmakraduli/

## Implementation Details

### Files Added/Modified:
- `libs/partners/superlinked/` - Complete package structure
- `libs/partners/superlinked/langchain_superlinked/retrievers.py` - Main
retriever implementation
- `libs/partners/superlinked/tests/unit_tests/test_retrievers.py` - unit
tests
- `libs/partners/superlinked/tests/integration_tests/test_retrievers.py`
- Integration tests with mocking
- `docs/docs/integrations/retrievers/superlinked.ipynb` - Documentation
with a few usage examples

### Testing:
- `make format` - passing
- `make lint` - passing 
- `make test` - passing (16 unit tests, integration tests)
- Comprehensive test coverage including error handling, validation, and
edge cases

### Documentation:
- Example notebook with 6 practical scenarios:
  1. Simple text search
  2. Multi-space blog search (content + category + recency)
  3. E-commerce product search (price + brand + ratings)
  4. News article search (sentiment + topics + recency)
  5. LangChain RAG integration example
  6. Qdrant vector database integration

### Code Quality:
- Follows LangChain contribution guidelines
- Backwards compatible
- Optional dependencies imported within functions
- Comprehensive error handling and validation
- Type hints and docstrings throughout

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 13:54:04 +00:00
Sadiq Khan
cc9a97a477 docs(core): add type hints to BaseStore example code (#32946)
## Summary
- Add comprehensive type hints to the MyInMemoryStore example code in
BaseStore docstring
- Improve documentation quality and educational value for developers
- Align with LangChain's coding standards requiring type hints on all
Python code

## Changes Made
- Added return type annotations to all methods (__init__, mget, mset,
mdelete, yield_keys)
- Added parameter type annotations using proper generic types (Sequence,
Iterator)
- Added instance variable type annotation for the store attribute
- Used modern Python union syntax (str | None) for optional types
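
For illustration, a fully annotated store along those lines might look like the following sketch (the class name comes from the docstring; the body here is assumed, not copied from it):

```python
from collections.abc import Iterator, Sequence

from langchain_core.stores import BaseStore


class MyInMemoryStore(BaseStore[str, int]):
    """Illustrative in-memory store with full type hints."""

    def __init__(self) -> None:
        self.store: dict[str, int] = {}

    def mget(self, keys: Sequence[str]) -> list[int | None]:
        return [self.store.get(key) for key in keys]

    def mset(self, key_value_pairs: Sequence[tuple[str, int]]) -> None:
        for key, value in key_value_pairs:
            self.store[key] = value

    def mdelete(self, keys: Sequence[str]) -> None:
        for key in keys:
            self.store.pop(key, None)

    def yield_keys(self, *, prefix: str | None = None) -> Iterator[str]:
        for key in self.store:
            if prefix is None or key.startswith(prefix):
                yield key
```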

## Test Plan
- Verified Python syntax validity with ast.parse()
- No functional changes to actual code, only documentation improvements
- Example code now follows best practices and coding standards

This change improves the educational value of the example code and
ensures consistency with LangChain's requirement that "All Python code
MUST include type hints and return types" as specified in the
development guidelines.

---------

Co-authored-by: sadiqkhzn <sadiqkhzn@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 13:45:34 +00:00
Dmitry
ee17adb022 docs: add AI/ML API integration (#32430)
**Description:**
Introduces documentation notebooks for AI/ML API integration covering
the following use cases:
- Chat models (`ChatAimlapi`)
- Text completion models (`AimlapiLLM`)
- Provider usage examples
- Text embedding models (`AimlapiEmbeddings`)

Additionally, adds the `langchain-aimlapi` package entry to
`libs/packages.yml` for package management.

This PR aims to provide a comprehensive starting point for developers
integrating AI/ML API models with LangChain via the new
`langchain-aimlapi` package.

**Issue:** N/A

**Dependencies:** None

**Twitter handle:** @aimlapi

---

### **To-Do Before Submitting PR:**

* [x] Run `make format`
* [x] Run `make lint`
* [x] Confirm all documentation notebooks are in
`docs/docs/integrations/`
* [x] Double-check `libs/packages.yml` has the correct repo path
* [x] Confirm no `pyproject.toml` modifications were made unnecessarily

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 09:41:40 -04:00
Noraina
6a43f140bc docs: update SerpApi free searches amount in tool feature table (#32945)
**Description:** 
This PR updates the free searches per month from **100** to **250** and
renames SerpAPI to [SerpApi](https://serpapi.com/) to prevent confusion.
It also adds API key imports and enhances usage instructions in the
Jupyter notebook.

**Issue:** N/A

**Dependencies:** N/A

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. **We will not consider
a PR unless these three are passing in CI.** See [contribution
guidelines](https://python.langchain.com/docs/contributing/) for more.
2025-09-14 21:42:59 -04:00
Youngho Kim
4619a2727f docs(anthropic): update documentation links (#32938)
**Description:**
This PR updated links to the latest Anthropic documentation. Changes
include revised links for model overview, tool usage, web search tool,
text editor tool, and more.

**Issue:**
N/A

**Dependencies:**
None

**Twitter handle:**
N/A
2025-09-14 21:38:51 -04:00
湛露先生
6487a7e2e5 chore(langchain): remove duplicate .pdf listing (#32929) 2025-09-14 21:33:40 -04:00
湛露先生
406ebc9141 chore(langchain): Fix typos in core docstrings (#32928)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-09-14 21:33:06 -04:00
Nikhil Chandrappa
e6b5ff213a docs: add YugabyteDB Distributed SQL database (#32571)
- **Description:** The `langchain-yugabytedb` package provides implementations of
core LangChain abstractions backed by the `YugabyteDB` Distributed SQL database.
  
YugabyteDB is a cloud-native distributed PostgreSQL-compatible database
that combines strong consistency with ultra-resilience, seamless
scalability, geo-distribution, and highly flexible data locality to
deliver business-critical, transactional applications.

[YugabyteDB](https://www.yugabyte.com/ai/) combines the power of the
`pgvector` PostgreSQL extension with an inherently distributed
architecture. This future-proofed foundation helps you build GenAI
applications using RAG retrieval that demands high-performance vector
search.

- [ ] **tests and docs**: 
1. `langchain-yugabytedb`
[github](https://github.com/yugabyte/langchain-yugabytedb) repo.
2. YugabyteDB VectorStore example notebook showing its use. It lives in
`langchain/docs/docs/integrations/vectorstores/yugabytedb.ipynb`
directory.
  3. Running `langchain-yugabytedb` unit tests 
  
- Setting up a Development Environment

This document details how to set up a local development environment that
will
allow you to contribute changes to the project.

Acquire sources and create virtualenv.
```shell
git clone https://github.com/yugabyte/langchain-yugabytedb
cd langchain-yugabytedb
uv venv --python=3.13
source .venv/bin/activate
```

Install package in editable mode.
```shell
uv pip install pipx  
pipx install poetry
poetry install
uv pip install pytest pytest_asyncio pytest-timeout langchain-core langchain_tests sqlalchemy psycopg psycopg-binary numpy pgvector
```

Start YugabyteDB RF-1 Universe.
```shell
docker run -d --name yugabyte_node01 --hostname yugabyte01 \
  -p 7000:7000 -p 9000:9000 -p 15433:15433 -p 5433:5433 -p 9042:9042 \
  yugabytedb/yugabyte:2.25.2.0-b359 bin/yugabyted start --background=false \
  --master_flags="allowed_preview_flags_csv=ysql_yb_enable_advisory_locks,ysql_yb_enable_advisory_locks=true" \
  --tserver_flags="allowed_preview_flags_csv=ysql_yb_enable_advisory_locks,ysql_yb_enable_advisory_locks=true"

docker exec -it yugabyte_node01 bin/ysqlsh -h yugabyte01 -c "CREATE extension vector;"
```

Invoke test cases.
```shell
pytest -vvv tests/unit_tests/yugabytedb_tests
```
2025-09-12 16:55:09 -04:00
Michael Yilma
03f0ebd93e docs: add Bigtable Key-value Store and Vector Store Docs (#32598)

- [x] **feat(docs)**: add Bigtable Key-value store doc
- [X] **feat(docs)**: add Bigtable Vector store doc 

This PR adds a doc for Bigtable and LangChain Key-value store
integration. It contains guides on how to add, delete, get, and yield
key-value pairs from Bigtable Key-value Store for LangChain.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. **We will not consider
a PR unless these three are passing in CI.** See [contribution
guidelines](https://python.langchain.com/docs/contributing/) for more.


---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:53:59 -04:00
Bar Cohen
c9eed530ce docs: add Timbr tools integration (#32862)
# feat(integrations): Add Timbr tools integration

## DESCRIPTION

This PR adds comprehensive documentation and integration support for
Timbr's semantic layer tools in LangChain.

[Timbr](https://timbr.ai/) provides an ontology-driven semantic layer
that enables natural language querying of databases through
business-friendly concepts. It connects raw data to governed business
measures for consistent access across BI, APIs, and AI applications.

[`langchain-timbr`](https://pypi.org/project/langchain-timbr/) is a
Python SDK that extends
[LangChain](https://github.com/WPSemantix/Timbr-GenAI/tree/main/LangChain)
and
[LangGraph](https://github.com/WPSemantix/Timbr-GenAI/tree/main/LangGraph)
with custom agents, chains, and nodes for seamless integration with the
Timbr semantic layer. It enables converting natural language prompts
into optimized semantic-SQL queries and executing them directly against
your data.

**What's Added:**
- Complete integration documentation for `langchain-timbr` package
- Tool documentation page with usage examples and API reference

**Integration Components:**
- `IdentifyTimbrConceptChain` - Identify relevant concepts from user
prompts
- `GenerateTimbrSqlChain` - Generate SQL queries from natural language
- `ValidateTimbrSqlChain` - Validate queries against knowledge graph
schemas
- `ExecuteTimbrQueryChain` - Execute queries against semantic databases
- `GenerateAnswerChain` - Generate human-readable answers from results

## Documentation Added

- `/docs/integrations/providers/timbr.mdx` - Provider overview and
configuration
- `/docs/integrations/tools/timbr.ipynb` - Comprehensive tool usage
examples

## Links

- [PyPI Package](https://pypi.org/project/langchain-timbr/)
- [GitHub Repository](https://github.com/WPSemantix/langchain-timbr)
- [Official
Documentation](https://docs.timbr.ai/doc/docs/integration/langchain-sdk/)

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:51:42 -04:00
tbice
e6c38a043f docs: add Qwen integration guide and update qwq documentation (#32817)

**Description:**  
Add documentation for Qwen integration in LangChain, including setup
instructions, usage examples, and configuration details. Update related
qwq documentation to reflect current best practices and improve clarity
for users.

This PR enhances the documentation ecosystem by:
- Adding a new guide for integrating Qwen models
- Updating outdated or incomplete qwq documentation
- Improving structure and readability of relevant sections

**Issue:** N/A  
**Dependencies:** None

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:49:20 -04:00
Elif Sema Balcioglu
dc47c2c598 docs: update langchain-oracledb documentation (#32805)
`Oracle AI Vector Search` integrations for LangChain have been moved to
a dedicated package, [langchain-oracledb
](https://pypi.org/project/langchain-oracledb/), and a new repository,
[langchain-oracle
](https://github.com/oracle/langchain-oracle/tree/main/libs/oracledb).
This PR updates the corresponding documentation, including installation
instructions and import statements, to reflect these changes.

This PR is complemented with:
https://github.com/langchain-ai/langchain-community/pull/283

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:47:10 -04:00
Yuvraj Chandra
3420ca1da2 docs: add ZenRows provider and tool integration docs (#31742)
**Description:** Adds documentation for ZenRows integration with
LangChain, including provider overview and detailed tool documentation.
ZenRows is an enterprise-grade web scraping solution that enables
LangChain agents to extract web content at scale with advanced features
like JavaScript rendering, anti-bot bypass, geo-targeting, and multiple
output formats.

This PR includes:
- Provider documentation
(`docs/docs/integrations/providers/zenrows.ipynb`)
- Tool documentation
(`docs/docs/integrations/tools/zenrows_universal_scraper.ipynb`)
- Complete usage examples and API reference links

**Issue:** N/A

**Dependencies:** 
- [langchain-zenrows](https://github.com/ZenRows-Hub/langchain-zenrows)
package (external, available on
[PyPI](https://pypi.org/project/langchain-zenrows/))
- No changes to core LangChain dependencies

**LinkedIn handle:** https://www.linkedin.com/company/zenrows/

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:37:49 -04:00
Vishal Karwande
f11dd177e9 docs: update oci documentation and examples. (#32749)
Adding Oracle Generative AI as one of the providers for LangChain.
Updated the old examples in the documentation with the new working
examples.

---------

Co-authored-by: Vishal Karwande <vishalkarwande@Vishals-MacBook-Pro.local>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:28:03 -04:00
Ali Ismail
d5a4abf960 docs(core): remove duplicate 'the' in indexing/api.py (#32924)
**Description:** Fixes a small typo in `_get_document_with_hash` inside 
`libs/core/langchain_core/indexing/api.py`.

**Issue:** N/A (no related issue)

**Dependencies:** None
2025-09-12 15:49:54 -04:00
Eugene Yurtsev
b1497bcea1 chore(core): test that default values in tool calls are preserved in json schema representation (#32921)
Add unit test coverage for this issue:
https://github.com/langchain-ai/langchain/issues/32232
2025-09-12 12:50:54 -04:00
Sydney Runkle
84f9824cc9 chore: use uv caches (#32919)
Especially helpful for the text splitters tests where we're installing
pytorch (expensive and slow slow slow). Should speed up CI by 5-10 mins.

w/o caches, CI taking 20 minutes 😨 
w/ caches, CI taking 3 minutes
2025-09-12 10:29:35 -04:00
Sydney Runkle
0814bfe5ed ci: use partial runs w/ codspeed (#32920)
Taking advantage of [partial
runs](https://codspeed.io/docs/features/partial-runs)!

This should save us minutes on every CI job, we only run codspeed for
libs w/ changes and this doesn't affect benchmarking drops
2025-09-12 09:46:01 -04:00
Christophe Bornet
cbaf97ada4 chore: bump mypy version to 1.18 (#32914) 2025-09-12 09:19:23 -04:00
Sydney Runkle
dc2da95ac0 release(langchain): v1.0.0a5 (#32917) 2025-09-12 08:36:44 -04:00
Sydney Runkle
9e78ff19ab fix(langchain): use messages from model request (#32908)
Oversight when moving back to basic function call for
`modify_model_request` rather than implementation as its own node.

A basic test currently fails on main and passes on this branch.

Revealed a gap in testing. Will write up a more robust test suite for
basic middleware features.
2025-09-12 08:18:02 -04:00
Mason Daugherty
649d8a8223 test(anthropic): enable VCR for web fetch test (#32913)
The API issues have been resolved; no longer xfailing
2025-09-12 03:19:55 +00:00
Mason Daugherty
338d3d2795 chore: remove infra tag from task issue template (#32912) 2025-09-11 22:02:14 -04:00
Mason Daugherty
31f641a11f chore(infra): issue template updates (#32911) 2025-09-11 22:00:44 -04:00
open-swe[bot]
91286b0b27 chore(infra): issue template updates (#32910)
Fixes: #32909

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-11 21:53:35 -04:00
dishaprakash
bea72bac3e docs: add hybrid search documentation to PGVectorStore (#32549)
Adding documentation for Hybrid Search in the PGVectorStore Notebook

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-11 21:12:58 -04:00
Caspar Broekhuizen
15d558ff16 fix(core): resolve mermaid node id collisions when special chars are used (#32857)
### Description

* Replace the Mermaid graph node label escaping logic
(`_escape_node_label`) with `_to_safe_id`, which converts a string into
a unique, Mermaid-compatible node id. Ensures nodes with special
characters always render correctly.

**Before**
* Invalid characters (e.g. `开`) replaced with `_`. Causes collisions
between nodes with names that are the same length and contain all
non-safe characters:
```python
_escape_node_label("开") # '_'
_escape_node_label("始") # '_'  same as above, but different character passed in. not a unique mapping.
```

**After**
```python
_to_safe_id("开") # \5f00
_to_safe_id("始") # \59cb  unique!
```

### Tests
* Rename `test_graph_mermaid_escape_node_label()` to
`test_graph_mermaid_to_safe_id()` and update function logic to use
`_to_safe_id`
* Add `test_graph_mermaid_special_chars()`

### Issue

Fixes langchain-ai/langgraph#6036
2025-09-11 14:15:17 -07:00
Hyunjoon Jeong
9cc85387d1 fix(text-splitters): add validation to prevent infinite loop and prevent empty token splitter (#32205)
### Description
1) Add validation requiring ```tokenizer.tokens_per_chunk > tokenizer.chunk_overlap```
to prevent an infinite loop
2) Avoid empty decoded chunk when splitter appends tokens
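
A minimal sketch of the kind of guard described in (1), with attribute names taken from the PR text (the actual check lives in langchain-text-splitters and may differ):

```python
def validate_chunking(tokens_per_chunk: int, chunk_overlap: int) -> None:
    # If the overlap is >= the chunk size, the window start advances by
    # tokens_per_chunk - chunk_overlap, which would be <= 0, so the split
    # loop would re-emit the same tokens forever.
    if tokens_per_chunk <= chunk_overlap:
        raise ValueError(
            f"tokens_per_chunk ({tokens_per_chunk}) must be greater than "
            f"chunk_overlap ({chunk_overlap})"
        )
```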

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 16:55:32 -04:00
Mason Daugherty
7e5180e2fa refactor: inline test release (#32903)
Reusable workflows are not currently supported by PyPI's Trusted
Publishing
functionality, and are subject to breakage. Users are strongly
encouraged
to avoid using reusable workflows for Trusted Publishing until support
becomes official. Please do not report bugs if this breaks.
2025-09-11 16:20:07 -04:00
Mason Daugherty
bbb1b9085d release(prompty): 0.1.2 (#32907) 2025-09-11 16:19:07 -04:00
Vincent Min
ff9f17bc66 fix(core): preserve ordering in RunnableRetry batch/abatch results (#32526)
Description: Fixes a bug in RunnableRetry where .batch / .abatch could
return misordered outputs (e.g. inputs [0,1,2] yielding [1,1,2]) when
some items succeeded on an earlier attempt and others were retried. Root
cause: successful results were stored keyed by the index within the
shrinking “pending” subset rather than the original input index, causing
collisions and reordered/duplicated outputs after retries. Fix updates
_batch and _abatch to:

- Track remaining original indices explicitly.
- Call underlying batch/abatch only on remaining inputs.
- Map results back to original indices.
- Preserve final ordering by reconstructing outputs in original
positional order.
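
A minimal sketch of that bookkeeping pattern (illustrative only, not the actual RunnableRetry code):

```python
from typing import Callable, TypeVar

T = TypeVar("T")
R = TypeVar("R")


def batch_with_retry(
    inputs: list[T],
    batch_fn: Callable[[list[T]], list[R | Exception]],
    max_attempts: int = 3,
) -> list[R | Exception]:
    results: dict[int, R | Exception] = {}
    remaining = list(range(len(inputs)))  # original indices still pending

    for _ in range(max_attempts):
        if not remaining:
            break
        outputs = batch_fn([inputs[i] for i in remaining])
        still_pending: list[int] = []
        for original_idx, output in zip(remaining, outputs):
            # Key results by the *original* index, not by position in the
            # shrinking pending subset -- that positional keying was the bug.
            results[original_idx] = output
            if isinstance(output, Exception):
                still_pending.append(original_idx)
        remaining = still_pending

    # Reconstruct outputs in original positional order.
    return [results[i] for i in range(len(inputs))]
```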

Issue: Fixes #21326

Tests:

- Added regression tests: test_retry_batch_preserves_order and
test_async_retry_batch_preserves_order asserting correct ordering after
a single controlled failure + retry.
- Existing retry tests still pass.

Dependencies:

- None added or changed.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 16:18:25 -04:00
Matthew Lapointe
b1f08467cd feat(core): allow overriding ls_model_name from kwargs (#32541) 2025-09-11 16:18:06 -04:00
Eugene Yurtsev
2903e08311 chore(docs): remove langchain_experimental from api reference (#32904)
This removes langchain-experimental from api reference.

We do not recommend it to users for production use cases, so let's also
deprecate it from documentation
2025-09-11 16:13:58 -04:00
Mason Daugherty
115e20a0bc release(ollama): 0.3.8 (#32906) 2025-09-11 16:00:41 -04:00
Mason Daugherty
0ea945d291 release(nomic): 0.1.5 (#32905) 2025-09-11 15:54:19 -04:00
Mason Daugherty
5795ec3c4d release(exa): 0.3.1 (#32902) 2025-09-11 15:53:13 -04:00
Mason Daugherty
bd765753ca release(chroma): 0.2.6 (#32901) 2025-09-11 15:52:19 -04:00
Christophe Bornet
5fd7962a78 fix(core): fix support of Pydantic v1 models in BaseTool.args (#32487)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 15:44:51 -04:00
Marcus Chia
c68796579e fix(core): resolve infinite recursion in _dereference_refs_helper with mixed $ref objects (#32578)
**Description:** Fixes infinite recursion issue in JSON schema
dereferencing when objects contain both $ref and other properties (e.g.,
nullable, description, additionalProperties). This was causing Apollo
MCP server schemas to hang indefinitely during tool binding.

**Problem:**
- Commit fb5da8384 changed the condition from `set(obj.keys()) ==
{"$ref"}` to `"$ref" in set(obj.keys())`
- This caused objects with $ref + other properties to be treated as pure
$ref nodes
- Result: other properties were lost and infinite recursion occurred
with complex schemas

**Solution:**
- Restore pure $ref detection for objects with only $ref key  
- Add proper handling for mixed $ref objects that preserves all
properties
- Merge resolved reference content with other properties
- Maintain cycle detection to prevent infinite recursion
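
A hedged sketch of that resolution logic (simplified: `#/$defs/`-style refs assumed, cycles short-circuited to an empty schema; not the actual langchain_core helper):

```python
from typing import Any


def dereference(obj: Any, defs: dict[str, Any], seen: frozenset[str] = frozenset()) -> Any:
    if isinstance(obj, dict):
        if "$ref" in obj:
            ref = obj["$ref"]
            if ref in seen:
                # Cycle detection: stop instead of recursing forever.
                return {}
            target = defs[ref.removeprefix("#/$defs/")]
            resolved = dereference(target, defs, seen | {ref})
            if set(obj) == {"$ref"}:
                return resolved  # pure $ref node: replace wholesale
            # Mixed node: merge resolved content while preserving sibling
            # keys such as `nullable` or `description`.
            extras = {k: dereference(v, defs, seen) for k, v in obj.items() if k != "$ref"}
            return {**resolved, **extras}
        return {k: dereference(v, defs, seen) for k, v in obj.items()}
    if isinstance(obj, list):
        return [dereference(v, defs, seen) for v in obj]
    return obj
```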

**Impact:**
- Fixes Apollo MCP server schema integration
- Resolves tool binding infinite recursion with complex GraphQL schemas
- Preserves backward compatibility with existing functionality
- No performance impact - actually improves handling of complex schemas

**Issue:** Fixes #32511

**Dependencies:** None

**Testing:**
- Added comprehensive unit tests covering mixed $ref scenarios
- All existing tests pass (1326 passed, 0 failed)
- Tested with realistic Apollo GraphQL schemas
- Stress tested with 100 iterations of complex schemas

**Verification:**
-  `make format` - All files properly formatted
-  `make lint` - All linting checks pass  
-  `make test` - All 1326 unit tests pass
-  No breaking changes - full backwards compatibility maintained

---------

Co-authored-by: Marcus <marcus@Marcus-M4-MAX.local>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 15:21:31 -04:00
Mason Daugherty
255ad31955 release(anthropic): 0.3.20 (#32900) 2025-09-11 15:18:43 -04:00
Mason Daugherty
00e992a780 feat(anthropic): web fetch beta (#32894)
Note: citations are broken until Anthropic fixes their API
2025-09-11 15:14:06 -04:00
Mason Daugherty
83d938593b lint 2025-09-11 15:12:38 -04:00
Mason Daugherty
38afeddcb6 fix(groq): update docs due to model deprecation (#32899)
On Friday, October 10th, the moonshotai/kimi-k2-instruct model will be
decommissioned in favor of the latest version,
moonshotai/kimi-k2-instruct-0905.
 
Until then, requests to moonshotai/kimi-k2-instruct will automatically
be routed to moonshotai/kimi-k2-instruct-0905.
2025-09-11 15:00:24 -04:00
Yu Zhong
fca1aaa9b5 fix(core): force overwrite additionalProperties to False in strict mode (#32879)
# Description
This PR fixes a bug in _recursive_set_additional_properties_false used
in function_calling.convert_to_openai_function.

Previously, schemas with "additionalProperties=True" were not correctly
overridden when strict validation was expected, which could lead to
invalid OpenAI function schemas.

The updated implementation ensures that:
- Any schema with "additionalProperties" already set will now be forced
to False under strict mode.
- Recursive traversal of properties, items, and anyOf is preserved.
- Function signature remains unchanged for backward compatibility.
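
A minimal sketch of the forced overwrite plus recursive traversal (illustrative; the real helper is `_recursive_set_additional_properties_false` and may differ in detail):

```python
from typing import Any


def set_additional_properties_false(schema: dict[str, Any]) -> None:
    # Force-overwrite, even when Pydantic 2.11+ emitted `True` for
    # dict/Any fields (previously only missing keys were filled in).
    if schema.get("type") == "object" or "properties" in schema:
        schema["additionalProperties"] = False
    for prop in schema.get("properties", {}).values():
        if isinstance(prop, dict):
            set_additional_properties_false(prop)
    items = schema.get("items")
    if isinstance(items, dict):
        set_additional_properties_false(items)
    for variant in schema.get("anyOf", []):
        if isinstance(variant, dict):
            set_additional_properties_false(variant)
```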

# Issue
When using tool calling in OpenAI structured output strict mode
(strict=True), the request fails with a 400 error ("Invalid schema for
response_format XXXXX: 'additionalProperties' is required to be supplied
and to be false") for any parameter that contains a dict type, because
OpenAI requires additionalProperties to be set to False.
Several PRs have tried to resolve the issue.
- PR #25169 introduced _recursive_set_additional_properties_false to
recursively set additionalProperties=False.
- PR #26287 fixed handling of empty parameter tools for OpenAI function
generation.
- PR #30971 added support for Union type arguments in strict mode of
OpenAI function calling / structured output.

Despite these improvements, since Pydantic 2.11 Pydantic always adds
`additionalProperties: True` for arbitrary dictionary schemas (`dict` or
`Any`; see https://pydantic.dev/articles/pydantic-v2-11-release#changes).
Schemas that already had additionalProperties=True in such cases were
not being overridden, which this PR addresses to ensure strict mode
behaves correctly in all cases.

# Dependencies
No Changes

---------

Co-authored-by: Zhong, Yu <yzhong@freewheel.com>
2025-09-11 11:02:12 -04:00
Jonathan Paserman
af17774186 docs: add MLflow tracking and evaluation cookbook (#32667)
This PR adds a new cookbook demonstrating how to build a RAG pipeline
with LangChain and track + evaluate it using MLflow.
Currently not much documentation on LangChain MLflow integration, hope
this can help folks trying to monitor and evaluate their LangChain
applications.

- ArXiv document loader 
- In Memory vector store
- LCEL rag pipeline
- MLflow tracing
- MLflow evaluation

Issue:
N/A

Dependencies:
N/A
2025-09-10 22:55:28 -04:00
chen-assert
d72da29c0b docs: Fix classification notebook small mistake (#32636)
Fixes some minor issues in the Classification notebook, where some code
still uses a hardcoded OpenAI model instead of the selected chat model.

Specifically, on page [Classify Text into
Labels](https://python.langchain.com/docs/tutorials/classification/)

We select a chat model earlier and initialize it via init_chat_model
with our chosen model.
<img width="1262" height="576" alt="image"
src="https://github.com/user-attachments/assets/14eb436b-d2ef-4074-96d8-71640a13c0f7"
/>

But the following sample code still uses the hard-coded OpenAI model,
which in my case is unrunnable (no OpenAI API key).
<img width="1263" height="543" alt="image"
src="https://github.com/user-attachments/assets/d13846aa-1c4b-4dee-b9c1-c66570ba3461"
/>
2025-09-10 22:43:44 -04:00
Amit Biswas
653b0908af docs: update Confident callback integration and examples (#32458)
**Description:**
Updates the Confident AI integration documentation to use modern
patterns and improve code quality. This change:
- Replaces deprecated `DeepEvalCallbackHandler` with the new
`CallbackHandler` from `deepeval.integrations.langchain`
- Updates installation and authentication instructions to match current
best practices
- Adds modern integration examples using LangChain's latest patterns
- Removes deprecated metrics and outdated code examples
- Updates code samples to follow current best practices

The changes make the documentation more maintainable and ensure users
follow the recommended integration patterns.

**Issue:** Fixes #32444

**Dependencies:**
- deepeval
- langchain
- langchain-openai

**Twitter handle:** @Muwinuddin

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-10 22:43:31 -04:00
GDanksAnchor
eb77da7de5 docs: add name title for Anchor Browser (#32512)
# Description
Change the sidebar name from anchor_browser to Anchor Browser.

# Issue
Anchor Browser sidebar name looks unattractive.
2025-09-10 22:40:37 -04:00
Tianyu Chen
9c93439a01 docs: add Linux quick setup method for JaguarDB (#32520)
Description:
Added "Method Two: Quick Setup (Linux)" section to prerequisites,
providing a curl-based installation method for deploying JaguarDB
without Docker. Retained original Docker setup instructions for
flexibility.
2025-09-10 22:36:01 -04:00
Marco Vinciguerra
64fe1e9a80 docs: update scrapegraph.ipynb (#32617)
Updated the ScrapeGraphAI notebook to cover the new ScrapeGraphAI tool.
2025-09-10 22:33:57 -04:00
chen-assert
e4a90490c3 docs: Fix agents tutorials parameter missing (#32639)
Fixes a minor issue in the Agents tutorial notebook, where a config
parameter is missing.

Specifically, on page [Build an Agent#Streaming
tokens](https://python.langchain.com/docs/tutorials/agents/#streaming-tokens)

These code snippets cannot run without the config parameter, which
appears to have been omitted.
<img width="1318" height="691" alt="image"
src="https://github.com/user-attachments/assets/54ce2833-9499-41bb-9de0-d5f9beba9ef9"
/>
2025-09-10 22:27:24 -04:00
dwelch-spike
80776b80f0 docs: remove aerospike vector store (#32726)

- **Description:** Aerospike Vector Store has been retired. It is no
longer supported, so it should no longer be documented on the LangChain
site.

- **Add tests and docs**: Removes docs for retired Aerospike vector
store.

- **Lint and test**: NA
2025-09-10 22:19:43 -04:00
may
2c2bab93fc docs: add example for reusing an existing collection (#32774)
### Description
Added a short section to the Weaviate integration docs showing how to
connect to an existing collection (reuse an index) with
`WeaviateVectorStore`. This helps clarify required parameters
(`index_name`, `text_key`) when loading a pre-existing store, which was
previously missing.

### Issue
Fixes langchain-ai/langchain-weaviate#197

### Dependencies
None
2025-09-10 22:16:46 -04:00
Mateusz Świtała
221c96e7b4 docs: fix import path in WatsonxToolkit after releasing langchain-ibm 0.3.17 (#32746)
**Description:** Fixes the import path for `WatsonxToolkit` in examples
after releasing `langchain-ibm==0.3.17`.

2025-09-10 22:14:43 -04:00
yrk111222
364465bd11 docs: update modelscope.mdx (#32823)
### Description
This PR updates usage examples in the `modelscope.mdx` file,
specifically replacing `ModelScopeLLM` with `ModelScopeEndpoint`.
### Relevant PR
The relevant PR link is:
https://github.com/langchain-ai/langchain/pull/28941
2025-09-10 22:07:19 -04:00
Mason Daugherty
7b874da9b2 fix(docs): text-embedding-004 -> gemini-embedding-001 (#32596)
`text-embedding-004` will be discontinued
2025-09-10 21:47:45 -04:00
Mason Daugherty
8e213c9f1a fix(core): AsyncCallbackHandler docstring cleanup (#32897)
plus IDE warning fixes
2025-09-10 21:31:45 -04:00
Yash Vishwanath Tobre
a8828b1bda fix(core): raise OutputParserException for non-dict JSON outputs (#32236)
**Description:**
Raise a more descriptive OutputParserException when JSON parsing results
in a non-dict type. This improves debugging and aligns behavior with
expectations when using expected_keys.
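
A hedged sketch of the failure mode being improved (message wording assumed, not taken from the PR):

```python
from langchain_core.exceptions import OutputParserException

parsed = ["not", "a", "dict"]  # e.g. the model returned a JSON array
if not isinstance(parsed, dict):
    raise OutputParserException(
        f"Expected a JSON object (dict), got {type(parsed).__name__}: {parsed!r}"
    )
```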

**Issue:**
Fixes #32233

**Twitter handle:**
@yashvtobre

**Testing:**

- Ran make format and make lint from the root directory; both passed
cleanly.
- Attempted make test but no such target exists in the root Makefile.
- Executed tests directly via pytest targeting the relevant test file,
confirming all tests pass except for unrelated async test failures
outside the scope of this change.

**Notes:**

- No additional dependencies introduced.
- Changes are backward compatible and isolated within the output parser
module.

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-09-10 20:57:09 -04:00
Mason Daugherty
7a158c7f1c revert: "chore: remove ruff target-version" (#32895)
Reverts langchain-ai/langchain#32880

Not needed at the moment, will do when finishing v1
2025-09-10 20:56:48 -04:00
Daniel Barker
25c34bd9b2 feat(core): allow custom Mermaid URL (#32831)
- **Description:** Currently,
`langchain_core.runnables.graph_mermaid.py` is hardcoded to use
mermaid.ink to render graph diagrams. It would be nice to allow users to
specify a custom URL, e.g. for self-hosted instances of the Mermaid
server.
- **Issue:** [Langchain Forum: allow custom mermaid API
URL](https://forum.langchain.com/t/feature-request-allow-custom-mermaid-api-url/1472)
  - **Dependencies:** None

- [X] **Add tests and docs**: Added unit tests using mock requests.
- [X] **Lint and test**: Run `make format`, `make lint` and `make test`.

Minimal example using the feature:

```python
import os
import operator
from pathlib import Path
from typing import Any, Annotated, TypedDict

from langgraph.graph import StateGraph

class State(TypedDict):
    messages: Annotated[list[dict[str, Any]], operator.add]

def hello_node(state: State) -> State:
    return {"messages": [{"role": "assistant", "content": "pong!"}]}

builder = StateGraph(State)
builder.add_node("hello_node", hello_node)
builder.add_edge("__start__", "hello_node")
builder.add_edge("hello_node", "__end__")

graph = builder.compile()

# Run graph
output = graph.invoke({"messages": [{"role": "user", "content": "ping?"}]})

# Draw graph
Path("graph.png").write_bytes(graph.get_graph().draw_mermaid_png(base_url="https://custom-mermaid.ink"))
```

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-10 17:14:50 -04:00
Mason Daugherty
38001699d5 fix(anthropic): remove unneeded beta flags (#32893)
- Beta isn't needed for search result tests anymore
- Add TODO for other tests to come back when generally available
- Regenerate remote MCP snapshot after some testing (now the same, but
fresher)
- Bump deps
2025-09-10 20:47:13 +00:00
Mason Daugherty
3da0377c02 fix(anthropic): update ChatAnthropic model in tests/doc (#32892)
from `'claude-3-5-sonnet-latest'` to `'claude-3-5-haiku-latest'` since
sonnet is deprecated
2025-09-10 16:44:04 -04:00
JADAVA VINEETH KUMAR RAO
0abf82a45a fix(openai): ainvoke uses async _aget_response; add async tests (#32459) 2025-09-10 15:52:15 -04:00
Jonathan Hill
2fed177d0b fix(core): preserve ToolMessage.status field in convert_to_messages (#32840) 2025-09-10 15:49:39 -04:00
Aasish
9c7d262ff4 fix(openai): update AzureOpenAIEmbeddings validation logic for openai_api_base (#31782) 2025-09-10 14:53:30 -04:00
ccurme
67e651b592 fix(infra): fix min version check (#32891)
Should no longer require `langchain-core>=(version in monorepo)`
2025-09-10 14:04:26 -04:00
Shibayan003
f08dfb6f49 test: Add failing test for BaseCallbackManager.merge (#32040)
This pull request introduces a failing unit test to reproduce the bug
reported in issue #32028.
The test asserts the expected behavior: `BaseCallbackManager.merge()`
should combine `handlers` and `inheritable_handlers` independently,
without mixing them. This test will fail on the current codebase and is
intended to guide the fix and prevent future regressions.
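
A sketch of the independence property the test asserts, with handler classes chosen purely for illustration (the actual test in the PR may differ):

```python
from langchain_core.callbacks import BaseCallbackManager, StdOutCallbackHandler

h1, h2 = StdOutCallbackHandler(), StdOutCallbackHandler()
m1 = BaseCallbackManager(handlers=[h1], inheritable_handlers=[])
m2 = BaseCallbackManager(handlers=[], inheritable_handlers=[h2])

merged = m1.merge(m2)
# A plain handler must not become inheritable through merge, and an
# inheritable handler must remain inheritable.
assert h1 not in merged.inheritable_handlers
assert h2 in merged.inheritable_handlers
```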

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-10 13:56:18 -04:00
ccurme
450870c9ac release(qdrant): 0.2.1 (#32889) 2025-09-10 13:21:16 -04:00
Zhou Jing
10dfeea110 fix(qdrant): allow as_retriever to work without embeddings in SPARSE mode (#32757) 2025-09-10 13:08:50 -04:00
ccurme
34ecb92178 release(openai): 0.3.33 (#32887) 2025-09-10 11:53:26 -04:00
ccurme
49b3918c26 fix(infra): update scheduled test workflow following uv migration in langchain-google (#32886) 2025-09-10 11:30:55 -04:00
Christophe Bornet
12921a94c5 test(core): reactivate commented tests in test_indexing (#32882)
* These tests now pass
* Commenting them is a [ruff
ERA](https://docs.astral.sh/ruff/rules/commented-out-code/) violation
2025-09-10 11:14:14 -04:00
Alexey Bondarenko
181bb91ce0 fix(ollama): Fix handling message content lists (#32881)
The Ollama chat model adapter does not support all of the possible
message content formats, which leads to the adapter crashing on some
messages from other models (e.g. Gemini 2.5 Flash).

These changes should fix one known scenario: when `content` is a list
containing a string.
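
For example, a message like the following (shape assumed from the description) would previously crash the adapter:

```python
from langchain_core.messages import AIMessage

# `content` as a list containing a plain string, as some providers emit:
msg = AIMessage(content=["Hello from another provider"])
```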
2025-09-10 11:13:28 -04:00
Christophe Bornet
b274416441 chore: remove ruff target-version (#32880)
This is not needed anymore since `requires-python` was added when moving
to `uv`.
2025-09-10 11:12:30 -04:00
ccurme
389a781aa0 fix(infra): exclude pre-releases from latest version checks in core release workflow (#32883) 2025-09-10 10:35:24 -04:00
William FH
443f0ccb0e release(core): 0.3.76 (#32877) 2025-09-10 14:10:44 +00:00
Lauren Hirata Singh
00e547c311 docs: update banner with docs deprecation notice (#32871) 2025-09-10 00:35:43 +00:00
Sydney Runkle
d464d3089b chore: redirect docs template -> docs repo (#32872) 2025-09-09 18:24:22 -04:00
William FH
f1d44d0f9d fix(core): honor enabled=false in nested tracing (#31986)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-09-09 13:12:17 -07:00
Christophe Bornet
a35ee49f37 chore(langchain): enable ruff docstring-code-format in langchain (#32858) 2025-09-09 15:00:38 -04:00
Christophe Bornet
352ff363ca chore(cli): remove ruff exclusion of templates (#32864) 2025-09-09 14:56:47 -04:00
Christophe Bornet
256a0b5f2f chore(langchain): add ruff rule BLE (#32868)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-09 18:52:53 +00:00
ccurme
937087a29c release(groq): 0.3.8 (#32870) 2025-09-09 14:39:02 -04:00
Jan Z
08bf4c321f feat(groq): add support for json_schema (#32396) 2025-09-09 18:30:07 +00:00
Mason Daugherty
4c6af2d1b2 fix(openai): structured output (#32551) 2025-09-09 11:37:50 -04:00
Christophe Bornet
ee268db1c5 feat(standard-tests): add a property to skip relevant tests if the vector store doesn't support get_by_ids() (#32633) 2025-09-09 11:37:23 -04:00
Zhou Jing
dcc517b187 fix(core): ensure InjectedToolCallId always overrides LLM-generated values (#32766) 2025-09-09 11:25:52 -04:00
Mason Daugherty
c124e67325 chore(docs): update package READMEs (#32869)
- Fix badges
- Focus on agents
- Cut down fluff
2025-09-09 14:50:32 +00:00
Christophe Bornet
699a5d06d1 chore(langchain): add ruff rule ERA (#32867) 2025-09-09 10:13:18 -04:00
Christophe Bornet
00f699c60d chore(core): cleanup pyproject.toml (#32865) 2025-09-09 10:12:18 -04:00
Christophe Bornet
e36e25fe2f feat(langchain): support PEP604 ( | union) in tool node error handlers (#32861)
This allows to use PEP604 syntax for `ToolNode` error handlers
```python
def error_handler(e: ValueError | ToolException) -> str:
    return "error"

ToolNode(my_tool, handle_tool_errors=error_handler).invoke(...)
```
Without this change, this fails with `AttributeError: 'types.UnionType'
object has no attribute '__mro__'`
2025-09-09 10:11:12 -04:00
Christophe Bornet
cc3b5afe52 fix(huggingface): fix typing in test_standard (#32863) 2025-09-09 10:05:41 -04:00
Gal Bloch
428c2ee6c5 fix(langchain): preserve supplied llm in FlareChain.from_llm (#32847) 2025-09-09 13:41:23 +00:00
Christophe Bornet
714f74a847 refactor(core): improve beta decorator (#32505)
This is better than using a subclass as returning a `property` works
with `ClassWithBetaMethods.beta_property.__doc__`

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 18:06:48 -04:00
Christophe Bornet
c3b28c769a chore(langchain): add ruff rules D (except D100 and D104) (#31994)
See https://docs.astral.sh/ruff/rules/#pydocstyle-d

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 21:47:22 +00:00
Christophe Bornet
017348b27c chore(langchain): add ruff rule E501 in langchain_v1 (#32812)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 17:28:14 -04:00
Christophe Bornet
1e101ae9a2 chore(langchain): add ruff rules N (#32098)
See https://docs.astral.sh/ruff/rules/#pep8-naming-n

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 17:27:43 -04:00
Christophe Bornet
fe6c415c9f chore(langchain): add ruff rule UP007 in langchain_v1 (#32811)
Done by autofix
2025-09-08 17:26:00 -04:00
Christophe Bornet
54c2419a4e chore(langchain): enable ruff docstring-code-format in langchain_v1 (#32855) 2025-09-08 16:51:18 -04:00
Mason Daugherty
35e9d36b0e fix(standard-tests): ensure non-negative token counts in usage metadata assertions (#32593) 2025-09-08 16:49:26 -04:00
Christophe Bornet
8b90eae455 chore(text-splitters): enable ruff docstring-code-format (#32854) 2025-09-08 16:40:11 -04:00
Christophe Bornet
05d14775f2 chore(standard-tests): enable ruff docstring-code-format (#32852) 2025-09-08 16:39:53 -04:00
PieterKok-jaam
33c7f230e0 feat(core): add id field to Document passed to filter for InMemoryVectorStore similarity search (#32688)
Added an id field to the Document passed to filter for
InMemoryVectorStore similarity search. This allows filtering by Document
id and brings the input to the filter in line with the result returned
by the vector similarity search.
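
A hedged sketch of what this enables (fake embeddings used so the snippet is self-contained; filter callables are accepted by `InMemoryVectorStore`):

```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=8))
store.add_documents([
    Document(id="doc-1", page_content="alpha"),
    Document(id="doc-2", page_content="beta"),
])

hits = store.similarity_search(
    "alpha",
    k=2,
    filter=lambda doc: doc.id == "doc-1",  # doc.id is now populated for the filter
)
```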

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-08 20:39:18 +00:00
Mason Daugherty
97dd7628d2 chore: update badges (#32851)
- stars badge redundant (look at the top of the page)
- remove version badge since we have many pkgs (and it was only showing
core) -- also, just look at the releases tab to the right of the readme
2025-09-08 20:06:59 +00:00
Adithya1617
f5bd00d1f1 feat(core): support AWS Bedrock document content blocks in msg_content_output (#32799) 2025-09-08 19:40:28 +00:00
Sadra Barikbin
3486d6c74d feat(core): support for adding PromptTemplates with formats other than f-string (#32253)
Allow adding `PromptTemplate`s with formats other than `f-string`. Fixes
#32151
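
A hedged sketch of the new capability (mustache format shown; exact semantics and the error previously raised are per the PR, not verified here):

```python
from langchain_core.prompts import PromptTemplate

left = PromptTemplate.from_template("Hello {{name}}, ", template_format="mustache")
right = PromptTemplate.from_template("welcome to {{place}}!", template_format="mustache")

combined = left + right  # previously only f-string templates could be added
print(combined.format(name="Ada", place="LangChain"))
# -> Hello Ada, welcome to LangChain!
```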

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-08 19:16:54 +00:00
Stefano Lottini
390606c155 fix(standard-tests): standard vectorstore tests accept out-of-order get_by_ids (#32821)
- **Description:** The vectorstore standard-test mistakenly assumes that
the store's `get_by_ids` respects the order of the provided `ids`. This
is not the case (as the base class docstring states). This PR fixes
those tests that would fail otherwise (see issue #32820 for details,
repro and all). Fixes #32820
- **Issue:** Fixes #32820
- **Dependencies:** none

Co-authored-by: Stefano Lottini <stefano.lottini@ibm.com>
2025-09-08 14:22:14 -04:00
Christophe Bornet
cc98fb9bee chore(core): add ruff rule PLC0415 (#32351)
See https://docs.astral.sh/ruff/rules/import-outside-top-level/

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 14:15:04 -04:00
Christophe Bornet
16420cad71 chore(core): fix some pydocs to use google-style (#32764)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 17:52:17 +00:00
Christophe Bornet
01fdeede50 chore(core): fix some ruff preview rules (#32785)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:55:20 +00:00
Christophe Bornet
f4e83e0ad8 chore(core): fix some docstrings (from DOC preview rule) (#32833)
* Add `Raises` sections
* Add `Returns` sections
* Add `Yields` sections

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:44:15 +00:00
dependabot[bot]
4024d47412 chore(infra): bump actions/setup-python from 5 to 6 (#32842)
Bumps [actions/setup-python](https://github.com/actions/setup-python)
from 5 to 6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/setup-python/releases">actions/setup-python's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.0</h2>
<h2>What's Changed</h2>
<h3>Breaking Changes</h3>
<ul>
<li>Upgrade to node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/1164">actions/setup-python#1164</a></li>
</ul>
<p>Make sure your runner is on version v2.327.1 or later to ensure
compatibility with this release. <a
href="https://github.com/actions/runner/releases/tag/v2.327.1">See
Release Notes</a></p>
<h3>Enhancements:</h3>
<ul>
<li>Add support for <code>pip-version</code> by <a
href="https://github.com/priyagupta108"><code>@​priyagupta108</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1129">actions/setup-python#1129</a></li>
<li>Enhance reading from .python-version by <a
href="https://github.com/krystof-k"><code>@​krystof-k</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/787">actions/setup-python#787</a></li>
<li>Add version parsing from Pipfile by <a
href="https://github.com/aradkdj"><code>@​aradkdj</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/1067">actions/setup-python#1067</a></li>
</ul>
<h3>Bug fixes:</h3>
<ul>
<li>Clarify pythonLocation behaviour for PyPy and GraalPy in environment
variables by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1183">actions/setup-python#1183</a></li>
<li>Change missing cache directory error to warning by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1182">actions/setup-python#1182</a></li>
<li>Add Architecture-Specific PATH Management for Python with --user
Flag on Windows by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1122">actions/setup-python#1122</a></li>
<li>Include python version in PyPy python-version output by <a
href="https://github.com/cdce8p"><code>@​cdce8p</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/1110">actions/setup-python#1110</a></li>
<li>Update docs: clarification on pip authentication with setup-python
by <a
href="https://github.com/priya-kinthali"><code>@​priya-kinthali</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1156">actions/setup-python#1156</a></li>
</ul>
<h3>Dependency updates:</h3>
<ul>
<li>Upgrade idna from 2.9 to 3.7 in /<strong>tests</strong>/data by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-python/pull/843">actions/setup-python#843</a></li>
<li>Upgrade form-data to fix critical vulnerabilities <a
href="https://redirect.github.com/actions/setup-python/issues/182">#182</a>
&amp; <a
href="https://redirect.github.com/actions/setup-python/issues/183">#183</a>
by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1163">actions/setup-python#1163</a></li>
<li>Upgrade setuptools to 78.1.1 to fix path traversal vulnerability in
PackageIndex.download by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1165">actions/setup-python#1165</a></li>
<li>Upgrade actions/checkout from 4 to 5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-python/pull/1181">actions/setup-python#1181</a></li>
<li>Upgrade <code>@​actions/tool-cache</code> from 2.0.1 to 2.0.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-python/pull/1095">actions/setup-python#1095</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/krystof-k"><code>@​krystof-k</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/setup-python/pull/787">actions/setup-python#787</a></li>
<li><a href="https://github.com/cdce8p"><code>@​cdce8p</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/setup-python/pull/1110">actions/setup-python#1110</a></li>
<li><a href="https://github.com/aradkdj"><code>@​aradkdj</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/setup-python/pull/1067">actions/setup-python#1067</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-python/compare/v5...v6.0.0">https://github.com/actions/setup-python/compare/v5...v6.0.0</a></p>
<h2>v5.6.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Workflow updates related to Ubuntu 20.04 by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1065">actions/setup-python#1065</a></li>
<li>Fix for Candidate Not Iterable Error by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1082">actions/setup-python#1082</a></li>
<li>Upgrade semver and <code>@​types/semver</code> by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/1091">actions/setup-python#1091</a></li>
<li>Upgrade prettier from 2.8.8 to 3.5.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/1046">actions/setup-python#1046</a></li>
<li>Upgrade ts-jest from 29.1.2 to 29.3.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/1081">actions/setup-python#1081</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-python/compare/v5...v5.6.0">https://github.com/actions/setup-python/compare/v5...v5.6.0</a></p>
<h2>v5.5.0</h2>
<h2>What's Changed</h2>
<h3>Enhancements:</h3>
<ul>
<li>Support free threaded Python versions like '3.13t' by <a
href="https://github.com/colesbury"><code>@​colesbury</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/973">actions/setup-python#973</a></li>
<li>Enhance Workflows: Include ubuntu-arm runners, Add e2e Testing for
free threaded and Upgrade <code>@​action/cache</code> from 4.0.0 to
4.0.3 by <a
href="https://github.com/priya-kinthali"><code>@​priya-kinthali</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1056">actions/setup-python#1056</a></li>
<li>Add support for .tool-versions file in setup-python by <a
href="https://github.com/mahabaleshwars"><code>@​mahabaleshwars</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1043">actions/setup-python#1043</a></li>
</ul>
<h3>Bug fixes:</h3>
<ul>
<li>Fix architecture for pypy on Linux ARM64 by <a
href="https://github.com/mayeut"><code>@​mayeut</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/1011">actions/setup-python#1011</a>
This update maps arm64 to aarch64 for Linux ARM64 PyPy
installations.</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e797f83bcb"><code>e797f83</code></a>
Upgrade to node 24 (<a
href="https://redirect.github.com/actions/setup-python/issues/1164">#1164</a>)</li>
<li><a
href="3d1e2d2ca0"><code>3d1e2d2</code></a>
Revert &quot;Enhance cache-dependency-path handling to support files
outside the w...</li>
<li><a
href="65b071217a"><code>65b0712</code></a>
Clarify pythonLocation behavior for PyPy and GraalPy in environment
variables...</li>
<li><a
href="5b668cf765"><code>5b668cf</code></a>
Bump actions/checkout from 4 to 5 (<a
href="https://redirect.github.com/actions/setup-python/issues/1181">#1181</a>)</li>
<li><a
href="f62a0e252f"><code>f62a0e2</code></a>
Change missing cache directory error to warning (<a
href="https://redirect.github.com/actions/setup-python/issues/1182">#1182</a>)</li>
<li><a
href="9322b3ca74"><code>9322b3c</code></a>
Upgrade setuptools to 78.1.1 to fix path traversal vulnerability in
PackageIn...</li>
<li><a
href="fbeb884f69"><code>fbeb884</code></a>
Bump form-data to fix critical vulnerabilities <a
href="https://redirect.github.com/actions/setup-python/issues/182">#182</a>
&amp; <a
href="https://redirect.github.com/actions/setup-python/issues/183">#183</a>
(<a
href="https://redirect.github.com/actions/setup-python/issues/1163">#1163</a>)</li>
<li><a
href="03bb6152f4"><code>03bb615</code></a>
Bump idna from 2.9 to 3.7 in /<strong>tests</strong>/data (<a
href="https://redirect.github.com/actions/setup-python/issues/843">#843</a>)</li>
<li><a
href="36da51d563"><code>36da51d</code></a>
Add version parsing from Pipfile (<a
href="https://redirect.github.com/actions/setup-python/issues/1067">#1067</a>)</li>
<li><a
href="3c6f142cc0"><code>3c6f142</code></a>
update documentation (<a
href="https://redirect.github.com/actions/setup-python/issues/1156">#1156</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/setup-python/compare/v5...v6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/setup-python&package-manager=github_actions&previous-version=5&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 11:27:54 -04:00
Christophe Bornet
f589168411 refactor(core): use pytest style in TestGetBufferString (#32786)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:16:13 +00:00
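A minimal sketch of what such a pytest-style refactor looks like (the test bodies below are illustrative assumptions, not the PR's actual tests):

```python
from langchain_core.messages import HumanMessage, get_buffer_string


# pytest style: plain functions with bare asserts, no TestCase class
def test_get_buffer_string_empty() -> None:
    assert get_buffer_string([]) == ""


def test_get_buffer_string_human_message() -> None:
    # The default prefix for human messages is "Human"
    assert get_buffer_string([HumanMessage(content="hi")]) == "Human: hi"
```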
Christophe Bornet
5840dad40b chore(core): enable ruff docstring-code-format (#32834)
See https://docs.astral.sh/ruff/settings/#format_docstring-code-format

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:13:50 +00:00
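As a small illustration (an assumed example, not code from the PR), `docstring-code-format` makes ruff format code embedded in docstrings, such as doctests:

```python
def multiply(a: int, b: int) -> int:
    """Multiply two integers.

    With ``docstring-code-format`` enabled, ruff reformats the code in
    doctests and fenced blocks inside docstrings, e.g. normalizing
    ``multiply( 2,3 )`` to the form below:

    >>> multiply(2, 3)
    6
    """
    return a * b
```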
Christophe Bornet
e3b6c9bb66 chore(core): fix some mypy warn_unreachable issues (#32560)
Found by setting `warn_unreachable: true` in mypy.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:02:08 +00:00
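A hypothetical example of the kind of code this setting flags (not taken from the PR):

```python
def describe(x: int) -> str:
    if isinstance(x, int):
        return "an int"
    # With `warn_unreachable = true`, mypy reports this statement as
    # unreachable: `x` is already narrowed to `int` by the annotation.
    return "not an int"
```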
Christophe Bornet
c672590f42 chore(standard-tests): select ALL rules with exclusions (#31937)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 14:57:47 +00:00
Christophe Bornet
323729915a chore(standard-tests): add mypy strict checking (#32384)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 10:50:38 -04:00
Christophe Bornet
0c3e8ccd0e chore(text-splitters): select ALL rules with exclusions (#32325)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 14:46:09 +00:00
Christophe Bornet
20401df25d chore(cli): fix some DOC rules (preview) (#32839)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 14:36:22 +00:00
dependabot[bot]
e0aaaccb61 chore(infra): bump aws-actions/configure-aws-credentials from 4 to 5 (#32841)
Bumps
[aws-actions/configure-aws-credentials](https://github.com/aws-actions/configure-aws-credentials)
from 4 to 5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/aws-actions/configure-aws-credentials/releases">aws-actions/configure-aws-credentials's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.0</h2>
<h2><a
href="https://github.com/aws-actions/configure-aws-credentials/compare/v4.3.1...v5.0.0">5.0.0</a>
(2025-09-03)</h2>
<h3>⚠ BREAKING CHANGES</h3>
<ul>
<li>Cleanup input handling. Changes invalid boolean input behavior (see
<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1445">#1445</a>)</li>
</ul>
<h3>Features</h3>
<ul>
<li>add skip OIDC option (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1458">#1458</a>)
(<a
href="8c45f6b081">8c45f6b</a>)</li>
<li>Cleanup input handling. Changes invalid boolean input behavior (see
<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1445">#1445</a>)
(<a
href="74b3e27aa8">74b3e27</a>)</li>
<li>support account id allowlist (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1456">#1456</a>)
(<a
href="c4be498953">c4be498</a>)</li>
</ul>
<h2>v4.3.1</h2>
<h2><a
href="https://github.com/aws-actions/configure-aws-credentials/compare/v4.3.0...v4.3.1">4.3.1</a>
(2025-08-04)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>update readme to 4.3.1 (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1424">#1424</a>)
(<a
href="be2e7ad815">be2e7ad</a>)</li>
</ul>
<h2>v4.3.0</h2>
<h2><a
href="https://github.com/aws-actions/configure-aws-credentials/compare/v4.3.0...v4.3.0">4.3.0</a>
(2025-08-04)</h2>
<p>NOTE: This release tag originally pointed to
59b441846ad109fa4a1549b73ef4e149c4bfb53b, but a critical bug was
discovered shortly after publishing. We updated this tag to
d0834ad3a60a024346910e522a81b0002bd37fea to prevent anyone using the
4.3.0 tag from encountering the bug, and we published 4.3.1 to allow
workflows to auto update correctly.</p>
<h3>Features</h3>
<ul>
<li>dependency update and feature cleanup (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1414">#1414</a>)
(<a
href="59489ba544">59489ba</a>),
closes <a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1062">#1062</a>
<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1191">#1191</a></li>
<li>Optional environment variable output (<a
href="c3b3ce61b0">c3b3ce6</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>docs:</strong> readme samples versioning (<a
href="5b3c895046">5b3c895</a>)</li>
<li>the wrong example region for China partition in README (<a
href="37fe9a740b">37fe9a7</a>)</li>
<li>properly set proxy environment variable (<a
href="cbea70821e">cbea708</a>)</li>
</ul>
<h3>Miscellaneous Chores</h3>
<ul>
<li>release 4.3.0 (<a
href="3f7c218721">3f7c218</a>)</li>
</ul>
<h2>v4.2.1</h2>
<h2><a
href="https://github.com/aws-actions/configure-aws-credentials/compare/v4.2.0...v4.2.1">4.2.1</a>
(2025-05-14)</h2>
<h3>Bug Fixes</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/aws-actions/configure-aws-credentials/blob/main/CHANGELOG.md">aws-actions/configure-aws-credentials's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/aws-actions/configure-aws-credentials/compare/v4.3.0...v4.3.1">4.3.1</a>
(2025-08-04)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>update readme to 4.3.1 (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1424">#1424</a>)
(<a
href="be2e7ad815">be2e7ad</a>)</li>
</ul>
<h2><a
href="https://github.com/aws-actions/configure-aws-credentials/compare/v4.2.1...v4.3.0">4.3.0</a>
(2025-08-04)</h2>
<h3>Features</h3>
<ul>
<li>dependency update and feature cleanup (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1414">#1414</a>)
(<a
href="59489ba544">59489ba</a>),
closes <a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1062">#1062</a>
<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1191">#1191</a></li>
<li>Optional environment variable output (<a
href="c3b3ce61b0">c3b3ce6</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>docs:</strong> readme samples versioning (<a
href="5b3c895046">5b3c895</a>)</li>
<li>the wrong example region for China partition in README (<a
href="37fe9a740b">37fe9a7</a>)</li>
<li>properly set proxy environment variable (<a
href="cbea70821e">cbea708</a>)</li>
</ul>
<h3>Miscellaneous Chores</h3>
<ul>
<li>release 4.3.0 (<a
href="3f7c218721">3f7c218</a>)</li>
</ul>
<h2><a
href="https://github.com/aws-actions/configure-aws-credentials/compare/v4.2.0...v4.2.1">4.2.1</a>
(2025-05-14)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>ensure explicit inputs take precedence over environment variables
(<a
href="e56e6c4038">e56e6c4</a>)</li>
<li>prioritize explicit inputs over environment variables (<a
href="df9c8fed6b">df9c8fe</a>)</li>
</ul>
<h2><a
href="https://github.com/aws-actions/configure-aws-credentials/compare/v4.1.0...v4.2.0">4.2.0</a>
(2025-05-06)</h2>
<h3>Features</h3>
<ul>
<li>add Expiration field to Outputs (<a
href="a4f326760c">a4f3267</a>)</li>
<li>Document role-duration-seconds range (<a
href="5a0cf0167f">5a0cf01</a>)</li>
<li>support action inputs as environment variables (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1338">#1338</a>)
(<a
href="2c168adcae">2c168ad</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>make sure action builds, also fix dependabot autoapprove (<a
href="c401b8a98c">c401b8a</a>)</li>
<li>role chaining on multiple runs (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1340">#1340</a>)
(<a
href="9e38641911">9e38641</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a03048d875"><code>a03048d</code></a>
chore(main): release 5.0.0 (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1451">#1451</a>)</li>
<li><a
href="337f510212"><code>337f510</code></a>
chore: Fix markdown link formatting in README.md (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1466">#1466</a>)</li>
<li><a
href="f001d79eaa"><code>f001d79</code></a>
chore: update README with versioning (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1465">#1465</a>)</li>
<li><a
href="cf5f2acba3"><code>cf5f2ac</code></a>
chore: Update dist</li>
<li><a
href="b394bdd9f0"><code>b394bdd</code></a>
chore(deps-dev): bump <code>@​aws-sdk/credential-provider-env</code> (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1463">#1463</a>)</li>
<li><a
href="b632c0b5e4"><code>b632c0b</code></a>
chore(deps-dev): bump memfs from 4.38.1 to 4.38.2 (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1462">#1462</a>)</li>
<li><a
href="978e44aa36"><code>978e44a</code></a>
chore: Update dist</li>
<li><a
href="c4be498953"><code>c4be498</code></a>
feat: support account id allowlist (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1456">#1456</a>)</li>
<li><a
href="c5a43c32e1"><code>c5a43c3</code></a>
chore: Update dist</li>
<li><a
href="8c45f6b081"><code>8c45f6b</code></a>
feat: add skip OIDC option (<a
href="https://redirect.github.com/aws-actions/configure-aws-credentials/issues/1458">#1458</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/aws-actions/configure-aws-credentials/compare/v4...v5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=aws-actions/configure-aws-credentials&package-manager=github_actions&previous-version=4&new-version=5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 10:18:01 -04:00
dependabot[bot]
9368ce6b07 chore(infra): bump google-github-actions/auth from 2 to 3 (#32777)
Bumps
[google-github-actions/auth](https://github.com/google-github-actions/auth)
from 2 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/google-github-actions/auth/releases">google-github-actions/auth's
releases</a>.</em></p>
<blockquote>
<h2>v3.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump to Node 24 and remove old parameters by <a
href="https://github.com/sethvargo"><code>@​sethvargo</code></a> in <a
href="https://redirect.github.com/google-github-actions/auth/pull/508">google-github-actions/auth#508</a></li>
<li>Remove hacky script by <a
href="https://github.com/sethvargo"><code>@​sethvargo</code></a> in <a
href="https://redirect.github.com/google-github-actions/auth/pull/509">google-github-actions/auth#509</a></li>
<li>Release: v3.0.0 by <a
href="https://github.com/google-github-actions-bot"><code>@​google-github-actions-bot</code></a>
in <a
href="https://redirect.github.com/google-github-actions/auth/pull/510">google-github-actions/auth#510</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/google-github-actions/auth/compare/v2...v3.0.0">https://github.com/google-github-actions/auth/compare/v2...v3.0.0</a></p>
<h2>v2.1.13</h2>
<h2>What's Changed</h2>
<ul>
<li>Update deps by <a
href="https://github.com/sethvargo"><code>@​sethvargo</code></a> in <a
href="https://redirect.github.com/google-github-actions/auth/pull/506">google-github-actions/auth#506</a></li>
<li>Release: v2.1.13 by <a
href="https://github.com/google-github-actions-bot"><code>@​google-github-actions-bot</code></a>
in <a
href="https://redirect.github.com/google-github-actions/auth/pull/507">google-github-actions/auth#507</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/google-github-actions/auth/compare/v2.1.12...v2.1.13">https://github.com/google-github-actions/auth/compare/v2.1.12...v2.1.13</a></p>
<h2>v2.1.12</h2>
<h2>What's Changed</h2>
<ul>
<li>Add retries for getIDToken by <a
href="https://github.com/sethvargo"><code>@​sethvargo</code></a> in <a
href="https://redirect.github.com/google-github-actions/auth/pull/502">google-github-actions/auth#502</a></li>
<li>Release: v2.1.12 by <a
href="https://github.com/google-github-actions-bot"><code>@​google-github-actions-bot</code></a>
in <a
href="https://redirect.github.com/google-github-actions/auth/pull/503">google-github-actions/auth#503</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/google-github-actions/auth/compare/v2.1.11...v2.1.12">https://github.com/google-github-actions/auth/compare/v2.1.11...v2.1.12</a></p>
<h2>v2.1.11</h2>
<h2>What's Changed</h2>
<ul>
<li>Update troubleshooting docs for Python by <a
href="https://github.com/sethvargo"><code>@​sethvargo</code></a> in <a
href="https://redirect.github.com/google-github-actions/auth/pull/488">google-github-actions/auth#488</a></li>
<li>Add linters by <a
href="https://github.com/sethvargo"><code>@​sethvargo</code></a> in <a
href="https://redirect.github.com/google-github-actions/auth/pull/499">google-github-actions/auth#499</a></li>
<li>Update deps by <a
href="https://github.com/sethvargo"><code>@​sethvargo</code></a> in <a
href="https://redirect.github.com/google-github-actions/auth/pull/500">google-github-actions/auth#500</a></li>
<li>Release: v2.1.11 by <a
href="https://github.com/google-github-actions-bot"><code>@​google-github-actions-bot</code></a>
in <a
href="https://redirect.github.com/google-github-actions/auth/pull/501">google-github-actions/auth#501</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/google-github-actions/auth/compare/v2.1.10...v2.1.11">https://github.com/google-github-actions/auth/compare/v2.1.10...v2.1.11</a></p>
<h2>v2.1.10</h2>
<h2>What's Changed</h2>
<ul>
<li>Declare workflow permissions by <a
href="https://github.com/sethvargo"><code>@​sethvargo</code></a> in <a
href="https://redirect.github.com/google-github-actions/auth/pull/482">google-github-actions/auth#482</a></li>
<li>Document that the OIDC token expires in 5min by <a
href="https://github.com/sethvargo"><code>@​sethvargo</code></a> in <a
href="https://redirect.github.com/google-github-actions/auth/pull/483">google-github-actions/auth#483</a></li>
<li>Release: v2.1.10 by <a
href="https://github.com/google-github-actions-bot"><code>@​google-github-actions-bot</code></a>
in <a
href="https://redirect.github.com/google-github-actions/auth/pull/484">google-github-actions/auth#484</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/google-github-actions/auth/compare/v2.1.9...v2.1.10">https://github.com/google-github-actions/auth/compare/v2.1.9...v2.1.10</a></p>
<h2>v2.1.9</h2>
<h2>What's Changed</h2>
<ul>
<li>Use our custom boolean parsing by <a
href="https://github.com/sethvargo"><code>@​sethvargo</code></a> in <a
href="https://redirect.github.com/google-github-actions/auth/pull/478">google-github-actions/auth#478</a></li>
<li>Update deps by <a
href="https://github.com/sethvargo"><code>@​sethvargo</code></a> in <a
href="https://redirect.github.com/google-github-actions/auth/pull/479">google-github-actions/auth#479</a></li>
<li>Release: v2.1.9 by <a
href="https://github.com/google-github-actions-bot"><code>@​google-github-actions-bot</code></a>
in <a
href="https://redirect.github.com/google-github-actions/auth/pull/480">google-github-actions/auth#480</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="7c6bc770da"><code>7c6bc77</code></a>
Release: v3.0.0 (<a
href="https://redirect.github.com/google-github-actions/auth/issues/510">#510</a>)</li>
<li><a
href="42e4997ee3"><code>42e4997</code></a>
Remove hacky script (<a
href="https://redirect.github.com/google-github-actions/auth/issues/509">#509</a>)</li>
<li><a
href="5ea4dc1147"><code>5ea4dc1</code></a>
Bump to Node 24 and remove old parameters (<a
href="https://redirect.github.com/google-github-actions/auth/issues/508">#508</a>)</li>
<li>See full diff in <a
href="https://github.com/google-github-actions/auth/compare/v2...v3">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=google-github-actions/auth&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 10:16:59 -04:00
dependabot[bot]
f8bcc98362 chore(infra): bump amannn/action-semantic-pull-request from 5 to 6 (#32585)
Bumps
[amannn/action-semantic-pull-request](https://github.com/amannn/action-semantic-pull-request)
from 5 to 6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/amannn/action-semantic-pull-request/releases">amannn/action-semantic-pull-request's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.0</h2>
<h2><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v5.5.3...v6.0.0">6.0.0</a>
(2025-08-13)</h2>
<h3>⚠ BREAKING CHANGES</h3>
<ul>
<li>Upgrade action to use Node.js 24 and ESM (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/287">#287</a>)</li>
</ul>
<h3>Features</h3>
<ul>
<li>Upgrade action to use Node.js 24 and ESM (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/287">#287</a>)
(<a
href="bc0c9a79ab">bc0c9a7</a>)</li>
</ul>
<h2>v5.5.3</h2>
<h2><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v5.5.2...v5.5.3">5.5.3</a>
(2024-06-28)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Bump <code>braces</code> dependency (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/269">#269</a>.
by <a href="https://github.com/EelcoLos"><code>@​EelcoLos</code></a>)
(<a
href="2d952a1bf9">2d952a1</a>)</li>
</ul>
<h2>v5.5.2</h2>
<h2><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v5.5.1...v5.5.2">5.5.2</a>
(2024-04-24)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Bump tar from 6.1.11 to 6.2.1 (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/262">#262</a>
by <a href="https://github.com/EelcoLos"><code>@​EelcoLos</code></a>)
(<a
href="9a90d5a5ac">9a90d5a</a>)</li>
</ul>
<h2>v5.5.1</h2>
<h2><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v5.5.0...v5.5.1">5.5.1</a>
(2024-04-24)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Bump ip from 2.0.0 to 2.0.1 (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/263">#263</a>
by <a href="https://github.com/EelcoLos"><code>@​EelcoLos</code></a>)
(<a
href="5e7e9acca3">5e7e9ac</a>)</li>
</ul>
<h2>v5.5.0</h2>
<h2><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v5.4.0...v5.5.0">5.5.0</a>
(2024-04-23)</h2>
<h3>Features</h3>
<ul>
<li>Add outputs for <code>type</code>, <code>scope</code> and
<code>subject</code> (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/261">#261</a>
by <a href="https://github.com/bcaurel"><code>@​bcaurel</code></a>) (<a
href="b05f5f6423">b05f5f6</a>)</li>
</ul>
<h2>v5.4.0</h2>
<h2><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v5.3.0...v5.4.0">5.4.0</a>
(2023-11-03)</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/amannn/action-semantic-pull-request/blob/main/CHANGELOG.md">amannn/action-semantic-pull-request's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v5.2.0...v5.3.0">5.3.0</a>
(2023-09-25)</h2>
<h3>Features</h3>
<ul>
<li>Use Node.js 20 in action (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/240">#240</a>)
(<a
href="4c0d5a21fc">4c0d5a2</a>)</li>
</ul>
<h2><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v5.1.0...v5.2.0">5.2.0</a>
(2023-03-16)</h2>
<h3>Features</h3>
<ul>
<li>Update dependencies by <a
href="https://github.com/EelcoLos"><code>@​EelcoLos</code></a> (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/229">#229</a>)
(<a
href="e797448a07">e797448</a>)</li>
</ul>
<h2><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v5.0.2...v5.1.0">5.1.0</a>
(2023-02-10)</h2>
<h3>Features</h3>
<ul>
<li>Add regex support to <code>scope</code> and
<code>disallowScopes</code> configuration (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/226">#226</a>)
(<a
href="403a6f8924">403a6f8</a>)</li>
</ul>
<h3><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v5.0.1...v5.0.2">5.0.2</a>
(2022-10-17)</h3>
<h3>Bug Fixes</h3>
<ul>
<li>Upgrade <code>@actions/core</code> to avoid deprecation warnings (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/208">#208</a>)
(<a
href="91f4126c9e">91f4126</a>)</li>
</ul>
<h3><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v5.0.0...v5.0.1">5.0.1</a>
(2022-10-14)</h3>
<h3>Bug Fixes</h3>
<ul>
<li>Upgrade GitHub Action to use Node v16 (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/207">#207</a>)
(<a
href="6282ee339b">6282ee3</a>)</li>
</ul>
<h2><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v4.6.0...v5.0.0">5.0.0</a>
(2022-10-11)</h2>
<h3>⚠ BREAKING CHANGES</h3>
<ul>
<li>Enum options need to be newline delimited (to allow whitespace
within them) (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/205">#205</a>)</li>
</ul>
<h3>Features</h3>
<ul>
<li>Enum options need to be newline delimited (to allow whitespace
within them) (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/205">#205</a>)
(<a
href="c906fe1e5a">c906fe1</a>)</li>
</ul>
<h2><a
href="https://github.com/amannn/action-semantic-pull-request/compare/v4.5.0...v4.6.0">4.6.0</a>
(2022-09-26)</h2>
<h3>Features</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="fdd4d3ddf6"><code>fdd4d3d</code></a>
chore: Release 6.0.1 [skip ci]</li>
<li><a
href="58e4ab40f5"><code>58e4ab4</code></a>
fix: Actually execute action (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/289">#289</a>)</li>
<li><a
href="04a8d177d9"><code>04a8d17</code></a>
chore: Release 6.0.0 [skip ci]</li>
<li><a
href="bc0c9a79ab"><code>bc0c9a7</code></a>
feat!: Upgrade action to use Node.js 24 and ESM (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/287">#287</a>)</li>
<li><a
href="631ffdc028"><code>631ffdc</code></a>
build(deps): bump the github-action-workflows group with 2 updates (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/286">#286</a>)</li>
<li><a
href="c1807ceb58"><code>c1807ce</code></a>
build: configure Dependabot (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/231">#231</a>)</li>
<li><a
href="3352882559"><code>3352882</code></a>
docs: Remove <code>synchronize</code> trigger (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/281">#281</a>)</li>
<li><a
href="04501d43b5"><code>04501d4</code></a>
docs: More restrictive permissions (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/280">#280</a>)</li>
<li><a
href="40166f0081"><code>40166f0</code></a>
chore: Update actions in release workflow (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/276">#276</a>)</li>
<li><a
href="80c0371c57"><code>80c0371</code></a>
docs: Mention <code>reopened</code> trigger in README (<a
href="https://redirect.github.com/amannn/action-semantic-pull-request/issues/272">#272</a>
by <a
href="https://github.com/garysassano"><code>@​garysassano</code></a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/amannn/action-semantic-pull-request/compare/v5...v6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=amannn/action-semantic-pull-request&package-manager=github_actions&previous-version=5&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 10:16:19 -04:00
dependabot[bot]
d8d93882f9 chore(infra): bump actions/checkout from 4 to 5 (#32584)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to
5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/releases">actions/checkout's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
<li>Prepare v5.0.0 release by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2238">actions/checkout#2238</a></li>
</ul>
<h2>⚠️ Minimum Compatible Runner Version</h2>
<p><strong>v2.327.1</strong><br />
<a
href="https://github.com/actions/runner/releases/tag/v2.327.1">Release
Notes</a></p>
<p>Make sure your runner is updated to this version or newer to use this
release.</p>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4...v5.0.0">https://github.com/actions/checkout/compare/v4...v5.0.0</a></p>
<h2>v4.3.0</h2>
<h2>What's Changed</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li>Adjust positioning of user email note and permissions heading by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2044">actions/checkout#2044</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li>Update CODEOWNERS for actions by <a
href="https://github.com/TingluoHuang"><code>@​TingluoHuang</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2224">actions/checkout#2224</a></li>
<li>Update package dependencies by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
<li>Prepare release v4.3.0 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2237">actions/checkout#2237</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/motss"><code>@​motss</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li><a href="https://github.com/mouismail"><code>@​mouismail</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li><a href="https://github.com/benwells"><code>@​benwells</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li><a href="https://github.com/nebuk89"><code>@​nebuk89</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li><a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4...v4.3.0">https://github.com/actions/checkout/compare/v4...v4.3.0</a></p>
<h2>v4.2.2</h2>
<h2>What's Changed</h2>
<ul>
<li><code>url-helper.ts</code> now leverages well-known environment
variables by <a href="https://github.com/jww3"><code>@​jww3</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/1941">actions/checkout#1941</a></li>
<li>Expand unit test coverage for <code>isGhes</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1946">actions/checkout#1946</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4.2.1...v4.2.2">https://github.com/actions/checkout/compare/v4.2.1...v4.2.2</a></p>
<h2>v4.2.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Check out other refs/* by commit if provided, fall back to ref by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1924">actions/checkout#1924</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/Jcambass"><code>@​Jcambass</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/checkout/pull/1919">actions/checkout#1919</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4.2.0...v4.2.1">https://github.com/actions/checkout/compare/v4.2.0...v4.2.1</a></p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2>V5.0.0</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
</ul>
<h2>V4.3.0</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li>Adjust positioning of user email note and permissions heading by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2044">actions/checkout#2044</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li>Update CODEOWNERS for actions by <a
href="https://github.com/TingluoHuang"><code>@​TingluoHuang</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2224">actions/checkout#2224</a></li>
<li>Update package dependencies by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
</ul>
<h2>v4.2.2</h2>
<ul>
<li><code>url-helper.ts</code> now leverages well-known environment
variables by <a href="https://github.com/jww3"><code>@​jww3</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/1941">actions/checkout#1941</a></li>
<li>Expand unit test coverage for <code>isGhes</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1946">actions/checkout#1946</a></li>
</ul>
<h2>v4.2.1</h2>
<ul>
<li>Check out other refs/* by commit if provided, fall back to ref by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1924">actions/checkout#1924</a></li>
</ul>
<h2>v4.2.0</h2>
<ul>
<li>Add Ref and Commit outputs by <a
href="https://github.com/lucacome"><code>@​lucacome</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1180">actions/checkout#1180</a></li>
<li>Dependency updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>- <a
href="https://redirect.github.com/actions/checkout/pull/1777">actions/checkout#1777</a>,
<a
href="https://redirect.github.com/actions/checkout/pull/1872">actions/checkout#1872</a></li>
</ul>
<h2>v4.1.7</h2>
<ul>
<li>Bump the minor-npm-dependencies group across 1 directory with 4
updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1739">actions/checkout#1739</a></li>
<li>Bump actions/checkout from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1697">actions/checkout#1697</a></li>
<li>Check out other refs/* by commit by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1774">actions/checkout#1774</a></li>
<li>Pin actions/checkout's own workflows to a known, good, stable
version. by <a href="https://github.com/jww3"><code>@​jww3</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1776">actions/checkout#1776</a></li>
</ul>
<h2>v4.1.6</h2>
<ul>
<li>Check platform to set archive extension appropriately by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1732">actions/checkout#1732</a></li>
</ul>
<h2>v4.1.5</h2>
<ul>
<li>Update NPM dependencies by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1703">actions/checkout#1703</a></li>
<li>Bump github/codeql-action from 2 to 3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1694">actions/checkout#1694</a></li>
<li>Bump actions/setup-node from 1 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1696">actions/checkout#1696</a></li>
<li>Bump actions/upload-artifact from 2 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1695">actions/checkout#1695</a></li>
<li>README: Suggest <code>user.email</code> to be
<code>41898282+github-actions[bot]@users.noreply.github.com</code> by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1707">actions/checkout#1707</a></li>
</ul>
<h2>v4.1.4</h2>
<ul>
<li>Disable <code>extensions.worktreeConfig</code> when disabling
<code>sparse-checkout</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1692">actions/checkout#1692</a></li>
<li>Add dependabot config by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1688">actions/checkout#1688</a></li>
<li>Bump the minor-actions-dependencies group with 2 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1693">actions/checkout#1693</a></li>
<li>Bump word-wrap from 1.2.3 to 1.2.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1643">actions/checkout#1643</a></li>
</ul>
<h2>v4.1.3</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="08c6903cd8"><code>08c6903</code></a>
Prepare v5.0.0 release (<a
href="https://redirect.github.com/actions/checkout/issues/2238">#2238</a>)</li>
<li><a
href="9f265659d3"><code>9f26565</code></a>
Update actions checkout to use node 24 (<a
href="https://redirect.github.com/actions/checkout/issues/2226">#2226</a>)</li>
<li>See full diff in <a
href="https://github.com/actions/checkout/compare/v4...v5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=4&new-version=5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 10:14:09 -04:00
Sadiq Khan
228fbac3a6 fix(openai): handle AIMessages without response_id in _get_last_messages (#32824) 2025-09-08 10:12:50 -04:00
JunHyungKang
6ea06ca972 fix(openai): Fix Azure OpenAI Responses API model field issue (#32649) 2025-09-08 10:08:35 -04:00
ccurme
5b0a55ad35 chore(openai): apply formatting changes to AzureChatOpenAI (#32848) 2025-09-08 09:54:20 -04:00
Sydney Runkle
6e2f46d04c feat(langchain): middleware support in create_agent (#32828)
## Overview

Adding new `AgentMiddleware` primitive that supports `before_model`,
`after_model`, and `prepare_model_request` hooks.

This is very exciting! It makes our `create_agent` prebuilt much more
extensible + capable. Still in alpha and subject to change.

This is different than the initial
[implementation](https://github.com/langchain-ai/langgraph/tree/nc/25aug/agent)
in that it:
* Fills in gaps w/ missing features, for ex -- new structured output,
optionality of tools + system prompt, sync and async model requests,
provider builtin tools
* Exposes private state extensions for middleware, enabling things like
model call tracking, etc
* Middleware can register tools
* Uses a `TypedDict` for `AgentState` -- dataclass subclassing is tricky
w/ required values + required decorators
* Addition of `model_settings` to `ModelRequest` so that we can pass
through things to bind (like cache kwargs for anthropic middleware)

## TODOs

### top prio
- [x] add middleware support to existing agent
- [x] top prio middlewares
  - [x] summarization node
  - [x] HITL
  - [x] prompt caching
 
other ones
- [x] model call limits
- [x] tool calling limits
- [ ] usage (requires output state)

### secondary prio
- [x] improve typing for state updates from middleware (not working
right now w/ simple `AgentUpdate` and `AgentJump`, at least in Python)
- [ ] add support for public state (input / output modifications via
pregel channel mods) -- to be tackled in another PR
- [x] testing!

### docs
See https://github.com/langchain-ai/docs/pull/390
- [x] high level docs about middleware
- [x] summarization node
- [x] HITL
- [x] prompt caching

## open questions

Lots of open questions right now, many of them inlined as comments for
the short term, will catalog some more significant ones here.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2025-09-08 01:10:57 +00:00
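A minimal sketch of what a custom middleware built on these hooks might look like, assuming an interface shaped like the one described above (the hook names come from the PR; the class layout, state keys, and signatures here are assumptions, since the API was alpha and subject to change):

```python
from typing import Any


class ModelCallLimitMiddleware:
    """Illustrative middleware: stop the agent after N model calls."""

    def __init__(self, max_calls: int = 10) -> None:
        self.max_calls = max_calls

    def before_model(self, state: dict[str, Any]) -> dict[str, Any] | None:
        # Middleware can extend agent state privately, e.g. with a
        # per-run model-call counter ("model_call_count" is hypothetical).
        calls = state.get("model_call_count", 0)
        if calls >= self.max_calls:
            raise RuntimeError("Model call limit reached")
        return {"model_call_count": calls + 1}

    def after_model(self, state: dict[str, Any]) -> dict[str, Any] | None:
        # Inspect or rewrite state after each model response; returning
        # None leaves the state unchanged.
        return None
```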
Christophe Bornet
5bf0b218c8 chore(cli): fix some ruff preview rules (#32803) 2025-09-07 16:53:19 -04:00
Mason Daugherty
4e39c164bb fix(anthropic): remove beta header warning for TTL (#32832)
No longer beta as of Aug 13
2025-09-05 14:28:58 -04:00
ScarletMercy
0b3af47335 fix(docs): resolve malformed character in tool_calling.ipynb (#32825)
**Description:**
Remove a character in tool_calling.ipynb that causes a grammatical error.
Verification: local docs build passed after the fix.

**Issue:**
None (direct hotfix for a rendering issue identified during documentation
review)

**Dependencies:**
None
2025-09-05 11:28:56 -04:00
Mason Daugherty
bc91a4811c chore: update PR template (#32819) 2025-09-04 19:53:54 +00:00
Christophe Bornet
05a61f9508 fix(langchain): fix mypy versions in langchain_v1 (#32816) 2025-09-04 11:51:08 -04:00
Christophe Bornet
aa63de9366 chore(langchain): cleanup langchain_v1 mypy config (#32809)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-03 19:28:06 +00:00
Christophe Bornet
86fa34f3eb chore(langchain): add ruff rules D for langchain_v1 (#32808) 2025-09-03 15:26:17 -04:00
JING
36037c9251 fix(docs): update Anthropic model name and add version warnings (#32807)
**Description:** This PR fixes the broken Anthropic model example in the
documentation introduction page and adds a comment field to display
model version warnings in code blocks. The changes ensure that users can
successfully run the example code and are reminded to check for the
latest model versions.

**Issue:** https://github.com/langchain-ai/langchain/issues/32806

**Changes made:**
- Update Anthropic model from broken "claude-3-5-sonnet-latest" to
working "claude-3-7-sonnet-20250219"
- Add comment field to display model version warnings in code blocks
- Improve user experience by providing working examples and version
guidance

**Dependencies:** None required
2025-09-03 15:25:13 -04:00
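For context, a hedged sketch of the corrected usage (the model identifier is the one named in this commit and will itself go stale; verify against Anthropic's current model list before copying):

```python
from langchain.chat_models import init_chat_model

# NOTE: model versions change frequently; check Anthropic's model list
# for the latest available version before using this identifier.
model = init_chat_model("anthropic:claude-3-7-sonnet-20250219")
print(model.invoke("Hello!").content)
```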
Martin Meier-Zavodsky
ad26c892ea docs(langchain): update evaluation tutorial link (#32796)
**Description**
This PR updates the evaluation tutorial link for LangSmith to the new
official docs location.

**Issue**
N/A

**Dependencies**
None
2025-09-03 15:22:46 -04:00
Shahroz Ahmad
4828a85ab0 feat(core): add web_search in OpenAI tools list (#32738) 2025-09-02 21:57:25 +00:00
ccurme
b999f356e8 fix(langchain): update __init__ version (#32793) 2025-09-02 13:14:42 -04:00
Sydney Runkle
062196a7b3 release(langchain): v1.0.0a3 (#32791) 2025-09-02 12:29:14 -04:00
Sydney Runkle
dc9f941326 chore(langchain): rename create_react_agent -> create_agent (#32789) 2025-09-02 12:13:12 -04:00
Adithya1617
238ecd09e0 docs(langchain): update redirect url of "this langsmith conceptual guide" in tracing.mdx (#32776)

- **Description:** updated the redirect URL of "this langsmith conceptual
guide" in tracing.mdx
- **Issue:** fixes #32775

---------

Co-authored-by: Adithya <adithya.vardhan1617@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-01 19:02:21 +00:00
Mason Daugherty
6b5fdfb804 release(text-splitters): 0.3.11 (#32770)
Fixes #32747

The spaCy integration test fixture was trying to use pip to download the
spaCy language model (`en_core_web_sm`), but uv-managed environments
don't include pip by default. The test now fails if the model is not
installed, instead of attempting to download it.
2025-08-31 23:00:05 +00:00
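A rough sketch of the fail-instead-of-download behavior, assuming a pytest fixture (the fixture name and message are illustrative, not the repo's actual code):

```python
import pytest


@pytest.fixture
def spacy_model():
    spacy = pytest.importorskip("spacy")
    try:
        return spacy.load("en_core_web_sm")
    except OSError:
        pytest.fail(
            "en_core_web_sm is not installed. Run "
            "`python -m spacy download en_core_web_sm`; environments "
            "without pip (e.g. uv-managed ones) cannot auto-download it."
        )
```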
Ravirajsingh Sodha
b42dac5fe6 docs: standardize OllamaLLM and BaseOpenAI docstrings (#32758)
- Add comprehensive docstring following LangChain standards
- Include Setup, Key init args, Instantiate, Invoke, Stream, and Async
sections
- Provide detailed parameter descriptions and code examples
- Fix linting issues for code formatting compliance

Contributes to #24803

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-31 17:45:56 -05:00
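To illustrate the kind of snippets such standardized docstring sections carry, a minimal `OllamaLLM` usage sketch (the model name is an assumption; use any locally pulled model):

```python
from langchain_ollama import OllamaLLM

# Instantiate against a locally available Ollama model
llm = OllamaLLM(model="llama3.1")

# Invoke
print(llm.invoke("Say hello in one word."))

# Stream
for chunk in llm.stream("Count to three."):
    print(chunk, end="")
```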
Christophe Bornet
e0a4af8d8b docs(text-splitters): fix some docstrings (#32767) 2025-08-31 13:46:11 -05:00
Rémy HUBSCHER
fcf7175392 chore(langchain): improve PostgreSQL Manager upsert SQLAlchemy API calls. (#32748)
- Make explicit the `constraint` parameter name to avoid mixing it with
`index_elements`
[[Documentation](https://docs.sqlalchemy.org/en/20/dialects/postgresql.html#sqlalchemy.dialects.postgresql.Insert.on_conflict_do_update)]
- ~Fallback on the existing `group_id` row value, to avoid setting it to
`None`.~
2025-08-30 14:13:24 -05:00
Kush Goswami
1f2ab17dff docs: fix typo and grammar in Conceptual guide (#32754)
fixed a small typo and a grammatical inconsistency in the Conceptual guide
2025-08-30 13:48:55 -05:00
Mason Daugherty
2dc89a2ae7 release(cli): 0.0.37 (#32760)
It's been a minute. Final release prior to dropping Python 3.9 support.
2025-08-30 13:07:55 -05:00
Christophe Bornet
e3c4aeaea1 chore(cli): add mypy strict checking (#32386)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-30 13:02:45 -05:00
Vikas Shivpuriya
444939945a docs: fix punctuation in style guide (#32756)
Removed a period in bulleted list for consistency

2025-08-30 12:56:17 -05:00
Vikas Shivpuriya
ae8db86486 docs: fixed typo in contributing guide (#32755)
Completed the sentence by adding a period ".", in sync with other points

>> Click "Propose changes"

to 

>> Click "Propose changes".

2025-08-30 12:55:25 -05:00
Christophe Bornet
8a1419dad1 chore(cli): add ruff rules ANN401 and D1 (#32576) 2025-08-30 12:41:16 -05:00
Kush Goswami
840e4c8e9f docs: fix grammar and typo in Documentation style guide (#32741)
Fixed grammar and one typo in the Documentation style guide
2025-08-29 14:22:54 -04:00
Caspar Broekhuizen
37aff0a153 chore: bump langchain-core minimum to 0.3.75 (#32753)
Update `langchain-core` dependency min from `>=0.3.63` to `>=0.3.75`.

### Motivation
- We located the `langchain-core` package locally in the monorepo and
need to align `langchain-tests` with the new minimum version.
2025-08-29 14:11:28 -04:00
Caspar Broekhuizen
a163d59988 chore(standard-tests): relax langchain-core bounds for langchain-tests 1.0.0a1 (#32752)
### Overview
Preparing the `1.0.0a1` release of `langchain-tests` to align with
`langchain-core` version `1.0.0a1`.

### Changes
- Bump package version to `1.0.0a1`
- Relax `langchain-core` requirement from `<1.0.0,>=0.3.63` to
`<2.0.0,>=0.3.63`

### Motivation
All main LangChain packages are now publishing `1.0.0a` prereleases.  
`langchain-tests` needs a matching prerelease so downstreams can install
tests alongside the 1.0 series without conflicts.

### Tests
- Verified installation and tests against both `0.3.75` and `1.0.0a1`.
2025-08-29 13:46:48 -04:00
Sydney Runkle
b26e52aa4d chore(text-splitters): bump version of core (#32740) 2025-08-28 13:14:57 -04:00
Sydney Runkle
38cdd7a2ec chore(text-splitters): relax max bound for langchain-core (#32739) 2025-08-28 13:05:47 -04:00
Sydney Runkle
26e5d1302b chore(langchain): remove upper bound at v1 for core (#32737) 2025-08-28 12:14:42 -04:00
Christopher Jones
107425c68d docs: fix basic Oracle example issues such as capitalization (#32730)
**Description:** fix capitalization and basic issues in
https://python.langchain.com/docs/integrations/document_loaders/oracleadb_loader/

Signed-off-by: Christopher Jones <christopher.jones@oracle.com>
2025-08-28 10:32:45 -04:00
Tik1993
009cc3bf50 docs(docs): added content= keyword when creating SystemMessage and HumanMessage (#32734)
Description: 
Added the content= keyword when creating SystemMessage and HumanMessage
in the messages list, making it consistent with the API reference.
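For illustration, a minimal sketch of the documented style:

```python
from langchain_core.messages import HumanMessage, SystemMessage

# Explicit `content=` keyword, consistent with the API reference
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is LangChain?"),
]
```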
2025-08-28 10:31:46 -04:00
NOOR UL HUDA
6185558449 docs: replace smart quotes with straight quotes on How-to guides landing page (#32725)
### Summary

This PR updates the sentence on the "How-to guides" landing page to
replace smart (curly) quotes with straight quotes in the phrase:

> "How do I...?"

### Why This Change?

- Ensures formatting consistency across documentation
- Avoids encoding or rendering issues with smart quotes
- Matches standard Markdown and inline code formatting

This is a small change, but improves clarity and polish on a key landing
page.
2025-08-28 10:30:12 -04:00
Kush Goswami
0928ff5b12 docs: fix typo in LangGraph section of Introduction (#32728)
Change "Linkedin" to "LinkedIn" to be consistent with LinkedIn's
spelling.

2025-08-28 10:29:35 -04:00
Sydney Runkle
7f9b0772fc chore(langchain): also bump text splitters (#32722) 2025-08-27 18:09:57 +00:00
Sydney Runkle
d6e618258f chore(langchain): use latest core (#32720) 2025-08-27 14:06:07 -04:00
Sydney Runkle
806bc593ab chore(langchain): revert back to static versioning for now (#32719) 2025-08-27 13:54:41 -04:00
Sydney Runkle
047bcbaa13 release(langchain): v1.0.0a1 (#32718)
Also removing globals usage + static version
2025-08-27 13:46:20 -04:00
Sydney Runkle
18db07c292 feat(langchain): revamped create_react_agent (#32705)
Adding `create_react_agent` and introducing `langchain.agents`!

## Enhanced Structured Output

`create_react_agent` supports coercion of outputs to structured data
types like `pydantic` models, dataclasses, typed dicts, or JSON schema
specifications.

### Structural Changes

In langgraph < 1.0, `create_react_agent` implemented support for
structured output via an additional LLM call to the model after the
standard model / tool calling loop finished. This introduced extra
expense and was unnecessary.

This new version implements structured output support in the main loop,
allowing a model to choose between calling tools or generating
structured output (or both).

The same basic pattern for structured output generation works:

```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""

    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent("openai:gpt-4o-mini", tools=[weather_tool], response_format=Weather)
result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```

### Advanced Configuration

The new API exposes two ways to configure how structured output is
generated. Under the hood, LangChain will attempt to pick the best
approach if not explicitly specified. That is, if provider native
support is available for a given model, that takes priority over
artificial tool calling.

1. Artificial tool calling (the default for most models)

LangChain generates a tool (or tools) under the hood that match the
schema of your response format. When the model calls those tools,
LangChain coerces the args to the desired format. Note that LangChain
does not validate outputs against JSON schema specifications.

<details>
<summary>Extended example</summary>

```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from langchain.agents.structured_output import ToolStrategy
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""

    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ToolStrategy(
        schema=Weather, tool_message_content="Final Weather result generated"
    ),
)

result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
for message in result["messages"]:
    message.pretty_print()

"""
================================ Human Message =================================

What's the weather in Tokyo?
================================== Ai Message ==================================
Tool Calls:
  weather_tool (call_Gg933BMHMwck50Q39dtBjXm7)
 Call ID: call_Gg933BMHMwck50Q39dtBjXm7
  Args:
    city: Tokyo
================================= Tool Message =================================
Name: weather_tool

it's sunny and 70 degrees in Tokyo
================================== Ai Message ==================================
Tool Calls:
  Weather (call_9xOkYUM7PuEXl9DQq9sWGv5l)
 Call ID: call_9xOkYUM7PuEXl9DQq9sWGv5l
  Args:
    temperature: 70
    condition: sunny
================================= Tool Message =================================
Name: Weather

Final Weather result generated
"""

print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```

</details>

2. Provider implementations (limited to OpenAI, Groq)

Some providers support generating structured output directly. For those
cases, we offer the `ProviderStrategy` hint:

<details>
<summary>Extended example</summary>

```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from langchain.agents.structured_output import ProviderStrategy
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""

    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ProviderStrategy(Weather),
)

result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
for message in result["messages"]:
    message.pretty_print()

"""
================================ Human Message =================================

What's the weather in Tokyo?
================================== Ai Message ==================================
Tool Calls:
  weather_tool (call_OFJq1FngIXS6cvjWv5nfSFZp)
 Call ID: call_OFJq1FngIXS6cvjWv5nfSFZp
  Args:
    city: Tokyo
================================= Tool Message =================================
Name: weather_tool

it's sunny and 70 degrees in Tokyo
================================== Ai Message ==================================

{"temperature":70,"condition":"sunny"}
Weather(temperature=70.0, condition='sunny')
"""

print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```

Note! The final tool message has the custom content provided by the dev.

</details>

Prompted output was previously supported but is no longer available via
the `response_format` argument to `create_react_agent`. If there's
significant demand for this, we'd be happy to engineer a solution.

## Error Handling

`create_react_agent` now exposes an API for managing errors associated
with structured output generation. There are two common problems with
structured output generation (w/ artificial tool calling):

1. **Parsing error** -- the model generates data that doesn't match the
desired structure for the output
2. **Multiple tool calls error** -- the model generates 2 or more tool
calls associated with structured output schemas

A developer can control the desired behavior for this via the
`handle_errors` arg to `ToolStrategy`.

<details>
<summary>Extended example</summary>

```py
from langchain_core.messages import HumanMessage
from pydantic import BaseModel

from langchain.agents import create_react_agent
from langchain.agents.structured_output import StructuredOutputValidationError, ToolStrategy


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"


def handle_validation_error(error: Exception) -> str:
    if isinstance(error, StructuredOutputValidationError):
        return (
            f"Please call the {error.tool_name} call again with the correct arguments. "
            f"Your mistake was: {error.source}"
        )
    raise error


agent = create_react_agent(
    "openai:gpt-5",
    tools=[weather_tool],
    response_format=ToolStrategy(
        schema=Weather,
        handle_errors=handle_validation_error,
    ),
)
```

</details>

## Error Handling for Tool Calling

Tools fail for two main reasons:

1. **Invocation failure** -- the args generated by the model for the
tool are incorrect (missing, incompatible data types, etc)
2. **Execution failure** -- the tool execution itself fails due to a
developer error, network error, or some other exception.

By default, when tool **invocation** fails, the react agent will return
an artificial `ToolMessage` to the model asking it to correct its
mistakes and retry.

Now, when tool **execution** fails, the react agent raises the
`ToolException` by default instead of asking the model to retry. This
helps avoid retry loops that cannot fix the underlying problem.

Developers can configure their desired behavior for retries / error
handling via the `handle_tool_errors` arg to `ToolNode`.
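
A minimal sketch of that configuration, assuming `ToolNode` is imported
from `langchain.agents` per the migration notes below, and that
`handle_tool_errors` accepts a string fallback message (it may also take
a callable or exception types):

```py
from langchain.agents import ToolNode


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"


# On execution failure, send the model a correction message instead of raising:
tool_node = ToolNode(
    [weather_tool],
    handle_tool_errors="Tool execution failed; check your arguments and try again.",
)
```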

## Pre-Bound Models

`create_react_agent` no longer supports inputs to `model` that have been
pre-bound w/ tools or other configuration. To properly support
structured output generation, the agent itself needs the power to bind
tools + structured output kwargs.

This also makes the devx cleaner - it's always expected that `model` is
an instance of `BaseChatModel` (or `str` that we coerce into a chat
model instance).

Dynamic model functions can return a pre-bound model **IF** structured
output is not also used. Dynamic model functions can then bind tools /
structured output logic.
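
A sketch of the allowed pattern (the dynamic-function signature here is
an assumption, not the exact API):

```py
from langchain_openai import ChatOpenAI


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"


def dynamic_model(state):  # hypothetical signature
    # Returning a pre-bound model is allowed here because structured
    # output is not also used; the function binds tools itself.
    return ChatOpenAI(model="gpt-4o-mini").bind_tools([weather_tool])
```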

## Import Changes

Users should now use `create_react_agent` from `langchain.agents`
instead of `langgraph.prebuilt` (see the sketch below).
Other imports have a similar migration path, `ToolNode` and `AgentState`
for example.

* `chat_agent_executor.py` -> `react_agent.py`
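
For example:

```py
# Before (langgraph < 1.0):
# from langgraph.prebuilt import create_react_agent, ToolNode

# After:
from langchain.agents import create_react_agent, ToolNode
```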

Some notes:
1. Disabled blockbuster + some linting in `langchain/agents` -- less
than ideal, but necessary to get this across the line for the alpha. We
should re-enable before the official release.
2025-08-27 17:32:21 +00:00
Sydney Runkle
1fe2c4084b chore(langchain): remove untested chains for first alpha (#32710)
Also removing globals.py file
2025-08-27 08:24:43 -04:00
Sydney Runkle
c6c7fce6c9 chore(langchain): drop Python 3.9 to prep for v1 (#32704)
Python 3.9 EOL is October 2025, so we're going to drop it for the v1
alpha release.
2025-08-26 23:16:42 +00:00
Mason Daugherty
3d08b6bd11 chore: address pytest-asyncio deprecation warnings + other nits (#32696)
along with some incompatible linting rules
2025-08-26 15:51:38 -04:00
Matthew Farrellee
f2dcdae467 fix(standard-tests): update function_args to match my_adder_tool param types (#32689)
**Description:**

https://api.llama.com implements strong type checking, which results in
a false negative.

with type mismatch (expected integer, received string) -

```
$ curl -X POST "https://api.llama.com/compat/v1/chat/completions" \
  -H "Authorization: Bearer API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
 "model": "Llama-3.3-70B-Instruct",
 "messages": [
     {"role": "user", "content": "What is 1 + 2"},
     {"role": "assistant", "content": "", "tool_calls": [{"id": "abc123", "type": "function", "function": {"name": "my_adder_tool", "arguments": "{\"a\": \"1\", \"b\": \"2\"}"}}]},
     {"role": "tool", "tool_call_id": "abc123", "content": "{\"result\": 3}"}
 ],
 "tools": [{"type": "function", "function": {"name": "my_adder_tool", "description": "Sum two integers", "parameters": {"properties": {"a": {"type": "integer"}, "b": {"type": "integer"}}, "required": ["a", "b"], "type": "object"}}}]
}'

{"title":"Bad request","detail":"Unexpected param value `a`: \"1\"","status":400}
```

with correct type -

```
$ curl -X POST "https://api.llama.com/compat/v1/chat/completions" \
  -H "Authorization: Bearer API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
 "model": "Llama-3.3-70B-Instruct",
 "messages": [
     {"role": "user", "content": "What is 1 + 2"},
     {"role": "assistant", "content": "", "tool_calls": [{"id": "abc123", "type": "function", "function": {"name": "my_adder_tool", "arguments": "{\"a\": 1, \"b\": 2}"}}]},
     {"role": "tool", "tool_call_id": "abc123", "content": "{\"result\": 3}"}
 ],
 "tools": [{"type": "function", "function": {"name": "my_adder_tool", "description": "Sum two integers", "parameters": {"properties": {"a": {"type": "integer"}, "b": {"type": "integer"}}, "required": ["a", "b"], "type": "object"}}}]
}'

{"id":"AhMwBbuaa5payFr_xsOHzxX","model":"Llama-3.3-70B-Instruct","choices":[{"finish_reason":"stop","index":0,"message":{"refusal":"","role":"assistant","content":"The result of 1 + 2 is 3.","id":"AhMwBbuaa5payFr_xsOHzxX"},"logprobs":null}],"created":1756167668,"object":"chat.completions","usage":{"prompt_tokens":248,"completion_tokens":17,"total_tokens":265}}
```
2025-08-26 15:50:47 -04:00
ccurme
dbebe2ca97 release(core): 0.3.75 (#32693) 2025-08-26 11:12:03 -04:00
ccurme
008043977d release(openai): 0.3.32 (#32691) 2025-08-26 14:05:40 +00:00
Jacob Lee
1459d4f4ce fix(openai): Always add raw response object to OpenAI client errors for invoke (#32655) 2025-08-26 09:59:25 -04:00
ccurme
f33480c2cf feat(core): trace response body on error (#32653) 2025-08-25 14:28:19 -04:00
Mason Daugherty
1c55536ec1 chore(core): add note about backward compatibility for tool_calls in additional_kwargs in JsonOutputKeyToolsParser 2025-08-25 10:30:41 -04:00
Maitrey Talware
622337a297 docs(docs): fixed typos in documentation (#32661)
Minor typo fixes. (Not linked to current open issues)
2025-08-25 10:02:53 -04:00
Shahroz Ahmad
1819c73d10 docs(docs): update Docker to ClickHouse 25.7 with vector_similarity support (#32659)
- **Description:** Updated Docker command to use ClickHouse 25.7 (has
`vector_similarity` index support). Added `CLICKHOUSE_SKIP_USER_SETUP=1`
env param to [bypass default user
setup](https://clickhouse.com/docs/install/docker#managing-default-user)
and allow external network access. There was also a bug: results from
`similarity_search_with_relevance_scores` need to be unpacked first.

- **Issue:** Fixes #32094 for anyone following the tutorial with default
ClickHouse configurations.
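
For reference, a sketch of the unpacking (`vector_store` stands in for
the tutorial's ClickHouse store):

```python
# Each result is a (Document, relevance_score) tuple, so unpack it:
results = vector_store.similarity_search_with_relevance_scores("my query", k=3)
for doc, score in results:
    print(f"{score:.3f}: {doc.page_content[:80]}")
```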
2025-08-25 09:59:28 -04:00
Kim
8171403b4a docs(docs): rebranding of Azure AI Studio to Azure AI Foundry (#32658)
# Description
Updated documentation to reflect Microsoft’s rebranding of Azure AI
Studio to Azure AI Foundry. This ensures consistency with current Azure
terminology across the docs.

# Issue
N/A

# Dependencies
None
2025-08-25 09:58:31 -04:00
Mason Daugherty
2d0713c2fc fix(infra): ollama CI 2025-08-22 16:40:03 -04:00
Mason Daugherty
8060b371bb fix(infra): ollama CI 2025-08-22 16:37:05 -04:00
Mason Daugherty
7851f66503 release(ollama): 0.3.7 (#32651) 2025-08-22 15:18:40 -04:00
Mason Daugherty
af3b88f58d feat(ollama): update reasoning type to support string values for custom intensity levels (e.g. gpt-oss) (#32650) 2025-08-22 15:11:32 -04:00
itaismith
1eb45d17fb feat(chroma): Add support for collection forking (#32627) 2025-08-21 17:57:55 -04:00
ccurme
8545d4731e release(openai): 0.3.31 (#32646) 2025-08-21 16:50:27 -04:00
Alex Naidis
21f7a9a9e5 fix(openai): allow temperature parameter for gpt-5-chat models (#32624) 2025-08-21 16:40:10 -04:00
sa411022
61bc1bf9cc fix(openai): construct responses api input (#32557) 2025-08-21 15:56:29 -04:00
Shahrukh Shaik
4ba222148d fix(openai): Chat Message Annotations defaults to [ ] if not list or None (#32614) 2025-08-21 15:30:12 -04:00
Christophe Bornet
b825f85bf2 fix(standard-tests): fix BaseStoreAsyncTests.test_set_values_is_idempotent (#32638)
The async version of the test should use the `ayield_keys` method
instead of `yield_keys`.
Otherwise tools such as `blockbuster` may trigger on a blocking call.
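
A sketch of the corrected pattern (the test body is illustrative, not
the exact test):

```python
async def test_set_values_is_idempotent(self, kv_store) -> None:
    # Stay on the async API end to end so blocking-call detectors
    # such as blockbuster stay quiet.
    await kv_store.amset([("key", "value"), ("key", "value")])
    keys = [key async for key in kv_store.ayield_keys()]
    assert keys == ["key"]
```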
2025-08-21 10:07:46 -04:00
Mohammed Mohtasim .M.S
b5c44406eb docs(docs): fix typos in table in "How to load PDFs" documentation (#32635)
**Description:**
Fixed corrupted text in the code cell output of the documentation
notebook. The code cell itself was correct, but the saved output
contained garbage text.

**Issue:**
The saved output in the documentation notebook contained garbage/typo
text in the table name.

**Dependencies:**
None
2025-08-21 10:06:45 -04:00
Emmanuel Leroy
2ec63ca7da docs: migration to langchain_oci (#32619)
Doc update. I missed a couple mentions of the old package.
2025-08-21 10:03:44 -04:00
Christophe Bornet
f896bcdb1d chore(langchain): add mypy pydantic plugin (#32610) 2025-08-19 16:59:59 -04:00
Christophe Bornet
73a7de63aa chore(text-splitters): add mypy pydantic plugin (#32611) 2025-08-19 16:58:12 -04:00
Emmanuel Leroy
cd5f3ee364 docs: migrate from community package to langchain-oci (#32608)
Migrate package from langchain_community to langchain_oci
2025-08-19 16:57:37 -04:00
Christophe Bornet
02d6b9106b chore(core): add mypy pydantic plugin (#32604)
This helps to remove a bunch of mypy false positives.
2025-08-19 09:39:53 -04:00
William FH
b470c79f1d refactor(core): Use duck typing for _StreamingCallbackHandler (#32535)
It's used in langgraph and maybe elsewhere, so would be preferable if it
could just be duck-typed
2025-08-19 05:41:07 -07:00
Mason Daugherty
d204f0dd55 feat(infra): add skip-preview tag check in Vercel deployment script (#32600)
Having vercel attempt to deploy on each commit (even if unrelated to
docs) was getting annoying. Options:

- `[skip-preview]`
- `[no-preview]`
- `[skip-deploy]`

Full example: `fix(core): resolve memory leak [no-preview]`
2025-08-18 17:33:27 -04:00
Mohammad Mohtashim
00259b0061 fix(deepseek): Deep Seek Model for LS Tracing (#32575)
- **Description:** Fix for LS Tracing for Provider for DeepSeek.
  - **Issue:** #32484

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-18 18:48:30 +00:00
Mohammad Mohtashim
4fb1132e30 docs: Classification Notebook Update (#32357)
- **Description:** Updating the Classification notebook per the issue raised
[here](https://github.com/langchain-ai/langchain/issues/32354)
- **Issue:** Fixes #32354

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-18 18:45:03 +00:00
Mason Daugherty
a6690eb9fd release(anthropic): 0.3.19 (#32595) 2025-08-18 14:25:03 -04:00
Mason Daugherty
f69f9598f5 chore: update references to use the latest version of Claude-3.5 Sonnet (#32594) 2025-08-18 14:11:15 -04:00
Mason Daugherty
8d0fb2d04b fix(anthropic): correct input_token count for streaming (#32591)
* Create usage metadata on
[`message_delta`](https://docs.anthropic.com/en/docs/build-with-claude/streaming#event-types)
instead of at the beginning. Consequently, token counts are not included
during streaming but instead at the end. This allows for accurate
reporting of server-side tool usage (important for billing)
* Add some clarifying comments
* Fix some outstanding Pylance warnings
* Remove unnecessary `text` popping in thinking blocks
* Also now correctly reports `input_cache_read`/`input_cache_creation`
as a result
2025-08-18 17:51:47 +00:00
Mason Daugherty
8042b04da6 fix(anthropic): clean up null file_id fields in citations during message formatting (#32592)
When citations are returned from streaming, they include a `file_id:
null` field in their `content_block_location` structure.

When these citations are passed back to the API in subsequent messages,
the API rejects them with "Extra inputs are not permitted" for the
`file_id` field.
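
A sketch of the kind of cleanup applied (helper name hypothetical):

```python
def _strip_null_file_id(citation: dict) -> dict:
    """Hypothetical helper: drop a null file_id so the API accepts the citation."""
    cleaned = dict(citation)
    if cleaned.get("file_id") is None:
        cleaned.pop("file_id", None)
    return cleaned
```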
2025-08-18 13:01:52 -04:00
Daehwi Kim
fb74265175 fix(docs): update LangGraph guides link and add JS how-to link (#32583)
**Description:**  
Corrected LangGraph documentation link (changed to “guides”), and added
a link to LangGraph JS how-to guides for clarity.

**Issue:**  
N/A  

**Dependencies:**  
None

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-18 14:27:37 +00:00
Oresztesz Margaritisz
21b61aaf9a fix(docs): Using appropriate argument name in ToolNode for error handling (#32586)
The appropriate `ToolNode` attribute for error handling is called
`handle_tool_errors` instead of `handle_tool_error`.

For further info see [ToolNode source code in
LangGraph](https://github.com/langchain-ai/langgraph/blob/main/libs/prebuilt/langgraph/prebuilt/tool_node.py#L255)

**Twitter handle:** gitaroktato

2025-08-18 10:12:10 -04:00
Keyu Chen
03138f41a0 feat(text-splitters): add optional custom header pattern support (#31887)
## Description

This PR adds support for custom header patterns in
`MarkdownHeaderTextSplitter`, allowing users to define non-standard
Markdown header formats (like `**Header**`) and specify their hierarchy
levels.

**Issue:** Fixes #22738

**Dependencies:** None - this change has no new dependencies

**Key Changes:**
- Added optional `custom_header_patterns` parameter to support
non-standard header formats
- Enable splitting on patterns like `**Header**` and `***Header***`
- Maintain full backward compatibility with existing usage
- Added comprehensive tests for custom and mixed header scenarios

## Example Usage

```python
from langchain_text_splitters import MarkdownHeaderTextSplitter

headers_to_split_on = [
    ("**", "Chapter"),
    ("***", "Section"),
]

custom_header_patterns = {
    "**": 1,   # Level 1 headers
    "***": 2,  # Level 2 headers
}

splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=headers_to_split_on,
    custom_header_patterns=custom_header_patterns,
)

# Now **Chapter 1** is treated as a level 1 header
# And ***Section 1.1*** is treated as a level 2 header
```

## Testing

-  Added unit tests for custom header patterns
-  Added tests for mixed standard and custom headers
-  All existing tests pass (backward compatibility maintained)
-  Linting and formatting checks pass

---

The implementation provides a flexible solution while maintaining the
simplicity of the existing API. Users can continue using the splitter
exactly as before, with the new functionality being entirely opt-in
through the `custom_header_patterns` parameter.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-18 10:10:49 -04:00
Mason Daugherty
fd891ee3d4 revert(anthropic): streaming token counting to defer input tokens until completion (#32587)
Reverts langchain-ai/langchain#32518
2025-08-18 09:48:33 -04:00
ccurme
b8cdbc4eca fix(anthropic): sanitize tool use block when taking directly from content (#32574) 2025-08-18 09:06:57 -04:00
Christophe Bornet
791d309c06 chore(langchain): add mypy warn_unreachable setting (#32529)
See
https://mypy.readthedocs.io/en/stable/config_file.html#confval-warn_unreachable

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-15 23:03:53 +00:00
Mason Daugherty
d3d23e2372 fix(anthropic): streaming token counting to defer input tokens until completion (#32518)
Supersedes #32461

Fixed incorrect input token reporting during streaming when tools are
used. Previously, input tokens were counted at `message_start` before
tool execution, leading to inaccurate counts. Now input tokens are
properly deferred until `message_delta` (completion), aligning with
Anthropic's billing model and SDK expectations.

**Before Fix:**
- Streaming with tools: Input tokens = 0 
- Non-streaming with tools: Input tokens = 472 

**After Fix:**
- Streaming with tools: Input tokens = 472 
- Non-streaming with tools: Input tokens = 472 

Aligns with Anthropic's SDK expectations. The SDK handles input token
updates in `message_delta` events:

```python
# https://github.com/anthropics/anthropic-sdk-python/blob/main/src/anthropic/lib/streaming/_messages.py
if event.usage.input_tokens is not None:
      current_snapshot.usage.input_tokens = event.usage.input_tokens
```
2025-08-15 17:49:46 -04:00
Mason Daugherty
2f32c444b8 docs: add details on message IDs and their assignment process (#32534) 2025-08-15 18:22:28 +00:00
Mason Daugherty
fe740a9397 fix(docs): chatbot.ipynb trimming regression (#32561)
Supersedes #32544

Changes to the `trimmer` behavior meant the call `"What math
problem was asked?"` no longer saw the relevant query, due to the
token count of the preceding queries. Adjusted so that trimming does not
remove the relevant part of the message history. Also, added a print to
the trimmer to increase observability of what leaves the context window.

Add note to trimming tut & format links as inline
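
For context, a declarative trimmer in the style of the tutorial (the
budget and model here are assumptions for illustration):

```python
from langchain_core.messages import trim_messages

trimmer = trim_messages(
    max_tokens=65,        # illustrative token budget
    strategy="last",      # keep the most recent messages
    token_counter=model,  # assumed: the tutorial's chat model instance
    include_system=True,
    allow_partial=False,
    start_on="human",
)
```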
2025-08-15 14:47:22 +00:00
Rostyslav Borovyk
b2b835cb36 docs(docs): add Oxylabs document loader (#32429)

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-15 10:46:26 -04:00
Christophe Bornet
4656f727da chore(text-splitters): add mypy warn_unreachable (#32558) 2025-08-15 09:45:20 -04:00
Mason Daugherty
34800332bf chore: update integrations table (#32556)
Enhance the integrations table by adding the `js:
'@langchain/community'` reference for several packages and updating the
titles of specific integrations to avoid improper capitalization
2025-08-14 22:37:36 -04:00
Mason Daugherty
06ba80ff68 docs: formatting Tavily (#32555) 2025-08-14 23:41:37 +00:00
Mason Daugherty
2bd8096faa docs: add pre-commit setup instructions to the dev setup guide (#32553) 2025-08-14 20:35:57 +00:00
Mason Daugherty
a0331285d7 fix(core): Support no-args tools by defaulting args to empty dict (#32530)
Supersedes #32408

Description:  
This PR ensures that tool calls without explicitly provided `args` will
default to an empty dictionary (`{}`), allowing tools with no parameters
(e.g. `def foo() -> str`) to be registered and invoked without
validation errors. This change improves compatibility with agent
frameworks that may omit the `args` field when generating tool calls.
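
A sketch of the behavior this enables:

```python
from langchain_core.tools import tool


@tool
def ping() -> str:
    """A tool that takes no arguments."""
    return "pong"


# With the fix, a tool call that omits `args` is treated as `args={}`,
# so a no-argument tool validates and runs:
result = ping.invoke({"name": "ping", "args": {}, "id": "call_1", "type": "tool_call"})
print(result)  # ToolMessage(content='pong', ...)
```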

Issue:  
See
[langgraph#5722](https://github.com/langchain-ai/langgraph/issues/5722)
–
LangGraph currently emits tool calls without `args`, which leads to
validation errors
when tools with no parameters are invoked. This PR ensures compatibility
by defaulting
`args` to `{}` when missing.

Dependencies:  
None

---------

Signed-off-by: jitokim <pigberger70@gmail.com>
Co-authored-by: jito <pigberger70@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-14 20:28:36 +00:00
Mason Daugherty
8f68a08528 chore: add Chat LangChain to README.md (#32545) 2025-08-14 16:15:27 -04:00
Lauren Hirata Singh
71651c4a11 docs: update banner (#32552) 2025-08-14 10:54:29 -07:00
Lauren Hirata Singh
44ec1f32b2 docs: banner for academy course (#32550)
Publish at 10AM PT
2025-08-14 10:05:00 -07:00
Yoon
0c81499243 docs(ollama): update API usage examples (#32547)
**Description**  
Corrected a typo in the Ollama chatbot example output in  
`docs/docs/integrations/chat/ollama.ipynb` where `"got-oss"` was  
mistakenly used instead of `"gpt-oss"`.

No functional changes to code; documentation-only update.  
All notebook outputs were cleared to keep the diff minimal.

**Issue**  
N/A

**Dependencies**  
None

**Twitter handle**  
N/A
2025-08-14 12:57:38 -04:00
Mason Daugherty
397cd89988 docs: update outdated README.md content (#32540) 2025-08-13 22:19:38 +00:00
mishraravibhushan
db438d8dcc docs(docs): fixed additional grammar and style issues in how-to index (#32533)
- Fix 'few shot' → 'few-shot' (add hyphen for consistency)
- Fix 'over the database' → 'over a database' (add missing article)
- Fix 'run time' → 'runtime' (more consistent terminology)
- Fix 'in-sync' → 'in sync' (remove unnecessary hyphen)
2025-08-13 14:10:58 -04:00
RecallIO
4f71c35eb0 docs(docs): Add RecallIO.AI as a memory provider (#32331)
Add requested files to add RecallIO as a memory provider.

---------

Co-authored-by: Frey <gfreyburger@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-13 15:09:56 +00:00
Mason Daugherty
156ae2e69b fix(docs): resolve langchain-azure-ai conflict with langchain-core (#32528) 2025-08-13 14:47:23 +00:00
Shenghang Tsai
f4f919768e docs(langchain): create SiliconFlow provider entry (#32342)
SiliconFlow's provider integration will be maintained at
https://github.com/siliconflow/langchain-siliconflow
This PR introduces basic instructions for making use of the pip package
2025-08-13 10:41:23 -04:00
Mason Daugherty
7932e1edd1 feat(docs): clarify structured output with tools ordering (#32527) 2025-08-13 10:40:48 -04:00
Mason Daugherty
024422e9b0 chore: update to use new LGP docs url (#32522) 2025-08-13 03:38:39 +00:00
Mason Daugherty
d52036accc chore: update README.md to use pepy downloads badge (#32521) 2025-08-13 03:23:11 +00:00
Mason Daugherty
5b701b5189 fix(tests): add anthropic_proxy to configurable test parameters (for v1) 2025-08-12 18:33:21 -04:00
Mason Daugherty
8848b3e018 fix(tests): add anthropic_proxy to configurable test parameters 2025-08-12 18:27:35 -04:00
Mason Daugherty
80068432ed chore(core): bump lock 2025-08-12 17:32:24 -04:00
Jack
b9dcce95be fix(anthropic): Add proxy (#32409)
fix #30146

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-12 21:21:26 +00:00
ccurme
be83ce74a7 feat(anthropic): support cache_control as a kwarg (#31523)
```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-haiku-latest")
caching_llm = llm.bind(cache_control={"type": "ephemeral"})

caching_llm.invoke(
    [
        HumanMessage("..."),
        AIMessage("..."),
        HumanMessage("..."),  # <-- final message / content block gets cache annotation
    ]
)
```
Potentially useful given's Anthropic's [incremental
caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#continuing-a-multi-turn-conversation)
capabilities:
> During each turn, we mark the final block of the final message with
cache_control so the conversation can be incrementally cached. The
system will automatically lookup and use the longest previously cached
prefix for follow-up messages.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-12 16:18:24 -04:00
Mason Daugherty
1167e7458e fix(anthropic): update test model names and adjust token count assertions in integration tests (#32422) 2025-08-12 19:39:35 +00:00
Mason Daugherty
d5fd0bca35 docs(anthropic): add documentation for extended context windows in Claude Sonnet 4 (#32517) 2025-08-12 19:16:26 +00:00
Narasimha Badrinath
30d646b576 docs(docs): remove redundant integration details from ChatGradient page. (#32514)
This commit removes redundant integration info from the details page.
Additionally, it changes the reference from "DigitalOcean GradientAI" to
"DigitalOcean Gradient™ AI" and updates the setup instructions
accordingly.
2025-08-12 16:14:18 +00:00
Mason Daugherty
262c83763f release(openai): 0.3.30 (#32515) 2025-08-12 16:06:17 +00:00
Mason Daugherty
0024dffa68 feat(openai): officially support verbosity (#32470) 2025-08-12 16:00:30 +00:00
Brody
98797f367a docs: fix broken links (#32513)
**Description:**

Two broken links were reported by another LangChain employee. This PR
fixes those links.

Fixed and tested locally.
  
**Dependencies:**

None
2025-08-12 15:55:37 +00:00
Christophe Bornet
1563099f3f chore(langchain): select ALL rules with exclusions (#31930)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-12 11:51:31 -04:00
rishiraj
7f259863e1 feat(docs): add truefoundry ai gateway (#32362)
This PR adds documentation for integrating [TrueFoundry’s AI
Gateway](https://www.truefoundry.com/ai-gateway) with Langfuse using the
LangGraph OpenAI SDK.
The integration sends requests through TrueFoundry’s AI Gateway for
unified governance, observability, and routing, while LangGraph runs on
the client side to capture execution traces and telemetry.
- Issue: N/A
- Dependencies: None
- Twitter - https://x.com/truefoundry


tests - Not applicable

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-12 02:26:45 +00:00
Mason Daugherty
c8df6c7ec9 chore: update CONTRIBUTING.md to more clearly mention forum (#32509) 2025-08-11 23:02:21 +00:00
Christophe Bornet
cf2b4bbe09 chore(cli): select ALL rules with exclusions (#31936)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 22:43:11 +00:00
Christophe Bornet
09a616fe85 chore(standard-tests): add ruff rules D (#32347)
See https://docs.astral.sh/ruff/rules/#pydocstyle-d

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 22:26:11 +00:00
Christophe Bornet
46bbd52e81 chore(cli): add ruff rules D1 (#32350)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 22:25:30 +00:00
Christophe Bornet
8b663ed6c6 chore(text-splitters): bump mypy version to 1.17 (#32387)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 22:24:49 +00:00
Anderson
166c027434 docs: add scrapeless integration documentation (#32081)

- **Description:** Integrated the Scrapeless package to enable LangChain
users to seamlessly incorporate Scrapeless into their agents.
- **Dependencies:** None
- **Twitter handle:** [Scrapelessteam](https://x.com/Scrapelessteam)


---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-11 22:16:15 +00:00
GDanksAnchor
4a2a3fcd43 docs: add anchorbrowser (#32494)
# Description

This PR updates the docs for the
[langchain-anchorbrowser](https://pypi.org/project/langchain-anchorbrowser/)
package. It adds a few tools.

[Anchor Browser](https://anchorbrowser.io/?utm=langchain) is the
platform for AI Agentic browser automation, which solves the challenge
of automating workflows for web applications that lack APIs or have
limited API coverage. It simplifies the creation, deployment, and
management of browser-based automations, transforming complex web
interactions into simple API endpoints.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-11 21:48:10 +00:00
Anubhav Dhawan
d46dcf4a60 docs: add Google partner guide for MCP Toolbox (#32356)
This PR introduces a new Google partner guide for MCP Toolbox. The
primary goal of this new documentation is to enhance the discoverability
of MCP Toolbox for developers working within the Google ecosystem,
providing them with a clear and direct path to using our tools.

> [!IMPORTANT]
> This PR contains link to a page which is added in #32344. This will
cause deployment failure until that PR is merged.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-11 21:34:12 +00:00
William Espegren
d2ac3b375c fix(docs): add Spider as a webpage loader (#32453)
[Spider](https://spider.cloud/) is a webpage loader and should be listed
under the
["Webpages"](https://python.langchain.com/docs/integrations/document_loaders/#webpages)
table on the Document loaders page.

Twitter: https://x.com/WilliamEspegren

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 21:23:03 +00:00
Anubhav Dhawan
1e38fd2ce3 docs: add integration guide for MCP Toolbox (#32344)
This PR introduces a new integration guide for MCP Toolbox. The primary
goal of this new documentation is to enhance the discoverability of MCP
Toolbox for developers working within the LangChain ecosystem, providing
them with a clear and direct path to using our tools.

This approach was chosen to provide users with a practical, hands-on
example that they can easily follow.

> [!NOTE]
> The page added in this PR is linked to from a section in Google
partners page added in #32356.

---------

Co-authored-by: Lauren Hirata Singh <lauren@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 21:03:38 +00:00
Yasien Dwieb
155e3740bc fix(docs): handle collection not found error on RAG tutorial when qdrant is selected as vectorStore (#32099)
In the [RAG Part 1
tutorial](https://python.langchain.com/docs/tutorials/rag/), when the Qdrant
vector store is selected, the sample code does not work.
It fails with the error `ValueError: Collection test not found`.

So, this fix creates that collection and ensures its dimension size
matches the embedding size of the selected embedding model.
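
A sketch of the added setup (the collection name `test` follows the
tutorial; the 1536 dimension assumes an OpenAI embedding model):

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

client = QdrantClient(":memory:")
# Create the collection up front with a dimension matching the embeddings:
if not client.collection_exists("test"):
    client.create_collection(
        collection_name="test",
        vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
    )
```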

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-11 20:31:24 +00:00
Deepesh Dhakal
f9b4e501a8 fix(docs): update llamacpp.ipynb for installation options on Mac (#32341)
The previous code generated a "data invalid" error.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-11 20:25:35 +00:00
prem-sagar123
5a50802c9a docs: update prompt_templates.mdx (#32405)
```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate

# Imports and prompt_template added so the snippet runs; the placeholder
# prompt mirrors the docs page this PR updates.
prompt_template = ChatPromptTemplate.from_messages([("placeholder", "{msgs}")])

messages_to_pass = [
    HumanMessage(content="What's the capital of France?"),
    AIMessage(content="The capital of France is Paris."),
    HumanMessage(content="And what about Germany?"),
]
formatted_prompt = prompt_template.invoke({"msgs": messages_to_pass})
print(formatted_prompt)
```

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-11 20:16:30 +00:00
Mohammad Mohtashim
9a7e66be60 docs: put standard-tests before other packages (#32424)
- **Description:** Moving `standard-tests` to main ordered section
- **Issue:** #32395

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 20:05:24 +00:00
Mason Daugherty
5597b277c5 feat(docs): add subsection on Tool Artifacts vs. Injected State (#32468)
Clarify the differences between tool artifacts and injected state in
LangChain and LangGraph
2025-08-11 19:53:33 +00:00
Soham Sharma
a1da5697c6 docs: clarify how to get LangSmith API key (#32402)
**Description:**
I've added a small clarification to the chatbot tutorial. The tutorial
mentions setting the `LANGSMITH_API_KEY`, but doesn't explain how a new
user can get the key from the website. This change adds a brief note to
guide them to the Settings page.

P.S. This is my first pull request, so I'm excited to learn and
contribute!

**Issue:**
N/A

**Dependencies:**
N/A

**Twitter handle:**
@sohamactive

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 19:52:05 +00:00
Divyanshu Gupta
11a54b1f1a docs: clarify SystemMessage usage in LangGraph agent notebook (#32320) (#32346)
Closes #32320

This PR updates the `langgraph_agentic_rag.ipynb` notebook to clarify
that LangGraph does not automatically prepend a `SystemMessage`. A
markdown note and an inline Python comment have been added to guide
users to explicitly include a `SystemMessage` when needed.

This improves documentation for developers working with LangGraph-based
agents and avoids confusion about system-level behavior not being
applied.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 19:49:42 +00:00
Mason Daugherty
5ccdcd7b7b feat(ollama): docs updates (#32507) 2025-08-11 15:39:44 -04:00
Mason Daugherty
ee4c2510eb feat: port various nit changes from wip-v0.4 (#32506)
Lots of work that wasn't directly related to core
improvements/messages/testing functionality
2025-08-11 15:09:08 -04:00
mishraravibhushan
7db9e60601 docs(docs): fix grammar, capitalization, and style issues across documentation (#32503)
**Changes made:**
- Fix 'Async programming with langchain' → 'Async programming with
LangChain'
- Fix 'Langchain asynchronous APIs' → 'LangChain asynchronous APIs'
- Fix 'How to: init any model' → 'How to: initialize any model'
- Fix 'async programming with Langchain' → 'async programming with
LangChain'
- Fix 'How to propagate callbacks constructor' → 'How to propagate
callbacks to the constructor'
- Fix 'How to add a semantic layer over graph database' → 'How to add a
semantic layer over a graph database'
- Fix 'Build a Question/Answering system' → 'Build a Question-Answering
system'

**Why is this change needed?**
- Improves documentation clarity and readability
- Maintains consistent LangChain branding throughout the docs
- Fixes grammar issues that could confuse users
- Follows proper documentation standards

**Files changed:**
- `docs/docs/concepts/async.mdx`
- `docs/docs/concepts/tools.mdx`
- `docs/docs/how_to/index.mdx`
- `docs/docs/how_to/callbacks_constructor.ipynb`
- `docs/docs/how_to/graph_semantic.ipynb`
- `docs/docs/tutorials/sql_qa.ipynb`

**Issue:** N/A (documentation improvements)

**Dependencies:** None

**Twitter handle:** https://x.com/mishraravibhush

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 13:32:28 -04:00
Mason Daugherty
e5d0a4e4d6 feat(standard-tests): formatting (#32504)
Not touching `pyproject.toml` or chat model related items as to not
interfere with work in wip0.4 branch
2025-08-11 13:30:30 -04:00
Mason Daugherty
457ce9c4b0 feat(text-splitters): ruff fixes and rules (#32502) 2025-08-11 13:28:22 -04:00
Mason Daugherty
27b6b53f20 feat(xai): ruff fixes and rules (#32501) 2025-08-11 13:03:07 -04:00
Christophe Bornet
f55186b38f fix(core): fix beta decorator for properties (#32497) 2025-08-11 12:43:53 -04:00
Mason Daugherty
374f414c91 feat(qdrant): ruff fixes and rules (#32500) 2025-08-11 12:43:41 -04:00
dependabot[bot]
9b3f3dc8d9 chore: bump actions/download-artifact from 4 to 5 (#32495)
Bumps
[actions/download-artifact](https://github.com/actions/download-artifact)
from 4 to 5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/download-artifact/releases">actions/download-artifact's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/download-artifact/pull/407">actions/download-artifact#407</a></li>
<li>BREAKING fix: inconsistent path behavior for single artifact
downloads by ID by <a
href="https://github.com/GrantBirki"><code>@​GrantBirki</code></a> in <a
href="https://redirect.github.com/actions/download-artifact/pull/416">actions/download-artifact#416</a></li>
</ul>
<h2>v5.0.0</h2>
<h3>🚨 Breaking Change</h3>
<p>This release fixes an inconsistency in path behavior for single
artifact downloads by ID. <strong>If you're downloading single artifacts
by ID, the output path may change.</strong></p>
<h4>What Changed</h4>
<p>Previously, <strong>single artifact downloads</strong> behaved
differently depending on how you specified the artifact:</p>
<ul>
<li><strong>By name</strong>: <code>name: my-artifact</code> → extracted
to <code>path/</code> (direct)</li>
<li><strong>By ID</strong>: <code>artifact-ids: 12345</code> → extracted
to <code>path/my-artifact/</code> (nested)</li>
</ul>
<p>Now both methods are consistent:</p>
<ul>
<li><strong>By name</strong>: <code>name: my-artifact</code> → extracted
to <code>path/</code> (unchanged)</li>
<li><strong>By ID</strong>: <code>artifact-ids: 12345</code> → extracted
to <code>path/</code> (fixed - now direct)</li>
</ul>
<h4>Migration Guide</h4>
<h5> No Action Needed If:</h5>
<ul>
<li>You download artifacts by <strong>name</strong></li>
<li>You download <strong>multiple</strong> artifacts by ID</li>
<li>You already use <code>merge-multiple: true</code> as a
workaround</li>
</ul>
<h5>⚠️ Action Required If:</h5>
<p>You download <strong>single artifacts by ID</strong> and your
workflows expect the nested directory structure.</p>
<p><strong>Before v5 (nested structure):</strong></p>
<pre lang="yaml"><code>- uses: actions/download-artifact@v4
  with:
    artifact-ids: 12345
    path: dist
# Files were in: dist/my-artifact/
</code></pre>
<blockquote>
<p>Where <code>my-artifact</code> is the name of the artifact you
previously uploaded</p>
</blockquote>
<p><strong>To maintain old behavior (if needed):</strong></p>
<pre lang="yaml"><code>&lt;/tr&gt;&lt;/table&gt; 
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="634f93cb29"><code>634f93c</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/416">#416</a>
from actions/single-artifact-id-download-path</li>
<li><a
href="b19ff43027"><code>b19ff43</code></a>
refactor: resolve download path correctly in artifact download tests
(mainly ...</li>
<li><a
href="e262cbee4a"><code>e262cbe</code></a>
bundle dist</li>
<li><a
href="bff23f9308"><code>bff23f9</code></a>
update docs</li>
<li><a
href="fff8c148a8"><code>fff8c14</code></a>
fix download path logic when downloading a single artifact by id</li>
<li><a
href="448e3f862a"><code>448e3f8</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/407">#407</a>
from actions/nebuk89-patch-1</li>
<li><a
href="47225c44b3"><code>47225c4</code></a>
Update README.md</li>
<li>See full diff in <a
href="https://github.com/actions/download-artifact/compare/v4...v5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/download-artifact&package-manager=github_actions&previous-version=4&new-version=5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-11 12:41:58 -04:00
lineuman
afc3b1824c docs(deepseek): Add DeepSeek model option (#32481) 2025-08-11 09:20:39 -04:00
ran8080
130b7e6170 docs(docs): add missing name to AIMessage in example (#32482)
**Description:**

In the `docs/docs/how_to/structured_output.ipynb` notebook, an
`AIMessage` within the tool-calling few-shot example was missing the
`name="example_assistant"` parameter. This was inconsistent with the
other `AIMessage` instances in the same list.

This change adds the missing `name` parameter to ensure all examples in
the section are consistent, improving the clarity and correctness of the
documentation.

**Issue:** N/A

**Dependencies:** N/A
2025-08-11 09:20:09 -04:00
Navanit Dubey
d40fa534c1 docs(docs): use model_json_schema() (#32485)
While trying the line `People.schema()`, I got a warning:
```The `schema` method is deprecated; use `model_json_schema` instead```

So I made the change, and the example now works.

2025-08-11 09:19:14 -04:00
mishraravibhushan
20bd296421 docs(docs): fix grammar in "How to deal with high-cardinality categoricals" guide title (#32488)
Description:
Corrected the guide title from "How deal with high cardinality
categoricals" to "How to deal with high-cardinality categoricals".
- Added missing "to" for grammatical correctness.
- Hyphenated "high-cardinality" for standard compound adjective usage.

Issue:
N/A

Dependencies:
None

Twitter handle:
https://x.com/mishraravibhush
2025-08-11 09:17:51 -04:00
ccurme
9259eea846 fix(docs): use pepy for integration package download badges (#32491)
pypi stats has been down for some time.
2025-08-10 18:41:36 -04:00
ccurme
afcb097ef5 fix(docs): DigitalOcean Gradient: link to correct provider page and update page title (#32490) 2025-08-10 17:29:44 -04:00
ccurme
088095b663 release(openai): release 0.3.29 (#32463) 2025-08-08 11:04:33 -04:00
Mason Daugherty
c31236264e chore: formatting across codebase (#32466) 2025-08-08 10:20:10 -04:00
ccurme
02001212b0 fix(openai): revert some changes (#32462)
Keep coverage on `output_version="v0"` (increasing coverage is being
managed in v0.4 branch).
2025-08-08 08:51:18 -04:00
Mason Daugherty
00244122bd feat(openai): minimal and verbosity (#32455) 2025-08-08 02:24:21 +00:00
ccurme
6727d6e8c8 release(core): 0.3.74 (#32454) 2025-08-07 16:39:01 -04:00
Michael Matloka
5036bd7adb fix(openai): don't crash get_num_tokens_from_messages on gpt-5 (#32451) 2025-08-07 16:33:19 -04:00
ccurme
ec2b34a02d feat(openai): custom tools (#32449) 2025-08-07 16:30:01 -04:00
Mason Daugherty
145d38f7dd test(openai): add tests for prompt_cache_key parameter and update docs (#32363)
Introduce tests to validate the behavior and inclusion of the
`prompt_cache_key` parameter in request payloads for the `ChatOpenAI`
model.
2025-08-07 15:29:47 -04:00
ccurme
68c70da33e fix(openai): add in output_text (#32450)
This property was deleted in `openai==1.99.2`.
2025-08-07 15:23:56 -04:00
Eugene Yurtsev
754528d23f feat(langchain): add stuff and map reduce chains (#32333)
* Add stuff and map reduce chains
* We'll need to rename and add unit tests to the chains prior to
official release
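
For orientation, a minimal sketch of the two patterns (function names and prompts here are illustrative, not the shipped API):

```python
from langchain_core.documents import Document
from langchain_core.language_models.chat_models import BaseChatModel


def stuff_chain(llm: BaseChatModel, docs: list[Document], question: str) -> str:
    # "Stuff": concatenate every document into a single prompt, one LLM call.
    context = "\n\n".join(d.page_content for d in docs)
    return llm.invoke(f"Context:\n{context}\n\nQuestion: {question}").content


def map_reduce_chain(llm: BaseChatModel, docs: list[Document], question: str) -> str:
    # "Map": summarize each document independently...
    notes = [
        llm.invoke(f"Summarize w.r.t. '{question}':\n{d.page_content}").content
        for d in docs
    ]
    # ..."Reduce": combine the partial summaries into one answer.
    return llm.invoke(f"Question: {question}\n\nNotes:\n" + "\n".join(notes)).content
```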
2025-08-07 15:20:05 -04:00
CLOVA Studio 개발
ac706c77d4 docs(docs): update v0.1.1 chatModel document on langchain-naver. (#32445)
## **Description:** 
This PR was requested after the `langchain-naver` partner-managed
packages were released
[v0.1.1](https://pypi.org/project/langchain-naver/0.1.1/).
So we've updated some of our documents to reflect the changed features.

## **Dependencies:** 
https://github.com/langchain-ai/langchain/pull/30956

---------

Co-authored-by: 김필환[AI Studio Dev1] <pilhwan.kim@navercorp.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-07 15:45:50 +00:00
Tianyu Chen
8493887b6f docs: update Docker image name for jaguardb setup (#32438)
**Description**
Updated the quick setup instructions for JaguarDB in the documentation.
Replaced the outdated Docker image `jaguardb/jaguardb_with_http` with
the current recommended image `jaguardb/jaguardb` for pulling and
running the server.
2025-08-07 11:23:29 -04:00
Christophe Bornet
a647073b26 feat(standard-tests): add a property to set the name of the parameter for the number of results to return (#32443)
Not all retrievers use `k` as the parameter name for the number of results to return, even within LangChain itself. E.g.:
bc4251b9e0/libs/core/langchain_core/indexing/in_memory.py (L31)

So it's helpful to be able to change it for a given retriever.
The change also adds hints to disable the tests if the retriever doesn't support setting the param in the constructor or in the invoke method (for instance, the `InMemoryDocumentIndex` linked above supports it in the constructor but not in the invoke method).

This change is backward compatible.
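
A rough sketch of the idea, with all names hypothetical (the actual property name added by this PR may differ):

```python
class RetrieverTestSuite:
    """Toy stand-in for the standard test suite; names are hypothetical."""

    @property
    def num_results_param(self) -> str:
        return "k"  # default used by most retrievers

    def test_returns_requested_count(self, retriever) -> None:
        docs = retriever.invoke("query", **{self.num_results_param: 3})
        assert len(docs) == 3


class MyRetrieverTests(RetrieverTestSuite):
    @property
    def num_results_param(self) -> str:
        return "top_k"  # this retriever doesn't call the parameter `k`
```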

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-07 11:22:24 -04:00
ccurme
e120604774 fix(infra): exclude pre-releases from previous version testing (#32447) 2025-08-07 10:18:59 -04:00
ccurme
06d8754b0b release(core): 0.3.73 (#32446) 2025-08-07 09:03:53 -04:00
ccurme
6e108c1cb4 feat(core): zero-out token costs for cache hits (#32437) 2025-08-07 08:49:34 -04:00
John Bledsoe
bc4251b9e0 fix(core): fix index checking when merging lists (#32431)
**Description:** fix an issue I discovered when attempting to merge
messages in which one message has an `index` key in its content
dictionary and another does not.
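
Illustrative inputs for the failing case (`merge_lists` is the internal helper in `langchain_core/utils/_merge.py`):

```python
from langchain_core.utils._merge import merge_lists

# One content block carries an "index" key, the other does not; the merge
# logic must not assume both sides have an "index" to compare.
left = [{"type": "text", "text": "Hello", "index": 0}]
right = [{"type": "text", "text": " world"}]
merged = merge_lists(left, right)
```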
2025-08-06 12:47:33 -04:00
Nelson Sproul
2543007436 docs(langchain): complete PDF embedding example for OpenAI, also some minor doc fixes (#32426)
For OpenAI PDF attaching, note the needed metadata.

Also some minor doc updates.
2025-08-06 12:16:16 -04:00
Mason Daugherty
ba83f58141 release(groq): 0.3.7 (#32417) 2025-08-05 15:13:08 -04:00
Mason Daugherty
fb490b0c39 feat(groq): loosen restrictions on reasoning_effort, inject effort in meta, update tests (#32415) 2025-08-05 15:03:38 -04:00
Mason Daugherty
419c173225 feat(groq): openai-oss (#32411)
use new openai-oss for integration tests, set module-level testing model
names and improve robustness of tool tests
2025-08-05 14:18:56 -04:00
Pranav Bhartiya
4011257c25 docs: add Windows-specific setup instructions (#32399)
**Description:** This PR improves the contribution setup guide by adding
comprehensive Windows-specific instructions. The changes address a
common pain point for Windows contributors who don't have `make`
installed by default, making the LangChain contribution process more
accessible across different operating systems.
The main improvements include:

- Added a dedicated "Windows Users" section with multiple installation
options for `make` (Chocolatey, Scoop, WSL)
- Provided direct `uv` commands as alternatives to all `make` commands
throughout the setup guide
- Included Windows-specific instructions for testing, formatting,
linting, and spellchecking
- Enhanced the documentation to be more inclusive for Windows developers

This change makes it easier for Windows users to contribute to LangChain
without requiring additional tool installation, while maintaining the
existing workflow for users who already have `make` available.

**Issue:** This addresses the common barrier Windows users face when
trying to contribute to LangChain due to missing `make` commands.

**Dependencies:** None required - this is purely a documentation
improvement.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-05 15:00:03 +00:00
Kanav Bansal
9de0892a77 fix(docs): update package names across multiple integration docs (#32393)
## **Description:** 
Updated incorrect package names across multiple integration docs by
replacing underscores with hyphens to reflect their actual names on
PyPI. This aligns with the actual PyPI package names and prevents
potential confusion or installation issues.
## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-04 17:38:29 +00:00
Narasimha Badrinath
dd9f5d7cde feat(docs): add langchain-gradientai as provider (#32202)
langchain-gradientai is Digitalocean's integration with Langchain. It
will help users to build langchain applications using Digitalocean's
GradientAI platform.

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-04 14:57:59 +00:00
Ammar Younas
d348cfe968 docs: fix minor typos in image generation description (#32375)
Description:
Fixed minor typos in the `google_imagen.ipynb` integration notebook
related to image generation prompt formatting. No functional changes
were made — just a documentation correction to improve clarity.
2025-08-04 10:52:05 -04:00
Kanav Bansal
84c5048cb8 fix(docs): correct package names in FeatureTables.js (#32377)
## **Description:** 
Updated incorrect package names in `FeatureTables.js` by replacing
underscores with hyphens to reflect their actual names on PyPI. This
aligns with the actual PyPI package names and prevents potential
confusion or installation issues.

The following package names were corrected:
- `langchain_aws` ➝ `langchain-aws`
- `langchain_community` ➝ `langchain-community`
- `langchain_elasticsearch` ➝ `langchain-elasticsearch`
- `langchain_google_community` ➝ `langchain-google-community`

 
## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
2025-08-04 10:51:32 -04:00
garciasces
d318c655b6 fix(docs): inconsistent docs for Google Vertex AI (#32381)
Description: Documentation is inconsistent with API docs.

Current documentation implies that to use the integration you must have
credentials configured AND store the path to a service account JSON
file.

API docs explain that you only need to complete EITHER of the steps
regarding credentials.

I have updated the docs to make them consistent with the API wording.
2025-08-04 10:50:50 -04:00
Kanav Bansal
df4eed0cea fix(docs): update package names, class links and package links across kv_store_feat_table.py (#32353)
## **Description:** 
Refactored multiple entries in `kv_store_feat_table.py` to ensure that
all vector store metadata is accurate, consistent, and aligned with
LangChain's latest documentation structure and PyPI naming standards.

**Key improvements across all updated entries:**
- Updated `class` links to point to their respective **docs-based
integration pages** (e.g., `/docs/integrations/stores/...`) instead of
raw API reference URLs.
- Corrected `package` display names to use **hyphenated PyPI-compliant
names** (e.g., `langchain-astradb` instead of `langchain_astradb`).
- Updated `package` links to point to the **specific class-level API
references** (e.g., `/api_reference/.../storage/...ClassName.html`) for
precision.

These improvements enhance:
- Navigation experience for users
- Alignment with PyPI and docs naming conventions
- Clarity across LangChain’s integrations documentation


 
## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
2025-08-04 09:46:54 -04:00
Dhanesh Gujrathi
a25e196fe9 docs(docs): add link for ALPHAVANTAGE_API_KEY generation in integration notebook (#32364)

**Description:**

This PR updates the `docs/docs/integrations/tools/alpha_vantage.ipynb`
integration notebook to help users locate the API key registration page
for Alpha Vantage. The following markdown line was added:

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-03 19:49:44 +00:00
Ethan Knights
3137d49bd9 docs: minor agent tools markdown improvement (#32367)
Minor sharpening of agent tool doc.
2025-08-03 15:42:19 -04:00
Raphaël
9a2f49df1f fix(docs): add missing space (#32349) 2025-07-31 09:28:51 -04:00
Mason Daugherty
32e5040a42 chore: add CLAUDE.md (#32334) 2025-07-30 23:04:45 +00:00
ccurme
a9e52ca605 chore(openai): bump openai sdk (#32322) 2025-07-30 10:58:18 -04:00
Kanav Bansal
e2bc8f19c0 docs(docs): update RAG tutorials link across multiple vector store docs (AstraDB, DatabricksVectorSearch, FAISS, Redis, etc.) (#32301)
## **Description:** 
This PR updates the internal documentation link for the RAG tutorials to
reflect the updated path. Previously, the link pointed to the root
`/docs/tutorials/`, which was generic. It now correctly routes to the
RAG-specific tutorial page for the following vector store docs.

1. AstraDBVectorStore
2. Clickhouse
3. CouchbaseSearchVectorStore
4. DatabricksVectorSearch
5. ElasticsearchStore
6. FAISS
7. Milvus
8. MongoDBAtlasVectorSearch
9. openGauss
10. PGVector
11. PGVectorStore
12. PineconeVectorStore
13. QdrantVectorStore
14. Redis
15. SQLServer

## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
2025-07-30 09:46:01 -04:00
Mason Daugherty
fbd5a238d8 fix(core): revert "fix: tool call streaming bug with inconsistent indices from Qwen3" (#32307)
Reverts langchain-ai/langchain#32160

Original issue stems from using `ChatOpenAI` to interact with a `qwen`
model. Recommended to use
[langchain-qwq](https://python.langchain.com/docs/integrations/chat/qwq/)
which is built for Qwen
2025-07-29 10:26:38 -04:00
HerrDings
fc2f66ca80 docs: fixed link to docs of unstructured (#32306)
In the section [How to load documents from a
directory](https://python.langchain.com/docs/how_to/document_loader_directory/)
there is a link to the docs of *unstructured*. When you click this link,
it tells you that it has moved. Accordingly this PR fixes this link in
LangChain docs directly

from: `https://unstructured-io.github.io/unstructured/#`
to: `https://docs.unstructured.io/`
2025-07-29 10:12:22 -04:00
Mason Daugherty
0e287763cd fix: lint 2025-07-28 18:49:43 -04:00
Copilot
0b56c1bc4b fix: tool call streaming bug with inconsistent indices from Qwen3 (#32160)
Fixes a streaming bug where models like Qwen3 (using OpenAI interface)
send tool call chunks with inconsistent indices, resulting in
duplicate/erroneous tool calls instead of a single merged tool call.

## Problem

When Qwen3 streams tool calls, it sends chunks with inconsistent `index`
values:
- First chunk: `index=1` with tool name and partial arguments  
- Subsequent chunks: `index=0` with `name=None`, `id=None` and argument
continuation

The existing `merge_lists` function only merges chunks when their
`index` values match exactly, causing these logically related chunks to
remain separate, resulting in multiple incomplete tool calls instead of
one complete tool call.

```python
# Before fix: Results in 1 valid + 1 invalid tool call
chunk1 = AIMessageChunk(tool_call_chunks=[
    {"name": "search", "args": '{"query":', "id": "call_123", "index": 1}
])
chunk2 = AIMessageChunk(tool_call_chunks=[
    {"name": None, "args": ' "test"}', "id": None, "index": 0}  
])
merged = chunk1 + chunk2  # Creates 2 separate tool calls

# After fix: Results in 1 complete tool call
merged = chunk1 + chunk2  # Creates 1 merged tool call: search({"query": "test"})
```

## Solution

Enhanced the `merge_lists` function in `langchain_core/utils/_merge.py`
with intelligent tool call chunk merging:

1. **Preserves existing behavior**: Same-index chunks still merge as
before
2. **Adds special handling**: Tool call chunks with
`name=None`/`id=None` that don't match any existing index are now merged
with the most recent complete tool call chunk
3. **Maintains backward compatibility**: All existing functionality
works unchanged
4. **Targeted fix**: Only affects tool call chunks, doesn't change
behavior for other list items

The fix specifically handles the pattern where:
- A continuation chunk has `name=None` and `id=None` (indicating it's
part of an ongoing tool call)
- No matching index is found in existing chunks
- There exists a recent tool call chunk with a valid name or ID to merge
with

## Testing

Added comprehensive test coverage including:
-  Qwen3-style chunks with different indices now merge correctly
-  Existing same-index behavior preserved  
-  Multiple distinct tool calls remain separate
-  Edge cases handled (empty chunks, orphaned continuations)
-  Backward compatibility maintained

Fixes #31511.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-28 22:31:41 +00:00
Copilot
ad88e5aaec fix(core): resolve cache validation error by safely converting Generation to ChatGeneration objects (#32156)
## Problem

ChatLiteLLM encounters a `ValidationError` when using cache on
subsequent calls, causing the following error:

```
ValidationError(model='ChatResult', errors=[{'loc': ('generations', 0, 'type'), 'msg': "unexpected value; permitted: 'ChatGeneration'", 'type': 'value_error.const', 'ctx': {'given': 'Generation', 'permitted': ('ChatGeneration',)}}])
```

This occurs because:
1. The cache stores `Generation` objects (with `type="Generation"`)
2. But `ChatResult` expects `ChatGeneration` objects (with
`type="ChatGeneration"` and a required `message` field)
3. When cached values are retrieved, validation fails due to the type
mismatch

## Solution

Added graceful handling in both sync (`_generate_with_cache`) and async
(`_agenerate_with_cache`) cache methods to:

1. **Detect** when cached values contain `Generation` objects instead of
expected `ChatGeneration` objects
2. **Convert** them to `ChatGeneration` objects by wrapping the text
content in an `AIMessage`
3. **Preserve** all original metadata (`generation_info`)
4. **Allow** `ChatResult` creation to succeed without validation errors
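
A minimal sketch of that conversion (the helper name is hypothetical):

```python
from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration, Generation


def _as_chat_generations(cached: list[Generation]) -> list[ChatGeneration]:
    # Wrap any plain Generation in a ChatGeneration, preserving metadata.
    converted = []
    for gen in cached:
        if not isinstance(gen, ChatGeneration):
            gen = ChatGeneration(
                message=AIMessage(content=gen.text),
                generation_info=gen.generation_info,
            )
        converted.append(gen)
    return converted
```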

## Example

```python
# Before: This would fail with ValidationError
from langchain_community.chat_models import ChatLiteLLM
from langchain_community.cache import SQLiteCache
from langchain.globals import set_llm_cache

set_llm_cache(SQLiteCache(database_path="cache.db"))
llm = ChatLiteLLM(model_name="openai/gpt-4o", cache=True, temperature=0)

print(llm.predict("test"))  # Works fine (cache empty)
print(llm.predict("test"))  # Now works instead of ValidationError

# After: Seamlessly handles both Generation and ChatGeneration objects
```

## Changes

- **`libs/core/langchain_core/language_models/chat_models.py`**: 
  - Added `Generation` import from `langchain_core.outputs`
- Enhanced cache retrieval logic in `_generate_with_cache` and
`_agenerate_with_cache` methods
- Added conversion from `Generation` to `ChatGeneration` objects when
needed

-
**`libs/core/tests/unit_tests/language_models/chat_models/test_cache.py`**:
- Added test case to validate the conversion logic handles mixed object
types

## Impact

- **Backward Compatible**: Existing code continues to work unchanged
- **Minimal Change**: Only affects cache retrieval path, no API changes
- **Robust**: Handles both legacy cached `Generation` objects and new
`ChatGeneration` objects
- **Preserves Data**: All original content and metadata is maintained
during conversion

Fixes #22389.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-28 22:28:16 +00:00
Mason Daugherty
30e3ed6a19 fix: add space in run-name for better readability 2025-07-28 17:46:27 -04:00
Mason Daugherty
8641a95c43 fix: update run-name in scheduled_test.yml to include dynamic inputs 2025-07-28 17:45:05 -04:00
Mason Daugherty
df70c5c186 chore: update actions run-names and add default inputs (#32293) 2025-07-28 17:33:27 -04:00
Mason Daugherty
d5ca77e065 fix: remove erroneous rocket emoji in run-name 2025-07-28 17:11:14 -04:00
Mason Daugherty
b7e4797e8b release(anthropic): 0.3.18 (#32292) 2025-07-28 17:07:11 -04:00
Mason Daugherty
3a487bf720 refactor(anthropic): AnthropicLLM to use Messages API (#32290)
re: #32189
2025-07-28 16:22:58 -04:00
Mason Daugherty
e5fd67024c fix: update link text for reporting security vulnerabilities in SECURITY.md 2025-07-28 15:05:31 -04:00
Mason Daugherty
b86841ac40 fix: update alt attribute for GitHub Codespace badge in README 2025-07-28 15:04:57 -04:00
Mason Daugherty
8db16b5633 fix: use new Google model names in examples (#32288) 2025-07-28 19:03:42 +00:00
Mason Daugherty
6f10160a45 fix: scripts/ errors 2025-07-28 15:03:25 -04:00
Mason Daugherty
e79e0bd6b4 fix(openai): add max_retries parameter to ChatOpenAI for handling 503 capacity errors (#32286)
Some integration tests were failing
2025-07-28 13:58:23 -04:00
ccurme
c55294ecb0 chore(core): add test for nested pydantic fields in schemas (#32285) 2025-07-28 17:27:24 +00:00
Mason Daugherty
7a26c3d233 fix: update bar_model to use the correct model version claude-3-7-sonnet-20250219 (#32284) 2025-07-28 12:57:40 -04:00
Mason Daugherty
c6ffac3ce0 refactor: mdx lint (#32282) 2025-07-28 12:56:22 -04:00
Mason Daugherty
a07d2c5016 refactor: remove references to unsupported model claude-3-sonnet-20240229 (#32281)
Addresses some (but not all) test issues brought about in #32280
2025-07-28 11:57:43 -04:00
Aleksandr Filippov
f0b6baa0ef fix(core): track within-batch deduplication in indexing num_skipped count (#32273)
**Description:** Fixes incorrect `num_skipped` count in the LangChain
indexing API. The current implementation only counts documents that
already exist in RecordManager (cross-batch duplicates) but fails to
count documents removed during within-batch deduplication via
`_deduplicate_in_order()`.

This PR adds tracking of the original batch size before deduplication
and includes the difference in `num_skipped`, ensuring that `num_added +
num_skipped` equals the total number of input documents.
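
The accounting change amounts to something like this sketch (names are illustrative; `_deduplicate_in_order` is the helper named above):

```python
def count_skipped(doc_batch: list, num_skipped: int) -> tuple[list, int]:
    # Track the batch size before within-batch deduplication and count the
    # removed duplicates as skipped, so that num_added + num_skipped always
    # equals the total number of input documents.
    original_batch_size = len(doc_batch)
    deduped = list(dict.fromkeys(doc_batch))  # stand-in for _deduplicate_in_order()
    num_skipped += original_batch_size - len(deduped)
    return deduped, num_skipped
```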

**Issue:** Fixes incorrect document count reporting in indexing
statistics

**Dependencies:** None

Fixes #32272

---------

Co-authored-by: Alex Feel <afilippov@spotware.com>
2025-07-28 09:58:51 -04:00
Mason Daugherty
12c0e9b7d8 fix(docs): local API reference documentation build (#32271)
ensure all relevant packages are correctly processed (the CLI wasn't included); also fix a ValueError
2025-07-28 00:50:20 -04:00
Mason Daugherty
ed682ae62d fix: explicitly tell uv to copy when using devcontainer (#32267) 2025-07-28 00:01:06 -04:00
Mason Daugherty
caf1919217 fix: devcontainer to use volume to store the workspace (#32266)
should resolve the file sharing issue for users on macOS.
2025-07-27 23:43:06 -04:00
Mason Daugherty
904066f1ec feat: add VSCode configuration files for Python development (#32263) 2025-07-27 23:37:59 -04:00
Mason Daugherty
96cbd90cba fix: formatting issues in docstrings (#32265)
Ensures proper reStructuredText formatting by adding the required blank
line before closing docstring quotes, which resolves the "Block quote
ends without a blank line; unexpected unindent" warning.
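
For reference, the pattern being enforced looks like this (a made-up docstring):

```python
def example() -> None:
    """Do something.

    Details rendered as a reST block quote:

        indented block

    """  # note the blank line before the closing quotes
```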
2025-07-27 23:37:47 -04:00
Mason Daugherty
a8a2cff129 Merge branch 'master' of github.com:langchain-ai/langchain 2025-07-27 23:34:59 -04:00
Mason Daugherty
f4ff4514ef fix: update workspace folder path in devcontainer configuration 2025-07-27 23:34:57 -04:00
Mason Daugherty
d1679cec91 chore: add .editorconfig for consistent coding styles across files (#32261)
Following existing codebase conventions
2025-07-27 23:25:30 -04:00
Mason Daugherty
5295f2add0 fix: update dev container name to match service name 2025-07-27 22:30:16 -04:00
Mason Daugherty
5f5b87e9a3 fix: update service name in devcontainer configuration 2025-07-27 22:28:47 -04:00
Mason Daugherty
e0ef98dac0 feat: add markdownlint configuration file (#32264) 2025-07-27 22:24:58 -04:00
Mason Daugherty
62212c7ee2 fix: update links in SECURITY.md to use markdown format 2025-07-27 21:54:25 -04:00
Mason Daugherty
9d38f170ce refactor: enhance workflow names and descriptions for clarity (#32262) 2025-07-27 21:31:59 -04:00
Mason Daugherty
c6cb1fae61 fix: devcontainer (#32260) 2025-07-27 20:24:16 -04:00
Kanav Bansal
e42b1d23dc docs(docs): update RAG tutorials link to point to correct path (#32256)
- **Description:** This PR updates the internal documentation link for
the RAG tutorials to reflect the updated path. Previously, the link
pointed to the root `/docs/tutorials/`, which was generic. It now
correctly routes to the RAG-specific tutorial page.
  - **Issue:** N/A
  - **Dependencies:** None
  - **Twitter handle:** N/A
2025-07-27 20:00:41 -04:00
Mason Daugherty
53d0bfe9cd refactor: markdownlint (#32259) 2025-07-27 20:00:16 -04:00
Mason Daugherty
eafab52483 refactor: markdownlint SECURITY.md (#32258) 2025-07-27 19:55:25 -04:00
Christophe Bornet
efdfa00d10 chore(langchain): add ruff rules ARG (#32110)
See https://docs.astral.sh/ruff/rules/#flake8-unused-arguments-arg

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-26 18:32:34 -04:00
Christophe Bornet
a2ad5aca41 chore(langchain): add ruff rules TC (#31921)
See https://docs.astral.sh/ruff/rules/#flake8-type-checking-tc
2025-07-26 18:27:26 -04:00
Mason Daugherty
5ecbb5f277 fix(docs): temporary workaround until the underlying dependency issues in the AI21 package ecosystem are resolved. (#32248) 2025-07-25 15:12:44 -04:00
Mason Daugherty
c1028171af fix(docs): update protobuf version constraint to <5.0 in vercel_overrides.txt (#32247) 2025-07-25 15:08:44 -04:00
ccurme
f6236d9f12 fix(infra): add pypdf to vercel overrides (#32242)
>   × No solution found when resolving dependencies:
  ╰─▶ Because only langchain-neo4j==0.5.0 is available and
langchain-neo4j==0.5.0 depends on neo4j-graphrag>=1.9.0, we can conclude
that all versions of langchain-neo4j depend on neo4j-graphrag>=1.9.0.
      And because only neo4j-graphrag<=1.9.0 is available and
neo4j-graphrag==1.9.0 depends on pypdf>=5.1.0,<6.0.0, we can conclude
that all versions of langchain-neo4j depend on pypdf>=5.1.0,<6.0.0.
And because langchain-upstage==0.6.0 depends on pypdf>=4.2.0,<5.0.0
and only langchain-upstage==0.6.0 is available, we can conclude that
all versions of langchain-neo4j and all versions of langchain-upstage
      are incompatible.
And because you require langchain-neo4j and langchain-upstage, we can
      conclude that your requirements are unsatisfiable.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-25 15:05:21 -04:00
Mason Daugherty
df20f111a8 fix(docs): add validation for repository format and name in API docs build workflow (#32246)
for build
2025-07-25 15:05:06 -04:00
Eugene Yurtsev
db22311094 ci(infra): no need for . in the regexp (#32245)
No need for allowing `.`
2025-07-25 15:02:02 -04:00
Mason Daugherty
f624ad489a feat(docs): improve devx, fix Makefile targets (#32237)
**TL;DR: many of the provided `Makefile` targets were broken, and any
time I wanted to preview changes locally I either had to refer to a
command Chester gave me or wait on a Vercel preview deployment.
With this PR, everything should behave like normal.**

Significant updates to the `Makefile` and documentation files, focusing
on improving usability, adding clear messaging, and fixing/enhancing
documentation workflows.

### Updates to `Makefile`:

#### Enhanced build and cleaning processes:
- Added informative messages (e.g., "📚 Building LangChain
documentation...") to makefile targets like `docs_build`, `docs_clean`,
and `api_docs_build` for better user feedback during execution.
- Introduced a `clean-cache` target to the `docs` `Makefile` to clear
cached dependencies and ensure clean builds.

#### Improved dependency handling:
- Modified `install-py-deps` to create a `.venv/deps_installed` marker,
preventing redundant/duplicate dependency installations and improving
efficiency.

#### Streamlined file generation and infrastructure setup:
- Added caching for the LangServe README download and parallelized
feature table generation
- Added user-friendly completion messages for targets like `copy-infra`
and `render`.

#### Documentation server updates:
- Enhanced the `start` target with messages indicating server start and
URL for local documentation viewing.

---

### Documentation Improvements:

#### Content clarity and consistency:
- Standardized section titles for consistency across documentation
files.
[[1]](diffhunk://#diff-9b1a85ea8a9dcf79f58246c88692cd7a36316665d7e05a69141cfdc50794c82aL1-R1)
[[2]](diffhunk://#diff-944008ad3a79d8a312183618401fcfa71da0e69c75803eff09b779fc8e03183dL1-R1)
- Refined phrasing and formatting in sections like "Dependency
management" and "Formatting and linting" for better readability.
[[1]](diffhunk://#diff-2069d4f956ab606ae6d51b191439283798adaf3a6648542c409d258131617059L6-R6)
[[2]](diffhunk://#diff-2069d4f956ab606ae6d51b191439283798adaf3a6648542c409d258131617059L84-R82)

#### Enhanced workflows:
- Updated instructions for building and viewing documentation locally,
including tips for specifying server ports and handling API reference
previews.
[[1]](diffhunk://#diff-048deddcfd44b242e5b23aed9f2e9ec73afc672244ce14df2a0a316d95840c87L60-R94)
[[2]](diffhunk://#diff-048deddcfd44b242e5b23aed9f2e9ec73afc672244ce14df2a0a316d95840c87L82-R126)
- Expanded guidance on cleaning documentation artifacts and using
linting tools effectively.
[[1]](diffhunk://#diff-048deddcfd44b242e5b23aed9f2e9ec73afc672244ce14df2a0a316d95840c87L82-R126)
[[2]](diffhunk://#diff-048deddcfd44b242e5b23aed9f2e9ec73afc672244ce14df2a0a316d95840c87L107-R142)

#### API reference documentation:
- Improved instructions for generating and formatting in-code
documentation, highlighting best practices for docstring writing.
[[1]](diffhunk://#diff-048deddcfd44b242e5b23aed9f2e9ec73afc672244ce14df2a0a316d95840c87L107-R142)
[[2]](diffhunk://#diff-048deddcfd44b242e5b23aed9f2e9ec73afc672244ce14df2a0a316d95840c87L144-R186)

---

### Minor Changes:
- Added support for a new package name (`langchain_v1`) in the API
documentation generation script.
- Fixed minor capitalization and formatting issues in documentation
files.
[[1]](diffhunk://#diff-2069d4f956ab606ae6d51b191439283798adaf3a6648542c409d258131617059L40-R40)
[[2]](diffhunk://#diff-2069d4f956ab606ae6d51b191439283798adaf3a6648542c409d258131617059L166-R160)

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-25 14:49:03 -04:00
Eugene Yurtsev
549ecd3e78 chore(infra): harden api docs build workflow (#32243)
Harden permissions for api docs build workflow
2025-07-25 14:40:20 -04:00
dishaprakash
a0671676ae feat(docs): add PGVectorStore (#30950)
- **Adding documentation for PGVectorStore**: adds documentation for the new PGVectorStore as part of langchain-postgres.

- **Add docs**: The notebook for PGVectorStore is now added to the
directory `docs/docs/integrations`.
As a part of this change, we've also updated the VectorStore features
table and VectorStoreTabs

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-07-25 13:22:58 -04:00
Christophe Bornet
12ae42c5e9 chore(langchain): add ruff rules D1 (except D100 and D104) (#32123) 2025-07-25 11:59:48 -04:00
Christophe Bornet
e1238b8085 chore(langchain): add ruff rules SLF (#32112)
See https://docs.astral.sh/ruff/rules/private-member-access/
2025-07-25 11:56:40 -04:00
Chaitanya varma
8f5ec20ccf chore(langchain): strip_ansi function to remove ANSI escape sequences (#32200)
**Description:** 
Fixes a bug in the file callback test where ANSI escape codes were
causing test failures. The improved test now properly handles ANSI
escape sequences by:
- Using exact string comparison instead of substring checking
- Applying the `strip_ansi` function consistently to all file contents
- Adding descriptive assertion messages
- Maintaining test coverage and backward compatibility

The changes ensure tests pass reliably even when terminal control
sequences are present in the output

**Issue:** Fixes #32150

**Dependencies:** None required - uses existing dependencies only.
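
A minimal sketch of such a helper, assuming a simple CSI-sequence regex (the PR's actual implementation may differ):

```python
import re

# Matches CSI escape sequences such as "\x1b[31m" (set color) and "\x1b[0m" (reset).
_ANSI_RE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")


def strip_ansi(text: str) -> str:
    return _ANSI_RE.sub("", text)


assert strip_ansi("\x1b[31mred\x1b[0m") == "red"
```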

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-07-25 15:53:19 +00:00
niceg
0d6f915442 fix: LLM mimicking Unicode responses due to forced Unicode conversion of non-ASCII characters. (#32222)

- **Description:** This PR fixes an issue where the LLM would mimic
Unicode responses due to forced Unicode conversion of non-ASCII
characters in tool calls. The fix involves disabling the `ensure_ascii`
flag in `json.dumps()` when converting tool calls to OpenAI format.
- **Issue:** Fixes ↓↓↓
input:
```json
{'role': 'assistant', 'tool_calls': [{'type': 'function', 'id': 'call_nv9trcehdpihr21zj9po19vq', 'function': {'name': 'create_customer', 'arguments': '{"customer_name": "你好啊集团"}'}}]}
```
output:
```json
{'role': 'assistant', 'tool_calls': [{'type': 'function', 'id': 'call_nv9trcehdpihr21zj9po19vq', 'function': {'name': 'create_customer', 'arguments': '{"customer_name": "\\u4f60\\u597d\\u554a\\u96c6\\u56e2"}'}}]}
```
then the LLM will mimic the escaped Unicode in its own output. The extra
escape sequences lengthen LLM responses, leading to slower performance.
<img width="686" height="277" alt="image"
src="https://github.com/user-attachments/assets/28f3b007-3964-4455-bee2-68f86ac1906d"
/>
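
The core of the fix is passing `ensure_ascii=False` to `json.dumps()`; for example:

```python
import json

args = {"customer_name": "你好啊集团"}
print(json.dumps(args))                      # {"customer_name": "\u4f60\u597d\u554a\u96c6\u56e2"}
print(json.dumps(args, ensure_ascii=False))  # {"customer_name": "你好啊集团"}
```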

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-24 17:01:31 -04:00
Mason Daugherty
d53ebf367e fix(docs): capitalization, codeblock formatting, and hyperlinks, note blocks (#32235)
widespread cleanup attempt
2025-07-24 16:55:04 -04:00
Copilot
54542b9385 docs(openai): add comprehensive documentation and examples for extra_body + others (#32149)
This PR addresses the common issue where users struggle to pass custom
parameters to OpenAI-compatible APIs like LM Studio, vLLM, and others.
The problem occurs when users try to use `model_kwargs` for custom
parameters, which causes API errors.

## Problem

Users attempting to pass custom parameters (like LM Studio's `ttl`
parameter) were getting errors:

```python
#  This approach fails
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    model="mlx-community/QwQ-32B-4bit",
    model_kwargs={"ttl": 5}  # Causes TypeError: unexpected keyword argument 'ttl'
)
```

## Solution

The `extra_body` parameter is the correct way to pass custom parameters
to OpenAI-compatible APIs:

```python
#  This approach works correctly
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    model="mlx-community/QwQ-32B-4bit",
    extra_body={"ttl": 5}  # Custom parameters go in extra_body
)
```

## Changes Made

1. **Enhanced Documentation**: Updated the `extra_body` parameter
docstring with comprehensive examples for LM Studio, vLLM, and other
providers

2. **Added Documentation Section**: Created a new "OpenAI-compatible
APIs" section in the main class docstring with practical examples

3. **Unit Tests**: Added tests to verify `extra_body` functionality
works correctly:
- `test_extra_body_parameter()`: Verifies custom parameters are included
in request payload
- `test_extra_body_with_model_kwargs()`: Ensures `extra_body` and
`model_kwargs` work together

4. **Clear Guidance**: Documented when to use `extra_body` vs
`model_kwargs`

## Examples Added

**LM Studio with TTL (auto-eviction):**
```python
ChatOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",
    model="mlx-community/QwQ-32B-4bit",
    extra_body={"ttl": 300}  # Auto-evict after 5 minutes
)
```

**vLLM with custom sampling:**
```python
ChatOpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
    model="meta-llama/Llama-2-7b-chat-hf",
    extra_body={
        "use_beam_search": True,
        "best_of": 4
    }
)
```

## Why This Works

- `model_kwargs` parameters are passed directly to the OpenAI client's
`create()` method, causing errors for non-standard parameters
- `extra_body` parameters are included in the HTTP request body, which
is exactly what OpenAI-compatible APIs expect for custom parameters

Fixes #32115.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-24 16:43:16 -04:00
Mason Daugherty
7d2a13f519 fix: various typos (#32231) 2025-07-24 12:35:08 -04:00
Christophe Bornet
0b34be4ce5 refactor(langchain): refactor unit test stub classes (#32209)
See
https://github.com/langchain-ai/langchain/pull/32098#discussion_r2225961563
2025-07-24 11:05:56 -04:00
Mason Daugherty
6f3169eb49 chore: update copilot development guidelines for clarity and structure (#32230) 2025-07-24 15:05:09 +00:00
Eugene Yurtsev
7995c719c5 chore(langchain_v1): clean anything uncertain (#32228)
Further clean up of namespace:

- Removed prompts (we'll re-add in a separate commit)
- Remove LocalFileStore until we can review whether all the
implementation details are necessary
- Remove message processing logic from memory (we'll figure out where to
expose it)
- Remove `Tool` primitive (should be sufficient to use `BaseTool` for
typing purposes)
- Remove utilities to create kv stores. Unclear if they've had much
usage outside MultiparentRetriever
2025-07-24 14:41:05 +00:00
Mason Daugherty
bdf1cd383c fix(langchain): update deps 2025-07-24 10:37:08 -04:00
Mason Daugherty
77c981999e fix(text-splitters): update langchain-core version to 0.3.72 2025-07-24 10:35:07 -04:00
Mason Daugherty
7f015b6f14 fix(text-splitters): update lock for release 2025-07-24 10:32:04 -04:00
Mason Daugherty
71ad451e1f Merge branch 'master' of github.com:langchain-ai/langchain 2025-07-24 10:24:17 -04:00
Mason Daugherty
2c42893703 fix(langchain): update langchain-core version to 0.3.72 2025-07-24 10:24:04 -04:00
Mason Daugherty
0e139fb9a6 release(langchain): 0.3.27 (#32227) 2025-07-24 10:20:20 -04:00
tanwirahmad
622bb05751 fix(langchain): class HTMLSemanticPreservingSplitter ignores the text inside the div tag (#32213)
**Description:** We now collect the text from the "html", "body", "div", and
"main" nodes, if they contain any direct text.

**Issue:** Fixes #32206.
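
An illustrative input (hypothetical markup) for the case this fixes:

```python
# Text that sits directly inside a <div>, not wrapped in <p> or a heading,
# was previously dropped; container nodes now contribute their own text.
html = "<html><body><div>Intro text in the div.<p>Paragraph text.</p></div></body></html>"
```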
2025-07-24 10:09:03 -04:00
Eugene Yurtsev
56dde3ade3 feat(langchain): v1 scaffolding (#32166)
This PR adds scaffolding for langchain 1.0 entry package.

Most contents have been removed. 

Currently remaining entrypoints for:

* chat models
* embedding models
* memory -> trimming messages, filtering messages and counting tokens
[we may remove this]
* prompts -> we may remove some prompts
* storage: primarily to support cache backed embeddings, may remove the
kv store
* tools -> report tool primitives

Things to be added:

* Selected agent implementations
* Selected workflows
* Common primitives: messages, Document
* Primitives for type hinting: BaseChatModel, BaseEmbeddings
* Selected retrievers
* Selected text splitters

Things to be removed:

* Globals needs to be removed (needs an update in langchain core)


Todos: 

* TBD indexing api (requires sqlalchemy which we don't want as a
dependency)
* Be explicit about public/private interfaces (e.g., likely rename
chat_models.base.py to something more internal)
* Remove dockerfiles
* Update module doc-strings and README.md
2025-07-24 09:47:48 -04:00
Mason Daugherty
bd3d6496f3 release(core): 0.3.72 (#32214)
fixes #32170
2025-07-23 20:33:48 -04:00
jmaillefaud
fb5da8384e fix(core): Dereference Refs for pydantic schema fails in tool schema generation (#32203)
The `_dereference_refs_helper` in `langchain_core.utils.json_schema`
incorrectly handled objects with a reference and other fields.

**Issue**: #32170

# Description

We change the check so that it accepts other keys in the object.
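
An example of the shape that previously broke dereferencing, a `$ref` carrying sibling keys:

```python
from langchain_core.utils.json_schema import dereference_refs

schema = {
    "type": "object",
    "$defs": {"Person": {"type": "object", "properties": {"name": {"type": "string"}}}},
    "properties": {
        # A $ref that also carries other fields; the old check mishandled this.
        "person": {"$ref": "#/$defs/Person", "description": "The person involved."},
    },
}
resolved = dereference_refs(schema)
```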
2025-07-23 20:28:27 -04:00
Maxime Grenu
a7d0e42f3f docs: fix typos in documentation (#32201)
## Summary
- Fixed redundant word "done" in SECURITY.md line 69  
- Fixed grammar errors in Fireworks README.md line 77: "how it fares
compares" → "how it compares" and "in terms just" → "in terms of"

## Test plan
- [x] Verified changes improve readability and correct grammar
- [x] No functional changes, documentation only

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <claude@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-07-23 10:43:25 -04:00
Christophe Bornet
3496e1739e feat(langchain): add ruff rules PL (#32079)
See https://docs.astral.sh/ruff/rules/#pylint-pl
2025-07-22 23:55:32 -04:00
Jacob Lee
0f39155f62 docs: Specify environment variables for BedrockConverse (#32194) 2025-07-22 17:37:47 -04:00
ccurme
6aeda24a07 docs(chroma): update feature table (#32193)
Supports multi-tenancy.
2025-07-22 20:55:07 +00:00
Mason Daugherty
3ed804a5f3 fix(perplexity): undo xfails (#32192) 2025-07-22 16:29:37 -04:00
Mason Daugherty
ca137bfe62 . 2025-07-22 16:25:02 -04:00
Mason Daugherty
fa487fb62d fix(perplexity): temp xfail int tests (#32191)
It appears the API has changed since the 2025-04-15 release, leading to
failed integration tests.
2025-07-22 16:20:51 -04:00
ccurme
053fb16a05 revert: drop anthropic from core test matrix (#32190)
Reverts langchain-ai/langchain#32185
2025-07-22 20:13:02 +00:00
ccurme
3672bbc71e fix(anthropic): update integration test models (#32189)
Multiple models were
[retired](https://docs.anthropic.com/en/docs/about-claude/model-deprecations#model-status)
yesterday.

Tests remain broken until we figure out what to do with the legacy
Anthropic LLM integration— currently uses their (legacy) text
completions API, for which there appear to be no remaining supported
models.
2025-07-22 19:51:39 +00:00
Mason Daugherty
a02ad3d192 docs: formatting cleanup (#32188)
* formatting cleanup
* make `init_chat_model` more prominent in list of guides
2025-07-22 15:46:15 -04:00
ccurme
0c4054a7fc release(core): 0.3.71 (#32186) 2025-07-22 15:44:36 -04:00
ccurme
75517c3ea9 chore(infra): drop anthropic from core test matrix (#32185) 2025-07-22 19:38:58 +00:00
ccurme
ebf2e11bcb fix(core): exclude api_key from tracing metadata (#32184)
(standard param)
2025-07-22 15:32:12 -04:00
ccurme
e41e6ec6aa release(chroma): 0.2.5 (#32183) 2025-07-22 15:24:03 -04:00
itaismith
09769373b3 feat(chroma): Add Chroma Cloud support (#32125)
* Adding support for more Chroma client options (`HttpClient` and
`CloundClient`). This includes adding arguments necessary for
instantiating these clients.
* Adding support for Chroma's new persisted collection configuration (we
moved index configuration into this new construct).
* Delegate `Settings` configuration to Chroma's client constructors.
2025-07-22 15:14:15 -04:00
ccurme
3fc27e7a95 docs: update feature table for Chroma (#32182) 2025-07-22 18:21:17 +00:00
ccurme
8acfd677bc fix(core): add type key when tracing in some cases (#31825) 2025-07-22 18:08:16 +00:00
Mason Daugherty
af3789b9ed fix(deepseek): release openai version (#32181)
used sdk version instead of langchain by accident
2025-07-22 13:29:52 -04:00
Mason Daugherty
a6896794ca release(ollama): 0.3.6 (#32180) 2025-07-22 13:24:17 -04:00
Copilot
d40fd5a3ce feat(ollama): warn on empty load responses (#32161)
## Problem

When using `ChatOllama` with `create_react_agent`, agents would
sometimes terminate prematurely with empty responses when Ollama
returned `done_reason: 'load'` responses with no content. This caused
agents to return empty `AIMessage` objects instead of actual generated
text.

```python
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent
from langchain_core.messages import HumanMessage

llm = ChatOllama(model='qwen2.5:7b', temperature=0)
agent = create_react_agent(model=llm, tools=[])

result = agent.invoke(HumanMessage('Hello'), {"configurable": {"thread_id": "1"}})
# Before fix: AIMessage(content='', response_metadata={'done_reason': 'load'})
# Expected: AIMessage with actual generated content
```

## Root Cause

The `_iterate_over_stream` and `_aiterate_over_stream` methods treated
any response with `done: True` as final, regardless of `done_reason`.
When Ollama returns `done_reason: 'load'` with empty content, it
indicates the model was loaded but no actual generation occurred - this
should not be considered a complete response.

## Solution

Modified the streaming logic to skip responses when:
- `done: True`
- `done_reason: 'load'` 
- Content is empty or contains only whitespace

This ensures agents only receive actual generated content while
preserving backward compatibility for load responses that do contain
content.
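
A sketch of the skip rule (function name hypothetical; fields follow Ollama's streaming chat response format):

```python
def _is_empty_load_response(part: dict) -> bool:
    # A final chunk whose done_reason is "load" and whose content is blank
    # only signals that the model was loaded; it carries no generated text.
    content = (part.get("message") or {}).get("content", "")
    return (
        part.get("done") is True
        and part.get("done_reason") == "load"
        and not content.strip()
    )
```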

## Changes

- **`_iterate_over_stream`**: Skip empty load responses instead of
yielding them
- **`_aiterate_over_stream`**: Apply same fix to async streaming
- **Tests**: Added comprehensive test cases covering all edge cases

## Testing

All scenarios now work correctly:
-  Empty load responses are skipped (fixes original issue)
-  Load responses with actual content are preserved (backward
compatibility)
-  Normal stop responses work unchanged
-  Streaming behavior preserved
-  `create_react_agent` integration fixed

Fixes #31482.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-22 13:21:11 -04:00
Mason Daugherty
116b758498 fix: bump deps for release (#32179)
forgot to bump the `pyproject.toml` files
2025-07-22 13:12:14 -04:00
Mason Daugherty
10996a2821 release(perplexity): 0.1.2 (#32176) 2025-07-22 13:02:19 -04:00
Mason Daugherty
2aed07efb6 release(deepseek): 0.1.4 (#32178) 2025-07-22 13:01:54 -04:00
Mason Daugherty
64dac1faf7 release(huggingface): 0.3.1 (#32177) 2025-07-22 13:01:34 -04:00
Mason Daugherty
58768d8aef release(xai): 0.2.5 (#32174) 2025-07-22 13:01:26 -04:00
Mason Daugherty
d65da13299 docs(ollama): add validate_model_on_init note, bump lock (#32172) 2025-07-22 10:58:45 -04:00
Kanav Bansal
c14bd1fcfe fix(docs): update RAG tutorials link to point to correct path (#32169)
## **Description:** 
This PR updates the internal documentation link for the RAG tutorials to
reflect the updated path. Previously, the link pointed to the root
`/docs/tutorials/`, which was generic. It now correctly routes to the
RAG-specific tutorial page for the following text-embedding models.

1. DatabricksEmbeddings
2. IBM watsonx.ai
3. OpenAIEmbeddings
4. NomicEmbeddings
5. CohereEmbeddings
6. MistralAIEmbeddings
7. FireworksEmbeddings
8. TogetherEmbeddings
9. LindormAIEmbeddings
10. ModelScopeEmbeddings
11. ClovaXEmbeddings
12. NetmindEmbeddings
13. SambaNovaCloudEmbeddings
14. SambaStudioEmbeddings
15. ZhipuAIEmbeddings

## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
2025-07-22 10:24:50 -04:00
Byeongjin Kang
a1ccabf85d docs: add documentation about how to use extended thinking with ChatBedrockConverse (#32168) 2025-07-22 08:44:08 -04:00
Copilot
2104cf0d9a fix: replace deprecated Pydantic .schema() calls with v1/v2 compatible pattern (#32162)
This PR addresses deprecation warnings users encounter when using
LangChain tools with Pydantic v2:

```
PydanticDeprecatedSince20: The `schema` method is deprecated; use `model_json_schema` instead. 
Deprecated in Pydantic V2.0 to be removed in V3.0.
```

## Root Cause

Several LangChain components were still using the deprecated `.schema()`
method directly instead of the Pydantic v1/v2 compatible approach. While
users calling `.schema()` on returned models will still see warnings
(which is correct), LangChain's internal code should not generate these
warnings.

## Changes Made

Updated 3 files to use the standard compatibility pattern:

```python
# Before (deprecated)
schema = model.schema()

# After (compatible with both v1 and v2) 
if hasattr(model, "model_json_schema"):
    schema = model.model_json_schema()  # Pydantic v2
else:
    schema = model.schema()  # Pydantic v1
```

### Files Updated:
- **`evaluation/parsing/json_schema.py`**: Fixed `_parse_json()` method
to handle Pydantic models correctly
- **`output_parsers/yaml.py`**: Fixed `get_format_instructions()` to use
compatible schema access
- **`chains/openai_functions/citation_fuzzy_match.py`**: Fixed direct
`.schema()` call on QuestionAnswer model

## Verification

 **Zero breaking changes** - all existing functionality preserved  
 **No deprecation warnings** from LangChain internal code  
 **Backward compatible** with Pydantic v1  
 **Forward compatible** with Pydantic v2  
 **Edge cases handled** (strings, plain objects, etc.)

## User Impact

LangChain users will no longer see deprecation warnings from internal
LangChain code. Users who directly call `.schema()` on schemas returned
by LangChain should adopt the same compatibility pattern:

```python
# User code should use this pattern
input_schema = tool.get_input_schema()
if hasattr(input_schema, "model_json_schema"):
    schema_result = input_schema.model_json_schema()
else:
    schema_result = input_schema.schema()
```

Fixes #31458.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-21 21:19:53 -04:00
Copilot
18c64aed6d feat(core): add sanitize_for_postgres utility to fix PostgreSQL NUL byte DataError (#32157)
This PR fixes the PostgreSQL NUL byte issue that causes
`psycopg.DataError` when inserting documents containing `\x00` bytes
into PostgreSQL-based vector stores.

## Problem

PostgreSQL text fields cannot contain NUL (0x00) bytes. When documents
with such characters are processed by PGVector or langchain-postgres
implementations, they fail with:

```
(psycopg.DataError) PostgreSQL text fields cannot contain NUL (0x00) bytes
```

This commonly occurs when processing PDFs, documents from various
loaders, or text extracted by libraries like unstructured that may
contain embedded NUL bytes.

## Solution

Added `sanitize_for_postgres()` utility function to
`langchain_core.utils.strings` that removes or replaces NUL bytes from
text content.

### Key Features

- **Simple API**: `sanitize_for_postgres(text, replacement="")`
- **Configurable**: Replace NUL bytes with empty string (default) or
space for readability
- **Comprehensive**: Handles all problematic examples from the original
issue
- **Well-tested**: Complete unit tests with real-world examples
- **Backward compatible**: No breaking changes, purely additive

### Usage Example

```python
from langchain_core.utils import sanitize_for_postgres
from langchain_core.documents import Document

# Before: This would fail with DataError
problematic_content = "Getting\x00Started with embeddings"

# After: Clean the content before database insertion
clean_content = sanitize_for_postgres(problematic_content)
# Result: "GettingStarted with embeddings"

# Or preserve readability with spaces
readable_content = sanitize_for_postgres(problematic_content, " ")
# Result: "Getting Started with embeddings"

# Use in Document processing
doc = Document(page_content=clean_content, metadata={...})
```

### Integration Pattern

PostgreSQL vector store implementations should sanitize content before
insertion:

```python
def add_documents(self, documents: List[Document]) -> List[str]:
    # Sanitize documents before insertion
    sanitized_docs = []
    for doc in documents:
        sanitized_content = sanitize_for_postgres(doc.page_content, " ")
        sanitized_doc = Document(
            page_content=sanitized_content,
            metadata=doc.metadata,
            id=doc.id
        )
        sanitized_docs.append(sanitized_doc)
    
    return self._insert_documents_to_db(sanitized_docs)
```

## Changes Made

- Added `sanitize_for_postgres()` function in
`langchain_core/utils/strings.py`
- Updated `langchain_core/utils/__init__.py` to export the new function
- Added comprehensive unit tests in
`tests/unit_tests/utils/test_strings.py`
- Validated against all examples from the original issue report

## Testing

All tests pass, including:
- Basic NUL byte removal and replacement
- Multiple consecutive NUL bytes
- Empty string handling
- Real examples from the GitHub issue
- Backward compatibility with existing string utilities

This utility enables PostgreSQL integrations in both langchain-community
and langchain-postgres packages to handle documents with NUL bytes
reliably.

Fixes #26033.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-21 20:33:20 -04:00
Copilot
fc802d8f9f docs: fix vectorstore feature table - correct "IDs in add Documents" values (#32153)
The vectorstore feature table in the documentation was showing incorrect
information for the "IDs in add Documents" capability. Most vectorstores
were marked as not supported when they actually support extracting IDs
from documents.

## Problem

The issue was an inconsistency between two sources of truth:
- **JavaScript feature table** (`docs/src/theme/FeatureTables.js`):
Hardcoded `idsInAddDocuments: false` for most vectorstores
- **Python script** (`docs/scripts/vectorstore_feat_table.py`):
Correctly showed `"IDs in add Documents": True` for most vectorstores

## Root Cause

All vectorstores inherit the base `VectorStore.add_documents()` method
which automatically extracts document IDs:

```python
# From libs/core/langchain_core/vectorstores/base.py lines 277-284
if "ids" not in kwargs:
    ids = [doc.id for doc in documents]
    
    # If there's at least one valid ID, we'll assume that IDs should be used.
    if any(ids):
        kwargs["ids"] = ids
```

Since no vectorstores override `add_documents()`, they all inherit this
behavior and support IDs in documents.

## Solution

Updated `idsInAddDocuments` from `false` to `true` for 13 vectorstores:
- AstraDBVectorStore, Chroma, Clickhouse, DatabricksVectorSearch
- ElasticsearchStore, FAISS, InMemoryVectorStore,
MongoDBAtlasVectorSearch
- PGVector, PineconeVectorStore, Redis, Weaviate, SQLServer

The other 4 vectorstores (CouchbaseSearchVectorStore, Milvus, openGauss,
QdrantVectorStore) were already correctly marked as `true`.

## Impact

Users visiting
https://python.langchain.com/docs/integrations/vectorstores/ will now
see accurate information. The "IDs in add Documents" column will
correctly show support for all vectorstores instead of incorrectly
showing most of them as unsupported.

This aligns with the API documentation which states: "if kwargs contains
ids and documents contain ids, the ids in the kwargs will receive
precedence" - clearly indicating that document IDs are supported.

Fixes #30622.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
2025-07-21 20:29:34 -04:00
Mason Daugherty
b4d87c709c chore: update copilot-instructions.md (#32159) 2025-07-21 20:17:41 -04:00
ccurme
383bc8f2ef revert: drop anthropic from core test matrix (#32152)
Reverts langchain-ai/langchain#32146
2025-07-21 20:15:27 +00:00
Christophe Bornet
64261449b8 feat(langchain): add ruff rules TRY (#32047)
See https://docs.astral.sh/ruff/rules/#tryceratops-try

* TRY004 (replaced by TypeError) in main code is escaped with `noqa` to
not break backward compatibility. The rule is still interesting for new
code.
* TRY301 ignored at the moment. This one is quite hard to fix and I'm
not sure it's very interesting to activate it.

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-21 13:41:20 -04:00
Christophe Bornet
8b8d90bea5 feat(langchain): add ruff rules PT (#32010)
See https://docs.astral.sh/ruff/rules/#flake8-pytest-style-pt
2025-07-21 13:15:05 -04:00
Mohammad Mohtashim
095f4a7c28 fix(core): fix parse_resultin case of self.first_tool_only with multiple keys matching for JsonOutputKeyToolsParser (#32106)
* **Description:** Updated `parse_result` logic to handle cases where
`self.first_tool_only` is `True` and multiple matching keys share the
same function name. Instead of returning the first match prematurely,
the method now prioritizes filtering results by the specified key to
ensure correct selection.
* **Issue:** #32100

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-21 12:50:22 -04:00
Mason Daugherty
ddaba21e83 chore: copilot instructions (#32075)
https://docs.github.com/en/copilot/how-tos/custom-instructions/adding-repository-custom-instructions-for-github-copilot
2025-07-21 12:50:07 -04:00
diego-coder
8e4396bb32 fix(ollama): robustly parse single-quoted JSON in tool calls (#32109)
**Description:**
This PR makes argument parsing for Ollama tool calls more robust. Some
LLMs—including Ollama—may return arguments as Python-style dictionaries
with single quotes (e.g., `{'a': 1}`), which are not valid JSON and
previously caused parsing to fail.
The updated `_parse_json_string` method in
`langchain_ollama.chat_models` now attempts standard JSON parsing and,
if that fails, falls back to `ast.literal_eval` for safe evaluation of
Python-style dictionaries. This improves interoperability with LLMs and
fixes a common usability issue for tool-based agents.
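
A minimal sketch of that fallback, assuming simplified signatures (the
real `_parse_json_string` handles more cases):

```python
import ast
import json
from typing import Any


def parse_json_string(raw: str) -> Any:
    """Parse tool-call arguments, tolerating Python-style dicts like {'a': 1}."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        try:
            # ast.literal_eval evaluates literals only -- no code execution.
            return ast.literal_eval(raw)
        except (ValueError, SyntaxError):
            raise exc
```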

**Issue:**
Closes #30910

**Dependencies:**
None

**Tests:**
- Added new unit tests for double-quoted JSON, single-quoted dicts,
mixed quoting, and malformed/failure cases.
- All tests pass locally, including new coverage for single-quoted
inputs.

**Notes:**
- No breaking changes.
- No new dependencies introduced.
- Code is formatted and linted (`ruff format`, `ruff check`).
- If maintainers have suggestions for further improvements, I’m happy to
revise!

Thank you for maintaining LangChain! Looking forward to your feedback.
2025-07-21 12:11:22 -04:00
ccurme
6794422b85 chore(infra): drop anthropic from core test matrix (#32146)
Stricter JSON schema validation broke a test. Test was fixed in
https://github.com/langchain-ai/langchain/pull/32145. Core release runs
old tests (i.e., last released version of langchain-anthropic) against
new core. So we bypass anthropic for release. Will revert after.
2025-07-21 15:06:52 +00:00
ccurme
2ef9465893 fix(anthropic): fix test (#32145) 2025-07-21 14:49:40 +00:00
ccurme
0355da3159 release(core): 0.3.70 (#32144) 2025-07-21 10:49:32 -04:00
Ziafat Majeed
6c18073fe6 docs(core): fix grammar from 'as an bonus' to 'as a bonus' (#32143) 2025-07-21 10:48:34 -04:00
Kanav Bansal
38581f31dd docs(docs): update RAG tutorials link to point to correct path in Google Vertex AI Embeddings (#32141) 2025-07-21 09:17:54 -04:00
Kanav Bansal
8246b5b660 docs(docs): update RAG tutorials link to point to correct path in AzureOpenAI (#32131) 2025-07-21 09:17:35 -04:00
astraszab
668c084520 docs(core): move incorrect arg limitation in rate limiter's docstring (#32118) 2025-07-20 14:28:35 -04:00
ccurme
cc076ed891 fix(huggingface): update model used in standard tests (#32116) 2025-07-20 01:50:31 +00:00
Yoshi
6d71bb83de fix(core): fix docstrings and add sleep to FakeListChatModel._call (#32108) 2025-07-19 17:30:15 -04:00
Kanav Bansal
f7d1b1fbb1 docs(docs): update RAG tutorials link to point to correct path (#32113) 2025-07-19 17:27:31 -04:00
Isaac Francisco
98bfd57a76 fix(core): better error message for empty var names (#32073)
Previously, we hit an index-out-of-range error with empty variable names
(accessing `tag[0]`); now we throw a slightly nicer error (see the sketch below)
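
A minimal sketch of the guard, with hypothetical names:

```python
def check_variable_name(tag: str) -> str:
    # Hypothetical sketch: tag[0] on an empty name raised IndexError
    # before; check first and raise a clearer error instead.
    if not tag:
        raise ValueError("Variable names in templates cannot be empty")
    return tag[0]
```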

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-18 17:00:02 -04:00
Gurram Siddarth Reddy
427d2d6397 fix(core): implement sleep delay in FakeMessagesListChatModel _generate (#32014)
implement sleep delay in FakeMessagesListChatModel._generate so the
sleep parameter is respected, matching the documented behavior. This
adds artificial latency between responses for testing purposes.
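
A minimal self-contained sketch of the documented behavior (hypothetical
class, not the actual implementation):

```python
import time
from typing import Optional


class FakeChatModelSketch:
    """Returns canned responses, honoring an optional sleep between them."""

    def __init__(self, responses: list[str], sleep: Optional[float] = None):
        self.responses = responses
        self.sleep = sleep
        self._index = 0

    def generate(self, _prompt: str) -> str:
        if self.sleep is not None:
            time.sleep(self.sleep)  # artificial latency for testing
        response = self.responses[self._index % len(self.responses)]
        self._index += 1
        return response
```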

Issue: closes
[#31974](https://github.com/langchain-ai/langchain/issues/31974)
following
[docs](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.fake_chat_models.FakeMessagesListChatModel.html#langchain_core.language_models.fake_chat_models.FakeMessagesListChatModel.sleep)

Dependencies: none

Twitter handle: [@siddarthreddyg2](https://x.com/siddarthreddyg2)

---------

Signed-off-by: Siddarthreddygsr <siddarthreddygsr@gmail.com>
2025-07-18 15:54:28 -04:00
Kanav Bansal
50a12a7ee5 fix(docs): fix broken link in VertexAILLM and NVIDIA LLM integrations (#32096)
## **Description:**   
This PR updates the `link` values for the following integration metadata
entries:

1. **VertexAILLM**  
   - Changed from: `google_vertexai`  
   - To: `google_vertex_ai_palm`  
2. **NVIDIA**  
   - Changed from: `NVIDIA`  
   - To: `nvidia_ai_endpoints`  

These changes ensure that the documentation links correspond to the
correct integration paths, improving documentation navigation and
consistency with the integration structure.

## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-18 14:00:49 +00:00
Kanav Bansal
72a0f425ec docs(docs): correct package name from langchain-google_vertexai to langchain-google-vertexai for VertexAILLM (#32095)
- **Description:** This PR updates the `package` field for the VertexAI
integration in the documentation metadata. The original value was
`langchain-google_vertexai`, which has been corrected to
`langchain-google-vertexai` to reflect the actual package name used in
PyPI and LangChain integrations.
  - **Issue:** N/A
  - **Dependencies:** None
  - **Twitter handle:** N/A
2025-07-18 09:45:28 -04:00
Sarah Guthals
22535eb4b3 docs: add tensorlake provider (#32046) 2025-07-17 19:28:14 -04:00
open-swe[bot]
5da986c3f6 fix(core): JSON Schema reference resolution for list indices (#32088)
Fixes #32042

## Summary
Fixes a critical bug in JSON Schema reference resolution that prevented
correctly dereferencing numeric components in JSON pointer paths,
specifically for list indices in `anyOf`, `oneOf`, and `allOf` arrays.

## Changes
- Fixed `_retrieve_ref` function in
`libs/core/langchain_core/utils/json_schema.py` to properly handle
numeric components
- Added comprehensive test function `test_dereference_refs_list_index()`
in `libs/core/tests/unit_tests/utils/test_json_schema.py`
- Resolved line length formatting issues
- Improved type checking and index validation for list and dictionary
references

## Key Improvements
- Correctly handles list index references in JSON pointer paths
- Maintains backward compatibility with existing dictionary numeric key
functionality
- Adds robust error handling for out-of-bounds and invalid indices
- Passes all test cases covering various reference scenarios

## Test Coverage
- Verified fix for `#/properties/payload/anyOf/1/properties/startDate`
reference
- Tested edge cases including out-of-bounds and negative indices
- Ensured no regression in existing reference resolution functionality

Resolves the reported issue with JSON Schema reference dereferencing for
list indices.
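
A simplified sketch of the dereferencing logic (hypothetical helper; the
real `_retrieve_ref` does more validation):

```python
from typing import Any


def retrieve_ref(ref: str, schema: dict) -> Any:
    # Walk a JSON pointer such as
    # "#/properties/payload/anyOf/1/properties/startDate", treating
    # numeric components as list indices when the current node is a list.
    node: Any = schema
    for component in ref.removeprefix("#/").split("/"):
        if isinstance(node, list):
            index = int(component)  # numeric component -> list index
            if not 0 <= index < len(node):
                raise KeyError(f"Index {index} out of range in {ref!r}")
            node = node[index]
        else:
            node = node[component]  # dict key (may itself be numeric)
    return node
```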

---------

Co-authored-by: open-swe-dev[bot] <open-swe-dev@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-17 15:54:38 -04:00
Mason Daugherty
6d449df8bb chore: update PR lint (#32091)
remove regex
2025-07-17 15:33:48 -04:00
ccurme
3f4d27fe21 fix(infra): update some notebook cassettes (#32087) 2025-07-17 13:57:29 -04:00
Mason Daugherty
59407338dd docs: remove AI21 embeddings section (#32084)
// no longer exists
2025-07-17 11:32:34 -04:00
Mason Daugherty
a1519af513 fix(docs): fix broken links (#32083) 2025-07-17 10:38:51 -04:00
Christophe Bornet
b61ce9178c refactor(langchain): remove model_rebuild (#32080)
Since #29963 BaseCache and Callbacks are imported in BaseLanguageModel
so there's no need to import them and rebuild the models.
Note: fix is available since `langchain-core==0.3.39` and the current
langchain dependency on core is `>=0.3.66` so the fix will always be
there.
2025-07-17 10:34:41 -04:00
Mason Daugherty
9165cde538 feat(docs): add Slack community link to footer (#32053) 2025-07-17 10:12:09 -04:00
Kanav Bansal
2c0e8dce0d docs(docs): fix broken link in Google Gemini text embedding integration (#32082)
- **Description:** Corrected the `link` path in the Google Gemini
integration entry from
`/docs/integrations/text_embedding/google-generative-ai` to
`/docs/integrations/text_embedding/google_generative_ai` to align with
actual directory structure and prevent broken documentation links.
  - **Issue:** N/A
  - **Dependencies:** None
  - **Twitter handle:** N/A
2025-07-17 09:58:07 -04:00
Mason Daugherty
491f63ca82 release(ollama): release 0.3.5 (#32076) 2025-07-16 18:45:32 -04:00
Mason Daugherty
587c213760 bump lock 2025-07-16 18:44:56 -04:00
Copilot
98c3bbbaf0 fix(ollama): num_gpu parameter not working in async OllamaEmbeddings method (#32074)
The `num_gpu` parameter in `OllamaEmbeddings` was not being passed to
the Ollama client in the async embedding method, causing GPU
acceleration settings to be ignored when using async operations.

## Problem

The issue was in the `aembed_documents` method where the `options`
parameter (containing `num_gpu` and other configuration) was missing:

```python
# Sync method (working correctly)
return self._client.embed(
    self.model, texts, options=self._default_params, keep_alive=self.keep_alive
)["embeddings"]

# Async method (missing options parameter)
return (
    await self._async_client.embed(
        self.model, texts, keep_alive=self.keep_alive  # No options passed!
    )
)["embeddings"]
```

This meant that when users specified `num_gpu=4` (or any other GPU
configuration), it would work with sync calls but be ignored with async
calls.

## Solution

Added the missing `options=self._default_params` parameter to the async
embed call to match the sync version:

```python
# Fixed async method
return (
    await self._async_client.embed(
        self.model,
        texts,
        options=self._default_params,  # Now includes num_gpu!
        keep_alive=self.keep_alive,
    )
)["embeddings"]
```

## Validation

- Added unit test to verify options are correctly passed in both sync
and async methods
- All existing tests continue to pass
- Manual testing confirms `num_gpu` parameter now works correctly
- Code passes linting and formatting checks

The fix ensures that GPU configuration works consistently across both
synchronous and asynchronous embedding operations.

Fixes #32059.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-16 18:42:52 -04:00
efj-amzn
d3072e2d2e feat(core): update _import_utils.py to not mask the thrown exception (#32071) 2025-07-16 17:11:56 -04:00
Lauren Hirata Singh
b49372595e docs: update LangSmith links (#32070) 2025-07-16 16:31:28 -04:00
Mason Daugherty
16664d3b68 fix(docs): make docs link absolute (#32068) 2025-07-16 20:15:28 +00:00
Inácio Nery
ea8f2a05ba feat(perplexity): expose search_results in chat model (#31468)
Description
The Perplexity chat model already returns a search_results field, but
LangChain dropped it when mapping Perplexity responses to
additional_kwargs.
This patch adds "search_results" to the allowed attribute lists in both
_stream and _generate, so downstream code can access it just like
images, citations, or related_questions.
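
A minimal sketch of the mapping described above (names hypothetical; the
allowed fields are from the PR description):

```python
ALLOWED_ATTRIBUTES = ("images", "citations", "related_questions", "search_results")


def extract_additional_kwargs(response: dict) -> dict:
    # Copy recognized Perplexity response fields into additional_kwargs.
    return {key: response[key] for key in ALLOWED_ATTRIBUTES if key in response}
```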

Dependencies
None. The change is purely internal; no new imports or optional
dependencies required.


https://community.perplexity.ai/t/new-feature-search-results-field-with-richer-metadata/398

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-16 15:16:35 -04:00
zygimantas-jac
2df05f6f6a docs: add oxylabs to web browsing table (#31931)
Added Oxylabs to the web browsing table

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 14:00:14 -04:00
nikk0o046
b1c7de98f5 fix(deepseek): convert tool output arrays to strings (#31913)
## Description
When ChatDeepSeek invokes a tool that returns a list, it results in an
openai.UnprocessableEntityError due to a failure in deserializing the
JSON body.

The root of the problem is that ChatDeepSeek uses BaseChatOpenAI
internally, but the APIs are not identical: OpenAI v1/chat/completions
accepts arrays as tool results, but Deepseek API does not.

As a solution added `_get_request_payload` method to ChatDeepSeek, which
inherits the behavior from BaseChatOpenAI but adds a step to stringify
tool message content when the content is an array. I also added a unit
test for this.
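
A minimal sketch of the normalization step (hypothetical helper name):

```python
import json
from typing import Any


def normalize_tool_content(content: Any) -> Any:
    # DeepSeek's API rejects array-valued tool results, so stringify
    # list content before building the request payload.
    if isinstance(content, list):
        return json.dumps(content)
    return content
```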

From the linked issue you can find the full reproducible example the
reporter of the issue provided. After the changes it works as expected.

Source: [Deepseek
docs](https://api-docs.deepseek.com/api/create-chat-completion/)


![image](https://github.com/user-attachments/assets/a59ed3e7-6444-46d1-9dcf-97e40e4e8952)

Source: [OpenAI
docs](https://platform.openai.com/docs/api-reference/chat/create)


![image](https://github.com/user-attachments/assets/728f4fc6-e1a3-4897-b39f-6f1ade07d3dc)


## Issue
Fixes #31394

## Dependencies:
No new dependencies.

## Twitter handle:
Don't have one.

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 12:19:44 -04:00
Mohammad Mohtashim
96bf8262e2 fix: fixing missing Docstring Bug if no Docstring is provided in BaseModel class (#31608)
- **Description:** Ensure that the tool description is an empty string
when creating a Structured Tool from a Pydantic class in case no
description is provided (see the sketch below)
- **Issue:** Fixes #31606
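
A minimal sketch of the fallback (hypothetical helper name):

```python
from pydantic import BaseModel


def tool_description(model: type[BaseModel]) -> str:
    # Fall back to an empty string when the Pydantic class has no docstring.
    return (model.__doc__ or "").strip()
```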

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 11:56:05 -04:00
Mason Daugherty
15103b0520 chore: add closing keyword to PR template (#32065) 2025-07-16 11:54:26 -04:00
Casi
686a6b754c fix: issue a warning if np.nan or np.inf are in _cosine_similarity argument Matrices (#31532)
- **Description**: issues a warning if inf and nan are passed as inputs
to langchain_core.vectorstores.utils._cosine_similarity (see the sketch below)
- **Issue**: Fixes #31496
- **Dependencies**: no external dependencies added, only warnings module
imported
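
A minimal sketch of the added check (hypothetical helper name):

```python
import warnings

import numpy as np


def check_finite(matrix: np.ndarray) -> None:
    # Warn when cosine-similarity inputs contain NaN or inf values.
    if not np.isfinite(matrix).all():
        warnings.warn("NaN or inf values detected in similarity input", stacklevel=2)
```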

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 11:50:09 -04:00
Michael Li
12d370a55a fix(cli): exception to prevent swallowing unexpected errors (#31983) 2025-07-16 10:23:43 -04:00
Michael Li
5a4c0c0816 fix(cli): handle exception in remove() (#31982) 2025-07-16 10:23:02 -04:00
Krishna Somani
e2dc36b126 chore: update SECURITY.md (#32060)
Made minor changes, making it neat
2025-07-16 10:20:59 -04:00
Kanav Bansal
c133eff6c8 docs(docs): fix product name in Google SQL for MySQL description (#32062)
- **Description:** Corrected the service name from "Cloud Cloud SQL" to
"Google Cloud SQL" to accurately reflect the official product branding.
2025-07-16 10:17:59 -04:00
Ahmad Elmalah
1892a67eef docs: adding context for Textract linearization-config param (#32064)
Before jumping into tech implementation, I added a context for
linearization-config param, and explained what's linealization in this
context.
I also linked an AWS blog for more advanced use cases, as this single
example doesn't cover all use cases.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 10:17:20 -04:00
Ahmad Elmalah
2ab2cab203 docs: update titles for Textract examples (#32063)
**On this PR I am doing two things:**

1. Adding titles to the 4 example we have, to allow the reader to
capture the essence of the paragraph quickly
2. Replacing 'samples' with 'examples', for more clarity.

**Why 'examples' could be a better terminology over 'samples' here?**
1. On the page, we were using both 'samples' and 'examples'
interchangeably, which led to confusion. Now 'examples' are the use
cases, while 'samples' are the sample data being used
2. This is consistent with the rest of the docs; we typically use
'examples', for example
https://python.langchain.com/docs/integrations/callbacks/fiddler/
2025-07-16 10:17:02 -04:00
Mason Daugherty
ad44f0688b release(core): release 0.3.69 (#32056) 2025-07-15 17:13:46 -04:00
Mason Daugherty
8ad12f3fcf docs: add missing js providers to table (#32055)
Update to show that Cerebras, xAI, and Cloudflare now have JS/TS
equivalents
2025-07-15 17:09:35 -04:00
Jacob Lee
535ba43b0d feat(core): add an option to make deserialization more permissive (#32054)
## Description

Currently when deserializing objects that contain non-deserializable
values, we throw an error. However, there are cases (e.g. proxies that
return response fields containing extra fields like Python datetimes),
where these values are not important and we just want to drop them.

Twitter handle: @hacubu

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-15 17:00:01 -04:00
Eugene Yurtsev
3628dccbf3 chore(docs): add gtm tag to docs (#32048)
Docusarus gtm langchain v2
2025-07-15 15:58:11 -04:00
Mason Daugherty
8d2135ad8a chore: update links to LangChain how-to guides in issue templates (#32052) 2025-07-15 15:38:35 -04:00
Mason Daugherty
8199a5562a chore: update hyperlink (#32051) 2025-07-15 14:41:27 -04:00
Mason Daugherty
0807711dad chore: update contribution guidelines and templates to direct users to the LangChain Forum (#32050) 2025-07-15 14:39:40 -04:00
Eugene Yurtsev
7a36d6b99c chore(docs): bump langgraph in docs & reformat all docs (#32044)
Trying to unblock documentation build pipeline

* Bump langgraph dep in docs
* Update langgraph in lock file (resolves an issue in API reference
generation)
2025-07-15 15:06:59 +00:00
Mason Daugherty
3b9dd1eba0 docs(groq): cleanup (#32043) 2025-07-15 10:37:37 -04:00
Eugene Yurtsev
02d0a9af6c chore(core): unpin packaging dependency (#32032)
Unpin packaging dependency

---------

Co-authored-by: ntjohnson1 <24689722+ntjohnson1@users.noreply.github.com>
2025-07-14 21:42:32 +00:00
Christophe Bornet
953592d4f7 feat(langchain): add ruff rules G (#32029)
https://docs.astral.sh/ruff/rules/#flake8-logging-format-g
2025-07-14 15:19:36 -04:00
Christophe Bornet
19fff8cba9 feat(langchain): add ruff rules DTZ (#32021)
See https://docs.astral.sh/ruff/rules/#flake8-datetimez-dtz
2025-07-14 12:47:16 -04:00
Marco Vinciguerra
26c2c8f70a docs: update ScrapeGraphAI tools (#32026)
It was outdated

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-14 12:38:55 -04:00
Hunter Lovell
d96b75f9d3 chore: update readme with forum link (#32027) 2025-07-14 09:15:26 -07:00
Ahmad Elmalah
2fdccd789c docs: update Textract docs (#31992)
I am modifying two things:

1. "This sample demonstrates" with "The following samples demonstrate"
as we're talking about at least 4 samples
2. Bringing the sentence to after talking about the definition of
textract to keep the document organized (textract definition then
samples)

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-14 15:36:29 +00:00
董哥的黑板报
553ac1863b docs: add deprecation notice for PipelinePromptTemplate (#31999)
**PR title**: 
add deprecation notice for PipelinePromptTemplate

**PR message**: 
In the API documentation, PipelinePromptTemplate is marked as
deprecated, but this is not mentioned in the docs.

I'm submitting this PR to add a deprecation notice to the docs.

**Tests**:
N/A (documentation only)

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-14 15:27:29 +00:00
Andreas V. Jonsterhaug
6dcca35a34 fix(core): correct return type hints in BaseChatPromptTemplate (#32009)
This PR changes the return type hints of the `format_prompt` and
`aformat_prompt` methods in `BaseChatPromptTemplate` from `PromptValue`
to `ChatPromptValue`, since both methods always return a
`ChatPromptValue`.
2025-07-14 11:00:01 -04:00
Christophe Bornet
d57216c295 feat(core): add ruff rules D to tests except D1 (#32000)
Docs are not required for tests but when there are docstrings, they
shall be correctly formatted.
See https://docs.astral.sh/ruff/rules/#pydocstyle-d
2025-07-14 10:42:03 -04:00
Christophe Bornet
58d4261875 feat(langchain): add ruff rules PTH (#32008)
See https://docs.astral.sh/ruff/rules/#flake8-use-pathlib-pth
2025-07-14 10:41:37 -04:00
Fabio Fontana
fd168e1c11 feat(text-splitters): add Visual Basic 6 support (#31173)
### **Description**

Add Visual Basic 6 support.

---

### **Issue**

No specific issue addressed.

---

### **Dependencies**

No additional dependencies required.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-14 13:51:16 +00:00
Mason Daugherty
7e146a185b chore: add text splitters to PR linting (#32018) 2025-07-14 09:24:40 -04:00
ccurme
4b89483fe6 release(openai): 0.3.28 (#32015) 2025-07-14 10:38:11 +00:00
ccurme
de13f6ae4f fix(openai): support acknowledged safety checks in computer use (#31984) 2025-07-14 07:33:37 -03:00
Mason Daugherty
a5f1774b76 chore: update PR template (#32011) 2025-07-13 17:00:08 -04:00
Michael Li
a04131489e chore: update error message formatting (#31980) 2025-07-11 15:19:51 -04:00
Mason Daugherty
56d6d69ce9 release(groq): 0.3.6 (#31975) 2025-07-11 10:55:26 -04:00
Fazal
8197504fdc docs: fix typo (#31972)
**Description**:

Fixed a typo from "Hi, I'm Bob and I life in SF." to "Hi, I'm Bob and I
live in SF." .

**Connect with me**: https://www.linkedin.com/in/0xFazal/

**More about me**: https://fazal.me/

-**Fazal**.
2025-07-11 09:47:23 -04:00
Akshara
103fd6ac0c docs: add Google-style docstrings to tools and llms modules (zapier, … (#31957)
**Description:**
Added standardized Google-style docstrings to improve documentation
consistency across key modules.

Updated files:
- `tools/zapier/tool.py`
- `tools/jira/tool.py`
- `tools/json/tool.py`
- `llms/base.py`

These changes enhance readability and maintain consistency with
LangChain’s documentation style guide.

**Issue:**
Fixes #21983

**Dependencies:**
None

**Twitter handle :**
@Akshara_p_
2025-07-11 09:46:21 -04:00
ccurme
612ccf847a chore: [openai] bump sdk (#31958) 2025-07-10 15:53:41 -04:00
Mason Daugherty
b5462b8979 chore: update pr_lint.yml to add description (#31954) 2025-07-10 14:45:28 -04:00
Mason Daugherty
6594eb8cc1 docs(xai): update for Grok 4 (#31953) 2025-07-10 11:06:37 -04:00
Christophe Bornet
060fc0e3c9 text-splitters: Add ruff rules FBT (#31935)
See [flake8-boolean-trap
(FBT)](https://docs.astral.sh/ruff/rules/#flake8-boolean-trap-fbt)
2025-07-09 18:36:58 -04:00
Azhagammal
4d9c0b0883 fix[core]: added error message if the query vector or embedding contains NaN values (#31822)
**Description:**  
Added an explicit validation step in
`langchain_core.vectorstores.utils._cosine_similarity` to raise a
`ValueError` if the input query or any embedding contains `NaN` values.
This prevents silent failures or unstable behavior during similarity
calculations, especially when using maximal_marginal_relevance.
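
A minimal sketch of the validation (hypothetical helper name):

```python
import numpy as np


def validate_no_nan(embeddings: np.ndarray) -> None:
    # Fail fast instead of letting NaN values silently destabilize
    # similarity calculations.
    if np.isnan(embeddings).any():
        raise ValueError("Input embeddings contain NaN values")
```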

**Issue**:
Fixes #31806 

**Dependencies:**  
None

---------

Co-authored-by: Azhagammal S C <azhagammal@kofluence.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-09 18:30:26 -04:00
Michael Li
a8998a1f57 chore[langchain]: fix broad base except in crawler.py (#31941)
2025-07-09 15:17:42 -04:00
Mason Daugherty
042da2a2b2 chore: clean up capitalization in workflow names (#31944) 2025-07-09 15:16:48 -04:00
Mason Daugherty
c026a71a06 chore: add PR title linting (#31943) 2025-07-09 15:04:25 -04:00
Eugene Yurtsev
059942b5fc ci: update issue template to refer Q&A to langchain forum (#31829)
Update issue template to link to the langchain forum
2025-07-09 09:49:16 -04:00
Mason Daugherty
8064d3bdc4 ollama: bump core (#31929) 2025-07-08 16:53:18 -04:00
Mason Daugherty
791c0e2e8f ollama: release 0.3.4 (#31928) 2025-07-08 16:44:36 -04:00
Mason Daugherty
0002b1dafa ollama[patch]: fix model validation, ensure per-call reasoning can be set, tests (#31927)
* update model validation due to change in [Ollama
client](https://github.com/ollama/ollama) - ensure you are running the
latest version (0.9.6) to use `validate_model_on_init`
* add code example and fix formatting for ChatOllama reasoning
* ensure that setting `reasoning` in invocation kwargs overrides the
class-level setting (see the sketch below)
* tests
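
A minimal sketch of the precedence rule (hypothetical class, not the
actual implementation):

```python
class ChatOllamaSketch:
    def __init__(self, reasoning: bool = False):
        self.reasoning = reasoning

    def resolve_reasoning(self, **kwargs) -> bool:
        # Per-call kwargs override the class-level setting.
        return kwargs.get("reasoning", self.reasoning)
```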
2025-07-08 16:39:41 -04:00
Mason Daugherty
f33a25773e chroma[nit]: descriptions in pyproject.toml (#31925) 2025-07-08 13:52:12 -04:00
Mason Daugherty
1f829aacf4 ollama[patch]: ruff fixes and rules (#31924)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 13:42:19 -04:00
Mason Daugherty
4d9eefecab fix: bump lockfiles (#31923)
* bump lockfiles after upgrading ruff
* resolve resulting linting fixes
2025-07-08 13:27:55 -04:00
Mason Daugherty
e7f1ceee67 nomic[patch]: ruff fixes and rules (#31922)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
* fix makefile targets
2025-07-08 13:25:18 -04:00
Mason Daugherty
71b361936d ruff: restore stacklevels, disable autofixing (#31919) 2025-07-08 12:55:47 -04:00
Mason Daugherty
cbb418b4bf mistralai[patch]: ruff fixes and rules (#31918)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 12:44:42 -04:00
Mason Daugherty
ae210c1590 ruff: add bugbear across packages (#31917)
WIP, other packages will get in next PRs
2025-07-08 12:22:55 -04:00
Michael Li
5b3e29f809 text splitters: add chunk_size and chunk_overlap validations (#31916)
2025-07-08 12:22:33 -04:00
Michael Li
0a17a62548 exception: update Exception to ValueError for clearer error handling (#31915)
2025-07-08 11:58:53 -04:00
Michael Li
a1c1421bf4 cli: ensure the connection always get closed in github.py (#31914)
2025-07-08 11:58:33 -04:00
Eugene Yurtsev
83d8be756a langchain[patch]: harden xml parser for xmloutput agent (#31859)
Harden the default implementation of the XML parser for the agent

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-08 10:57:21 -04:00
Christophe Bornet
3f839d566a langchain: Add ruff rules B (#31908)
See https://docs.astral.sh/ruff/rules/#flake8-bugbear-b

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-08 10:48:18 -04:00
Chris G
65b098325b core: docs: clarify where the kwargs in on_tool_start and on_tool_end go (#31909)
**Description:**  
I traced the kwargs starting at `.invoke()` and it was not clear where
they go; they are passed two layers down. I updated the docstrings to
make this clearer for the next person.


**Issue:**  
No related issue.

**Dependencies:**  
No dependency changes.

**Twitter handle:**  
Nah. We're good.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-08 10:35:31 -04:00
burtenshaw
4e513539f8 docs: update huggingface inference to latest usage (#31906)
This PR updates the doc on Hugging Face's inference offering from
'inference API' to 'inference providers'

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-08 10:35:10 -04:00
Christophe Bornet
b8e2420865 langchain: Fix Evaluator's _check_evaluation_args (#31910)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-08 10:31:02 -04:00
Christophe Bornet
a3e3fd20f2 langchain: Use pytest.raises and pytest.fail to handle exceptions in tests (#31911)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-08 10:30:35 -04:00
ak2-lucky
2c7eafffec docs: update tbs parameter (#31905)
Description: update serper dev parameter example
2025-07-08 10:27:40 -04:00
Mason Daugherty
dd76209bbd groq[patch]: ruff fixes and rules (#31904)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 10:25:46 -04:00
Mason Daugherty
750721b4c3 huggingface[patch]: ruff fixes and rules (#31912)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 10:07:57 -04:00
Mason Daugherty
06ab2972e3 fireworks[patch]: ruff fixes and rules (#31903)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 02:14:59 +00:00
Mason Daugherty
63e3f2dea6 exa[patch]: ruff fixes and rules (#31902)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 02:02:42 +00:00
Mason Daugherty
231e8d0f43 deepseek[patch]: ruff fixes and rules (#31901)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-07 21:54:44 -04:00
Mason Daugherty
38bd1abb8c chroma[patch]: ruff fixes and rules (#31900)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-07 21:45:19 -04:00
Mason Daugherty
2a7645300c anthropic[patch]: ruff fixes and rules (#31899)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-07 18:32:27 -04:00
Mason Daugherty
e7eac27241 ruff: more rules across the board & fixes (#31898)
* standardizes ruff dep version across all `pyproject.toml` files
* cli: ruff rules and corrections
* langchain: rules and corrections
2025-07-07 17:48:01 -04:00
Mason Daugherty
706a66eccd fix: automatically fix issues with ruff (#31897)
* Perform safe automatic fixes instead of only selecting
[isort](https://docs.astral.sh/ruff/rules/#isort-i)
2025-07-07 14:13:10 -04:00
Mason Daugherty
e686a70ee0 ollama: thinking, tool streaming, docs, tests (#31772)
* New `reasoning` (bool) param to support toggling [Ollama
thinking](https://ollama.com/blog/thinking) (#31573, #31700). If
`reasoning=True`, Ollama's `thinking` content will be placed in the
model responses' `additional_kwargs.reasoning_content` (usage sketch after this list).
  * Supported by:
    * ChatOllama (class level, invocation level TODO)
    * OllamaLLM (TODO)
* Added tests to ensure streaming tool calls is successful (#29129)
* Refactored tests that relied on `extract_reasoning()`
* Myriad docs additions and consistency/typo fixes
* Improved type safety in some spots
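
A usage sketch based on the description above (model name illustrative):

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(model="deepseek-r1:8b", reasoning=True)
msg = llm.invoke("Why is the sky blue?")
print(msg.additional_kwargs.get("reasoning_content"))
```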

Closes #29129
Addresses #31573 and #31700
Supersedes #31701
2025-07-07 13:56:41 -04:00
ccurme
0eb10f31c1 mistralai: release 0.2.11 (#31896) 2025-07-07 17:47:36 +00:00
Michael Li
47d330f4e6 fix: fix file open with encoding in chat_history.py (#31884)

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 13:30:59 -04:00
Christophe Bornet
4215261be1 core: Cleanup pyproject (#31857)
* Reorganize some toml properties
* Fix some E501: line too long

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 13:30:48 -04:00
Mason Daugherty
a751a23c4e fix: remove unused type ignore from three_values fixture in TestAsyncInMemoryStore (#31895) 2025-07-07 13:22:53 -04:00
Michael Li
8ef01f8444 fix: complete exception handling for UpstashRedisEntityStore (#31893)
2025-07-07 13:13:38 -04:00
Mason Daugherty
6c23c711fb fix: lint/format (#31894) 2025-07-07 13:11:06 -04:00
Christophe Bornet
03e8327e01 core: Ruff preview fixes (#31877)
Auto-fixes from `uv run ruff check --fix --unsafe-fixes --preview`

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 13:02:40 -04:00
Christophe Bornet
ba144c9d7f langchain: Add ruff rule RUF (#31874)
All auto-fixes
See https://docs.astral.sh/ruff/rules/#ruff-specific-rules-ruf

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 12:49:38 -04:00
Christophe Bornet
ed35372580 langchain: Add ruff rules FBT (#31885)
* Fixed for private functions and in tests
* Added noqa escapes for public functions
See https://docs.astral.sh/ruff/rules/#flake8-boolean-trap-fbt

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 12:35:55 -04:00
Christophe Bornet
56bbfd9723 langchain: Add ruff rule RET (#31875)
All auto-fixes
See https://docs.astral.sh/ruff/rules/#flake8-return-ret

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-07 15:33:18 +00:00
Christophe Bornet
fceebbb387 langchain: Add ruff rules C4 (#31879)
All auto-fixes
See https://docs.astral.sh/ruff/rules/#flake8-comprehensions-c4

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 10:55:52 -04:00
Christophe Bornet
4134b36db8 core: make ruff rule PLW1510 unfixable (#31868)
See
https://github.com/astral-sh/ruff/discussions/17087#discussioncomment-12675815

The autofix is misleading: it chooses to add `check=False` to keep the
runtime behavior, but in reality it hides the fact that the user would
most probably prefer `check=True`.
2025-07-07 10:28:30 -04:00
Christophe Bornet
9368b92b2c standard-tests: Ruff autofixes (#31862)
Auto-fixes from ruff with rule ALL
2025-07-07 10:27:39 -04:00
Christophe Bornet
2df3fdf40d langchain: Add ruff rules SIM (#31881)
See https://docs.astral.sh/ruff/rules/#flake8-simplify-sim

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 10:27:04 -04:00
Christophe Bornet
f06380516f langchain: Add ruff rules A (#31888)
See https://docs.astral.sh/ruff/rules/#flake8-builtins-a

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 10:20:27 -04:00
Christophe Bornet
49c316667d langchain: Add ruff rules EM (#31873)
All auto-fixes.
See https://docs.astral.sh/ruff/rules/#flake8-errmsg-em

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 10:13:56 -04:00
Christophe Bornet
1276bf3e1d standard-tests: Add ruff rules PGH (#31869)
See https://docs.astral.sh/ruff/rules/#pygrep-hooks-pgh
2025-07-07 10:07:39 -04:00
Christophe Bornet
53c75abba2 langchain: Add ruff rules PIE (#31880)
All auto-fixes
See https://docs.astral.sh/ruff/rules/#flake8-pie-pie
2025-07-07 10:07:12 -04:00
Christophe Bornet
a46a2b8bda cli: Ruff autofixes (#31863)
Auto-fixes from ruff with rule ALL
2025-07-07 10:06:34 -04:00
Christophe Bornet
451c90fefa text-splitters: Ruff autofixes (#31858)
Auto-fixes from ruff with rule `ALL`
2025-07-07 10:06:08 -04:00
Christophe Bornet
8aed3b61a9 core: Bump ruff version to 0.12 (#31846) 2025-07-07 10:02:51 -04:00
Michael Li
73552883c3 cli: fix dockerfile incorrect copy (#31883) 2025-07-06 21:20:57 -04:00
m27315
013ce2c47f huggingface: fix HuggingFaceEndpoint._astream() got multiple values for argument 'stop' (#31385) 2025-07-06 15:18:53 +00:00
CuberMessenegr
e934788ca2 docs: Fix uppercase typo in the Similarity search section (#31886) 2025-07-06 10:24:13 -04:00
Christophe Bornet
bf05229029 langchain: Add ruff rule W (#31876)
All auto-fixes
See https://docs.astral.sh/ruff/rules/#warning-w

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-07-05 21:57:30 +00:00
ccurme
3f4b355eef anthropic[patch]: pass back in citations in multi-turn conversations (#31882)
Also adds VCR cassettes for some heavy tests.
2025-07-05 17:33:22 -04:00
Christophe Bornet
46fe09f013 cli: Bump ruff version to 0.12 (#31864) 2025-07-05 17:15:24 -04:00
Christophe Bornet
df5cc024fd langchain: Bump ruff version to 0.12 (#31867) 2025-07-05 17:13:55 -04:00
Christophe Bornet
a15c3e0856 text-splitters: Bump ruff version to 0.12 (#31866) 2025-07-05 17:13:08 -04:00
Christophe Bornet
e1eb3f8d6f standard-tests: Bump ruff version to 0.12 (#31865) 2025-07-05 17:12:00 -04:00
NeatGuyCoding
64815445e4 langchain[patch]: fix a bug where now.replace(day=now.day - 1) would raise a ValueError when now.day is equal to 1 (#31878) 2025-07-05 14:54:26 -04:00
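
A sketch of the underlying pitfall and the safe alternative:

```python
from datetime import datetime, timedelta

now = datetime.now()
# now.replace(day=now.day - 1) raises ValueError when now.day == 1;
# timedelta arithmetic handles month/year boundaries correctly.
yesterday = now - timedelta(days=1)
```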
Viet Hoang
15dc684d34 docs: Integration with GreenNode Serverless AI (#31836) 2025-07-05 13:48:35 -04:00
Nithish Raghunandanan
8bdb1de006 [docs] Update couchbase provider, vector store & features list (#31719) 2025-07-05 13:34:48 -04:00
Mohammad Mohtashim
b26d2250ba core[patch]: Int Combine when Merging Dicts (#31572)
- **Description:** Combine int values by adding them, which makes the
most sense (see the sketch below).
- **Issue:**  #31565
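
A simplified sketch of the merge rule (hypothetical helper; the real
merge handles more types):

```python
def merge_dicts_sketch(left: dict, right: dict) -> dict:
    # Int values (e.g., token counts) combine by addition; other values
    # from `right` overwrite.
    merged = dict(left)
    for key, value in right.items():
        if key in merged and isinstance(merged[key], int) and isinstance(value, int):
            merged[key] += value
        else:
            merged[key] = value
    return merged
```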
2025-07-04 14:44:16 -04:00
Mason Daugherty
6a5073b227 langchain[patch]: Add bandit rules (#31818)
Integrate Bandit for security analysis, suppress warnings for specific issues, and address potential vulnerabilities such as hardcoded passwords and SQL injection risks. Adjust documentation and formatting for clarity.
2025-07-03 14:20:33 -04:00
ccurme
df06041eb2 docs: Anthropic search_result nits (#31855) 2025-07-03 14:12:10 -04:00
ccurme
ade642b7c5 Revert "infra: temporarily skip tests" (#31854)
Reverts langchain-ai/langchain#31853
2025-07-03 13:55:29 -04:00
ccurme
c9f45dc323 infra: temporarily skip tests (#31853)
Tests failed twice with different timeout errors.
2025-07-03 13:39:14 -04:00
ccurme
f88fff0b8a anthropic: release 0.3.17 (#31852) 2025-07-03 13:18:43 -04:00
ccurme
7cb9388c33 Revert "infra: drop anthropic from core test matrix" (#31851)
Reverts langchain-ai/langchain#31850
2025-07-03 17:14:26 +00:00
ccurme
21664985c7 infra: drop anthropic from core test matrix (#31850)
Overloaded errors blocking release. Will revert after.
2025-07-03 12:52:25 -04:00
ccurme
b140d16696 docs: update ChatAnthropic guide (#31849) 2025-07-03 12:51:11 -04:00
ccurme
2090f85789 core: release 0.3.68 (#31848)
Also add `search_result` to recognized tool message block types.
2025-07-03 12:36:25 -04:00
Mason Daugherty
572020c4d8 ollama: add validate_model_on_init, catch more errors (#31784)
* Ensure access to local model during `ChatOllama` instantiation
(#27720). This adds a new param `validate_model_on_init` (default:
`true`; usage sketch below)
* Catch a few more errors from the Ollama client to assist users
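
A usage sketch (model name illustrative):

```python
from langchain_ollama import ChatOllama

# With validation on (the new default), instantiation fails fast if the
# model has not been pulled locally.
llm = ChatOllama(model="llama3.1", validate_model_on_init=True)
```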
2025-07-03 11:07:11 -04:00
Mason Daugherty
1a3a8db3c9 docs: anthropic formatting cleanup (#31847)
inline URLs, capitalization, code blocks
2025-07-03 14:50:23 +00:00
Christophe Bornet
ee3709535d text-splitters: bump spacy version to 3.8.7 (#31834)
This allows using spacy with Python 3.13
2025-07-03 10:13:25 -04:00
Christophe Bornet
b8e9b4adfc cli: Add ruff rule UP (pyupgrade) (#31843)
See https://docs.astral.sh/ruff/rules/#pyupgrade-up
All auto-fixed

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-07-03 14:12:46 +00:00
Christophe Bornet
cd7dce687a standard-tests: Add ruff rule UP (pyupgrade) (#31842)
See https://docs.astral.sh/ruff/rules/#pyupgrade-up
All auto-fixed
2025-07-03 10:12:31 -04:00
Christophe Bornet
802d2bf249 text-splitters: Add ruff rule UP (pyupgrade) (#31841)
See https://docs.astral.sh/ruff/rules/#pyupgrade-up
All auto-fixed except `typing.AbstractSet` -> `collections.abc.Set`
2025-07-03 10:11:35 -04:00
Mason Daugherty
911b0b69ea groq: Add service tier option to ChatGroq (#31801)
- Allows users to select a [flex
processing](https://console.groq.com/docs/flex-processing) service tier
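
A usage sketch; the parameter name `service_tier` is an assumption based
on the PR title:

```python
from langchain_groq import ChatGroq

# Assumed parameter: select Groq's flex processing service tier.
llm = ChatGroq(model="llama-3.3-70b-versatile", service_tier="flex")
```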
2025-07-03 10:11:18 -04:00
Eugene Yurtsev
10ec5c8f02 text-splitters: 0.3.9 (#31844)
Release langchain-text-splitters 0.3.9
2025-07-03 10:02:35 -04:00
Eugene Yurtsev
6dca787a9d ci: set explicit workflow permissions (#31830)
* Set explicit workflow permissions
* Should be a no-op since we're using restricted GITHUB_TOKENs by
default
2025-07-03 10:02:18 -04:00
Christophe Bornet
46745f91b5 core: Use parametric tests in test_openai_tools (#31839) 2025-07-03 08:43:46 -04:00
Eugene Yurtsev
181c22c512 update CODEOWNERS (#31831)
Update CODEOWNERS
2025-07-02 17:31:49 -04:00
Cole Murray
43eef43550 security: Remove xslt_path and harden XML parsers in HTMLSectionSplitter: package: langchain-text-splitters (#31819)
## Summary
- Removes the `xslt_path` parameter from HTMLSectionSplitter to
eliminate XXE attack vector
- Hardens XML/HTML parsers with secure configurations to prevent XXE
attacks
- Adds comprehensive security tests to ensure the vulnerability is fixed

  ## Context
This PR addresses a critical XXE vulnerability discovered in the
HTMLSectionSplitter component. The vulnerability allowed attackers to:
- Read sensitive local files (SSH keys, passwords, configuration files)
  - Perform Server-Side Request Forgery (SSRF) attacks
  - Exfiltrate data to attacker-controlled servers

  ## Changes Made
1. **Removed `xslt_path` parameter** - This eliminates the primary
attack vector where users could supply malicious XSLT files
2. **Hardened XML parsers** - Added security configurations to prevent
XXE attacks even with the default XSLT (see the sketch at the end of
this entry):
   - `no_network=True` - Blocks network access
   - `resolve_entities=False` - Prevents entity expansion
   - `load_dtd=False` - Disables DTD processing
   - `XSLTAccessControl.DENY_ALL` - Blocks all file/network I/O in XSLT transformations

3. **Added security tests** - New test file `test_html_security.py` with
comprehensive tests for various XXE attack vectors
4. **Updated existing tests** - Modified tests that were using the
removed `xslt_path` parameter

  ## Test Plan
  - [x] All existing tests pass
  - [x] New security tests verify XXE attacks are blocked
  - [x] Code passes linting and formatting checks
  - [x] Tested with both old and new versions of lxml


Twitter handle: @_colemurray
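
A minimal sketch of the hardened lxml configuration named in the list
above (file name illustrative; exact integration differs):

```python
from lxml import etree

parser = etree.XMLParser(
    no_network=True,         # block network access
    resolve_entities=False,  # prevent entity expansion
    load_dtd=False,          # disable DTD processing
)
# Deny all file/network I/O during XSLT transformations.
access_control = etree.XSLTAccessControl.DENY_ALL
xslt_doc = etree.parse("default.xslt", parser)  # bundled default, illustrative
transform = etree.XSLT(xslt_doc, access_control=access_control)
```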
2025-07-02 15:24:08 -04:00
Mason Daugherty
815d11ed6a docs: Add PR info doc (#31833) 2025-07-02 19:20:27 +00:00
Eugene Yurtsev
73fefe0295 core[patch]: Use context manager for FileCallbackHandler (#31813)
Recommend using context manager for FileCallbackHandler to avoid opening
too many file descriptors
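
A usage sketch of the recommendation, assuming the
`langchain_core.callbacks` import path:

```python
from langchain_core.callbacks import FileCallbackHandler

# The context manager closes the underlying file descriptor on exit.
with FileCallbackHandler("run.log") as handler:
    ...  # e.g., chain.invoke(inputs, config={"callbacks": [handler]})
```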

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-02 13:31:58 -04:00
ojumah20
377e5f5204 docs: Update agents.ipynb (#31820) 2025-07-02 10:42:28 -04:00
Mason Daugherty
eb12294583 langchain-xai[patch]: Add ruff bandit rules to linter (#31816)
- Add ruff bandit rules
- Some formatting
2025-07-01 18:59:06 +00:00
Mason Daugherty
86a698d1b6 langchain-qdrant[patch]: Add ruff bandit rules to linter (#31815)
- Add ruff bandit rules
- Address a few s101s
- Some formatting
2025-07-01 18:42:55 +00:00
Mason Daugherty
b03e326231 langchain-prompty[patch]: Add ruff bandit rules to linter (#31814)
- Add ruff bandit rules
- Address some s101 assertion warnings
- Address s506 by using `yaml.safe_load()`
2025-07-01 18:32:02 +00:00
Mason Daugherty
3190c4132f langchain-perplexity[patch]: Add ruff bandit rules to linter (#31812)
- Add ruff bandit rules
2025-07-01 18:17:28 +00:00
Mason Daugherty
f30fe07620 update pyproject.toml flake8 comment (#31810) 2025-07-01 18:16:38 +00:00
Mason Daugherty
d0dce5315f langchain-ollama[patch]: Add ruff bandit rules to linter (#31811)
- Add ruff bandit rules
2025-07-01 18:16:07 +00:00
Mason Daugherty
c9e1ce2966 groq: release 0.3.5 (#31809) 2025-07-01 13:21:23 -04:00
Mason Daugherty
404d8408f4 langchain-nomic[patch]: Add ruff bandit rules to linter (#31805)
- Add ruff bandit rules
- Some formatting
2025-07-01 11:39:11 -04:00
Mason Daugherty
0279af60b5 langchain-mistralai[patch]: Add ruff bandit rules to linter, formatting (#31803)
- Add ruff bandit rules
- Address a s101 error
- Formatting
2025-07-01 11:08:01 -04:00
Mason Daugherty
425ee52581 langchain-huggingface[patch]: Add ruff bandit rules to linter (#31798)
- Add ruff bandit rules
2025-07-01 11:07:52 -04:00
Mason Daugherty
0efaa483e4 langchain-groq[patch]: Add ruff bandit rules to linter (#31797)
- Add ruff bandit rules
- Address s105 errors
2025-07-01 11:07:42 -04:00
Mason Daugherty
479b6fd7c5 langchain-fireworks[patch]: Add ruff bandit rules to linter (#31796)
- Add ruff bandit rules
- Address a s113 error
2025-07-01 11:07:26 -04:00
Lauren Hirata Singh
625f7c3710 docs: Add forum link to footer (#31795)
2025-06-30 16:14:03 -04:00
Anush
2d3020f6cd docs: Update vectorstores feature matrix for Qdrant (#31786)
## Description

- `Qdrant` vector store supports `add_documents` with IDs.
- Multi-tenancy is supported via [payload
filters](https://qdrant.tech/documentation/guides/multiple-partitions/)
and
[JWT](https://qdrant.tech/documentation/guides/security/#granular-access-control-with-jwt)
if needed.
2025-06-30 14:02:07 -04:00
Mason Daugherty
33c9bf1adc langchain-openai[patch]: Add ruff bandit rules to linter (#31788) 2025-06-30 14:01:32 -04:00
Mason Daugherty
645e25f624 langchain-anthropic[patch]: Add ruff bandit rules (#31789) 2025-06-30 14:00:53 -04:00
Mason Daugherty
247673ddb8 chroma: add ruff bandit rules (#31790) 2025-06-30 14:00:08 -04:00
Mason Daugherty
1a5120dc9d langchain-deepseek[patch]: add ruff bandit rules (#31792)
add ruff bandit rules
2025-06-30 13:59:35 -04:00
Mason Daugherty
6572399174 langchain-exa: add ruff bandit rules (#31793)
Add ruff bandit rules
2025-06-30 13:58:38 -04:00
ccurme
04cc674e80 core: release 0.3.67 (#31791) 2025-06-30 12:00:39 -04:00
ccurme
46cef90f7b core: expose tool message recognized block types (#31787) 2025-06-30 11:19:34 -04:00
ccurme
428c276948 infra: skip notebook in CI (#31773) 2025-06-28 14:00:45 -04:00
Yiwei
375f53adac IBM DB2 vector store documentation addition (#31008) 2025-06-27 18:34:32 +00:00
ccurme
9f17fabc43 openai: release 0.3.27 (#31769)
To pick up https://github.com/langchain-ai/langchain/pull/31756.
2025-06-27 13:44:45 -04:00
Andrew Jaeger
0189c50570 openai[fix]: Correctly set usage metadata for OpenAI Responses API (#31756) 2025-06-27 15:35:14 +00:00
Mason Daugherty
9aa75eaef3 docs: enhance docstring for disable_streaming parameter in BaseChatModel (#31759)
Resolves #31758
2025-06-27 11:27:41 -04:00
ccurme
e8e89b0b82 docs: updates from langchain-openai 0.3.26 (#31764) 2025-06-27 11:27:25 -04:00
Eugene Yurtsev
eb08b064bb docs: Remove giscus comments (#31755)
Remove giscus comments from langchain
2025-06-27 09:56:55 -04:00
Mason Daugherty
e1aff00cc1 groq: support reasoning_effort, update docs for clarity (#31754)
- There was some ambiguous wording that has been updated to hopefully
clarify the functionality of `reasoning_format` in ChatGroq.
- Added support for `reasoning_effort`
- Added links to see models capable of `reasoning_format` and
`reasoning_effort`
- Other minor nits
2025-06-27 09:43:40 -04:00
ccurme
ea1345a58b openai[patch]: update cassette (#31752)
Following changes in `openai==1.92`.
2025-06-26 14:52:12 -04:00
ccurme
066be383e3 openai[patch]: update test following release of openai 1.92 (#31751)
Added new required fields for `ResponseFunctionWebSearch`
2025-06-26 18:22:58 +00:00
ccurme
61feaa4656 openai: release 0.3.26 (#31749) 2025-06-26 13:51:51 -04:00
ccurme
88d5f3edcc openai[patch]: allow specification of output format for Responses API (#31686) 2025-06-26 13:41:43 -04:00
Mason Daugherty
59c2b81627 docs: fix some inline links (#31748) 2025-06-26 13:35:14 -04:00
Lauren Hirata Singh
83774902e7 docs: Academy banner (#31745)
2025-06-26 10:03:06 -04:00
ccurme
0ae434be21 anthropic: release 0.3.16 (#31744) 2025-06-26 09:09:29 -04:00
Mason Daugherty
a08e73f07e docs: remove trailing backticks (#31740) 2025-06-26 01:23:39 -04:00
Martin Schaer
421554007f docs: Update surrealdb vectorestore documentation (#31199) 2025-06-25 20:16:43 +00:00
Mason Daugherty
2fb27b63f5 ollama: update tests, docs (#31736)
- docs: for the Ollama notebooks, improve the specificity of some links,
add `homebrew` install info, update some wording
- tests: cut the number of local models needed to run in half, from 4 → 2
(shedding 8 GB of required installs)
- bump deps (non-breaking) in anticipation of upcoming "thinking" PR
2025-06-25 20:13:20 +00:00
ccurme
a1f3147989 docs: update sort order for integrations table (#31737)
Pull latest download statistics.
2025-06-25 16:03:36 -04:00
ccurme
84500704ab openai[patch]: fix bug where function call IDs were not populated (#31735)
(optional) IDs were getting dropped in some cases.
2025-06-25 19:08:27 +00:00
ccurme
0bf223d6cf openai[patch]: add attribute to always use previous_response_id (#31734) 2025-06-25 19:01:43 +00:00
ccurme
b02bd67788 anthropic[patch]: cache clients (#31659) 2025-06-25 14:49:02 -04:00
Michael Li
e3f1ce0ac5 docs: fix retriever typos (#31733) 2025-06-25 16:06:21 +00:00
Michael Li
5d734ac8a8 docs: fix typo in clarifai.ipynb (#31732) 2025-06-25 15:59:29 +00:00
Michael Li
a09583204c docs: fix typos in tool_feat_table.py (#31731) 2025-06-25 15:56:59 +00:00
Michael Li
df1a4c0085 docs: fix typo in timescalevector.ipynb (#31727) 2025-06-25 11:49:19 -04:00
Michael Li
990a69d9d7 docs: fix typo in globals.py (#31728) 2025-06-25 11:47:02 -04:00
Mason Daugherty
3c3320ae30 fix: update import paths for ChatOllama to use langchain_ollama instead of community (#31721) 2025-06-24 16:19:31 -04:00
ccurme
e09abf8170 anthropic[patch]: add benchmark (#31718)
Account for lazy loading of clients in init time benchmark
2025-06-24 15:17:22 -04:00
Eugene Yurtsev
9164e6f906 core[patch]: Add additional hashing options to indexing API, warn on SHA-1 (#31649)
Add additional hashing options to the indexing API, warn on SHA-1

Requires:

- Bumping langchain-core version
- bumping min langchain-core in langchain

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-06-24 14:44:06 -04:00
Daniel Fjeldstad
cc4f5269b1 docs: replace deprecated llama 3 model with sonar in ChatPerplexity example (#31716)
**Description:** Updates ChatPerplexity documentation to replace
deprecated llama 3 model reference with the current sonar model in the
API key example code block.

**Issue:** N/A (maintenance update for deprecated model)

**Dependencies:** No new dependencies required

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-06-24 17:35:18 +00:00
Mason Daugherty
8878a7b143 docs: ollama nits (#31714) 2025-06-24 13:19:15 -04:00
ccurme
7cdd53390d docs: fix embeddings links (#31715)
This table is referenced in multiple places, so links should be global.
2025-06-24 11:27:59 -04:00
Mason Daugherty
6d71b6b6ee standard-tests: refactoring and fixes (#31703)
- `libs/core/langchain_core/messages/base.py`: add model name to
examples [per
docs](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html#langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.test_usage_metadata)
("0.3.17: Additionally check for the presence of model_name in the
response metadata, which is needed for usage tracking in callback
handlers")
- `libs/core/langchain_core/utils/function_calling.py`: correct typo
-
`libs/standard-tests/langchain_tests/integration_tests/chat_models.py`:
- `magic_function(input)` -> `magic_function(_input)` to prevent a warning
about redefining the built-in `input`
    - relocate a few tests for better grouping and narrative flow
    - suppress some type hint warnings following suit from similar tests
    - fix a few more typos
- validate not only that `model_name` is defined, but that it is not
empty (test_usage_metadata)
2025-06-23 23:22:31 +00:00
Christopher Jones
b6f74bff40 Update and simplify Oracle Database example cookbook (#31364) 2025-06-23 19:03:00 -04:00
Christophe Bornet
c7e82ad95d core: Use parametrized test in test_correct_get_tracer_project (#31513) 2025-06-23 18:55:57 -04:00
joshy-deshaw
8a0782c46c openai[patch]: fix dropping response headers while streaming / Azure (#31580) 2025-06-23 17:59:58 -04:00
Mason Daugherty
8868701c16 docs: updated ChatGroq docs and example (#31710) 2025-06-23 20:36:46 +00:00
ccurme
ee83993b91 docs: document Anthropic cache TTL count details (#31708) 2025-06-23 20:16:42 +00:00
Mason Daugherty
e6191d58e7 groq: release 0.3.4 (#31709)
bump groq dependency to ensure reasoning is supported
2025-06-23 19:30:05 +00:00
Mason Daugherty
40bb7d00fc groq: release 0.3.3 (#31707) 2025-06-23 14:54:56 -04:00
Christophe Bornet
b1cc972567 core[patch]: Improve RunnableWithMessageHistory init arg types (#31639)
`Runnable`'s `Input` is contravariant so we need to enumerate all
possible inputs and it's not possible to put them in a `Union`.
Also, it's better to only require a runnable that
accepts`list[BaseMessage]` instead of a broader `Sequence[BaseMessage]`
as internally the runnable is only called with a list.
2025-06-23 13:45:52 -04:00
Mason Daugherty
dcf5c7b472 groq: add support for accessing reasoning output from Groq models (#31662)
**Description:** return
[reasoning](https://console.groq.com/docs/reasoning) output in
`additional_kwargs` as `reasoning_content`
**Issue:** Resolves #31052
2025-06-23 11:33:12 -04:00
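A hedged usage sketch of reading the new field; the model name is illustrative and must be one of Groq's reasoning-capable models:
```python
from langchain_groq import ChatGroq

llm = ChatGroq(model="qwen-qwq-32b")  # illustrative reasoning model
msg = llm.invoke("What is 17 * 24?")

# Per this change, the model's reasoning output is surfaced here:
reasoning = msg.additional_kwargs.get("reasoning_content")
```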
Mason Daugherty
af2188b848 docs: Add section on tool choice (#31692)
- Make it known in the concepts guide that tool choice is available
2025-06-23 11:31:49 -04:00
Mason Daugherty
ba38997c7a docs: add tool_calls attribute link in tool calling documentation and indicate output is a list in code example (#31689)
- Minor QOL improvements:
  - Add link to tool_calls api ref
- Show code example output as a list to more clearly indicate response
type
2025-06-23 11:31:26 -04:00
ccurme
643741497a openai: release 0.3.25 (#31702) 2025-06-23 10:55:48 -04:00
ccurme
b268ab6a28 openai[patch]: fix client caching when request_timeout is specified via httpx.Timeout (#31698)
Resolves https://github.com/langchain-ai/langchain/issues/31697
2025-06-23 14:37:49 +00:00
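A small sketch of the scenario the fix covers, assuming the standard `httpx` and `langchain_openai` APIs; the model name is illustrative:
```python
import httpx
from langchain_openai import ChatOpenAI

# Passing an httpx.Timeout previously broke client caching; with this
# fix both instances below can share a cached underlying client.
timeout = httpx.Timeout(30.0, connect=5.0)
llm_a = ChatOpenAI(model="gpt-4o-mini", timeout=timeout)
llm_b = ChatOpenAI(model="gpt-4o-mini", timeout=timeout)
```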
Li-Kuang Chen
4ee6112161 openai[patch]: Improve error message when response type is malformed (#31619) 2025-06-21 14:15:21 -04:00
dennism-tulcolabs
9de4f22205 langchain[patch]: smith.evaluation.progress.ProgressBarCallback: Make output after progress bar ends configurable (#31583) 2025-06-20 19:24:35 -04:00
Mikhail
6105a5841b core: fix get_buffer_string output for structured message content (#31600) 2025-06-20 23:21:50 +00:00
ccurme
cf5a442e4c langchain: release 0.3.26 (#31695) 2025-06-20 18:19:47 -04:00
ccurme
5015188530 Revert "infra: temporarily drop OpenAI from core release test matrix" (#31694)
Reverts langchain-ai/langchain#31693
2025-06-20 22:12:38 +00:00
ccurme
26030abb70 infra: temporarily drop OpenAI from core release test matrix (#31693)
As part of core releases we run tests on the last released version of
some packages (including langchain-openai) using the new version of
langchain-core. We run langchain-openai's test suite as it was when it
was last released.

Our test for computer use started raising 500 error at some point during
the day today (test passed as part of scheduled test job in the
morning):
> InternalServerError: Error code: 500 - {'error': {'message': 'An error
occurred while processing your request. You can retry your request, or
contact us through our help center at help.openai.com if the error
persists.

Will revert this change after we release langchain-core.
2025-06-20 21:58:39 +00:00
Bagatur
5271fd76f1 core[patch]: check before removing tags (#31691) 2025-06-20 17:46:50 -04:00
ccurme
39a8a1121a core: release 0.3.66 (#31690) 2025-06-20 17:45:03 -04:00
97tkddnjs
4fe490c0ea [Docs] Update deprecated Pydantic .schema() method to .model_json_schema() in How to convert Runnables to Tools guide (#31618) 2025-06-20 20:43:59 +00:00
Raghu Kapur
2c9859956a text-splitters: fix stale header metadata in ExperimentalMarkdownSyntaxTextSplitter (#31622)
**Description:**

Previously, when transitioning from a deeper Markdown header (e.g., ###)
to a shallower one (e.g., ##), the
ExperimentalMarkdownSyntaxTextSplitter retained the deeper header in the
metadata.

This commit updates the `_resolve_header_stack` method to remove headers
at the same or deeper levels before appending the current header. As a
result, each chunk now reflects only the active header context.

Fixes unexpected metadata leakage across sections in nested Markdown
documents.

Additionally, test cases have been updated to:
- Validate correct header resolution and metadata assignment.
- Cover edge cases with nested headers and horizontal rules.

**Issue:** 
Fixes [#31596](https://github.com/langchain-ai/langchain/issues/31596)

**Dependencies:**
None

**Twitter handle:** -> [_RaghuKapur](https://twitter.com/_RaghuKapur)

**LinkedIn:** ->
[https://www.linkedin.com/in/raghukapur/](https://www.linkedin.com/in/raghukapur/)
2025-06-20 15:52:17 -04:00
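A minimal sketch of the behavior the fix guarantees, assuming the splitter's default header configuration:
```python
from langchain_text_splitters import ExperimentalMarkdownSyntaxTextSplitter

md = """## Section A
### Sub A.1
deep text
## Section B
shallow text
"""

splitter = ExperimentalMarkdownSyntaxTextSplitter()
docs = splitter.split_text(md)
# With this fix, the "Section B" chunk no longer carries the stale
# "Sub A.1" header in its metadata.
for doc in docs:
    print(doc.metadata, doc.page_content)
```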
ZhangShenao
9d4d258162 [Doc] Improve api doc for DeepSeek (#31655)
- Add param in api doc
- Fix word spelling

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-20 19:47:54 +00:00
Saran Connolly
22e6d90937 langchain_mistralai: Include finish_reason in response metadata when parsing MistralAI chunks to AIMessageChunk (#31667)
## Description
- When parsing MistralAI chunk dicts into LangChain `AIMessageChunk`
schemas via the `_convert_chunk_to_message_chunk` utility function, the
`finish_reason` was not being included in `response_metadata` as it is
for other providers.
- This PR adds a one-liner fix to include the finish reason.

- fixes: https://github.com/langchain-ai/langchain/issues/31666
2025-06-20 15:41:20 -04:00
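A hedged sketch of observing the fix while streaming; the model name is illustrative:
```python
from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI(model="mistral-small-latest")  # illustrative model
last = None
for chunk in llm.stream("Say hi"):
    last = chunk

# With this fix the final chunk carries the finish reason, matching
# other providers:
print(last.response_metadata.get("finish_reason"))  # e.g. "stop"
```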
Mohammad Mohtashim
7ff405077d core[patch]: Returning always 2D Array for _cosine_similarity (#31528)
- **Description:** Very simple change in `_cosine_similarity` so that it
always returns a 2D array.
- **Issue:** #31497
2025-06-20 11:25:02 -04:00
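A toy re-creation of the invariant (not the actual patch), using `np.atleast_2d` to show the single-vector edge case:
```python
import numpy as np

def cosine_similarity_2d(x, y):
    """Always return a 2D matrix, even for single-vector inputs."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    y = np.atleast_2d(np.asarray(y, dtype=float))
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    return x @ y.T  # shape (len(x), len(y)), always 2D

assert cosine_similarity_2d([1.0, 0.0], [1.0, 0.0]).shape == (1, 1)
```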
Eugene Yurtsev
2842e0c8c1 core[patch]: Add doc-strings to tools/base.py (#31684)
Add doc-strings
2025-06-20 11:16:57 -04:00
Tony Gravagno
5d0bea8378 docs: OPENAI_API_KEY typo in google_serper.ipynb (#31665)
Simple typo fix.
2025-06-20 11:02:13 -04:00
Christophe Bornet
7e046ea848 core: Cleanup Pydantic models and handle deprecation warnings (#30799)
* Simplified Pydantic handling since Pydantic v1 is not supported
anymore.
* Replace use of deprecated v1 methods by corresponding v2 methods.
* Remove use of other deprecated methods.
* Activate mypy errors on deprecated methods use.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-06-20 10:42:52 -04:00
ccurme
29e17fbd6b docs: use langchain-tavily (#31663)
Commandeering https://github.com/langchain-ai/langchain/pull/31640

---------

Co-authored-by: pulvedu <dustin@tavily.com>
Co-authored-by: pulvedu <dusty.pulver28@gmail.com>
2025-06-18 16:06:26 -04:00
ccurme
e2a0ff07fd openai[patch]: include 'type' key internally when streaming reasoning blocks (#31661)
Covered by existing tests.

Will make it easier to process streamed reasoning blocks.
2025-06-18 15:01:54 -04:00
Tanmay Singhal
19544ba3c9 docs: fix documentation for triggering image generation from OpenAI (#31652)
Description: Fixing Minor Error in ChatOpenAI Documentation
2025-06-18 13:31:47 -04:00
Jannik Maierhöfer
0cadf4fc9a docs: upgrade langfuse example to python sdk v3 (#31654)
As the Langfuse python sdk v3 includes breaking changes, this PR updates
the code examples.

https://langfuse.com/docs/integrations/langchain/upgrade-paths#python
2025-06-18 13:30:39 -04:00
Mason Daugherty
a79998800c fix: correct typo in docstring for three_values fixture (#31638)
Docstring typo fix in `base_store.py`
2025-06-17 16:51:07 -04:00
ccurme
da97013f96 docs: update OpenAI integration page (#31646)
model_kwargs is no longer needed for `truncation` and `reasoning`.
2025-06-17 16:23:06 -04:00
ccurme
6409498f6c openai[patch]: route to Responses API if relevant attributes are set (#31645)
Following https://github.com/langchain-ai/langchain/pull/30329.
2025-06-17 16:04:38 -04:00
ccurme
3044bd37a9 openai: release 0.3.24 (#31642) 2025-06-17 15:06:52 -04:00
ccurme
c1c3e13a54 openai[patch]: add Responses API attributes to BaseChatOpenAI (#30329)
`reasoning`, `include`, `store`, `truncation`.

Previously these had to be added through `model_kwargs`.
2025-06-17 14:45:50 -04:00
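A hedged sketch of the new first-class attributes named above; the values are illustrative:
```python
from langchain_openai import ChatOpenAI

# Previously these had to go through model_kwargs; now they are
# regular constructor attributes.
llm = ChatOpenAI(
    model="o4-mini",                 # illustrative model
    reasoning={"effort": "medium"},  # Responses API reasoning options
    store=False,                     # opt out of response storage
    truncation="auto",               # let the API truncate context
)
```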
ccurme
b610859633 openai[patch]: support Responses streaming in AzureChatOpenAI (#31641)
Resolves https://github.com/langchain-ai/langchain/issues/31303,
https://github.com/langchain-ai/langchain/issues/31624
2025-06-17 14:41:09 -04:00
ccurme
bc1b5ffc91 docs: update agents tutorial to use langchain-tavily (#31637) 2025-06-17 11:25:03 -04:00
Himanshu Sharma
bb7c190d2c langchain: Fix error in LLMListwiseRerank when Document list is empty (#31300)
**Description:**
This PR fixes an `IndexError` that occurs when `LLMListwiseRerank` is
called with an empty list of documents.

Earlier, the code assumed the presence of at least one document and
attempted to construct the context string based on `len(documents) - 1`,
which raises an error when documents is an empty list.

With an empty list, the fix works with gpt-4o-mini but occasionally fails
with gpt-3.5-turbo. In the empty-list case, setting the context string to
"empty list" yields the expected response.

**Issue:**  #31192
2025-06-17 10:14:07 -04:00
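A minimal sketch of the failure mode the fix addresses, assuming `LLMListwiseRerank.from_llm` as the constructor; the model name is illustrative:
```python
from langchain.retrievers.document_compressors import LLMListwiseRerank
from langchain_openai import ChatOpenAI

reranker = LLMListwiseRerank.from_llm(
    ChatOpenAI(model="gpt-4o-mini"), top_n=3
)
# Previously raised IndexError on an empty document list; with this
# fix it returns an empty result instead.
result = reranker.compress_documents([], query="what is langchain?")
```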
ZhangShenao
0b5c06e89f [Doc] Improve api doc for perplexity (#31636)
- add param in api doc
- fix word spelling

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-17 14:10:43 +00:00
FT
c4c39c1ae6 mistralai[patch]: Fix Typos in Comments and Improve Compatibility Note (#31616)
Description:  
This pull request corrects minor spelling mistakes in the comments
within the `chat_models.py` file of the MistralAI partner integration.
Specifically, it fixes the spelling of "equivalent" and "compatibility"
in two separate comments. These changes improve code readability and
maintain professional documentation standards. No functional code
changes are included.
2025-06-17 09:23:25 -04:00
Cherilyn Buren
cc1e53008f langchain: add missing milvus branch for self_query (#31630)
Fix #29603
2025-06-17 09:21:54 -04:00
Xin Jin
7702691baf core and langchain: Remove upper bound restriction langsmith dependency (#31629)
Remove the upper bound on the LangSmith dependency for good measure: we
have full control over LangSmith, so we'll be careful with minor version
bumps and this shouldn't introduce much risk. Keeping the upper bound, on
the other hand, would likely cause occasional dependency headaches for
users

Discussion:
https://langchain.slack.com/archives/C06UEEE4DSS/p1750111219634649?thread_ts=1750107647.115289&cid=C06UEEE4DSS
2025-06-17 09:19:03 -04:00
Xin Jin
e979cd106a chore: Bump langsmith in splitter uv (#31626)
`uv lock --upgrade-package langsmith`

Original issue: the lock file (uv.lock) was constraining
langsmith>=0.1.125,<0.4, preventing installation of LangSmith 0.4.1, even
though pyproject.toml wasn't restricting langchain-core.


Issue:
https://langchain.slack.com/archives/C050X0VTN56/p1750107176007629
2025-06-16 16:58:46 -07:00
Shivnath Tathe
1682b59f92 docs(rockset): add deprecation notice for Rockset integration (#31621) 2025-06-16 22:17:27 +00:00
Shivnath Tathe
d4c84acc39 docs: update deprecated .schema() to .model_json_schema() in tool_run… (#31615)
This PR updates the tool runtime example notebook to replace the
deprecated `.schema()` method with `.model_json_schema()`, aligning it
with Pydantic V2.

### 🔧 Changes:
- Replaced:
```python
update_favorite_pets.get_input_schema().schema()
```
with:
```python
update_favorite_pets.get_input_schema().model_json_schema()
```

Fixes #31609
2025-06-16 18:07:18 -04:00
ccurme
b9357d456e openai[patch]: refactor handling of Responses API (#31587) 2025-06-16 14:01:39 -04:00
Tom-Trumper
532e6455e9 text-splitters: Add keep_separator arg to HTMLSemanticPreservingSplitter (#31588)
### Description
Add keep_separator arg to HTMLSemanticPreservingSplitter and pass value
to instance of RecursiveCharacterTextSplitter used under the hood.
### Issue
Documents returned by `HTMLSemanticPreservingSplitter.split_text(text)`
are defaulted to use separators at beginning of page_content. [See third
and fourth document in example output from how-to
guide](https://python.langchain.com/docs/how_to/split_html/#using-htmlsemanticpreservingsplitter):
```
[Document(metadata={'Header 1': 'Main Title'}, page_content='This is an introductory paragraph with some basic content.'),
 Document(metadata={'Header 2': 'Section 1: Introduction'}, page_content='This section introduces the topic'),
 Document(metadata={'Header 2': 'Section 1: Introduction'}, page_content='. Below is a list: First item Second item Third item with bold text and a link Subsection 1.1: Details This subsection provides additional details'),
 Document(metadata={'Header 2': 'Section 1: Introduction'}, page_content=". Here's a table: Header 1 Header 2 Header 3 Row 1, Cell 1 Row 1, Cell 2 Row 1, Cell 3 Row 2, Cell 1 Row 2, Cell 2 Row 2, Cell 3"),
 Document(metadata={'Header 2': 'Section 2: Media Content'}, page_content='This section contains an image and a video: ![image:example_image_link.mp4](example_image_link.mp4) ![video:example_video_link.mp4](example_video_link.mp4)'),
 Document(metadata={'Header 2': 'Section 3: Code Example'}, page_content='This section contains a code block: <code:html> <div> <p>This is a paragraph inside a div.</p> </div> </code>'),
 Document(metadata={'Header 2': 'Conclusion'}, page_content='This is the conclusion of the document.')]
```
### Dependencies
None

@ttrumper3
2025-06-14 17:56:14 -04:00
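A hedged sketch of the new argument being forwarded to the inner splitter; the HTML and chunk size are illustrative:
```python
from langchain_text_splitters import HTMLSemanticPreservingSplitter

splitter = HTMLSemanticPreservingSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")],
    max_chunk_size=100,
    keep_separator="end",  # new arg, passed to RecursiveCharacterTextSplitter
)
docs = splitter.split_text("<h1>Main Title</h1><p>Intro. More text.</p>")
```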
dayvidborges
52e57cdc20 docs: update multimodal PDF and image usage for gpt-4.1 (#31595)
docs: update multimodal PDF and image usage for gpt-4.1

**Description:**
This update revises the LangChain documentation to support the new
GPT-4.1 multimodal API format. It fixes the previous broken example for
PDF uploads (which returned a 400 error: "Missing required parameter:
'messages[0].content[1].file'") and adds clear instructions on how to
include base64-encoded images for OpenAI models.

**Issue:**
Error reported in forums when loading PDFs via the API:
'''
@[Albaeld](https://github.com/Albaeld)
Albaeld
[8 days
ago](https://github.com/langchain-ai/langchain/discussions/27702#discussioncomment-13369460)
This simply does not work with openai:gpt-4.1. I get:
Error code: 400 - {'error': {'message': "Missing required parameter:
'messages[0].content[1].file'.", 'type': 'invalid_request_error',
'param': 'messages[0].content[1].file', 'code':
'missing_required_parameter'}}
'''

**Dependencies:**
None

**Twitter handle:**
N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-14 17:52:01 -04:00
Peter Schneider
cecfec5efa huggingface: handle image-text-to-text pipeline task (#31611)
**Description:** Allows for HuggingFacePipeline to handle
image-text-to-text pipeline
2025-06-14 16:41:11 -04:00
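A hedged sketch of constructing the pipeline with the newly handled task; the checkpoint is illustrative:
```python
from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="llava-hf/llava-1.5-7b-hf",  # illustrative checkpoint
    task="image-text-to-text",            # the newly handled pipeline task
)
```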
fuder.eth
50f998a138 Fix Typos in Vectorstore Integration Documentation Notebooks (#31612)
Description:  
This pull request corrects minor typographical errors in the
documentation notebooks for vectorstore integrations. Specifically, it
fixes the spelling of "datastore" in `llm_rails.ipynb` and
"pre-existent" in `redis.ipynb`. These changes improve the clarity and
professionalism of the documentation. No functional code changes are
included.
2025-06-14 16:40:18 -04:00
Akim Tsvigun
f345ae5a1d docs: Integration with Nebius AI Studio (#31293)
This MR is only for the docs. Added integration with Nebius AI Studio to
docs. The integration package is available at
[https://github.com/nebius/langchain-nebius](https://github.com/nebius/langchain-nebius).

---------

Co-authored-by: Akim Tsvigun <aktsvigun@nebius.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-14 16:15:27 -04:00
Xin Jin
01fcdff118 bump langsmith to allow 0.4 (#31594)
LangSmith 0.4 has launched, so bump it across OSS: langchain and
langchain-core. A separate LangSmith docs announcement will follow.
2025-06-13 07:59:42 -07:00
ccurme
5839801897 openai: release 0.3.23 (#31604) 2025-06-13 14:02:38 +00:00
ccurme
0c10ff6418 openai[patch]: handle annotation change in openai==1.82.0 (#31597)
https://github.com/openai/openai-python/pull/2372/files#diff-91cfd5576e71b4b72da91e04c3a029bab50a72b5f7a2ac8393fca0a06e865fb3
2025-06-12 23:38:41 -04:00
Lauren Hirata Singh
bb625081c8 revert incident banner (#31592)
2025-06-12 17:25:09 -04:00
Nuno Campos
ddc850ca72 core: In LangChainTracer, send only the first token event (#31591)
- only the first one is used for analytics
2025-06-12 14:04:23 -07:00
Lauren Hirata Singh
50f9354d31 incident banner (#31590)
2025-06-12 16:05:35 -04:00
ccurme
446a9d5647 docs: document built-in tools for ChatVertexAI (#31564) 2025-06-11 10:31:46 -04:00
Eugene Yurtsev
d10fd02bb3 langchain[patch]: Allow specifying other hashing functions in embeddings (#31561)
Allow specifying other hashing functions in embeddings
2025-06-11 10:18:07 -04:00
ccurme
4071670f56 huggingface[patch]: bump transformers (#31559) 2025-06-10 20:43:33 +00:00
ccurme
40d6d4c738 huggingface[patch]: bump core dep (#31558) 2025-06-10 20:26:13 +00:00
Mohammad Mohtashim
42eb356a44 [OpenAI]: Encoding Model (#31402)
- **Description:** Small fix to handle a `KeyError` when getting the
encoder, and to use the correct encoder for newer models
- **Issue:** #31390
2025-06-10 16:00:00 -04:00
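A toy mirror of the fallback behavior described above (not the actual patch), using `tiktoken` directly:
```python
import tiktoken

def encoding_for(model_name: str) -> tiktoken.Encoding:
    """Resolve an encoder, falling back on unknown model names."""
    try:
        return tiktoken.encoding_for_model(model_name)
    except KeyError:
        # Unknown model: fall back to a sensible default encoder.
        return tiktoken.get_encoding("cl100k_base")

enc = encoding_for("some-future-model")
print(len(enc.encode("hello world")))
```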
Mrityunjay Jha
40bd71caa5 doc: Update doc with consistent bullet format. (#31527)
Description: Added a line break for each step mentioned and made
bullet points consistent with other docs.
Dependencies: N/A
Twitter handle: [mrityu___](https://x.com/mrityu___)

Before Changes:

![image](https://github.com/user-attachments/assets/867b0c05-7ebd-418b-bfe4-e1cb6134baf2)

After Changes:

![image](https://github.com/user-attachments/assets/3ff5db49-a713-4bb0-b814-dc02b9dc6bb4)



The styling(non-bold of bullet) is consistent with
[chat_models.mdx](https://github.com/langchain-ai/langchain/edit/master/docs/docs/concepts/chat_models.mdx)
(SS below from same, take note that how bullet numbers are not bold).


![image](https://github.com/user-attachments/assets/712b1a73-85ca-45d0-86d6-aa1c0fb3f164)
2025-06-10 15:45:11 -04:00
CCM
1935e4526a docs: update graph_rag.mdx (#31548)
**Description:** fix broken links
2025-06-10 15:43:30 -04:00
Steven Silvester
323850fae1 docs: Update MongoDB feature status (#31553)
Update the MongoDBAtlasVectorSearch feature status.

As of https://github.com/langchain-ai/langchain-mongodb/pull/98,
MongoDBAtlasVectorSearch is using the standard test suite.
2025-06-10 15:42:17 -04:00
Mateusz Szewczyk
eadbb9077e docs: Updated text embedding model for IBM provider (#31554)
Description: Updated text embedding (`WatsonxEmbeddings`) model for IBM
provider
2025-06-10 15:41:36 -04:00
ccurme
b0f100af7e core: release 0.3.65 (#31557) 2025-06-10 19:39:50 +00:00
Sydney Runkle
5b165effcd core(fix): revert set_text optimization (#31555)
Revert serialization regression introduced in
https://github.com/langchain-ai/langchain/pull/31238

Fixes https://github.com/langchain-ai/langchain/issues/31486
2025-06-10 13:36:55 -04:00
Soumendra kumar sahoo
e455fab5d3 docs: Update Multi_modal_RAG_google.ipynb to remove the unsupported Gemini models (#31526)
- **Description:** Remove the outdated Gemini models and replace them
with the latest models.
- **Issue:** Previously the code did not run; now it does.
- **Dependencies:** No
- **Twitter handle:** [soumendrak_](https://x.com/soumendrak_)
2025-06-09 21:26:01 -04:00
Jacob Peddicord
b21526fe38 docs: update Exa integration examples (#31547)
## Description

Updating Exa integration documentation to showcase the latest features
and best practices.

## Changes

- Added examples for `ExaSearchResults` tool with advanced search
options
- Added examples for `ExaFindSimilarResults` tool
- Updated agent example to use LangGraph
- Demonstrated text content options, summaries, and highlights
- Included examples of search type control and live crawling

## Additional Context

I'm from the Exa team updating our integration documentation to reflect
current capabilities and best practices.
2025-06-09 21:24:42 -04:00
Eugene Yurtsev
9ce974247c langchain[patch]: Remove proxy imports to langchain_experimental (#31541)
Remove proxy imports to langchain_experimental.

Previously, these imports would work if a user manually installed
langchain_experimental. However, we want to drop support even for that
as langchain_experimental is generally not recommended to be run in
production.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-06-09 17:09:09 -04:00
Zameel Hassan
16e5a12806 docs: fix grammar in retrievers.mdx ("be built" → "build") (#31537)
**Description:**  
Fixed a small grammatical error in the `retrievers.mdx` documentation.  
Replaced "we can be built retrievers on top of search APIs..." with  
"we can build retrievers on top of search APIs..." for clarity and
correctness.

**Issue:**  
N/A

**Dependencies:**  
None

**Twitter handle:**  
@hassan_zameel
2025-06-09 16:29:14 -04:00
ccurme
71b0f78952 openai: release 0.3.22 (#31542) 2025-06-09 15:29:15 -04:00
ccurme
575662d5f1 openai[patch]: accommodate change in image generation API (#31522)
OpenAI changed their API to require the `partial_images` parameter when
using image generation + streaming.

As described in https://github.com/langchain-ai/langchain/pull/31424, we
are ignoring partial images. Here, we accept the `partial_images`
parameter (as required by OpenAI), but emit a warning and continue to
ignore partial images.
2025-06-09 14:57:46 -04:00
ccurme
ece9e31a7a openai[patch]: VCR some tests (#31524) 2025-06-06 23:00:57 +00:00
Bagatur
5187817006 openai[release]: 0.3.21 (#31519) 2025-06-06 11:40:09 -04:00
Bagatur
761f8c3231 openai[patch]: pass through with_structured_output kwargs (#31518)
Support 
```python
from langchain.chat_models import init_chat_model
from pydantic import BaseModel


class ResponseSchema(BaseModel):
    response: str


def get_weather(location: str) -> str:
    """Get weather"""
    pass

llm = init_chat_model("openai:gpt-4o-mini")

structured_llm = llm.with_structured_output(
    ResponseSchema,
    tools=[get_weather],
    strict=True,
    include_raw=True,
    tool_choice="required",
    parallel_tool_calls=False,
)

structured_llm.invoke("whats up?")
```
2025-06-06 11:17:34 -04:00
Bagatur
0375848f6c openai[patch]: update with_structured_outputs docstring (#31517)
Update docstrings
2025-06-06 10:03:47 -04:00
ccurme
9c639035c0 standard-tests: add cache_control to Anthropic inputs test (#31516) 2025-06-06 10:00:43 -04:00
ccurme
a1f068eb85 openai: release 0.3.20 (#31515) 2025-06-06 13:29:12 +00:00
ccurme
4cc2f6b807 openai[patch]: guard against None text completions in BaseOpenAI (#31514)
Some chat completions APIs will return null `text` output (even though
this is typed as string).
2025-06-06 09:14:37 -04:00
Mrityunjay Jha
abc8bf9f1c docs: Add line breaks to better explain each step and keep bullet style consistent with other docs. (#31506)
- **Description:** Added a line break for each step described for a
diagram in the doc.
- **Dependencies:** N/A
- **Twitter handle:** [mrityu___](https://x.com/mrityu___)

Before changes:

![image](https://github.com/user-attachments/assets/c9946aec-79c7-4ad5-a28e-06ea4c163ce5)


After Changes:

![image](https://github.com/user-attachments/assets/933db561-bea2-421e-88e8-f79cbb30856d)




The styling(non-bold of bullet) is consistent with
[chat_models.mdx](https://github.com/langchain-ai/langchain/edit/master/docs/docs/concepts/chat_models.mdx)
(SS below from same, take note that how bullet numbers are not bold).


![image](https://github.com/user-attachments/assets/958c1e25-e52c-4d6f-8b46-f59af3aa3d3b)
2025-06-06 08:43:15 -04:00
nakanoh
c25b832f51 fix: typo in SECURITY.md (practicies -> practices) (#31509)
**Description:**
Fixes a typo in SECURITY.md ("practicies" → "practices").
Note: This PR also unifies apostrophe usage (’ → ').

**Issue:**
N/A

**Dependencies:**
None

**Twitter handle:**
N/A

Co-authored-by: 中野 博文 <hirofumi0082@gmail.com>
2025-06-06 08:42:01 -04:00
lc-arjun
35ae5eab4f core: use run tree post/patch (#31500)
Use run post/patch
2025-06-05 14:05:57 -07:00
Eugene Yurtsev
73655b0ca8 huggingface: 0.3.0 release (#31503)
Breaking change to make some dependencies optional:
https://github.com/langchain-ai/langchain/pull/31268

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-05 20:20:15 +00:00
Bagatur
f7f52cab12 anthropic[patch]: cache tokens nit (#31484)
If you pass in beta headers directly, `cache_creation` is a dict.
2025-06-05 16:15:03 -04:00
ccurme
14c561e15d infra: relax types-requests version range (#31504) 2025-06-05 18:57:08 +00:00
ccurme
6d6f305748 openai[patch]: clarify docs on api_version in docstring for AzureChatOpenAI (#31502) 2025-06-05 16:06:22 +00:00
Simon Stone
815bfa5408 huggingface[major]: Reduce disk footprint by 95% by making large dependencies optional (#31268)
**Description:** 
`langchain_huggingface` has a very large installation size of around 600
MB (on a Mac with Python 3.11). This is due to its dependency on
`sentence-transformers`, which in turn depends on `torch`, which is 320
MB all by itself. Similarly, the dependency on `transformers` adds
another set of heavy dependencies. With those dependencies removed, the
installation of `langchain_huggingface` only takes up ~26 MB. This is
only 5 % of the full installation!

These libraries are not necessary to use `langchain_huggingface`'s API
wrapper classes, only for local inferences/embeddings. All import
statements for those two libraries already have import guards in place
(try/catch with a helpful "please install x" message).

This PR therefore moves those two libraries to an optional dependency
group `full`. So a `pip install langchain_huggingface` will only install
the lightweight version, and a `pip install
"langchain_huggingface[full]"` will install all dependencies.

I know this may break existing code, because `sentence-transformers` and
`transformers` are now no longer installed by default. Given that users
will see helpful error messages when that happens, and the major impact
of this small change, I hope that you will still consider this PR.

**Dependencies:** No new dependencies, but new optional grouping.
2025-06-05 12:04:19 -04:00
Mohammad Mohtashim
ae3551c96b core[patch]: Correct type casting of annotations in _infer_arg_descriptions (#31181)
- **Description:** 
- In _infer_arg_descriptions, the annotations dictionary contains string
representations of types instead of actual typing objects. This causes
_is_annotated_type to fail, preventing the correct description from
being generated.
- This is a simple fix using the get_type_hints method, which resolves
the annotations properly and is supported across all Python versions.

  - **Issue:** #31051

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-06-05 11:58:36 -04:00
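A small demonstration of why `get_type_hints` is needed: string annotations defeat `Annotated` detection until they are resolved into real typing objects:
```python
from typing import Annotated, get_type_hints

def magic(x: "Annotated[int, 'the multiplier']") -> int:
    return x * 2

# Raw __annotations__ may hold plain strings, which look nothing like
# Annotated[...] to a type-inspection check:
print(type(magic.__annotations__["x"]))  # <class 'str'>

# get_type_hints resolves them into actual typing objects:
hints = get_type_hints(magic, include_extras=True)
print(hints["x"])  # typing.Annotated[int, 'the multiplier']
```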
Mrityunjay Jha
dea43436ea docs: updated incorrect datatype for custom tool notebook (#31498)
- **Description:** `"string"` is given as the `"type"` for a custom tool
argument, even though it is an `integer`. This can be validated from the
Colab notebook output.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** [mrityu___](https://x.com/mrityu___)

    
Current: 

![image](https://github.com/user-attachments/assets/403c04c5-ba35-4845-a8ce-9e9c584a57b8)

After Change:

![image](https://github.com/user-attachments/assets/c0af90c4-2039-4b92-9be3-7b77d08bae3d)

Colab Output:

![image](https://github.com/user-attachments/assets/9495c574-21bf-475d-8ede-a14cb2576ffa)
2025-06-05 11:40:52 -04:00
ccurme
43bee469ce standard-tests: release 0.3.20 (#31499) 2025-06-05 11:28:18 -04:00
ccurme
741bb1ffa1 core[patch]: revert change to stream type hint (#31501)
https://github.com/langchain-ai/langchain/pull/31286 included an update
to the return type for `BaseChatModel.(a)stream`, from
`Iterator[BaseMessageChunk]` to `Iterator[BaseMessage]`.

This change is correct, because when streaming is disabled, the stream
methods return an iterator of `BaseMessage`, and the inheritance is such
that an `BaseMessage` is not a `BaseMessageChunk` (but the reverse is
true).

However, LangChain includes a pattern throughout its docs of [summing
BaseMessageChunks](https://python.langchain.com/docs/how_to/streaming/#llms-and-chat-models)
to accumulate a chat model stream. This pattern is implemented in tests
for most integration packages and appears in application code. So
https://github.com/langchain-ai/langchain/pull/31286 introduces mypy
errors throughout the ecosystem (or maybe more accurately, it reveals
that this pattern does not account for use of the `.stream` method when
streaming is disabled).

Here we revert just the change to the stream return type to unblock
things. A fix for this should address docs + integration packages (or if
we elect to just force people to update code, be explicit about that).
2025-06-05 11:20:06 -04:00
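The accumulation pattern the revert preserves, sketched with in-memory chunks rather than a live model:
```python
from langchain_core.messages import AIMessageChunk

# Message chunks support `+`, so a stream can be summed into one message.
full = None
for chunk in (AIMessageChunk(content="Hel"), AIMessageChunk(content="lo")):
    full = chunk if full is None else full + chunk

print(full.content)  # "Hello"
```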
Eugene Yurtsev
b149cce5f8 Revert "docs: replace deprecated initialize_agent with create_react_agent and non-deprecated functions" (#31492)
Reverts langchain-ai/langchain#31361
2025-06-04 14:42:47 +00:00
Michael Li
222578b296 docs: fix grammar issues in rag_with_quantized_embeddings.ipynb and imessage.ipynb (#31460) 2025-06-04 10:28:55 -04:00
Michael Li
e845a83099 docs: fix grammar issues in airbyte_typeform.ipynb and airbyte_zendes… (#31476) 2025-06-04 10:28:32 -04:00
Michael Li
457b235b5d docs: fix grammar issues in alibaba_cloud_maxcompute.ipynb and async_chromium.ipynb (#31477) 2025-06-04 10:28:01 -04:00
Michael Li
d2a023a183 docs: fix grammar and typo issues in async_html.ipynb and azure_blob_storage_file.ipynb (#31478) 2025-06-04 10:27:44 -04:00
Michael Li
21d6f1fc6a docs: fix lets typos in multiple files (#31481)
Fix typo
2025-06-04 10:27:16 -04:00
Ahmad Elmalah
f97e1825b7 Docs: Amazon Textract Page - removing a redundancy and fixing a title (#31488)
- The second 'the' is redundant 
- Fixing the title grammatically
2025-06-04 10:26:51 -04:00
Ahmed Hassan
81db124351 docs: replace deprecated initialize_agent with create_react_agent and non-deprecated functions (#31361)
**Description:**  
This PR updates approximately 4' occurrences of the deprecated
`initialize_agent` function in LangChain documentation and examples,
replacing it with the recommended `create_react_agent` pattern. It
also refactors related examples to align with current best practices.

**Issue:**  
Partially Fixes #29277


**Dependencies:**  
None

**X handle:**  
@TK1475

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-06-04 10:24:30 -04:00
Marlene
4ec46aeb73 Docs: Updating Microsoft Provider Documentation to Include Azure AI (#31467)
This PR adds documentation to our Microsoft Provider page for LangChain
Azure AI. This PR does not add any extra dependencies or require any
tests besides passing CI.
2025-06-04 10:23:01 -04:00
Cheney Zhang
993e34fafb docs: Update Milvus feature table (#31472)
We found that the [table of langchain-milvus
features](https://python.langchain.com/docs/integrations/vectorstores/)
is not consistent with the currently implemented code, so we update it
in this PR.

- searchByVector: code is
[here](e29ff1bff5/libs/milvus/langchain_milvus/vectorstores/milvus.py (L1543))
- passesStandardTests: all methods are tested (including unit and
integration tests); see an example [here](
https://github.com/langchain-ai/langchain-milvus/actions/runs/15347213828/job/43186093988),
the test code is
[here](https://github.com/langchain-ai/langchain-milvus/tree/main/libs/milvus/tests),
and the github workflow is defined
[here](https://github.com/langchain-ai/langchain-milvus/blob/main/.github/workflows/_test.yml)
- multiTenancy: Milvus supports different kinds of [multi
tenancy](https://milvus.io/docs/multi_tenancy.md#Implement-Multi-tenancy),
which are also implemented by langchain_milvus
- database level: specify the database name in
[connection_args](e29ff1bff5/libs/milvus/langchain_milvus/vectorstores/milvus.py (L374))
- collection level: specify the collection in [collection_name
param](e29ff1bff5/libs/milvus/langchain_milvus/vectorstores/milvus.py (L337))
- partition level: specify the [partition-related params
](e29ff1bff5/libs/milvus/langchain_milvus/vectorstores/milvus.py (L280))
- idsInAddDocuments: [add document
method](e29ff1bff5/libs/milvus/langchain_milvus/vectorstores/milvus.py (L2030))
supports an ids param passed in (then passed to the add_texts method
[here](e29ff1bff5/libs/milvus/langchain_milvus/vectorstores/milvus.py (L1102)))
@ccurme  please take a review, thanks.

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2025-06-03 16:56:52 -04:00
Bagatur
ec8bab83f8 anthropic[fix]: bump langchain-core dep (#31483) 2025-06-03 10:56:48 -04:00
Bagatur
310e643842 release[anthropic]: 0.3.15 (#31479)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-06-03 10:38:11 -04:00
Ahmad Elmalah
e70ec3b9fa Docs: Textract Integration Page - Fixing a typo (#31475)
Fixing a small typo in the AWS Textract integration page
2025-06-03 10:10:40 -04:00
Michael Li
9649222322 docs: fix typo in multiple files (#31480)
Fix typo
2025-06-03 10:10:20 -04:00
Christophe Bornet
539e5b6936 core: Add mypy strict-equality rule (#31286) 2025-06-02 18:24:35 +00:00
Sam Zhang
2c4e0ab3bc fix: module 'defusedxml' has no attribute 'ElementTree' (#31429) (#31431)
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-06-02 18:09:22 +00:00
Eugene Yurtsev
b93ed192bd docs: openai responses api image generation (#31444)
Document feature enabled in this PR:
https://github.com/langchain-ai/langchain/pull/31424
2025-06-02 13:50:35 -04:00
Diego Tabares
c6885a0f23 docs: Document Loader for the Outline collaborative knowledge base (#31395)
**Description:** Adds documentation on how to use `langchain-outline`
document loader package.
**Issue:** None - document loader documentation
**Dependencies:** None
**Twitter handle:** `@10Pines`
2025-06-02 12:50:10 -04:00
Michael Li
f64d48d507 docs: fix grammar issues in slack.ipynb and telegram.ipynb (#31461)
fix grammar
2025-06-02 12:45:33 -04:00
Eugene Yurtsev
6cb3ea514a openai: release 0.3.19 (#31466)
Release 0.3.19
2025-06-02 12:44:49 -04:00
Michael Li
227aac5d07 docs: fix grammar issues in wechat.ipynb and airbyte_cdk.ipynb (#31462)
Fix grammar
2025-06-02 12:44:23 -04:00
Michael Li
05b9bce05b docs: fix grammar issues in airbyte_gong.ipynb and airbyte_hubspot.ipynb (#31463)
Fix grammar
2025-06-02 12:43:57 -04:00
Michael Li
d359b7b737 docs: fix grammar issues in airbyte_json.ipynb and airbyte_salesforce.ipynb (#31464)
2025-06-02 12:43:27 -04:00
Michael Li
2e4d76d772 docs: fix grammar issues in airbyte_shopify.ipynb and airbyte_stripe.ipynb (#31465)
fix grammar issue
2025-06-02 12:42:52 -04:00
Cole McIntosh
ffa32a1802 docs: enhance Salesforce Toolkit documentation (#31387)
**Description:** Enhanced the Salesforce Toolkit documentation to
provide a more comprehensive overview of the `langchain-salesforce`
package. The updates include improved descriptions of the toolkit's
capabilities, detailed setup instructions for authentication using
environment variables, updated code snippets with consistent parameter
naming and improved readability, and additional resources with API
references for better user guidance.

**Issue:** N/A (documentation improvement)

**Dependencies:** None

**Twitter handle:** @colesmcintosh

---

### Changes Made:
- Improved description of the Salesforce Toolkit's capabilities and
features
- Added detailed setup instructions for authentication using environment
variables
- Updated code snippets to use consistent parameter naming and improved
readability
- Included additional resources and API references for better user
guidance
- Enhanced overall documentation structure and clarity

### Files Modified:
- `docs/docs/integrations/tools/salesforce.ipynb` (83 insertions, 36
deletions)

This is a documentation-only change that improves the user experience
for developers working with the Salesforce Toolkit. The changes are
backwards compatible and follow LangChain's documentation standards.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-06-02 11:01:22 -04:00
Eugene Yurtsev
17f34baa88 openai[minor]: add image generation to responses api (#31424)
Does not support partial images during generation at the moment. Before
doing that I'd like to figure out how to specify the aggregation logic
without requiring changes in core.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-02 10:03:54 -04:00
Jeel
9a78246d29 docs: fix typo in local_llms.ipynb (#31449)
change `--no-cache-dirclear` -> `--no-cache-dir`.

pip throws `no such option: --no-cache-dirclear` since it's invalid.
`--no-cache-dir` is the correct one.
2025-06-01 10:50:30 -04:00
ccurme
d3be4a0c56 infra: remove use of --vcr-record=none (#31452)
This option is specific to `pytest-vcr`. `pytest-recording` runs in this
mode by default.
2025-06-01 10:49:59 -04:00
ccurme
3db1aa0ba6 standard-tests: migrate to pytest-recording (#31425)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-05-31 15:21:15 -04:00
Tian Siyuan
d7f90f233b fix a typo (#31447)
2025-05-31 13:55:07 -04:00
ccurme
38c19d2891 docs: remove unnecessary setup from tutorial (#31445) 2025-05-30 21:59:21 +00:00
Michael Li
c284fdae89 docs: fix naver_search description at https://python.langchain.com/docs/integrations/tools/ All tools section (#31426)
2025-05-30 17:51:55 -04:00
Michael Li
e16e09637c docs: fix vectara description at https://python.langchain.com/docs/integrations/tools/ All tools section (#31427)
2025-05-30 17:51:33 -04:00
Michael Li
d1b32e0ecf docs: fix grammar issues in tencent_hunyuan.ipynb and together.ipynb (#31438)
2025-05-30 17:51:10 -04:00
Michael Li
079b97efde docs: fix grammar and vocabulary errors in vllm.ipynb and volcengine_maas.ipynb (#31439)
2025-05-30 17:50:06 -04:00
Michael Li
8e3dc1f2ea docs: fix grammar errors in yi.ipynb and zhipuai.ipynb (#31441)
2025-05-30 17:49:11 -04:00
Michael Li
48fa4ca271 docs: fix grammar issues in facebook.ipynb and langsmith_dataset.ipynb (#31442)
2025-05-30 17:48:45 -04:00
ccurme
5bf89628bf groq[patch]: update model for integration tests (#31440)
Llama-3.1 started failing consistently with
> groq.BadRequestError: Error code: 400 - {'error': {'message': "Failed to call a function. Please adjust your prompt. See 'failed_generation' for more details.", 'type': 'invalid_request_error', 'code': 'tool_use_failed', 'failed_generation': '<function=brave_search>{"query": "Hello!"}</function>'}}
2025-05-30 17:27:12 +00:00
Jorge Piedrahita Ortiz
5b9394319b docs: samabanova doc minor fixes (#31436)
- **Description:** SambaNova provider docs minor fixes
2025-05-30 12:07:04 -04:00
ccurme
bbb60e210a docs: add example of simultaneous tool-calling + structured output for OpenAI (#31433) 2025-05-30 09:29:36 -04:00
Michael Li
d79b5813a0 doc: fix grammar in writer.ipynb (#31400)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-29 20:03:17 +00:00
अंkur गोswami
729526ff7c huggingface: Undefined model_id fix (#31358)
**Description:** This change fixes the undefined model_id issue when
instantiating
[ChatHuggingFace](https://github.com/langchain-ai/langchain/blob/master/libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py#L306)
**Issue:** Fixes https://github.com/langchain-ai/langchain/issues/31357


@baskaryan @hwchase17
2025-05-29 15:59:35 -04:00
Michael Li
b7f34749b1 docs: fix grammar issue in assign.ipynb and fireworks.ipynb (#31412)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-29 19:55:36 +00:00
Michael Li
dd4fc8ab8f docs: fix misspelled word in kinetica.ipynb and nvidia_ai_endpoints.ipynb (#31415)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-29 19:41:38 +00:00
Michael Li
cc6df95e58 docs: fix grammar and vocabulary issue in reka.ipynb (#31417)
Fix grammar

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-29 15:25:45 -04:00
ccurme
c8951ca124 infra: drop azure from streaming benchmarks (#31421)
Covered by BaseChatOpenAI
2025-05-29 15:06:12 -04:00
Eugene Yurtsev
19f2a92609 core: release 0.3.63 (#31419)
Release core 0.3.63

Small update just to expand the list of well-known tools. This is
necessary while the logic lives in langchain-core.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-29 14:48:18 -04:00
ccurme
394d42b4ae docs: update default model (#31420) 2025-05-29 14:28:05 -04:00
ccurme
afd349cc95 openai: cache httpx client (#31260)
![Screenshot 2025-05-16 at 3 49 54 PM](https://github.com/user-attachments/assets/4b377384-a769-4487-b801-bd1aa0ed66c1)
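
The idea, as a hypothetical sketch rather than the actual langchain-openai code: memoize client construction so repeated `ChatOpenAI` instances share one connection pool.

```python
from functools import lru_cache

import httpx

@lru_cache(maxsize=None)
def _cached_httpx_client(base_url: str, timeout: float) -> httpx.Client:
    # one pooled client per (base_url, timeout) combination
    return httpx.Client(base_url=base_url, timeout=timeout)

client_a = _cached_httpx_client("https://api.openai.com/v1", 60.0)
client_b = _cached_httpx_client("https://api.openai.com/v1", 60.0)
assert client_a is client_b  # the pooled client is reused, not rebuilt
```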

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-05-29 14:03:06 -04:00
Eugene Yurtsev
e6633a7efb langchain-core: Add image_generation tool to list of known openai tools (#31396)
Add the image generation tool to the list of well-known tools. This is needed for changes in the ChatOpenAI client.

TODO: Some of this logic needs to be moved from core directly into the client as changes in core should not be required to add a new tool to the openai chat client.
2025-05-29 13:13:21 -04:00
Michael Li
d9631edd87 docs: fix misspelled word in promptlayer.ipynb (#31410) 2025-05-29 15:55:35 +00:00
Michael Li
c532facbc4 docs: fix misspelled word in uptrain.ipynb (#31411) 2025-05-29 11:46:36 -04:00
Sydney Runkle
1917dd1ccd benchmarks: always run (not conditional on changes) (#31409) 2025-05-29 11:45:57 -04:00
ccurme
49eeb0f3c3 standard-tests: add benchmarks (#31302)
Co-authored-by: Sydney Runkle <sydneymarierunkle@gmail.com>
2025-05-29 15:21:37 +00:00
ccurme
6d39e59c2e infra: update cassettes for notebook (#31407) 2025-05-29 15:21:29 +00:00
Michael Li
b808d27284 docs: fix grammar in xai.ipynb (#31399)
2025-05-28 14:22:30 -04:00
Michael Li
b81a4ed77e doc: fix grammar in vectara.ipynb (#31401)
2025-05-28 14:21:58 -04:00
ccurme
0e3f35effe anthropic: store cache ttl details on usage metadata (#31393) 2025-05-28 13:52:37 -04:00
Michael Li
0aec05bde5 docs: fix grammar in multiple docs (#31375)
Fix grammar in multiple docs
2025-05-28 12:12:56 -04:00
Yilin Juang
bf1750a771 docs: fix typo (#31388)
- **Description:** fix typo in ragatouille doc
2025-05-28 12:12:16 -04:00
Hanieh Moshki
7b8b8a26e7 docs: update typos in release_policy.mdx (#31397)
Deleted two outdated phrases that reflected the package versions at the
time of writing: 1. "langchain-community is currently on version 0.2.x."
2. "langchain-experimental is currently on version 0.0.x."
2025-05-28 11:45:55 -04:00
Alexander Ng
60119b9ba6 [UPDATE] Valyu 0.2.0 langchain package (#31363)
docs: update Valyu integration notebooks to reflect current
langchain-valyu package implementation

Updated the Valyu integration documentation notebooks to align with the
current implementation of the langchain-valyu package. The changes
include (see the sketch after this list):

- Updated ValyuContextRetriever to ValyuRetriever class name
- Changed parameter name from similarity_threshold to
relevance_threshold
- Removed query_rewrite parameter from search tool examples
- Added start_date and end_date parameters for time filtering
- Updated default values to match current implementation
(relevance_threshold: 0.5)
- Enhanced parameter documentation with proper descriptions and
constraints
- Updated section titles to reflect "Deep Search" functionality
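
A sketch of the updated usage based on the parameters named above (`k` and the date format are assumptions; `relevance_threshold=0.5` is the stated default):

```python
from langchain_valyu import ValyuRetriever  # renamed from ValyuContextRetriever

retriever = ValyuRetriever(
    k=5,
    relevance_threshold=0.5,  # formerly similarity_threshold
    start_date="2024-01-01",  # new time-filtering parameters
    end_date="2024-12-31",
)
docs = retriever.invoke("What are agentic frameworks?")
```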
2025-05-27 16:06:05 -04:00
Michael Li
5a13ad02ca docs: fix wikipedia description at https://python.langchain.com/docs/integrations/retrievers/ All retrievers section (#31372)
2025-05-27 15:59:14 -04:00
Michael Li
0c80a2c55c docs: fix google_books and memgraph descriptions at https://python.langchain.com/docs/integrations/tools/ All tools section (#31373)
2025-05-27 15:57:57 -04:00
Michael Li
dcb88b618d docs: fix scrapfly and scrapingant descriptions at https://python.langchain.com/docs/integrations/document_loaders/ All document loaders section (#31370)
2025-05-27 15:57:38 -04:00
Michael Li
9bd956598d docs: fix pdfloaders' descriptions at https://python.langchain.com/docs/integrations/document_loaders/ All document loaders section (#31371)
2025-05-27 15:56:55 -04:00
Michael Li
0478f544d5 docs: fix blockchain and mintbase descriptions at https://python.langchain.com/docs/integrations/document_loaders/ All document loaders section (#31369)
2025-05-27 15:56:33 -04:00
ccurme
ab8b4003be openai[patch]: add test case for code interpreter (#31383) 2025-05-27 19:11:31 +00:00
ccurme
c8a656c05b docs: update xai docs (#31382) 2025-05-27 15:09:51 -04:00
ccurme
6ecc85c163 xai: document live search feature (#31381) 2025-05-27 14:51:19 -04:00
ccurme
5bff018951 xai: release 0.2.4 (#31380) 2025-05-27 14:33:36 -04:00
ccurme
8b1f54c419 xai: support live search (#31379)
https://docs.x.ai/docs/guides/live-search
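
A minimal sketch of enabling live search (the `search_parameters` payload follows the linked xAI guide; the model name is an assumption):

```python
from langchain_xai import ChatXAI

llm = ChatXAI(
    model="grok-3-latest",
    search_parameters={"mode": "auto"},  # let the model decide when to search
)
print(llm.invoke("Summarize today's top tech headline.").content)
```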
2025-05-27 14:08:59 -04:00
ccurme
443341a20d anthropic: release 0.3.14 (#31378) 2025-05-27 17:31:05 +00:00
ccurme
930aa6073e core: release 0.3.62 (#31376) 2025-05-27 16:52:09 +00:00
ccurme
580986b260 anthropic: support for code execution, MCP connector, files API features (#31340)
Support for the new [batch of beta
features](https://www.anthropic.com/news/agent-capabilities-api)
released yesterday:

- [Code
execution](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/code-execution-tool)
- [MCP
connector](https://docs.anthropic.com/en/docs/agents-and-tools/mcp-connector)
- [Files
API](https://docs.anthropic.com/en/docs/build-with-claude/files)

Also verified support for [prompt cache
TTL](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#1-hour-cache-duration-beta).
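
A sketch of opting into one of these betas; the beta header and tool type strings come from Anthropic's announcement, while the model name and prompt are assumptions:

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    # beta features are gated behind an anthropic-beta header
    default_headers={"anthropic-beta": "code-execution-2025-05-22"},
)
llm_with_exec = llm.bind_tools(
    [{"type": "code_execution_20250522", "name": "code_execution"}]
)
response = llm_with_exec.invoke("Use Python to compute the mean of [1, 4, 9].")
```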
2025-05-27 12:45:45 -04:00
Ahmad Elmalah
1ebcbf1d11 Docs: Fixing a little typo in extraction tutorial (#31360)
Just a little typo: "force" should be "forced"
2025-05-26 15:30:05 -04:00
Michael Li
611a3cb02a docs: fix incorrect grammar in azure_ai.ipynb (#31345)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-26 19:23:33 +00:00
Hyunjoon Jeong
4f94548bb7 docs: fix vocabs and missing redirect links in concepts (#31352)
### Description
Fix typo errors and missing redirect links in LangChain concepts
documents.

### Issue
None

### Dependencies
None

---------

Co-authored-by: 정현준/AIX 팀 <126749237+hyunjoon1015@users.noreply.github.com>
2025-05-26 15:22:35 -04:00
Michael Li
71cf26095e docs: fix model2vec description at https://python.langchain.com/docs/integrations/text_embedding/ All embedding models section (#31349)
2025-05-26 15:17:29 -04:00
Michael Li
4e8779b3a5 docs: fix incorrect grammar in octoai.ipynb and predictionguard.ipynb (#31347)
2025-05-26 15:16:42 -04:00
Michael Li
19166a6af6 docs: fix incorrect grammar and vocabulary in llamacpp.ipynb (#31346)
2025-05-26 15:12:34 -04:00
Michael Li
b5dfdd1ab2 docs: fix incorrect grammar in abso.ipynb (#31344)
2025-05-26 15:07:15 -04:00
Michael Li
c8448c9a61 docs: fix incorrect grammar and vocabulary in ernie.ipynb (#31343)
2025-05-26 15:05:24 -04:00
Michael Li
de3d8b4b5c docs: fix Langgraph Platform URL in Readme file (#31341)
2025-05-26 14:59:48 -04:00
Aman pandey
d0b7e2f03d docs: fix truncation of retriever descriptions in overview table (#31327)
### Description
Added a note above the retriever overview table to clarify that the
descriptions are truncated for readability and how to view the full
version (via hover or click).

### Issue
Fixes #31311 — Users were confused by incomplete retriever descriptions
in the integration docs.

### Dependencies
None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-24 02:14:48 +00:00
Michael Li
20e48598dc docs: fix ai21 description at https://python.langchain.com/docs/integrations/chat/ All chat models section (#31313)
2025-05-23 16:49:18 -04:00
Michael Li
1c03555f95 docs: fix maritalk description at https://python.langchain.com/docs/integrations/chat/ All chat models section (#31314)
2025-05-23 16:48:10 -04:00
Michael Li
1a873fca87 docs: fix nebula description at https://python.langchain.com/docs/integrations/chat/ All chat models section (#31315)
2025-05-23 16:47:35 -04:00
Michael Li
6bc497cc0f docs: fix vectara description at https://python.langchain.com/docs/integrations/chat/ All chat models section (#31316)
2025-05-23 16:46:52 -04:00
DayDang
5bf539f405 docs: fix missing 'self' reference in for-loop over documents (#31332)
**Description:** fix missing 'self' reference in for-loop over documents
2025-05-23 16:39:59 -04:00
Harikrishna
f2b4698b54 langchain[patch]: update AgentType docstring with correct documentation URL (#31333)
### What does this PR do?

Updates the docstring for `AgentType` in the
`langchain.agents.agent_types` module to reflect the current URL for the
documentation.

### Why is this needed?

The existing URL
(https://python.langchain.com/docs/modules/agents/agent_types/) returns
"Page Moved" message. This fix improves developer experience by pointing
to the correct API reference documentation.

### Reference

New link:
https://python.langchain.com/api_reference/langchain/agents/langchain.agents.agent_types.AgentType.html

Co-authored-by: Harikrishna <harikrishna.gurram@walmart.com>
2025-05-23 16:38:57 -04:00
Michael Li
e2b54a5d72 docs: update deprecated doc for solar.ipynb (#31337)
2025-05-23 16:38:03 -04:00
Ante Javor
83ea571d57 docs: Fix Memgraph class name (#31329)
- **Description:** Fix the Memgraph class name, which was changed in the
integration.
2025-05-23 16:36:39 -04:00
ccurme
635ce60a22 Revert "infra: temporarily drop OpenAI from core release test matrix" (#31322)
Reverts langchain-ai/langchain#31318
2025-05-22 17:25:17 +00:00
ccurme
0ce2e69cc1 openai: release 0.3.18 (#31320) 2025-05-22 12:53:53 -04:00
ccurme
851fd438cf openai[patch]: relax Azure llm streaming callback test (#31319)
Effectively reverts
https://github.com/langchain-ai/langchain/pull/29302, but checks that
counts are "less than" rather than equal to an expected count.
2025-05-22 16:14:53 +00:00
ccurme
092697de60 infra: temporarily drop OpenAI from core release test matrix (#31318)
As part of core releases we run tests on the last released version of
some packages (including langchain-openai) using the new version of
langchain-core. We run langchain-openai's test suite as it was when it
was last released.

OpenAI has since updated their API, relaxing constraints on what schemas
are supported when `strict=True`, which caused these tests to break. The
tests have since been fixed, but the previously released tests will
continue to fail.

Will revert this change after we release OpenAI today.
2025-05-22 16:07:45 +00:00
ccurme
71c074d28f core: release 0.3.61 (#31317) 2025-05-22 11:54:28 -04:00
ccurme
053a1246da openai[patch]: support built-in code interpreter and remote MCP tools (#31304) 2025-05-22 11:47:57 -04:00
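
A sketch of binding the new built-in tools (the tool payloads follow OpenAI's Responses API docs; the MCP server URL is a placeholder):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4.1", use_responses_api=True)
llm_with_tools = llm.bind_tools(
    [
        # hosted code interpreter with an auto-provisioned container
        {"type": "code_interpreter", "container": {"type": "auto"}},
        # remote MCP server exposed to the model
        {
            "type": "mcp",
            "server_label": "example",
            "server_url": "https://example.com/mcp",
            "require_approval": "never",
        },
    ]
)
response = llm_with_tools.invoke("What is 3**4 + 4**3? Run code to check.")
```
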
ccurme
1b5ffe4107 openai[patch]: run _tokenize in background thread in async embedding invocations (#31312) 2025-05-22 10:27:33 -04:00
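
The pattern behind this change, as a generic sketch (not the actual langchain-openai code): push blocking tokenization off the event loop with `asyncio.to_thread`.

```python
import asyncio

def _tokenize(texts: list[str]) -> list[list[str]]:
    # stand-in for the CPU-bound tiktoken work done before each embeddings call
    return [text.split() for text in texts]

async def aembed_documents(texts: list[str]) -> list[list[str]]:
    # run the blocking tokenizer in a worker thread so the loop stays responsive
    return await asyncio.to_thread(_tokenize, texts)

print(asyncio.run(aembed_documents(["hello world", "foo bar"])))
```
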
Ishan Goswami
f16456139b exa docs and python package update (#31307)
Added support for new Exa API features. Updated Exa docs and python
package (langchain-exa).

Description

Added support for new Exa API features in the langchain-exa package (see the sketch after this list):
- Added max_characters option for text content
- Added support for summary and custom summary prompts
- Added livecrawl option with "always", "fallback", "never" settings
- Added "auto" option for search type
- Updated documentation and tests
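
A sketch wiring up the options listed above (parameter names and accepted values are as described in this PR; verify against the langchain-exa docs before relying on them):

```python
from langchain_exa import ExaSearchRetriever

retriever = ExaSearchRetriever(
    k=3,
    type="auto",  # new "auto" search type
    livecrawl="fallback",  # "always" | "fallback" | "never"
    text_contents_options={"max_characters": 2000},
    summary={"query": "key developments only"},  # custom summary prompt
)
docs = retriever.invoke("latest advances in retrieval-augmented generation")
```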

Dependencies
- No new dependencies required. Using existing features from exa-py.

twitter: @theishangoswami

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-21 21:33:30 -04:00
Mason Daugherty
cf1fa27e27 docs: correct groq description typo (#31308)
**Description:** a space was missing between the words "Groq" and
"developed" and has been added
2025-05-21 21:23:37 -04:00
ccurme
beacedd6b3 openai[patch]: update tests for strict schemas (#31306)
Following recent [changes](https://platform.openai.com/docs/changelog).
2025-05-21 22:06:17 +00:00
Vikram Saraph
53d6286539 Fix link to deprecation alternative for ConversationChain in docs (#31299)
**Description:** ConversationChain has been deprecated, and the
documentation says to use RunnableWithMessageHistory in its place, but
the link at the top of the page to RunnableWithMessageHistory is broken
(it's rendering as "html()"). See here at the top of the page:
https://python.langchain.com/api_reference/langchain/chains/langchain.chains.conversation.base.ConversationChain.html.
This PR fixes the link.
**Issue**: N/A
**Dependencies**: N/A
**Twitter handle:** If you're on Bluesky, I'm @vikramsaraph.com
2025-05-21 09:31:06 -04:00
Ako
7b45d46210 ci: fix typo in doc-string (#31284)
Fix typo

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-05-20 20:52:18 +00:00
Shaya Ulman
580fc7d464 docs: fix broken link (#31294)
As stated
[here](https://github.com/langchain-ai/langchain/blob/master/libs/community/README.md)
this package has moved, hence the broken link
2025-05-20 16:49:50 -04:00
Mert Arcan
6993bc9ad1 typo fix (#31295)
Typo fix: `%pip install -upgrade langchain-google` ->
`%pip install --upgrade langchain-google`
2025-05-20 16:49:21 -04:00
ccurme
dcb5aba999 openai[patch]: reduce tested constraints on strict schema adherence for Responses API (#31290)
Scheduled testing started failing today because the Responses API
stopped raising `BadRequestError` for a schema that was previously
invalid when `strict=True`.

Although docs still say that [some type-specific keywords are not yet
supported](https://platform.openai.com/docs/guides/structured-outputs#some-type-specific-keywords-are-not-yet-supported)
(including `minimum` and `maximum` for numbers), the below appears to
run and correctly respect the constraints:
```python
import json
import openai

maximums = list(range(1, 11))
arg_values = []
for maximum in maximums:

    tool = {
        "type": "function",
        "name": "magic_function",
        "description": "Applies a magic function to an input.",
        "parameters": {
            "properties": {
                "input": {"maximum": maximum, "minimum": 0, "type": "integer"}
            },
            "required": ["input"],
            "type": "object",
            "additionalProperties": False
        },
        "strict": True
    }
    
    client = openai.OpenAI()
    
    response = client.responses.create(
        model="gpt-4.1",
        input=[{"role": "user", "content": "What is the value of magic_function(3)? Use the tool."}],
        tools=[tool],
    )
    function_call = next(item for item in response.output if item.type == "function_call")
    args = json.loads(function_call.arguments)
    arg_values.append(args["input"])


print(maximums)
print(arg_values)

# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# [1, 2, 3, 3, 3, 3, 3, 3, 3, 3]
```
Until yesterday this raised BadRequestError.

The same is not true of Chat Completions, which appears to still raise
BadRequestError
```python
tool = {
    "type": "function",
    "function": {
        "name": "magic_function",
        "description": "Applies a magic function to an input.",
        "parameters": {
            "properties": {
                "input": {"maximum": 5, "minimum": 0, "type": "integer"}
            },
            "required": ["input"],
            "type": "object",
            "additionalProperties": False
        },
        "strict": True
    }
}

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "What is the value of magic_function(3)? Use the tool."}],
    tools=[tool],
)
response  # raises BadRequestError
```

Here we update tests accordingly.
2025-05-20 14:50:31 +00:00
Ahmad Elmalah
f29659728c Docs: Fixing a typo in retrievers concepts page (#31288)
The second 'is' is redundant; it makes the meaning confusing
2025-05-20 09:08:33 -04:00
Michael Li
916768e3c1 doc: replace initialize agent in steam.ipynb (#31283)
2025-05-19 19:21:35 -04:00
Himanshu Sharma
ff12555bdc docs: clarify hub.pull behavior with non-US LangSmith endpoints (#31270)
**Description**:

This PR updates the documentation to address a potential issue when
using `hub.pull(...)` with non-US LangSmith endpoints (e.g.,
`https://eu.api.smith.langchain.com`).

By default, the `hub.pull` function assumes the US-based API URL.
When the `LANGSMITH_ENDPOINT` environment variable is set to a non-US
region, this can lead to `LangSmithNotFoundError 404 not found` errors
when pulling public assets from the LangChain Hub.
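For illustration, a hedged sketch of pointing `hub.pull` at a non-US endpoint explicitly (the prompt name is a placeholder):
```python
# Hedged sketch: pull from the EU endpoint instead of the default US API URL.
from langchain import hub

prompt = hub.pull(
    "rlm/rag-prompt",
    api_url="https://eu.api.smith.langchain.com",
)
```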

Issue: #31191

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-19 23:06:37 +00:00
Michael Li
0c6137ec2b docs: replace deprecated load_tools in google_jobs.ipynb (#31272)
2025-05-19 18:58:31 -04:00
ccurme
bf645c83f4 voyageai: remove from monorepo (#31281)
langchain-voyageai is now maintained at
https://github.com/voyage-ai/langchain-voyageai.
2025-05-19 16:33:38 +00:00
DarinVerheijke
49fbcec34f community: add Featherless.ai integration (#31250)
Update docs to add Featherless.ai Provider & Chat Model
- **Description:** Adding Featherless.ai as a provider in the
documentation, giving access to over 4,300 open-source models
- **Twitter handle:** https://x.com/FeatherlessAI
2025-05-19 10:40:25 -04:00
ccurme
32fcc97a90 openai[patch]: compat with Bedrock Converse (#31280)
ChatBedrockConverse passes through reasoning content blocks in [Bedrock
Converse
format](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html).

Similar to how we handle Anthropic thinking blocks, here we ensure these
are filtered out of OpenAI request payloads.

Resolves https://github.com/langchain-ai/langchain/issues/31279.
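For illustration, a minimal sketch of the filtering step (hedged; the block shape is an assumption based on the Converse format linked above, not the exact payload code):
```python
# Drop Bedrock Converse reasoning blocks from message content before
# building an OpenAI request payload (key name assumed from the API docs).
def strip_reasoning_blocks(content):
    if isinstance(content, list):
        return [
            block
            for block in content
            if not (isinstance(block, dict) and "reasoningContent" in block)
        ]
    return content
```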
2025-05-19 10:35:26 -04:00
Xu Song
8b6fec89bc docs: Update flashrank-reranker.ipynb (#31274)
- **Description:** Fix import in doc; `FlashrankRerank` has been moved
to `langchain_community`
2025-05-19 10:11:15 -04:00
ccurme
ea1f9e2d5b docs: remove integration page with DSPy (#31278)
DSPy removed their LangChain integration in version 2.6.6.

Here we remove the page and add a redirect to the LangChain v0.2 docs
for posterity.

We add an admonition to the v0.2 docs in
https://github.com/langchain-ai/langchain/pull/31277.
2025-05-19 10:10:20 -04:00
Christophe Bornet
17c5a1621f core: Improve Runnable __or__ method typing annotations (#31273)
* It is possible to chain a `Runnable` with an `AsyncIterator`, as seen
in `test_runnable.py`.
* Iterator and AsyncIterator Input/Output of Callables must be put
before `Callable[[Other], Any]`, otherwise the pattern matching picks the
latter.
2025-05-19 09:32:31 -04:00
mathislindner
e1af509966 anthropic: emit informative error message if there are only system messages in a prompt (#30822)
**PR message**: Not sure if I put the check at the right spot, but I
thought throwing the error before the loop made sense to me.
**Description:** Checks whether a prompt contains only system messages
when using the ChatAnthropic model and throws an informative error if
that is the case. See the issue for more details.
**Issue:** #30764
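For illustration, a minimal sketch of the check described above (hedged; not the exact code from this PR):
```python
from langchain_core.messages import BaseMessage, SystemMessage

def ensure_non_system_message(messages: list[BaseMessage]) -> None:
    # Raise before entering the formatting loop if the prompt contains
    # nothing but system messages.
    if messages and all(isinstance(m, SystemMessage) for m in messages):
        raise ValueError(
            "Received only system messages; Anthropic requires at least "
            "one non-system message."
        )
```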

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-16 20:43:59 +00:00
OysterMax
eb25d7472d core: support Union type args in strict mode of OpenAI function calling / structured output (#30971)
**Issue:** [#30970](https://github.com/langchain-ai/langchain/issues/30970)

**Cause**
An argument type in Python code
```
arg: Union[SubSchema1, SubSchema2]
```
is translated to `anyOf` in the **JSON schema**:
```
"anyOf": [{sub schema 1 ...}, {sub schema 2 ...}]
```
The value of `anyOf` is a list of sub-schemas.
The bug arose because the sub-schemas inside the `anyOf` list were not
handled.
The issue occurs in the `convert_to_openai_function` function ->
`_recursive_set_additional_properties_false` function, which recursively
adds `"additionalProperties": false` to the JSON schema, as
[required by OpenAI's strict function
calling](https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#additionalproperties-false-must-always-be-set-in-objects)

**Solution:**
This PR fixes the issue by iterating over each sub-schema inside the
`anyOf` list.
A unit test is added.
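For illustration, a minimal sketch of the recursion described above (hedged; the helper name and exact traversal in the real `_recursive_set_additional_properties_false` may differ):
```python
# Recursively set "additionalProperties": false, now also descending into
# the sub-schemas listed under "anyOf" (the case fixed by this PR).
def set_additional_properties_false(schema) -> None:
    if not isinstance(schema, dict):
        return
    if schema.get("type") == "object":
        schema.setdefault("additionalProperties", False)
    for sub in schema.get("anyOf", []):
        set_additional_properties_false(sub)
    for sub in schema.get("properties", {}).values():
        set_additional_properties_false(sub)
    if "items" in schema:
        set_additional_properties_false(schema["items"])
```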

**Twitter handle:** shengboma 



---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-16 16:20:32 -04:00
Christophe Bornet
c982573f1e core: Add ruff rules A (builtins shadowing) (#29312)
See https://docs.astral.sh/ruff/rules/#flake8-builtins-a
* Renamed vars where possible
* Added `noqa` where backward compatibility was needed
* Added `@override` when applicable
2025-05-16 15:19:37 -04:00
Shkarupa Alex
671e4fd114 langchain[patch]: Allow async indexing code to work for vectorstores that only defined sync delete (#30869)
The `aindex` function should check not only the `adelete` method, but
the `delete` method too.

**PR title**: "core: fix async indexing issue with adelete/delete
checking"
**PR message**: Currently `langchain.indexes.aindex` checks whether the
vector store has overridden the `adelete` method. But because of the
default implementation of `adelete`, a store can have just `delete`
overridden and still get a working `adelete`.
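For illustration, a minimal sketch of the corrected check (hedged; the real `aindex` logic differs in detail):
```python
from langchain_core.vectorstores import VectorStore

def supports_deletion(store: VectorStore) -> bool:
    # A method counts as overridden when the instance's class does not
    # inherit it directly from the VectorStore base class; checking both
    # adelete and delete lets sync-only stores pass.
    return (
        type(store).adelete is not VectorStore.adelete
        or type(store).delete is not VectorStore.delete
    )
```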

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-05-16 15:10:25 -04:00
Shashank Reddy Boosi
bd367ba10c docs: Update confluence code blocks with all the latest changes and update documentation (#31242)
Description: This documentation change concerns the document-loader
integration, specifically `Confluence`.

I was trying to use the ConfluenceLoader and came across deprecations
when following the instructions in the documentation, so I updated the
code blocks with the latest changes made to langchain and also improved
the documentation for better readability.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-16 09:47:06 -04:00
Baitur Ulukbekov
1f43b6062e Fix link SQL: Agents to proper destination in sql_large_db.ipynb (#31255)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-16 12:52:09 +00:00
Eugene Yurtsev
c178ad87b6 docs: remove docs that disable ssl verification (#31254)
Remove examples that disable ssl verification
2025-05-15 17:21:22 -04:00
mateencog
12b063eb67 Fix: Extend pyproject.toml check in create_api_rst.py to langchain/libs folder (#31128)
- **Description:** The file `docs/api_reference/create_api_rst.py`
uses a pyproject.toml check to remove partners which don't have a valid
pyproject.toml. This PR extends that check to `/langchain/libs/*`
sub-directories as well. Without this the `make api_docs_build`
command fails (see error).
- **Issue:** #31109
- **Dependencies:** none
- **Error Traceback:**
```
uv run --no-group test python docs/api_reference/create_api_rst.py
Starting to build API reference files.
Building package: community
pyproject.toml not found in /langchain/libs/community.
You are either attempting to build a directory which is not a package or
the package is missing a pyproject.toml file which should be added.
Aborting the build.
make: *** [Makefile:35: api_docs_build] Error 1
```
2025-05-15 16:53:45 -04:00
ccurme
a401d7e52a ollama: release 0.3.3 (#31253) 2025-05-15 16:24:04 -04:00
Alexey Bondarenko
9efafe3337 ollama: Add separate kwargs parameter for async client (#31209)
**Description**:

Add an `async_client_kwargs` field to the ollama chat/llm/embeddings
adapters that is passed to the async httpx client constructor.

**Motivation:**

In my use-case:
- chat/embedding model adapters may be created frequently, sometimes to
be called just once or never to be called at all
- they may be used in both sync and async mode (which is not known at
the moment they are created)

So I want to keep a static transport instance maintaining a connection
pool, so model adapters can be created and destroyed freely. But that
doesn't work when both sync and async functions are in use, as I can
only pass one transport instance for both the sync and async client,
while the transport types must differ between them. So I can't make both
sync and async calls use a shared transport with the current model
adapter interfaces.

In this PR I add a separate `async_client_kwargs` that gets passed to
the async client constructor, so it becomes possible to pass a separate
transport instance. For the sake of backwards compatibility, it is
merged with `client_kwargs`, so nothing changes when it is not set.

I was unable to run the linter right now, but the changes look OK.
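For illustration, a hedged usage sketch of the new field (the shared-transport setup is the use-case described above, not code from this PR):
```python
import httpx
from langchain_ollama import ChatOllama

# Separate transports, since sync and async httpx clients need different types.
sync_transport = httpx.HTTPTransport(retries=2)
async_transport = httpx.AsyncHTTPTransport(retries=2)

llm = ChatOllama(
    model="llama3",
    client_kwargs={"transport": sync_transport},
    async_client_kwargs={"transport": async_transport},  # new in this PR
)
```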
2025-05-15 16:10:10 -04:00
dependabot[bot]
03adca6c44 build(deps): bump Ana06/get-changed-files from 2.2.0 to 2.3.0 (#31236)
Bumps
[Ana06/get-changed-files](https://github.com/ana06/get-changed-files)
from 2.2.0 to 2.3.0. This action is a fork of
[jitterbit/get-changed-files](https://github.com/jitterbit/get-changed-files)
that supports `pull_request_target`, allows filtering files with regular
expressions, and adds an `added_modified_renamed` output. The v2.3.0
release updates the action to Node 20 and refreshes its dependencies.
Full diff: https://github.com/Ana06/get-changed-files/compare/v2.2.0...v2.3.0

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-15 16:08:16 -04:00
ccurme
6bbc12b7f7 chroma: release 0.2.4 (#31252) 2025-05-15 15:58:29 -04:00
Jai Radhakrishnan
aa4890c136 partners: update deps for langchain-chroma (#31251)
Updates the Chroma dependencies to integrate the major release of Chroma
with improved performance, and to fix issues users have been seeing when
running the latest Chroma docker image with langchain-chroma.
https://github.com/langchain-ai/langchain/issues/31047#issuecomment-2850790841
Updates chromadb dependency to >=1.0.9

This also removes the dependency of chroma-hnswlib, meaning it can run
against python 3.13 runners for tests as well.

Tested this by pulling the latest Chroma docker image, running
langchain-chroma using client mode
```python
import chromadb
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings  # any embeddings implementation works here

httpClient = chromadb.HttpClient(host="localhost", port=8000)

vector_store = Chroma(
    client=httpClient,
    collection_name="test",
    embedding_function=OpenAIEmbeddings(),
)
```
2025-05-15 15:55:15 -04:00
Christophe Bornet
a8f2ddee31 core: Add ruff rules RUF (#29353)
See https://docs.astral.sh/ruff/rules/#ruff-specific-rules-ruf
Mostly:
* [RUF022](https://docs.astral.sh/ruff/rules/unsorted-dunder-all/)
(unsorted `__all__`)
* [RUF100](https://docs.astral.sh/ruff/rules/unused-noqa/) (unused noqa)
*
[RUF021](https://docs.astral.sh/ruff/rules/parenthesize-chained-operators/)
(parenthesize-chained-operators)
*
[RUF015](https://docs.astral.sh/ruff/rules/unnecessary-iterable-allocation-for-first-element/)
(unnecessary-iterable-allocation-for-first-element)
*
[RUF005](https://docs.astral.sh/ruff/rules/collection-literal-concatenation/)
(collection-literal-concatenation)
* [RUF046](https://docs.astral.sh/ruff/rules/unnecessary-cast-to-int/)
(unnecessary-cast-to-int)

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-05-15 15:43:57 -04:00
Christophe Bornet
6cd1aadf60 langchain: use mypy strict checking with exemptions (#31018)
* Use strict checking and exclude some rules as TODOs
* Fix imports not exposed in `__all__`

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-05-15 11:37:18 -04:00
Christophe Bornet
eab8484a80 text-splitters[patch]: fix some import-untyped errors (#31030) 2025-05-15 11:34:22 -04:00
ccurme
672339f3c6 core: release 0.3.60 (#31249) 2025-05-15 11:14:04 -04:00
renchao
6f2acbcf2e Update sql_large_db.ipynb (#31241)
"Alanis Morissette" spelling error

2025-05-15 11:07:51 -04:00
ccurme
8b145d5dc3 openai: release 0.3.17 (#31246) 2025-05-15 09:18:22 -04:00
dependabot[bot]
d4f77a8c8f build(deps): bump actions/setup-python from 3 to 5 (#31234)
Bumps [actions/setup-python](https://github.com/actions/setup-python)
from 3 to 5. The v5.0.0 release updates the action's runtime from node16
to node20 and bumps its dependencies to the latest versions.
Full changelog: https://github.com/actions/setup-python/compare/v3...v5

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-14 15:48:33 -04:00
dependabot[bot]
71b71768bf build(deps): bump actions/setup-node from 3 to 4 (#31237)
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 3
to 4. The v4.0.0 release changes the action's runtime from node16 to
node20 and updates its dependencies.
Full changelog: https://github.com/actions/setup-node/compare/v3...v4.0.0

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-14 14:43:18 -04:00
Christophe Bornet
921573e2b7 core: Add ruff rules SLF (#30666)
Add ruff rules SLF: https://docs.astral.sh/ruff/rules/#flake8-self-slf

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-05-14 18:42:39 +00:00
dependabot[bot]
d8a7eda12e build(deps): bump astral-sh/setup-uv from 5 to 6 (#31235)
Bumps [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) from 5
to 6. The v6.0.0 release makes venv activation explicit via a new
`activate-environment` input, adds a `working-directory` input (removing
the `pyproject-file` and `uv-file` inputs), changes the default
`cache-dependency-glob`, and no longer sets a default `UV_CACHE_DIR` on
self-hosted runners.
Full diff: https://github.com/astral-sh/setup-uv/compare/v5...v6

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-14 14:41:13 -04:00
Minh Nguyen
8af0dc5fd6 docs: Update langchain-anthropic version for tutorial with web search tool (#31240)
**Description:** This is a document change regarding integration with
package `langchain-anthropic` for newly released websearch tool ([Claude
doc](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/web-search-tool)).

Issue 1: The sample in [Web Search
section](https://python.langchain.com/docs/integrations/chat/anthropic/#web-search)
did not run. You would get an error as below:
```
File "my_file.py", line 170, in call
    model_with_tools = model.bind_tools([websearch_tool])
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/langchain_anthropic/chat_models.py", line 1363, in bind_tools
    tool if _is_builtin_tool(tool) else convert_to_anthropic_tool(tool)
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/langchain_anthropic/chat_models.py", line 1645, in convert_to_anthropic_tool
    input_schema=oai_formatted["parameters"],
                 ~~~~~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'parameters'
```
This is because the websearch tool is only supported as of
`langchain-anthropic==0.3.13`; the [0.3.13
release](https://github.com/langchain-ai/langchain/releases?q=tag%3A%22langchain-anthropic%3D%3D0%22&expanded=true)
notes mention:
> anthropic[patch]: support web search
(https://github.com/langchain-ai/langchain/pull/31157)



Issue 2: The current doc has an outdated package requirement for the
websearch tool: "This guide requires langchain-anthropic>=0.3.10".

Changes:
- Updated the required `langchain-anthropic` package version (0.3.10 ->
0.3.13).
- Added notes for users running the websearch sample.

I believe this will help avoid future confusion from readers.

**Issue:** N/A
**Dependencies:** N/A
**Twitter handle:** N/A
2025-05-14 14:19:32 -04:00
Sydney Runkle
7263011b24 perf[core]: remove unnecessary model validators (#31238)
* Remove unnecessary cast of id -> str (can be done with a field setting)
* Remove unnecessary `set_text` model validator (can be done with a
computed field, though we had to make some changes to the `Generation`
class to make this possible)
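For illustration, a minimal sketch of the validator-to-computed-field pattern (hedged; the class and field names are hypothetical, not the actual `Generation` code):
```python
from pydantic import BaseModel, computed_field

class ExampleGeneration(BaseModel):
    content: str

    @computed_field  # derived lazily on access, replacing an eager model validator
    @property
    def text(self) -> str:
        return self.content
```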

Before: ~2.4s

Blue circles represent time spent in custom validators :(

<img width="1337" alt="Screenshot 2025-05-14 at 10 10 12 AM"
src="https://github.com/user-attachments/assets/bb4f477f-4ee3-4870-ae93-14ca7f197d55"
/>


After: ~2.2s

<img width="1344" alt="Screenshot 2025-05-14 at 10 11 03 AM"
src="https://github.com/user-attachments/assets/99f97d80-49de-462f-856f-9e7e8662adbc"
/>

We still want to optimize the backwards compatible tool calls model
validator, though I think this might involve breaking changes, so wanted
to separate that into a different PR. This is circled in green.
2025-05-14 10:20:22 -07:00
Sydney Runkle
1523602196 packaging[core]: bump min pydantic version (#31239)
Bumping to a version that's a year old, so seems like a reasonable bump.
2025-05-14 10:01:24 -07:00
ccurme
367566b02f docs: fix notebook (#31233)
This is no longer runnable in CI.
2025-05-14 11:53:38 -04:00
Scott Brenner
29bfbc0ea6 infra: Dependabot configuration to update actions in workflow (#31026)
Noticed a few Actions used in the workflows here are outdated; proposing
a Dependabot configuration to update them - reference
https://docs.github.com/en/actions/security-guides/using-githubs-security-features-to-secure-your-use-of-github-actions#keeping-the-actions-in-your-workflows-secure-and-up-to-date

Suggest enabling
https://docs.github.com/en/code-security/dependabot/working-with-dependabot/about-dependabot-on-github-actions-runners#enabling-or-disabling-for-your-repository
as well
2025-05-14 11:40:54 -04:00
Lope Ramos
b8ae2de169 langchain-core[patch]: Incremental record manager deletion should be batched (#31206)
**Description:** Before this commit, if one delete batch contained more
than 32k rows for sqlite3 >= 3.32, or more than 999 rows for sqlite3 <
3.31, `record_manager.delete_keys()` would fail, as we were creating a
query with too many variables.

This commit ensures that we batch the delete operation leveraging
`cleanup_batch_size`, as is already done for `full` cleanup.

Unit tests for incremental mode with different deletion batch sizes were
also added.
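For illustration, a minimal sketch of the batching described above (hedged; not the actual record manager code):
```python
def delete_keys_batched(record_manager, keys, cleanup_batch_size: int = 1000) -> None:
    # Delete in slices of cleanup_batch_size so no single SQL statement
    # exceeds sqlite3's bound-variable limit.
    for i in range(0, len(keys), cleanup_batch_size):
        record_manager.delete_keys(keys[i : i + cleanup_batch_size])
```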
2025-05-14 11:38:21 -04:00
Sydney Runkle
263c215112 perf[core]: remove generations summation from hot loop (#31231)
1. Removes summation of `ChatGenerationChunk` from hot loops in `stream`
and `astream`
2. Removes run id gen from loop as well (minor impact)

Again, benchmarking on processing ~200k chunks (a poem about broccoli).
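For illustration, a hedged sketch of the hot-loop change (illustrative, not the actual `stream` internals): collect chunks and merge once at the end instead of keeping a running sum on every iteration.
```python
import functools
import operator

def consume_stream(chunks):
    collected = []
    for chunk in chunks:          # hot loop: just record each chunk
        collected.append(chunk)
    # one O(n) merge afterwards instead of `total += chunk` per iteration
    return functools.reduce(operator.add, collected) if collected else None
```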

Before: ~4.2s

Blue circle is all the time spent adding up gen chunks

<img width="1345" alt="Screenshot 2025-05-14 at 7 48 33 AM"
src="https://github.com/user-attachments/assets/08a59d78-134d-4cd3-9d54-214de689df51"
/>

After: ~2.3s

Blue circle is remaining time spent on adding chunks, which can be
minimized in a future PR by optimizing the `merge_content`,
`merge_dicts`, and `merge_lists` utilities.

<img width="1353" alt="Screenshot 2025-05-14 at 7 50 08 AM"
src="https://github.com/user-attachments/assets/df6b3506-929e-4b6d-b198-7c4e992c6d34"
/>
2025-05-14 08:13:05 -07:00
Sydney Runkle
17b799860f perf[core]: remove costly async helpers for non-end event handlers (#31230)
1. Remove `shielded` decorator from non-end event handlers
2. Exit early with a `self.handlers` check instead of doing unnecessary
asyncio work
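For illustration, a hedged sketch of the early exit in point 2 (illustrative names, not the actual callback manager code):
```python
import asyncio

async def dispatch_event(handlers, token: str) -> None:
    # Bail out before creating coroutines and awaiting gather() when no
    # handlers are registered.
    if not handlers:
        return
    await asyncio.gather(*(h.on_llm_new_token(token) for h in handlers))
```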

Using a benchmark that processes ~200k chunks (a poem about broccoli).

Before: ~15s

Circled in blue is unnecessary event handling time. This is addressed by
point 2 above

<img width="1347" alt="Screenshot 2025-05-14 at 7 37 53 AM"
src="https://github.com/user-attachments/assets/675e0fed-8f37-46c0-90b3-bef3cb9a1e86"
/>

After: ~4.2s

The total time is largely reduced by the removal of the `shielded`
decorator, which holds little significance for non-end handlers.

<img width="1348" alt="Screenshot 2025-05-14 at 7 37 22 AM"
src="https://github.com/user-attachments/assets/54be8a3e-5827-4136-a87b-54b0d40fe331"
/>
2025-05-14 07:42:56 -07:00
ccurme
0b8837a0cc openai: support runtime kwargs in embeddings (#31195) 2025-05-14 09:14:40 -04:00
Rares Vernica
4f41b54bcb docs:Fix Google GenAI Embedding params (#31188)
Extend Google parameters in the embeddings tab to include Google GenAI
(Gemini)

**Description:** Update embeddings tab to include example for Google
GenAI (Gemini)

**Issue:** N/A

**Dependencies:** N/A

**Twitter handle:** N/A



---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-14 08:50:11 -04:00
MedlockM
ce0b1a9428 docs: update how_to docs to reflect new loaders interface and functionalities (#31219)
- **Description:** Updates two notebooks in the how_to documentation to
reflect new loader interfaces and functionalities.
- **Issue:** Some how_to notebooks were still using loader interfaces
from previous versions of LangChain and did not demonstrate the latest
loader functionalities (e.g., extracting images with `ImageBlobParser`,
extracting tables in specific output formats, parsing documents using
Vision-Language Models with `ZeroxPDFLoader`, and using
`CloudBlobLoader` in the `GenericLoader`, etc.).
- **Dependencies:** `py-zerox`
- **Twitter handle:** @MarcMedlock2

---------

Co-authored-by: Marc Medlock <marc.medlock@octo.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-13 17:59:01 -04:00
Michael Li
275e3b6710 docs: replace initialize_agent with create_react_agent in searchapi - replace deprecated load_tools (#31203)
2025-05-13 16:19:18 -04:00
Collier King
e53c10e546 docs: update Cloudflare examples for env references (#31205)
- [ ] **Docs Update**: "langchain-cloudflare: add env var references in
example notebooks"
- We've updated our Cloudflare integration example notebooks with
examples showing environmental variables to initialize the class
instances.
2025-05-13 16:18:27 -04:00
Michael Li
395f057243 docs: replace deprecated load_tools in google_finance.ipynb (#31220)
2025-05-13 14:56:11 -04:00
Michael Li
a9ee625f32 docs: replace deprecated load_tools in searx.mdx (#31218)
2025-05-13 14:53:44 -04:00
Michael Li
544648eb71 docs: replace deprecated load_tools in wolfram_alpha.mdx (#31217)
2025-05-13 14:53:29 -04:00
Michael Li
40be8d1d90 docs: replace deprecated load_tools in stackexchange.mdx (#31216)
2025-05-13 14:53:15 -04:00
Michael Li
f034bd7933 docs: replace deprecated load_tools in serpapi.mdx (#31215)
2025-05-13 14:53:01 -04:00
Michael Li
17a04dd598 docs: replace deprecated load_tools in google.mdx (#31214)
2025-05-13 14:52:46 -04:00
Michael Li
a44e707811 docs: replace deprecated load_tools in google_serper.mdx (#31213)
2025-05-13 14:52:30 -04:00
Michael Li
3520520a48 docs: replace deprecated load_tools in golden.mdx (#31212)
2025-05-13 14:52:15 -04:00
Michael Li
09d74504e3 docs: replace deprecated load_tools in dataforseo.mdx (#31211)
2025-05-13 14:51:40 -04:00
Shorthills AI
b2f0fbfea5 Update tools.mdx (#124) (#31207)
Changed `toolkit=ExampleTookit` to `toolkit = ExampleToolkit(...)` in the
tools.mdx file.

Co-authored-by: SiddharthAnandShorthillsAI <siddharth.anand@shorthills.ai>
2025-05-13 11:01:54 -04:00
Michael Li
636a35fc2d docs: replace initialize_agent with create_react_agent in llmonitor.md (#31200)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-12 22:05:11 +00:00
Michael Li
7b9feb60cc docs: replace initialize_agent with create_react_agent in openweathermap - replace deprecated load_tools (#31202)
…map.ipynb

Update openweathermap markdown file for tools

2025-05-12 15:12:02 -04:00
Michael Li
87add0809f docs: replace initialize_agent with create_react_agent in graphql.ipynb - replace deprecated load_tools (#31201)
Replace the deprecated load_tools

2025-05-12 15:11:39 -04:00
ccurme
868cfc4a8f openai: ignore function_calls if tool_calls are present (#31198)
Some providers include (legacy) function calls in `additional_kwargs` in
addition to tool calls. We currently unpack both function calls and tool
calls if present, but OpenAI will raise 400 in this case.

This can come up if providers are mixed in a tool-calling loop. Example:
```python
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool


@tool
def get_weather(location: str) -> str:
    """Get weather at a location."""
    return "It's sunny."



gemini = init_chat_model("google_genai:gemini-2.0-flash-001").bind_tools([get_weather])
openai = init_chat_model("openai:gpt-4.1-mini").bind_tools([get_weather])

input_message = HumanMessage("What's the weather in Boston?")
tool_call_message = gemini.invoke([input_message])

assert len(tool_call_message.tool_calls) == 1
tool_call = tool_call_message.tool_calls[0]
tool_message = get_weather.invoke(tool_call)

response = openai.invoke(  # currently raises 400 / BadRequestError
    [input_message, tool_call_message, tool_message]
)
```

Here we ignore function calls if tool calls are present.
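A minimal sketch of the guard (assumed message shape, not the exact langchain-openai code):

```python
def drop_legacy_function_call(message) -> None:
    # When real tool calls are present, the legacy function_call entry in
    # additional_kwargs is redundant and would trigger a 400 from OpenAI.
    if message.tool_calls and "function_call" in message.additional_kwargs:
        del message.additional_kwargs["function_call"]
```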
2025-05-12 13:50:56 -04:00
Christophe Bornet
83d006190d core: Fix some private member accesses (#30912)
See https://github.com/langchain-ai/langchain/pull/30666

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-05-12 17:42:26 +00:00
CtrlMj
1e56c66f86 core: Fix issue 31035 alias fields in base tool langchain core (#31112)
**Description**: The `inspect` package in Python skips over the aliases
set in the schema of a Pydantic model. This is a workaround to include the
aliases from the original input.
**issue**: #31035 
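A minimal illustration of the alias that plain introspection misses (hypothetical field names):

```python
from pydantic import BaseModel, Field


class SearchArgs(BaseModel):
    # Callers are expected to pass "queryText"; signature-based
    # introspection sees only the Python name "query_text".
    query_text: str = Field(alias="queryText")


print(SearchArgs.model_fields["query_text"].alias)  # queryText
```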


Cc: @ccurme @eyurtsev

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-12 11:04:13 -04:00
mateencog
92af7b0933 infra: Suppress error in make api_docs_clean if index.md is missing (#31129)
- **Description:** Added the `-f` flag to `rm` when calling `make
api_docs_clean` to suppress the error raised by make when index.md doesn't exist.
- **Dependencies:** none

On calling `make api_docs_clean`:

Behavior without this PR:
```
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
rm docs/api_reference/index.md
rm: cannot remove 'docs/api_reference/index.md': No such file or directory
make: *** [Makefile:51: api_docs_clean] Error 1
```
After this PR:

```
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
rm -f docs/api_reference/index.md
```
2025-05-11 17:26:49 -04:00
meirk-brd
e6147ce5d2 docs: Add Brightdata integration documentation (#31114)
- **Description:** Integrated the Bright Data package to enable
Langchain users to seamlessly incorporate Bright Data into their agents.
 - **Dependencies:** None
- **LinkedIn handle**:[Bright
Data](https://www.linkedin.com/company/bright-data)


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-11 16:07:21 +00:00
Michael Li
0d59fe9789 docs: replace initialize_agent with create_react_agent in agent_vectorstore.ipynb (#31183)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-11 11:19:18 -04:00
ccurme
ff9183fd3c docs: add Gel integration (#31186)
Continued from https://github.com/langchain-ai/langchain/pull/31050

---------

Co-authored-by: deepbuzin <contactbuzin@gmail.com>
2025-05-11 10:17:18 -04:00
Michael Li
65fbbb0249 docs: replace initialize_agent with create_react_agent in searchapi.ipynb (#31184)
2025-05-11 09:42:25 -04:00
ccurme
77d3f04e0a docs: add Aerospike to package registry (#31185)
Missed as part of https://github.com/langchain-ai/langchain/pull/31156
2025-05-11 09:33:58 -04:00
dwelch-spike
0dee089ba7 docs: document the move of the aerospike vector store integration to langchain-aerospike vec-595 (#31156)
**Description:** The Aerospike Vector Search vector store integration
has moved out of langchain-community and to its own repository,
https://github.com/aerospike/langchain-aerospike. This PR updates
langchain documentation to reference it.


2025-05-11 09:30:29 -04:00
Asif Mehmood
2ec74fea44 docs: update DoctranPropertyExtractor import path and fix typo (#31177)
**Description:**
Updated the import path for `DoctranPropertyExtractor` from
`langchain_community.document_loaders` to
`langchain_community.document_transformers` in multiple locations to
reflect recent package structure changes. Also corrected a minor typo in
the word "variable".

**Issue:**
N/A

**Dependencies:**
N/A

**LinkedIn handle:** For shout out if announced [Asif
Mehmood](https://www.linkedin.com/in/asifmehmood1997/).
2025-05-10 15:43:40 -04:00
Sumin Shin
683da2c9e9 text-splitters: Fix regex separator merge bug in CharacterTextSplitter (#31137)
**Description:**
Fix the merge logic in `CharacterTextSplitter.split_text` so that when
using a regex lookahead separator (`is_separator_regex=True`) with
`keep_separator=False`, the raw pattern is not re-inserted between
chunks.

**Issue:**
Fixes #31136 

**Dependencies:**
None

**Twitter handle:**
None

Since this is my first open-source PR, please feel free to point out any
mistakes, and I'll be eager to make corrections.
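A minimal sketch of the configuration the fix targets (text and sizes are illustrative):

```python
from langchain_text_splitters import CharacterTextSplitter

splitter = CharacterTextSplitter(
    separator=r"(?=\n\n)",  # lookahead: split *before* blank lines
    is_separator_regex=True,
    keep_separator=False,
    chunk_size=100,
    chunk_overlap=0,
)
# Before the fix, the raw pattern "(?=\n\n)" could be re-inserted between
# merged chunks; after the fix, chunks contain only the source text.
chunks = splitter.split_text("First paragraph.\n\nSecond paragraph.")
```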
2025-05-10 15:42:03 -04:00
Michael Li
0ef4ac75b7 docs: remove duplicated and inaccurate milvus doc (part of langchain-ai#31104) (#31154) 2025-05-10 19:38:11 +00:00
ccurme
23ec06b481 docs: showcase gemini-2.0-flash-preview-image-generation (#31176) 2025-05-09 11:17:15 -04:00
ccurme
e9e597be8e docs: update sort order in integrations table (#31171) 2025-05-08 20:44:21 +00:00
ccurme
0ba8697286 infra: add to vercel overrides (#31170)
Incompatibility between langchain-redis and langchain-ai21 `tenacity`
dep
2025-05-08 20:36:43 +00:00
ccurme
9aac8923a3 docs: add web search to anthropic docs (#31169) 2025-05-08 16:20:11 -04:00
Victor Hiairrassary
efc52e18e9 docs: fix typing in how_to/custom_tools.ipynb (#31164)
Fix typing in how_to/custom_tools.ipynb
2025-05-08 13:51:42 -04:00
ccurme
2d202f9762 anthropic[patch]: split test into two (#31167) 2025-05-08 09:23:36 -04:00
ccurme
d4555ac924 anthropic: release 0.3.13 (#31162) 2025-05-08 03:13:15 +00:00
ccurme
e34f9fd6f7 anthropic: update streaming usage metadata (#31158)
Anthropic updated how they report token counts during streaming today.
See changes to `MessageDeltaUsage` in [this
commit](2da00f26c5 (diff-1a396eba0cd9cd8952dcdb58049d3b13f6b7768ead1411888d66e28211f7bfc5)).

It's clean and simple to grab these fields from the final
`message_delta` event. However, some of them are typed as Optional, and
language
[here](e42451ab3f/src/anthropic/lib/streaming/_messages.py (L462))
suggests they may not always be present. So here we take the required
field from the `message_delta` event as we were doing previously, and
ignore the rest.
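Roughly (a sketch; event and field names assumed from the Anthropic SDK):

```python
# Only output_tokens is required on message_delta usage; optional fields
# (e.g. input_tokens) may be absent and are ignored here.
if event.type == "message_delta":
    output_tokens = event.usage.output_tokens
```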
2025-05-07 23:09:56 -04:00
Tanushree
6c3901f9f9 Change banner (#31159)
Changing the banner to lead to careers page instead of interrupt

---------

Co-authored-by: Brace Sproul <braceasproul@gmail.com>
2025-05-07 18:07:10 -07:00
ccurme
682f338c17 anthropic[patch]: support web search (#31157) 2025-05-07 18:04:06 -04:00
ccurme
d7e016c5fc huggingface: release 0.2 (#31153) 2025-05-07 15:33:07 -04:00
ccurme
4b11cbeb47 huggingface[patch]: update lockfile (#31152) 2025-05-07 15:17:33 -04:00
ccurme
b5b90b5929 anthropic[patch]: be robust to null fields when translating usage metadata (#31151) 2025-05-07 18:30:21 +00:00
ccurme
f70b263ff3 core: release 0.3.59 (#31150) 2025-05-07 17:36:59 +00:00
ccurme
bb69d4c42e docs: specify js support for tavily (#31149) 2025-05-07 11:30:04 -04:00
zhurou603
1df3ee91e7 partners: (langchain-openai) total_tokens should not add 'Nonetype' t… (#31146)
## Description
Fixed an issue in `langchain-openai` where `total_tokens` was
incorrectly adding `None` to an integer, causing a TypeError. The fix
ensures proper type checking before adding token counts.

## Issue
Fixes the TypeError traceback shown in the image where `'NoneType'`
cannot be added to an integer.

## Dependencies
None

## Twitter handle
None

![image](https://github.com/user-attachments/assets/9683a795-a003-455a-ada9-fe277245e2b2)
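A minimal sketch of the guard (variable names assumed):

```python
# Treat missing token counts as zero so int + None can never occur.
prompt_tokens = usage.get("prompt_tokens") or 0
completion_tokens = usage.get("completion_tokens") or 0
total_tokens = prompt_tokens + completion_tokens
```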

Co-authored-by: qiulijie <qiulijie@yuaiweiwu.com>
2025-05-07 11:09:50 -04:00
Collier King
19041dcc95 docs: update langchain-cloudflare repo/path on packages.yaml (#31138)
**Library Repo Path Update**: `langchain-cloudflare`

We recently changed our `langchain-cloudflare` repo to allow for future
libraries, creating a `libs` folder to hold the `langchain-cloudflare`
Python package:

https://github.com/cloudflare/langchain-cloudflare/tree/main/libs/langchain-cloudflare

On `langchain`, this updates `packages.yaml` to point to the new
`libs/langchain-cloudflare` library folder.
2025-05-07 11:01:25 -04:00
Simonas Jakubonis
3cba22d8d7 docs: Pinecone Rerank example notebook (#31147)
Created an example notebook showing how to use the Pinecone reranking service
cc @jamescalam
2025-05-07 11:00:42 -04:00
Jacob Lee
66d1ed6099 fix(core): Permit OpenAI style blocks to be passed into convert_to_openai_messages (#31140)
Should effectively be a noop, just shouldn't throw

CC @madams0013
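A quick illustration of the now-permitted input (a sketch):

```python
from langchain_core.messages import AIMessage, convert_to_openai_messages

# OpenAI-style content blocks should now pass through without raising:
msg = AIMessage(content=[{"type": "text", "text": "hello"}])
print(convert_to_openai_messages([msg]))
```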

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-07 10:57:37 -04:00
Tushar Nitave
a15034d8d1 docs: Fixed grammar for chat prompt composition (#31148)
This PR fixes a grammar issue in the sentence:

"A chat prompt is made up a of a list of messages..." → "A chat prompt
is made up of a list of messages. "
2025-05-07 10:51:34 -04:00
Michael Li
57c81dc3e3 docs: replace initialize_agent with create_react_agent in graphql.ipynb (#31133)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-06 16:09:52 -04:00
pulvedu
52732a4d13 Update docs (#31135)
Updating tavily docs to indicate JS support

---------

Co-authored-by: pulvedu <dustin@tavily.com>
2025-05-06 16:09:39 -04:00
Simonas Jakubonis
5dde64583e docs: Updated pinecone.mdx in the integration providers (#31123)
Added a short description and examples for the sparse vector store and
sparse embeddings.
2025-05-06 12:58:32 -04:00
Tomaz Bratanic
6b6750967a Docs: Change to async llm graph transformer (#31126) 2025-05-06 12:53:57 -04:00
ccurme
703fce7972 docs: document that Anthropic supports boolean parallel_tool_calls param in guide (#31122) 2025-05-05 20:25:27 -04:00
唐小鸭
50fa524a6d partners: (langchain-deepseek) fix deepseek-r1 always returns an empty reasoning_content when reasoning (#31065)
## Description
deepseek-r1 always returns an empty string in `reasoning_content` for the
first chunk while thinking, and sets `reasoning_content` to None once
thinking is over; that transition signals when to switch to normal output.

Therefore, the check should be whether `reasoning_content` is None, not
merely whether the field exists.

## Demo
deepseek-r1 reasoning output: 

```
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': 'assistant', 'tool_calls': None, 'reasoning_content': ''}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': '好的'}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': ','}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': '用户'}, 'finish_reason': None, 'index': 0, 'logprobs': None}
...
```

deepseek-r1 first normal output:
```
...
{'delta': {'content': ' main', 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': None}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': '\n\nimport', 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': None}, 'finish_reason': None, 'index': 0, 'logprobs': None}
...
```
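A minimal sketch of the corrected check (chunk shape taken from the demo above):

```python
reasoning = delta.get("reasoning_content")
if reasoning is not None:
    # Still thinking: '' on the first chunk, then reasoning tokens.
    reasoning_buffer += reasoning
else:
    # reasoning_content is None: thinking is over; switch to normal output.
    answer_buffer += delta.get("content") or ""
```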

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-05 22:31:58 +00:00
Michael Li
c0b69808a8 docs: replace initialize_agent with create_react_agent in openweathermap.ipynb (#31115)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-05 18:08:09 -04:00
ccurme
fce8caca16 docs: minor fix in Pinecone (#31110) 2025-05-03 20:16:27 +00:00
Simonas Jakubonis
b8d0403671 docs: updated pinecone example notebook (#30993)
- **Description:** Update Pinecone notebook example
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** N/A


- [x] **Add tests and docs**: Just notebook updates


2025-05-03 16:02:21 -04:00
Michael Li
1204fb8010 docs: replace initialize_agent with create_react_agent in yahoo_finance_news.ipynb (#31108)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-03 15:45:52 -04:00
Adeel Ehsan
1e00116ae7 docs: add docs for vectara tools (#30958)
Docs for Vectara tools (`langchain-vectara`).
2025-05-03 15:39:16 -04:00
Haris Colic
3e25a93136 Docs: replace initialize agent with create react agent for google tools (#31043)
- **Description:** The deprecated initialize_agent functionality is
replaced with create_react_agent for the google tools. Also noticed a
potential issue with the non-existent "google-drive-search" which was
used in the old `google-drive.ipynb`. If this should be a by default
available tool, an issue should be opened to modify
langchain-community's `load_tools` accordingly.
- **Issue:**  #29277
- **Dependencies:** No added dependencies
- **Twitter handle:** No Twitter account
2025-05-03 15:20:07 -04:00
Stefano Lottini
325f729a92 docs: improvements to Astra DB pages, especially modernize Vector DB example notebook (#30961)
This PR brings several improvements and modernizations to the
documentation around the Astra DB partner package.

- language alignment for better matching with the terms used in the
Astra DB docs
- updated several links to pages on said documentation
- for the `AstraDBVectorStore`, added mentions of the new features in
the overall `astra.mdx`
- for the vector store, rewritten/upgraded most of the usage example
notebook for a more straightforward experience able to highlight the
main usage patterns (including new ones such as the newly-introduced
"autodetect feature")

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-03 14:26:52 -04:00
Asif Mehmood
00ac49dd3e Replace deprecated .dict() with .model_dump() for Pydantic v2 compatibility (#31107)
**What does this PR do?**
This PR replaces deprecated usages of `.dict()` with `.model_dump()` to
ensure compatibility with Pydantic v2 and prepare for v3, addressing the
`PydanticDeprecatedSince20` deprecation warning as required in
[Issue #31103](https://github.com/langchain-ai/langchain/issues/31103).

**Changes made:**
* Replaced `.dict()` with `.model_dump()` in multiple locations
* Ensured consistency with Pydantic v2 migration guidelines
* Verified compatibility across affected modules

**Notes**
* This is a code maintenance and compatibility update
* Tested locally with Pydantic v2.11
* No functional logic changes; only internal method replacements to
prevent deprecation issues
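For reference, the replacement pattern (a generic Pydantic v2 example):

```python
from pydantic import BaseModel


class Settings(BaseModel):
    name: str


settings = Settings(name="demo")
data = settings.model_dump()  # v2 replacement for the deprecated .dict()
```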
2025-05-03 13:40:54 -04:00
ccurme
6268ae8db0 langchain: release 0.3.25 (#31101) 2025-05-02 17:42:32 +00:00
ccurme
77ecf47f6d openai: release 0.3.16 (#31100) 2025-05-02 13:14:46 -04:00
ccurme
ff41f47e91 core: release 0.3.58 (#31099) 2025-05-02 12:46:32 -04:00
Eugene Yurtsev
4da525bc63 langchain[patch]: Remove beta decorator from init_embeddings (#31098)
Remove beta decorator from init_embeddings.
2025-05-02 11:52:50 -04:00
ccurme
94139ffcd3 openai[patch]: format system content blocks for Responses API (#31096)
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI


llm = ChatOpenAI(model="gpt-4.1", use_responses_api=True)

messages = [
    SystemMessage("test"),                                   # Works
    HumanMessage("test"),                                    # Works
    SystemMessage([{"type": "text", "text": "test"}]),       # Bug in this case
    HumanMessage([{"type": "text", "text": "test"}]),        # Works
    SystemMessage([{"type": "input_text", "text": "test"}])  # Works
]

llm._get_request_payload(messages)
```
2025-05-02 15:22:30 +00:00
ccurme
26ad239669 core, openai[patch]: prefer provider-assigned IDs when aggregating message chunks (#31080)
When aggregating AIMessageChunks in a stream, core prefers the leftmost
non-null ID. This is problematic because:
- Core assigns IDs when they are null to `f"run-{run_manager.run_id}"`
- The desired meaningful ID might not be available until midway through
the stream, as is the case for the OpenAI Responses API.

For the OpenAI Responses API, we assign message IDs to the top-level
`AIMessage.id`. This works in `.(a)invoke`, but during `.(a)stream` the
IDs get overwritten by the defaults assigned in langchain-core. These
IDs
[must](https://community.openai.com/t/how-to-solve-badrequesterror-400-item-rs-of-type-reasoning-was-provided-without-its-required-following-item-error-in-responses-api/1151686/9)
be available on the AIMessage object to support passing reasoning items
back to the API (e.g., if not using OpenAI's `previous_response_id`
feature). We could add them elsewhere, but seeing as we've already made
the decision to store them in `.id` during `.(a)invoke`, addressing the
issue in core lets us fix the problem with no interface changes.
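A rough sketch of the preference rule (an assumed helper, not the exact core implementation):

```python
def preferred_id(left: str | None, right: str | None) -> str | None:
    # Prefer a provider-assigned id over a core default like "run-<uuid>".
    for candidate in (right, left):
        if candidate is not None and not candidate.startswith("run-"):
            return candidate
    return left if left is not None else right
```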
2025-05-02 11:18:18 -04:00
ccurme
72f905a436 infra: fix notebook tests (#31097) 2025-05-02 14:33:11 +00:00
William FH
b5bf2d6218 0.3.57 (#31095) 2025-05-01 23:42:26 -07:00
William FH
167afa5102 Enable run mutation (#31090)
This lets you more easily modify a run in-flight
2025-05-01 17:00:51 -07:00
Simonas Jakubonis
0b79fc1733 docs: Pinecone Sparse vectorstore example (#31066)
Description: Pinecone SparseVectorStore example
cc @jamescalam

---------

Co-authored-by: James Briggs <35938317+jamescalam@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-01 18:08:10 -04:00
ccurme
c51eadd54f openai[patch]: propagate service_tier to response metadata (#31089) 2025-05-01 13:50:48 -04:00
ccurme
6110c3ffc5 openai[patch]: release 0.3.15 (#31087) 2025-05-01 09:22:30 -04:00
Ben Gladwell
da59eb7eb4 anthropic: Allow kwargs to pass through when counting tokens (#31082)
- **Description:** `ChatAnthropic.get_num_tokens_from_messages` does not
currently receive `kwargs` and pass those on to
`self._client.beta.messages.count_tokens`. This is a problem if you need
to pass specific options to `count_tokens`, such as the `thinking`
option. This PR fixes that.
- **Issue:** N/A
- **Dependencies:** None
- **Twitter handle:** @bengladwell

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-30 17:56:22 -04:00
Really Him
918c950737 DOCS: partners/chroma: Fix documentation around chroma query filter syntax (#31058)

**Description**:
* Starting to put together some PRs to fix the typing around
`langchain-chroma` `filter` and `where_document` query filtering, as
mentioned:

https://github.com/langchain-ai/langchain/issues/30879
https://github.com/langchain-ai/langchain/issues/30507

The typing of `dict[str, str]` is on the one hand too restrictive (marks
valid filter expressions as ill-typed) and also too permissive (allows
illegal filter expressions). That's not what this PR addresses though.
This PR just removes from the documentation some examples of filters
that are illegal, and also syntactically incorrect: (a) dictionaries
with keys like `$contains` but the key is missing quotation marks; (b)
dictionaries with multiple entries - this is illegal in Chroma filter
syntax and will raise an exception. (`{"foo": "bar", "qux": "baz"}`).
Filter dictionaries in Chroma must have one and one key only. Again this
is just the documentation issue, which is the lowest hanging fruit. I
also think we need to update the types for `filter` and `where_document`
to be (at the very least `dict[str, Any]`), or, since we have access to
Chroma's types, they should be `Where` and `WhereDocument` types. This
has a wider blast radius though, so I'm starting small.

This PR does not fix the issues mentioned above, it's just starting to
get the ball rolling, and cleaning up the documentation.
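For reference, filters that are legal under the constraints described above (a sketch of Chroma-style syntax):

```python
# One key per dictionary; multiple conditions must be combined explicitly.
filter = {"color": "red"}
filter = {"$and": [{"color": "red"}, {"size": "large"}]}

# Operator keys are quoted strings, e.g. for where_document:
where_document = {"$contains": "search string"}
```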




---------

Co-authored-by: Really Him <hesereallyhim@proton.me>
2025-04-30 17:51:07 -04:00
Mateus Scheper
ed7cd3c5c4 docs: Fixing typo: "chocalate" -> "chocolate". (#31078)
Just fixing a typo that was making me anxious: "chocalate" ->
"chocolate".
2025-04-30 15:09:36 -04:00
yberber-sap
952a0b7b40 Docs: Fix SAP HANA Cloud docs - remove pip output, update vectorstore link, rename provider (#31077)
This PR includes the following documentation fixes for the SAP HANA
Cloud vector store integration:
- Removed stale output from the `%pip install` code cell.
- Replaced an unrelated vectorstore documentation link on the provider
overview page.
- Renamed the provider from "SAP HANA" to "SAP HANA Cloud"
2025-04-30 08:57:40 -04:00
Akshay Dongare
0b8e9868e6 docs: update LiteLLM integration docs for router migration to langchain-litellm (#31063)
# What's Changed?
- [x] 1. docs: **docs/docs/integrations/chat/litellm.ipynb** : Updated
with docs for litellm_router since it has been moved into the
[langchain-litellm](https://github.com/Akshay-Dongare/langchain-litellm)
package along with ChatLiteLLM

- [x] 2. docs: **docs/docs/integrations/chat/litellm_router.ipynb** :
Deleted to avoid redundancy

- [x] 3. docs: **docs/docs/integrations/providers/litellm.mdx** :
Updated to reflect inclusion of ChatLiteLLMRouter class

- [x] Lint and test: Done

# Issue:
- [x] Related to the issue
https://github.com/langchain-ai/langchain/issues/30368

# About me
- [x] 🔗 LinkedIn:
[akshay-dongare](https://www.linkedin.com/in/akshay-dongare/)
2025-04-29 17:48:11 -04:00
Lukas Scheucher
275ba2ec37 Add Compass Labs toolkits to langchain docs (#30794)
- **Description**: Adding documentation notebook for [compass-langchain
toolkit](https://pypi.org/project/langchain-compass/).
- **Issue**: N/A
- **Dependencies**: langchain-compass  
- **Twitter handle**: @labs_compass

---------

Co-authored-by: ccosnett <conor142857@icloud.com>
2025-04-29 17:42:43 -04:00
ccurme
bdb7c4a8b3 huggingface: fix embeddings return type (#31072)
Integration tests failing

cc @hanouticelina
2025-04-29 18:45:04 +00:00
célina
868f07f8f4 partners: (langchain-huggingface) Chat Models - Integrate Hugging Face Inference Providers and remove deprecated code (#30733)
Hi there, I'm Célina from 🤗,
This PR introduces support for Hugging Face's serverless Inference
Providers (documentation
[here](https://huggingface.co/docs/inference-providers/index)), allowing
users to specify different providers for chat completion and text
generation tasks.

This PR also removes the usage of `InferenceClient.post()` method in
`HuggingFaceEndpoint`, in favor of the task-specific `text_generation`
method. `InferenceClient.post()` is deprecated and will be removed in
`huggingface_hub v0.31.0`.

---
## Changes made
- bumped the minimum required version of the `huggingface-hub` package
to ensure compatibility with the latest API usage.
- added a `provider` field to `HuggingFaceEndpoint`, enabling users to
select the inference provider (e.g., 'cerebras', 'together',
'fireworks-ai'). Defaults to `hf-inference` (HF Inference API).
- replaced the deprecated `InferenceClient.post()` call in
`HuggingFaceEndpoint` with the task-specific `text_generation` method
for future-proofing; `post()` will be removed in huggingface-hub
v0.31.0.
- updated the `ChatHuggingFace` component:
    - added async and streaming support.
    - added support for tool calling.
- exposed underlying chat completion parameters for more granular
control.
- Added integration tests for `ChatHuggingFace` and updated the
corresponding unit tests.

  All changes are backward compatible.
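
For illustration, a minimal sketch of selecting an inference provider with the new field (the model id and provider value are placeholders, not from the PR):

```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
    provider="together",  # defaults to "hf-inference" when omitted
)
chat = ChatHuggingFace(llm=llm)
chat.invoke("Hello!")
```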

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-29 09:53:14 -04:00
ccurme
3072e4610a community: move to separate repo (continued) (#31069)
Missed these after merging
2025-04-29 09:25:32 -04:00
ccurme
9ff5b5d282 community: move to separate repo (#31060)
langchain-community is moving to
https://github.com/langchain-ai/langchain-community
2025-04-29 09:22:04 -04:00
Sydney Runkle
7e926520d5 packaging: remove Python upper bound for langchain and co libs (#31025)
Follow up to https://github.com/langchain-ai/langsmith-sdk/pull/1696,
I've bumped the `langsmith` version where applicable in `uv.lock`.

Type checking problems here because deps have been updated in
`pyproject.toml` and `uv lock` hasn't been run - we should enforce that
in the future - goes with the other dependabot todos :).
2025-04-28 14:44:28 -04:00
Sydney Runkle
d614842d23 ci: temporarily run chroma on 3.12 for CI (#31056)
Waiting on a fix for https://github.com/chroma-core/chroma/issues/4382
2025-04-28 13:20:37 -04:00
Michael Li
ff1602f0fd docs: replace initialize_agent with create_react_agent in bash.ipynb (part of #29277) (#31042)
---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-28 13:07:22 -04:00
Christophe Bornet
aee7988a94 community: add mypy warn_unused_ignores rule (#30816) 2025-04-28 11:54:12 -04:00
Bae-ChangHyun
a2863f8757 community: add 'get_col_comments' option for retrieve database columns comments (#30646)
## Description
Added support for retrieving column comments in the SQL Database
utility. This feature allows users to see comments associated with
database columns when querying table information. Column comments
provide valuable metadata that helps LLMs better understand the
semantics and purpose of database columns.

A new optional parameter `get_col_comments` was added to the
`get_table_info` method, defaulting to `False` for backward
compatibility. When set to `True`, it retrieves and formats column
comments for each table.

Currently, this feature is supported on PostgreSQL, MySQL, and Oracle
databases.

## Implementation
You should create the table with column comments beforehand.

```python
db = SQLDatabase.from_uri("YOUR_DB_URI")
print(db.get_table_info(get_col_comments=True)) 
```
## Result
```
CREATE TABLE test_table (
	name VARCHAR,
	school VARCHAR)
/*
Column Comments: {'name': 'person name', 'school': 'school_name'}
*/

/*
3 rows from test_table:
name
a
b
c
*/
```

## Benefits
1. Enhances LLM's understanding of database schema semantics
2. Preserves valuable domain knowledge embedded in database design
3. Improves accuracy of SQL query generation
4. Provides more context for data interpretation

Tests are available in
`langchain/libs/community/tests/test_sql_get_table_info.py`.

---------

Co-authored-by: chbae <chbae@gcsc.co.kr>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-28 15:19:46 +00:00
yberber-sap
3fb0a55122 Deprecate HanaDB, HanaTranslator and update example notebook to use new implementation (#30896)
- **Description:**  
This PR marks the `HanaDB` vector store (and related utilities) in
`langchain_community` as deprecated using the `@deprecated` annotation.
  - Set `since="0.1.0"` and `removal="1.0"`  
- Added a clear migration path and a link to the SAP-maintained
replacement in the
[`langchain_hana`](https://github.com/SAP/langchain-integration-for-sap-hana-cloud)
package.
Additionally, the example notebook has been updated to use the new
`HanaDB` class from `langchain_hana`, ensuring users follow the
recommended integration moving forward.

- **Issue:** None 

- **Dependencies:**  None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-27 16:37:35 -04:00
湛露先生
5fb8fd863a langchain_openai: clean duplicate code for openai embedding. (#30872)
The `_chunk_size` is not changed by the `self._tokenize` method, so I
think this is duplicate code.

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-27 15:07:41 -04:00
Philipp Schmid
79a537d308 Update Chat and Embedding guides (#31017)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-27 18:06:59 +00:00
ccurme
ba2518995d standard-tests: add condition for image tool message test (#31041)
Require support for [standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/).
2025-04-27 17:24:43 +00:00
ccurme
04a899ebe3 infra: support third-party integration packages in API ref build (#31021) 2025-04-25 16:02:27 -04:00
Stefano Lottini
a82d987f09 docs: Astra DB, replace leftover links to "community" legacy package + modernize doc loader signature (#30969)
This PR brings some much-needed updates to some of the Astra DB shorter
example notebooks,

- ensuring imports are from the partner package instead of the
(deprecated) community legacy package
- improving the wording in a few related places
- updating the constructor signature introduced with the latest partner
package's AstraDBLoader
- marking the community package counterpart of the LLM caches as
deprecated in the summary table at the end of the page.
2025-04-25 15:45:24 -04:00
ccurme
a60fd06784 docs: document OpenAI flex processing (#31023)
Following https://github.com/langchain-ai/langchain/pull/31005
2025-04-25 15:10:25 -04:00
ccurme
629b7a5a43 openai[patch]: add explicit attribute for service tier (#31005) 2025-04-25 18:38:23 +00:00
ccurme
ab871a7b39 docs: enable milvus in API ref build (#31016)
Reverts langchain-ai/langchain#30996

Should be fixed following
https://github.com/langchain-ai/langchain-milvus/pull/68
2025-04-25 12:48:10 +00:00
Georgi Stefanov
d30c56a8c1 langchain: return attachments in _get_response (#30853)
This PR returns the message attachments in `_get_response`. When files
are generated, the attachments were not being returned, so the generated
files could not be retrieved.

Fixes issue: https://github.com/langchain-ai/langchain/issues/30851
2025-04-24 21:39:11 -04:00
Abderrahmane Gourragui
09c1991e96 docs: update document examples (#31006)
## Description:

As I was following the docs I found a couple of small issues on the
docs.

this fixes some unused imports on the [extraction
page](https://python.langchain.com/docs/tutorials/extraction/#the-extractor)
and updates the examples on [classification
page](https://python.langchain.com/docs/tutorials/classification/#quickstart)
to be independent from the chat model.
2025-04-24 18:07:55 -04:00
ccurme
a7903280dd openai[patch]: delete redundant tests (#31004)
These are covered by standard tests.
2025-04-24 17:56:32 +00:00
Kyle Jeong
d0f0d1f966 [docs/community]: langchain docs + browserbaseloader fix (#30973)
community: fix browserbase integration
docs: update docs

- **Description:** Updated BrowserbaseLoader to use the new python sdk.
- **Issue:** update browserbase integration with langchain
- **Dependencies:** n/a
- **Twitter handle:** @kylejeong21
2025-04-24 13:38:49 -04:00
ccurme
403fae8eec core: release 0.3.56 (#31000) 2025-04-24 13:22:31 -04:00
Jacob Lee
d6b50ad3f6 docs: Update Google Analytics tag in docs (#31001) 2025-04-24 10:19:10 -07:00
ccurme
10a9c24dae openai: fix streaming reasoning without summaries (#30999)
Following https://github.com/langchain-ai/langchain/pull/30909: need to
retain "empty" reasoning output when streaming, e.g.,
```python
{'id': 'rs_...', 'summary': [], 'type': 'reasoning'}
```
Tested by existing integration tests, which are currently failing.
2025-04-24 16:01:45 +00:00
ccurme
8fc7a723b9 core: release 0.3.56rc1 (#30998) 2025-04-24 15:09:44 +00:00
ccurme
f4863f82e2 core[patch]: fix edge cases for _is_openai_data_block (#30997) 2025-04-24 10:48:52 -04:00
Philipp Schmid
ae4b6380d9 Documentation: Add Google Gemini dropdown (#30995)
This PR adds Google Gemini (via AI Studio and Gemini API). Feel free to
change the ordering, if needed.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 10:00:16 -04:00
Philipp Schmid
ffbc64c72a Documentation: Improve structure of Google integrations page (#30992)
This PR restructures the main Google integrations documentation page
(`docs/docs/integrations/providers/google.mdx`) for better clarity and
updates content.

**Key changes:**

* **Separated Sections:** Divided integrations into distinct `Google
Generative AI (Gemini API & AI Studio)`, `Google Cloud`, and `Other
Google Products` sections.
* **Updated Generative AI:** Refreshed the introduction and the `Google
Generative AI` section with current information and quickstart examples
for the Gemini API via `langchain-google-genai`.
* **Reorganized Content:** Moved non-Cloud Platform specific
integrations (e.g., Drive, GMail, Search tools, ScaNN) to the `Other
Google Products` section.
* **Cleaned Up:** Minor improvements to descriptions and code snippets.

This aims to make it easier for users to find the relevant Google
integrations based on whether they are using the Gemini API directly or
Google Cloud services.

| Before                | After      |
|-----------------------|------------|
| ![Screenshot 2025-04-24 at 14 56
23](https://github.com/user-attachments/assets/ff967ec8-a833-4e8f-8015-61af8a4fac8b)
| ![Screenshot 2025-04-24 at 14 56
15](https://github.com/user-attachments/assets/179163f1-e805-484a-bbf6-99f05e117b36)
|

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 09:58:46 -04:00
Jacob Lee
6b0b317cb5 feat(core): Autogenerate filenames for when converting file content blocks to OpenAI format (#30984)
CC @ccurme

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 13:36:31 +00:00
ccurme
21962e2201 docs: temporarily disable milvus in API ref build (#30996) 2025-04-24 09:31:23 -04:00
Behrad Hemati
1eb0bdadfa community: add indexname to other functions in opensearch (#30987)
- [x] **PR title**: "community: add indexname to other functions in
opensearch"



- [x] **PR message**:
- **Description:** add the ability to override the index name when
provided in the kwargs of sub-functions. When used in a WSGI
application, it's crucial to be able to change parameters dynamically.


- [ ] **Add tests and docs**:


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-04-24 08:59:33 -04:00
Nicky Parseghian
7ecdac5240 community: Strip URLs from sitemap. (#30830)
Fixes #30829

- **Description:** Simply strips the loc value when building the
element.
    - **Issue:** Fixes #30829
2025-04-23 18:18:42 -04:00
ccurme
faef3e5d50 core, standard-tests: support PDF and audio input in Chat Completions format (#30979)
Chat models currently implement support for:
- images in OpenAI Chat Completions format
- other multimodal types (e.g., PDF and audio) in a cross-provider
[standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/)

Here we update core to extend support to PDF and audio input in Chat
Completions format. **If an OAI-format PDF or audio content block is
passed into any chat model, it will be transformed to the LangChain
standard format**. We assume that any chat model supporting OAI-format
PDF or audio has implemented support for the standard format.
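
For illustration, a hedged sketch of an OAI-format PDF content block that would now be transformed to the standard format (the field layout follows OpenAI's Chat Completions file input; filename and data are placeholders):

```python
# an OpenAI Chat Completions-style PDF block passed to a chat model;
# core converts it to the LangChain standard format
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Summarize this document."},
        {
            "type": "file",
            "file": {
                "filename": "report.pdf",  # placeholder
                "file_data": "data:application/pdf;base64,<base64 string>",
            },
        },
    ],
}
```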
2025-04-23 18:32:51 +00:00
Bagatur
d4fc734250 core[patch]: update dict prompt template (#30967)
Align with JS changes made in
https://github.com/langchain-ai/langchainjs/pull/8043
2025-04-23 10:04:50 -07:00
ccurme
4bc70766b5 core, openai: support standard multi-modal blocks in convert_to_openai_messages (#30968) 2025-04-23 11:20:44 -04:00
ccurme
e4877e5ef1 fireworks: release 0.3.0 (#30977) 2025-04-23 10:08:38 -04:00
Christophe Bornet
8c5ae108dd text-splitters: Set strict mypy rules (#30900)
* Add strict mypy rules
* Fix mypy violations
* Add error codes to all type ignores
* Add ruff rule PGH003
* Bump mypy version to 1.15
2025-04-22 20:41:24 -07:00
ccurme
eedda164c6 fireworks[minor]: remove default model and temperature (#30965)
`mixtral-8x-7b-instruct` was recently retired from Fireworks Serverless.

Here we remove the default model altogether, so that the model must be
explicitly specified on init:
```python
ChatFireworks(model="accounts/fireworks/models/llama-v3p1-70b-instruct")  # for example
```

We also set a null default for `temperature`, which previously defaulted
to 0.0. This parameter will no longer be included in request payloads
unless it is explicitly provided.
2025-04-22 15:58:58 -04:00
Grant
4be55f7c89 docs: fix typo at 175 (#30966)
**Description:** Corrected pre-buit to pre-built.
**Issue:** Little typo.
2025-04-22 18:13:07 +00:00
CLOVA Studio 개발
577cb53a00 community: update Naver integration to use langchain-naver package and improve documentation (#30956)
## **Description:** 
This PR was requested after the `langchain-naver` partner-managed
packages were completed.
We built our package as requested in [this
comment](https://github.com/langchain-ai/langchain/pull/29243#issuecomment-2595222791),
and the initial version is now uploaded to
[PyPI](https://pypi.org/project/langchain-naver/).
We've updated our documents to reflect the changed features and explain
how to install our partner-managed package.

## **Dependencies:** 

https://github.com/langchain-ai/langchain/pull/29243#issuecomment-2595222791

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-22 12:00:10 -04:00
ccurme
a7c1bccd6a openai[patch]: remove xfails from image token counting tests (#30963)
These appear to be passing again.
2025-04-22 15:55:33 +00:00
ccurme
25d77aa8b4 community: release 0.3.22 (#30962) 2025-04-22 15:34:47 +00:00
ccurme
59fd4cb4c0 docs: update package registry sort order (#30960) 2025-04-22 15:27:32 +00:00
ccurme
b8c454b42b langchain: release 0.3.24 (#30959) 2025-04-22 11:23:34 -04:00
Dmitrii Rashchenko
a43df006de Support of openai reasoning summary streaming (#30909)
**langchain_openai: Support of reasoning summary streaming**

**Description:**
OpenAI API now supports streaming reasoning summaries for reasoning
models (o1, o3, o3-mini, o4-mini). More info about it:
https://platform.openai.com/docs/guides/reasoning#reasoning-summaries

It is supported only in the Responses API (not the Completions API), so
you need to create the LangChain OpenAI model as follows to enable
streaming of reasoning summaries:

```
llm = ChatOpenAI(
    model="o4-mini", # also o1, o3, o3-mini support reasoning streaming
    use_responses_api=True,  # reasoning streaming works only with responses api, not completion api
    model_kwargs={
        "reasoning": {
            "effort": "high",  # also "low" and "medium" supported
            "summary": "auto"  # some models support "concise" summary, some "detailed", but auto will always work
        }
    }
)
```

Now, if you stream events from llm:

```
async for event in llm.astream_events(prompt, version="v2"):
    print(event)
```

or

```
for chunk in llm.stream(prompt):
    print (chunk)
```

OpenAI API will send you new types of events:
`response.reasoning_summary_text.added`
`response.reasoning_summary_text.delta`
`response.reasoning_summary_text.done`

These events are new, so they were previously ignored. I have added
support for them in `_convert_responses_chunk_to_generation_chunk`, so
reasoning chunks or full reasoning summaries are added to the chunk's
`additional_kwargs`.

Example of how this reasoning summary may be printed:

```
    async for event in llm.astream_events(prompt, version="v2"):
        if event["event"] == "on_chat_model_stream":
            chunk: AIMessageChunk = event["data"]["chunk"]
            if "reasoning_summary_chunk" in chunk.additional_kwargs:
                print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
            elif "reasoning_summary" in chunk.additional_kwargs:
                print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
            elif chunk.content and chunk.content[0]["type"] == "text":
                print(chunk.content[0]["text"], end="")
```

or

```
    for chunk in llm.stream(prompt):
        if "reasoning_summary_chunk" in chunk.additional_kwargs:
            print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
        elif "reasoning_summary" in chunk.additional_kwargs:
            print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
        elif chunk.content and chunk.content[0]["type"] == "text":
            print(chunk.content[0]["text"], end="")
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-22 14:51:13 +00:00
Alexander Ng
0f6fa34372 Community: Valyu Integration docs (#30926)
PR title:
docs: add Valyu integration documentation
Description:
This PR adds documentation and example notebooks for the Valyu
integration, including retriever and tool usage.
Issue:
N/A
Dependencies:
No new dependencies.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-21 17:43:00 -04:00
Jacob Mansdorfer
e8a84b05a4 Community: Adding tool calling and some new parameters to the langchain-predictionguard docs. (#30953)
- [x] **PR message**: 
- **Description:** Updates the documentation for the
langchain-predictionguard package, adding tool calling functionality and
some new parameters.
2025-04-21 17:01:57 -04:00
ccurme
8574442c57 core[patch]: release 0.3.55 (#30952) 2025-04-21 17:56:24 +00:00
ccurme
920d504e47 fireworks[patch]: update model in LLM integration tests (#30951)
`mixtral-8x7b-instruct` has been retired.
2025-04-21 17:53:27 +00:00
Anton Masalovich
1f3054502e community: fix cost calculations for 4.1 and o4 in OpenAI callback (#30899)
**Issue:** #30898
2025-04-21 10:59:47 -04:00
Ahmed Tammaa
589bc19890 anthropic[patch]: make description optional on AnthropicTool (#30935)
PR Summary

This change adds a fallback in ChatAnthropic.with_structured_output() to
handle Pydantic models that don’t include a docstring. Without it,
calling:
```py
from pydantic import BaseModel
from langchain_anthropic import ChatAnthropic

class SampleModel(BaseModel):
    sample_field: str

llm = ChatAnthropic(
    model="claude-3-7-sonnet-latest"
).with_structured_output(SampleModel.model_json_schema())

llm.invoke("test")
```
will raise a
```
KeyError: 'description'
```
because Pydantic omits the description field when no docstring is
present.

This issue doesn’t occur when using ChatOpenAI or if you add a docstring
to the model:
```py
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class SampleModel(BaseModel):
    """Schema for sample_field output."""
    sample_field: str

llm = ChatOpenAI(
    model="gpt-4o-mini"
).with_structured_output(SampleModel.model_json_schema())

llm.invoke("test")
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 10:44:39 -04:00
Nuno Campos
27296bdb0c core: Make Graph.Node.data optional (#30943)
2025-04-21 07:18:36 -07:00
Pushpa Kumar
0e9d0dbc10 docs: dynamic copyright year in API ref (#30944)
- [x] **PR title:**  
`docs: make footer year dynamic in API reference docs`

- [x] **PR message:**

  - **Description:**  
Update `docs/api_reference/conf.py` to make the copyright year dynamic
(on
[https://python.langchain.com/api_reference/](https://python.langchain.com/api_reference/)).

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 14:10:14 +00:00
Ahmed Tammaa
de56c31672 core: Improve OutputParser error messaging when model output is truncated (max_tokens) (#30936)
Addresses #30158
When using the output parser—either in a chain or standalone—hitting
max_tokens triggers a misleading “missing variable” error instead of
indicating the output was truncated. This subtle bug often surfaces with
Anthropic models.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 10:06:18 -04:00
xsai9101
335f089d6a Community: Add bind variable support for oracle adb docloader (#30937)
PR title:
Community: Add bind variable support for oracle adb docloader
Description:
This PR adds support of using bind variable to oracle adb doc loader
class, including minor document change.
Issue:
N/A
Dependencies:
No new dependencies.
2025-04-21 08:47:33 -04:00
Ikko Eltociear Ashimine
9418c0d8a5 docs: update tableau.ipynb (#30938)
Initalize -> Initialize
2025-04-21 08:43:29 -04:00
Aubrey Ford
23f701b08e langchain_community: OpenAIEmbeddings not respecting chunk_size argument (#30946)
This is a follow-on PR with the identical changes that were made in
partners/openai.

Previous PR:  https://github.com/langchain-ai/langchain/pull/30757

When calling embed_documents and providing a chunk_size argument, that
argument is ignored when OpenAIEmbeddings is instantiated with its
default configuration (where check_embedding_ctx_length=True).

_get_len_safe_embeddings specifies a chunk_size parameter but it's not
being passed through in embed_documents, which is its only caller. This
appears to be an oversight, especially given that the
_get_len_safe_embeddings docstring states it should respect "the set
embedding context length and chunk size."

Developers typically expect method parameters to take effect (also, take
precedence) when explicitly provided, especially when instantiating
using defaults. I was confused as to why my API calls were being
rejected regardless of the chunk size I provided.
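
For illustration, a minimal sketch of the expectation this fixes (texts and chunk size are placeholders):

```python
from langchain_community.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()  # default: check_embedding_ctx_length=True
# the explicit chunk_size should now be honored instead of silently ignored
vectors = embeddings.embed_documents(["doc one", "doc two"], chunk_size=16)
```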
2025-04-21 08:39:07 -04:00
Aubrey Ford
b344f34635 partners/openai: OpenAIEmbeddings not respecting chunk_size argument (#30757)
When calling `embed_documents` and providing a `chunk_size` argument,
that argument is ignored when `OpenAIEmbeddings` is instantiated with
its default configuration (where `check_embedding_ctx_length=True`).

`_get_len_safe_embeddings` specifies a `chunk_size` parameter but it's
not being passed through in `embed_documents`, which is its only caller.
This appears to be an oversight, especially given that the
`_get_len_safe_embeddings` docstring states it should respect "the set
embedding context length and chunk size."

Developers typically expect method parameters to take effect (also, take
precedence) when explicitly provided, especially when instantiating
using defaults. I was confused as to why my API calls were being
rejected regardless of the chunk size I provided.

This bug also exists in langchain_community package. I can add that to
this PR if requested otherwise I will create a new one once this passes.
2025-04-18 15:27:27 -04:00
Konsti-s
017c8079e1 partners: ChatAnthropic supports urls (#30809)
**Description:**
partners-anthropic: ChatAnthropic supports b64 and urls in the
part[image_url][url] message variable

**Issue**:
ChatAnthropic right now only supports b64 encoded images in the
part[image_url][url] message variable. This PR enables ChatAnthropic to
also accept image urls in said variable and makes it compatible with
OpenAI messages to make model switching easier.
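
For illustration, a hedged sketch of the newly accepted URL form (model name and URL are placeholders):

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # placeholder model
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        # a URL, not just base64 data, is now accepted here
        {"type": "image_url", "image_url": {"url": "https://path.to/image.png"}},
    ],
}
llm.invoke([message])
```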

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-18 15:15:45 -04:00
Volodymyr Tkachuk
d0cd115356 community: Add deprecation decorator to SingleStore community integrations (#30846)
SingleStore integration now has its package `langchain-singlestore', so
the community implementation will no longer be maintained.

Added `deprecated` decorator to `SingleStoreDBChatMessageHistory`,
`SingleStoreDBSemanticCache`, and `SingleStoreDB` classes in the
community package.

**Dependencies:** https://github.com/langchain-ai/langchain/pull/30841

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-18 12:58:39 -04:00
Alejandro Rodríguez
34ddfba76b community: support usage_metadata for litellm streaming calls (#30683)
Support "usage_metadata" for LiteLLM streaming calls.

This is a follow-up to
https://github.com/langchain-ai/langchain/pull/30625, which tackled
non-streaming calls.

2025-04-18 12:50:32 -04:00
Volodymyr Tkachuk
5ffcd01c41 docs: Register langchain-singlestore integration (#30841)
I created and published the `langchain-singlestore` integration package,
which should replace the SingleStoreDB community implementation.
2025-04-18 12:11:33 -04:00
ccurme
096f0e5966 core[patch]: de-beta usage callback (#30928) 2025-04-18 15:45:09 +00:00
ccurme
46de0866db infra: add langchain-google-genai to monorepo test deps and update notebook cassettes (#30925)
Following https://github.com/langchain-ai/langchain/pull/30880
2025-04-18 11:16:12 -04:00
Behrad Hemati
d624a475e4 community: change metadata in opensearch mmr (#30921)
- [ ] **PR message**:
- **Description:** including `metadata_field` in
`max_marginal_relevance_search()` would result in an error; changed the
logic to match how it's handled in `similarity_search`, where it can be
any field or simply `"*"` to include every field
2025-04-18 10:10:23 -04:00
rylativity
dbf9986d44 langchain-ollama (partners) / langchain-core: allow passing ChatMessages to Ollama (including arbitrary roles) (#30411)
Replacement for PR #30191 (@ccurme)

**Description**: currently, ChatOllama [will raise a value error if a
ChatMessage is passed to
it](https://github.com/langchain-ai/langchain/blob/master/libs/partners/ollama/langchain_ollama/chat_models.py#L514),
as described in
https://github.com/langchain-ai/langchain/pull/30147#issuecomment-2708932481.

Furthermore, ollama-python is removing the limitations on valid roles
that can be passed through chat messages to a model in ollama -
https://github.com/ollama/ollama-python/pull/462#event-16917810634.

This PR removes the role limitations imposed by langchain and enables
passing langchain ChatMessages with arbitrary 'role' values through the
langchain ChatOllama class to the underlying ollama-python Client.

As this PR relies on [merged but unreleased functionality in
ollama-python](
https://github.com/ollama/ollama-python/pull/462#event-16917810634), I
have temporarily pointed the ollama package source to the main branch of
the ollama-python github repo.

Format, lint, and tests of new functionality passing. Need to resolve
issue with recently added ChatOllama tests. (Now resolved)

**Issue**: resolves #30122 (related to ollama issue
https://github.com/ollama/ollama/issues/8955)

**Dependencies**: no new dependencies

[x] PR title
[x] PR message
[x] Lint and test: format, lint, and test all running successfully and
passing
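
For illustration, a minimal sketch of the new behavior, with a placeholder model name and an arbitrary role:

```python
from langchain_core.messages import ChatMessage
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")  # placeholder model
# ChatMessage with an arbitrary role now passes through to ollama-python
llm.invoke([ChatMessage(role="control", content="thinking")])
```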

---------

Co-authored-by: Ryan Stewart <ryanstewart@Ryans-MacBook-Pro.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-18 10:07:07 -04:00
Christophe Bornet
0c723af4b0 langchain[lint]: fix mypy type ignores (#30894)
* Remove unused ignores
* Add type ignore codes
* Add mypy rule `warn_unused_ignores`
* Add ruff rule PGH003

NB: some `type: ignore[unused-ignore]` are added because the ignores are
needed when `extended_testing_deps.txt` deps are installed.
2025-04-17 17:54:34 -04:00
ccurme
f14bcee525 docs: update multi-modal docs (#30880)
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-04-17 16:03:05 -04:00
Sydney Runkle
98c357b3d7 core: release 0.3.54 (#30911) 2025-04-17 14:27:06 -04:00
Vadym Barda
d2cbfa379f core[patch]: add retries and better messages to draw_mermaid_png (#30881) 2025-04-17 18:25:37 +00:00
Sydney Runkle
75e50a3efd core[patch]: Raise AttributeError (instead of ModuleNotFoundError) in custom __getattr__ (#30905)
Follow up to https://github.com/langchain-ai/langchain/pull/30769,
fixing the regression reported
[here](https://github.com/langchain-ai/langchain/pull/30769#issuecomment-2807483610),
thanks @krassowski for the report!

Fix inspired by https://github.com/PrefectHQ/prefect/pull/16172/files

Other changes:
* Using tuples for `__all__`, except in `output_parsers` bc of a list
namespace conflict
* Using a helper function for imports due to repeated logic across
`__init__.py` files becoming hard to maintain.

Co-authored-by: Michał Krassowski <5832902+krassowski@users.noreply.github.com>
2025-04-17 14:15:28 -04:00
ccurme
61d2dc011e openai: release 0.3.14 (#30908) 2025-04-17 10:49:14 -04:00
ccurme
f0f90c4d88 anthropic: release 0.3.12 (#30907) 2025-04-17 14:45:12 +00:00
ccurme
f01b89df56 standard-tests: release 0.3.19 (#30906) 2025-04-17 10:37:44 -04:00
ccurme
add6a78f98 standard-tests, openai[patch]: add support standard audio inputs (#30904) 2025-04-17 10:30:57 -04:00
ccurme
2c2db1ab69 core: release 0.3.53 (#30901) 2025-04-17 13:10:32 +00:00
ccurme
86d51f6be6 multiple: permit optional fields on multimodal content blocks (#30887)
Instead of stuffing provider-specific fields in `metadata`, they can go
directly on the content block.
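
For illustration, a hedged sketch of such a block (the provider-specific field shown is a placeholder):

```python
block = {
    "type": "image",
    "source_type": "url",
    "url": "https://path.to.image.png",
    "cache_control": {"type": "ephemeral"},  # provider-specific, inline
}
```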
2025-04-17 12:48:46 +00:00
湛露先生
83b66cb916 doc: clean doc word description. (#30895)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-17 08:04:37 -04:00
湛露先生
ff2930c119 partners: bug fix check_imports.py exit code. (#30897)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-17 08:02:23 -04:00
ccurme
b36c2bf833 docs: update Bedrock chat model page (#30883)
- document prompt caching
- feature ChatBedrockConverse throughout
2025-04-16 16:55:14 -04:00
ccurme
9e82f1df4e docs: minor clean up in ChatOpenAI docs (#30884) 2025-04-16 16:08:43 -04:00
ccurme
fa362189a1 docs: document OpenAI reasoning summaries (#30882) 2025-04-16 19:21:14 +00:00
Sydney Runkle
88fce67724 core: Removing unnecessary pydantic core schema rebuilds (#30848)
We only need to rebuild model schemas if type annotation information
isn't available during declaration - that shouldn't be the case for
these types corrected here.

Need to do more thorough testing to make sure these structures have
complete schemas, but hopefully this boosts startup / import time.
2025-04-16 12:00:08 -04:00
rrozanski-smabbler
60d8ade078 Galaxia integration (#30792)
- [ ] **PR title**: "docs: adding Smabbler's Galaxia integration"

- [ ] **PR message**:  **Twitter handle:** @Galaxia_graph

I'm adding docs here + added the package to the packages.yml. I didn't
add a unit test, because this integration is just a thin wrapper on top
of our API. There isn't much left to test if you mock it away.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-16 10:39:04 -04:00
ccurme
ca39680d2a ollama: release 0.3.2 (#30865) 2025-04-16 09:14:57 -04:00
Sydney Runkle
4af3f89a3a docs: enforce newlines when signature exceeds char threshold (#30866)
Below is an example of the single line vs new multiline approach.

Before this PR:

<img width="831" alt="Screenshot 2025-04-15 at 8 56 26 PM"
src="https://github.com/user-attachments/assets/0c0277bd-2441-4b22-a536-e16984fd91b7"
/>

After this PR:

<img width="829" alt="Screenshot 2025-04-15 at 8 56 13 PM"
src="https://github.com/user-attachments/assets/e16bfe38-bb17-48ba-a642-e8ff6b48e841"
/>
2025-04-16 08:45:40 -04:00
milosz-l
4ff576e37d langchain: infer Perplexity provider for sonar model prefix (#30861)
**Description:** This PR adds provider inference logic to
`init_chat_model` for Perplexity models that use the "sonar..." prefix
(`sonar`, `sonar-pro`, `sonar-reasoning`, `sonar-reasoning-pro` or
`sonar-deep-research`).

This allows users to initialize these models by simply passing the model
name, without needing to explicitly set `model_provider="perplexity"`.

The docstring for `init_chat_model` has also been updated to reflect
this new inference rule.
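
For illustration, a minimal sketch of the new inference rule:

```python
from langchain.chat_models import init_chat_model

llm = init_chat_model("sonar-pro")  # provider inferred as "perplexity"
llm.invoke("Hello!")
```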
2025-04-15 18:17:21 -04:00
ccurme
085baef926 ollama[patch]: support standard image format (#30864)
Following https://github.com/langchain-ai/langchain/pull/30746
2025-04-15 22:14:50 +00:00
ccurme
47ded80b64 ollama[patch]: fix generation info (#30863)
https://github.com/langchain-ai/langchain/pull/30778 (not released)
broke all invocation modes of ChatOllama (intent was to remove
`"message"` from `generation_info`, but we turned `generation_info` into
`stream_resp["message"]`), resulting in validation errors.
2025-04-15 19:22:58 +00:00
Sydney Runkle
cf2697ec53 chroma: release 0.2.3 (#30860) 2025-04-15 14:11:23 -04:00
ccurme
8e9569cbc8 perplexity: release 0.1.1 (#30859) 2025-04-15 18:02:15 +00:00
ccurme
dd5f5902e3 openai: release 0.3.13 (#30858) 2025-04-15 17:58:12 +00:00
ccurme
3382ee8f57 anthropic: release 0.3.11 (#30857) 2025-04-15 17:57:00 +00:00
Sydney Runkle
ef5aff3b6c core[fix]: Fix __dir__ in __init__.py for output_parsers module (#30856)
We have a `list.py` file which causes a namespace conflict with `list`
from stdlib, unfortunately.

`__all__` is already a list, so no need to coerce.
2025-04-15 13:09:13 -04:00
Christophe Bornet
a4ca1fe0ed core: Remove some noqa (#30855) 2025-04-15 13:08:40 -04:00
ccurme
6baf5c05a6 standard-tests: release 0.3.18 (#30854) 2025-04-15 16:56:54 +00:00
ccurme
c6a8663afb infra: run old standard-tests on core releases (#30852)
On core releases, we check out the latest published package for
langchain-openai and langchain-anthropic and run their tests against the
candidate version of langchain-core.

Because these packages have a local install of langchain-tests, we also
need to check out the previous version of langchain-tests.
2025-04-15 16:04:08 +00:00
Sydney Runkle
1f5e207379 core[fix]: remove load from dynamic imports dict (#30849) 2025-04-15 12:02:46 -04:00
ccurme
7240458619 core: release 0.3.52 (#30850) 2025-04-15 15:28:31 +00:00
Sydney Runkle
6aa5494a75 Fix from langchain_core.load.load import load import (#30843)
TL;DR: you can't optimize imports with a lazy `__getattr__` if there is
a namespace conflict with a module name and an attribute name. We should
avoid introducing conflicts like this in the future.

This PR fixes a bug introduced by my lazy imports PR:
https://github.com/langchain-ai/langchain/pull/30769.

In `langchain_core`, we have utilities for loading and dumping data.
Unfortunately, one of those utilities is a `load` function, located in
`langchain_core/load/load.py`. To make this function more visible, we
make it accessible at the top level `langchain_core.load` module via
importing the function in `langchain_core/load/__init__.py`.

So, either of these imports should work:

```py
from langchain_core.load import load
from langchain_core.load.load import load
```

As you can tell, this is already a bit confusing. You'd think that the
first import would produce the module `load`, but because of the
`__init__.py` shortcut, both produce the function `load`.

<details> More on why the lazy imports PR broke this support...

All was well, except when the absolute import was run first, see the
last snippet:

```
>>> from langchain_core.load import load
>>> load
<function load at 0x101c320c0>
```

```
>>> from langchain_core.load.load import load
>>> load
<function load at 0x1069360c0>
```

```
>>> from langchain_core.load import load
>>> load
<function load at 0x10692e0c0>
>>> from langchain_core.load.load import load
>>> load
<function load at 0x10692e0c0>
```

```
>>> from langchain_core.load.load import load
>>> load
<function load at 0x101e2e0c0>
>>> from langchain_core.load import load
>>> load
<module 'langchain_core.load.load' from '/Users/sydney_runkle/oss/langchain/libs/core/langchain_core/load/load.py'>
```

In this case, the function `load` wasn't stored in the globals cache for
the `langchain_core.load` module (by the lazy import logic), so Python
defers to a module import.

</details>

New `langchain` tongue twister 😜: we've created a problem for ourselves
because you have to load the load function from the load file in the
load module 😨.
2025-04-15 11:06:13 -04:00
Bagatur
7262de4217 core[patch]: dict chat prompt template support (#25674)
- Support passing dicts as templates to chat prompt template
- Support making *any* attribute on a message a runtime variable
- Significantly simpler than trying to update our existing prompt
template classes

```python
    template = ChatPromptTemplate(
        [
            {
                "role": "assistant",
                "content": [
                    {
                        "type": "text",
                        "text": "{text1}",
                        "cache_control": {"type": "ephemeral"},
                    },
                    {"type": "image_url", "image_url": {"path": "{local_image_path}"}},
                ],
                "name": "{name1}",
                "tool_calls": [
                    {
                        "name": "{tool_name1}",
                        "args": {"arg1": "{tool_arg1}"},
                        "id": "1",
                        "type": "tool_call",
                    }
                ],
            },
            {
                "role": "tool",
                "content": "{tool_content2}",
                "tool_call_id": "1",
                "name": "{tool_name1}",
            },
        ]
    )

```

will likely close #25514 if we like this idea and update to use this
logic

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-15 11:00:49 -04:00
ccurme
9cfe6bcacd multiple: multi-modal content blocks (#30746)
Introduces standard content block format for images, audio, and files.

## Examples

Image from url:
```
{
    "type": "image",
    "source_type": "url",
    "url": "https://path.to.image.png",
}
```


Image, in-line data:
```
{
    "type": "image",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "image/png",
}
```


PDF, in-line data:
```
{
    "type": "file",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "application/pdf",
}
```


File from ID:
```
{
    "type": "file",
    "source_type": "id",
    "id": "file-abc123",
}
```


Plain-text file:
```
{
    "type": "file",
    "source_type": "text",
    "text": "foo bar",
}
```
2025-04-15 09:48:06 -04:00
湛露先生
09438857e8 docs: fix tools_human.ipynb url 404. (#30831)
Fix the 404 pages.

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-15 09:22:13 -04:00
Sydney Runkle
e3b6cddd5e core: codspeed tweak to make sure it runs on master (#30845) 2025-04-15 13:03:44 +00:00
Sydney Runkle
59f2c9e737 Tinkering with CodSpeed (#30824)
Fix CI to trigger benchmarks on `run-codspeed-benchmarks` label addition

Reduce scope of async benchmark to save time on CI

Waiting to merge this PR until we figure out how to use walltime on
local runners.
2025-04-15 08:49:09 -04:00
William FH
ed5c4805f6 Consistent docstring indentation (#30834)
Should be 4 spaces instead of 3.
2025-04-14 19:04:35 -07:00
Joey Constantino
2282762528 docs: small Tableau docs update (#30827)
Description: small Tableau docs update
Issue: adds required environment variable
Dependencies: tableau-langchain

---------

Co-authored-by: Joe Constantino <joe.constantino@joecons-ltm6v86.internal.salesforce.com>
2025-04-14 15:34:54 -04:00
ccurme
f7c4965fb6 openai[patch]: update imports in test (#30828)
Quick fix to unblock CI, will need to address in core separately.
2025-04-14 19:33:38 +00:00
Sydney Runkle
edb6a23aea core[lint]: fix issue with unused ignore in __init__.py files (#30825)
Fixing a race condition between
https://github.com/langchain-ai/langchain/pull/30769 and
https://github.com/langchain-ai/langchain/pull/30737
2025-04-14 17:57:00 +00:00
湛露先生
3a64c7195f community: redis tool typos fix (#30811) 2025-04-14 09:01:36 -04:00
Sydney Runkle
4f69094b51 core[performance]: use custom __getattr__ in __init__.py files for lazy imports (#30769)
Most easily reviewed with the "hide whitespace" option toggled.

Seeing 10-50% speed ups in import time for common structures 🚀 

The general purpose of this PR is to lazily import structures within
`langchain_core.XXX_module.__init__.py` so that we're not eagerly
importing expensive dependencies (`pydantic`, `requests`, etc).

Analysis of flamegraphs generated with `importtime` motivated these
changes. For example, the one below demonstrates that importing
`HumanMessage` accidentally triggered imports for `importlib.metadata`,
`requests`, etc.

There's still much more to do on this front, and we can start digging
into our own internal code for optimizations now that we're less
concerned about external imports.
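
For illustration, a minimal sketch of the PEP 562 lazy-import pattern being applied here (not the exact langchain-core implementation):

```python
# inside a package __init__.py: defer the real import until first access
from importlib import import_module

_dynamic_imports = {"HumanMessage": "langchain_core.messages"}

def __getattr__(name: str):
    module_path = _dynamic_imports.get(name)
    if module_path is None:
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
    attr = getattr(import_module(module_path), name)
    globals()[name] = attr  # cache so later lookups bypass __getattr__
    return attr
```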

<img width="1210" alt="Screenshot 2025-04-11 at 1 10 54 PM"
src="https://github.com/user-attachments/assets/112a3fe7-24a9-4294-92c1-d5ae64df839e"
/>

I've tracked the improvements with some local benchmarks:

## `pytest-benchmark` results

| Name | Before (s) | After (s) | Delta (s) | % Change |
|-----------------------------|------------|-----------|-----------|----------|
| Document | 2.8683 | 1.2775 | -1.5908 | -55.46% |
| HumanMessage | 2.2358 | 1.1673 | -1.0685 | -47.79% |
| ChatPromptTemplate | 5.5235 | 2.9709 | -2.5526 | -46.22% |
| Runnable | 2.9423 | 1.7793 | -1.163 | -39.53% |
| InMemoryVectorStore | 3.1180 | 1.8417 | -1.2763 | -40.93% |
| RunnableLambda | 2.7385 | 1.8745 | -0.864 | -31.55% |
| tool | 5.1231 | 4.0771 | -1.046 | -20.42% |
| CallbackManager | 4.2263 | 3.4099 | -0.8164 | -19.32% |
| LangChainTracer | 3.8394 | 3.3101 | -0.5293 | -13.79% |
| BaseChatModel | 4.3317 | 3.8806 | -0.4511 | -10.41% |
| PydanticOutputParser | 3.2036 | 3.2995 | 0.0959 | 2.99% |
| InMemoryRateLimiter | 0.5311 | 0.5995 | 0.0684 | 12.88% |

Note the lack of change for `InMemoryRateLimiter` and
`PydanticOutputParser` is just random noise, I'm getting comparable
numbers locally.

## Local CodSpeed results

We're still working on configuring CodSpeed on CI. The local usage
produced similar results.
2025-04-14 08:57:54 -04:00
Christophe Bornet
ada740b5b9 community: Add ruff rule PGH003 (#30812)
See https://docs.astral.sh/ruff/rules/blanket-type-ignore/

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-14 02:32:13 +00:00
ccurme
f005988e31 community[patch]: fix cost calculations for o3 in OpenAI callback (#30807)
Resolves https://github.com/langchain-ai/langchain/issues/30795
2025-04-13 15:20:46 +00:00
BoyuHu
446361a0d3 docs: fix typo (#30800)
2025-04-13 10:55:30 -04:00
Marina Gómez
afd457d8e1 perplexity[patch]: Fix #30767: Handle missing citations attribute in ChatPerplexity (#30805)
This PR fixes an issue where ChatPerplexity would raise an
AttributeError when the citations attribute was missing from the model
response (e.g., when using offline models like r1-1776).

The fix checks for the presence of citations, images, and
related_questions before attempting to access them, avoiding crashes in
models that don't provide these fields.

Tested locally with models that omit citations, and the fix works as
expected.
2025-04-13 09:24:05 -04:00
Christophe Bornet
42944f3499 core: Improve mypy config (#30737)
* Cleanup mypy config
* Add mypy `strict` rules except `disallow_any_generics`,
`warn_return_any` and `strict_equality` (TODO)
* Add mypy `strict_byte` rule
* Add mypy support for PEP702 `@deprecated` decorator
* Bump mypy version to 1.15

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-11 16:35:13 -04:00
mpb159753
bb2c2fd885 docs: Add openGauss vector store documentation (#30742)
Hey LangChain community! 👋 Excited to propose official documentation for
our new openGauss integration that brings powerful vector capabilities
to the stack!

### What's Inside 📦  
1. **Full Integration Guide**  
Introducing
[langchain-opengauss](https://pypi.org/project/langchain-opengauss/) on
PyPI - your new toolkit for:
   🔍 Native hybrid search (vectors + metadata)  
   🚀 Production-grade connection pooling  
   🧩 Automatic schema management  

2. **Rigorous Testing Passed**   
![Benchmark
Results](https://github.com/user-attachments/assets/ae3b21f7-aeea-4ae7-a142-f2aec57936a0)
   - 100% non-async test coverage  

P.S. The current implementation resides in my personal repository:
https://github.com/mpb159753/langchain-opengauss. How can I transfer it
to the langchain-ai org? *Keen to hear your thoughts and make this
integration shine!*

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-11 20:31:39 +00:00
Christophe Bornet
913c896598 core: Add ruff rules FBT001 and FBT002 (#30695)
Add ruff rules
[FBT001](https://docs.astral.sh/ruff/rules/boolean-type-hint-positional-argument/)
and
[FBT002](https://docs.astral.sh/ruff/rules/boolean-default-value-positional-argument/).
Mostly `noqa`s to not introduce breaking changes and possible
non-breaking fixes have already been done in a [previous
PR](https://github.com/langchain-ai/langchain/pull/29424).
These rules will prevent new violations to happen.
2025-04-11 16:26:33 -04:00
William FH
2803a48661 core[patch]: Share executor for async callbacks run in sync context (#30779)
To avoid having to create ephemeral threads, grab the thread lock, etc.
2025-04-11 10:34:43 -07:00
Sydney Runkle
fdc2b4bcac core[lint]: Use 3.9 formatting for docs and tests (#30780)
Looks like `pyupgrade` was already used here but missed some docs and
tests.

This helps to keep our docs looking professional and up to date.
Eventually, we should lint / format our inline docs.
2025-04-11 10:39:25 -04:00
Sydney Runkle
48affc498b langchain[lint]: use pyupgrade to get to 3.9 standards (#30782) 2025-04-11 10:33:26 -04:00
ccurme
d9b628e764 xai: release 0.2.3 (#30790) 2025-04-11 14:05:11 +00:00
ccurme
9cfb95e621 xai[patch]: support reasoning content (#30758)
https://docs.x.ai/docs/guides/reasoning

```python
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "xai:grok-3-mini-beta",
    reasoning_effort="low"
)
response = llm.invoke("Hello, world!")
```
2025-04-11 14:00:27 +00:00
Christophe Bornet
89f28a24d3 core[lint]: Fix typing in test_async_callbacks (#30788) 2025-04-11 07:26:38 -04:00
Sydney Runkle
8c6734325b partners[lint]: run pyupgrade to get code in line with 3.9 standards (#30781)
Using `pyupgrade` to get all `partners` code up to 3.9 standards
(mostly, fixing old `typing` imports).
2025-04-11 07:18:44 -04:00
Jacob Lee
e72f3c26a0 fix(ollama): Remove redundant message from response_metadata (#30778) 2025-04-10 23:12:57 -07:00
Jannik Maierhöfer
f3c3ec9aec docs: add langfuse integration to provider list (#30573)
This PR adds the Langfuse integration to the provider list.
2025-04-10 22:25:42 -04:00
Christophe Bornet
dc19d42d37 core: Specify code when ignoring type issue (ruff PGH003) (#30675)
See https://docs.astral.sh/ruff/rules/blanket-type-ignore/
2025-04-10 22:23:52 -04:00
Paul Czarkowski
68d16d8a07 Community: Add Managed Identity support for Azure AI Search (#30730)
Add Managed Identity support for Azure AI Search

---------

Signed-off-by: Paul Czarkowski <username.taken@gmail.com>
2025-04-10 22:22:58 -04:00
CtrlMj
5103594a2c replace the deprecated initialize_agent in playwright.ipynb with create_react_agent (#30734)
**Description:** Replaced the example using the deprecated
`initialize_agent` function with `create_react_agent` from
`langgraph.prebuilt`.

**Issue:** #29277 

**Dependencies:** N/A
**Twitter handle:** N/A
2025-04-10 22:12:12 -04:00
Eugene Yurtsev
e42b3d285a langchain: remove langchain-server script (#30755)
Has been replaced by langsmith a long long time ago
2025-04-10 22:11:42 -04:00
Pol de Font-Réaulx
48cf7c838d feat(community): add oauth2 support for Jira toolkit (#30684)
**Description:** add support for OAuth2 in the Jira tool by allowing a
dictionary of OAuth parameters to be passed. I also adapted the
documentation to show this new behavior.
2025-04-10 22:04:09 -04:00
Oleg Ovcharuk
b6fe7e8c10 docs: YDB Vector Store docs (#30636)
This PR adds docs about how to use YDB as a vector store

[YDB](https://ydb.tech/) is a versatile open-source distributed SQL
database. It supports [vector
search](https://ydb.tech/docs/en/yql/reference/udf/list/knn) which means
it can be used as a vector store with langchain.

YDB vectore store comes with
[langchain-ydb](https://pypi.org/project/langchain-ydb/) pypi package.

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-10 21:33:56 -04:00
湛露先生
7a4ae6fbff community[patch]: simplify cache logic (#30760)

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-10 19:20:57 -04:00
ccurme
8e053ac9d2 core[patch]: support customization of backoff parameters in with_retries (#30773)
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-04-10 19:18:36 -04:00
amohan
e981a9810d docs: update links in cloudflare docs (#30776)
Thanks for reviewing again. I was notified of some
[links](https://python.langchain.com/api_reference/cloudflare/) not
being correct in the default integrations, so I'm updating them in this PR.
2025-04-10 19:08:18 -04:00
William FH
70532a65f8 Async callback benchmark (#30777) 2025-04-10 15:47:19 -07:00
Sydney Runkle
c6172d167a Only run CodSpeed benchmarks with run-codspeed-benchmarks label (#30774) 2025-04-10 15:48:14 -04:00
amohan
f70df01e01 docs: Update ordering of cloudflare integration examples in providers page (#30768)
Updated the ordering of cloudflare integrations and updated import
examples. Follow up from
https://github.com/langchain-ai/langchain/pull/30749
2025-04-10 15:34:58 -04:00
Sydney Runkle
8f8fea2d7e [performance]: Use hard coded langchain-core version to avoid importlib import (#30744)
This PR aims to reduce import time of `langchain-core` tools by removing
the `importlib.metadata` import previously used in `__init__.py`. This
is the first in a sequence of PRs to reduce import time delays for
`langchain-core` features and structures 🚀.

Because we're now hard coding the version, we need to make sure
`version.py` and `pyproject.toml` stay in sync, so I've added a new CI
job that runs whenever either of those files are modified. [This
run](https://github.com/langchain-ai/langchain/actions/runs/14358012706/job/40251952044?pr=30744)
demonstrates the failure that occurs whenever the version gets out of
sync (thus blocking a PR).

Before, note the ~15% of time spent on the `importlib.metadata` and related
imports:

[screenshot: import-time profile before the change]

After (note the lack of the `importlib.metadata` time sink):

[screenshot: import-time profile after the change]
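
Roughly, the change amounts to this (a simplified sketch with an illustrative version string, not the exact diff):

```python
# Before (simplified): resolve the version at import time, pulling in
# importlib.metadata and its dependency chain.
# from importlib import metadata
# __version__ = metadata.version("langchain-core")

# After (simplified): a hard-coded constant in version.py, kept in sync
# with pyproject.toml by the new CI job.
VERSION = "0.3.51"  # illustrative value

__version__ = VERSION
```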
2025-04-10 14:15:02 -04:00
Sydney Runkle
cd6a83117c Adding more import time benchmarks for langchain-core (#30770)
Plus minor typo fix in `ChatPromptTemplate` case id.
2025-04-10 11:50:12 -04:00
Chamath K.B. Attanayaka
6c45c9efc3 docs: update clickhouse version in notebook example (#30754)
update clickhouse docker version tag in notebook example to avoid
compatibility issues with clickhouse-connect.
2025-04-10 09:51:54 -04:00
amohan
44b83460b2 docs: Add Cloudflare integrations (#30749)
Description:
This PR adds documentation for the langchain-cloudflare integration
package.

Issue:
N/A

Dependencies:
No new dependencies are required.

Tests and Docs:

Added an example notebook demonstrating the usage of the
langchain-cloudflare package, located in docs/docs/integrations.
Added a new package to libs/packages.yml.

Lint and Format:

Successfully ran make format and make lint.

---------

Co-authored-by: Collier King <collier@cloudflare.com>
Co-authored-by: Collier King <collierking99@gmail.com>
2025-04-10 09:27:23 -04:00
湛露先生
c87a270e5f cookbook: Fix docs typos. (#30763)
Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is
being modified. Use "docs: ..." for purely docs changes, "infra: ..."
for CI changes.
  - Example: "community: add foobar LLM"


- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** a description of the change
    - **Issue:** the issue # it fixes, if applicable
    - **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, eyurtsev, ccurme, vbarda, hwchase17.

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-10 09:13:24 -04:00
ccurme
63c16f5ca8 community: deprecate AzureCosmosDBNoSqlVectorSearch in favor of langchain-azure-ai implementation (#30756) 2025-04-09 21:04:16 +00:00
Christophe Bornet
4cc7bc6c93 core: Add ruff rules PLR (#30696)
Add ruff rules [PLR](https://docs.astral.sh/ruff/rules/#refactor-plr)
Except PLR09xxx and PLR2004.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-09 15:15:38 -04:00
célina
68361f9c2d partners: (langchain-huggingface) Embeddings - Integrate Inference Providers and remove deprecated code (#30735)
Hi there, This is a complementary PR to #30733.
This PR introduces support for Hugging Face's serverless Inference
Providers (documentation
[here](https://huggingface.co/docs/inference-providers/index)), allowing
users to specify different providers

This PR also removes the usage of `InferenceClient.post()` method in
`HuggingFaceEndpointEmbeddings`, in favor of the task-specific
`feature_extraction` method. `InferenceClient.post()` is deprecated and
will be removed in `huggingface_hub` v0.31.0.

## Changes made

- bumped the minimum required version of the `huggingface_hub` package
to ensure compatibility with the latest API usage.
- added a provider field to `HuggingFaceEndpointEmbeddings`, enabling
users to select the inference provider.
- replaced the deprecated `InferenceClient.post()` call in
`HuggingFaceEndpointEmbeddings` with the task-specific
`feature_extraction` method for future-proofing, `post()` will be
removed in `huggingface-hub` v0.31.0.

 All changes are backward compatible.
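
A usage sketch of the new `provider` field (model and provider values are illustrative):

```python
from langchain_huggingface import HuggingFaceEndpointEmbeddings

# `provider` selects which serverless Inference Provider serves the model.
embeddings = HuggingFaceEndpointEmbeddings(
    model="sentence-transformers/all-MiniLM-L6-v2",
    provider="hf-inference",
)
print(len(embeddings.embed_query("hello world")))
```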

---------

Co-authored-by: Lucain <lucainp@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-09 19:05:43 +00:00
Christophe Bornet
98f0016fc2 core: Add ruff rules ARG (#30732)
See https://docs.astral.sh/ruff/rules/#flake8-unused-arguments-arg
2025-04-09 14:39:36 -04:00
Sydney Runkle
66758599a9 [ci]: Quick codspeed.yml tweaks to enable comparisons with master (#30752)
* Only run codspeed logic when `libs/core` is changed (for now; we'll
want to add other benchmarks later)
* Also run on `master` so that we can get a reference :)
2025-04-09 13:13:49 -04:00
theosaurus
d47d6ecbc3 docs: Fix typo in get_separators_for_language method section (#30748)
2025-04-09 13:03:01 -04:00
Sydney Runkle
78ec7d886d [performance]: Adding benchmarks for common langchain-core imports (#30747)
The first in a sequence of PRs focusing on improving performance in
core. We're starting with reducing import times for common structures,
hence the benchmarks here.

The benchmark looks a little bit complicated - we have to use a process
so that we don't suffer from Python's import caching system. I tried
doing manual modification of `sys.modules` between runs, but that's
pretty tricky / hacky to get right, hence the subprocess approach.

Motivated by extremely slow baseline for common imports (we're talking
2-5 seconds):

[screenshot: baseline import times for common langchain-core imports]

Also added a `make benchmark` command to make local runs easy :).
Currently using walltimes so that we can track total time despite using
a manual process.
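
In sketch form, the subprocess approach looks like this (illustrative, not the actual harness):

```python
import subprocess
import sys
import time

def import_walltime(module: str) -> float:
    """Time a cold import in a fresh interpreter, bypassing sys.modules caching."""
    start = time.monotonic()
    subprocess.run([sys.executable, "-c", f"import {module}"], check=True)
    return time.monotonic() - start

print(f"{import_walltime('langchain_core.prompts'):.2f}s")
```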
2025-04-09 13:00:15 -04:00
German Molina
5fb261ce27 community: Google Vertex AI Search now returns the website title as part of the document metadata (#30688)
Google Vertex AI Search will now return the title of the found website
as part of the document metadata, if available.

Thank you for contributing to LangChain!

- **Description**: Vertex AI Search can be used to index websites and
then develop chatbots that use these websites to answer questions. At
present, the document metadata includes an `id` and `source` (which is
the URL). While the URL is enough to create a link, the ID is not
descriptive enough to show users. Therefore, I propose we return `title`
as well, when available (e.g., it will not be available in `.txt`
documents found during the website indexing).
- **Issue**: No bug in particular, but it would be better if this was
here.
- **Dependencies**: None
- I do not use twitter.

Format, Lint and Test seem to be all good.
2025-04-09 08:54:06 -04:00
theosaurus
636d831d27 docs: Fix typo in 'Query re-writing' section (#30736)
2025-04-09 08:50:14 -04:00
giulia_p_lib
deec538335 docs: fix small typo in map_rerank_docs_chain.ipynb (#30738)
- **Description:** fixed a minor typo in map_rerank_docs_chain.ipynb
2025-04-09 08:49:37 -04:00
Akshay Dongare
164e606cae docs: fix import path and update LiteLLM integration docs (#30685)
- [x] **PR title**: "docs: Update import path and LiteLLM integration
docs"
- Update the old import path for `ChatLiteLLM` to reflect the new export
from
[`__init__.py`](https://github.com/Akshay-Dongare/langchain-litellm/blob/main/langchain_litellm/__init__.py)
in
[`langchain-litellm`](https://github.com/Akshay-Dongare/langchain-litellm)
package

- [x] **PR message**:
    - **Description:** 
    - 🔗 **Follow-up to**: PR #30637
    - 🔧 **Fixes**: #30368
- 💬 **Based on this comment from** @ccurme:
https://github.com/langchain-ai/langchain/pull/30637#discussion_r2029084320
 


- [x] **About me**
🔗 LinkedIn:
[akshay-dongare](https://www.linkedin.com/in/akshay-dongare/)
2025-04-08 13:04:17 -04:00
Ikko Eltociear Ashimine
5686fed40b docs: update yellowbrick.ipynb (#30729)
retreival -> retrieval
2025-04-08 11:56:35 -04:00
Sydney Runkle
4556b81b1d Clean up numpy dependencies and speed up 3.13 CI with numpy>=2.1.0 (#30714)
Generally, this PR is CI performance focused + aims to clean up some
dependencies at the same time.

1. Unpins upper bounds for `numpy` in all `pyproject.toml` files where
`numpy` is specified
2. Requires `numpy >= 2.1.0` for Python 3.13 and `numpy > v1.26.0` for
Python 3.12, plus a `numpy` min version bump for `chroma`
3. Speeds up CI by minutes - linting on Python 3.13, installing `numpy <
2.1.0` was taking [~3
minutes](https://github.com/langchain-ai/langchain/actions/runs/14316342925/job/40123305868?pr=30713),
now the entire env setup takes a few seconds
4. Deleted the `numpy` test dependency from partners where that was not
used, specifically `huggingface`, `voyageai`, `xai`, and `nomic`.

It's a bit unfortunate that `langchain-community` depends on `numpy`, we
might want to try to fix that in the future...

Closes https://github.com/langchain-ai/langchain/issues/26026
Fixes https://github.com/langchain-ai/langchain/issues/30555
2025-04-08 09:45:07 -04:00
ccurme
163730aef4 docs: update SQL QA prompt (#30728)
Resolves https://github.com/langchain-ai/langchain/issues/30724

The [prompt in
langchain-hub](https://smith.langchain.com/hub/langchain-ai/sql-query-system-prompt)
used in this guide was composed of just a system message, but the guide
did not add a human message to it. This was incompatible with some
providers (and is generally not a typical usage pattern).

The prompt in prompt hub has been updated to split the question into a
separate HumanMessage. Here we update the guide to reflect this.
2025-04-08 09:42:49 -04:00
湛露先生
9cbe91896e Fix deepseek release tag, as the name was updated. (#30717)

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-08 08:43:16 -04:00
Nithish Raghunandanan
893942651b docs: Update couchbase vector store docs (#30710)
-  **Update LangChain-Couchbase documentation**
- Rename `CouchbaseVectorStore` in favor of `CouchbaseSearchVectorStore`

- [x] **Lint and test**
2025-04-07 18:45:14 -04:00
Eugene Yurtsev
3ce0587199 ci: remove unused debug action (#30713)
Removing an unused action
2025-04-07 22:32:37 +00:00
ccurme
a2bec5f2e5 ollama: release 0.3.1 (#30716) 2025-04-07 20:31:25 +00:00
ccurme
e3f15f0a47 ollama[patch]: add model_name to response metadata (#30706)
Fixes [this standard
test](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html#langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.test_usage_metadata).
2025-04-07 16:27:58 -04:00
ccurme
e106e9602f groq[patch]: add retries to integration tests (#30707)
Tool-calling tests started intermittently failing with
> groq.APIError: Failed to call a function. Please adjust your prompt.
See 'failed_generation' for more details.
2025-04-07 12:45:53 -04:00
aaronlaitner
4f9f97bd12 docs: replaced initialize_agent with create_react_agent in dalle_image_generator.ipynb (#30697)
## Description:

Replaced deprecated 'initialize_agent' with 'create_react_agent' in
dalle_image_generator.ipynb
## Issue:
#29277

## Dependencies:
None

## Twitter handle:
@Thatopman

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-07 13:33:52 +00:00
Mohammad Mohtashim
e935da0b12 ChatTongyi reasoning_content fix (#30694)
- **Description:** Small fix for `reasoning_content` key
- **Issue:** #30689
2025-04-07 09:27:33 -04:00
Tin Lai
4d03ba4686 langchain_qdrant: fix showing the missing sparse vector name (#30701)
**Description:** The error message was supposed to display the missing
vector name, but instead, it includes only the existing collection
configs.

This simple PR just includes the correct variable name, so that the user
knows the requested vector does not exist in the collection.


Signed-off-by: Tin Lai <tin@tinyiu.com>
2025-04-07 09:19:08 -04:00
Eugene Yurtsev
30af9b8166 Update bug-report.yml (#30680)
Update bug report template!
2025-04-04 16:47:50 -04:00
jessicaou
2712ecffeb Update Contributor Docs (#30679)
Deletes statement on integration marketing
2025-04-04 16:35:11 -04:00
Ninad Sinha
a3671ceb71 docs: Add tools for hyperbrowser (#30606)
Description

This PR updates the docs for the
[langchain-hyperbrowser](https://pypi.org/project/langchain-hyperbrowser/)
package. It adds a few tools
 - Scrape Tool
 - Crawl Tool
 - Extract Tool
 - Browser Agents
   - Claude Computer Use
   - OpenAI CUA
   - Browser Use 

[Hyperbrowser](https://hyperbrowser.ai/) is a platform for running and
scaling headless browsers. It lets you launch and manage browser
sessions at scale and provides easy to use solutions for any webscraping
needs, such as scraping a single page or crawling an entire site.

Issue
None

Dependencies
None

Twitter Handle
`@hyperbrowser`
2025-04-04 16:02:47 -04:00
Christophe Bornet
6650b94627 core: Add ruff rules PYI (#29335)
See https://docs.astral.sh/ruff/rules/#flake8-pyi-pyi

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-04 19:59:44 +00:00
Philippe PRADOS
d8e3b7667f community[patch]: Fix empty producer in PDF Parsers (#30620)
Fix an issue where a PDF file without a “producer” in its metadata raised an exception.
2025-04-04 15:53:49 -04:00
Christophe Bornet
f0159c7125 core: Add ruff rules PGH (except PGH003) (#30656)
Add ruff rules PGH: https://docs.astral.sh/ruff/rules/#pygrep-hooks-pgh
Except PGH003 which will be dealt in a dedicated PR.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-04-04 19:53:27 +00:00
Jorge Ángel Juárez Vázquez
2491237473 docs: Add Google Calendar documentation (#30633)
## Docs: Add Google Calendar Toolkit Documentation

### Description:
This PR adds documentation for the Google Calendar Toolkit as part of
the `langchain-google` repository. Refer to the related PR: [community:
Add Google Calendar
Toolkit](https://github.com/langchain-ai/langchain-google/pull/688).

### Issue:
N/A 

### Twitter handle:
@jorgejrzz
2025-04-04 15:53:03 -04:00
Armaanjeet Singh Sandhu
7c2468f36b core: Fix handler removal in BaseCallbackManager (Fixes #30640) (#30659)
**Description:**  
Fixed a bug in `BaseCallbackManager.remove_handler()` that caused a
`ValueError` when removing a handler added via the constructor's
`handlers` parameter. The issue occurred because handlers passed to the
constructor were added only to the `handlers` list and not automatically
to `inheritable_handlers` unless explicitly specified. However,
`remove_handler()` attempted to remove the handler from both lists
unconditionally, triggering a `ValueError` when it wasn't in
`inheritable_handlers`.

The fix ensures the method checks for the handler’s presence in each
list before attempting removal, making it more robust while preserving
its original behavior.
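
A sketch of the fixed removal logic (not the verbatim patch):

```python
from langchain_core.callbacks import BaseCallbackHandler, BaseCallbackManager

class SafeCallbackManager(BaseCallbackManager):
    """Sketch of the fixed removal behavior."""

    def remove_handler(self, handler: BaseCallbackHandler) -> None:
        # Check each list before removing, so a handler present in only
        # one of them no longer triggers a ValueError.
        if handler in self.handlers:
            self.handlers.remove(handler)
        if handler in self.inheritable_handlers:
            self.inheritable_handlers.remove(handler)
```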

**Issue:** Fixes #30640

**Dependencies:** None

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-04 15:45:15 -04:00
Mohammad Mohtashim
bff56c5fa6 community[patch]: Redundant Parser checker for Webbaseloader (#30632)
- **Description:** We do not need to set parser in `scrape` since it is
already been done in `_scrape`
- **Issue:** #30629, not directly related but makes sure xml parser is
used
2025-04-04 14:11:26 -04:00
Christophe Bornet
150ac0cb79 core: Add ruff rules DTZ (#30657)
Add ruff rules DTZ:
https://docs.astral.sh/ruff/rules/#flake8-datetimez-dtz
2025-04-04 13:43:47 -04:00
Christophe Bornet
5e418c2666 core: Rework pydantic version checks (#30653)
This pull request includes various changes to the `langchain_core`
library, focusing on improving compatibility with different versions of
Pydantic. The primary change involves replacing checks for Pydantic
major versions with boolean flags, which simplifies the code and
improves readability.
This also solves ruff rule checks for
[RUF048](https://docs.astral.sh/ruff/rules/map-int-version-parsing/) and
[PLR2004](https://docs.astral.sh/ruff/rules/magic-value-comparison/).

Key changes include:

### Compatibility Improvements:
* `libs/core/langchain_core/output_parsers/json.py`: Replaced
`PYDANTIC_MAJOR_VERSION` with `IS_PYDANTIC_V1` to check for Pydantic
version 1.
* `libs/core/langchain_core/output_parsers/pydantic.py`: Updated version
checks from `PYDANTIC_MAJOR_VERSION` to `IS_PYDANTIC_V2` in the
`PydanticOutputParser` class.

### Utility Enhancements:
* `libs/core/langchain_core/utils/pydantic.py`: Introduced
`IS_PYDANTIC_V1` and `IS_PYDANTIC_V2` flags and deprecated the
`get_pydantic_major_version` function. Updated various functions to use
these flags instead of version numbers.

### Test Updates:
* `libs/core/tests/unit_tests/output_parsers/test_openai_tools.py`:
Updated tests to use `IS_PYDANTIC_V1` and `IS_PYDANTIC_V2` for version
checks.
* `libs/core/tests/unit_tests/prompts/test_chat.py`: Replaced version
tuple checks with `PYDANTIC_VERSION` comparisons.
* `libs/core/tests/unit_tests/runnables/test_graph.py`: Simplified
version checks using `PYDANTIC_VERSION`.
* `libs/core/tests/unit_tests/runnables/test_runnable.py`: Introduced
`PYDANTIC_VERSION_AT_LEAST_29` and `PYDANTIC_VERSION_AT_LEAST_210` for
more readable version checks.
2025-04-04 13:42:30 -04:00
Christophe Bornet
43b5dc7191 core: Add ruff rules TD and FIX (#30654)
Add ruff rules:
* FIX: https://docs.astral.sh/ruff/rules/#flake8-fixme-fix
* TD: https://docs.astral.sh/ruff/rules/#flake8-todos-td

Code cleanup:

* `libs/core/langchain_core/outputs/chat_generation.py`: Removed the
"HACK" prefix from a comment in the `set_text` method.

Configuration adjustments:

* `libs/core/pyproject.toml`: Added new rules `FIX002`, `TD002`, and
`TD003` to the ignore list.
* `libs/core/pyproject.toml`: Removed the `FIX` and `TD` rules from the
ignore list.

Test refinement:

* `libs/core/tests/unit_tests/runnables/test_runnable.py`: Updated a
TODO comment to improve clarity in the `test_map_stream` function.
2025-04-04 13:40:42 -04:00
ccurme
a007c57285 docs: update package registry sort order (#30677) 2025-04-04 13:12:39 -04:00
Sydney Runkle
33ed7c31da docs: fix perplexity install instructions in ChatPerplexity docstring (#30676)
* `openai` install no longer needs to be done manually
2025-04-04 12:58:18 -04:00
Dhruvajyoti Sarma
f9bb5ec5d0 feature: removed pandas dataframe dependency for similarity_search when using DuckDB as vector store (#30445)
- [ ] **PR title**: "community: Removes pandas dependency for using
DuckDB for similarity search"


- [ ] **PR message**: 
- **Description:** Removes pandas dependency for using DuckDB for
similarity search. The old function still exists as
`similarity_search_pd`, while the new one is at `similarity_search` and
requires no code changes. Return format remains the same.
    - **Issue:** Issue #29933 and update on PR #30435 
    - **Dependencies:** No dependencies
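
Illustrative usage of the pandas-free path (a sketch using a fake embedding so it runs offline):

```python
from langchain_community.vectorstores import DuckDB
from langchain_core.embeddings import DeterministicFakeEmbedding

store = DuckDB.from_texts(
    ["ducks can fly", "geese honk"],
    embedding=DeterministicFakeEmbedding(size=16),
)

# similarity_search now returns a list of Documents without going
# through a pandas DataFrame; the old path remains as similarity_search_pd.
docs = store.similarity_search("flying birds", k=1)
print(docs[0].page_content)
```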
2025-04-04 12:19:18 -04:00
Akshay Dongare
f79473b752 Solved issue Implement langchain-litellm #30368 (#30637)
**PR title**: 
- [x] 1. docs: docs/docs/integrations/providers/LiteLLM.md
- [x] 2. docs: docs/docs/integrations/chat/litellm.ipynb
- [x] 3. libs: libs/packages.yml

- [x] **PR message**:
    - **Description:** Implement langchain-litellm
    - **Issue:** the issue #30368 
    - **Twitter handle:** akshay_d02
    - **LinkedIn Handle** https://linkedin.com/in/akshay-dongare 

- [x] **Add tests and docs**: Done

- [x] **Lint and test**: Done

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-04 16:12:10 +00:00
Yiğit Bekir Kaya, PhD
87e82fe1e8 Added langchain-qwq package documentation (Alibaba Cloud) (#30628)
LangChain QwQ allows non-Tongyi users to access thinking models with
extra capabilities which serve as an extension to Alibaba Cloud.

Hi @ccurme I'm back with the updated PR this time with documentation and
a finished package.


- **Description:** adds documentation of `langchain-qwq` integration
package. Also adds it to Alibaba Cloud provider
- **Issue:** #30580 #30317 #30579
- **Dependencies:** openai, json-repair
- **Twitter handle:** YigitBekir


2025-04-04 11:47:14 -04:00
Andrew Benton
4e7a9a7014 community: Add support for custom runtimes to Riza tools (#30664)
**Description:**
Adds support for Riza custom runtimes to the two Riza code interpreter
tools, allowing users to run LLM-generated code that depends on
libraries outside stdlib.
**Issue:** N/A
**Dependencies:** None
**Twitter handle:** @rizaio
2025-04-04 11:03:14 -04:00
diego dupin
aa37893c00 MariaDB vector store documentation addition (#30229)
### New Feature

Since version 11.7.1, MariaDB support vector. This is a super fast
implementation (see [some perf
blog](https://smalldatum.blogspot.com/2025/01/evaluating-vector-indexes-in-mariadb.html)
The goal is to support MariaDB with langchain

Implementation is done in
https://github.com/mariadb-corporation/langchain-mariadb, published in
https://pypi.org/project/langchain-mariadb/

This concerns the doc addition
 

(initial PR https://github.com/langchain-ai/langchain/pull/29989)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Oskar Stark <oskarstark@googlemail.com>
2025-04-04 14:56:25 +00:00
Sydney Runkle
1cdea6ab07 langchain-community: release 0.3.21 (#30673) 2025-04-04 14:14:50 +00:00
Sydney Runkle
901dffe06b langchain: release 0.3.23 (#30670)
* Bump `text-splitters` min version
* Bump `langchain-core` min version
* Bump `langchain` version 🚀
2025-04-04 10:06:29 -04:00
ccurme
0c2c8c36c1 text-splitters: release 0.3.8 (#30671) 2025-04-04 09:58:45 -04:00
ccurme
59d508a2ee openai[patch]: make computer test more reliable (#30672) 2025-04-04 13:53:59 +00:00
Sydney Runkle
c235328b39 Revert "update langchain version and bump min core v"
This reverts commit d0f154dbaa.
2025-04-04 09:31:51 -04:00
Sydney Runkle
d0f154dbaa update langchain version and bump min core v 2025-04-04 09:27:49 -04:00
Sydney Runkle
32cd70d7d2 release: bump core to v0.3.51 (#30668) 2025-04-04 13:23:09 +00:00
Max Forsey
18cf457eec langchain-runpod integration (#30648)
## Description:

This PR adds the necessary documentation for the `langchain-runpod`
partner package integration. It includes:

* A provider page (`docs/docs/integrations/providers/runpod.ipynb`)
explaining the overall setup.
* An LLM component page (`docs/docs/integrations/llms/runpod.ipynb`)
detailing the `RunPod` class usage.
* A Chat Model component page
(`docs/docs/integrations/chat/runpod.ipynb`) detailing the `ChatRunPod`
class usage, including a feature support table.

These documentation files reflect the latest features of the
`langchain-runpod` package (v0.2.0+) such as async support and API
polling logic.

This work also addresses the review feedback provided on the previous
attempt in PR #30246 by:
*   Removing all TODOs from documentation.
*   Adding the required links between provider and component pages.
*   Completing the feature support table in the chat documentation.
*   Linking to the source code on GitHub for API reference.

Finally, it registers the `langchain-runpod` package in
`libs/packages.yml`.

## Dependencies:

None added to the core LangChain repository by these documentation
changes. The required dependency (`langchain-runpod`) is managed as a
separate package.

## Twitter handle:

@runpod_io

---------

Co-authored-by: Max Forsey <maxpod@maxpod.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 23:57:06 +00:00
NikeHop
9c03cd5775 Fix tool description in serpapi.ipynb (#30660)
Fixes the SerpAPI tool description: the tool description and name in the
example initialization of the SerpAPI tool were still those of the
Python REPL tool.

- @RLHoeppi

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 23:36:29 +00:00
Sydney Runkle
af66ab098e Adding Perplexity extra and deprecating the community version of ChatPerplexity (#30649)
Plus, some accompanying docs updates

Some compelling usage:

```py
from langchain_perplexity import ChatPerplexity


chat = ChatPerplexity(model="llama-3.1-sonar-small-128k-online")
response = chat.invoke(
    "What were the most significant newsworthy events that occurred in the US recently?",
    extra_body={"search_recency_filter": "week"},
)
print(response.content)
# > Here are the top significant newsworthy events in the US recently: ...
```

Also, some confirmation of structured outputs:

```py
from langchain_perplexity import ChatPerplexity
from pydantic import BaseModel


class AnswerFormat(BaseModel):
    first_name: str
    last_name: str
    year_of_birth: int
    num_seasons_in_nba: int


messages = [
    {"role": "system", "content": "Be precise and concise."},
    {
        "role": "user",
        "content": (
            "Tell me about Michael Jordan. "
            "Please output a JSON object containing the following fields: "
            "first_name, last_name, year_of_birth, num_seasons_in_nba. "
        ),
    },
]

llm = ChatPerplexity(model="llama-3.1-sonar-small-128k-online")
structured_llm = llm.with_structured_output(AnswerFormat)
response = structured_llm.invoke(messages)
print(repr(response))
#> AnswerFormat(first_name='Michael', last_name='Jordan', year_of_birth=1963, num_seasons_in_nba=15)
```
2025-04-03 14:29:17 -04:00
ccurme
b8929e3d5f docs: add image generation example to Google genai docs (#30650) 2025-04-03 14:21:54 -04:00
ccurme
374769e8fe core[patch]: log information from certain errors (#30626)
Some exceptions raised by SDKs include information in httpx responses
(see for example
[OpenAI](https://github.com/openai/openai-python/blob/main/src/openai/_exceptions.py)).
Here we trace information from those exceptions.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-04-03 16:45:19 +00:00
Sydney Runkle
17a9cd61e9 Bump langchain-core version in perplexity's pyproject.toml (#30647)
Blocking v0.1.0 release of `langchain-perplexity`
2025-04-03 16:19:10 +00:00
Sydney Runkle
3814bd1ea7 partners: Add Perplexity Chat Integration (#30618)
Perplexity's importance in the space has been growing, so we think it's
time to add an official integration!

Note: following the release of `langchain-perplexity` to `pypi`, we
should be able to add `perplexity` as an extra in
`libs/langchain/pyproject.toml`, but we're blocked by a circular import
for now.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 16:09:14 +00:00
vgrfl
87c02a1aff docs: Fixed a typo in 'Google AI vs Google Cloud Vertex AI' section (#30642)
**Description:** Corrected 'encription' spelling to 'encryption'
2025-04-03 09:04:29 -04:00
Alejandro Rodríguez
884125e129 community: support usage_metadata for litellm (#30625)
Support "usage_metadata" for LiteLLM. 

2025-04-02 19:45:15 -04:00
Jacob Lee
01d0cfe450 docs: Remove TODO from Ollama docs page (#30627) 2025-04-02 22:59:15 +00:00
Christophe Bornet
f241fd5c11 core: Add ruff rules RET (#29384)
See https://docs.astral.sh/ruff/rules/#flake8-return-ret
All auto-fixes
2025-04-02 16:59:56 -04:00
Eugene Yurtsev
9ae792f56c core: 0.3.50 release (#30623)
0.3.50 release
2025-04-02 14:46:23 -04:00
Christophe Bornet
ccc3d32ec8 core: Add ruff rules for Pylint PLC (Convention) and PLE (Errors) (#29286)
See https://docs.astral.sh/ruff/rules/#pylint-pl
2025-04-02 10:58:03 -04:00
ccurme
fe0fd9dd70 openai[patch]: upgrade tiktoken and fix test (#30621)
Related to https://github.com/langchain-ai/langchain/issues/30344

https://github.com/langchain-ai/langchain/pull/30542 introduced an
erroneous test for token counts for o-series models. tiktoken==0.8 does
not support o-series models in
`tiktoken.encoding_for_model(model_name)`, and this is the version of
tiktoken we had in the lock file. So we would default to `cl100k_base`
for o-series, which is the wrong encoding model. The test tested against
this wrong encoding (so it passed with tiktoken 0.8).

Here we update tiktoken to 0.9 in the lock file, and fix the expected
counts in the test. Verified that we are pulling
[o200k_base](https://github.com/openai/tiktoken/blob/main/tiktoken/model.py#L8),
as expected.
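
A quick check of the encoding resolution (requires `tiktoken>=0.9`):

```python
import tiktoken

# With tiktoken>=0.9, o-series models resolve to o200k_base rather than
# falling back to cl100k_base.
enc = tiktoken.encoding_for_model("o1")
print(enc.name)  # o200k_base
```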
2025-04-02 10:44:48 -04:00
oxy-tg
38807871ec docs: Add Oxylabs integration (#30591)
Description:
This PR adds documentation for the langchain-oxylabs integration
package.

The documentation includes instructions for configuring Oxylabs
credentials and provides example code demonstrating how to use the
package.

Issue:
N/A

Dependencies:
No new dependencies are required.

Tests and Docs:

Added an example notebook demonstrating the usage of the
Langchain-Oxylabs package, located in docs/docs/integrations.
Added a provider page in docs/docs/providers.
Added a new package to libs/packages.yml.

Lint and Test:

Successfully ran make format, make lint, and make test.
2025-04-02 14:40:32 +00:00
ccurme
816492e1d3 openai: release 0.3.12 (#30616) 2025-04-02 13:20:15 +00:00
Bagatur
111dd90a46 openai[patch]: support structured output and tools (#30581)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-02 09:14:02 -04:00
Karol Zmorski
32f7695809 docs: Little update in sample notebook with WatsonxToolkit (#30614)
**Description:**

- Updated sample notebook with valid tools.
2025-04-02 09:08:29 -04:00
Mahir Shah
9d3262c7aa core: Propagate config_factories in RunnableBinding (#30603)
- **Description:** Propagates config_factories when calling decoration
methods for RunnableBinding--e.g. bind, with_config, with_types,
with_retry, and with_listeners. This ensures that configs attached to
the original RunnableBinding are kept when creating the new
RunnableBinding and the configs are merged during invocation. Picks up
where #30551 left off.
  - **Issue:** #30531
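
A minimal sketch of the behavior this fixes (listener and chain names are illustrative):

```python
from langchain_core.runnables import RunnableLambda

def on_start(run):
    print("started:", run.name)

base = RunnableLambda(lambda x: x + 1)

# with_listeners attaches a config factory to the RunnableBinding; before
# this fix, decorating the binding again (e.g. with_retry) dropped it.
chain = base.with_listeners(on_start=on_start).with_retry(stop_after_attempt=2)
chain.invoke(1)  # the on_start listener now fires as expected
```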

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-01 18:03:58 -04:00
ccurme
8a69de5c24 openai[patch]: ignore file blocks when counting tokens (#30601)
OpenAI does not appear to document how it transforms PDF pages to
images, which determines how tokens are counted:
https://platform.openai.com/docs/guides/pdf-files?api-mode=chat#usage-considerations

Currently these block types raise ValueError inside
`get_num_tokens_from_messages`. Here we update to generate a warning and
continue.
2025-04-01 15:29:33 -04:00
Christophe Bornet
558191198f core: Add ruff rule FBT003 (boolean-trap) (#29424)
See
https://docs.astral.sh/ruff/rules/boolean-positional-value-in-call/#boolean-positional-value-in-call-fbt003
This PR also fixes some FBT001/002 in private methods but does not
enforce these rules globally atm.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-01 17:40:12 +00:00
Christophe Bornet
4f8ea13cea core: Add ruff rules PERF (#29375)
See https://docs.astral.sh/ruff/rules/#perflint-perf
2025-04-01 13:34:56 -04:00
Christophe Bornet
8a33402016 core: Add ruff rules PT (pytest) (#29381)
See https://docs.astral.sh/ruff/rules/#flake8-pytest-style-pt
2025-04-01 13:31:07 -04:00
Ben Faircloth
6896c863e8 docs: add seekrflow chat model integration docs (#30596)
### **PR title**  
`docs: add SeekrFlow integration notebook`

---

### 💬 **PR message**  
- **Description:**  
This PR adds an integration notebook for
[`ChatSeekrFlow`](https://pypi.org/project/langchain-seekrflow/)
under `docs/docs/integrations/chat/`. Per LangChain’s guidance,
SeekrFlow has been published as a standalone OSS package
(`langchain-seekrflow`) rather than as a direct community integration.
This notebook ensures discoverability, demonstration, and testability of
the integration within LangChain’s documentation structure.

- **Issue:**  
N/A – this is a new integration contribution aligned with LangChain’s
external package policy.

- **Dependencies:**  
- [`langchain-seekrflow`](https://pypi.org/project/langchain-seekrflow) (published to PyPI)
- [`seekrai`](https://pypi.org/project/seekrai/) (SeekrFlow client SDK)

- **Twitter handle (optional):**  
  @seekrtechnology

---------

Co-authored-by: Ben Faircloth <bfaircloth@seekr.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-01 13:18:01 -04:00
Christophe Bornet
768e4f695a core: Add ruff rules S110 and S112 (#30599) 2025-04-01 13:17:22 -04:00
Christophe Bornet
88b4233fa1 core: Add ruff rules D (docstring) (#29406)
This ensures that the code is properly documented:
https://docs.astral.sh/ruff/rules/#pydocstyle-d

Related to #21983
2025-04-01 13:15:45 -04:00
Andras L Ferenczi
64df60e690 community[minor]: Add custom sitemap URL parameter to GitbookLoader (#30549)
## Description
This PR adds a new `sitemap_url` parameter to the `GitbookLoader` class
that allows users to specify a custom sitemap URL when loading content
from a GitBook site. This is particularly useful for GitBook sites that
use non-standard sitemap file names like `sitemap-pages.xml` instead of
the default `sitemap.xml`.
The standard `GitbookLoader` assumes that the sitemap is located at
`/sitemap.xml`, but some GitBook instances (including GitBook's own
documentation) use different paths for their sitemaps. This parameter
makes the loader more flexible and helps users extract content from a
wider range of GitBook sites.
## Issue
Fixes bug
[30473](https://github.com/langchain-ai/langchain/issues/30473) where
the `GitbookLoader` would fail to find pages on GitBook sites that use
custom sitemap URLs.
## Dependencies
No new dependencies required.
*I've added*:
* Unit tests to verify the parameter works correctly
* Integration tests to confirm the parameter is properly used with real
GitBook sites
* Updated docstrings with parameter documentation
The changes are fully backward compatible, as the parameter is optional
with a sensible default.
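
Illustrative usage; the URLs below are examples of the kind of non-standard sitemap this enables:

```python
from langchain_community.document_loaders import GitbookLoader

loader = GitbookLoader(
    "https://docs.gitbook.com",
    load_all_paths=True,
    sitemap_url="https://docs.gitbook.com/sitemap-pages.xml",
)
docs = loader.load()
print(len(docs))
```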

---------

Co-authored-by: andrasfe <andrasf94@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-04-01 16:17:21 +00:00
Christophe Bornet
fdda1aaea1 core: Accept ALL ruff rules with exclusions (#30595)
This pull request updates the `pyproject.toml` configuration file to
modify the linting rules and ignored warnings for the project. The most
important changes include switching to a more comprehensive selection of
linting rules and updating the list of ignored rules to better align
with the project's requirements.

Linting rules update:

* Changed the `select` option to include all available linting rules by
setting it to `["ALL"]`.

Ignored rules update:

* Updated the `ignore` option to include specific rules that interfere
with the formatter, are incompatible with Pydantic, or are temporarily
excluded due to project constraints.
2025-04-01 11:17:51 -04:00
Kacper Włodarczyk
26a3256fc6 community[major]: DynamoDBChatMessageHistory bulk add messages, raise errors (#30572)
This PR addresses two key issues:

- **Prevent history errors from failing silently**: Previously, errors
in message history were only logged and not raised, which can lead to
inconsistent state and downstream failures (e.g., ValidationError from
Bedrock due to malformed message history). This change ensures that such
errors are raised explicitly, making them easier to detect and debug.
(Side note: I’m using AWS Lambda Powertools Logger but hadn’t configured
it properly with the standard Python logger—my bad. If the error had
been raised, I would’ve seen it in the logs 😄) This is a **BREAKING
CHANGE**

- **Add messages in bulk instead of iteratively**: This introduces a
custom add_messages method to add all messages at once. The previous
approach failed silently when individual messages were too large,
resulting in partial history updates and inconsistent state. With this
change, either all messages are added successfully, or none are—helping
avoid obscure history-related errors from Bedrock.
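
A usage sketch of the bulk path (table and session values are illustrative):

```python
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage

history = DynamoDBChatMessageHistory(
    table_name="SessionTable", session_id="session-123"
)

# All messages are written in one call: either the whole batch lands,
# or an error is raised instead of failing silently partway through.
history.add_messages([HumanMessage(content="hi"), AIMessage(content="hello!")])
```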

---------

Co-authored-by: Kacper Wlodarczyk <kacper.wlodarczyk@chaosgears.com>
2025-04-01 11:13:32 -04:00
Olexandr88
8c8bca68b2 docs: edited the badge to an acceptable size (#30586)
2025-04-01 07:17:12 -04:00
Armaanjeet Singh Sandhu
4bbc249b13 community: Fix attribute access for transcript text in YoutubeLoader (Fixes #30309) (#30582)
**Description:** 
Fixes a bug in the YoutubeLoader where FetchedTranscript objects were
not properly processed. The loader was only extracting the 'text'
attribute from FetchedTranscriptSnippet objects while ignoring 'start'
and 'duration' attributes. This would cause a TypeError when the code
later tried to access these missing keys, particularly when using the
CHUNKS format or any code path that needed timestamp information.

This PR modifies the conversion of FetchedTranscriptSnippet objects to
include all necessary attributes, ensuring that the loader works
correctly with all transcript formats.

**Issue:** Fixes #30309

**Dependencies:** None

**Testing:**
- Tested the fix with multiple YouTube videos to confirm it resolves the
issue
- Verified that both regular loading and CHUNKS format work correctly
2025-04-01 07:13:06 -04:00
Ivan Brko
ecff055096 community[minor]: Improve Brave Search Tool, allow api key in env var (#30364)
- **Description:** 

- Make Brave Search Tool consistent with other tools and allow reading
its api key from `BRAVE_SEARCH_API_KEY` instead of having to pass the
api key manually (no breaking changes)
- Improve Brave Search Tool by storing api key in `SecretStr` instead of
plain `str`.
    - Add unit test for `BraveSearchWrapper`
    - Reflect the changes in the documentation
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** ivan_brko
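
A sketch under the assumption that the tool can now be constructed without an explicit key once `BRAVE_SEARCH_API_KEY` is set:

```python
import os
from langchain_community.tools import BraveSearch

os.environ["BRAVE_SEARCH_API_KEY"] = "sk-..."  # placeholder key

# Assumption: with this change, the api key is read from the environment
# variable when not passed explicitly.
tool = BraveSearch()
print(tool.run("LangChain"))
```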
2025-03-31 14:48:52 -04:00
ccurme
0c623045b5 core[patch]: pydantic 2.11 compat (#30554)
Release notes: https://pydantic.dev/articles/pydantic-v2-11-release

Covered here:

- We no longer access `model_fields` on class instances (that is now
deprecated);
- Update schema normalization for Pydantic version testing to reflect
changes to generated JSON schema (addition of `"additionalProperties":
True` for dict types with value Any or object).

## Considerations:

### Changes to JSON schema generation

#### Tool-calling / structured outputs

This may impact tool-calling + structured outputs for some providers,
but schema generation only changes if you have parameters of the form
`dict`, `dict[str, Any]`, `dict[str, object]`, etc. If dict parameters
are typed my understanding is there are no changes.

For OpenAI for example, untyped dicts work for structured outputs with
default settings before and after updating Pydantic, and error both
before/after if `strict=True`.

### Use of `model_fields`

There is one spot where we previously accessed `super(cls,
self).model_fields`, where `cls` is an object in the MRO. This was done
for the purpose of tracking aliases in secrets. I've updated this to
always be `type(self).model_fields`-- see comment in-line for detail.

---------

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-03-31 14:22:57 -04:00
keshavshrikant
e8be3cca5c fix huggingface tokenizer default length function (#30185)
#30184
2025-03-31 11:54:30 -04:00
Fai LAW
4419340039 docs: add pre_filter usage in similarity_search_with_score (Azure Cosmos DB No SQL) (#30508)
`pre_filter` should be passed in the `Hybrid Search with filtering`
example. Otherwise, it is just an unused variable.
2025-03-31 11:33:00 -04:00
Wenqi Li
64f97e707e ollama[patch]: Support seed param for OllamaLLM (#30553)
**Description:** add the seed param for OllamaLLM client reproducibility

**Issue:** follow-up of a similar issue,
https://github.com/langchain-ai/langchain/issues/24703;
see also https://github.com/langchain-ai/langchain/pull/24782

**Dependencies:** n/a
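
Illustrative usage (model name is a placeholder; assumes a local Ollama server):

```python
from langchain_ollama import OllamaLLM

# Fixing the seed makes sampling reproducible across invocations.
llm = OllamaLLM(model="llama3", seed=42)
print(llm.invoke("Say hi"))
```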
2025-03-31 11:28:49 -04:00
Christophe Bornet
8395abbb42 core: Fix test_stream_error_callback (#30228)
Fixes #29436

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-31 10:37:22 -04:00
Jorge Piedrahita Ortiz
b9e19c5f97 Docs: Add sambanova cloud embeddings docs (#30525)
- **Description:** Add SambaNova Cloud embeddings docs. Previously only
SambaStudio embeddings were supported; in the latest release of
`langchain_sambanova`, SambaNova Cloud embeddings are also available.
2025-03-31 10:16:15 -04:00
Augusto César Perin
f4d1df1b2d docs: add missing with_config method to Runnable templates API reference (#30560)
Broken source/docs links for Runnable methods

### What was changed
Added the `with_config` method to the method lists in both Runnable
template files:
- docs/api_reference/templates/runnable_non_pydantic.rst
- docs/api_reference/templates/runnable_pydantic.rst
2025-03-31 10:08:02 -04:00
Christophe Bornet
026de908eb core: Add ruff rules G, FA, INP, AIR and ISC (#29334)
Fixes mostly for rules G. See
https://docs.astral.sh/ruff/rules/#flake8-logging-format-g
2025-03-31 10:05:23 -04:00
Brayden Zhong
e4515f308f community: update RankLLM integration and fix LangChain deprecation (#29931)
# Community: update RankLLM integration and fix LangChain deprecation

- [x] **Description:**  
- Removed `ModelType` enum (`VICUNA`, `ZEPHYR`, `GPT`) to align with
RankLLM's latest implementation.
- Updated `chain({query})` to `chain.invoke({query})` to resolve
LangChain 0.1.0 deprecation warnings from
https://github.com/langchain-ai/langchain/pull/29840.

- [x] **Dependencies:** No new dependencies added.  

- [x] **Tests and Docs:**  
- Updated RankLLM documentation
(`docs/docs/integrations/document_transformers/rankllm-reranker.ipynb`).
  - Fixed LangChain usage in related code examples.  

- [x] **Lint and Test:**  
- Ran `make format`, `make lint`, and verified functionality after
updates.
  - No breaking changes introduced.  


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-31 09:50:00 -04:00
ccurme
b4fe1f1ec0 groq: release 0.3.2 (#30570) 2025-03-31 13:29:45 +00:00
Karol Zmorski
c1acf6f756 docs: Add docs for WatsonxToolkit from langchain-ibm (#30340)
**Description:**

Added docs for `WatsonxToolkit` from `langchain-ibm`:
- Sample notebook

Updated provider file: `ibm.mdx`.
2025-03-31 09:18:37 -04:00
ccurme
9213d94057 docs: update cassettes for chat token usage tracking guide (#30558) 2025-03-30 14:57:15 -04:00
ccurme
9c682af8f3 langchain: release 0.3.22 (#30557)
Closes https://github.com/langchain-ai/langchain/issues/30536
2025-03-30 14:48:22 -04:00
ccurme
08796802ca docs: keep tutorial runnable in CI (#30556) 2025-03-30 18:34:05 +00:00
William FH
b075eab3e0 Include delayed inputs in langchain tracer (#30546) 2025-03-28 16:07:22 -07:00
Thommy257
372dc7f991 core[patch]: fix loss of partially initialized variables during prompt composition (#30096)
**Description:**
This PR addresses the loss of partially initialised variables when
composing different prompts. I.e. it allows the following snippet to
run:

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([('system', 'Prompt {x} {y}')]).partial(x='1')
appendix = ChatPromptTemplate.from_messages([('system', 'Appendix {z}')])

(prompt + appendix).invoke({'y': '2', 'z': '3'})
```

Previously, this would have raised a `KeyError`, stating that variable
`x` remains undefined.

**Issue**
References issue #30049

**Todo**
- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-28 20:41:57 +00:00
Koshik Debanath
e7883d5b9f langchain-openai: Support token counting for o-series models in ChatOpenAI (#30542)
Related to #30344

Add support for token counting for o-series models in
`test_token_counts.py`.

* **Update `_MODELS` and `_CHAT_MODELS` dictionaries**
- Add "o1", "o3", and "gpt-4o" to `_MODELS` and `_CHAT_MODELS`
dictionaries.

* **Update token counts**
  - Add token counts for "o1", "o3", and "gpt-4o" models.

---

For more details, open the [Copilot Workspace
session](https://copilot-workspace.githubnext.com/langchain-ai/langchain/pull/30542?shareId=ab208bf7-80a3-4b8d-80c4-2287486fedae).
2025-03-28 16:02:09 -04:00
Eugene Yurtsev
d075ad21a0 core[patch]: specify default event loop scope in pyproject.toml (#30543)
Specify default event loop scope
2025-03-28 19:51:19 +00:00
Ahmed Tammaa
f23c3e2444 text-splitters[patch]: Refactor HTMLHeaderTextSplitter for Enhanced Maintainability and Readability (#29397)
Please see PR #27678 for context

## Overview

This pull request presents a refactor of the `HTMLHeaderTextSplitter`
class aimed at improving its maintainability and readability. The
primary enhancements include simplifying the internal structure by
consolidating multiple private helper functions into a single private
method, thereby reducing complexity and making the codebase easier to
understand and extend. Importantly, all existing functionalities and
public interfaces remain unchanged.

## PR Goals

1. **Simplify Internal Logic**:
- **Consolidation of Private Methods**: The original implementation
utilized multiple private helper functions (`_header_level`,
`_dom_depth`, `_get_elements`) to manage different aspects of HTML
parsing and document generation. This fragmentation increased cognitive
load and potential maintenance overhead.
- **Streamlined Processing**: By merging these functionalities into a
single private method (`_generate_documents`), the class now offers a
more straightforward flow, making it easier for developers to trace and
understand the processing steps. (Thanks to @eyurtsev)

2. **Enhance Readability**:
- **Clearer Method Responsibilities**: With fewer private methods, each
method now has a more focused responsibility. The primary logic resides
within `_generate_documents`, which handles both HTML traversal and
document creation in a cohesive manner.
- **Reduced Redundancy**: Eliminating redundant checks and consolidating
logic reduces the code's verbosity, making it more concise without
sacrificing clarity.

3. **Improve Maintainability**:
- **Easier Debugging and Extension**: A simplified internal structure
allows for quicker identification of issues and easier implementation of
future enhancements or feature additions.
- **Consistent Header Management**: The new implementation ensures that
headers are managed consistently within a single context, reducing the
likelihood of bugs related to header scope and hierarchy.

4. **Maintain Backward Compatibility**:
- **Unchanged Public Interface**: All public methods (`split_text`,
`split_text_from_url`, `split_text_from_file`) and their signatures
remain unchanged, ensuring that existing integrations and usage patterns
are unaffected.
- **Preserved Docstrings**: Comprehensive docstrings are retained,
providing clear documentation for users and developers alike.

## Detailed Changes

1. **Removed Redundant Private Methods**:
- **Eliminated `_header_level`, `_dom_depth`, and `_get_elements`**:
These methods were merged into the `_generate_documents` method,
centralizing the logic for HTML parsing and document generation.

2. **Consolidated Document Generation Logic**:
- **Single Private Method `_generate_documents`**: This method now
handles the entire process of parsing HTML, tracking active headers,
managing document chunks, and yielding `Document` instances. This
consolidation reduces the number of moving parts and simplifies the
overall processing flow.

3. **Simplified Header Management**:
- **Immediate Header Scope Handling**: Headers are now managed within
the traversal loop of `_generate_documents`, ensuring that headers are
added or removed from the active headers dictionary in real-time based
on their DOM depth and hierarchy.
- **Removed `chunk_dom_depth` Attribute**: The need to track chunk DOM
depth separately has been eliminated, as header scopes are now directly
managed within the traversal logic.

4. **Streamlined Chunk Finalization**:
- **Enhanced `finalize_chunk` Function**: The chunk finalization process
has been simplified to directly yield a single `Document` when needed,
without maintaining an intermediate list. This change reduces
unnecessary list operations and makes the logic more straightforward.

5. **Improved Variable Naming and Flow**:
- **Descriptive Variable Names**: Variables such as `current_chunk` and
`node_text` provide clear insights into their roles within the
processing logic.
- **Direct Header Removal Logic**: Headers that are out of scope are
removed immediately during traversal, ensuring that the active headers
dictionary remains accurate and up-to-date.

6. **Preserved Comprehensive Docstrings**:
- **Unchanged Documentation**: All existing docstrings, including
class-level and method-level documentation, remain intact. This ensures
that users and developers continue to have access to detailed usage
instructions and method explanations.

## Testing

All existing test cases from `test_html_header_text_splitter.py` have
been executed against the refactored code. The results confirm that:

- **Functionality Remains Intact**: The splitter continues to accurately
parse HTML content, respect header hierarchies, and produce the expected
`Document` objects with correct metadata.
- **Backward Compatibility is Maintained**: No changes were required in
the test cases, and all tests pass without modifications, demonstrating
that the refactor does not introduce any regressions or alter existing
behaviors.


The usage examples remain fully operational and behave as before, returning
a list of `Document` objects with the expected metadata and content
splits; a minimal sketch follows.
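
A minimal usage sketch of the unchanged public interface (the header mapping
and HTML snippet are illustrative):

```python
from langchain_text_splitters import HTMLHeaderTextSplitter

headers_to_split_on = [("h1", "Header 1"), ("h2", "Header 2")]
splitter = HTMLHeaderTextSplitter(headers_to_split_on)

html = "<h1>Intro</h1><p>Overview.</p><h2>Details</h2><p>Specifics.</p>"
for doc in splitter.split_text(html):
    # Each Document carries the active headers in its metadata
    print(doc.metadata, "|", doc.page_content)
```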

## Conclusion

This refactor achieves a more maintainable and readable codebase by
simplifying the internal structure of the `HTMLHeaderTextSplitter`
class. By consolidating multiple private methods into a single, cohesive
private method, the class becomes easier to understand, debug, and
extend. All existing functionalities are preserved, and comprehensive
tests confirm that the refactor maintains the expected behavior. These
changes align with LangChain’s standards for clean, maintainable, and
efficient code.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-28 15:36:00 -04:00
Christophe Bornet
86beb64b50 docs: Add doc for Vectorize provider (#30436)
This pull request adds documentation and a tutorial for integrating the
[Vectorize](https://vectorize.io/) service with LangChain. The most
important changes include adding a new documentation page for Vectorize
and creating a Jupyter notebook that demonstrates how to use the
Vectorize retriever.

The source code for the langchain-vectorize package can be found
[here](https://github.com/vectorize-io/integrations-python/tree/main/langchain).

Previews:
*
https://langchain-git-fork-cbornet-vectorize-langchain.vercel.app/docs/integrations/providers/vectorize/
*
https://langchain-git-fork-cbornet-vectorize-langchain.vercel.app/docs/integrations/retrievers/vectorize/

Documentation updates:

*
[`docs/docs/integrations/providers/vectorize.mdx`](diffhunk://#diff-7e00d4ce4768f73b4d381a7c7b1f94d138f1b27ebd08e3666b942630a0285606R1-R40):
Added a new documentation page for Vectorize, including an overview of
its features, installation instructions, and a basic usage example.

Tutorial updates:

*
[`docs/docs/integrations/retrievers/vectorize.ipynb`](diffhunk://#diff-ba5bb9a1b4586db7740944b001bcfeadc88be357640ded0c82a329b11d8d6e29R1-R294):
Created a Jupyter notebook tutorial that shows how to set up the
Vectorize environment, create a RAG pipeline, and use the LangChain
Vectorize retriever. The notebook includes steps for account creation,
token generation, environment setup, and pipeline deployment.
2025-03-28 15:25:21 -04:00
omahs
6f8735592b docs,langchain-community: Fix typos in docs and code (#30541)
Fix typos
2025-03-28 19:21:16 +00:00
Agus
47d50f49d9 docs: Add GOAT integration to docs (#30478)
This PR adds:
1. Docs for the GOAT integration 
2. An "Agentic Finance" table to the Tools page that includes GOAT

**Twitter handle**: @0xaguspunk
2025-03-28 15:19:37 -04:00
Shixian Sheng
94a7fd2497 docs: fix broken hyperlinks in fireworks integration package README (#30538)
Fix two broken hyperlinks
2025-03-28 15:18:44 -04:00
Oskar Stark
0d2cea747c docs: streamline LangSmith teasing (#30302)
This can only be reviewed by [hiding
whitespaces](https://github.com/langchain-ai/langchain/pull/30302/files?diff=unified&w=1).

The motivation behind this PR is to get my hands on the docs and make
the LangSmith teasing short and clear.

Right now I don't know how to do it, but this could be an include in the
future.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-28 15:13:22 -04:00
Eugene Yurtsev
dd0faab07e fix types 2025-03-28 14:23:50 -04:00
Eugene Yurtsev
21ab1dc675 Merge branch 'master' of github.com:xzq-xu/langchain into xzq-xu/master 2025-03-28 13:56:49 -04:00
Eugene Yurtsev
22cee5d983 x 2025-03-28 13:56:10 -04:00
Eugene Yurtsev
a14d8b103b Merge branch 'master' into master 2025-03-28 13:53:58 -04:00
Eugene Yurtsev
6d22f40a0b x 2025-03-28 13:51:06 -04:00
Philippe PRADOS
92189c8b31 community[patch]: Handle gray scale images in ImageBlobParser (Fixes 30261 and 29586) (#30493)
Fix [29586](https://github.com/langchain-ai/langchain/issues/29586) and
[30261](https://github.com/langchain-ai/langchain/pull/30261)
2025-03-28 10:15:40 -04:00
小豆豆学长
1f0686db80 community: add netmind integration (#30149)
Co-authored-by: yanrujing <rujing.yan@protagonist-ai.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-27 15:27:04 -04:00
Kyungho Byoun
e6b6c07395 community: add HANA dialect to SQLDatabase (#30475)
This PR includes support for HANA dialect in SQLDatabase, which is a
wrapper class for SQLAlchemy.

Currently, it is not possible to set a schema name when using HANA DB with
LangChain, and no message is shown to the user, which makes it hard to
figure out why the SQL does not work as expected.

Here is the reference document for HANA DB to set schema for the
session.

- [SET SCHEMA Statement (Session
Management)](https://help.sap.com/docs/SAP_HANA_PLATFORM/4fe29514fd584807ac9f2a04f6754767/20fd550375191014b886a338afb4cd5f.html)
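
A minimal sketch of the resulting behavior (the connection URI is hypothetical
and requires the SQLAlchemy HANA dialect to be installed):

```python
from langchain_community.utilities import SQLDatabase

# With HANA dialect support, the schema is applied to the session
# (via SET SCHEMA) instead of being silently ignored.
db = SQLDatabase.from_uri(
    "hana://USER:PASSWORD@hana-host:39015",
    schema="MY_SCHEMA",
)
print(db.get_usable_table_names())
```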
2025-03-27 15:19:50 -04:00
Eugene Yurtsev
1cf91a2386 docs: fix llms-txt (#30528)
* Fix trailing slashes
* Fix chat model integration links
2025-03-27 19:02:44 +00:00
Christophe Bornet
e181d43214 core: Bump ruff version to 0.11 (#30519)
Changes are from the new TC006 rule:
https://docs.astral.sh/ruff/rules/runtime-cast-value/
TC006 is auto-fixed.
2025-03-27 13:01:49 -04:00
ccurme
59908f04d4 fireworks: release 0.2.9 (#30527) 2025-03-27 16:04:20 +00:00
ccurme
05482877be mistralai: release 0.2.10 (#30526) 2025-03-27 16:01:40 +00:00
Andras L Ferenczi
63673b765b Fix: Enable max_retries Parameter in ChatMistralAI Class (#30448)
**partners: Enable max_retries in ChatMistralAI**

**Description**

- This pull request reactivates the retry logic in the
completion_with_retry method of the ChatMistralAI class, restoring the
intended functionality of the previously ineffective max_retries
parameter. New unit test that mocks failed/successful retry calls and an
integration test to confirm end-to-end functionality.

**Issue**
- Closes #30362

**Dependencies**
- No additional dependencies required
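
A minimal sketch of the restored behavior (model name is illustrative):

```python
from langchain_mistralai import ChatMistralAI

# Transient API failures are now retried up to max_retries times
# before an exception is raised.
llm = ChatMistralAI(model="mistral-large-latest", max_retries=3)
llm.invoke("Hello")
```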

Co-authored-by: andrasfe <andrasf94@gmail.com>
2025-03-27 11:53:44 -04:00
Lakindu Boteju
3aa080c2a8 Fix typos in pdfminer and pymupdf documentations (#30513)
This pull request includes fixes in documentation for PDF loaders to
correct the names of the loaders and the required installations. The
most important changes include updating the loader names and
installation instructions in the Jupyter notebooks.

Documentation fixes:

*
[`docs/docs/integrations/document_loaders/pdfminer.ipynb`](diffhunk://#diff-a4a0561cd4a6e876ea34b7182de64a452060b921bb32d37b02e6a7980a41729bL34-R34):
Changed references from `PyMuPDFLoader` to `PDFMinerLoader` and updated
the installation instructions to replace `pymupdf` with `pdfminer`.
[[1]](diffhunk://#diff-a4a0561cd4a6e876ea34b7182de64a452060b921bb32d37b02e6a7980a41729bL34-R34)
[[2]](diffhunk://#diff-a4a0561cd4a6e876ea34b7182de64a452060b921bb32d37b02e6a7980a41729bL63-R63)
[[3]](diffhunk://#diff-a4a0561cd4a6e876ea34b7182de64a452060b921bb32d37b02e6a7980a41729bL330-R330)

*
[`docs/docs/integrations/document_loaders/pymupdf.ipynb`](diffhunk://#diff-8487995f457e33daa2a08fdcff3b42e144eca069eeadfad5651c7c08cce7a5cdL292-R292):
Corrected the loader name from `PDFPlumberLoader` to `PyMuPDFLoader`.
2025-03-27 11:29:11 -04:00
Miguel Grinberg
14b7d790c1 docs: Restore accidentally deleted docs on Elasticsearch strategies (#30521)
- **Description:** Adding back a section of the Elasticsearch
vectorstore documentation that was deleted in commit a72fddbf8d.
The only change I've made is to update the example RRF request, which
was out of date.
2025-03-27 11:27:20 -04:00
ccurme
0b2244ea88 Revert "docs: restore some content to Elasticsearch integration page" (#30523)
Reverts langchain-ai/langchain#30522 in favor of
https://github.com/langchain-ai/langchain/pull/30521.
2025-03-27 15:12:36 +00:00
ccurme
80064893c1 docs: restore some content to Elasticsearch integration page (#30522)
https://github.com/langchain-ai/langchain/pull/24858 standardized vector
store integration pages, but deleted some content.

Here we merge some of the old content back in. We use this version as a
reference:
2c798622cd/docs/docs/integrations/vectorstores/elasticsearch.ipynb
2025-03-27 11:07:19 -04:00
Keiichi Hirobe
956b09f468 core[patch]: stop deleting records with "scoped_full" when doc is empty (#30520)
Fix a bug that causes `scoped_full` in index to delete records when there are no input docs.
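
A minimal sketch of the fixed behavior, using in-memory components from
langchain-core (the namespace and embedding size are illustrative):

```python
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.indexing import InMemoryRecordManager, index
from langchain_core.vectorstores import InMemoryVectorStore

record_manager = InMemoryRecordManager(namespace="demo")
vector_store = InMemoryVectorStore(DeterministicFakeEmbedding(size=8))

# With the fix, indexing an empty batch under "scoped_full" cleanup
# no longer deletes previously indexed records.
result = index(
    [],
    record_manager,
    vector_store,
    cleanup="scoped_full",
    source_id_key="source",
)
print(result)  # num_added=0 and, crucially, num_deleted=0
```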
2025-03-27 11:04:34 -04:00
Christophe Bornet
b28a474e79 core[patch]: Add ruff rules for PLW (Pylint Warnings) (#29288)
See https://docs.astral.sh/ruff/rules/#warning-w_1

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-27 10:26:12 +00:00
xzq.xu
92dc3f7341 format test lint passed 2025-03-27 13:44:59 +08:00
xzq.xu
d0a9808148 modify test name 2025-03-27 13:34:51 +08:00
xzq.xu
ed2428f902 add a unit test 2025-03-27 12:43:16 +08:00
David Sánchez Sánchez
75823d580b community: fix perplexity response parameters not being included in model response (#30440)
This pull request includes enhancements to the `perplexity.py` file in
the `chat_models` module, focusing on improving the handling of
additional keyword arguments (`additional_kwargs`) in message processing
methods. Additionally, new unit tests have been added to ensure the
correct inclusion of citations, images, and related questions in the
`additional_kwargs`.

Issue: resolves https://github.com/langchain-ai/langchain/issues/30439

Enhancements to `perplexity.py`:

*
[`libs/community/langchain_community/chat_models/perplexity.py`](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fL208-L212):
Modified the `_convert_delta_to_message_chunk`, `_stream`, and
`_generate` methods to handle `additional_kwargs`, which include
citations, images, and related questions.
[[1]](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fL208-L212)
[[2]](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fL277-L286)
[[3]](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fR324-R331)

New unit tests:

*
[`libs/community/tests/unit_tests/chat_models/test_perplexity.py`](diffhunk://#diff-dab956d79bd7d17a0f5dea3f38ceab0d583b43b63eb1b29138ee9b6b271ba1d9R119-R275):
Added new tests `test_perplexity_stream_includes_citations_and_images`
and `test_perplexity_stream_includes_citations_and_related_questions` to
verify that the `stream` method correctly includes citations, images,
and related questions in the `additional_kwargs`.
2025-03-26 22:28:08 -04:00
Eugene Yurtsev
7664874a0d docs: llms-txt (#30506)
First just verifying it's included in the manifest
2025-03-26 22:21:59 -04:00
Adeel Ehsan
d7d0bca2bc docs: add vectara to libs package yml (#30504) 2025-03-26 16:47:53 -04:00
ccurme
3781144710 docs: update doc on token usage tracking (#30505) 2025-03-26 16:13:45 -04:00
ccurme
a9b1e1b177 openai: release 0.3.11 (#30503) 2025-03-26 19:24:37 +00:00
ccurme
8119a7bc5c openai[patch]: support streaming token counts in AzureChatOpenAI (#30494)
When OpenAI originally released `stream_options` to enable token usage
during streaming, it was not supported in AzureOpenAI. It is now
supported.

Like the [OpenAI
SDK](f66d2e6fdc/src/openai/resources/completions.py (L68)),
ChatOpenAI does not return usage metadata during streaming by default
(which adds an extra chunk to the stream). The OpenAI SDK requires users
to pass `stream_options={"include_usage": True}`. ChatOpenAI implements
a convenience argument `stream_usage: Optional[bool]`, and an attribute
`stream_usage: bool = False`.

Here we extend this to AzureChatOpenAI by moving the `stream_usage`
attribute and `stream_usage` kwarg (on `_(a)stream`) from ChatOpenAI to
BaseChatOpenAI.
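
A minimal sketch (deployment name and API version are illustrative; endpoint
and key are read from the standard AZURE_OPENAI_* environment variables):

```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="gpt-4o-mini",
    api_version="2024-08-01-preview",
    stream_usage=True,  # emit a final chunk carrying usage_metadata
)

usage = None
for chunk in llm.stream("Write a haiku about streams."):
    if chunk.usage_metadata:
        usage = chunk.usage_metadata
print(usage)
```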

---

Additional consideration: we must be sensitive to the number of users
using BaseChatOpenAI to interact with other APIs that do not support the
`stream_options` parameter.

Suppose OpenAI in the future updates the default behavior to stream
token usage. Currently, BaseChatOpenAI only passes `stream_options` if
`stream_usage` is True, so there would be no way to disable this new
default behavior.

To address this, we could update the `stream_usage` attribute to
`Optional[bool] = None`, but this is technically a breaking change (as
currently values of False are not passed to the client). IMO: if / when
this change happens, we could accompany it with this update in a minor
bump.

--- 

Related previous PRs:
- https://github.com/langchain-ai/langchain/pull/22628
- https://github.com/langchain-ai/langchain/pull/22854
- https://github.com/langchain-ai/langchain/pull/23552

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-26 15:16:37 -04:00
Adeel Ehsan
56629ed87b docs: updated the docs for vectara (#30398)
**Description:** Vectara has moved to a LangChain partner package, and
the docs are updated accordingly.
2025-03-26 15:02:21 -04:00
ccurme
f68eaab44f tests: release 0.3.17 (#30502) 2025-03-26 18:56:54 +00:00
Louis Auneau
0b532a4ed0 community: Azure Document Intelligence parser features not available fixed (#30370)
- **Description:** The Azure Document Intelligence OCR solution has a
*features* parameter that enables capabilities such as high-resolution
document analysis and key-value pair extraction. In the LangChain parser,
it could be provided as an `analysis_feature` parameter to the
constructor, which passed it on to the `DocumentIntelligenceClient`.
However, according to the `DocumentIntelligenceClient` [API
Reference](https://learn.microsoft.com/en-us/python/api/azure-ai-documentintelligence/azure.ai.documentintelligence.documentintelligenceclient?view=azure-python),
this is not a valid constructor parameter. It was therefore removed and
is instead stored as a parser property that is used in
`begin_analyze_document`'s `features` parameter (see [API
Reference](https://learn.microsoft.com/en-us/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.documentanalysisclient?view=azure-python#azure-ai-formrecognizer-documentanalysisclient-begin-analyze-document)).
I also removed the check for "Supported features" since all features are
supported out-of-the-box, and I did not check whether the provided `str`
corresponds to the Azure package's enumeration of features, since the
`ValueError` raised when creating the enumeration object is explicit
enough. One caveat: some features are not supported for some kinds of
documents. This is documented by Microsoft, and the resulting exceptions
are also explicit.
- **Issue:** N/A
- **Dependencies:** No
- **Twitter handle:** @Louis___A

---------

Co-authored-by: Louis Auneau <louis@handshakehealth.co>
2025-03-26 14:40:14 -04:00
Really Him
fbd2e10703 docs: hide jsx in llm chain tutorial (#30187)
## **Description:** 
The Jupyter notebooks in the docs section are extremely useful and
critical for widespread adoption of LangChain amongst new developers.
However, because they are also converted to MDX and used to build the
HTML for the Docusaurus site, they contain JSX code that degrades
readability when opened in a "notebook" setting (local notebook server,
google colab, etc.). For instance, here we see the website, with a nice
React tab component for installation instructions (`pip` vs `conda`):

![Screenshot 2025-03-07 at 2 07
15 PM](https://github.com/user-attachments/assets/a528d618-f5a0-4d2e-9aed-16d4b8148b5a)

Now, here is the same notebook viewed in colab:

![Screenshot 2025-03-07 at 2 08
41 PM](https://github.com/user-attachments/assets/87acf5b7-a3e0-46ac-8126-6cac6eb93586)

Note that the text following "To install LangChain run:" contains
snippets of JSX code that is (i) confusing, (ii) bad for readability,
(iii) potentially misleading for a novice developer, who might take it
literally to mean that "to install LangChain I should run `import Tabs
from...`" and then an ill-formed command which mixes the `pip` and
`conda` installation instructions.

Ideally, we would like to have a system that presents a
similar/equivalent UI when viewing the notebooks on the documentation
site, or when interacting with them in a notebook setting - or, at a
minimum, we should not present ill-formed JSX snippets to someone trying
to execute the notebooks. As the documentation itself states, running
the notebooks yourself is a great way to learn the tools. Therefore,
these distracting and ill-formed snippets are contrary to that goal.

## **Fixes:**
* Comment out the JSX code inside the notebook
`docs/tutorials/llm_chain` with a special directive `<!-- HIDE_IN_NB`
(closed with `HIDE_IN_NB -->`). This makes the JSX code "invisible" when
viewed in a notebook setting.
* Add a custom preprocessor that runs process_cell and just erases these
comment strings. This makes sure they are rendered when converted to
MDX.
* Minor tweak: Refactor some of the Markdown instructions into an
executable codeblock for better experience when running as a notebook.
* Minor tweak: Optionally try to get the environment variables from a
`.env` file in the repo so the user doesn't have to enter it every time.
Depends on the user installing `python-dotenv` and adding their own
`.env` file.
* Add an environment variable for "LANGSMITH_PROJECT"
(default="default"), per the LangSmith docs, so a local user can target
a specific project in their LangSmith account.

**NOTE:** If this PR is approved, and the maintainers agree with the
general goal of aligning the notebook execution experience and the doc
site UI, I would plan to implement this on the rest of the JSX snippets
that are littered in the notebooks.

**NOTE:** I wasn't able to/don't know how to run the linkcheck Makefile
commands.

---------

Co-authored-by: Really Him <hesereallyhim@proton.me>
2025-03-26 14:22:33 -04:00
Philippe PRADOS
8e5d2a44ce community[patch]: update PyPDFParser to take into account filters returned as arrays (#30489)
The image parsing is generating a bug as the the extracted objects for
the /Filter returns sometimes an array, sometimes a string.

Fix [Issue
30098](https://github.com/langchain-ai/langchain/issues/30098)
2025-03-26 14:16:54 -04:00
ccurme
422ba4cde5 infra: handle flaky tests (#30501) 2025-03-26 13:28:56 -04:00
ccurme
9a80be7bb7 core[patch]: release 0.3.49 (#30500) 2025-03-26 13:26:32 -04:00
ccurme
299b222c53 mistral[patch]: check types in adding model_name to response_metadata (#30499) 2025-03-26 16:30:09 +00:00
ccurme
22d1a7d7b6 standard-tests[patch]: require model_name in response_metadata if returns_usage_metadata (#30497)
We are implementing a token-counting callback handler in
`langchain-core` that is intended to work with all chat models
supporting usage metadata. The callback will aggregate usage metadata by
model. This requires responses to include the model name in its
metadata.

To support this, if a model `returns_usage_metadata`, we check that it
includes a string model name in its `response_metadata` in the
`"model_name"` key.

More context: https://github.com/langchain-ai/langchain/pull/30487
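
Roughly the property being enforced, as a sketch (not the literal test code;
`check_model_name` is a hypothetical helper):

```python
from langchain_core.language_models.chat_models import BaseChatModel


def check_model_name(chat_model: BaseChatModel) -> None:
    response = chat_model.invoke("Hello")
    if response.usage_metadata is not None:
        model_name = response.response_metadata.get("model_name")
        assert isinstance(model_name, str) and model_name
```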
2025-03-26 12:20:53 -04:00
Ante Javor
20f82502e5 Community: Add Memgraph integration docs (#30457)
**Description:** 
Since we just implemented
[langchain-memgraph](https://pypi.org/project/langchain-memgraph/)
integration, we are adding basic docs to [your site based on this
comment](https://github.com/langchain-ai/langchain/pull/30197#pullrequestreview-2671616410)
from @ccurme .
   
 **Twitter handle:**
 [@memgraphdb](https://x.com/memgraphdb)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-26 11:58:09 -04:00
xzq.xu
913c8b71d9 format import 2025-03-26 23:34:38 +08:00
xzq.xu
7e3dea5db8 add a new-line 2025-03-26 23:32:07 +08:00
xzq.xu
d602141ab1 remove unused e 2025-03-26 23:10:41 +08:00
xzq.xu
dd9031fc82 _prep_run_args,tool_input copy, Exception 2025-03-26 23:06:43 +08:00
xzq.xu
3382b0d8ea _prep_run_args,tool_input copy 2025-03-26 22:56:32 +08:00
xzq.xu
e90abce577 Merge remote-tracking branch 'origin/master' 2025-03-26 22:42:15 +08:00
xzq.xu
c127ae9d26 fix the format 2025-03-26 22:41:58 +08:00
xzq.xu
65ecc22606 # Fix: Prevent run_manager from being added to state object 2025-03-26 22:36:31 +08:00
ccurme
7e62e3a137 core[patch]: store model names on usage callback handler (#30487)
So we avoid mingling tokens from different models.
2025-03-25 21:26:09 -04:00
ccurme
32827765bf core[patch]: mark usage callback handler as beta (#30486) 2025-03-25 23:25:57 +00:00
Eugene Yurtsev
9f345d64fd core[patch]: Remove old accidental commit (#30483)
Remove commented out file that was accidentally added

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-25 15:37:20 -07:00
ccurme
4b9e2e51f3 core[patch]: add token counting callback handler (#30481)
Stripped-down version of
[OpenAICallbackHandler](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/callbacks/openai_info.py)
that just tracks `AIMessage.usage_metadata`.

```python
from langchain_core.callbacks import get_usage_metadata_callback
from langgraph.prebuilt import create_react_agent

def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return "It's sunny."

tools = [get_weather]
agent = create_react_agent("openai:gpt-4o-mini", tools)

with get_usage_metadata_callback() as cb:
    result = await agent.ainvoke({"messages": "What's the weather in Boston?"})
    print(cb.usage_metadata)
```
2025-03-25 18:16:39 -04:00
pulvedu
1d2b1d8e5e docs: fix typos in Tavily Docs (#30484)
Small changes to docs

---------

Co-authored-by: pulvedu <dustin@tavily.com>
2025-03-25 18:16:09 -04:00
Christian Jung
19104db7c5 Docs: Fix typo in cookbook (#30485)
- **Description:** fix typo
2025-03-25 18:15:29 -04:00
Eugene Yurtsev
0acca6b9c8 core[patch]: Fix handling of title when tool schema is specified manually via JSONSchema (#30479)
Fix issue: https://github.com/langchain-ai/langchain/issues/30456
2025-03-25 15:15:24 -04:00
Ben Chambers
c5e42a4027 community: deprecate graph vector store (#30328)
- **Description:** mark GraphVectorStore `@deprecated`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-25 13:52:54 +00:00
Ian Muge
a8ce63903d community: Add edge properties to the gremlin graph schema (#30449)
Description: Extend the Gremlin graph schema to include edge
properties, grouped by their triples, i.e. `inVLabel` and `outVLabel`.
This should give more context when crafting queries to run against a
Gremlin graph DB.
2025-03-24 19:03:01 -04:00
ccurme
b60e6f6efa community[patch]: update API ref for AmazonTextractPDFParser (#30468) 2025-03-24 23:02:52 +00:00
David Sánchez Sánchez
3ba0d28d8e community: update perplexity docstring (#30451)
This pull request includes extensive documentation updates for the
`ChatPerplexity` class in the
`libs/community/langchain_community/chat_models/perplexity.py` file. The
changes provide detailed setup instructions, key initialization
arguments, and usage examples for various functionalities of the
`ChatPerplexity` class.

Documentation improvements:

* Added setup instructions for installing the `openai` package and
setting the `PPLX_API_KEY` environment variable.
* Documented key initialization arguments for completion parameters and
client parameters, including `model`, `temperature`, `max_tokens`,
`streaming`, `pplx_api_key`, `request_timeout`, and `max_retries`.
* Provided examples for instantiating the `ChatPerplexity` class,
invoking it with messages, using structured output, invoking with
perplexity-specific parameters, streaming responses, and accessing token
usage and response metadata.
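
A minimal sketch of the documented initialization (the model name is
illustrative, and `PPLX_API_KEY` is assumed to be set in the environment):

```python
from langchain_community.chat_models import ChatPerplexity

chat = ChatPerplexity(
    model="llama-3.1-sonar-small-128k-online",
    temperature=0.7,
    max_tokens=512,
    request_timeout=30,
    max_retries=2,
)
response = chat.invoke("Briefly, what is LangChain?")
print(response.content, response.response_metadata)
```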
2025-03-24 15:01:02 -04:00
Vadym Barda
97dec30eea docs[patch]: update trim_messages doc (#30462) 2025-03-24 18:50:48 +00:00
ccurme
c2dd8d84ff infra[patch]: remove pyspark from langchain-community extended testing requirements (#30466) 2025-03-24 14:41:54 -04:00
ccurme
aa30d2d57f standard-tests: release 0.3.16 (#30464) 2025-03-24 18:35:12 +00:00
ccurme
b09e7c125c cli: use pytest-watcher (#30465)
pytest-watch is no longer maintained.
2025-03-24 18:06:31 +00:00
David Sánchez Sánchez
d7b13e12ee community: update perplexity documentation (#30450)
This pull request includes updates to the
`docs/docs/integrations/chat/perplexity.ipynb` file to enhance the
documentation for `ChatPerplexity`. The changes focus on demonstrating
the use of Perplexity-specific parameters and supporting structured
outputs for Tier 3+ users.

Enhancements to documentation:

* Added a new markdown cell explaining the use of Perplexity-specific
parameters through the `ChatPerplexity` class, including parameters like
`search_domain_filter`, `return_images`, `return_related_questions`, and
`search_recency_filter` using the `extra_body` parameter.
* Added a new code cell demonstrating how to invoke `ChatPerplexity`
with the `extra_body` parameter to filter search recency.

Support for structured outputs:

* Added a new markdown cell explaining that `ChatPerplexity` supports
structured outputs for Tier 3+ users.
* Added a new code cell demonstrating how to use `ChatPerplexity` with
structured outputs by defining a `BaseModel` class and invoking the chat
with structured output.
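
A minimal sketch of the `extra_body` usage described above (model name and
filter value are illustrative):

```python
from langchain_community.chat_models import ChatPerplexity

chat = ChatPerplexity(model="llama-3.1-sonar-small-128k-online", temperature=0.7)

# Perplexity-specific parameters are passed through extra_body
response = chat.invoke(
    "What were the top tech stories this week?",
    extra_body={"search_recency_filter": "week"},
)
print(response.content)
```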

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-24 13:49:59 -04:00
ccurme
50ec4a1a4f openai[patch]: attempt to make test less flaky (#30463) 2025-03-24 17:36:36 +00:00
ccurme
8486e0ae80 openai[patch]: bump openai sdk (#30461)
[New required
field](https://github.com/openai/openai-python/pull/2223/files#diff-530fd17eb1cc43440c82630df0ddd9b0893cf14b04065a95e6eef6cd2f766a44R26)
for `ResponseUsage` released in 1.66.5.
2025-03-24 12:10:00 -04:00
ccurme
cbbc968903 openai: release 0.3.10 (#30460) 2025-03-24 15:37:53 +00:00
ccurme
ed5e589191 openai[patch]: support multi-turn computer use (#30410)
Here we accept ToolMessages of the form
```python
ToolMessage(
    content=<representation of screenshot> (see below),
    tool_call_id="abc123",
    additional_kwargs={"type": "computer_call_output"},
)
```
and translate them to `computer_call_output` items for the Responses
API.

We also propagate `reasoning_content` items from AIMessages.

## Example

### Load screenshots
```python
import base64

def load_png_as_base64(file_path):
    with open(file_path, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())
        return encoded_string.decode('utf-8')

screenshot_1_base64 = load_png_as_base64("/path/to/screenshot/of/application.png")
screenshot_2_base64 = load_png_as_base64("/path/to/screenshot/of/desktop.png")
```

### Initial message and response
```python
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="computer-use-preview",
    model_kwargs={"truncation": "auto"},
)

tool = {
    "type": "computer_use_preview",
    "display_width": 1024,
    "display_height": 768,
    "environment": "browser"
}
llm_with_tools = llm.bind_tools([tool])

input_message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": (
                "Click the red X to close and reveal my Desktop. "
                "Proceed, no confirmation needed."
            )
        },
        {
            "type": "input_image",
            "image_url": f"data:image/png;base64,{screenshot_1_base64}",
        }
    ]
)

response = llm_with_tools.invoke(
    [input_message],
    reasoning={
        "generate_summary": "concise",
    },
)
response.additional_kwargs["tool_outputs"]
```

### Construct ToolMessage
```python
tool_call_id = response.additional_kwargs["tool_outputs"][0]["call_id"]

tool_message = ToolMessage(
    content=[
        {
            "type": "input_image",
            "image_url": f"data:image/png;base64,{screenshot_2_base64}"
        }
    ],
    #  content=f"data:image/png;base64,{screenshot_2_base64}",  # <-- also acceptable
    tool_call_id=tool_call_id,
    additional_kwargs={"type": "computer_call_output"},
)
```

### Invoke again
```python
messages = [
    input_message,
    response,
    tool_message,
]

response_2 = llm_with_tools.invoke(
    messages,
    reasoning={
        "generate_summary": "concise",
    },
)
```
2025-03-24 15:25:36 +00:00
Vadym Barda
7bc50730aa core[patch]: release 0.3.48 (#30458) 2025-03-24 09:48:03 -04:00
Mohammad Mohtashim
33f1ab1528 Youtube Loader load method Fixed (#30314)
- **Description:** Fixed the `YoutubeLoader` loading method not
returning the correct object
- **Issue:** #30309

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-23 14:48:03 -04:00
Simon Paredes
df4448dfac langchain-groq: Add response metadata when streaming (#30379)
- **Description:** Add missing `model_name` and `system_fingerprint`
metadata when streaming.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-23 14:34:41 -04:00
Changyong Um
e2d9fe766f community[tool]: Integrate a tool for the naver_search (#30392)
Hello!
I have reopened a pull request for tool integration.
Please refer to the previous
[PR](https://github.com/langchain-ai/langchain/pull/30248).

I understand that for the tool integration, a separate package should be
created, and only the documentation should be added under docs/docs/. If
there are any other procedures, please let me know.


[langchain-naver-community](https://github.com/e7217/langchain-naver-community)

cc: @ccurme

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-23 14:05:24 -04:00
Jonathan Feng
3848a1371d langchain-contextual: update provider documentation and add reranker documentation (#30415)
Hi @ccurme!

Thanks so much for helping with getting the Contextual documentation
merged last time. We added the reranker to our provider's documentation!
Please let me know if there's any issues with it! Would love to also
work with your team on an announcement for this! 🙏

- **Description:** updates contextual provider documentation to include
information about our reranker, also includes documentation for
contextual's reranker in the retrievers section
- **Twitter handle:** https://x.com/ContextualAI/highlights

Docs have been added.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-22 18:09:09 -04:00
ccurme
d867afff1c docs: update package table ordering (#30437)
Update download counts (only impacts ordering, counts in rendered page
are updated automatically).
2025-03-22 18:07:08 -04:00
Brandon Luu
bbbd4e1db8 docs: Update VectorStoreTab vector store initializations (#30413)
Description: Update vector store tab inits to match either the docs or
api_ref (whichever was more comprehensive)

List of changes per vector stores:

- In-memory
  - no change
- AstraDB
  - match to docs - docs/api_refs match (excluding embeddings)
- Chroma
  - match to docs - api_refs is less descriptive
- FAISS
  - match to docs - docs/api_refs match (excluding embeddings)
- Milvus
- match to docs to use Milvus Lite with Flat index - api_refs does not
have index_param for generalization
- MongoDB
  - match to docs - api_refs are sparser
- PGVector
  - match to api_ref
  - changed to include docker cmd directly in code
- docs/api_ref has comment to view docker command in separate code block
- Pinecone
  - match to api_refs - docs have code dispersed
- Qdrant
  - match to api_ref - docs has size=3072, api_ref has size=1536

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-22 17:29:45 -04:00
Matthew Farrellee
e7032901c3 langchain-tests: allow test_serdes for packages outside the default valid namespaces (#30343)
**Description:**

A third-party package not listed in the default valid namespaces cannot
pass test_serdes because load() does not allow extending
valid_namespaces.

test_serdes will fail with:
ValueError: Invalid namespace: {'lc': 1, 'type': 'constructor', 'id':
['langchain_other', 'chat_models', 'ChatOther'], 'kwargs':
{'model_name': '...', 'api_key': '...'}, 'name': 'ChatOther'}

This change has test_serdes automatically extend valid_namespaces based
on the namespace of the ChatModel under test.
2025-03-22 17:27:39 -04:00
Jiwon Kang
699475a01d community: uuidv1 is unsafe (#30432)
`this_row_id` previously used UUID v1. However, since UUID v1 can be
predicted when the MAC address and timestamp are known, it poses a
potential security risk. It has therefore been changed to UUID v4.
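
The change in a nutshell:

```python
import uuid

# UUID v1 mixes the host MAC address with a timestamp, so values can be
# predicted by anyone who knows both.
old_row_id = uuid.uuid1()

# UUID v4 is built from random bits instead.
new_row_id = uuid.uuid4()
```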
2025-03-22 15:27:49 -04:00
Dhruvajyoti Sarma
31551dab40 feature: added warning when duckdb is used as a vectorstore without pandas (#30435)
added warning when duckdb is used as a vectorstore without pandas being
installed (currently used for similarity search result processing)

- **Issue:** #29933
- **Dependencies:** None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-22 19:27:21 +00:00
ccurme
e81b82ee0b docs: update cassettes (#30434)
Following updates to `draw_mermaid_png`
2025-03-22 12:57:36 -04:00
ccurme
6484635ac3 docs: update cassettes for response metadata guide (#30431)
As of langchain-groq 0.3 ChatGroq requires a model name.

Also update other models.
2025-03-22 07:52:08 -04:00
Cesar Sanz
5383abfeee Fix incorrect import path for AzureAIChatCompletionsModel (#30417)
Fixes #30416

Correct the import path for `AzureAIChatCompletionsModel` in the
`_init_chat_model_helper` function.

* Update the import statement in
`libs/langchain/langchain/chat_models/base.py` to `from
langchain_azure_ai.chat_models import AzureAIChatCompletionsModel`.

---

For more details, open the [Copilot Workspace
session](https://copilot-workspace.githubnext.com/langchain-ai/langchain/pull/30417?shareId=6ff6d5de-e3d1-4972-8d24-5e74838e9945).
2025-03-22 07:44:51 -04:00
Misakar
7750ad588b community:ChatLiteLLM support output reasoning content (#30430) 2025-03-22 07:43:33 -04:00
Adrián Panella
b75573e858 core: add tool_call exclusion in filter_message (#30289)
Extends functionality to allow filtering out pairs of tool calls (AI
message + tool message).
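
A minimal sketch, assuming the new argument is named `exclude_tool_calls`:

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    ToolMessage,
    filter_messages,
)

messages = [
    HumanMessage("What's 2 + 2?"),
    AIMessage("", tool_calls=[{"name": "add", "args": {"a": 2, "b": 2}, "id": "call_1"}]),
    ToolMessage("4", tool_call_id="call_1"),
    AIMessage("2 + 2 = 4"),
]

# Drops the tool-calling AI message together with its matching ToolMessage
filtered = filter_messages(messages, exclude_tool_calls=True)
```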

---------

Co-authored-by: vbarda <vadym@langchain.dev>
2025-03-21 23:05:29 +00:00
Vadym Barda
673ec00030 docs[patch]: add warning to token counter docstring (#30426) 2025-03-21 18:59:40 -04:00
Adrián Panella
3933a4abc3 core(mermaid): allow greater customization (#29939)
Adds greater style customization by allowing a custom frontmatter
config. This allows to set a `theme` and `look` or to adjust theme by
setting `themeVariables`

Example:

```python

node_colors = NodeStyles(
    default="fill:#e2e2e2,line-height:1.2,stroke:#616161",
    first="fill:#cfeab8,fill-opacity:0",
    last="fill:#eac3b8",
)

frontmatter_config = {
    "config": {
        "theme": "neutral",
        "look": "handDrawn"
    }
}

graph.get_graph().draw_mermaid_png(node_colors=node_colors, frontmatter_config=frontmatter_config)
```


![image](https://github.com/user-attachments/assets/11b56d30-3be2-482f-8432-3ce704a09552)

---------

Co-authored-by: vbarda <vadym@langchain.dev>
2025-03-21 18:25:26 -04:00
Vadym Barda
07823cd41c core[patch]: optimize trim_messages (#30327)
Refactored w/ Claude

Up to 20x speedup! (with theoretical max improvement of `O(n / log n)`)
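
The public API is unchanged; a minimal sketch (using `len` as a stand-in
token counter to keep the example self-contained):

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)

messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("Hi!"),
    AIMessage("Hello! How can I help?"),
    HumanMessage("Summarize our chat."),
]

# Keep the most recent messages that fit the budget, always retaining
# the system message; each message counts as one "token" here.
trimmed = trim_messages(
    messages,
    max_tokens=3,
    token_counter=len,
    strategy="last",
    include_system=True,
)
```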
2025-03-21 17:08:26 -04:00
ccurme
b78ae7817e openai[patch]: trace strict in structured_output_kwargs (#30425) 2025-03-21 14:37:28 -04:00
axiangcoding
428de88398 docs: Update a note about how to track azure openai's token usage when streaming (#30409)
- **Description:** Update a note about how to track azure openai's token
usage when streaming
  - **Issue:** #30390 
  - **Dependencies:** None
  - **Twitter handle:** None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-21 14:18:50 -04:00
ccurme
1de7fa8f3a Revert "deepseek: temporarily bypass tests" (#30424)
Reverts langchain-ai/langchain#30423
2025-03-21 17:14:31 +00:00
ccurme
c74dfff836 deepseek: temporarily bypass tests (#30423)
Deepseek infra is not stable enough to get through integration tests.

Previous two attempts had two tests time out, they both pass locally.
2025-03-21 17:08:35 +00:00
ccurme
7147903724 deepseek: release 0.1.3 (#30422) 2025-03-21 16:39:50 +00:00
Andras L Ferenczi
b5f49df86a partner: ChatDeepSeek on openrouter not returning reasoning (#30240)
Deepseek model does not return reasoning when hosted on openrouter
(Issue [30067](https://github.com/langchain-ai/langchain/issues/30067))

the following code did not return reasoning:

```python
llm = ChatDeepSeek( model = 'deepseek/deepseek-r1:nitro', api_base="https://openrouter.ai/api/v1", api_key=os.getenv("OPENROUTER_API_KEY")) 
messages = [
    {"role": "system", "content": "You are an assistant."},
    {"role": "user", "content": "9.11 and 9.8, which is greater? Explain the reasoning behind this decision."}
]
response = llm.invoke(messages, extra_body={"include_reasoning": True})
print(response.content)
print(f"REASONING: {response.additional_kwargs.get('reasoning_content', '')}")
print(response)
```

The fix is to extract reasoning from
response.choices[0].message["model_extra"] and from
choices[0].delta["reasoning"], and place it in the response's
additional_kwargs. The change is really just the addition of a couple of
one-sentence if statements.

---------

Co-authored-by: andrasfe <andrasf94@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-21 16:35:37 +00:00
Vadym Barda
4852ab8d0a core[patch]: more tests for trim_messages (#30421) 2025-03-21 16:19:52 +00:00
ccurme
e8e3b2bfae ollama: release 0.3.0 (#30420) 2025-03-21 15:50:08 +00:00
Jojo
8f300740ed docs: fix several typos in docs/docs/how_to/split_html.ipynb (#30407)
Fix several typos in docs/docs/how_to/split_html.ipynb
* `structered` should be `structured`
* `signifcant` should be `significant`
* `seperator` should be `separator`
2025-03-21 11:46:26 -04:00
Jojo
c77ee99980 docs: fix typo in chat_history.ipynb (#30406)
`peristence` should be `persistence`
2025-03-21 11:45:52 -04:00
Jojo
f657b19a24 docs: Fix typo in chat_history.ipynb (#30405)
`repsonse` should be `response`
2025-03-21 11:45:31 -04:00
Bob Merkus
5700646cc5 ollama: add reasoning model support (e.g. deepseek) (#29689)
# Description
This PR adds reasoning model support for `langchain-ollama` by
extracting reasoning token blocks, like those used in deepseek. It was
inspired by
[ollama-deep-researcher](https://github.com/langchain-ai/ollama-deep-researcher),
specifically the parsing of [thinking
blocks](6d1aaf2139/src/assistant/graph.py (L91)):
```python
  # TODO: This is a hack to remove the <think> tags w/ Deepseek models 
  # It appears very challenging to prompt them out of the responses 
  while "<think>" in running_summary and "</think>" in running_summary:
      start = running_summary.find("<think>")
      end = running_summary.find("</think>") + len("</think>")
      running_summary = running_summary[:start] + running_summary[end:]
```

This notes that it is very hard to remove the reasoning block from
prompting, but we actually want the model to reason in order to increase
model performance. This implementation extracts the thinking block, so
the client can still expect a proper message to be returned by
`ChatOllama` (and use the reasoning content separately when desired).

This implementation takes the same approach as
[ChatDeepseek](5d581ba22c/libs/partners/deepseek/langchain_deepseek/chat_models.py (L215)),
which adds the reasoning content to
chunk.additional_kwargs.reasoning_content;
```python
  if hasattr(response.choices[0].message, "reasoning_content"):  # type: ignore
      rtn.generations[0].message.additional_kwargs["reasoning_content"] = (
          response.choices[0].message.reasoning_content  # type: ignore
      )
```

This should probably be handled upstream in ollama + ollama-python, but
this seems like a reasonably effective solution. This is a standalone
example of what is happening:

```python
async def deepseek_message_astream(
    llm: BaseChatModel,
    messages: list[BaseMessage],
    config: RunnableConfig | None = None,
    *,
    model_target: str = "deepseek-r1",
    **kwargs: Any,
) -> AsyncIterator[BaseMessageChunk]:
    """Stream responses from Deepseek models, filtering out <think> tags.

    Args:
        llm: The language model to stream from
        messages: The messages to send to the model

    Yields:
        Filtered chunks from the model response
    """
    # check if the model is deepseek based
    if (llm.name and model_target not in llm.name) or (hasattr(llm, "model") and model_target not in llm.model):
        async for chunk in llm.astream(messages, config=config, **kwargs):
            yield chunk
        return

    # Yield with a buffer, upon completing the <think></think> tags, move them to the reasoning content and start over
    buffer = ""
    async for chunk in llm.astream(messages, config=config, **kwargs):
        # start or append
        if not buffer:
            buffer = chunk.content
        else:
            buffer += chunk.content if hasattr(chunk, "content") else chunk

        # Process buffer to remove <think> tags
        if "<think>" in buffer or "</think>" in buffer:
            if hasattr(chunk, "tool_calls") and chunk.tool_calls:
                raise NotImplementedError("tool calls during reasoning should be removed?")
            if "<think>" in chunk.content or "</think>" in chunk.content:
                continue
            chunk.additional_kwargs["reasoning_content"] = chunk.content
            chunk.content = ""
        # upon block completion, reset the buffer
        if "<think>" in buffer and "</think>" in buffer:
            buffer = ""
        yield chunk

```

# Issue
Integrating reasoning models (e.g. deepseek-r1) into existing
LangChain-based workflows is hard due to the thinking blocks that are
included in the message contents. To avoid this, we could match the
`ChatOllama` integration with `ChatDeepseek` to return the reasoning
content inside `message.additional_kwargs.reasoning_content` instead.

# Dependencies
None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-21 15:44:54 +00:00
ccurme
d8145dda95 xai: release 0.2.2 (#30403) 2025-03-20 20:25:16 +00:00
ccurme
e194902994 mistral: release 0.2.9 (#30402) 2025-03-20 20:22:24 +00:00
ccurme
49466ec9ca groq: release 0.3.1 (#30401) 2025-03-20 20:19:49 +00:00
ccurme
db1e340387 fireworks: release 0.2.8 (#30400) 2025-03-20 16:15:51 -04:00
ccurme
238f7fb345 docs: add links in Writer provider page (#30399) 2025-03-20 16:13:48 -04:00
ccurme
785a8e7d45 tests: release 0.3.15 (#30397) 2025-03-20 15:38:40 -04:00
ccurme
5588ca4cfb core: release 0.3.47 (#30396) 2025-03-20 18:52:53 +00:00
ccurme
de3960d285 multiple: enforce standards on tool_choice (#30372)
- Test if models support forcing tool calls via `tool_choice`. If they
do, they should support
  - `"any"` to specify any tool
  - the tool name as a string to force calling a particular tool
- Add `tool_choice` to signature of `BaseChatModel.bind_tools` in core
- Deprecate `tool_choice_value` in standard tests in favor of a boolean
`has_tool_choice`

Will follow up with PRs in external repos (tested in AWS and Google
already).
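
A minimal sketch of the standardized behavior (shown with ChatOpenAI; the
tool is illustrative):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return "Sunny."


llm = ChatOpenAI(model="gpt-4o-mini")

# Force the model to call some tool
llm_any = llm.bind_tools([get_weather], tool_choice="any")

# Force the model to call a specific tool by name
llm_weather = llm.bind_tools([get_weather], tool_choice="get_weather")
```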
2025-03-20 17:48:59 +00:00
ccurme
b86cd8270c multiple: support strict and method in with_structured_output (#30385) 2025-03-20 13:17:07 -04:00
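
A minimal sketch with ChatOpenAI (the schema is illustrative):

```python
from pydantic import BaseModel

from langchain_openai import ChatOpenAI


class Joke(BaseModel):
    setup: str
    punchline: str


llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(Joke, method="json_schema", strict=True)
structured_llm.invoke("Tell me a short joke about computers.")
```
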
Mohammad Mohtashim
1103bdfaf1 (Ollama) Fix String Value parsing in _parse_arguments_from_tool_call (#30154)
- **Description:** Fix String Value parsing in
_parse_arguments_from_tool_call
- **Issue:** #30145

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-19 21:47:18 -04:00
Daniel Liden
c0ffc9aa29 Update MLflow integration docs with concise examples and external links (#30082)
- **Description:** This PR updates the [MLflow
integration](https://python.langchain.com/docs/integrations/providers/mlflow_tracking/)
docs. This PR is based on feedback and suggestions from @efriis on
#29612 . This proposed revision is much shorter, does not contain
images, and links out to the MLflow docs rather than providing lengthy
descriptions directly within these docs. Thank you for taking another
look!

- **Issue:** NA
- **Dependencies:** NA

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-20 00:25:10 +00:00
Tim König
b5992695ae community: add ZoteroRetriever (#30270)
**Description** 
This contribution adds a retriever for the Zotero API.
[Zotero](https://www.zotero.org/) is an open source reference management
for bibliographic data and related research materials. A retriever will
allow langchain applications to retrieve relevant documents from
personal or shared group libraries, which I believe will be helpful for
numerous applications, such as RAG systems, personal research
assistants, etc. Tests and docs were added.

The documentation provided assumes the retriever will be part of the
langchain-community package, as this seemed customary. Please let me
know if this is not the preferred way to do it. I also uploaded the
implementation to PyPI.

**Dependencies**
The retriever requires the `pyzotero` package for API access. This
dependency is stated in the docs, and the retriever will return an error
if the package is not found. However, this dependency is not added to
the langchain package itself.

**Twitter handle**
I'm no longer using Twitter, but I'd appreciate a shoutout on
[Bluesky](https://bsky.app/profile/koenigt.bsky.social) or
[LinkedIn](https://www.linkedin.com/in/dr-tim-k%C3%B6nig-534aa2324/)!


Let me know if there are any issues, I'll gladly try and sort them out!
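
A minimal usage sketch; the constructor arguments are illustrative and may differ from the released `ZoteroRetriever` signature (requires `pyzotero`):

```python
from langchain_community.retrievers import ZoteroRetriever

retriever = ZoteroRetriever(
    k=5,                    # number of documents to return
    library_id="123456",    # your Zotero library ID
    library_type="user",    # "user" for personal, "group" for shared libraries
)
docs = retriever.invoke("reinforcement learning")
```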

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-19 20:19:32 -04:00
ccurme
aa5ac9279a docs: update tavily guides (#30387)
AgentExecutor -> langgraph
2025-03-19 19:29:57 -04:00
pulvedu
4346aca5cf Integration update (#30381)
This pull request includes a change to the following
- docs/docs/integrations/tools/tavily_search.ipynb 
- docs/docs/integrations/tools/tavily_extract.ipynb
- added docs/docs/integrations/providers/tavily.mdx

---------

Co-authored-by: pulvedu <dustin@tavily.com>
2025-03-19 17:58:25 -04:00
Daniel Rauber
9b687d7fbd community[minor]: PlaywrightURLLoader can take stored session file (#30152)
**Description:**
Implements an additional `browser_session` parameter on
PlaywrightURLLoader which can be used to initialize the browser context
by providing a stored playwright context.
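
A minimal sketch, assuming the session file is a stored Playwright storage state and that the parameter name matches the PR text:

```python
from langchain_community.document_loaders import PlaywrightURLLoader

loader = PlaywrightURLLoader(
    urls=["https://example.com/members-only"],
    browser_session="state.json",  # stored session used to init the browser context
)
docs = loader.load()
```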
2025-03-19 16:29:07 -04:00
Ikko Eltociear Ashimine
bffa530816 docs: update contextual.ipynb (#30384)
intialize -> initialize
2025-03-19 15:48:58 -04:00
Yeonseolee
65b16d3200 Docs: Fix deprecated initialize agent in ainetwork (#30355)
## Description
- Replaced `initialize_agent`, `AgentType` usage in ainetwork
integration
- Updated usage example to `create_react_agent` in langgraph

## Issue
- #29277

## Dependencies
- N/A

## Twitter handle
- I don't use Twitter
2025-03-19 15:20:21 -04:00
Vadym Barda
73c04f4707 core[patch]: release 0.3.46 (#30383) 2025-03-19 15:09:08 -04:00
William FH
ce84f8ba7e Dereference run tree (#30377) 2025-03-19 19:05:06 +00:00
William FH
8265be4d3e Unset context to None in var (#30380) 2025-03-19 18:53:17 +00:00
William FH
4130e6476b Unset context after step (#30378)
While we are already careful to copy before setting the config, if other
objects hold a reference to the config or context, it wouldn't be
cleared.
2025-03-19 11:46:23 -07:00
Vadym Barda
37190881d3 core[patch]: add util for approximate token counting (#30373) 2025-03-19 17:48:38 +00:00
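A minimal sketch of the utility; the function name and import path are assumptions based on the PR title:

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.messages.utils import count_tokens_approximately

messages = [HumanMessage("Hello!"), AIMessage("Hi, how can I help?")]
print(count_tokens_approximately(messages))  # rough estimate, no model tokenizer
```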
Brandon Luu
5ede4248ef docs: Update Vector Store docs formatting (#30359)
Description: Fix formatting in Vector Stores docs.

- astradb: fix API ref spacing
- milvus, pgvector, pinecone, qdrant: removed % in cmds for docs
consistency
- pgvector: removed redundant code and reorganized imports

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-19 15:54:18 +00:00
Matthew Farrellee
5f812f5968 langchain-tests: skip instead of passing image message tests (#30375)
**Description:** use skip for image message tests
2025-03-19 15:35:32 +00:00
ccurme
aae8306d6c groq: release 0.3.0 (#30374) 2025-03-19 15:23:30 +00:00
Ashwin
83cfb9691f Fix typo: change 'ben' to 'be' in comment (#30358)
**Description:**  
This PR fixes a minor typo in the comments within
`libs/partners/openai/langchain_openai/chat_models/base.py`. The word
"ben" has been corrected to "be" for clarity and professionalism.

**Issue:**  
N/A

**Dependencies:**  
None
2025-03-19 10:35:35 -04:00
pudongair
4d1d726e61 docs: fix some typos (#30367)
---------

Signed-off-by: pudongair <744355276@qq.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-19 13:26:07 +00:00
Florian Chappaz
07cb41ea9e community: aligning ChatLiteLLM default parameters with litellm (#30360)
**Description:**
Since `ChatLiteLLM` is forwarding most parameters to
`litellm.completion(...)`, there is no reason to set other default
values than the ones defined by `litellm`.

In the case of the parameter 'n', it also causes an issue when trying to
call a serverless endpoint on Azure, as it is considered an extra
parameter. So we need to keep it optional.

We can debate about backward compatibility of this change: in my
opinion, there should not be big issues since from my experience,
calling `litellm.completion()` without these parameters works fine.

**Issue:** 
- #29679 

**Dependencies:** None
2025-03-19 09:07:28 -04:00
Hodory
57ffacadd0 community: add keep_newlines parameter to process_pages method (#30365)
- **Description:** Adding keep_newlines parameter to process_pages
method with page_ids on Confluence document loader
- **Issue:** N/A (This is an enhancement rather than a bug fix)
- **Dependencies:** N/A
- **Twitter handle:** N/A
2025-03-19 08:57:59 -04:00
ccurme
0ba03d8f3a Revert "docs: Refactored AWS Lambda Tool to Use AgentExecutor instead of initialize agent " (#30357)
Reverts langchain-ai/langchain#30267

Code is broken.
2025-03-19 03:17:47 +00:00
William FH
f5a0092551 Rm test for parent_run presence (#30356) 2025-03-18 19:44:19 -07:00
Mark Perfect
38b48d257d docs: Fix Qdrant sparse and hybrid vector search (#30208)
- **Description:** Updated the sparse and hybrid vector search due to
changes in the Qdrant API, and cleaned up the notebook
  


Co-authored-by: Mark Perfect <mark.anthony.perfect1@gmail.com>
2025-03-18 22:44:12 -04:00
Adam Brenner
f949d9a3d3 docs: Add Dell PowerScale Document Loader (#30209)
# Description
Adds documentation on LangChain website for a Dell specific document
loader for on-prem storage devices. Additional details on what the
document loader is described in the PR as well as on our github repo:
[https://github.com/dell/powerscale-rag-connector](https://github.com/dell/powerscale-rag-connector)

This PR also creates a category on the document loader webpage as no
existing category exists for on-prem. This follows the existing pattern
already established as the website has a category for cloud providers.

# Issue:
New release, no issue.

# Dependencies:

None

# Twitter handle:

DellTech

---------

Signed-off-by: Adam Brenner <adam@aeb.io>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-18 22:39:21 -04:00
ccurme
9fb0db6937 community: release 0.3.20 (#30354) 2025-03-18 21:57:12 +00:00
ccurme
168f1dfd93 langchain[patch]: update text-splitters min bound (#30352) 2025-03-18 20:53:43 +00:00
ccurme
f6cf2ce2ad langchain[patch]: lock with latest text-splitters (#30350) 2025-03-18 19:29:11 +00:00
ccurme
2909b49045 langchain: release 0.3.21 (#30348) 2025-03-18 19:13:20 +00:00
ccurme
958f85d541 text-splitters: release 0.3.7 (#30347) 2025-03-18 19:11:37 +00:00
Aniket kadukar
36412c02b6 docs: Fix typo "tall" → "tool" in tools_human.ipynb (#30345)
This PR fixes a minor typo.

The word "tall" was mistakenly used instead of "tool." 

I have corrected it to "tool" for better clarity and accuracy.
2025-03-18 13:12:56 -04:00
Lance Martin
46d6bf0330 ollama[minor]: update default method for structured output (#30273)
From function calling to Ollama's [dedicated structured output
feature](https://ollama.com/blog/structured-outputs).
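
A minimal sketch of the new default path, with an illustrative model name:

```python
from pydantic import BaseModel
from langchain_ollama import ChatOllama

class CityInfo(BaseModel):
    name: str
    country: str

llm = ChatOllama(model="llama3.1")
# The default method now uses Ollama's native structured-output feature
structured_llm = llm.with_structured_output(CityInfo)
print(structured_llm.invoke("Tell me about Paris."))
```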

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-18 12:44:22 -04:00
Marlene
ff8ce60dcc Core: Adding Azure AI to Supported Chat Models (#30342)
- **Description:** I was testing out `init_chat` and saw that chat
models can now be inferred. Currently only Azure OpenAI is supported, but
we would like to add support for Azure AI, which is a different package.
This PR edits the `base.py` file to add the chat implementation.
- I don't think this adds any additional dependencies 
- Will add a test and lint, but starting an initial draft PR. 

cc @santiagxf

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-18 11:53:20 -04:00
TheSongg
251551ccf1 doc: Implement langchain-xinference (#30296)
**PR title**: Implement langchain-xinference

**PR message**: Implement a standalone package for Xinference chat models
and LLM models.

https://github.com/langchain-ai/langchain/issues/30045#issue-2887214214
2025-03-18 11:50:16 -04:00
Oskar Stark
492b4c1604 docs(readthedocs): streamline config (#30307) 2025-03-18 11:47:45 -04:00
wenmeng zhou
5a6e1254a7 support return reasoning content for models like qwq in dashscope (#30317)
Here is an example:
```python
from langchain_community.chat_models.tongyi import ChatTongyi
from langchain_core.messages import HumanMessage

chatLLM = ChatTongyi(
    model="qwq-32b",   # refer to  https://help.aliyun.com/zh/model-studio/getting-started/models for more models
)
res = chatLLM.stream([HumanMessage(content="how much is 1 plus 1")])
for r in res:
    print(r)
```

```shell
content='' additional_kwargs={'reasoning_content': 'Okay, so the'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' user is asking "'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': 'how much is 1 plus'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1." Let me think'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' about this. Hmm'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', 1 plus'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': " 1... That's a pretty"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' basic math question. I'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' remember from arithmetic that when'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' you add 1 and'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 together, the'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' result is 2.'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' But wait, maybe'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' I should double-check to be'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' sure. Let me visualize it'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': '. If I have one apple'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' and someone gives me another'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' apple, I have'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' two apples total. Yeah,'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' that makes sense. Or'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' on a number line'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', starting at 1 and'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' moving 1 step forward lands'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' you at 2'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': '. \n\nIs there any'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' context where 1 +'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 might not equal'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 2? Like in different'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' number bases? Let'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': "'s see. In base"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 10, which'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' is standard,'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1+1 is'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 2. But if'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' we were in binary'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' (base 2'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': '), 1 +'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 would be 1'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': '0. But the question'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': " doesn't specify a base,"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' so I think the'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' default is base 10'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': '. \n\nAlternatively, could'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' this be a trick'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' question? Maybe they'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': "'re referring to something else"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', like in Boolean'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' algebra where 1 +'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 might still'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' be 1 in'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' some contexts? Wait'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', no, in Boolean'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' addition, 1'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' + 1 is typically'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': " 1 because it's logical"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' OR. But the'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' question just says "1'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' plus 1," which is'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' more arithmetic than Boolean.'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' \n\nOr maybe in some other'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' mathematical structure like modular arithmetic?'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' For example, modulo'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 2,'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 + 1 is'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 0. But again'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', unless specified, it'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': "'s probably standard addition"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': '. \n\nThe user might be'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' testing if I know basic'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' math, or maybe'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': " they're a student just"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' starting out. Either way,'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' the straightforward answer is'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 2. I should also'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': " consider if there's any cultural"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' references or jokes where'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 + 1 equals'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' something else, but I can'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': "'t think of any common"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' ones. \n\nAlternatively'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', in some contexts like'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' in chemistry,'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 + 1 could refer'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' to mixing solutions, but that'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': "'s not standard. The question"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' is pretty simple,'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' so I think the answer'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' is 2. To'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' be thorough, maybe mention'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' that in standard arithmetic it'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': "'s 2, but if"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': " there's a different"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' context, the answer'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' might vary. But since'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' no context is given'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', 2 is the safest'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' answer.'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='The result' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' of 1 plus' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' 1 is **2**.' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' \n\nIn standard arithmetic (base' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' 10), adding' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' 1 and 1 together' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' yields 2. This is' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' a fundamental mathematical principle. If' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' the question involves a different context' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' (e.g., binary' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=', modular arithmetic, or a' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' metaphorical meaning), it' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' would need clarification,' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' but under typical circumstances, the' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' answer is **2**.' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ''} response_metadata={'finish_reason': 'stop', 'request_id': '4738c641-6bd8-9efc-a4fe-d929d4e62bef', 'token_usage': {'input_tokens': 16, 'output_tokens': 560, 'total_tokens': 576}} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'

```

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-18 11:43:10 -04:00
ccurme
b91daf06eb groq[minor]: remove default model (#30341)
The default model for `ChatGroq`, `"mixtral-8x7b-32768"`, is being
retired on March 20, 2025. Here we remove the default, such that model
names must be explicitly specified (being explicit is a good practice
here, and avoids the need for breaking changes down the line). This
change will be released in a minor version bump to 0.3.

This follows https://github.com/langchain-ai/langchain/pull/30161
(released in version 0.2.5), where we began generating warnings to this
effect.

![Screenshot 2025-03-18 at 10 33 27 AM](https://github.com/user-attachments/assets/f1e4b302-c62a-43b0-aa86-eaf9271e86cb)
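
After this change the model must be named explicitly (model name below is illustrative):

```python
from langchain_groq import ChatGroq

llm = ChatGroq(model="llama-3.1-8b-instant")
```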
2025-03-18 10:50:34 -04:00
amuwall
f6a17fbc56 community: fix import exception too constrictive (#30218)
Fixes issue #30097
2025-03-17 22:09:02 -04:00
Aryan Agarwal
7ff7c4f81b docs: Refactored AWS Lambda Tool to Use AgentExecutor instead of initialize agent (#30267)
## Description:
- Removed deprecated `initialize_agent()` usage in AWS Lambda
integration.
- Replaced it with `AgentExecutor` for compatibility with LangChain
v0.3.
- Fixed documentation linting errors.

## Issue:
- No specific issue linked, but this resolves the use of deprecated
agent initialization.

## Dependencies:
- No new dependencies added.

## Request for Review:
- Please verify if the implementation is correct.
- If approved and merged, I will proceed with updating other related
files.

## Twitter Handle (Optional):
I don't have a Twitter but here is my LinkedIn instead
(https://www.linkedin.com/in/aryan1227/)
2025-03-17 22:04:13 -04:00
qonnop
036f00dc92 community: support in-memory data (Blob.from_data) in all audio parsers (#30262)
OpenAIWhisperParser, OpenAIWhisperParserLocal, YandexSTTParser do not
handle in-memory audio data (loaded via Blob.from_data) correctly. They
require Blob.path to be set and AudioSegment is always read from the
file system. In-memory data is handled correctly only for
FasterWhisperParser so far. I changed OpenAIWhisperParser,
OpenAIWhisperParserLocal, YandexSTTParser accordingly to match
FasterWhisperParser.
Thanks for reviewing the PR!
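
A minimal sketch of the in-memory path these parsers now support; the file name, MIME type, and API key are illustrative:

```python
from langchain_core.documents.base import Blob
from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParser

audio_bytes = open("speech.mp3", "rb").read()
blob = Blob.from_data(audio_bytes, mime_type="audio/mpeg")  # no path needed
docs = list(OpenAIWhisperParser(api_key="sk-...").lazy_parse(blob))
```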

Co-authored-by: qonnop <qonnop@users.noreply.github.com>
2025-03-17 19:52:33 -04:00
Ke Liu
98a9ef19ec fix typo (#30298)
fix typo in "environment"
2025-03-17 23:37:43 +00:00
Matthew Farrellee
1985aaf095 langchain-tests: allow subclasses to add addition, non-standard tests (#30204)
**Description:** The ChatModel[Integration]Tests classes are powerful
and helpful; this change allows subclasses to add additional tests.

for instance,

```
class TestChatMyServiceIntegration(ChatModelIntegrationTests):
    ...
    def test_myservice(self, model: BaseChatModel) -> None:
        ...
```

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-17 23:37:16 +00:00
Ben
789db7398b text-splitters: Add JSFrameworkTextSplitter for Handling JavaScript Framework Code (#28972)
## Description
This pull request introduces a new text splitter,
`JSFrameworkTextSplitter`, to the Langchain library. The
`JSFrameworkTextSplitter` extends the `RecursiveCharacterTextSplitter`
to handle JavaScript framework code effectively, including React (JSX),
Vue, and Svelte. It identifies and utilizes framework-specific component
tags and syntax elements as splitting points, alongside standard
JavaScript syntax. This ensures that code is divided at natural
boundaries, enhancing the parsing and processing of JavaScript and
framework-specific code.

### Key Features
- Supports React (JSX), Vue, and Svelte frameworks.
- Identifies and uses framework-specific tags and syntax elements as
natural splitting points.
- Extends the existing `RecursiveCharacterTextSplitter` for seamless
integration.

## Issue
No specific issue addressed.

## Dependencies
No additional dependencies required.
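
A minimal usage sketch; the import path is an assumption based on the package named in the PR title:

```python
from langchain_text_splitters import JSFrameworkTextSplitter

splitter = JSFrameworkTextSplitter(chunk_size=200, chunk_overlap=0)
chunks = splitter.split_text(
    """
    import React from 'react';
    export default function App() {
      return <div><Header /><Content /></div>;
    }
    """
)
```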

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-17 23:32:33 +00:00
Bagatur
9b48e4c2b0 docs: update langsmith env vars (#30331) 2025-03-17 14:35:22 -07:00
ccurme
54eab796ab docs: update chat model tabs (#30330) 2025-03-17 15:39:10 -04:00
Oskar Stark
4c68749e38 docs(openai): use bold over ticks (#30303)
We are not talking about code here
2025-03-17 18:53:16 +00:00
Oskar Stark
531319f65b infra(GHA): description is required based on schema definition (#30305) 2025-03-17 18:42:42 +00:00
Oskar Stark
192035f8c0 docs: fix typo (#30310) 2025-03-17 16:54:32 +00:00
César García
a5eca20e1b docs: Fix for broken links for Docling project sites (#30313)
**Description:** Updated Docling project URLs from ds4sd.github.io to
docling-project.github.io/docling/
 **Issue:** #30312
 **Dependencies:** None
 **Twitter handle**: @lahoramaker
2025-03-17 16:47:09 +00:00
Oskar Stark
620d723fbf infra(GHA): remove unused --- at the beginning of some workflows (#30306) 2025-03-17 16:46:32 +00:00
César García
2c8b8114fa docs: Updated link to current Unstructured docs (#30316)
The former link led to a site that explains that the docs have moved,
but did not redirect the user to the actual site automatically. I just
copied the provided url, checked that it works and updated the link to
the current version.


**Description:** Updated the link to Unstructured Docs at
https://docs.unstructured.io
**Issue:** #30315 
**Dependencies:** None
**Twitter handle:** @lahoramaker
2025-03-17 16:46:28 +00:00
ccurme
e2ab4ccab3 docs: update min langchain-openai version in integrations page (#30326)
No longer RC
2025-03-17 16:45:47 +00:00
ccurme
5684653775 openai[patch]: release 0.3.9 (#30325) 2025-03-17 16:08:41 +00:00
ccurme
eb9b992aa6 openai[patch]: support additional Responses API features (#30322)
- Include response headers
- Max tokens
- Reasoning effort
- Fix bug with structured output / strict
- Fix bug with simultaneous tool calling + structured output
2025-03-17 12:02:21 -04:00
Bae-ChangHyun
d8510270ee community: add 'extract' mode to FireCrawlLoader for structured data extraction (#30242)
**Description:** 
Added an 'extract' mode to FireCrawlLoader that enables structured data
extraction from web pages. This feature allows users to extract
structured data from a single URL or entire websites using large
language models (LLMs).
You can find more parameters and usage in the [firecrawl
docs](https://docs.firecrawl.dev/features/extract-beta).
Currently you can only extract from one URL at a time (this depends on
firecrawl's extract method).

**Dependencies:** 
No new dependencies required. Uses existing FireCrawl API capabilities.
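
A minimal sketch of the new mode; the constructor arguments are illustrative and based on the PR description and FireCrawl's extract docs:

```python
from langchain_community.document_loaders import FireCrawlLoader

loader = FireCrawlLoader(
    url="https://example.com",
    api_key="fc-...",  # illustrative
    mode="extract",
    params={"prompt": "Extract the product name and price from the page."},
)
docs = loader.load()
```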

---------

Co-authored-by: chbae <chbae@gcsc.co.kr>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-17 15:15:57 +00:00
qonnop
747efa16ec community: fix CPU support for FasterWhisperParser (implicit compute type for WhisperModel) (#30263)
FasterWhisperParser fails on a machine without an NVIDIA GPU: "Requested
float16 compute type, but the target device or backend do not support
efficient float16 computation." This problem arises because the
WhisperModel is called with compute_type="float16", which works only for
NVIDIA GPU.

According to the [CTranslate2
docs](https://opennmt.net/CTranslate2/quantization.html#bit-floating-points-float16)
float16 is supported only on NVIDIA GPUs. Removing the compute_type
parameter solves the problem for CPUs. According to the [CTranslate2
docs](https://opennmt.net/CTranslate2/quantization.html#quantize-on-model-loading)
setting compute_type to "default" (standard when omitting the parameter)
uses the original compute type of the model or performs implicit
conversion for the specific computation device (GPU or CPU). I suggest
removing compute_type="float16".

@hulitaitai you are the original author of the FasterWhisperParser - is
there a reason for setting the parameter to float16?

Thanks for reviewing the PR!

Co-authored-by: qonnop <qonnop@users.noreply.github.com>
2025-03-14 22:22:29 -04:00
ccurme
c74e7b997d openai[patch]: support structured output via Responses API (#30265)
Also runs all standard tests using Responses API.
2025-03-14 15:14:23 -04:00
Priyansh Agrawal
f54f14b747 community: cube document loader - do not load non-public dimensions and measures (#30286)
- **Description:** Do not load non-public dimensions and measures
(public: false) with Cube semantic loader

- **Issue:** Currently, non-public dimensions and measures are loaded by
the Cube document loader, which leads to downstream applications using
them; Cube does not allow this.


2025-03-14 15:07:56 -04:00
Stavros Kontopoulos
ac22cde130 langchain_ollama: Support keep_alive in embeddings (#30251)
- Description: Adds support for keep_alive in Ollama Embeddings; see
https://github.com/ollama/ollama/issues/6401.
Builds on top of
https://github.com/langchain-ai/langchain/pull/29296. I have a use
case where I want to keep the embeddings model loaded on CPU forever.
- Dependencies: no deps are being introduced.
- Issue: haven't created an issue yet.
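
A minimal sketch of that use case; `keep_alive=-1` asks Ollama to keep the model loaded indefinitely (model name is illustrative):

```python
from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="nomic-embed-text", keep_alive=-1)
vector = embeddings.embed_query("hello world")
```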
2025-03-14 14:56:50 -04:00
Matthew Farrellee
65a8f30729 docs: update json-mode docs to use with_structured_output(method="json_mode") (#30291)
**Description:** update the json-mode concepts doc to use
method="json_mode" instead of model_kwargs w/ response_format
**Issue:** https://github.com/langchain-ai/langchain/issues/30290
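
A minimal sketch of the recommended pattern, using ChatOpenAI purely for illustration:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(method="json_mode")
result = structured_llm.invoke(
    "Return a JSON object with keys 'name' and 'age' for a fictional person."
)
```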
2025-03-14 14:54:57 -04:00
ccurme
18f9b5d8ab docs: update contributing doc (#30292) 2025-03-14 18:54:11 +00:00
homeffjy
2c99f12062 community[patch]: fix bilibili loader handling of multi-page content (#30283)
Previously the loader would only extract subtitles from the first page
of multi-page videos.
2025-03-14 14:53:03 -04:00
ccurme
0b80bec015 docs: fix typo (#30288) 2025-03-14 17:09:38 +00:00
ccurme
d5d0134e7b anthropic: release 0.3.10 (#30287) 2025-03-14 16:23:21 +00:00
ccurme
226f29bc96 anthropic: support built-in tools, improve docs (#30274)
- Support features from recent update:
https://www.anthropic.com/news/token-saving-updates (mostly adding
support for built-in tools in `bind_tools`
- Add documentation around prompt caching, token-efficient tool use, and
built-in tools.
2025-03-14 16:18:50 +00:00
Priyansh Agrawal
f27e2d7ce7 community: cube document loader - fix logging (#30285)
- **Description:** Fix bad log message on line #56 and replace f-string
logs with format specifiers

- **Issue:** Log messages such as this one
`INFO:langchain_community.document_loaders.cube_semantic:Loading
dimension values for: {dimension_name}...`

2025-03-14 11:36:18 -04:00
ccurme
bbd4b36d76 mistralai[patch]: bump core (#30278) 2025-03-13 23:04:36 +00:00
ccurme
315bb17ef5 core: release 0.3.45 (#30277) 2025-03-13 22:44:23 +00:00
pulvedu
d0bfc7f820 community[fix] : Pass API_KEY as argument (#30272)
Description:
This PR fixes the validation error "Value error, Did not find
tavily_api_key, please add an environment variable `TAVILY_API_KEY`
which contains it, or pass `tavily_api_key` as a named parameter."

Dependencies:
No new dependencies introduced.
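
A minimal sketch of the now-working named parameter (key value is illustrative):

```python
from langchain_community.tools.tavily_search import TavilySearchResults

tool = TavilySearchResults(tavily_api_key="tvly-...")
print(tool.invoke({"query": "LangChain"}))
```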

---------

Co-authored-by: pulvedu <dustin@tavily.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-13 22:19:31 +00:00
ccurme
5e0fa2cce5 infra: update release pipeline (#30276)
Instead of attempting to conditionally `needs` job, always run job and
exit successfully if not needed.
2025-03-13 18:10:59 -04:00
ccurme
733abcc884 mistral: release 0.2.8 (#30275) 2025-03-13 21:54:34 +00:00
Jacob Lee
e9c1765967 fix(core): Ignore missing secrets on deserialization (#30252) 2025-03-13 12:27:03 -07:00
ccurme
ebea5e014d standard tests: test simple agent loop (#30268) 2025-03-13 16:34:12 +00:00
ccurme
5237987643 docs: update readme (#30239)
Co-authored-by: Vadym Barda <vadym@langchain.dev>
2025-03-12 13:45:13 -04:00
ccurme
cd1ea8e94d openai[patch]: support Responses API (#30231)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-03-12 12:25:46 -04:00
Jason Zhang
49bdd3b6fe docs: Add AgentQL provider doc, tool/toolkit doc and documentloader doc (#30144)
- **Description:** Added AgentQL docs for the provider page, tools page
and documentloader page
- **Twitter handle:** @AgentQL

Repo:
https://github.com/tinyfish-io/agentql-integrations/tree/main/langchain
PyPI: https://pypi.org/project/langchain-agentql/


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-11 21:57:40 -04:00
Vadym Barda
23fa70f328 core[patch]: release 0.3.44 (#30236) 2025-03-11 18:59:02 -04:00
Vadym Barda
c7842730ef core[patch]: support single-node subgraphs and put subgraph nodes under the respective subgraphs (#30234) 2025-03-11 18:55:45 -04:00
Dharshan A
81d1653a30 docs: Fix typo in Generating Examples section of few-shot prompting doc (#30219)
2025-03-11 09:44:20 -04:00
ccurme
27d86d7bc8 infra: update release workflow (#30207)
Fix condition
2025-03-10 17:53:03 -04:00
ccurme
70fc0b8363 infra: update release workflow (#30203) 2025-03-10 20:18:33 +00:00
ccurme
62c570dd77 standard-tests, openai: bump core (#30202) 2025-03-10 19:22:24 +00:00
ccurme
38420ee76e docs: add note on Deepseek R1 (#30201) 2025-03-10 15:17:20 -04:00
ccurme
f896e701eb deepseek: install local langchain-tests in test deps (#30198) 2025-03-10 16:58:17 +00:00
ccurme
7b8f266039 infra: additional testing on core release (#30180)
Here we add a job to the release workflow that, when releasing
`langchain-core`, tests prior published versions of select packages
against the new version of core. We limit the testing to the most recent
published versions of langchain-anthropic and langchain-openai.

This is designed to catch backward-incompatible updates to core. We
sometimes update core and downstream packages simultaneously, so there
may not be any commit in the history at which tests would fail. So
although core and latest downstream packages could be consistent, we can
benefit from testing prior versions of downstream packages against core.

I tested the workflow by simulating a [breaking
change](d7287248cf)
in core and running it with publishing steps disabled:
https://github.com/langchain-ai/langchain/actions/runs/13741876345. The
workflow correctly caught the issue.
2025-03-10 08:59:59 -04:00
Hugh Gao
aa6dae4a5b community: Remove the system message count limit for ChatTongyi. (#30192)
## Description
The models in DashScope support multiple system messages. Here is the
[Doc](https://bailian.console.aliyun.com/model_experience_center/text#/model-market/detail/qwen-long?tabKey=sdk),
and the example code on the document page:
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # If the environment variable is not set, replace this with your API key
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # DashScope service base_url
)
# Initialize the messages list
completion = client.chat.completions.create(
    model="qwen-long",
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        # Replace 'file-fe-xxx' with the file-id used in your actual conversation scenario.
        {'role': 'system', 'content': 'fileid://file-fe-xxx'},
        {'role': 'user', 'content': '这篇文章讲了什么?'}  # "What does this article discuss?"
    ],
    stream=True,
    stream_options={"include_usage": True}
)

full_content = ""
for chunk in completion:
    if chunk.choices and chunk.choices[0].delta.content:
        # Concatenate the output content
        full_content += chunk.choices[0].delta.content
        print(chunk.model_dump())

print(full_content)
```
Tip: The example code is for OpenAI, but the document says that it also
supports the DashScope API; I tested it, and it works.
```
Is the Dashscope SDK invocation method compatible?

Yes, the Dashscope SDK remains compatible for model invocation. However, file uploads and file-ID retrieval are currently only supported via the OpenAI SDK. The file-ID obtained through this method is also compatible with Dashscope for model invocation.
```
2025-03-10 08:58:40 -04:00
Dharshan A
34e94755af Fix typo in astream_events in streaming docs (#30195)
2025-03-10 08:56:07 -04:00
ccurme
67aff1648b community: Add OpenGradient integration (Toolkit) (#30190)
Commandeering https://github.com/langchain-ai/langchain/pull/30135

---------

Co-authored-by: kylexqian <kylexqian@gmail.com>
2025-03-09 18:08:07 -04:00
ccurme
b209d46eb3 mistral[patch]: set global ssl context (#30189) 2025-03-09 21:27:41 +00:00
Vijay Selvaraj
df459d0d5e community: add Valthera integration (#30105)
**Description:**  
This PR integrates Valthera into LangChain, introducing a framework designed to send highly personalized nudges via an LLM agent. This is modeled after Dr. BJ Fogg's Behavior Model. This integration includes:

- Custom data connectors for HubSpot, PostHog, and Snowflake.
- A unified data aggregator that consolidates user data.
- Scoring configurations to compute motivation and ability scores.
- A reasoning engine that determines the appropriate user action.
- A trigger generator to create personalized messages for user engagement.

**Issue:**  
N/A

**Dependencies:**  
N/A

**Twitter handle:**  
- `@vselvarajijay`

**Tests and Docs:**  
- `docs/docs/integrations/tools/valthera` 
- `https://github.com/valthera/langchain-valthera/tree/main/tests`


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 21:19:08 +00:00
ccurme
3823daa0b9 cli: update integration doc template for tools (#30188)
Chain example -> langgraph agent
2025-03-09 21:14:43 +00:00
David Skarbrevik
0d7cdf290b langchain: clean pyproject ruff section (#30070)
## Changes
- `/Makefile` - added extra step to `make format` and `make lint` to
ensure the lint dep-group is installed before running ruff (documented
in issue #30069)

- `/pyproject.toml` - removed ruff exceptions for files that no longer
exist or no longer create formatting/linting errors in ruff

## Testing

**running `make format` on this branch/PR**
<img width="435" alt="image"
src="https://github.com/user-attachments/assets/82751788-f44e-4591-98ed-95ce893ce623"
/>

## Issue

fixes #30069

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 15:06:02 -04:00
Jonathan Feng
911accf733 docs: add contextualai documentation (#30050)
**Description:** adds ContextualAI's `langchain-contextual` package's
documentation


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 02:43:13 +00:00
Bharat
b9746a6910 fixes#30182: update tool names to match OpenAI function name pattern (#30183)
The OpenAI API requires function names to match the pattern
'^[a-zA-Z0-9_-]+$'. This updates the JIRA toolkit's tool names to use
underscores instead of spaces to comply with this requirement and
prevent BadRequestError when using the tools with OpenAI functions.

Error fixed:
```
File "langgraph-bug-fix/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1023, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'tools[0].function.name': string does not match pattern. Expected a string that matches the pattern '^[a-zA-Z0-9_-]+$'.", 'type': 'invalid_request_error', 'param': 'tools[0].function.name', 'code': 'invalid_value'}}
During task with name 'agent' and id 'aedd7537-e8d5-6678-d0c5-98129586d3ac'
```

Issue: #30182
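
A minimal sketch of the constraint the fix satisfies:

```python
import re

OPENAI_NAME_PATTERN = re.compile(r"^[a-zA-Z0-9_-]+$")

print(bool(OPENAI_NAME_PATTERN.match("Create Issue")))  # False: contains a space
print(bool(OPENAI_NAME_PATTERN.match("create_issue")))  # True
```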
2025-03-08 20:48:25 -05:00
ccurme
cee0fecb08 docs: update package registry counts (#30181) 2025-03-08 20:37:59 -05:00
William FH
bac3a28e70 Flush (#30157) 2025-03-07 16:32:15 -08:00
ccurme
a7ab5e8372 community[patch]: ChatPerplexity: track usage metadata (#30175) 2025-03-07 23:25:05 +00:00
Vadym Barda
6c05d4b153 docs[patch]: update trim messages wording (#30174) 2025-03-07 17:05:51 -05:00
ccurme
1c993b921c core[patch]: release 0.3.43 (#30173) 2025-03-07 21:56:00 +00:00
ccurme
9893e5cb80 core[patch]: catch structured_output_format (#30172)
Change to `ls_structured_output_format` was not backward-compatible with
older versions of integration packages.
2025-03-07 16:50:06 -05:00
ccurme
88dc479c4a docs: update model used in ChatGroq (#30170)
`mixtral-8x7b-32768` is being retired on March 20.
2025-03-07 16:29:05 -05:00
ccurme
33a3510243 core[patch]: export ArgsSchema (#30169)
This is needed for type hints

see: https://github.com/langchain-ai/langchain/pull/30167
2025-03-07 20:43:05 +00:00
OysterMax
01317fde21 DOC: type checker complain on args_schema type hint when inheriting from BaseTool (#30167)
- **Description:** update docs to suppress type checker complaints about
the args_schema type hint when inheriting from BaseTool
- **Issue:** #30142 
- **Dependencies:** N/A
- **Twitter handle:** N/A

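A minimal sketch of the annotation the updated docs recommend, using the `ArgsSchema` alias exported in #30169 (field names are illustrative):

```python
from langchain_core.tools import ArgsSchema, BaseTool
from pydantic import BaseModel, Field

class CalculatorInput(BaseModel):
    a: int = Field(description="first number")
    b: int = Field(description="second number")

class MultiplyTool(BaseTool):
    name: str = "multiply"
    description: str = "Multiply two integers."
    args_schema: ArgsSchema = CalculatorInput  # satisfies the type checker

    def _run(self, a: int, b: int) -> int:
        return a * b
```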
2025-03-07 15:41:53 -05:00
ccurme
17507c9ba6 groq[patch]: release 0.2.5 (#30168) 2025-03-07 20:25:51 +00:00
andyzhou1982
9e863c89d2 add JiebaLinkExtractor for chinese doc extracting (#30150)
- **Description:** add jieba_link_extractor.py for Chinese doc extraction
- **Dependencies:** jieba
- **Docs:**
  /doc/doc/integrations/providers/jieba.md
  /doc/doc/integrations/vectorstores/jieba_link_extractor.ipynb
  /libs/packages.yml

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-07 20:21:46 +00:00
ccurme
74e7772a5f groq[patch]: warn if model is not specified (#30161)
Groq is retiring `mixtral-8x7b-32768`, which is currently the default
model for ChatGroq, on March 20. Here we emit a warning if the model is
not specified explicitly.

A version 0.3.0 will be released ahead of March 20 that removes the
default altogether.
2025-03-07 15:21:13 -05:00
Ioannis Bakagiannis
3444e587ee docs: Integration Update - ADS4GPTs (#30153)
docs: New integration for LangChain - ads4gpts-langchain

Description: Tools and a toolkit for native agentic integration of
ADS4GPTs within LangChain, to help applications monetize with
advertising.

Twitter handle: @ads4gpts

Co-authored-by: knitlydevaccount <loom+github@knitly.app>
2025-03-07 14:35:44 -05:00
ccurme
3c258194ae tests[patch]: release 0.3.14 (#30165) 2025-03-07 18:34:05 +00:00
ccurme
34638ccfae openai[patch]: release 0.3.8 (#30164) 2025-03-07 18:26:40 +00:00
ccurme
4e5058f29c core[patch]: release 0.3.42 (#30163) 2025-03-07 18:14:45 +00:00
Eugene Yurtsev
894fd63a61 cli: release 0.0.36 (#30159)
Bump for 0.0.36
2025-03-07 13:05:40 -05:00
ccurme
806211475a core[patch]: update structured output tracing (#30123)
- Trace JSON schema in `options`
- Rename to `ls_structured_output_format`
2025-03-07 13:05:25 -05:00
Jakub Kopecký
d0f5bcda29 docs: fix apify actors notebok main heading text (#30040)
- **Description:** Fix Apify Actors tool notebook main heading text so
there is an actual description instead of "Overview" in the tool
integration description on [LangChain tools integration
page](https://python.langchain.com/docs/integrations/tools/#all-tools).
2025-03-07 12:58:10 -05:00
ccurme
230876a7c5 anthropic[patch]: add PDF input example to API reference (#30156) 2025-03-07 14:19:08 +00:00
ccurme
5c7440c201 docs: update configuration how-to guide (#30139) 2025-03-06 11:51:48 -05:00
joeconstantino
022ff9eead Tableau docs for new datasource qa tool (#30125)
- **Description:** a notebook showing LangChain and LangGraph agents using
the new langchain_tableau tool
- **Twitter handle:** @joe_constantin0

---------

Co-authored-by: Joe Constantino <joe@constantino.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-06 14:58:56 +00:00
ccurme
52b0570bec core, openai, standard-tests: improve OpenAI compatibility with Anthropic content blocks (#30128)
- Support thinking blocks in core's `convert_to_openai_messages` (pass
through instead of error)
- Ignore thinking blocks in ChatOpenAI (instead of error)
- Support Anthropic-style image blocks in ChatOpenAI

---

Standard integration tests include a `supports_anthropic_inputs`
property which is currently enabled only for tests on `ChatAnthropic`.
This test enforces compatibility with message histories of the form:
```
- system message
- human message
- AI message with tool calls specified only through `tool_use` content blocks
- human message containing `tool_result` and an additional `text` block
```
It additionally checks support for Anthropic-style image inputs if
`supports_image_inputs` is enabled.

Here we change this test, such that if you enable
`supports_anthropic_inputs`:
- You support AI messages with text and `tool_use` content blocks
- You support Anthropic-style image inputs (if `supports_image_inputs`
is enabled)
- You support thinking content blocks.

That is, we add a test case for thinking content blocks, but we also
remove the requirement of handling tool results within HumanMessages
(motivated by existing agent abstractions, which should all return
ToolMessage). We move that requirement to a ChatAnthropic-specific test.
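
A minimal sketch of the new pass-through (the block fields are assumptions based on Anthropic's thinking-block format):

```python
from langchain_core.messages import AIMessage, convert_to_openai_messages

# An Anthropic-style AI message whose thinking block should now pass
# through instead of raising an error
msg = AIMessage(
    content=[
        {"type": "thinking", "thinking": "2 + 2 is basic arithmetic.", "signature": "sig"},
        {"type": "text", "text": "The answer is 4."},
    ]
)
print(convert_to_openai_messages([msg]))
```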
2025-03-06 09:53:14 -05:00
Pat Patterson
b3dc66f7a3 community: fix AttributeError when creating LanceDB vectorstore (#30127)
**Description:**

This PR adds a call to `guard_import()` to fix an AttributeError raised
when creating LanceDB vectorstore instance with an existing LanceDB
table.

**Issue:**

This PR fixes issue #30124.

**Dependencies:**

No additional dependencies.

**Twitter handle:**

[@metadaddy](https://x.com/metadaddy), but I spend more time at
[@metadaddy.net](https://bsky.app/profile/metadaddy.net) these days.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-05 23:04:38 +00:00
Hugh Gao
9b7b8e4a1a community: make DashScope models support Partial Mode for text continuation. (#30108)
## Description
make DashScope models support Partial Mode for text continuation.

For text continuation in ChatTongYi, it supports text continuation with
a prefix by adding a "partial" argument in AIMessage. The document is
[Partial Mode
](https://help.aliyun.com/zh/model-studio/user-guide/partial-mode?spm=a2c4g.11186623.help-menu-2400256.d_1_0_0_8.211e5b77KMH5Pn&scm=20140722.H_2862210._.OR_help-T_cn~zh-V_1).
The API example is:
```py
import os
import dashscope

messages = [{
    "role": "user",
    "content": "请对“春天来了,大地”这句话进行续写,来表达春天的美好和作者的喜悦之情"
},
{
    "role": "assistant",
    "content": "春天来了,大地",
    "partial": True
}]
response = dashscope.Generation.call(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    model='qwen-plus',
    messages=messages,
    result_format='message',  
)

print(response.output.choices[0].message.content)
```
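
For the LangChain side, a hedged sketch: whether the `partial` flag travels via `AIMessage.additional_kwargs` is an assumption based on the description above (assumes `DASHSCOPE_API_KEY` is set):

```python
from langchain_community.chat_models import ChatTongyi
from langchain_core.messages import AIMessage, HumanMessage

llm = ChatTongyi(model="qwen-plus")
messages = [
    HumanMessage(content="请对“春天来了,大地”这句话进行续写"),
    # Partial Mode: the model continues from this prefix
    AIMessage(content="春天来了,大地", additional_kwargs={"partial": True}),
]
print(llm.invoke(messages).content)
```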

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-05 16:22:14 +00:00
黑牛
f0153414d5 Add request_id field to improve request tracking and debugging (for Tongyi model) (#30110)
- **Description**: Added the request_id field to the check_response
function to improve request tracking and debugging, applicable for the
Tongyi model.
- **Issue**: None
- **Dependencies**: None
- **Twitter handle**: None

- **Add tests and docs**: None

- **Lint and test**: Ran `make format`, `make lint`, and `make test` to
ensure the code meets formatting and testing requirements.
2025-03-05 11:03:47 -05:00
Manthan Surkar
1ee8aceaee community: fix Jira API wrapper failing initialization with cloud param (#30117)
### **Description**  
Converts the boolean `jira_cloud` parameter in the Jira API Wrapper to a
string before initializing the Jira Client. Also adds tests for the
same.

### **Issue**  
[Jira API Wrapper
Bug](8abb65e138/libs/community/langchain_community/utilities/jira.py (L47))

```python
jira_cloud_str = get_from_dict_or_env(values, "jira_cloud", "JIRA_CLOUD")
jira_cloud = jira_cloud_str.lower() == "true"
```

The above code has a bug when the value of `"jira_cloud"` is a boolean:
calling `.lower()` on a boolean raises an error.
Additionally, `False` cannot be passed explicitly since
`get_from_dict_or_env` falls back to environment variables.

Relevant code in `langchain_core`:  

[Source](https://github.com/thesmallstar/langchain/blob/master/.venv/lib/python3.13/site-packages/langchain_core/utils/env.py#L46)

```python
if isinstance(key, str) and key in data and data[key]:  # Here, data[key] is False
```

This PR fixes both issues.
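
A minimal sketch of the coercion (hedged: `resolve_jira_cloud` is a hypothetical helper, not the wrapper's actual validator):

```python
from langchain_core.utils import get_from_dict_or_env

def resolve_jira_cloud(values: dict) -> bool:
    # Coerce to str before .lower() so an explicit boolean no longer crashes
    jira_cloud = get_from_dict_or_env(values, "jira_cloud", "JIRA_CLOUD")
    return str(jira_cloud).lower() == "true"

print(resolve_jira_cloud({"jira_cloud": True}))  # True, no AttributeError
```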

### **Twitter Handle**  
[Manthan Surkar](https://x.com/manthan_surkar)
2025-03-05 10:49:25 -05:00
Adrián Panella
c599ba47d5 core(mermaid): fix error when 3+ subgraph levels (#29970) 2025-03-04 13:27:49 -05:00
Alexander Henlein
417efa30a6 docs: add Taiga Tool integration docs (#30042)
This PR adds documentation for the langchain-taiga Tool integration,
including an example notebook at
'docs/docs/integrations/tools/taiga.ipynb' and updates to
'libs/packages.yml' to track the new package.

Issue:
N/A

Dependencies:
None

Twitter handle:
N/A

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-04 17:51:20 +00:00
Mathias Marciano
5f0102242a Fixed an issue with the OpenAI Assistant's 'retrieval' tool and adding support for the 'attachments' parameter (#30006)
PR Title:
langchain: add attachments support in OpenAIAssistantRunnable

PR Description:
This PR fixes an issue with the "retrieval" tool (internally named
"file_search") in the OpenAI Assistant by adding support for the
"attachments" parameter in the invoke method. This change allows files
to be linked to messages when they are inserted into threads, which is
essential for utilizing OpenAI's Retrieval Augmented Generation (RAG)
feature.

Issue:
N/A

Dependencies:
None

Twitter handle:
N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-04 17:34:11 +00:00
Philippe PRADOS
4710c1fa8c community[minor]: Fix regular expression in visualize and outlines modules. (#30002)
Fix invalid escape characters
2025-03-04 12:23:48 -05:00
ccurme
577c0d0715 community[patch]: release 0.3.19 (#30104) 2025-03-04 16:12:03 +00:00
ccurme
ba5ddb218f anthropic[patch]: release 0.3.9 (#30103) 2025-03-04 10:53:55 -05:00
ccurme
9383a0536a tests[patch]: release 0.3.13 (#30102) 2025-03-04 10:53:43 -05:00
ccurme
fb16c25920 langchain[patch]: release 0.3.20 (#30101) 2025-03-04 15:47:27 +00:00
ccurme
692a68bf1c core[patch]: release 0.3.41 (#30100) 2025-03-04 15:08:57 +00:00
ccurme
484d945500 community[patch]: remove numpy cap for python < 3.12 (#30084) 2025-03-04 09:46:41 -05:00
Cheney Zhang
7eb6dde720 docs: refine milvus server description (#30071)
Documentation refinement: clarify the Milvus server description. The
descriptions of "Milvus standalone" and "Milvus server" were confusing,
so they are clarified with a detailed description.

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2025-03-04 09:39:54 -05:00
ZhangShenao
8575d7491f [Doc] Improve api doc (#30073)
- Update the API doc for `BaseMessage`
- Add a static method decorator for `retry_runnable`
2025-03-04 09:39:07 -05:00
Antonio Pisani
9a11e0edcd docs:Add SWI-Prolog for langchain-prolog (#30081)
Some users have complained that it is not clear that SWI-Prolog must be
installed before installing langchain-prolog.
2025-03-04 09:12:47 -05:00
Samuel Dion-Girardeau
ccb64e9f4f docs: Fix typo in code samples for max_tokens_for_prompt (#30088)
- **Description:** Fix typo in code samples for max_tokens_for_prompt.
Code blocks had singular "token" but the method has plural "tokens".
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** N/A
2025-03-04 09:11:21 -05:00
ccurme
33354f984f docs: update contributing docs (#30064) 2025-03-01 17:36:35 -05:00
ccurme
c7cd666a17 docs: add to vercel overrides (#30063) 2025-03-01 17:21:15 -05:00
ArrayPD
c671d54c6f core: make with_alisteners() example workable. (#30059)
**Description:**

Five fixes to the `with_alisteners()` example in
libs/core/langchain_core/runnables/base.py, replacing the incoherent
example output with a workable example and its output:

1. SyntaxError (unterminated string literal):
   `print(f"on start callback starts at {format_t(time.time())}`
   corrected to
   `print(f"on start callback starts at {format_t(time.time())}")`
2. SyntaxError (unterminated string literal):
   `print(f"on end callback starts at {format_t(time.time())}`
   corrected to
   `print(f"on end callback starts at {format_t(time.time())}")`
3. NameError: name 'Runnable' is not defined; fixed with
   `from langchain_core.runnables import Runnable`
4. NameError: name 'asyncio' is not defined; fixed with
   `import asyncio`
5. NameError: name 'format_t' is not defined; implemented with
   `from datetime import datetime, timezone` as

    def format_t(timestamp: float) -> str:
        return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()
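
For reference, a minimal runnable version incorporating all five fixes (a sketch using a trivial `RunnableLambda` chain rather than the exact docstring example):

```python
import asyncio
import time
from datetime import datetime, timezone

from langchain_core.runnables import RunnableLambda


def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()


async def on_start(run) -> None:
    print(f"on start callback starts at {format_t(time.time())}")


async def on_end(run) -> None:
    print(f"on end callback starts at {format_t(time.time())}")


async def main() -> None:
    chain = RunnableLambda(lambda x: x + 1).with_alisteners(
        on_start=on_start, on_end=on_end
    )
    print(await chain.ainvoke(1))  # fires on_start, prints 2, then fires on_end


asyncio.run(main())
```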
2025-03-01 15:39:02 -05:00
Chandra Nandan
eca8c5515d docs: sidebar-content-render (#30061) (#30062)
- [x] **PR title**: "docs: added proper width to sidebar content"

- [x] **PR message**: added proper width to sidebar content
- **Description:** While accessing the [LangChain Python API
Reference](https://python.langchain.com/api_reference/index.html) the
sidebar content does not display correctly.
    - **Issue:** Follow-up to #30061
    - **Dependencies:** None
    - **Twitter handle:** https://x.com/implicitdefcnc


2025-03-01 15:30:41 -05:00
cold-eye
7c175e3fda Update ascend.py (#30060)
Add `batch_size` to fix OOM when embedding large amounts of text.

2025-03-01 14:10:41 -05:00
ccurme
3b066dc005 anthropic[patch]: allow structured output when thinking is enabled (#30047)
Structured output will currently always raise a BadRequestError when
Claude 3.7 Sonnet's `thinking` is enabled, because we rely on forced
tool use for structured output and this feature is not supported when
`thinking` is enabled.

Here we:
- Emit a warning if `with_structured_output` is called when `thinking`
is enabled.
- Raise `OutputParserException` if no tool calls are generated.

This is arguably preferable to raising an error in all cases.

```python
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int


llm = ChatAnthropic(
    model="claude-3-7-sonnet-latest",
    max_tokens=5_000,
    thinking={"type": "enabled", "budget_tokens": 2_000},
)
structured_llm = llm.with_structured_output(Person)  # <-- this generates a warning
```

```python
structured_llm.invoke("Alice is 30.")  # <-- works
```

```python
structured_llm.invoke("Hello!")  # <-- raises OutputParserException
```
2025-02-28 14:44:11 -05:00
ccurme
f8ed5007ea anthropic, mistral: return model_name in response metadata (#30048)
Took a "census" of models supported by init_chat_model-- of those that
return model names in response metadata, these were the only two that
had it keyed under `"model"` instead of `"model_name"`.
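
A quick way to check the normalized key (a sketch; assumes `ANTHROPIC_API_KEY` is set):

```python
from langchain_anthropic import ChatAnthropic

msg = ChatAnthropic(model="claude-3-5-sonnet-latest").invoke("Hello")
print(msg.response_metadata["model_name"])  # keyed as "model_name", not "model"
```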
2025-02-28 18:56:05 +00:00
Christophe Bornet
9e6ffd1264 core: Add ruff rules PTH (pathlib) (#29338)
See https://docs.astral.sh/ruff/rules/#flake8-use-pathlib-pth

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-28 13:22:20 -05:00
TheSongg
86b364de3b Add asynchronous generate interface (#30001)
- [ ] **PR title**: [langchain_community.llms.xinference]: Add
asynchronous generate interface

- [ ] **PR message**: The asynchronous generate interface supports both
streaming and non-streaming data.

- [ ] **Add tests and docs**:

```python
from langchain_community.llms import Xinference
from langchain.prompts import PromptTemplate

llm = Xinference(
    server_url="http://0.0.0.0:9997",  # replace with your Xinference server URL
    model_uid=model_uid,  # the model UID returned from launching the model
    stream=True,
)
prompt = PromptTemplate(
    input_variables=["country"],
    template="Q: where can we visit in the capital of {country}? A:",
)
chain = prompt | llm

# inside an async generator receiving `user_input`:
async for chunk in chain.astream(input=user_input):
    yield chunk
```
2025-02-28 12:32:44 -05:00
Cheney Zhang
a1897ca621 docs: refine milvus doc with hybrid-search (#30037)
Milvus documentation refinement: add a more detailed hybrid search
description, with a full-text search introduction.

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2025-02-28 10:22:53 -05:00
Tiest van Gool
476cd26f57 Add xAI to ChatModelTabs drop down (#30028)
- [ ] **PR title**: "docs: add xAI to ChatModelTabs"

- [ ] **PR message**:
- **Description:** Added `ChatXAI` to `ChatModelTabs` dropdown to
improve visibility of xAI chat models (e.g., "grok-2", "grok-3").
    - **Issue:** Follow-up to #30010 
    - **Dependencies:** none
    - **Twitter handle:** @tiestvangool 


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-28 09:08:12 -05:00
Fakai Zhao
f07338d2bf Implementing the MMR algorithm for OLAP vector storage (#30033)
- **Implementing the MMR algorithm for OLAP vector storage**:
  - Supports the Apache Doris and StarRocks OLAP databases.
- Example: `vectorstore.as_retriever(search_type="mmr",
search_kwargs={"k": 10})`



---------

Co-authored-by: fakzhao <fakzhao@cisco.com>
2025-02-28 08:50:22 -05:00
Daniel Rauber
186cd7f1a1 community: PlaywrightURLLoader should wait for page load event before attempting to extract data (#30043)
## Description

The PlaywrightURLLoader should wait for a page to be loaded before
attempting to extract data.
2025-02-28 08:45:51 -05:00
Ikko Eltociear Ashimine
46908ee3da docs: update google_cloud_vertexai_rerank.ipynb (#30039)
recieve -> receive
2025-02-28 08:45:06 -05:00
ccurme
0dbcc1d099 docs: document anthropic features (#30030)
Update integrations page with extended thinking feature.

Update API reference with extended thinking and citations.
2025-02-27 19:37:04 -05:00
ccurme
6c7c8a164f openai[patch]: add unit test (#30022)
Test `max_completion_tokens` is propagated to payload for
AzureChatOpenAI.
2025-02-27 11:09:17 -05:00
DamonXue
156a60013a docs: fix tavily_search code-block format. (#30012)
This pull request includes a change to the `TavilySearchResults` class
in the `tool.py` file, which updates the code block format in the
documentation.

Documentation update:

*
[`libs/community/langchain_community/tools/tavily_search/tool.py`](diffhunk://#diff-e3b6a980979268b639c6a86e9b182756b0f7c7e9e5605e613bc0a72ea6aa5301L54-R59):
Changed the code block format from Python to JSON in the example
provided in the docstring.
2025-02-27 10:55:15 -05:00
kawamou
8977ac5ab0 community[fix]: Handle None value in raw_content from Tavily API response (#30021)
## **Description:**

When using the Tavily retriever with include_raw_content=True, the
retriever occasionally fails with a Pydantic ValidationError because
raw_content can be None.

The Document model in langchain_core/documents/base.py requires
page_content to be a non-None value, but the Tavily API sometimes
returns None for raw_content.

This PR fixes the issue by ensuring that even when raw_content is None,
an empty string is used instead:

```python
page_content=result.get("content", "")
            if not self.include_raw_content
            else (result.get("raw_content") or ""),
```
2025-02-27 10:53:53 -05:00
Yan
d0c9b98171 docs: writer integration docs cosmetic fixes (#29984)
Fixed links at Writer partners integration docs
2025-02-27 10:52:49 -05:00
Lakindu Boteju
f69deee1bd community: Add cost data for aws bedrock anthropic.claude-3-7 model (#30016)
This pull request includes updates to the
`libs/community/langchain_community/callbacks/bedrock_anthropic_callback.py`
file to add a new model version to the list of supported models.

Updates to supported models:

* Added support for the `anthropic.claude-3-7-sonnet-20250219-v1:0`
model with a rate of `0.003` for 1000 input tokens.
* Added support for the `anthropic.claude-3-7-sonnet-20250219-v1:0`
model with a rate of `0.015` for 1000 output tokens.

AWS Bedrock pricing reference : https://aws.amazon.com/bedrock/pricing
2025-02-27 09:51:52 -05:00
Mark Perfect
289b3422dc docs: Add Milvus Standalone to documentation (#29650)
- [x] **PR message**:
- Added a new section for how to set up and use Milvus with Docker, and
added an example of how to instantiate Milvus for hybrid retrieval
- Fixed the documentation setup to run `make lint` and `make format`


- [x] **Add tests and docs**: N/A


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


---------

Co-authored-by: Mark Perfect <mark.anthony.perfect1@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 21:31:40 +00:00
Lakindu Boteju
e0e9e560b3 PyMuPDF4LLM integration to LangChain (#29953)
## PyMuPDF4LLM integration to LangChain for PDF content extraction in
Markdown format

### Description

[PyMuPDF4LLM](https://github.com/pymupdf/RAG) makes it easier to extract
PDF content in Markdown format, needed for LLM & RAG applications.
(License: GNU Affero General Public License v3.0)


[langchain-pymupdf4llm](https://github.com/lakinduboteju/langchain-pymupdf4llm)
integrates PyMuPDF4LLM to LangChain as a Document Loader.
(License: MIT License)

This pull request introduces the integration of
[PyMuPDF4LLM](https://pymupdf.readthedocs.io/en/latest/pymupdf4llm) into
the LangChain project as an integration package:
[`langchain-pymupdf4llm`](https://github.com/lakinduboteju/langchain-pymupdf4llm).
The most important changes include adding new Jupyter notebooks to
document the integration and updating the package configuration file to
include the new package.

### Documentation:

* `docs/docs/integrations/providers/pymupdf4llm.ipynb`: Added a new
Jupyter notebook to document the integration of `PyMuPDF4LLM` with
LangChain, including installation instructions and class imports.
* `docs/docs/integrations/document_loaders/pymupdf4llm.ipynb`: Added a
new Jupyter notebook to document the usage of `langchain-pymupdf4llm` as
a LangChain integration package in detail.

### Package registration:

* `libs/packages.yml`: Updated the package configuration file to include
the `langchain-pymupdf4llm` package.

### Additional information

* Related to: https://github.com/langchain-ai/langchain/pull/29848

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 15:59:12 -05:00
Dan Mirsky
d98c3f76c2 core[patch]: Fix FileCallbackHandler name resolution, Fixes #29941 (#29942)
- **Description:** Same changes as #26593 but for FileCallbackHandler
- **Issue:**  Fixes #29941
- **Dependencies:** None
- **Twitter handle:** None

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-26 14:54:24 -05:00
Christophe Bornet
b3885c124f core: Add ruff rules TC (#29268)
See https://docs.astral.sh/ruff/rules/#flake8-type-checking-tc
Some fixes were made for TC001, TC002 and TC003, but these rules are
excluded since they don't play well with Pydantic.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 19:39:05 +00:00
talos
9cd20080fc community: Update SQLiteVec table trigger (#29914)
**Issue**: The trigger could only be used by the first table created;
additional triggers could not be created for other tables.

**Fixed**: Update the trigger name so that it can be used for new
tables.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 15:10:13 +00:00
ccurme
7562677f3f langchain[patch]: delete erroneous lock file (#30007)
Picked up during merge.
2025-02-26 15:01:05 +00:00
Erick Friis
3c96012f5e langchain: make numpy optional (#29182)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 14:35:24 +00:00
James Yang
8c28742980 docs: fix kinetica vectorstore typo (#29999)
Description:
- fix kinetica vectorstore typo
- add links

Co-authored-by: jamesongithub@users.noreply.github.com <jamesongithub@users.noreply.github.com>
2025-02-26 08:31:56 -05:00
Artem Yankov
6177b9f9ab community: add title, score and raw_content to tavily search results (#29995)
**Description:**

Tavily search results returned from the API include useful information
like title, score and (optionally) raw_content that was missing from the
wrapper, although it is documented there properly. Add this data to the
result structure.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-25 23:27:21 +00:00
Eugene Yurtsev
b525226531 core[patch]: version 0.3.40 (#29997)
Version 0.3.40 release
2025-02-25 23:09:40 +00:00
Vadym Barda
0fc50b82a0 core[patch]: allow passing description to @tool decorator (#29976) 2025-02-25 17:45:36 -05:00
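
A minimal sketch of the new option (`add` is a hypothetical tool):

```python
from langchain_core.tools import tool

# description= overrides the docstring-derived description
@tool(description="Add two integers and return the sum.")
def add(a: int, b: int) -> int:
    """Docstring no longer has to carry the description."""
    return a + b

print(add.description)  # "Add two integers and return the sum."
```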
Naveen SK
21bfc95e14 docs: Correct grammatical typos in various documentation files (#29983)
**Description:**
Fixed grammatical typos in various documentation files

**Issue:**
N/A

**Dependencies:**
N/A

**Twitter handle:**
@MrNaveenSK

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-25 19:13:31 +00:00
ccurme
1158d3134d langchain[patch]: remove aiohttp (#29991)
My guess is this was left over from when `community` was in langchain.
2025-02-25 11:43:00 -05:00
ccurme
afd7888392 langchain[patch]: remove explicit dependency on tenacity (#29990)
Not used anywhere in `langchain`, already a dependency of
langchain-core.
2025-02-25 11:31:55 -05:00
naveencloud
143c39a4a4 Update gitlab.mdx (#29987)
Instead of GitHub, GitLab was mentioned, which caused confusion when
referring to the documentation.


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-25 15:06:37 +00:00
ccurme
32704f0ad8 langchain: update extended test (#29988) 2025-02-25 14:58:20 +00:00
Yan
47e1a384f7 Writer partners integration docs (#29961)
**Documentation of Writer provider and additional features**
* [PyPi langchain-writer
web-page](https://pypi.org/project/langchain-writer/)
* [GitHub langchain-writer
repo](https://github.com/writer/langchain-writer)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-24 19:30:09 -05:00
Antonio Pisani
820a4c068c Transition prolog_tool doc to langgraph (#29972)
@ccurme As suggested, I transitioned the prolog_tool documentation to
LangGraph.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-24 23:34:53 +00:00
ccurme
79f5bbfb26 anthropic[patch]: release 0.3.8 (#29973) 2025-02-24 15:24:35 -05:00
ccurme
ded886f622 anthropic[patch]: support claude 3.7 sonnet (#29971) 2025-02-24 15:17:47 -05:00
Bagatur
d00d645829 docs[patch]: update disable_streaming docstring (#29968) 2025-02-24 18:40:31 +00:00
ccurme
b7a1705052 openai[patch]: release 0.3.7 (#29967) 2025-02-24 11:59:28 -05:00
ccurme
5437ee385b core[patch]: release 0.3.39 (#29966) 2025-02-24 11:47:01 -05:00
ccurme
291a232fb8 openai[patch]: set global ssl context (#29932)
We set 
```python
global_ssl_context = ssl.create_default_context(cafile=certifi.where())
```
at the module-level and share it among httpx clients.
2025-02-24 11:25:16 -05:00
ccurme
9ce07980b7 core[patch]: pydantic 2.11 compat (#29963)
Resolves https://github.com/langchain-ai/langchain/issues/29951

Was able to reproduce the issue with Anthropic installing from pydantic
`main` and correct it with the fix recommended in the issue.

Thanks very much @Viicos for finding the bug and the detailed writeup!
2025-02-24 11:11:25 -05:00
ccurme
0d3a3b99fc core[patch]: release 0.3.38 (#29962) 2025-02-24 15:04:53 +00:00
ccurme
b1a7f4e106 core, openai[patch]: support serialization of pydantic models in messages (#29940)
Resolves https://github.com/langchain-ai/langchain/issues/29003,
https://github.com/langchain-ai/langchain/issues/27264
Related: https://github.com/langchain-ai/langchain-redis/issues/52

```python
from langchain.chat_models import init_chat_model
from langchain.globals import set_llm_cache
from langchain_community.cache import SQLiteCache
from pydantic import BaseModel

cache = SQLiteCache()

set_llm_cache(cache)

class Temperature(BaseModel):
    value: int
    city: str

llm = init_chat_model("openai:gpt-4o-mini")
structured_llm = llm.with_structured_output(Temperature)
```
```python
# 681 ms
response = structured_llm.invoke("What is the average temperature of Rome in May?")
```
```python
# 6.98 ms
response = structured_llm.invoke("What is the average temperature of Rome in May?")
```
2025-02-24 09:34:27 -05:00
HackHuang
1645ec1890 docs(tool_artifacts.ipynb) : Remove the unnecessary information (#29960)
Update tool_artifacts.ipynb : Remove the unnecessary information as
below.


8b511a3a78/docs/docs/how_to/tool_artifacts.ipynb (L95)
2025-02-24 09:22:18 -05:00
HackHuang
78c54fccf3 docs(custom_tools.ipynb) : Fix the invalid URL link (#29958)
Update custom_tools.ipynb : Fix the invalid URL link about `@tool
decorator`
2025-02-24 14:03:26 +00:00
ccurme
927ec20b69 openai[patch]: update system role to developer for o-series models (#29785)
Some o-series models will raise a 400 error for `"role": "system"`
(`o1-mini` and `o1-preview` will raise, `o1` and `o3-mini` will not).

Here we update `ChatOpenAI` to update the role to `"developer"` for all
model names matching `^o\d`.

We only make this change on the ChatOpenAI class (not BaseChatOpenAI).
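
A minimal sketch of the described rule (`normalize_role` is a hypothetical helper, not the actual ChatOpenAI internals):

```python
import re

# Any model name matching ^o\d (o1, o1-mini, o3-mini, ...) gets
# "system" swapped for "developer"
def normalize_role(role: str, model: str) -> str:
    if role == "system" and re.match(r"^o\d", model):
        return "developer"
    return role

assert normalize_role("system", "o1-mini") == "developer"
assert normalize_role("system", "gpt-4o") == "system"  # "gpt-4o" doesn't match ^o\d
```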
2025-02-24 08:59:46 -05:00
Ahmed Tammaa
8b511a3a78 [Exception Handling] DeepSeek JSONDecodeError (#29758)
For context, please check #29626.

DeepSeek uses langchain_openai. The error is that it shows a
`json decode error`.

I added a handler for this to give a more sensible error message, namely
that the DeepSeek API returned empty/invalid JSON.

Reproducing the issue is a bit challenging as it is inconsistent:
sometimes DeepSeek returns valid data, and at other times it returns
invalid data, which triggers the JSON decode error.

This PR adds exception handling, but is not an ultimate fix for the issue.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-23 15:00:32 -05:00
Julien Elkaim
e586bffe51 community: Repair embeddings/llamacpp's embed_query method (#29935)
**Description:** As commented on commit
[41b6a86](41b6a86bbe),
it introduced a bug when an embedding request returns a non-nested list,
which is typically the case for the **_nomic-embed-text_** model.

- I added the unit test, and ran `make format`, `make lint` and `make
test` from the `community` package.
- No new dependency.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-23 19:32:17 +00:00
Saraswathy Kalaiselvan
5ca4933b9d docs: updated ChatLiteLLM model_kwargs description (#29937)
- [x] **PR title**: docs: (community) update ChatLiteLLM

- [x] **PR message**:
- **Description:** updated the description of the model_kwargs parameter,
which wrongly described temperature.
    - **Issue:** #29862 
    - **Dependencies:** N/A
    
- [x] **Add tests and docs**: N/A

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-23 19:27:13 +00:00
ccurme
512eb1b764 anthropic[patch]: update models for integration tests (#29938) 2025-02-23 14:23:48 -05:00
Christophe Bornet
f6d4fec4d5 core: Add ruff rules ANN (type annotations) (#29271)
See https://docs.astral.sh/ruff/rules/#flake8-annotations-ann
The interest compared to only mypy is that ruff is very fast at
detecting missing annotations.

ANN101 and ANN102 are deprecated so we ignore them 
ANN401 (no Any type) ignored to be in sync with mypy config

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-22 17:46:28 -05:00
Bagatur
979a991dc2 core[patch]: dont deep copy merge_message_runs (#28454)
afaict no need to deep copy here, if we merge messages then we convert
them to chunks first anyways
2025-02-22 21:56:45 +00:00
Mohammad Mohtashim
afa94e5bf7 _wait_for_run calling fix for OpenAIAssistantRunnable (#29927)
- **Description:** Fixed the `OpenAIAssistantRunnable` call of
`_wait_for_run`
- **Issue:**  #29923
2025-02-22 00:27:24 +00:00
Vadym Barda
437fe6d216 core[patch]: return ToolMessage from tools when tool call ID is empty string (#29921) 2025-02-21 11:53:15 -05:00
Taofiq Aiyelabegan
5ee8a8f063 [Integration]: Langchain-Permit (#29867)
## Which area of LangChain is being modified?
- This PR adds a new "Permit" integration to the `docs/integrations/`
folder.
- Introduces two new Tools (`LangchainJWTValidationTool` and
`LangchainPermissionsCheckTool`)
- Introduces two new Retrievers (`PermitSelfQueryRetriever` and
`PermitEnsembleRetriever`)
- Adds demo scripts in `examples/` showcasing usage.

## Description of Changes
- Created `langchain_permit/tools.py` for JWT validation and permission
checks with Permit.
- Created `langchain_permit/retrievers.py` for custom Permit-based
retrievers.
- Added documentation in `docs/integrations/providers/permit.ipynb` (or
`.mdx`) to explain setup, usage, and examples.
- Provided sample scripts in `examples/demo_scripts/` to illustrate
usage of these tools and retrievers.
- Ensured all code is linted and tested locally.

Thank you again for reviewing!

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-21 10:59:00 -05:00
Jean-Philippe Dournel
ebe38baaf9 community/mlx_pipeline: fix crash at mlx call (#29915)
- **Description:**
Since mlx_lm 0.20, all calls to mlx crash due to deprecation of the way
parameters are passed to the generate and generate_step methods.
The top_p, temp, repetition_penalty and repetition_context_size
parameters are no longer passed directly to those methods but are
wrapped into "sampler" and "logit_processor".


- **Dependencies:** mlx_lm (optional)

- **Tests:**
I've added a new test to the existing test file:
tests/integration_tests/llms/test_mlx_pipeline.py

---------

Co-authored-by: Jean-Philippe Dournel <jp@insightkeeper.io>
2025-02-21 09:14:53 -05:00
Sinan CAN
bd773cffc3 docs: remove redundant cell in sql_large_db guide (#29917)
2025-02-21 08:38:49 -05:00
ccurme
1fa9f6bc20 docs: build mongo in api ref (#29908) 2025-02-20 19:58:35 -05:00
Chaunte W. Lacewell
d972c6d6ea partners: add langchain-vdms (#29857)
**Description:** Deprecate vdms in community, add integration
langchain-vdms, and update any related files
**Issue:** n/a
**Dependencies:** langchain-vdms
**Twitter handle:** n/a

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 19:48:46 -05:00
Mohammad Mohtashim
8293142fa0 mistral[patch]: support model_kwargs (#29838)
- **Description:** `frequency_penalty` added as a client parameter
- **Issue:** #29803
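
A hedged usage sketch (assumes `MISTRAL_API_KEY` is set):

```python
from langchain_mistralai import ChatMistralAI

# frequency_penalty is now accepted directly as a client parameter
llm = ChatMistralAI(model="mistral-large-latest", frequency_penalty=0.5)
```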

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 18:47:39 -05:00
ccurme
924d9b1b33 cli[patch]: fix retriever template (#29907)
Chat model tabs don't render correctly in .ipynb template.
2025-02-20 17:51:19 +00:00
Brayden Zhong
a70f31de5f Community: RankLLMRerank AttributeError (Handle list-based rerank results) (#29840)
# community: Fix AttributeError in RankLLMRerank (`list` object has no
attribute `candidates`)

## **Description**
This PR fixes an issue in `RankLLMRerank` where reranking fails with the
following error:

```
AttributeError: 'list' object has no attribute 'candidates'
```

The issue arises because `rerank_batch()` returns a `List[Result]`
instead of an object containing `.candidates`.

### **Changes Introduced**
- Adjusted `compress_documents()` to support both:
  - Old API format: `rerank_results.candidates`
  - New API format: `rerank_results` as a list
  - Also fix wrong .txt location parsing while I was at it.
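
A minimal sketch of the compatibility shim (`extract_candidates` is a hypothetical helper name):

```python
def extract_candidates(rerank_results):
    # Old API: object exposing .candidates; new API: plain list of results
    return getattr(rerank_results, "candidates", rerank_results)
```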

---

## **Issue**
Fixes **AttributeError** in `RankLLMRerank` when using
`compression_retriever.invoke()`. The issue is observed when
`rerank_batch()` returns a list instead of an object with `.candidates`.

**Relevant log:**
```
AttributeError: 'list' object has no attribute 'candidates'
```

## **Dependencies**
- No additional dependencies introduced.

---

## **Checklist**
- [x] **Backward compatible** with previous API versions
- [x] **Tested** locally with different RankLLM models
- [x] **No new dependencies introduced**
- [x] **Linted** with `make format && make lint`
- [x] **Ready for review**

---

## **Testing**
- Ran `compression_retriever.invoke(query)`

## **Reviewers**
If no review within a few days, please **@mention** one of:
- @baskaryan
- @efriis
- @eyurtsev
- @ccurme
- @vbarda
- @hwchase17
2025-02-20 12:38:31 -05:00
Levon Ghukasyan
ec403c442a Separate deeplake vector store (#29902)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 17:37:19 +00:00
Jorge Piedrahita Ortiz
3acf842e35 core: add sambanova chat models to load module mapping (#29855)
- **Description:** add sambanova integration package chat models to load
module mapping, to allow serialization and deserialization
2025-02-20 12:30:50 -05:00
ccurme
d227e4a08e mistralai[patch]: release 0.2.7 (#29906) 2025-02-20 17:27:12 +00:00
Hande
d8bab89e6e community: add cognee retriever (#29878)
This PR adds a new cognee integration: knowledge-graph-based retrieval
enabling developers to ingest documents into cognee’s knowledge graph,
process them, and then retrieve context via CogneeRetriever.
It includes:
- langchain_cognee package with a CogneeRetriever class
- a test for the integration, demonstrating how to create, process, and
retrieve with cognee
- an example notebook showing its use. It lives in
`docs/docs/integrations` directory.



Thank you for the review!

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 17:15:23 +00:00
Sinan CAN
97dd5f45ae Update retrieval.mdx (#29905)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 17:12:29 +00:00
dokato
92b415a9f6 community: Made some Jira fields optional for agent to work correctly (#29876)
**Description:** Two small changes have been proposed here:
(1)
Previous code assumes that every issue has a priority field. If an issue
lacks this field, the code will raise a KeyError.
Now, the code checks if priority exists before accessing it. If priority
is missing, it assigns None instead of crashing. This prevents runtime
errors when processing issues without a priority.

(2)

Also If the "style" field is missing, the code throws a KeyError.
`.get("style", None)` safely retrieves the value if present.

**Issue:** #29875 

**Dependencies:** N/A
2025-02-20 12:10:11 -05:00
am-kinetica
ca7eccba1f Handled a bug around empty query results differently (#29877)
- **Handled query records properly** (community: vectorstores/kinetica)

- **Bugfix for empty query results handling**:
- **Description:** check the number of records returned by a query
before processing further
- **Issue:** previously resulted in an `AttributeError`, which has now
been fixed

@efriis
2025-02-20 12:07:49 -05:00
Antonio Pisani
2c403a3ea9 docs: Add langchain-prolog documentation (#29788)
I want to add documentation for a new integration with SWI-Prolog.

@hwchase17 check this out:

https://github.com/apisani1/langchain-prolog/tree/main/examples/travel_agent

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 11:50:28 -05:00
Marlene
be7fa920fa Partner: Azure AI Langchain Docs and Package Registry (#29879)
This PR adds documentation for the Azure AI package in LangChain to the
main monorepo.

No issue connected or updated dependencies.

Utilises existing tests and makes updates to the docs.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 14:35:26 +00:00
Hankyeol Kyung
2dd0ce3077 openai: Update reasoning_effort arg documentation (#29897)
**Description:** Update docstring for `reasoning_effort` argument to
specify that it applies to reasoning models only (e.g., OpenAI o1 and
o3-mini), clarifying its supported models.
**Issue:** None
**Dependencies:** None
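
A hedged usage sketch (assumes `OPENAI_API_KEY` is set; `o3-mini` is a reasoning model):

```python
from langchain_openai import ChatOpenAI

# reasoning_effort applies to reasoning models only (e.g., o1, o3-mini)
llm = ChatOpenAI(model="o3-mini", reasoning_effort="high")
print(llm.invoke("How many r's are in 'strawberry'?").content)
```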
2025-02-20 09:03:42 -05:00
Joe Ferrucci
c28ee329c9 Fix typo in local_llms.ipynb docs (#29903)
Change `tailed` to `tailored`

`Docs > How-To > Local LLMs:`

https://python.langchain.com/docs/how_to/local_llms/#:~:text=use%20a%20prompt-,tailed,-for%20your%20specific
2025-02-20 09:03:10 -05:00
ccurme
ed3c2bd557 core[patch]: set version="v2" as default in astream_events (#29894) 2025-02-19 23:21:37 +00:00
Fabian Blatz
a2d05a376c community: ConfluenceLoader: add a filter method for attachments (#29882)
Adds an `attachment_filter_func` parameter to the ConfluenceLoader class
which can be used to determine which files are indexed. This is useful
if you are interested in excluding files based on their media type or
other metadata.
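
A hedged usage sketch; the argument passed to the filter (attachment metadata) and the exact callable signature are assumptions:

```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://example.atlassian.net/wiki",
    space_key="DOCS",
    # Index only PDF attachments (assumed filter argument shape)
    attachment_filter_func=lambda att: att.get("title", "").endswith(".pdf"),
)
```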
2025-02-19 18:20:45 -05:00
ccurme
9ed47a4d63 community[patch]: release 0.3.18 (#29896) 2025-02-19 20:13:00 +00:00
ccurme
92889edafd core[patch]: release 0.3.37 (#29895) 2025-02-19 20:04:35 +00:00
ccurme
ffd6194060 core[patch]: de-beta rate limiters (#29891) 2025-02-19 19:19:59 +00:00
Erick Friis
5637210a20 infra: run docs build on packages.yml updates (#29796)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-19 18:45:30 +00:00
ccurme
fb4c8423f0 docs: fix builds (#29890)
Missed in https://github.com/langchain-ai/langchain/pull/29889
2025-02-19 13:35:59 -05:00
ccurme
68b13e5172 pinecone: delete from monorepo (#29889)
This now lives in https://github.com/langchain-ai/langchain-pinecone
2025-02-19 12:55:15 -05:00
Erick Friis
6c1e21d128 core: basemessage.text() (#29078) 2025-02-18 17:45:44 -08:00
Ben Burns
e2ba336e72 docs: fix partner package table build for packages with no download stats (#29871)
The build in #29867 is currently broken because `langchain-cli` didn't
add download stats to the provider file.

This change gracefully handles sorting packages with missing download
counts. I initially updated the build to fetch download counts on every
run, but pypistats [requests](https://pypistats.org/api/) that users not
fetch stats like this via CI.
2025-02-19 11:05:57 +13:00
Eugene Yurtsev
8e5074d82d core: release 0.3.36 (#29869)
Release 0.3.36
2025-02-18 19:51:43 +00:00
Vadym Barda
d04fa1ae50 core[patch]: allow passing JSON schema as args_schema to tools (#29812) 2025-02-18 14:44:31 -05:00
ccurme
5034a8dc5c xai[patch]: release 0.2.1 (#29854) 2025-02-17 14:30:41 -05:00
ccurme
83dcef234d xai[patch]: support dedicated structured output feature (#29853)
https://docs.x.ai/docs/guides/structured-outputs

Interface appears identical to OpenAI's.
```python
from langchain.chat_models import init_chat_model
from pydantic import BaseModel

class Joke(BaseModel):
    setup: str
    punchline: str

llm = init_chat_model("xai:grok-2").with_structured_output(
    Joke, method="json_schema"
)
llm.invoke("Tell me a joke about cats.")
```
2025-02-17 14:19:51 -05:00
ccurme
9d6fcd0bfb infra: add xai to scheduled testing (#29852) 2025-02-17 18:59:45 +00:00
ccurme
8a3b05ae69 langchain[patch]: release 0.3.19 (#29851) 2025-02-17 13:36:23 -05:00
ccurme
c9061162a1 langchain[patch]: add xai to extras (#29850) 2025-02-17 17:49:34 +00:00
Bagatur
1acf57e9bd langchain[patch]: init_chat_model xai support (#29849) 2025-02-17 09:45:39 -08:00
Paul Nikonowicz
1a55da9ff4 docs: Update gemini vector docs (#29841)
# Description

2 changes: 
1. removes `getpass` from the code example, as it reads from stdin and
causes a freeze
2. updates the example to the latest Gemini model

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-17 07:54:23 -05:00
hsm207
037b129b86 weaviate: Add-deprecation-warning (#29757)
- **Description:** add deprecation warning when using weaviate from
langchain_community
  - **Issue:** NA
  - **Dependencies:** NA
  - **Twitter handle:** NA

---------

Signed-off-by: hsm207 <hsm207@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-16 21:42:18 -05:00
Đỗ Quang Minh
cd198ac9ed community: add custom model for OpenAIWhisperParser (#29831)
Add a `model` property for OpenAIWhisperParser, defaulted to `whisper-1`
(the previous value).
Please help me update the docs and other related components of this
repo.
2025-02-16 21:26:07 -05:00
Cole McIntosh
6874c9c1d0 docs: add notebook for langchain-salesforce package (#29800)
**Description:**  
This PR adds a Jupyter notebook that explains the features,
installation, and usage of the
[`langchain-salesforce`](https://github.com/colesmcintosh/langchain-salesforce)
package. The notebook includes:
- Setup instructions for configuring Salesforce credentials  
- Example code demonstrating common operations such as querying,
describing objects, creating, updating, and deleting records

**Issue:**  
N/A

**Dependencies:**  
No new dependencies are required.

**Tests and Docs:**  
- Added an example notebook demonstrating the usage of the
`langchain-salesforce` package, located in `docs/docs/integrations`.

**Lint and Test:**  
- Ran `make format`, `make lint`, and `make test` successfully.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-16 08:34:57 -05:00
Jan Heimes
60f58df5b3 community: add top_k as param to Needle Retriever (#29821)
- [X] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is
being modified. Use "docs: ..." for purely docs changes, "infra: ..."
for CI changes.
  - Example: "community: add foobar LLM"


- [x] **PR message**: 
This PR adds top_k as a param to the Needle Retriever. By default we use
top 10.



- [X] **Add tests and docs**


- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2025-02-16 08:30:52 -05:00
Mateusz Szewczyk
8147679169 docs: Rename IBM product name to IBM watsonx (#29802)
Rename IBM product name to `IBM watsonx`

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-15 21:48:02 -05:00
Jesus Fernandez Bes
1dfac909d8 community: Adding IN Operator to AzureCosmosDBNoSQLVectorStore (#29805)
- **Description**: I have added a new operator to the operator map with
key `$in` and value `IN`, so that you can define filters using lists as
values. This was already contemplated, but since the IN operator was not
in the map, such filters could not be used.
- **Issue**: Fixes #29804.
- **Dependencies**: No extra.
2025-02-15 21:44:54 -05:00
Wahed Hemati
8901b113c3 docs: add Discord integration docs (#29822)
This PR adds documentation for the `langchain-discord-shikenso`
integration, including an example notebook at
`docs/docs/integrations/tools/discord.ipynb` and updates to
`libs/packages.yml` to track the new package.

  **Issue:**  
  N/A

  **Dependencies:**  
  None

  **Twitter handle:**  
  N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-15 21:43:45 -05:00
Akmal Ali Jasmin
f1792e486e fix: Correct getpass usage in Google Generative AI Embedding docs (#29809) (#29810)
**fix: Correct getpass usage in Google Generative AI Embedding docs
(#29809)**

- **Description:** Corrected the `getpass` usage in the Google
Generative AI Embedding documentation by replacing `getpass()` with
`getpass.getpass()` to fix the `TypeError`.
- **Issue:** #29809  
- **Dependencies:** None  

**Additional Notes:**  
The change ensures compatibility with Google Colab and follows Python's
`getpass` module usage standards.
2025-02-15 21:41:00 -05:00
HackHuang
80ca310c15 langchain : Add the full code snippet in rag.ipynb (#29820)
docs(rag.ipynb): Add the `full code` snippet; it's necessary and useful
as a demonstration for beginners.

Preview the change :
https://langchain-git-fork-googtech-patch-3-langchain.vercel.app/docs/tutorials/rag/

Two `full code` snippets are added as below :
<details>
<summary>Full Code:</summary>

```python
import bs4
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chat_models import init_chat_model
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from google.colab import userdata
from langchain_core.prompts import PromptTemplate
from langchain_core.documents import Document
from typing_extensions import List, TypedDict
from langgraph.graph import START, StateGraph

#################################################
# 1.Initialize the ChatModel and EmbeddingModel #
#################################################
llm = init_chat_model(
    model="gpt-4o-mini",
    model_provider="openai",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)

#######################
# 2.Loading documents #
#######################
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        # Only keep post title, headers, and content from the full HTML.
        parse_only=bs4.SoupStrainer(
            class_=("post-content", "post-title", "post-header")
        )
    ),
)
docs = loader.load()

#########################
# 3.Splitting documents #
#########################
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # chunk size (characters)
    chunk_overlap=200,  # chunk overlap (characters)
    add_start_index=True,  # track index in original document
)
all_splits = text_splitter.split_documents(docs)

###########################################################
# 4.Embedding documents and storing them in a vectorstore #
###########################################################
vector_store = InMemoryVectorStore(embeddings)
_ = vector_store.add_documents(documents=all_splits)

##########################################################
# 5.Customizing the prompt or loading it from Prompt Hub #
##########################################################
# prompt = hub.pull("rlm/rag-prompt") # load the prompt from the prompt-hub
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.

{context}

Question: {question}

Helpful Answer:"""
prompt = PromptTemplate.from_template(template)

##################################################################################################
# 5.Using LangGraph to tie together the retrieval and generation steps into a single application #
##################################################################################################
# 5.1.Define the state of the application, which controls the application data
class State(TypedDict):
    question: str
    context: List[Document]
    answer: str

# 5.2.1.Define a node of the application, which represents one application step
def retrieve(state: State):
    retrieved_docs = vector_store.similarity_search(state["question"])
    return {"context": retrieved_docs}

# 5.2.2.Define a node of the application, which represents one application step
def generate(state: State):
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt.invoke({"question": state["question"], "context": docs_content})
    response = llm.invoke(messages)
    return {"answer": response.content}

# 6.Define the "control flow" of application, which signifies the ordering of the application steps
graph_builder = StateGraph(State).add_sequence([retrieve, generate])
graph_builder.add_edge(START, "retrieve")
graph = graph_builder.compile()
```

</details>

<details>
<summary>Full Code:</summary>

```python
import bs4
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chat_models import init_chat_model
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from google.colab import userdata
from langchain_core.prompts import PromptTemplate
from langchain_core.documents import Document
from typing_extensions import List, TypedDict
from langgraph.graph import START, StateGraph
from typing import Literal
from typing_extensions import Annotated

#################################################
# 1.Initialize the ChatModel and EmbeddingModel #
#################################################
llm = init_chat_model(
    model="gpt-4o-mini",
    model_provider="openai",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)

#######################
# 2.Loading documents #
#######################
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        # Only keep post title, headers, and content from the full HTML.
        parse_only=bs4.SoupStrainer(
            class_=("post-content", "post-title", "post-header")
        )
    ),
)
docs = loader.load()

#########################
# 3.Splitting documents #
#########################
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # chunk size (characters)
    chunk_overlap=200,  # chunk overlap (characters)
    add_start_index=True,  # track index in original document
)
all_splits = text_splitter.split_documents(docs)

# Search analysis: Add some metadata to the documents in our vector store,
# so that we can filter on section later. 
total_documents = len(all_splits)
third = total_documents // 3
for i, document in enumerate(all_splits):
    if i < third:
        document.metadata["section"] = "beginning"
    elif i < 2 * third:
        document.metadata["section"] = "middle"
    else:
        document.metadata["section"] = "end"

# Search analysis: Define the schema for our search query
class Search(TypedDict):
    query: Annotated[str, ..., "Search query to run."]
    section: Annotated[
        Literal["beginning", "middle", "end"], ..., "Section to query."]

###########################################################
# 4.Embedding documents and storing them in a vectorstore #
###########################################################
vector_store = InMemoryVectorStore(embeddings)
_ = vector_store.add_documents(documents=all_splits)

##########################################################
# 5.Customizing the prompt or loading it from Prompt Hub #
##########################################################
# prompt = hub.pull("rlm/rag-prompt") # load the prompt from the prompt-hub
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.

{context}

Question: {question}

Helpful Answer:"""
prompt = PromptTemplate.from_template(template)

###################################################################
# 5.Using LangGraph to tie together the analyze_query, retrieval  #
# and generation steps into a single application                  #
###################################################################
# 5.1.Define the state of the application, which controls the application data
class State(TypedDict):
    question: str
    query: Search
    context: List[Document]
    answer: str

# Search analysis: Define a node of the application
# that is used to generate a query from the user's raw input
def analyze_query(state: State):
    structured_llm = llm.with_structured_output(Search)
    query = structured_llm.invoke(state["question"])
    return {"query": query}

# 5.2.1.Define a node of the application, which represents one application step
def retrieve(state: State):
    query = state["query"]
    retrieved_docs = vector_store.similarity_search(
        query["query"],
        filter=lambda doc: doc.metadata.get("section") == query["section"],
    )
    return {"context": retrieved_docs}

# 5.2.2.Define a node of the application, which represents one application step
def generate(state: State):
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt.invoke({"question": state["question"], "context": docs_content})
    response = llm.invoke(messages)
    return {"answer": response.content}

# 6.Define the "control flow" of application, which signifies the ordering of the application steps
graph_builder = StateGraph(State).add_sequence([analyze_query, retrieve, generate]) 
graph_builder.add_edge(START, "analyze_query")
graph = graph_builder.compile()
```

</details>

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-15 21:37:58 -05:00
Michael Chin
b2c21f3e57 docs: Update SagemakerEndpoint examples (#29814)
Related issue: https://github.com/langchain-ai/langchain-aws/issues/361

Updated the AWS `SagemakerEndpoint` LLM documentation to import from
`langchain-aws`.
2025-02-15 21:34:56 -05:00
Krishna Kulkarni
a98c5f1c4b langchain_community: add image support to DuckDuckGoSearchAPIWrapper (#29816)
- [ ] **PR title**: langchain_community: add image support to
DuckDuckGoSearchAPIWrapper

- **Description:** This PR enhances the DuckDuckGoSearchAPIWrapper
within the langchain_community package by introducing support for image
searches. The enhancement includes:
  - Adding a new method _ddgs_images to handle image search queries.
- Updating the run and results methods to process and return image
search results appropriately.
- Modifying the source parameter to accept "images" as a valid option,
alongside "text" and "news" (see the sketch below).
- **Dependencies:** No additional dependencies are required for this
change.
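A minimal sketch of the new image search path (the query string is
illustrative):

```python
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper

# Route queries to DuckDuckGo image search via the new "images" source.
wrapper = DuckDuckGoSearchAPIWrapper(source="images")
results = wrapper.results("golden retriever puppies", max_results=5)
for result in results:
    print(result)  # each result is a dict describing one image hit
```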
2025-02-15 21:32:14 -05:00
Iris Liu
0d9f0b4215 docs: updates Chroma integration API ref docs (#29826)
- Description: updates Chroma integration API ref docs
- Issue: #29817
- Dependencies: N/A
- Twitter handle: @irieliu

Co-authored-by: Iris <liuirisny@gmail.com>
2025-02-15 21:05:21 -05:00
ccurme
3fe7c07394 openai[patch]: release 0.3.6 (#29824) 2025-02-15 13:53:35 -05:00
ccurme
65a6dce428 openai[patch]: enable streaming for o1 (#29823)
Verified streaming works for the `o1-2024-12-17` snapshot as well.
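A minimal sketch of what this enables (the prompt is illustrative):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="o1-2024-12-17")

# Tokens now arrive incrementally instead of only in a final message.
for chunk in llm.stream("Briefly explain quantum entanglement."):
    print(chunk.content, end="", flush=True)
```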
2025-02-15 12:42:05 -05:00
Christophe Bornet
3dffee3d0b all: Bump blockbuster version to 1.5.18 (#29806)
Has fixes for running on Windows and non-CPython runtimes.
2025-02-14 07:55:38 -08:00
ccurme
d9a069c414 tests[patch]: release 0.3.12 (#29797) 2025-02-13 23:57:44 +00:00
ccurme
e4f106ea62 groq[patch]: remove xfails (#29794)
These appear to pass.
2025-02-13 15:49:50 -08:00
Erick Friis
f34e62ef42 packages: add langchain-xai (#29795)
wasn't registered per the contribution guide:
https://python.langchain.com/docs/contributing/how_to/integrations/
2025-02-13 15:36:41 -08:00
ccurme
49cc6106f7 tests[patch]: fix query for test_tool_calling_with_no_arguments (#29793) 2025-02-13 23:15:52 +00:00
Erick Friis
1a225fad03 multiple: fix uv path deps (#29790)
file:// format wasn't working with updates - it doesn't install as an
editable dep

move to tool.uv.sources with path= instead
2025-02-13 21:32:34 +00:00
Erick Friis
ff13384eb6 packages: update counts, add command (#29789) 2025-02-13 20:45:25 +00:00
Mateusz Szewczyk
8d0e31cbc5 docs: Fix model_id on EmbeddingTabs page (#29784)
Fix `model_id` in IBM provider on EmbeddingTabs page
2025-02-13 09:41:51 -08:00
Mateusz Szewczyk
61f1be2152 docs: Added IBM to ChatModelTabs and EmbeddingTabs (#29774)
Added IBM to ChatModelTabs and EmbeddingTabs
2025-02-13 08:43:42 -08:00
HackHuang
76d32754ff core : update the class docs of InMemoryVectorStore in in_memory.py (#29781)
- **Description:** Add a new introduction about checking `store` in
in_memory.py; it's necessary and useful for beginners.
```python
Check Documents:
    .. code-block:: python
    
        for doc in vector_store.store.values():
            print(doc)
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 16:41:47 +00:00
Mateusz Szewczyk
b82cef36a5 docs: Update IBM WatsonxLLM and ChatWatsonx documentation (#29752)
Update presented model in `WatsonxLLM` and `ChatWatsonx` documentation.
2025-02-13 08:41:07 -08:00
Mohammad Mohtashim
96ad09fa2d (Community): Added API Key for Jina Search API Wrapper (#29622)
- **Description:** Simple change for adding the API Key for Jina Search
API Wrapper
- **Issue:** #29596
2025-02-12 20:12:07 -08:00
ccurme
f1c66a3040 docs: minor fix to provider table (#29771)
Langfair renders as LangfAIr
2025-02-13 04:06:58 +00:00
Jakub Kopecký
c8cb7c25bf docs: update apify integration (#29553)
**Description:** Fixed and updated Apify integration documentation to
use the new [langchain-apify](https://github.com/apify/langchain-apify)
package.
**Twitter handle:** @apify
2025-02-12 20:02:55 -08:00
ccurme
16fb1f5371 chroma[patch]: release 0.2.2 (#29769)
Resolves https://github.com/langchain-ai/langchain/issues/29765
2025-02-13 02:39:16 +00:00
Mohammad Mohtashim
2310847c0f (Chroma): Small Fix in add_texts when checking for embeddings (#29766)
- **Description:** Small fix in `add_texts` to ensure embedding
nullability is checked properly.
- **Issue:** #29765

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 02:26:13 +00:00
Eric Pinzur
716fd89d8e docs: contributed Graph RAG Retriever integration (#29744)
**Description:** 

This adds the `Graph RAG` Retriever integration documentation, per
https://python.langchain.com/docs/contributing/how_to/integrations/.

* The integration exists in this public repository:
https://github.com/datastax/graph-rag
* We've implemented the standard langchain tests for retrievers:
https://github.com/datastax/graph-rag/blob/main/packages/langchain-graph-retriever/tests/test_langchain.py
* Our integration is published to PyPi:
https://pypi.org/project/langchain-graph-retriever/
2025-02-12 18:25:48 -08:00
Sunish Sheth
f42dafa809 Deprecating sql_database access for creating UC functions for agent tools (#29745)

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-13 02:24:44 +00:00
Thor 雷神 Schaeff
a0970d8d7e [WIP] chore: update ElevenLabs tool. (#29722)

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 01:54:34 +00:00
Chaymae El Aattabi
4b08a7e8e8 Fix #29759: Use local chunk_size_ for looping in embed_documents (#29761)
This fix ensures that the chunk size is correctly determined when
processing text embeddings. Previously, the code did not properly handle
cases where chunk_size was None, potentially leading to incorrect
chunking behavior.

Now, chunk_size_ is explicitly set to either the provided chunk_size or
the default self.chunk_size, ensuring consistent chunking. This update
improves reliability when processing large text inputs in batches and
prevents unintended behavior when chunk_size is not specified.
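A simplified sketch of the pattern behind the fix; `_embed_batch` is a
hypothetical helper, and the fallback on the first line of the body is the
point:

```python
def embed_documents(
    self, texts: list[str], chunk_size: int | None = None
) -> list[list[float]]:
    # Fall back to the instance default so chunk_size_ is never None.
    chunk_size_ = chunk_size or self.chunk_size

    embeddings: list[list[float]] = []
    for i in range(0, len(texts), chunk_size_):
        embeddings.extend(self._embed_batch(texts[i : i + chunk_size_]))
    return embeddings
```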

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 01:28:26 +00:00
Jorge Piedrahita Ortiz
1fbc01c350 docs: update sambanova integration api reference links (#29762)
- **Description:** update sambanova external package integration api
reference links in docs
2025-02-12 15:58:00 -08:00
Sunish Sheth
043d78d85d Deprecate langchain community ucfunctiontoolkit in favor of databricks_langchain (#29746)
2025-02-12 15:50:35 -08:00
Hugues Chocart
e4eec9e9aa community: add langchain-abso documentation (#29739)
Add the documentation for the community package `langchain-abso`. It
provides a new Chat Model class, that uses https://abso.ai

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2025-02-12 19:57:33 +00:00
ccurme
e61f463745 core[patch]: release 0.3.35 (#29764) 2025-02-12 18:13:10 +00:00
Nuno Campos
fe59f2cc88 core: Fix output of convert_messages when called with BaseMessage.model_dump() (#29763)
- additional_kwargs was being nested twice
- for example, response_metadata was placed inside additional_kwargs
2025-02-12 10:05:33 -08:00
Jacob Lee
f4e3e86fbb feat(langchain): Infer o3 model strings passed to init_chat_model as OpenAI (#29743) 2025-02-11 16:51:41 -08:00
Mohammad Mohtashim
9f3bcee30a (Community): Adding Structured Support for ChatPerplexity (#29361)
- **Description:** Adding structured output support for ChatPerplexity (see the sketch below)
- **Issue:** #29357
- This is implemented as per the Perplexity official docs:
https://docs.perplexity.ai/guides/structured-outputs
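A minimal sketch of the feature via the standard `with_structured_output`
interface (the model name is illustrative):

```python
from pydantic import BaseModel

from langchain_community.chat_models import ChatPerplexity


class Answer(BaseModel):
    city: str
    population: int


llm = ChatPerplexity(model="sonar", temperature=0)
structured_llm = llm.with_structured_output(Answer)
result = structured_llm.invoke("What is the most populous city in France?")
# result should be an Answer instance with parsed fields
```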

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-11 15:51:18 -08:00
Jawahar S
994c5465e0 feat: add support for IBM WatsonX AI chat models (#29688)
**Description:** Updated init_chat_model to support Granite models
deployed on IBM WatsonX
**Dependencies:**
[langchain-ibm](https://github.com/langchain-ai/langchain-ibm)

Tagging @baskaryan @efriis for review when you get a chance.
2025-02-11 15:34:29 -08:00
Shailendra Mishra
c7d74eb7a3 Oraclevs integration (#29723)
community: langchain_community/vectorstore/oraclevs.py

- **Description:** Refactored code to allow either a connection or a
connection pool.
- **Issue:** Normally an idle connection is terminated by the server-side
listener at timeout, so a user has to re-instantiate the vector store. The
timeout for a plain connection is not configurable. The solution is to use
a connection pool, where a user can specify a user-defined timeout and the
connections are managed by the pool.
- **Dependencies:** None


- [ ] **Add tests and docs**: This is not a new integration. A user can
pass either a connection or a connection pool. The determination of what
is passed is made at run time. Everything should work as before.

- [ ] **Lint and test**:  Already done.


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-11 14:56:55 -08:00
ccurme
42ebf6ae0c deepseek[patch]: release 0.1.2 (#29742) 2025-02-11 11:53:43 -08:00
ccurme
ec55553807 pinecone[patch]: release 0.2.3 (#29741) 2025-02-11 19:27:39 +00:00
ccurme
001cf99253 pinecone[patch]: add support for python 3.13 (#29737) 2025-02-11 11:20:21 -08:00
ccurme
ba8f752bf5 openai[patch]: release 0.3.5 (#29740) 2025-02-11 19:20:11 +00:00
ccurme
9477f49409 openai, deepseek: make _convert_chunk_to_generation_chunk an instance method (#29731)
1. Make `_convert_chunk_to_generation_chunk` an instance method on
BaseChatOpenAI
2. Override on ChatDeepSeek to add `"reasoning_content"` to message
additional_kwargs (see the sketch below).

Resolves https://github.com/langchain-ai/langchain/issues/29513
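A minimal sketch of what this enables downstream, assuming the provider
streams reasoning tokens (model name illustrative):

```python
from langchain_deepseek import ChatDeepSeek

llm = ChatDeepSeek(model="deepseek-reasoner")

# reasoning_content now lands in additional_kwargs on streamed chunks.
for chunk in llm.stream("What is 17 * 23?"):
    reasoning = chunk.additional_kwargs.get("reasoning_content", "")
    if reasoning:
        print(reasoning, end="", flush=True)
```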
2025-02-11 11:13:23 -08:00
Christopher Menon
1edd27d860 docs: fix SQL-based metadata filter syntax, add link to BigQuery docs (#29736)
Fix the syntax for SQL-based metadata filtering in the [Google BigQuery
Vector Search
docs](https://python.langchain.com/docs/integrations/vectorstores/google_bigquery_vector_search/#searching-documents-with-metadata-filters).
Also add a link to learn more about BigQuery operators that can be used
here.

I have been using this library, and have found that this is the correct
syntax to use for the SQL-based filters.

**Issue**: no open issue.
**Dependencies**: none.
**Twitter handle**: none.

No tests as this is only a change to the documentation.

<!-- Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17. -->
2025-02-11 11:10:12 -08:00
ccurme
d0c2dc06d5 mongodb[patch]: fix link in readme (#29738) 2025-02-11 18:19:59 +00:00
zzaebok
3b3d52206f community: change wikidata rest api version from v0 to v1 (#29708)
**Description:**

According to the [wikidata
documentation](https://www.wikidata.org/wiki/Wikidata_talk:REST_API),
Wikibase REST API version 1 (stable) has been available since November 11,
2024. Their guidance is to use the new v1 API, which in almost all cases
just requires replacing v0 in the routes with v1.
So I changed WIKIDATA_REST_API_URL from v0 to v1 for stable usage.

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-10 17:12:38 -08:00
ccurme
4a389ef4c6 community: fix extended testing (#29715)
v0.3.100 of premai sdk appears to break on import:
89d9276cbf/premai/api/__init__.py (L230)
2025-02-10 16:57:34 -08:00
Yoav Levy
af3f759073 docs: fixed nimble's provider page and retriever (#29695)
## **Description:**
- Added information about the retriever that Nimble's provider exposes.
- Fixed the authentication explanation on the retriever page.
2025-02-10 15:30:40 -08:00
Bhav Sardana
624216aa64 community:Fix for Pydantic model validator of GoogleApiYoutubeLoader (#29694)
- **Description:** Community: bugfix for the Pydantic model validator of
GoogleApiYoutubeLoader
- **Issue:** #29165, #27432 
Fix is similar to #29346
2025-02-10 08:57:58 -05:00
Changyong Um
60740c44c5 community: Add configurable text key for indexing and the retriever in Pinecone Hybrid Search (#29697)
**issue**

In Langchain, the original content is generally stored under the `text`
key. However, the `PineconeHybridSearchRetriever` searches the `context`
field in the metadata and does not allow changing this key. To address
this, I have modified the code to allow changing the key to something
other than `context` (see the sketch below).

In my opinion, following Langchain's conventions, the `text` key seems
more appropriate than `context`. However, since I wasn't sure about the
author's intent, I have left the default value as `context`.
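A minimal sketch under the assumption that the key is exposed as a
constructor parameter named `text_key` (name assumed; `embeddings`,
`sparse_encoder`, and `index` are pre-built objects):

```python
from langchain_community.retrievers import PineconeHybridSearchRetriever

retriever = PineconeHybridSearchRetriever(
    embeddings=embeddings,          # dense embedding model
    sparse_encoder=sparse_encoder,  # e.g. a BM25 encoder
    index=index,                    # Pinecone index handle
    text_key="text",                # assumed new parameter; default stays "context"
)
```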
2025-02-10 08:56:37 -05:00
Jun He
894b0cac3c docs: Remove redundant line (#29698)
If I understand it correctly, chain1 is never used.
2025-02-10 08:53:21 -05:00
Tiest van Gool
6655246504 Classification Tutorial: Replaced .dict() with .model_dump() method (#29701)
The .dict() method is deprecated in Pydantic v2.0; use the `model_dump`
method instead.

2025-02-10 08:38:15 -05:00
Edmond Wang
c36e6d4371 docs: Add Comments and Supplementary Example Code to Vearch Vector Dat… (#29706)
- **Description:** Added some comments to the example code in the Vearch
vector database documentation and included commonly used sample code.
- **Issue:** None
- **Dependencies:** None

---------

Co-authored-by: wangchuxiong <wangchuxiong@jd.com>
2025-02-10 08:35:38 -05:00
Akmal Ali Jasmin
bc5fafa20e [DOC] Fix #29685: HuggingFaceEndpoint missing task argument in documentation (#29686)
## **Description**
This PR updates the LangChain documentation to address an issue where
the `HuggingFaceEndpoint` example **does not specify the required `task`
argument**. Without this argument, users on `huggingface_hub == 0.28.1`
encounter the following error:

```
ValueError: Task unknown has no recommended model. Please specify a model explicitly.
```

---

## **Issue**
Fixes #29685

---

## **Changes Made**
 **Updated `HuggingFaceEndpoint` documentation** to explicitly define
`task="text-generation"`:
```python
llm = HuggingFaceEndpoint(
    repo_id=GEN_MODEL_ID,
    huggingfacehub_api_token=HF_TOKEN,
    task="text-generation"  # Explicitly specify task
)
```

 **Added a deprecation warning note** and recommended using
`InferenceClient`:
```python
from huggingface_hub import InferenceClient
from langchain.llms.huggingface_hub import HuggingFaceHub

client = InferenceClient(model=GEN_MODEL_ID, token=HF_TOKEN)

llm = HuggingFaceHub(
    repo_id=GEN_MODEL_ID,
    huggingfacehub_api_token=HF_TOKEN,
    client=client,
)
```

---

## **Dependencies**
- No new dependencies introduced.
- Change only affects **documentation**.

---

## **Testing**
-  Verified that adding `task="text-generation"` resolves the issue.
-  Tested the alternative approach with `InferenceClient` in Google
Colab.

---

## **Twitter Handle (Optional)**
If this PR gets announced, a shout-out to **@AkmalJasmin** would be
great! 🚀

---

## **Reviewers**
📌 **@langchain-maintainers** Please review this PR. Let me know if
further changes are needed.

🚀 This fix improves **developer onboarding** and ensures the **LangChain
documentation remains up to date**! 🚀
2025-02-08 14:41:02 -05:00
manukychen
3de445d521 using getattr and default value to prevent 'OpenSearchVectorSearch' has no attribute 'bulk_size' (#29682)
- Description: Read `cls.bulk_size` via `getattr` with a default value of
500, which prevents the error below (a sketch of the pattern follows):
Error: type object 'OpenSearchVectorSearch' has no attribute 'bulk_size'

- Issue: https://github.com/langchain-ai/langchain/issues/29071
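A sketch of the defensive pattern described above:

```python
# Before: crashes when the class attribute is missing.
# bulk_size_ = cls.bulk_size
# After: read the attribute with a fallback default of 500.
bulk_size_ = getattr(cls, "bulk_size", 500)
```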
2025-02-08 14:39:57 -05:00
Yao Tianjia
5d581ba22c langchain: support the situation when action_input is null in json output_parser (#29680)
Description:
This PR fixes handling of null action_input in
[langchain.agents.output_parser]. Previously, passing null as
action_input could cause an OutputParserException with an unclear error
message, leaving the LLM unable to work out how to modify the action. The
changes include (a sketch of the fallback follows below):

Added null-check validation before processing action_input
Implemented proper fallback behavior with default values
Maintained backward compatibility with existing implementations

Error Examples:
```
{
  "action":"some action",
  "action_input":null
}
```

Issue:
None

Dependencies:
None
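A simplified sketch of the fallback behavior (`parse_action` is a
hypothetical stand-in, not the parser's actual structure):

```python
import json


def parse_action(text: str) -> tuple[str, dict]:
    payload = json.loads(text)
    action = payload["action"]
    # Fall back to an empty input instead of failing on null.
    action_input = payload.get("action_input") or {}
    return action, action_input


print(parse_action('{"action": "some action", "action_input": null}'))
# ('some action', {})
```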
2025-02-07 22:01:01 -05:00
Philippe PRADOS
beb75b2150 community[minor]: 05 - Refactoring PyPDFium2 parser (#29625)
This is one part of a larger Pull Request (PR) that is too large to be
submitted all at once. This specific part focuses on updating the
PyPDFium2 parser.

For more details, see
https://github.com/langchain-ai/langchain/pull/28970.
2025-02-07 21:31:12 -05:00
Christophe Bornet
723031d548 community: Bump ruff version to 0.9 (#29206)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 01:21:10 +00:00
Christophe Bornet
30f6c9f5c8 community: Use Blockbuster to detect blocking calls in asyncio during tests (#29609)
Same as https://github.com/langchain-ai/langchain/pull/29043 for
langchain-community.

**Dependencies:**
- blockbuster (test)

**Twitter handle:** cbornet_

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 01:10:39 +00:00
Christophe Bornet
3a57a28daa langchain: Use Blockbuster to detect blocking calls in asyncio during tests (#29616)
Same as https://github.com/langchain-ai/langchain/pull/29043 for the
langchain package.

**Dependencies:**
- blockbuster (test)

**Twitter handle:** cbornet_

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 01:08:15 +00:00
Keenan Pepper
c67d473397 core: Make abatch_as_completed respect max_concurrency (#29426)
- **Description:** Add tests for respecting max_concurrency and
implement it for abatch_as_completed so that the test passes (see the
sketch below)
- **Issue:** #29425
- **Dependencies:** none
- **Twitter handle:** keenanpepper
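A minimal sketch of the behavior being enforced (the runnable is
illustrative):

```python
import asyncio

from langchain_core.runnables import RunnableLambda


async def main() -> None:
    runnable = RunnableLambda(lambda x: x * 2)
    # At most two inputs are processed concurrently.
    async for idx, result in runnable.abatch_as_completed(
        [1, 2, 3, 4], config={"max_concurrency": 2}
    ):
        print(idx, result)


asyncio.run(main())
```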
2025-02-07 16:51:22 -08:00
Aaron V
dcfaae85d2 Core: Fix __add__ for concatting two BaseMessageChunk's (#29531)
Description:

The change allows you to use the overloaded `+` operator correctly when
`+`ing two BaseMessageChunk subclasses. Without this you *must*
instantiate a subclass for it to work.

Which feels... wrong. Base classes should be decoupled from sub classes
and should have in no way a dependency on them.

Issue:

You can't `+` a BaseMessageChunk with a BaseMessageChunk

e.g. this will explode

```py
from langchain_core.outputs import (
    ChatGenerationChunk,
)
from langchain_core.messages import BaseMessageChunk


chunk1 = ChatGenerationChunk(
    message=BaseMessageChunk(
        type="customChunk",
        content="HI",
    ),
)

chunk2 = ChatGenerationChunk(
    message=BaseMessageChunk(
        type="customChunk",
        content="HI",
    ),
)

# this will throw
new_chunk = chunk1 + chunk2
```

In case anyone ran into this issue themselves, it's probably best to use
the AIMessageChunk:

a la 

```py
from langchain_core.outputs import (
    ChatGenerationChunk,
)
from langchain_core.messages import AIMessageChunk


chunk1 = ChatGenerationChunk(
    message=AIMessageChunk(
        content="HI",
    ),
)

chunk2 = ChatGenerationChunk(
    message=AIMessageChunk(
        content="HI",
    ),
)

# No explosion!
new_chunk = chunk1 + chunk2
```

Dependencies:

None!

Twitter handle: 
`aaron_vogler`


Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 00:43:36 +00:00
Marlene
4fa3ef0d55 Community/Partner: Adding Azure community and partner user agent to better track usage in Python (#29561)
- This pull request includes various changes to add a `user_agent`
parameter to Azure OpenAI, Azure Search and Whisper in the Community and
Partner packages. This helps in identifying the source of API requests
so we can better track usage and help support the community better. I
will also be adding the user_agent to the new `langchain-azure` repo as
well.

- No issue connected or  updated dependencies. 
- Utilises existing tests and docs

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 23:28:30 +00:00
Ella Charlaix
c401254770 huggingface: Add ipex support to HuggingFaceEmbeddings (#29386)
ONNX and OpenVINO models are available by specifying the `backend`
argument (the model is loaded using `optimum`
https://github.com/huggingface/optimum)

```python
from langchain_huggingface import HuggingFaceEmbeddings

embedding = HuggingFaceEmbeddings(
    model_name=model_id,
    model_kwargs={"backend": "onnx"},
)
```

With this PR we also enable the IPEX backend 



```python
from langchain_huggingface import HuggingFaceEmbeddings

embedding = HuggingFaceEmbeddings(
    model_name=model_id,
    model_kwargs={"backend": "ipex"},
)
```
2025-02-07 15:21:09 -08:00
Bruno Alvisio
3eaf561561 core: Handle unterminated escape character when parsing partial JSON (#29065)
**Description**
Currently, when parsing a partial JSON, if a string ends with the escape
character, the whole key/value is removed. For example:

```
>>> from langchain_core.utils.json import parse_partial_json
>>> my_str = '{"foo": "bar", "baz": "qux\\'
>>> 
>>> parse_partial_json(my_str)
{'foo': 'bar'}
```

My expectation (and with this fix) would be for `parse_partial_json()`
to return:
```
>>> from langchain_core.utils.json import parse_partial_json
>>> 
>>> my_str = '{"foo": "bar", "baz": "qux\\'
>>> parse_partial_json(my_str)
{'foo': 'bar', 'baz': 'qux'}
```

Notes:
1. It could be argued that current behavior is still desired.
2. I have experienced this issue when streaming output from an LLM and
the chunk happens to end with `\\`
3. I haven't included tests. Will do if change is accepted.
4. This is specially troublesome when this function is used by

187131c55c/libs/core/langchain_core/output_parsers/transform.py (L111)

since what happens is that, for example, if the received sequence of
chunks is `{"foo": "b`, `ar\\`:

Then, the result of calling `self.parse_result()` is:
```
{"foo": "b"}
```
and the second time:
```
{}
```

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 23:18:21 +00:00
ccurme
0040d93b09 docs: showcase extras in chat model tabs (#29677)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 18:16:44 -05:00
Viren
252cf0af10 docs: add LangFair as a provider (#29390)
**Description:**
- Add `docs/docs/providers/langfair.mdx`
- Register langfair in `libs/packages.yml`

**Twitter handle:** @LangFair

**Tests and docs**
1. Integration tests not needed as this PR only adds a .mdx file to
docs.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Dylan Bouchard <dylan.bouchard@cvshealth.com>
Co-authored-by: Dylan Bouchard <109233938+dylanbouchard@users.noreply.github.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 21:27:37 +00:00
Erick Friis
eb9eddae0c docs: use init_chat_model (#29623) 2025-02-07 12:39:27 -08:00
ccurme
bff25b552c community: release 0.3.17 (#29676) 2025-02-07 19:41:44 +00:00
ccurme
01314c51fa langchain: release 0.3.18 (#29654) 2025-02-07 13:40:26 -05:00
ccurme
92e2239414 openai[patch]: make parallel_tool_calls explicit kwarg of bind_tools (#29669)
Improves discoverability and documentation.

cc @vbarda
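A minimal sketch (the tool and model are illustrative):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


llm = ChatOpenAI(model="gpt-4o-mini")
# parallel_tool_calls is now an explicit, documented kwarg of bind_tools.
llm_with_tools = llm.bind_tools([add], parallel_tool_calls=False)
```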
2025-02-07 13:34:32 -05:00
ccurme
2a243df7bb infra: add UV_NO_SYNC to monorepo makefile (#29670)
Helpful for running `api_docs_quick_preview` locally.
2025-02-07 17:17:05 +00:00
Marc Ammann
5690575f13 openai: Removed tool_calls from completion chunk after other chunks have already been sent. (#29649)
- **Description:** Before sending the final completion chunk at the end of
an OpenAI stream, remove the tool_calls, as those have already been sent
as chunks.
- **Issue:** -
- **Dependencies:** -
- **Twitter handle:** -

@ccurme as mentioned in another PR

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-07 10:15:52 -05:00
Ikko Eltociear Ashimine
0d45ad57c1 community: update base_o365.py (#29657)
extention -> extension
2025-02-07 08:43:29 -05:00
weeix
1b064e198f docs: Fix llama.cpp GPU Installation in llamacpp.ipynb (Deprecated Env Variable) (#29659)
- **Description:** The llamacpp.ipynb notebook used a deprecated
environment variable, LLAMA_CUBLAS, for llama.cpp installation with GPU
support. This commit updates the notebook to use the correct GGML_CUDA
variable, fixing the installation error.
- **Issue:** none
-  **Dependencies:** none
2025-02-07 08:43:09 -05:00
Vincent Emonet
3645181d0e qdrant: Add similarity_search_with_score_by_vector() function to the QdrantVectorStore (#29641)
Added `similarity_search_with_score_by_vector()` function to the
`QdrantVectorStore` class.

It is required when we want to query multiple time with the same
embeddings. It was present in the now deprecated original `Qdrant`
vectorstore implementation, but was absent from the new one. It is also
implemented in a number of others `VectorStore` implementations

I have added tests for this new function

Note that I also argued in this discussion that it should be part of the
general `VectorStore`
https://github.com/langchain-ai/langchain/discussions/29638

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 00:55:58 +00:00
ccurme
488cb4a739 anthropic: release 0.3.7 (#29653) 2025-02-06 17:05:57 -05:00
ccurme
ab09490c20 openai: release 0.3.4 (#29652) 2025-02-06 17:02:21 -05:00
ccurme
29a0c38cc3 openai[patch]: add test for message.name (#29651) 2025-02-06 16:49:28 -05:00
ccurme
91cca827c0 tests: release 0.3.11 (#29648) 2025-02-06 21:48:09 +00:00
Sunish Sheth
25ce1e211a docs: Updating the imports for langchain-databricks to databricks-langchain (#29646)
2025-02-06 13:28:07 -08:00
ccurme
e1b593ae77 text-splitters[patch]: release 0.3.6 (#29647) 2025-02-06 16:16:05 -05:00
ccurme
a91e58bc10 core: release 0.3.34 (#29644) 2025-02-06 15:53:56 -05:00
Vincent Emonet
08b9eaaa6f community: improve FastEmbedEmbeddings support for ONNX execution provider (e.g. GPU) (#29645)
I changed how GPU support is implemented in `FastEmbedEmbeddings` to be
more consistent with the existing `langchain-qdrant` sparse embeddings
implementation.

It directly enables providing the list of ONNX execution providers:
https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/fastembed_sparse.py#L15

It is a bit less obvious for a user who just wants to enable GPU, but it
supports execution providers other than the `CUDAExecutionProvider` and is
more future-proof.

Sorry for the disturbance @ccurme

> Nice to see you just moved to `uv`! It is so much nicer to run
format/lint/test! No need to manually rerun the `poetry install` with
all required extras now
2025-02-06 15:31:23 -05:00
Erick Friis
1bf620222b infra: remove deepseek from scheduled tests (#29643) 2025-02-06 19:43:03 +00:00
ccurme
3450bfc806 infra: add UV_FROZEN to makefiles (#29642)
These are set in Github workflows, but forgot to add them to most
makefiles for convenience when developing locally.

`uv run` will automatically sync the lock file. Because many of our
development dependencies are local installs, it will pick up version
changes and update the lock file. Passing `--frozen` or setting this
environment variable disables the behavior.
2025-02-06 14:36:54 -05:00
ccurme
d172984c91 infra: migrate to uv (#29566) 2025-02-06 13:36:26 -05:00
ccurme
9da06e6e94 standard-tests[patch]: use has_structured_output property to engage structured output tests (#29635)
Motivation: dedicated structured output features are becoming more
common, such that integrations can support structured output without
supporting tool calling.

Here we make two changes:

1. Update the `has_structured_output` method to default to True if a
model supports tool calling (in addition to defaulting to True if
`with_structured_output` is overridden).
2. Update structured output tests to engage if `has_structured_output`
is True (see the sketch below).
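A hypothetical sketch of a conforming test class (`MyChatModel` and its
import path are assumptions):

```python
from langchain_tests.unit_tests import ChatModelUnitTests

from my_package import MyChatModel  # hypothetical integration


class TestMyChatModel(ChatModelUnitTests):
    @property
    def chat_model_class(self) -> type[MyChatModel]:
        return MyChatModel

    @property
    def has_structured_output(self) -> bool:
        # Engage the structured output tests even without tool calling.
        return True
```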
2025-02-06 10:09:06 -08:00
Vincent Emonet
db8201d4da community: fix typo in the module imported when using GPU with FastEmbedEmbeddings (#29631)
Made a mistake in the module to import (the module stays the same; only
the installed package changes); fixed it and tested it.

https://github.com/langchain-ai/langchain/pull/29627

@ccurme
2025-02-06 10:26:08 -05:00
Mohammed Abbadi
f8fd65dea2 community: Update deeplake.py (#29633)
Deep Lake recently released version 4, which introduces significant
architectural changes, including a new on-disk storage format, enhanced
indexing mechanisms, and improved concurrency. However, LangChain's
vector store integration currently does not support Deep Lake v4 due to
breaking API changes.

Previously, the installation command was:
`pip install deeplake[enterprise]`
This installs the latest available version, which now defaults to Deep
Lake v4. Since LangChain's vector store integration is still dependent
on v3, this can lead to compatibility issues when using Deep Lake as a
vector database within LangChain.

To ensure compatibility, the installation command has been updated to:
`pip install deeplake[enterprise]<4.0.0`
This constraint ensures that pip installs the latest available version
of Deep Lake within the v3 series while avoiding the incompatible v4
update.
2025-02-06 10:25:13 -05:00
Vincent Emonet
0ac5536f04 community: add support for using GPUs with FastEmbedEmbeddings (#29627)
- **Description:** add a `gpu: bool = False` field to the
`FastEmbedEmbeddings` class which enables to use GPU (through ONNX CUDA
provider) when generating embeddings with any fastembed model. It just
requires the user to install a different dependency and we use a
different provider when instantiating `fastembed.TextEmbedding`
- **Issue:** when generating embeddings for a really large number of
documents, this drastically increases performance (honestly, that is a
must-have in some situations; you can't just use a CPU, it is way too slow)
- **Dependencies:** no direct change to dependencies, but internally the
users will need to install `fastembed-gpu` instead of `fastembed`, I
made all the changes to the init function to properly let the user know
which dependency they should install depending on if they enabled `gpu`
or not
 
cf. fastembed docs about GPU for more details:
https://qdrant.github.io/fastembed/examples/FastEmbed_GPU/

I did not add tests because that would require access to a GPU in the
testing environment (a usage sketch follows below).
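A usage sketch (model name illustrative; requires `pip install
fastembed-gpu` instead of `fastembed`):

```python
from langchain_community.embeddings import FastEmbedEmbeddings

embeddings = FastEmbedEmbeddings(
    model_name="BAAI/bge-small-en-v1.5",
    gpu=True,  # new flag: route through the ONNX CUDA execution provider
)
vectors = embeddings.embed_documents(["first document", "second document"])
```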
2025-02-06 08:04:19 -05:00
Dmitrii Rashchenko
0ceda557aa add o1 and o3-mini to pricing (#29628)
### PR Title:  
**community: add latest OpenAI models pricing**  

### Description:  
This PR updates the OpenAI model cost calculation mapping by adding the
latest OpenAI models, **o1 (non-preview)** and **o3-mini**, based on the
pricing listed on the [OpenAI pricing
page](https://platform.openai.com/docs/pricing).

### Changes:  
- Added pricing for `o1`, `o1-2024-12-17`, `o1-cached`, and
`o1-2024-12-17-cached` for input tokens.
- Added pricing for `o1-completion` and `o1-2024-12-17-completion` for
output tokens.
- Added pricing for `o3-mini`, `o3-mini-2025-01-31`, `o3-mini-cached`,
and `o3-mini-2025-01-31-cached` for input tokens.
- Added pricing for `o3-mini-completion` and
`o3-mini-2025-01-31-completion` for output tokens.

### Issue:  
N/A  

### Dependencies:  
None  

### Testing & Validation:  
- No functional changes outside of updating the cost mapping.  
- No tests were added or modified.
2025-02-06 08:02:20 -05:00
ZhangShenao
ac53977dbc [MistralAI] Improve MistralAIEmbeddings (#29242)
- Add a static method decorator for the method.
- Add an expected exception to the retry decorator.

#29125
2025-02-05 21:31:54 -05:00
Andrew Wason
22aa5e07ed standard-tests: Fix ToolsIntegrationTests to correctly handle "content_and_artifact" tools (#29391)
**Description:**

The response from `tool.invoke()` is always a ToolMessage, with content
and artifact fields, not a tuple.
The tuple is converted to a ToolMessage here

b6ae7ca91d/libs/core/langchain_core/tools/base.py (L726)

**Issue:**

Currently `ToolsIntegrationTests` requires `invoke()` to return a tuple
and so standard tests fail for "content_and_artifact" tools. This fixes
that to check the returned ToolMessage.

This PR also adds a test that now passes.
2025-02-05 21:27:09 -05:00
Mohammad Anash
f849305a56 fixed Bug in PreFilter of AzureCosmosDBNoSqlVectorSearch (#29613)
Description: Fixes PreFilter value handling in Azure Cosmos DB NoSQL
vectorstore. The current implementation fails to handle numeric values
in filter conditions, causing an undefined value variable error. This PR
adds support for numeric, boolean, and NULL values while maintaining the
existing string and list handling.

Changes:
Added handling for numeric types (int/float)
Added boolean value support
Added NULL value handling
Added type validation for unsupported values
Fixed scope of value variable initialization

Issue: 
Fixes #29610

Implementation Notes:
No changes to public API
Backwards compatible
Maintains consistent behavior with existing MongoDB-style filtering
Preserves SQL injection prevention through proper value handling

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-06 02:20:26 +00:00
Philippe PRADOS
6ff0d5c807 community[minor]: 04 - Refactoring PDFMiner parser (#29526)
This is one part of a larger Pull Request (PR) that is too large to be
submitted all at once. This specific part focuses on updating the XXX
parser.

For more details, see [PR
28970](https://github.com/langchain-ai/langchain/pull/28970).

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-02-05 21:08:27 -05:00
Yoav Levy
4460d20ba9 docs: Nimble provider doc fixes (#29597)
## Description

- Removed broken link for the API Reference
- Added `OPENAI_API_KEY` setter for the chains to properly run
- Renamed one of our examples so it won't override the original
retriever and cause confusion, since it uses a different retrieval mode
- Moved one of our simple examples to be the first example of our
retriever :)
2025-02-05 11:24:37 -08:00
Isaac Francisco
91ffd7caad core: allow passing message dicts into ChatPromptTemplate (#29363)
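A minimal sketch of the new input form (prompt content illustrative):

```python
from langchain_core.prompts import ChatPromptTemplate

# OpenAI-style message dicts can now be passed directly.
prompt = ChatPromptTemplate.from_messages(
    [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "{question}"},
    ]
)
messages = prompt.invoke({"question": "What is LangChain?"})
```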
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-05 09:45:52 -08:00
ccurme
69595b0914 docs: fix builds (#29607)
Failing with:
> ValueError: Provider page not found for databricks-langchain. Please
add one at docs/integrations/providers/databricks-langchain.{mdx,ipynb}
2025-02-05 14:24:53 +00:00
ccurme
91a33a9211 anthropic[patch]: release 0.3.6 (#29606) 2025-02-05 14:18:02 +00:00
ccurme
5cbe6aba8f anthropic[patch]: support citations in streaming (#29591) 2025-02-05 09:12:07 -05:00
William FH
5ae4ed791d Drop duplicate inputs (#29589) 2025-02-04 18:06:10 -08:00
Erick Friis
65f0deb81a packages: databricks-langchain (#29593) 2025-02-05 01:53:34 +00:00
Yoav Levy
621bba7e26 docs: add nimble as a provider (#29579)
## Description:

- Add docs/docs/providers/nimbleway.ipynb
- Add docs/docs/integrations/retrievers/nimbleway.ipynb
- Register nimbleway in libs/packages.yml

- X (twitter) handle: @urielkn / @LevyNorbit8
2025-02-04 16:47:03 -08:00
Erick Friis
50d61eafa2 partners/deepseek: release 0.1.1 (#29592) 2025-02-04 23:46:38 +00:00
Erick Friis
7edfcbb090 docs: rename to langchain-deepseek in docs (#29587) 2025-02-04 14:22:17 -08:00
Erick Friis
04e8f3b6d7 infra: add deepseek api key to release (#29585) 2025-02-04 10:35:07 -08:00
Erick Friis
df8fa882b2 deepseek: bump core (#29584) 2025-02-04 10:25:46 -08:00
Erick Friis
455f65947a deepseek: rename to langchain-deepseek from langchain-deepseek-official (#29583) 2025-02-04 17:57:25 +00:00
Philippe PRADOS
5771e561fb [Bugfix langchain_community] Fix PyMuPDFLoader (#29550)
- **Description:**  add legacy properties
    - **Issue:** #29470
    - **Twitter handle:** pprados
2025-02-04 09:24:40 -05:00
Ashutosh Kumar
65b404a2d1 [oci_generative_ai] Option to pass auth_file_location (#29481)
**PR title**: "community: Option to pass auth_file_location for
oci_generative_ai"

**Description:** Option to pass auth_file_location to override the config
file's default location ("~/.oci/config"), where profile-name configs are
present. This is not fixing any issue; it just adds an optional parameter
called "auth_file_location", which is internally supported by any OCI
client, including GenerativeAiInferenceClient.
2025-02-03 21:44:13 -05:00
Teruaki Ishizaki
aeb42dc900 partners: Fixed the procedure of initializing pad_token_id (#29500)
- **Description:** Add a check of the model config's pad_token_id and
eos_token_id. It seems that this is the same bug as the HuggingFace TGI
bug; it's the same bug as #29434
- **Issue:** #29431
- **Dependencies:** none
- **Twitter handle:** tell14

Example code is followings:
```python
from langchain_huggingface.llms import HuggingFacePipeline

hf = HuggingFacePipeline.from_model_id(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 10},
)

from langchain_core.prompts import PromptTemplate

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

chain = prompt | hf

question = "What is electroencephalography?"

print(chain.invoke({"question": question}))
```
2025-02-03 21:40:33 -05:00
Tanushree
e8b91283ef Banner for interrupt (#29567)
Adding banner for interrupt
2025-02-03 17:40:24 -08:00
Erick Friis
ab67137fa3 docs: chat model order experiment (#29480) 2025-02-03 18:55:18 +00:00
AmirPoursaberi
a6efd22ba1 Fix a tiny typo in create_retrieval_chain docstring (#29552)
Hi there!

This fixes a tiny typo in the `create_retrieval_chain` docstring.
2025-02-03 10:54:49 -05:00
JHIH-SIOU LI
48fa3894c2 docs: update readthedocs document loader options (#29556)
Hi there!

This PR updates the documentation according to the code.
If we run the example as is, then it would result in the following
error:

![image](https://github.com/user-attachments/assets/9c0a336c-775c-489c-a275-f1153d447ecb)

It seems that this part of the code already supplied the required
argument to the BeautifulSoup4:

0c782ee547/libs/community/langchain_community/document_loaders/readthedocs.py (L87-L90)

Since the example can only work by removing this argument, it also seems
legit to remove it from the documentation.
2025-02-03 10:54:24 -05:00
Tyllen
0c782ee547 docs: update payman docs (#29479)
- **Description:** fix the import docs variables

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-02 02:41:54 +00:00
Hemant Rawat
db1693aa70 community: fix issue #29429 in age_graph.py (#29506)
## Description:

This PR addresses issue #29429 by fixing the _wrap_query method in
langchain_community/graphs/age_graph.py. The method now correctly
handles Cypher queries with UNION and EXCEPT operators, ensuring that
the fields in the SQL query are ordered as they appear in the Cypher
query. Additionally, the method now properly handles cases where RETURN
* is not supported.

### Issue: #29429

### Dependencies: None


### Add tests and docs:

Added unit tests in tests/unit_tests/graphs/test_age_graph.py to
validate the changes.
No new integrations were added, so no example notebook is necessary.
### Lint and test:

Ran make format, make lint, and make test to ensure code quality and
functionality.
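
As a rough sketch of the query shape this fixes (connection values are placeholders and assume a Postgres instance with the Apache AGE extension):

```python
from langchain_community.graphs.age_graph import AGEGraph

graph = AGEGraph(
    graph_name="demo",
    conf={"database": "postgres", "user": "postgres", "password": "pw", "host": "localhost", "port": 5432},
)

# A UNION query of the kind _wrap_query previously mishandled: the fields in
# the generated SQL must keep the order in which they appear in the Cypher.
names = graph.query(
    """
    MATCH (a:Actor) RETURN a.name AS name
    UNION
    MATCH (d:Director) RETURN d.name AS name
    """
)
```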
2025-02-01 21:24:45 -05:00
Keenan Pepper
2f97916dea docs: Add goodfire notebook and add to packages.yml (#29512)
- **Description:** Add Goodfire ipynb notebook and add
langchain-goodfire package to packages.yml
- **Issue:** n/a
- **Dependencies:** docs only
- **Twitter handle:** keenanpepper

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-01 19:43:20 -05:00
ccurme
a3c5e4d070 deepseek[patch]: bump langchain-openai and add to scheduled testing (#29535) 2025-02-01 18:40:59 -05:00
ccurme
16a422f3fa community: add standard tests for Perplexity (#29534) 2025-02-01 17:02:57 -05:00
A Venkata Sai Krishna Varun
21d8d41595 docs: update delete method in vectorstores.mdx (#29497)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-31 18:15:28 +00:00
Mark Perfect
b8e218b09f docs: Fix Milvus vector store initialization (#29511)
- A change in the Milvus API has caused an issue with the local vector
store initialization. Having used an Ollama embedding model, the vector
store initialization results in the following error:

<img width="978" alt="image"
src="https://github.com/user-attachments/assets/d57e495c-1764-4fbe-ab8c-21ee44f1e686"
/>

- This is fixed by setting the index type explicitly:

`vector_store = Milvus(embedding_function=embeddings, connection_args={"uri": URI}, index_params={"index_type": "FLAT", "metric_type": "L2"})`

Other small documentation edits were also made.
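
A self-contained sketch of the workaround, assuming the `langchain-milvus` and `langchain-ollama` packages (the URI and model name are illustrative):

```python
from langchain_milvus import Milvus
from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="llama3")

vector_store = Milvus(
    embedding_function=embeddings,
    connection_args={"uri": "./milvus_demo.db"},
    # Setting the index type explicitly avoids the auto-index failure above
    index_params={"index_type": "FLAT", "metric_type": "L2"},
)
```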



---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-31 12:57:36 -05:00
Amit Ghadge
0c405245c4 [Integrations][Tool] Added Jenkins tools support (#29516)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-31 12:50:10 -05:00
Subrat Lima
5b826175c9 docs: Update local_llms.ipynb - fixed a typo (#29520)
Description: fixed a typo in the how to > local llms > llamafile section
description.
2025-01-31 11:18:24 -05:00
Christophe Bornet
aab2e42169 core[patch]: Use Blockbuster to detect blocking calls in asyncio during tests (#29043)
This PR uses the [blockbuster](https://github.com/cbornet/blockbuster)
library in langchain-core to detect blocking calls made in the asyncio
event loop during unit tests.
Avoiding blocking calls is hard, as these can be deeply buried in the
code or made in 3rd-party libraries.
Blockbuster makes it easier to detect them by raising an exception when
a call is made to a known blocking function (e.g. `time.sleep`).

Adding blockbuster allowed us to find a blocking call in
`aconfig_with_context` (it ends up calling `get_function_nonlocals`,
which loads function code).

**Dependencies:**
- blockbuster (test)

**Twitter handle:** cbornet_
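
For context, roughly how a test suite enables it, following the library's context-manager pattern (treat the exact API as an assumption here):

```python
import pytest
from blockbuster import blockbuster_ctx

@pytest.fixture(autouse=True)
def blockbuster():
    # While active, known blocking calls (e.g. time.sleep) made inside the
    # asyncio event loop raise an exception instead of silently stalling it.
    with blockbuster_ctx() as bb:
        yield bb
```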
2025-01-31 10:06:34 -05:00
Philippe PRADOS
ceda8bc050 community[minor]: 03 - Refactoring PyPDF parser (#29330)
This is one part of a larger Pull Request (PR) that is too large to be
submitted all at once.
This specific part focuses on updating the PyPDF parser.

For more details, see [PR
28970](https://github.com/langchain-ai/langchain/pull/28970).
2025-01-31 10:05:07 -05:00
Julian Castro Pulgarin
b7e3e337b1 community: Fix YahooFinanceNewsTool to handle updated yfinance data structure (#29498)
**Description:**
Updates the YahooFinanceNewsTool to handle the current yfinance news
data structure. The tool was failing with a KeyError due to changes in
the yfinance API's response format. This PR updates the code to
correctly extract news URLs from the new structure.

**Issue:** #29495

**Dependencies:** 
No new dependencies required. Works with existing yfinance package.

The changes maintain backwards compatibility while fixing the KeyError
that users were experiencing.

The modified code properly handles the new data structure where:
- News type is now at `content.contentType`
- News URL is now at `content.canonicalUrl.url`
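
A minimal sketch of exercising the fixed tool (the ticker is illustrative; requires the `yfinance` package and network access):

```python
from langchain_community.tools.yahoo_finance_news import YahooFinanceNewsTool

tool = YahooFinanceNewsTool()
# News URLs are now read from content.canonicalUrl.url rather than the old key
print(tool.invoke("NVDA"))
```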

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-31 02:31:44 +00:00
Vadym Barda
22219eefaf docs: update README/intro (#29492) 2025-01-29 22:50:00 +00:00
Erick Friis
332e303858 partners/mistralai: release 0.2.6 (#29491) 2025-01-29 22:23:14 +00:00
Erick Friis
2c795f5628 partners/openai: release 0.3.3 (#29490) 2025-01-29 22:23:03 +00:00
Erick Friis
f307b3cc5f langchain: release 0.3.17 (#29485) 2025-01-29 22:22:49 +00:00
Erick Friis
5cad3683b4 partners/groq: release 0.2.4 (#29488) 2025-01-29 22:22:30 +00:00
Erick Friis
e074c26a6b partners/fireworks: release 0.2.7 (#29487) 2025-01-29 22:22:18 +00:00
Erick Friis
685609e1ef partners/anthropic: release 0.3.5 (#29486) 2025-01-29 22:22:11 +00:00
Erick Friis
ed3a5e664c standard-tests: release 0.3.10 (#29484) 2025-01-29 22:21:05 +00:00
Erick Friis
29461b36d9 partners/ollama: release 0.2.3 (#29489) 2025-01-29 22:19:44 +00:00
Erick Friis
07e2e80fe7 core: release 0.3.33 (#29483) 2025-01-29 14:11:53 -08:00
Erick Friis
8f95da4eb1 multiple: structured output tracing standard metadata (#29421)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-29 14:00:26 -08:00
ccurme
284c935b08 tests[patch]: improve coverage of structured output tests (#29478) 2025-01-29 14:52:09 -05:00
Erick Friis
c79274cb7c docs: typo in contrib integrations (#29477) 2025-01-29 19:39:36 +00:00
ccurme
a3878a3c62 infra: update deps for notebook tests (#29476) 2025-01-29 10:23:50 -05:00
Matheus Torquato
7aae738296 docs:Fix Imports for Document and BaseRetriever (#29473)
This pull request addresses an issue with import statements in the
langchain_core/retrievers.py file. The following changes have been made:

Corrected the import for Document from langchain_core.documents.base.
Corrected the import for BaseRetriever from langchain_core.retrievers.
These changes ensure that the SimpleRetriever class can correctly
reference the Document and BaseRetriever classes, improving code
reliability and maintainability.

---------

Co-authored-by: Matheus Torquato <mtorquat@jaguarlandrover.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-29 14:32:05 +00:00
Mohammad Anash
12bcc85927 added operator filter for supabase (#29475)
**Description**
This PR adds support for MongoDB-style `$in` operator filtering in the
Supabase vectorstore implementation. Currently, filtering with `$in`
operators returns no results, even when matching documents exist. This
change properly translates MongoDB-style filters to PostgreSQL syntax,
enabling efficient multi-document filtering.

**Changes**
- Modified `similarity_search_by_vector_with_relevance_scores` to handle MongoDB-style `$in` operators
- Added automatic conversion of `$in` filters to PostgreSQL `IN` clauses
- Preserved original vector type handling and numpy array conversion
- Maintained compatibility with existing postgrest filters
- Added support for the same filtering in `similarity_search_by_vector_returning_embeddings`

**Issue**
Closes #27932

**Implementation Notes**
- No changes to public API or function signatures
- Backwards compatible: behavior unchanged for non-`$in` filters
- More efficient than multiple individual queries for multi-ID searches
- Preserves all existing functionality, including numpy array conversion for vector types

**Dependencies**
None

**Additional Notes**
- The implementation handles proper SQL escaping for filter values
- Maintains consistent behavior with other vectorstore implementations that support MongoDB-style operators
- Future extensions could support additional MongoDB-style operators (`$gt`, `$lt`, etc.)
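
For illustration, a `$in` filter on an existing `SupabaseVectorStore` might look like this (the metadata field and values are illustrative, and the `filter` kwarg is assumed to be forwarded by `similarity_search`):

```python
docs = vector_store.similarity_search(
    "release notes",
    k=4,
    # Translated to a PostgreSQL IN clause by this change
    filter={"source": {"$in": ["doc-1", "doc-2", "doc-3"]}},
)
```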

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-29 14:24:18 +00:00
ccurme
585f467d4a mistral[patch]: release 0.2.5 (#29463) 2025-01-28 18:29:54 -05:00
ccurme
ca9d4e4595 mistralai: support method="json_schema" in structured output (#29461)
https://docs.mistral.ai/capabilities/structured-output/custom_structured_output/
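
A minimal sketch of the new option (model name illustrative):

```python
from pydantic import BaseModel
from langchain_mistralai import ChatMistralAI

class Joke(BaseModel):
    setup: str
    punchline: str

llm = ChatMistralAI(model="mistral-large-latest")
# method="json_schema" uses Mistral's custom structured output support
structured_llm = llm.with_structured_output(Joke, method="json_schema")
joke = structured_llm.invoke("Tell me a joke about cats")
```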
2025-01-28 18:17:39 -05:00
Michael Chin
e120378695 community: Additional AWS deprecations (#29447)
Added deprecation warnings for a few more classes that were moved to the
`langchain-aws` package:
- [SageMaker Endpoint
LLM](https://python.langchain.com/api_reference/aws/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html)
- [Amazon Kendra
retriever](https://python.langchain.com/api_reference/aws/retrievers/langchain_aws.retrievers.kendra.AmazonKendraRetriever.html)
- [Amazon Bedrock Knowledge Bases
retriever](https://python.langchain.com/api_reference/aws/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html)
2025-01-28 09:50:14 -05:00
Tommy Cox
6f711794a7 docs: tiny grammary fix to why_langchain.mdx (#29455)
Description: Tiny grammar fix to doc - why_langchain.mdx
2025-01-28 09:49:33 -05:00
Erick Friis
2d776351af community: release 0.3.16 (#29452) 2025-01-28 07:44:54 +00:00
Erick Friis
737a68fcdc langchain: release 0.3.16 (#29451) 2025-01-28 07:31:09 +00:00
Erick Friis
fa3857c9d0 docs: tests/standard tests api ref redirect (#29444) 2025-01-27 23:21:50 -08:00
Erick Friis
8bf9c71673 core: release 0.3.32 (#29450) 2025-01-28 07:20:04 +00:00
Erick Friis
ecdc881328 langchain: add deepseek provider to init chat model (#29449) 2025-01-27 23:13:59 -08:00
Erick Friis
dced0ed3fd deepseek, docs: chatdeepseek integration added (#29445) 2025-01-28 06:32:58 +00:00
Vadym Barda
7cbf885c18 docs: replace 'state_modifier' with 'prompt' (#29415) 2025-01-27 21:29:18 -05:00
Isaac Francisco
2bb2c9bfe8 change behavior for converting a string to openai messages (#29446) 2025-01-27 18:18:54 -08:00
ccurme
b1fdac726b groq[patch]: update model used in test (#29441)
`llama-3.1-70b-versatile` was [shut
down](https://console.groq.com/docs/deprecations).
2025-01-27 21:11:44 +00:00
Adrián Panella
1551d9750c community(doc_loaders): allow any credential type in AzureAIDocumentI… (#29289)
Allow any credential type in AzureAIDocumentIntelligence, not only
`api_key`.
This allows using any of the credential types integrated with AD.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-27 20:56:30 +00:00
ccurme
f00c66cc1f chroma[patch]: release 0.2.1 (#29440) 2025-01-27 20:41:35 +00:00
Jorge Piedrahita Ortiz
3b886cdbb2 libs: add sambanova-lagchain integration package (#29417)
- **Description:** Add sambanova-langchain integration package as
suggested in previous PRs

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-27 20:34:55 +00:00
Mohammad Anash
aba1fd0bd4 fixed similarity search with score error #29407 (#29413)
Description: Fix TypeError in AzureSearch similarity_search_with_score
by removing search_type from kwargs before passing to underlying
requests.

This resolves issue #29407 where search_type was being incorrectly
passed through to Session.request().
Issue: #29407

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-27 20:34:42 +00:00
itaismith
7b404fcd37 partners[chroma]: Upgrade Chroma to 0.6.x (#29404)
2025-01-27 15:32:21 -05:00
Teruaki Ishizaki
3fce78994e community: Fixed the procedure of initializing pad_token_id (#29434)
- **Description:** Add a check for the pad_token_id and eos_token_id of the
model config. It seems that this is the same bug as the HuggingFace TGI bug.
In addition, the source code of
libs/partners/huggingface/langchain_huggingface/llms/huggingface_pipeline.py
also requires similar changes.
- **Issue:** #29431
- **Dependencies:** none
- **Twitter handle:** tell14
2025-01-27 14:54:54 -05:00
Christophe Bornet
dbb6b7b103 core: Add ruff rules TRY (tryceratops) (#29388)
Existing TRY004 ("use TypeError rather than ValueError") errors are
marked as ignored to preserve backward compatibility.
LMK if you prefer to fix some of them.

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-24 05:01:40 +00:00
Erick Friis
723b603f52 docs: groq api key links (#29402) 2025-01-24 04:33:18 +00:00
ccurme
bd1909fe05 docs: document citations in ChatAnthropic (#29401) 2025-01-23 17:18:51 -08:00
ccurme
bbc50f65e7 anthropic[patch]: release 0.3.4 (#29399) 2025-01-23 23:55:58 +00:00
ccurme
ed797e17fb anthropic[patch]: always return content blocks if citations are generated (#29398)
We currently return a string (and therefore no content blocks / citations)
if the response is of the form
```
[
    {"text": "a claim", "citations": [...]},
]
```

There are other cases where we do return citations as-is:
```
[
    {"text": "a claim", "citations": [...]},
    {"text": "some other text"},
    {"text": "another claim", "citations": [...]},
]
```
Here we update to return content blocks including citations in the first
case as well.
2025-01-23 18:47:23 -05:00
ccurme
933b35b9c5 docs: update how-to index page (#29393) 2025-01-23 16:55:36 -05:00
Bagatur
317fb86fd9 openai[patch]: fix int test (#29395) 2025-01-23 21:23:01 +00:00
Bagatur
8d566a8fe7 openai[patch]: detect old models in with_structured_output (#29392)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-01-23 20:47:32 +00:00
Christophe Bornet
b6ae7ca91d core: Cache RunnableLambda __repr__ (#29199)
`RunnableLambda`'s `__repr__` may perform a costly OS operation by calling
`get_lambda_source`, so it's better to cache it.
See #29043

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-23 18:34:47 +00:00
Christophe Bornet
618e550f06 core: Cache RunnableLambda deps (#29200)
`RunnableLambda`'s `deps` may perform a costly OS operation by calling
`get_function_nonlocals`, so it's better to cache it.
See #29043
2025-01-23 13:09:07 -05:00
ccurme
f795ab99ec docs: fix title rendered for integration package (#29387)
"Tilores LangchAIn" -> "Tilores"
2025-01-23 12:21:19 -05:00
Stefan Berkner
8977451c76 docs: add Tilores provider and tools (#29244)
Description: This PR adds documentation for the Tilores provider and
tools.
Issue: closes #26320
2025-01-23 12:17:59 -05:00
Ahmed Tammaa
d5b8aabb32 text-splitters[patch]: delete unused html_chunks_with_headers.xslt (#29340)
This pull request removes the now-unused html_chunks_with_headers.xslt
file from the codebase. In a previous update ([PR
#27678](https://github.com/langchain-ai/langchain/pull/27678)), the
HTMLHeaderTextSplitter class was refactored to utilize BeautifulSoup
instead of lxml and XSLT for HTML processing. As a result, the
html_chunks_with_headers.xslt file is no longer necessary and can be
safely deleted to maintain code cleanliness and reduce potential
confusion.

Issue: N/A

Dependencies: N/A
2025-01-23 11:29:08 -05:00
Wang Ran (汪然)
8f2c11e17b core[patch]: fix API reference for draw_ascii (#29370)
Typo fix: the method is `draw_ascii`, not `draw` (plus other things).

Now it works:
<img width="688" alt="image"
src="https://github.com/user-attachments/assets/5b5a8cc2-cf81-4a5c-b443-da0e4426556c"
/>

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-23 16:04:58 +00:00
Michael Chin
2df9daa7f2 docs: Update BedrockEmbeddings import example in aws.mdx (#29364)
The `BedrockEmbeddings` class in `langchain-community` has been
deprecated since v0.2.11:


https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/bedrock.py#L14-L19

Updated the AWS docs for `BedrockEmbeddings` to use the new class in
`langchain-aws`.
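
The migrated import, as a sketch (model ID illustrative):

```python
from langchain_aws import BedrockEmbeddings  # replaces the deprecated community class

embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v2:0")
vector = embeddings.embed_query("hello world")
```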
2025-01-23 10:05:57 -05:00
Loris Alexandre
e4921239a6 community: missing mandatory parameter partition_key for AzureCosmosDBNoSqlVectorSearch (#29382)
- **Description:** The `delete` function of
AzureCosmosDBNoSqlVectorSearch uses
`self._container.delete_item(document_id)`, which misses the mandatory
parameter `partition_key`.
We use the class method `delete_document_by_id` to provide a default
`partition_key`.
- **Issue:** #29372 
- **Dependencies:** None
- **Twitter handle:** None

Co-authored-by: Loris Alexandre <loris.alexandre@boursorama.fr>
2025-01-23 10:05:10 -05:00
Terry Tan
ec0ebb76f2 community: fix Google Scholar tool errors (#29371)
Resolve https://github.com/langchain-ai/langchain/issues/27557
2025-01-23 10:03:01 -05:00
江同学呀
a1e62070d0 community: Fix the problem of error reporting when OCR extracts text from PDF. (#29378)
- **Description:** The issue has been fixed where images could not be
recognized from `xObject[obj]["/Filter"]` (whose value can be either
a string or a list of strings) in the `_extract_images_from_page()`
method. It also resolves the bug (`IndexError: list index out of range`)
where vectorization by Faiss fails due to the failure of image extraction
from a PDF containing only images.

![69a60f3f6bd474641b9126d74bb18f7e](https://github.com/user-attachments/assets/dc9e098d-2862-49f7-93b0-00f1056727dc)

- **Issue:** 
    Fix the following issues:
[#15227 ](https://github.com/langchain-ai/langchain/issues/15227)
[#22892 ](https://github.com/langchain-ai/langchain/issues/22892)
[#26652 ](https://github.com/langchain-ai/langchain/issues/26652)
[#27153 ](https://github.com/langchain-ai/langchain/issues/27153)
    Related issues:
[#7067 ](https://github.com/langchain-ai/langchain/issues/7067)

- **Dependencies:** None
- **Twitter handle:** None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-23 15:01:52 +00:00
Tim Mallezie
a13faab6b7 community; allow to set gitlab url in gitlab tool in constrictor (#29380)
This PR expands the GitLab tool so the URL can also be set in the
constructor instead of only through env variables.

This allows you to do something like this:
```
       # Create the GitLab API wrapper
        gitlab_api = GitLabAPIWrapper(
            gitlab_url=self.gitlab_url,
            gitlab_personal_access_token=self.gitlab_personal_access_token,
            gitlab_repository=self.gitlab_repository,
            gitlab_branch=self.gitlab_branch,
            gitlab_base_branch=self.gitlab_base_branch,
        )
```
Before, you could not set the URL in the constructor.

Co-authored-by: Tim Mallezie <tim.mallezie@dropsolid.com>
2025-01-23 09:36:27 -05:00
Tyllen
f2ea62f632 docs: add payman docs (#29362)
- **Description:** Adding the docs to use the payman-langchain
integration :)
2025-01-22 18:37:47 -08:00
Erick Friis
861024f388 docs: openai audio input (#29360) 2025-01-22 23:45:35 +00:00
Erick Friis
3f1d20964a standard-tests: release 0.3.9 (#29356) 2025-01-22 09:46:19 -08:00
Macs Dickinson
7378c955db community: adds support for getting github releases for the configured repository (#29318)
**Description:** adds support for the GitHub tool to query GitHub releases
on the configured repository
**Issue:** N/A
**Dependencies:** N/A
**Twitter handle:** @macsdickinson

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-22 15:45:52 +00:00
Tayaa Med Amine
ef1610e24a langchain[patch]: support ollama in init_embeddings (#29349)
Why not Ollama?
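
With this change, something like the following should work (model name illustrative; assumes a running Ollama server with the model pulled, and that the `"ollama:"` provider prefix is recognized):

```python
from langchain.embeddings import init_embeddings

# Provider inferred from the "ollama:" prefix
embeddings = init_embeddings("ollama:nomic-embed-text")
vector = embeddings.embed_query("hello")
```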


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-22 14:47:12 +00:00
Siddhant
9eb10a9240 langchain: added vectorstore docstring linting (#29241)
…ore.py

Added docstring linting in the vectorstore.py file relating to issue
#25154



---------

Co-authored-by: Siddhant Jain <sjain35@buffalo.edu>
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 03:47:43 +00:00
Erick Friis
a2ed796aa6 infra: run doc lint on root pyproject change (#29350) 2025-01-22 03:22:13 +00:00
Sohan
de1fc4811d packages, docs: Pipeshift - Langchain integration of pipeshift (#29114)
Description: Added pipeshift integration. This integrates pipeshift LLM
and ChatModels APIs with langchain
Dependencies: none

Unit Tests & Integration tests are added

Documentation is added as well

This PR is w.r.t.
[#27390](https://github.com/langchain-ai/langchain/pull/27390); as
per request, a freshly minted `langchain-pipeshift` package is uploaded
to PyPI. Only changes to the docs & packages.yml are made in the langchain
master branch.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 03:03:06 +00:00
Erick Friis
e723882a49 docs: mongodb api ref redirect (#29348) 2025-01-21 16:48:03 -08:00
Christophe Bornet
836c791829 text-splitters: Bump ruff version to 0.9 (#29231)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 00:27:58 +00:00
Christophe Bornet
a004dec119 langchain: Bump ruff version to 0.9 (#29211)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 00:26:39 +00:00
Christophe Bornet
2340b3154d standard-tests: Bump ruff version to 0.9 (#29230)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 00:23:01 +00:00
Christophe Bornet
e4a78dfc2a core: Bump ruff version to 0.9 (#29201)
Also run some preview autofix and formatting

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 00:20:09 +00:00
Ella Charlaix
6f95db81b7 huggingface: Add IPEX models support (#29179)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 00:16:44 +00:00
Bhav Sardana
d6a7aaa97d community: Fix for Pydantic model validator of GoogleApiClient (#29346)
- **Description:** Fix for the Pydantic model validator of GoogleApiHandler
- **Issue:** #29165

---------

Signed-off-by: Bhav Sardana <sardana.bhav@gmail.com>
2025-01-21 15:17:43 -05:00
Christophe Bornet
1c4ce7b42b core: Auto-fix some docstrings (#29337) 2025-01-21 13:29:53 -05:00
ccurme
86a0720310 fireworks[patch]: update model used in integration tests (#29342)
No access to firefunction-v1 and -v2.
2025-01-21 11:05:30 -05:00
Hugo Berg
32c9c58adf Community: fix missing f-string modifier in oai structured output parsing error (#29326)
- **Description:** The ValueError raised on certain structured-output
parsing errors, in the langchain openai community integration, was missing
an f-string modifier and so didn't produce useful output. This is a
2-line, 2-character change.
- **Issue:** None open that this fixes
- **Dependencies:** Nothing changed
- **Twitter handle:** None

- [X] **Add tests and docs**: There's nothing to add.
- [-] **Lint and test**: Happy to run this if you deem it necessary.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-21 14:26:38 +00:00
Nuno Campos
566915d7cf core: fix call to get closure vars for partial-wrapped funcs (#29316)
2025-01-21 09:26:15 -05:00
ZhangShenao
33e22ccb19 [Doc] Improve api doc (#29324)
- Fix doc description
- Add static method decorator
2025-01-21 09:16:08 -05:00
Tugay Talha İçen
7b44c3384e Docs: update huggingfacehub.ipynb (#29329)
langchain -> langchain-huggingface

Updated the installation command from
`%pip install --upgrade --quiet langchain sentence_transformers`
to
`%pip install --upgrade --quiet langchain-huggingface sentence_transformers`.

This resolves an import error in the notebook when using `from
langchain_huggingface.embeddings import HuggingFaceEmbeddings`.
2025-01-21 09:12:22 -05:00
Bagatur
536b44a47f community[patch]: Release 0.3.15 (#29325) 2025-01-21 03:10:07 +00:00
Bagatur
ec5fae76d4 langchain[patch]: Release 0.3.15 (#29322) 2025-01-21 02:24:11 +00:00
Bagatur
923e6fb321 core[patch]: 0.3.31 (#29320) 2025-01-21 01:17:31 +00:00
Ikko Eltociear Ashimine
06456c1dcf docs: update google_cloud_sql_mssql.ipynb (#29315)
arbitary -> arbitrary
2025-01-20 16:11:08 -05:00
Ahmed Tammaa
d3ed9b86be text-splitters[minor]: Replace lxml and XSLT with BeautifulSoup in HTMLHeaderTextSplitter for Improved Large HTML File Processing (#27678)
This pull request updates the `HTMLHeaderTextSplitter` by replacing the
`split_text_from_file` method's implementation. The original method used
`lxml` and XSLT for processing HTML files, which caused
`lxml.etree.xsltapplyerror maxhead` when handling large HTML documents
due to limitations in the XSLT processor. Fixes #13149

By switching to BeautifulSoup (`bs4`), we achieve:

- **Improved Performance and Reliability:** BeautifulSoup efficiently
processes large HTML files without the errors associated with `lxml` and
XSLT.
- **Simplified Dependencies:** Removes the dependency on `lxml` and
external XSLT files, relying instead on the widely used `beautifulsoup4`
library.
- **Maintained Functionality:** The new method replicates the original
behavior, ensuring compatibility with existing code and preserving the
extraction of content and metadata.

**Issue:**

This change addresses issues related to processing large HTML files with
the existing `HTMLHeaderTextSplitter` implementation. It resolves
problems where users encounter lxml.etree.xsltapplyerror maxhead due to
large HTML documents.

**Dependencies:**

- **BeautifulSoup (`beautifulsoup4`):** The `beautifulsoup4` library is
now used for parsing HTML content.
  - Installation: `pip install beautifulsoup4`

**Code Changes:**

Updated the `split_text_from_file` method in `HTMLHeaderTextSplitter` as
follows:

```python
def split_text_from_file(self, file: Any) -> List[Document]:
    """Split HTML file using BeautifulSoup.

    Args:
        file: HTML file path or file-like object.

    Returns:
        List of Document objects with page_content and metadata.
    """
    from bs4 import BeautifulSoup
    from langchain.docstore.document import Document
    import bs4

    # Read the HTML content from the file or file-like object
    if isinstance(file, str):
        with open(file, 'r', encoding='utf-8') as f:
            html_content = f.read()
    else:
        # Assuming file is a file-like object
        html_content = file.read()

    # Parse the HTML content using BeautifulSoup
    soup = BeautifulSoup(html_content, 'html.parser')

    # Extract the header tags and their corresponding metadata keys
    headers_to_split_on = [tag[0] for tag in self.headers_to_split_on]
    header_mapping = dict(self.headers_to_split_on)

    documents = []

    # Find the body of the document
    body = soup.body if soup.body else soup

    # Find all header tags in the order they appear
    all_headers = body.find_all(headers_to_split_on)

    # If there's content before the first header, collect it
    first_header = all_headers[0] if all_headers else None
    if first_header:
        pre_header_content = ''
        for elem in first_header.find_all_previous():
            if isinstance(elem, bs4.Tag):
                text = elem.get_text(separator=' ', strip=True)
                if text:
                    pre_header_content = text + ' ' + pre_header_content
        if pre_header_content.strip():
            documents.append(Document(
                page_content=pre_header_content.strip(),
                metadata={}  # No metadata since there's no header
            ))
    else:
        # If no headers are found, return the whole content
        full_text = body.get_text(separator=' ', strip=True)
        if full_text.strip():
            documents.append(Document(
                page_content=full_text.strip(),
                metadata={}
            ))
        return documents

    # Process each header and its associated content
    for header in all_headers:
        current_metadata = {}
        header_name = header.name
        header_text = header.get_text(separator=' ', strip=True)
        current_metadata[header_mapping[header_name]] = header_text

        # Collect all sibling elements until the next header of the same or higher level
        content_elements = []
        for sibling in header.find_next_siblings():
            if sibling.name in headers_to_split_on:
                # Stop at the next header
                break
            if isinstance(sibling, bs4.Tag):
                content_elements.append(sibling)

        # Get the text content of the collected elements
        current_content = ''
        for elem in content_elements:
            text = elem.get_text(separator=' ', strip=True)
            if text:
                current_content += text + ' '

        # Create a Document if there is content
        if current_content.strip():
            documents.append(Document(
                page_content=current_content.strip(),
                metadata=current_metadata.copy()
            ))
        else:
            # If there's no content, but we have metadata, still create a Document
            documents.append(Document(
                page_content='',
                metadata=current_metadata.copy()
            ))

    return documents
```

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-01-20 16:10:37 -05:00
Christophe Bornet
989eec4b7b core: Add ruff rule S101 (no assert) (#29267)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-01-20 20:24:31 +00:00
Christophe Bornet
e5d62c6ce7 core: Add ruff rule W293 (whitespaces) (#29272) 2025-01-20 15:16:12 -05:00
Philippe PRADOS
4efc5093c1 community[minor]: Refactoring PyMuPDF parser, loader and add image blob parsers (#29063)
* Adds BlobParsers for images. These implementations can take an image
and produce one or more documents per image. This interface can be used
for exposing OCR capabilities.
* Update PyMuPDFParser and Loader to standardize metadata, handle
images, improve table extraction etc.

- **Twitter handle:** pprados

This is one part of a larger Pull Request (PR) that is too large to be
submitted all at once.
This specific part focuses on preparing the update of all parsers.

For more details, see [PR
28970](https://github.com/langchain-ai/langchain/pull/28970).

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-01-20 15:15:43 -05:00
Syed Baqar Abbas
f175319303 [feat] Added backwards compatibility for OllamaEmbeddings initialization (migration from langchain_community.embeddings to langchain_ollama.embeddings (#29296)
- [feat] **Added backwards compatibility for OllamaEmbeddings
initialization (migration from `langchain_community.embeddings` to
`langchain_ollama.embeddings`**: "langchain_ollama"
- **Description:** Given that `OllamaEmbeddings` from
`langchain_community.embeddings` is deprecated, code is being shifted to
`langchain_ollama.embeddings`. However, this does not offer backward
compatibility for initializing the parameters of the `OllamaEmbeddings`
object.
    - **Issue:** #29294 
    - **Dependencies:** None
    - **Twitter handle:** @BaqarAbbas2001


## Additional Information
Previously, `OllamaEmbeddings` from `langchain_community.embeddings`
used to support the following options:

e9abe583b2/libs/community/langchain_community/embeddings/ollama.py (L125-L139)

However, in the new package `from langchain_ollama import
OllamaEmbeddings`, there is no method to set these options. I have added
these parameters to resolve this issue.

This issue was also discussed in
https://github.com/langchain-ai/langchain/discussions/29113
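
For illustration, initialization with some of the restored options might look like this (parameter names assumed to carry over from the community implementation):

```python
from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(
    model="llama3",
    num_ctx=2048,      # context window size
    temperature=0.0,   # sampling temperature
)
```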
2025-01-20 11:16:29 -05:00
CLOVA Studio 개발
7a95ffc775 community: fix some features on Naver ChatModel & embedding model 2 (#29243)
## Description
- Respond to `NCP API Key` changes.
- Fix the `ChatClovaX` `astream` function to raise `SSEError` when an
error event occurs.
- Add `token length` and `ai_filter` to ChatClovaX's
`response_metadata`.
- Update the documentation to apply the NCP API Key changes.

cc. @efriis @vbarda
2025-01-20 11:01:03 -05:00
Sangyun_LEE
5d64597490 docs: fix broken Appearance of langchain_community/document_loaders/recursive_url_loader API Reference (#29305)
# PR message
## Description
Fixed the broken appearance of the RecursiveUrlLoader API reference.

### Before
<p align="center">
<img width="750" alt="image"
src="https://github.com/user-attachments/assets/f39df65d-b788-411d-88af-8bfa2607c00b"
/>
<img width="750" alt="image"
src="https://github.com/user-attachments/assets/b8a92b70-4548-4b4a-965f-026faeebd0ec"
/>
</p>

### After
<p align="center">
<img width="750" alt="image"
src="https://github.com/user-attachments/assets/8ea28146-de45-42e2-b346-3004ec4dfc55"
/>
<img width="750" alt="image"
src="https://github.com/user-attachments/assets/914c6966-4055-45d3-baeb-2d97eab06fe7"
/>
</p>

## Issue:
N/A
## Dependencies
None
## Twitter handle
N/A

# Add tests and docs
Not applicable; this change only affects documentation.

# Lint and test
Ran make format, make lint, and make test to ensure no issues.
2025-01-20 10:56:59 -05:00
Hemant Rawat
6c52378992 Add Google-style docstring linting and update pyproject.toml (#29303)
### Description:

This PR introduces Google-style docstring linting for the
ModelLaboratory class in libs/langchain/langchain/model_laboratory.py.
It also updates the pyproject.toml file to comply with the latest Ruff
configuration standards (top-level `select` and `pydocstyle` settings are
deprecated in favor of the `[tool.ruff.lint]` section).

### Changes include:
- [x] Added detailed Google-style docstrings to all methods in
ModelLaboratory.
- [x] Updated pyproject.toml to move select and pydocstyle settings
under the [tool.ruff.lint] section.
- [x] Ensured all files pass Ruff linting.

Issue:
Closes #25154

### Dependencies:
No additional dependencies are required for this change.

### Checklist
- [x] Files passes ruff linting.
- [x] Docstrings conform to the Google-style convention.
- [x] pyproject.toml updated to avoid deprecation warnings.
- [x] My PR is ready to review, please review.
2025-01-19 14:37:21 -05:00
Mohammad Mohtashim
b5fbebb3c8 (Community): Changing the BaseURL and Model for MiniMax (#29299)
- **Description:** Changed the default base model and base URL to the
correct versions, and added a more explicit exception if the user provides
an invalid API key.
- **Issue:** #29278
2025-01-19 14:15:02 -05:00
ccurme
c20f7418c7 openai[patch]: fix Azure LLM test (#29302)
The tokens I get are:
```
['', '\n\n', 'The', ' sun', ' was', ' setting', ' over', ' the', ' horizon', ',', ' casting', '']
```
so possibly an extra empty token is included in the output.

lmk @efriis if we should look into this further.
2025-01-19 17:25:42 +00:00
ccurme
6b249a0dc2 openai[patch]: release 0.3.1 (#29301) 2025-01-19 17:04:00 +00:00
ThomasSaulou
e9abe583b2 chatperplexity stream-citations in additional kwargs (#29273)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-18 22:31:10 +00:00
Farzad Sharif
8fad9214c7 docs: fix qa_per_user.ipynb (#29290)
# Description
The `config` option was not passed to `configurable_retriever.invoke()`.
Screenshot below. Fixed.
 
<img width="731" alt="Screenshot 2025-01-18 at 11 59 28 AM"
src="https://github.com/user-attachments/assets/21f30739-2abd-4150-b3ad-626ea9e3f96c"
/>
2025-01-18 16:24:31 -05:00
Vadim Rusin
2fb6fd7950 docs: fix broken link in JSONOutputParser reference (#29292)
### PR message:
- **Description:** Fixed a broken link in the documentation for the
`JSONOutputParser`. Updated the link to point to the correct reference:
  From:  

`https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.json.JSONOutputParser.html`
  To:  

`https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html`.
This ensures accurate navigation for users referring to the
`JsonOutputParser` documentation.
- **Issue:** N/A
- **Dependencies:** None
- **Twitter handle:** N/A

### Add tests and docs:
Not applicable; this change only affects documentation.

### Lint and test:
Ran `make format`, `make lint`, and `make test` to ensure no issues.
2025-01-18 16:17:34 -05:00
TheSongg
1cd4d8d101 [langchain_community.llms.xinference]: Rewrite _stream() method and support stream() method in xinference.py (#29259)
- **PR title**: [langchain_community.llms.xinference]: Rewrite
_stream() method and support stream() method in xinference.py

- **PR message**: Rewrite the `_stream()` method so that
`chain.stream()` can be used to return data streams:

       chain = prompt | llm
       chain.stream(input=user_input)

- **tests**:

       from langchain_community.llms import Xinference
       from langchain.prompts import PromptTemplate

       llm = Xinference(
           server_url="http://0.0.0.0:9997",  # replace with your xinference server url
           model_uid={model_uid},  # replace with the model UID returned from launching the model
           stream=True,
       )
       prompt = PromptTemplate(input_variables=['country'], template="Q: where can we visit in the capital of {country}? A:")
       chain = prompt | llm
       chain.stream(input={'country': 'France'})
2025-01-17 20:31:59 -05:00
Amaan
d4b9404fd6 docs: add langchain dappier tool integration notebook (#29265)
Add tools to interact with Dappier APIs with an example notebook.

For `DappierRealTimeSearchTool`, the tool can be invoked with:

```python
from langchain_dappier import DappierRealTimeSearchTool

tool = DappierRealTimeSearchTool()

tool.invoke({"query": "What happened at the last wimbledon"})
```

```
At the last Wimbledon in 2024, Carlos Alcaraz won the title by defeating Novak Djokovic. This victory marked Alcaraz's fourth Grand Slam title at just 21 years old! 🎉🏆🎾
```

For `DappierAIRecommendationTool`, the tool can be invoked with:

```python
from langchain_dappier import DappierAIRecommendationTool

tool = DappierAIRecommendationTool(
    data_model_id="dm_01j0pb465keqmatq9k83dthx34",
    similarity_top_k=3,
    ref="sportsnaut.com",
    num_articles_ref=2,
    search_algorithm="most_recent",
)
```

```
[{"author": "Matt Weaver", "image_url": "https://images.dappier.com/dm_01j0pb465keqmatq9k83dthx34...", "pubdate": "Fri, 17 Jan 2025 08:04:03 +0000", "source_url": "https://sportsnaut.com/chili-bowl-thursday-bell-column/", "summary": "The article highlights the thrilling unpredictability... ", "title": "Thursday proves why every lap of Chili Bowl..."},
{"author": "Matt Higgins", "image_url": "https://images.dappier.com/dm_01j0pb465keqmatq9k83dthx34...", "pubdate": "Fri, 17 Jan 2025 02:48:42 +0000", "source_url": "https://sportsnaut.com/new-york-mets-news-pete-alonso...", "summary": "The New York Mets are likely parting ways with star...", "title": "MLB insiders reveal New York Mets’ last-ditch..."},
{"author": "Jim Cerny", "image_url": "https://images.dappier.com/dm_01j0pb465keqmatq9k83dthx34...", "pubdate": "Fri, 17 Jan 2025 05:10:39 +0000", "source_url": "https://www.foreverblueshirts.com/new-york-rangers-news...", "summary": "The New York Rangers achieved a thrilling 5-3 comeback... ", "title": "Rangers score 3 times in 3rd period for stirring 5-3..."}]
```

The integration package can be found over here -
https://github.com/DappierAI/langchain-dappier
2025-01-17 19:02:28 -05:00
ccurme
184ea8aeb2 anthropic[patch]: update tool choice type (#29276) 2025-01-17 15:26:33 -05:00
ccurme
ac52021097 anthropic[patch]: release 0.3.2 (#29275) 2025-01-17 19:48:31 +00:00
ccurme
c616b445f2 anthropic[patch]: support parallel_tool_calls (#29257)
Need to:
- Update docs
- Decide if this is an explicit kwarg of bind_tools
- Decide if this should be in standard test with flag for supporting
2025-01-17 19:41:41 +00:00
Erick Friis
628145b172 infra: fix api build (#29274) 2025-01-17 10:41:59 -08:00
Zapiron
97a5bc7fc7 docs: Fixed typos and improve metadata explanation (#29266)
Fixed minor typos and made the explanation of metadata filtering clearer
2025-01-17 11:17:40 -05:00
Jun He
f0226135e5 docs: Remove redundant "%" (#29205)
Before this commit, the copied command couldn't be used directly.
2025-01-17 14:30:58 +00:00
Michael Chin
36ff83a0b5 docs: Message history for Neptune chains (#29260)
Expanded the Amazon Neptune documentation with new sections detailing
usage of chat message history with the
`create_neptune_opencypher_qa_chain` and
`create_neptune_sparql_qa_chain` functions.
2025-01-17 09:06:17 -05:00
ccurme
d5360b9bd6 core[patch]: release 0.3.30 (#29256) 2025-01-16 17:52:37 -05:00
Nuno Campos
595297e2e5 core: Add support for calls in get_function_nonlocals (#29255)
2025-01-16 14:43:42 -08:00
Luis Lopez
75663f2cae community: Add cost per 1K tokens for fine-tuned model cached input (#29248)
### Description

- Since the cost per 1K input tokens for a fine-tuned cached
version of `gpt-4o-mini-2024-07-18` is not available when using the
`OpenAICallbackHandler`, it raises an error when trying to make calls
with such a model.
- This PR adds the price to the `MODEL_COST_PER_1K_TOKENS` dictionary.

cc. @efriis
2025-01-16 15:19:26 -05:00
Junon
667d2a57fd add mode arg to OBSFileLoader.load() method (#29246)
- **Description:** add mode arg to OBSFileLoader.load() method
  - **Issue:** #29245
  - **Dependencies:** no dependencies required for this change

---------

Co-authored-by: Junon_Gz <junon_gz@qq.com>
2025-01-16 11:09:04 -05:00
c6388d736b docs: fix typo in tool_results_pass_to_model.ipynb (how-to) (#29252)
Description: fix typo. change word from `cals` to `calls`
Issue: closes #29251 
Dependencies: None
Twitter handle: None
2025-01-16 11:05:28 -05:00
Erick Friis
4bc6cb759f docs: update recommended code interpreters (#29236)
unstable :(
2025-01-15 16:03:26 -08:00
Erick Friis
5eb4dc5e06 standard-tests: double messages test (#29237) 2025-01-15 15:14:29 -08:00
Nithish Raghunandanan
1051fa5729 couchbase: Migrate couchbase partner package to different repo (#29239)
**Description:** Migrate the couchbase partner package to
[Couchbase-Ecosystem](https://github.com/Couchbase-Ecosystem/langchain-couchbase)
org
2025-01-15 12:37:27 -08:00
Nadeem Sajjad
eaf2fb287f community(pypdfloader): added page_label in metadata for pypdf loader (#29225)
# Description

## Summary
This PR adds support for handling multi-labeled page numbers in the
**PyPDFLoader**. Some PDFs use complex page numbering systems where the
actual content may begin after multiple introductory pages. The
page_label field helps accurately reflect the document’s page structure,
making it easier to handle such cases during document parsing.

## Motivation
This feature improves document parsing accuracy by allowing users to
access the actual page labels instead of relying only on the physical
page numbers. This is particularly useful for documents where the first
few pages have roman numerals or other non-standard page labels.

## Use Case
This feature is especially useful for **Retrieval-Augmented Generation**
(RAG) systems where users may reference page numbers when asking
questions. Some PDFs have both labeled page numbers (like roman numerals
for introductory sections) and index-based page numbers.

For example, a user might ask:

	"What is mentioned on page 5?"

The system can now check both:
- **Index-based page number** (`page`)
- **Labeled page number** (`page_label`)

This dual-check helps improve retrieval accuracy. Additionally, the
results can be validated with an **agent or tool** to ensure the
retrieved pages match the user’s query contextually.
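
As a sketch, reading both fields (file name taken from the test case described below):

```python
from langchain_community.document_loaders import PyPDFLoader

docs = PyPDFLoader("geotopo-komprimiert.pdf").load()
# Physical index vs. printed label (e.g. roman numerals in the front matter)
print(docs[0].metadata["page"], docs[0].metadata["page_label"])
```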

## Code Changes

- Added a page_label field to the metadata of the Document class in
**PyPDFLoader**.
- Implemented support for retrieving page_label from the
pdf_reader.page_labels.
- Created a test case (test_pypdf_loader_with_multi_label_page_numbers)
with a sample PDF containing multi-labeled pages
(geotopo-komprimiert.pdf) [[Source of
pdf](https://github.com/py-pdf/sample-files/blob/main/009-pdflatex-geotopo/GeoTopo-komprimiert.pdf)].
- Updated existing tests to ensure compatibility and verify page_label
extraction.

## Tests Added

- Added a new test case for a PDF with multi-labeled pages.
- Verified both page and page_label metadata fields are correctly
extracted.

## Screenshots

<img width="549" alt="image"
src="https://github.com/user-attachments/assets/65db9f5c-032e-4592-926f-824777c28f33"
/>
2025-01-15 14:18:07 -05:00
Mehdi
1a38948ee3 Mehdi zare/fmp data doc (#29219)
Title: community: add Financial Modeling Prep (FMP) API integration

Description: Adding LangChain integration for Financial Modeling Prep
(FMP) API to enable semantic search and structured tool creation for
financial data endpoints. This integration provides semantic endpoint
search using vector stores and automatic tool creation with proper
typing and error handling. Users can discover relevant financial
endpoints using natural language queries and get properly typed
LangChain tools for discovered endpoints.

Issue: N/A

Dependencies:

fmp-data>=0.3.1
langchain-core>=0.1.0
faiss-cpu
tiktoken
Twitter handle: @mehdizarem

Unit tests and example notebook have been added:

Tests are in tests/integration_tests/est_tools.py and
tests/unit_tests/test_tools.py
Example notebook is in docs/tools.ipynb
All format, lint and test checks pass:

pytest
mypy .
Dependencies are imported within functions and not added to
pyproject.toml. The changes are backwards compatible and only affect the
community package.

---------

Co-authored-by: mehdizare <mehdizare@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-15 15:31:01 +00:00
Mohammad Mohtashim
288613d361 (text-splitters): Small Fix in _process_html for HTMLSemanticPreservingSplitter to properly extract the metadata. (#29215)
- **Description:** Include `main` in the list of elements whose child
elements need to be processed when splitting the HTML.
- **Issue:** #29184
2025-01-15 10:18:06 -05:00
TheSongg
4867fe7ac8 [langchain_community.llms.xinference]: fix error in xinference.py (#29216)
- **Description:** The old code raised a ValidationError
(`pydantic_core._pydantic_core.ValidationError: 1 validation error for
Xinference`) when importing Xinference from xinference.py. This issue has
been resolved by adjusting the `client` field's type and default value.

```
File "/media/vdc/python/lib/python3.10/site-packages/pydantic/main.py", line 212, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for Xinference
client
  Field required [type=missing, input_value={'server_url': 'http://10...t4', 'model_kwargs': {}}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.9/v/missing
```

- **Tests:**

```python
from langchain_community.llms import Xinference

llm = Xinference(
    server_url="http://0.0.0.0:9997",  # replace with your Xinference server URL
    model_uid=model_uid,  # the model UID returned from launching the model
)
```
2025-01-15 10:11:26 -05:00
Kostadin Devedzhiev
bea5798b04 docs: Fix typo in retrievers documentation: 'An vectorstore' -> 'A vectorstore' (#29221)
- [x] **PR title**: "docs: Fix typo in documentation"

- [x] **PR message**:
- **Description:** Fixed a typo in the documentation, changing "An
vectorstore" to "A vector store" for grammatical accuracy.
    - **Issue:** N/A (no issue filed for this typo fix)
    - **Dependencies:** None
    - **Twitter handle:** N/A


- [x] **Add tests and docs**: This is a minor documentation fix that
doesn't require additional tests or example notebooks.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-01-15 10:10:14 -05:00
Sohaib Athar
d1cf10373b Update elasticsearch_retriever.ipynb (#29223)
docs: fix typo (connection)
- **Twitter handle:** @ReallyVirtual
2025-01-15 10:09:51 -05:00
Syed Baqar Abbas
4278046329 [fix] Convert table names to list for compatibility in SQLDatabase (#29229)
- **Description:** When the SQLDatabase is initialized, it runs
`self._inspector.get_table_names(schema=schema)`, which is expected to
return a list. However, with some connectors (such as Snowflake) the
returned data type can be another iterable. This results in a type error
when concatenating the table names with the view names. Explicit type
casting has been added to prevent this. The package modified is
community; the issue lay in this block of code:

44b41b699c/libs/community/langchain_community/utilities/sql_database.py (L72-L77)

- **Issue:** #29227
- **Dependencies:** None
- **Twitter handle:** @BaqarAbbas2001

## Additional Information
When the following method is called for a Snowflake database:

44b41b699c/libs/community/langchain_community/utilities/sql_database.py (L75)

Snowflake under the hood calls:
```python
from snowflake.sqlalchemy.snowdialect import SnowflakeDialect
SnowflakeDialect.get_table_names
```

This method returns a `dict_keys()` object which is incompatible to
concatenate with a list and results in a `TypeError`
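
A sketch of the failure mode and the fix in plain SQLAlchemy (the connection URL is a placeholder):

```python
from sqlalchemy import create_engine, inspect

engine = create_engine("snowflake://user:pass@account/db")  # placeholder URL
inspector = inspect(engine)

# Snowflake's dialect may return dict_keys() here, so cast explicitly to a
# list before concatenating with the view names (the fix applied in this PR):
table_names = list(inspector.get_table_names(schema="public"))
view_names = list(inspector.get_view_names(schema="public"))
all_names = table_names + view_names  # no longer raises TypeError
```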

### Relevant Library Versions
- **snowflake-sqlalchemy**: 1.7.2  
- **snowflake-connector-python**: 3.12.4  
- **sqlalchemy**: 2.0.20  
- **langchain_community**: 0.3.14
2025-01-15 10:00:03 -05:00
Jin Hyung Ahn
05554265b4 community: Fix ConfluenceLoader load() failure caused by deleted pages (#29232)
## Description
This PR modifies the is_public_page function in ConfluenceLoader to
prevent exceptions caused by deleted pages during the execution of
ConfluenceLoader.process_pages().


**Example scenario:**
Consider the following usage of ConfluenceLoader:
```python
import os
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
        url=os.getenv("BASE_URL"),
        token=os.getenv("TOKEN"),
        max_pages=1000,
        cql=f'type=page and lastmodified >= "2020-01-01 00:00"',
        include_restricted_content=False,
)

# Raised Exception : HTTPError: Outdated version/old_draft/trashed? Cannot find content Please provide valid ContentId.
documents = loader.load()
```

If a deleted page exists within the query result, the is_public_page
function would previously raise an exception when calling
get_all_restrictions_for_content, causing the loader.load() process to
fail for all pages.

By adding a pre-check for the page's "current" status, unnecessary API
calls to get_all_restrictions_for_content for non-current pages are
avoided. This fix ensures that such pages are skipped without affecting
the rest of the loading process.
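
A minimal sketch of the pre-check; the `status` field name is an assumption based on this description:

```python
def is_public_page(self, page: dict) -> bool:
    """Sketch: skip restriction checks for pages that are not 'current'."""
    if page.get("status") != "current":  # assumed field name
        # deleted / trashed / outdated drafts: treat as non-public, no API call
        return False
    # ... existing checks via get_all_restrictions_for_content ...
    return True
```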
## Issue
N/A (No specific issue number)

## Dependencies
No new dependencies are introduced with this change.

## Twitter handle
[@zenoengine](https://x.com/zenoengine)
2025-01-15 09:56:23 -05:00
Mohammad Mohtashim
21eb39dff0 [Community]: AzureOpenAIWhisperParser Authentication Fix (#29135)
- **Description:** `AzureOpenAIWhisperParser` authentication fix as
stated in the issue.
- **Issue:** #29133
2025-01-15 09:44:53 -05:00
Erick Friis
44b41b699c docs: api docs build folder prep update (#29220) 2025-01-15 03:52:00 +00:00
Erick Friis
b05543c69b packages: disable mongodb for api docs (#29218) 2025-01-15 02:23:01 +00:00
Erick Friis
30badd7a32 packages: update mongodb folder (#29217) 2025-01-15 02:01:06 +00:00
pm390
76172511fd community: Additional parameters for OpenAIAssistantV2Runnable (#29207)
**Description:** Added additional parameters that can be useful when using
OpenAIAssistantV2Runnable.

This change allows LangChain users to set parameters that cannot be set
through the Assistants UI
(max_completion_tokens, max_prompt_tokens, parallel_tool_calls) as well as
parameters that are useful for experimenting, like top_p and temperature.

This PR originated from the need to use parallel_tool_calls in LangChain.
This parameter is very important for OpenAI Assistants because, without it
set to False, strict mode is not respected by OpenAI Assistants
(https://platform.openai.com/docs/guides/function-calling#parallel-function-calling).

> Note: Currently, if the model calls multiple functions in one turn
then strict mode will be disabled for those calls.
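
A hedged sketch of supplying the new parameters via the run input (plumbing per this PR's description; the exact keys may differ):

```python
from langchain_community.agents.openai_assistant import OpenAIAssistantV2Runnable

assistant = OpenAIAssistantV2Runnable(assistant_id="asst_...", as_agent=True)
output = assistant.invoke(
    {
        "content": "Summarize the latest report",
        "parallel_tool_calls": False,  # needed so OpenAI honors strict mode
        "temperature": 0.2,
        "top_p": 0.9,
    }
)
```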

**Issue:** None
**Dependencies:** openai
2025-01-14 15:53:37 -05:00
Guy Korland
efadad6067 Add Link to FalkorDB Memory example (#29204)
- **Description:** Add Link to FalkorDB Memory example
2025-01-14 13:27:52 -05:00
Bagatur
4ab04ad6be docs: oai api ref nit (#29210) 2025-01-14 17:55:16 +00:00
Michael Chin
d9b856abad community: Deprecate Amazon Neptune resources in langchain-community (#29191)
Related: https://github.com/langchain-ai/langchain-aws/pull/322

The legacy `NeptuneOpenCypherQAChain` and `NeptuneSparqlQAChain` classes
are being replaced by the new LCEL format chains
`create_neptune_opencypher_qa_chain` and
`create_neptune_sparql_qa_chain`, respectively, in the `langchain_aws`
package.

This PR adds deprecation warnings to all Neptune classes and functions
that have been migrated to `langchain_aws`. All relevant documentation
has also been updated to replace `langchain_community` usage with the
new `langchain_aws` implementations.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-14 10:23:34 -05:00
Erick Friis
c55af44711 anthropic: pydantic mypy plugin (#29144) 2025-01-13 15:32:40 -08:00
Erick Friis
cdf3a17e55 docs: fix httpx conflicts with overrides in docs build (#29180) 2025-01-13 21:25:00 +00:00
ccurme
1bf6576709 cli[patch]: fix anchor links in templates (#29178)
These are outdated and can break docs builds.
2025-01-13 18:28:18 +00:00
Christopher Varjas
e156b372fb langchain: support api key argument with OpenAI moderation chain (#29140)
**Description:** Makes it possible to instantiate
`OpenAIModerationChain` with an `openai_api_key` argument only and no
`OPENAI_API_KEY` environment variable defined.
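
For instance (the key value is a placeholder):

```python
from langchain.chains import OpenAIModerationChain

# no OPENAI_API_KEY environment variable needed anymore
moderation = OpenAIModerationChain(openai_api_key="sk-...")
result = moderation.invoke({"input": "Some text to check."})
print(result["output"])
```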

**Issue:** https://github.com/langchain-ai/langchain/issues/25176

**Dependencies:** `openai`

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-01-13 11:00:02 -05:00
Nikhil Shahi
335ca3a606 docs: add HyperbrowserLoader docs (#29143)
### Description
This PR adds docs for the
[langchain-hyperbrowser](https://pypi.org/project/langchain-hyperbrowser/)
package. It includes a document loader that uses Hyperbrowser to scrape
or crawl any URLs and return formatted Markdown or HTML content as well
as relevant metadata.
[Hyperbrowser](https://hyperbrowser.ai) is a platform for running and
scaling headless browsers. It lets you launch and manage browser
sessions at scale and provides easy-to-use solutions for any web scraping
needs, such as scraping a single page or crawling an entire site.
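
An illustrative sketch only; the constructor arguments below are assumptions based on the package description, not verified against the package:

```python
from langchain_hyperbrowser import HyperbrowserLoader

loader = HyperbrowserLoader(
    urls="https://example.com",  # assumed parameter name
    api_key="hb_...",  # placeholder
)
docs = loader.load()
print(docs[0].page_content[:200], docs[0].metadata)
```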

### Issue
None

### Dependencies
None

### Twitter Handle
`@hyperbrowser`
2025-01-13 10:45:39 -05:00
Zhengren Wang
4c0217681a cookbook: fix typo in cookbook/mongodb-langchain-cache-memory.ipynb (#29149)
Description: fix "enviornment" to "environment".
Issue: Typo
Dependencies: None
Twitter handle: zrwang01
2025-01-13 10:35:34 -05:00
Gabe Cornejo
e64bfb537f docs: Fix old link to Unstructured package in document_loader_markdown.ipynb (#29175)
Fixed a broken link in `document_loader_markdown.ipynb` to point to the
updated documentation page for the Unstructured package.
Issue: N/A
Dependencies: None
2025-01-13 15:26:01 +00:00
Tymon Żarski
689592f9bb community: Fix rank-llm import paths for new 0.20.3 version (#29154)
# **PR title**: "community: Fix rank-llm import paths for new 0.20.3
version"
- The "community" package is being modified to handle updated import
paths for the new `rank-llm` version.

---

## Description
This PR updates the import paths for the `rank-llm` package to account
for changes introduced in version `0.20.3`. The changes ensure
compatibility with both pre- and post-revamp versions of `rank-llm`,
specifically version `0.12.8`. Conditional imports are introduced based
on the detected version of `rank-llm` to handle different path
structures for `VicunaReranker`, `ZephyrReranker`, and `SafeOpenai`.
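
A sketch of the version-gated imports described above (the module paths are illustrative, not verified):

```python
from importlib import metadata
from packaging.version import Version

# choose import paths based on the installed rank-llm version
if Version(metadata.version("rank-llm")) >= Version("0.20.0"):
    from rank_llm.rerank.listwise.vicuna_reranker import VicunaReranker  # illustrative path
else:
    from rank_llm.rerank.vicuna_reranker import VicunaReranker  # illustrative path
```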

## Issue
RankLLMRerank usage throws an error (with GPT models, but not only) when
the rank-llm version is > 0.12.8 - #29156

## Dependencies
This change relies on the `packaging` and `pkg_resources` libraries to
handle version checks.

## Twitter handle
@tymzar
2025-01-13 10:22:14 -05:00
Andrew
0e3115330d Add additional_instructions on openai assistant runs create. (#29164)
- **Description**: In the functions `_create_run` and `_acreate_run`,
the parameters passed to the creation of
`openai.resources.beta.threads.runs` were limited.

  Source: 
  ```
  def _create_run(self, input: dict) -> Any:
        params = {
            k: v
            for k, v in input.items()
            if k in ("instructions", "model", "tools", "run_metadata")
        }
        return self.client.beta.threads.runs.create(
            input["thread_id"],
            assistant_id=self.assistant_id,
            **params,
        )
  ```
- OpenAI Documentation
([createRun](https://platform.openai.com/docs/api-reference/runs/createRun))

- Full list of parameters `openai.resources.beta.threads.runs` ([source
code](https://github.com/openai/openai-python/blob/main/src/openai/resources/beta/threads/runs/runs.py#L91))
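
After this change, the key filter is widened so the extra run parameters pass through (a sketch, not the exact merged diff):

```python
from typing import Any

def _create_run(self, input: dict) -> Any:
    # sketch: the allow-list now also admits additional createRun parameters
    params = {
        k: v
        for k, v in input.items()
        if k
        in (
            "instructions",
            "additional_instructions",
            "model",
            "tools",
            "run_metadata",
            # ...plus further parameters supported by runs.create
        )
    }
    return self.client.beta.threads.runs.create(
        input["thread_id"],
        assistant_id=self.assistant_id,
        **params,
    )
```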

 
- **Issue:** Fix #17574 



- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-01-13 10:11:47 -05:00
ccurme
e4ceafa1c8 langchain[patch]: update extended tests for compatibility with langchain-openai==0.3 (#29174) 2025-01-13 15:04:22 +00:00
Syed Muneeb Abbas
8ef7f3eacc Fixed the import error in OpenAIWhisperParserLocal and resolved the L… (#29168)
…angChain parser issue.

2025-01-13 09:47:31 -05:00
Priyansh Agrawal
c115c09b6d community: add missing format specifier in error log in CubeSemanticLoader (#29172)
- **Description:** Add a missing format specifier in an error log in
`langchain_community.document_loaders.CubeSemanticLoader`
- **Issue:** raises `TypeError: not all arguments converted during
string formatting`


2025-01-13 09:32:57 -05:00
ThomasSaulou
349b5c91c2 fix chatperplexity: remove 'stream' from params in _stream method (#29173)
2025-01-13 09:31:37 -05:00
LIU Yuwei
f980144e9c community: add init for unstructured file loader (#29101)
## Description
Add `__init__` for unstructured loader of
epub/image/markdown/pdf/ppt/word to restrict the input type to `str` or
`Path`.
In the
[signature](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.markdown.UnstructuredMarkdownLoader.html)
these unstructured loaders receive `file_path: str | List[str] | Path |
List[Path]`, but actually they only receive `str` or `Path`.

## Issue
None

## Dependencies
No changes.
2025-01-13 09:26:00 -05:00
Erick Friis
bbc3e3b2cf openai: disable streaming for o1 by default (#29147)
Streaming currently returns 400s:
https://community.openai.com/t/streaming-support-for-o1-o1-2024-12-17-resulting-in-400-unsupported-value/1085043

o1-mini and o1-preview stream fine.
2025-01-11 02:24:11 +00:00
Isaac Francisco
62074bac60 replace all LANGCHAIN_ flags with LANGSMITH_ flags (#29120) 2025-01-11 01:24:40 +00:00
Bagatur
5c2fbb5b86 docs: Update openai README.md (#29146) 2025-01-10 17:24:16 -08:00
Erick Friis
0a54aedb85 anthropic: pdf integration test (#29142) 2025-01-10 21:56:31 +00:00
ccurme
8de8519daf tests[patch]: release 0.3.8 (#29141) 2025-01-10 21:53:41 +00:00
Jiang
7d3fb21807 Add lindorm as new integration (#29123)
A misoperation caused the original PR to close: [original PR
link](https://github.com/langchain-ai/langchain/pull/29085)

---------

Co-authored-by: jiangzhijie <jiangzhijie.jzj@alibaba-inc.com>
2025-01-10 16:30:37 -05:00
Zapiron
7594ad694f docs: update the correct learning objective YAML instead of XML (#29131)
Update the learning objective on the how-to page by changing XML to
YAML, which is what the page actually teaches.

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-01-10 16:13:13 -05:00
Mateusz Szewczyk
b1d3e25eb6 docs: Update IBM WatsonxRerank documentation (#29138)
Update the presented model in the `WatsonxRerank` documentation.
2025-01-10 15:07:29 -05:00
ccurme
4819b500e8 pinecone[patch]: release 0.2.2 (#29139) 2025-01-10 14:59:57 -05:00
Ashvin
46fd09ffeb partner: Update aiohttp in langchain pinecone. (#28863)
- **partner**: Update aiohttp to resolve a vulnerability issue.

- **Description:** I have updated the upper limit of aiohttp from `3.10`
to `3.10.5` in the pyproject.toml file of langchain-pinecone. Hopefully
this will resolve #28771. Please review this, as I'm quite unsure.

---------

Co-authored-by: = <=>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-10 14:54:52 -05:00
ccurme
df5ec45b32 docs[patch]: update docs for langchain-openai==0.3 (#29119)
Update model for one notebook that specified `gpt-4`.

Otherwise just updating cassettes.

---------

Co-authored-by: Jacob Lee <jacoblee93@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-01-10 13:29:31 -05:00
ccurme
f3d370753f xai[minor]: release 0.2 (#29132)
Update `langchain-openai` to 0.3. See [release
notes](https://github.com/langchain-ai/langchain/releases/tag/langchain-openai%3D%3D0.3.0)
for details. Should only impact default values of `temperature`, `n`,
and `max_retries`.
2025-01-10 11:47:27 -05:00
ccurme
6e63ccba84 openai[minor]: release 0.3 (#29100)
## Goal

Solve the following problems with `langchain-openai`:

- Structured output with `o1` [breaks out of the
box](https://langchain.slack.com/archives/C050X0VTN56/p1735232400232099).
- `with_structured_output` by default does not use OpenAI’s [structured
output
feature](https://platform.openai.com/docs/guides/structured-outputs).
- We override API defaults for temperature and other parameters.

## Breaking changes:

- Default method for structured output is changing to OpenAI’s dedicated
[structured output
feature](https://platform.openai.com/docs/guides/structured-outputs).
For schemas specified via TypedDict or JSON schema, strict schema
validation is disabled by default but can be enabled by specifying
`strict=True`.
- To recover previous default, pass `method="function_calling"` into
`with_structured_output`.
- Models that don’t support `method="json_schema"` (e.g., `gpt-4` and
`gpt-3.5-turbo`, currently the default model for ChatOpenAI) will raise
an error unless `method` is explicitly specified.
- To recover previous default, pass `method="function_calling"` into
`with_structured_output`.
- Schemas specified via Pydantic `BaseModel` that have fields with
non-null defaults or metadata (like min/max constraints) will raise an
error.
- To recover previous default, pass `method="function_calling"` into
`with_structured_output`.
- `strict` now defaults to False for `method="json_schema"` when schemas
are specified via TypedDict or JSON schema.
- To recover previous behavior, use `with_structured_output(schema,
strict=True)`
- Schemas specified via Pydantic V1 will raise a warning (and use
`method="function_calling"`) unless `method` is explicitly specified.
- To remove the warning, pass `method="function_calling"` into
`with_structured_output`.
- Streaming with default structured output method / Pydantic schema no
longer generates intermediate streamed chunks.
- To recover previous behavior, pass `method="function_calling"` into
`with_structured_output`.
- We no longer override default temperature (was 0.7 in LangChain, now
will follow OpenAI, currently 1.0).
- To recover previous behavior, initialize `ChatOpenAI` or
`AzureChatOpenAI` with `temperature=0.7`.
- Note: conceptually there is a difference between forcing a tool call
and forcing a response format. Tool calls may have more concise
arguments vs. generating content adhering to a schema. Prompts may need
to be adjusted to recover desired behavior.
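
For example, a sketch of pinning the previous defaults under `langchain-openai==0.3`:

```python
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class Joke(BaseModel):
    setup: str
    punchline: str

# pin the old LangChain default temperature explicitly
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

# recover the previous structured output default
structured_llm = llm.with_structured_output(Joke, method="function_calling")
structured_llm.invoke("Tell me a joke about cats")
```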

---------

Co-authored-by: Jacob Lee <jacoblee93@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-01-10 10:50:32 -05:00
ccurme
facfd42768 docs[patch]: fix links in partner package table (#29112)
Integrations in external repos are not built into [API
ref](https://python.langchain.com/api_reference/), so currently [the
table](https://python.langchain.com/docs/integrations/providers/#integration-packages)
includes broken links. Here we update the links for this type of package
to point to PyPi.
2025-01-09 10:37:15 -05:00
ccurme
815bfa1913 openai[patch]: support streaming with json_schema response format (#29044)
- Stream JSON string content. Final chunk includes parsed representation
(following OpenAI
[docs](https://platform.openai.com/docs/guides/structured-outputs#streaming)).
- Mildly (?) breaking change: if you were using streaming with
`response_format` before, usage metadata will disappear unless you set
`stream_usage=True`.
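
A minimal sketch of streaming with a JSON response format while keeping usage metadata:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", stream_usage=True).bind(
    response_format={"type": "json_object"}  # json_schema works analogously
)
for chunk in llm.stream("Return a JSON object with one key, `fact`, about otters"):
    print(chunk.content, end="", flush=True)
```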

## Response format

Before:

![Screenshot 2025-01-06 at 11 59
01 AM](https://github.com/user-attachments/assets/e54753f7-47d5-421d-b8f3-172f32b3364d)


After:

![Screenshot 2025-01-06 at 11 58
13 AM](https://github.com/user-attachments/assets/34882c6c-2284-45b4-92f7-5b5b69896903)


## with_structured_output

For pydantic output, behavior of `with_structured_output` is unchanged
(except for warning disappearing), because we pluck the parsed
representation straight from OpenAI, and OpenAI doesn't return it until
the stream is completed. Open to alternatives (e.g., parsing from
content or intermediate dict chunks generated by OpenAI).

Before:

![Screenshot 2025-01-06 at 12 38
11 PM](https://github.com/user-attachments/assets/913d320d-f49e-4cbb-a800-b394ae817fd1)

After:

![Screenshot 2025-01-06 at 12 38
58 PM](https://github.com/user-attachments/assets/f7a45dd6-d886-48a6-8d76-d0e21ca767c6)
2025-01-09 10:32:30 -05:00
Panos Vagenas
858f655a25 docs: add Docling loader docs (#29104)
### Description
This adds the docs for the Docling document loader.
[Docling](https://github.com/DS4SD/docling) parses PDF, DOCX, PPTX,
HTML, and other formats into a rich unified representation including
document layout, tables etc., making them ready for generative AI
workflows like RAG.

Some references:
- https://research.ibm.com/blog/docling-generative-AI
-
https://www.redhat.com/en/blog/docling-missing-document-processing-companion-generative-ai
- [Docling Technical Report](https://arxiv.org/abs/2408.09869)

The introduced `DoclingLoader` enables users to:
- use various document types in their LLM applications with ease and
speed, and
- leverage Docling's rich representation for advanced, document-native
grounding.
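
A minimal usage sketch, assuming the import path exposed by the `langchain-docling` package:

```python
from langchain_docling import DoclingLoader  # assumed import path

loader = DoclingLoader(file_path="https://arxiv.org/pdf/2408.09869")
docs = loader.load()
print(docs[0].page_content[:300])
```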

### Issue
Replacing PR #27987 as discussed with @efriis
[here](https://github.com/langchain-ai/langchain/pull/27987#issuecomment-2489354930).

### Dependencies
None

---------

Signed-off-by: Panos Vagenas <35837085+vagenas@users.noreply.github.com>
2025-01-09 10:15:35 -05:00
fzowl
cc55e32924 docs: Adding voyage-3-large to the .ipynb file (#29098)
**Description:**
Adding the voyage-3-large model to the .ipynb file (it's just extending a
list, so not even a code change).


2025-01-09 10:01:55 -05:00
Tacobaco
287abd9e0d Update word in databricks_vector_search.ipynb from "cna" to "can" (#29109)
Fix the typo "cna" to "can".

2025-01-09 10:01:00 -05:00
Mohammad Mohtashim
a46c2bce51 [Community]: Small Fix in google_firestore memory notebook (#29107)
- **Description:** Just a small fix in google_firestore memory notebook
- **Issue:** #29095
2025-01-09 10:00:41 -05:00
Joshua Campbell
00dcc44739 Langchain_community: Fix issue with missing backticks in arango client (#29110)
- **Description:** Adds backticks to generate_schema function in the
arango graph client
- **Issue:** We experienced an issue with the generate schema function
when talking to our arango database where these backticks were missing
    - **Dependencies:** none
    - **Twitter handle:** @anangelofgrace
2025-01-09 10:00:10 -05:00
Inah Jeon
fa6f08faa1 docs: Add upstage document parse loader to pdf loaders (#29099)
Add upstage document parse loader to pdf loaders

2025-01-08 15:32:39 -05:00
LIU Yuwei
2b09f798e1 community: add init for UnstructuredHTMLLoader to solve pathlib paths (#29091)
## Description
Add `__init__` for `UnstructuredHTMLLoader` to restrict the input type
to `str` or `Path`, and convert `self.file_path` to `str`, just like
`UnstructuredXMLLoader` does.

## Issue
Fix #29090 

## Dependencies
No changes.
2025-01-08 10:19:27 -05:00
Jin Hyung Ahn
c8ca1cd42f community: fix "confluence-loader" enable include_labels for documents loaded via CQL (#29089)
## Description
This PR enables label inclusion for documents loaded via CQL in the
confluence-loader.

- Updated _lazy_load to pass the include_labels parameter instead of
False in process_pages calls for documents loaded via CQL.
- Ensured that labels can now be fetched and added to the metadata for
documents queried with cql.

## Related Modification History
This PR builds on the previous functionality introduced in
[#28259](https://github.com/langchain-ai/langchain/pull/28259), which
added support for including labels with the include_labels option.
However, this functionality did not work as expected for CQL queries,
and this PR fixes that issue.

If the False handling was intentional due to another issue, please let
me know. I have verified with our Confluence instance that this change
allows labels to be correctly fetched for documents loaded via CQL.

## Issue
Fixes #29088


## Dependencies
No changes.

## Twitter Handle
[@zenoengine](https://x.com/zenoengine)
2025-01-08 10:16:39 -05:00
Inah Jeon
9d290abccd partner: Update Upstage Model Names and Remove Deprecated Model (#29093)
This PR updates model names in the upstage library to reflect the latest
naming conventions and removes deprecated models.

Changes:

Renamed Models:
- `solar-1-mini-chat` -> `solar-mini`
- `solar-1-mini-embedding-query` -> `embedding-query`

Removed Deprecated Models:
- `layout-analysis` (replaced by `document-parse`)

Reference:
- https://console.upstage.ai/docs/getting-started/overview
-
https://github.com/langchain-ai/langchain-upstage/releases/tag/libs%2Fupstage%2Fv0.5.0

2025-01-08 10:13:22 -05:00
Zapiron
9f5fa50bbf docs: Remove additional ` in heading (#29096)
Remove the additional ` in the pipe operator heading
2025-01-08 10:11:30 -05:00
Prashanth Rao
b1dafaef9b Kùzu package integration docs (#29076)
## Langchain Kùzu

### Description
 
This PR adds docs for the `langchain-kuzu` package [on
PyPI](https://pypi.org/project/langchain-kuzu/) that was recently
published, allowing Kùzu users to more easily use and work with
LangChain QA chains. The package will also make it easier for the Kùzu
team to continue supporting and updating the integration over future
releases.

### Twitter Handle

Please tag [@kuzudb](https://x.com/kuzudb) on Twitter once this PR is
merged, so LangChain users can be notified!

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2025-01-08 01:14:00 +00:00
Erick Friis
cc0f81f40f partners/groq: release 0.2.3 (#29081) 2025-01-07 23:36:51 +00:00
Erick Friis
fcc9cdd100 multiple: disable socket for unit tests (#29080) 2025-01-07 15:31:50 -08:00
Erick Friis
539ebd5431 groq: user agent (#29079) 2025-01-07 23:21:57 +00:00
Erick Friis
c5bee0a544 pinecone: bump core version (#29077) 2025-01-07 20:23:33 +00:00
Cory Waddingham
ce9e9f9314 pinecone: Review pinecone tests (#29073)
Title: langchain-pinecone: improve test structure and async handling

Description: This PR improves the test infrastructure for the
langchain-pinecone package by:
1. Implementing LangChain's standard test patterns for embeddings
2. Adding comprehensive configuration testing
3. Improving async test coverage
4. Fixing integration test issues with namespaces and async markers

The changes make the tests more robust, maintainable, and aligned with
LangChain's testing standards while ensuring proper async behavior in
the embeddings implementation.

Key improvements:
- Added standard EmbeddingsTests implementation
- Split custom configuration tests into a separate test class
- Added proper async test coverage with pytest-asyncio
- Fixed namespace handling in vector store integration tests
- Improved test organization and documentation

Dependencies: None (uses existing test dependencies)

Tests and Documentation:
- Added standard test implementation following LangChain's patterns
- Added comprehensive unit tests for configuration and async behavior
- All tests passing locally
- No documentation changes needed (internal test improvements only)

Twitter handle: N/A

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-07 11:46:30 -08:00
ccurme
d9c51b71c4 infra[patch]: drop prompty from core dependents (#29068) 2025-01-07 11:01:29 -05:00
Philippe PRADOS
2921597c71 community[patch]: Refactoring PDF loaders: 01 prepare (#29062)
- **Refactoring PDF loaders step 1**: "community: Refactoring PDF
loaders to standardize approaches"

- **Description:** Declare CloudBlobLoader in __init__.py; file_path is
Union[str, PurePath] everywhere.
- **Twitter handle:** pprados

This is one part of a larger Pull Request (PR) that is too large to be
submitted all at once.
This specific part focuses on preparing the update of all parsers.

For more details, see [PR
28970](https://github.com/langchain-ai/langchain/pull/28970).

@eyurtsev it's the start of a PR series.
2025-01-07 11:00:04 -05:00
lspataroG
a49448a7c9 Add Google Vertex AI Vector Search Hybrid Search Documentation (#29064)
Add examples in the documentation to use hybrid search in Vertex AI
[Vector
Search](https://github.com/langchain-ai/langchain-google/pull/628)
2025-01-07 10:29:03 -05:00
Keiichi Hirobe
0d226de25c [docs] Update indexing.ipynb (#29055)
According to https://github.com/langchain-ai/langchain/pull/21127, now
`AzureSearch` should be compatible with LangChain indexer.
2025-01-07 10:03:32 -05:00
ccurme
55677e31f7 text-splitters[patch]: release 0.3.5 (#29054)
Resolves https://github.com/langchain-ai/langchain/issues/29053
2025-01-07 09:48:26 -05:00
Erick Friis
187131c55c Revert "integrations[patch]: remove non-required chat param defaults" (#29048)
Reverts langchain-ai/langchain#26730

discuss best way to release default changes (esp openai temperature)
2025-01-06 14:45:34 -08:00
Bagatur
3d7ae8b5d2 integrations[patch]: remove non-required chat param defaults (#26730)
anthropic:
  - max_retries

openai:
  - n
  - temperature
  - max_retries

fireworks
  - temperature

groq
  - n
  - max_retries
  - temperature

mistral
  - max_retries
  - timeout
  - max_concurrent_requests
  - temperature
  - top_p
  - safe_mode

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-06 22:26:22 +00:00
UV
b9db8e9921 DOC: Improve human input prompt in FewShotChatMessagePromptTemplate example (#29023)
Fixes #29010 

This PR updates the example for FewShotChatMessagePromptTemplate by
modifying the human input prompt to include a more descriptive and
user-friendly question format ('What is {input}?') instead of just
'{input}'. This change enhances clarity and usability in the
documentation example.
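
The updated example shape, roughly:

```python
from langchain_core.prompts import (
    ChatPromptTemplate,
    FewShotChatMessagePromptTemplate,
)

examples = [{"input": "2+2", "output": "4"}]

example_prompt = ChatPromptTemplate.from_messages(
    [("human", "What is {input}?"), ("ai", "{output}")]  # was just "{input}"
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
)
```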

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-06 12:29:15 -08:00
ccurme
1f78d4faf4 voyageai[patch]: release 0.1.4 (#29046) 2025-01-06 20:20:19 +00:00
Eugene Evstafiev
6a152ce245 docs: add langchain-pull-md Markdown loader (#29024)
- [x] **PR title**: "docs: add langchain-pull-md Markdown loader"

- [x] **PR message**: 
- **Description:** This PR introduces the `langchain-pull-md` package to
the LangChain community. It includes a new document loader that utilizes
the pull.md service to convert URLs into Markdown format, particularly
useful for handling web pages rendered with JavaScript frameworks like
React, Angular, or Vue.js. This loader helps in efficient and reliable
Markdown conversion directly from URLs without local rendering, reducing
server load.
    - **Issue:** NA
    - **Dependencies:** requests >=2.25.1
    - **Twitter handle:** https://x.com/eugeneevstafev?s=21

- [x] **Add tests and docs**: 
1. Added unit tests to verify URL checking and conversion
functionalities.
2. Created a comprehensive example notebook detailing the usage of the
new loader.

- [x] **Lint and test**: 
- Completed local testing using `make format`, `make lint`, and `make
test` commands as per the LangChain contribution guidelines.


**Related Links:**
- [Package Repository](https://github.com/chigwell/langchain-pull-md)
- [PyPI Package](https://pypi.org/project/langchain-pull-md/)
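
Illustrative only; the `PullMdLoader` class name and its `url` parameter are assumptions based on the package description:

```python
from langchain_pull_md import PullMdLoader  # assumed export

loader = PullMdLoader(url="https://react.dev")  # a JavaScript-rendered page
docs = loader.load()
print(docs[0].page_content[:200])  # Markdown converted via pull.md
```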

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-06 19:32:43 +00:00
Ashvin
20a715a103 community: Fix redundancy in code. (#29022)
In my previous PR (#28953), I added an unwanted condition for validating
the Azure ML Endpoint. In this PR, I have rectified the issue.
2025-01-06 12:58:16 -05:00
Jason Rodrigues
c8d6f9d52b Update index.mdx (#29029)
spell check

2025-01-04 22:04:00 -05:00
Adrián Panella
acddfc772e core: allow artifact in create_retriever_tool (#28903)
Add an option to return content and artifacts, so that the full info of
the retrieved documents can also be accessed.

They are returned as a list of dicts in the `artifacts` property if
parameter `response_format` is set to `"content_and_artifact"`.

Defaults to `"content"` to keep current behavior.
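
A minimal sketch, assuming an existing `retriever`:

```python
from langchain_core.tools import create_retriever_tool

# `retriever` is any existing BaseRetriever instance
tool = create_retriever_tool(
    retriever,
    "search_documents",
    "Searches and returns documents from the store.",
    response_format="content_and_artifact",  # new option from this PR
)
# invocations now return the concatenated content, plus the retrieved
# documents' info as a list of dicts in the artifact
```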

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-03 22:10:31 +00:00
ccurme
3e618b16cd community[patch]: release 0.3.14 (#29019) 2025-01-03 15:34:24 -05:00
ccurme
18eb9c249d langchain[patch]: release 0.3.14 (#29018) 2025-01-03 15:15:44 -05:00
ccurme
8e50e4288c core[patch]: release 0.3.29 (#29017) 2025-01-03 14:58:39 -05:00
ccurme
85403bfa99 core[patch]: substantially speed up @deprecated (#29016)
Resolves https://github.com/langchain-ai/langchain/issues/26918

Unit tests don't raise any additional `LangChainDeprecationWarning`.
Would like guidance on how to test this more thoroughly if needed.

Note: speed up for `bind_tools` path is shown below. This is
**redundant** with the speedup in
https://github.com/langchain-ai/langchain/pull/29015. I include it for
demonstration purposes.

Before:

![Screenshot 2025-01-03 at 12 54
50 PM](https://github.com/user-attachments/assets/87f289eb-4cad-4304-85f7-5c58c59080f1)

After:

![Screenshot 2025-01-03 at 12 55
35 PM](https://github.com/user-attachments/assets/95ad0506-e1d1-4c5c-bb27-6a634d8810c9)
2025-01-03 14:38:53 -05:00
ccurme
4bb391fd4e core[patch]: remove deprecated functions from tool binding hotpath (#29015)
(Inspired by https://github.com/langchain-ai/langchain/issues/26918)

We rely on some deprecated public functions in the hot path for tool
binding (`convert_pydantic_to_openai_function`,
`convert_python_function_to_openai_function`, and
`format_tool_to_openai_function`). My understanding is that what is
deprecated is not the functionality they implement, but use of them in
the public API -- we expect to continue to rely on them.

Here we update these functions to be private and not deprecated. We keep
the public, deprecated functions as simple wrappers that can be safely
deleted.

The `@deprecated` wrapper adds considerable latency due to its use of
the `inspect` module. This update speeds up `bind_tools` by a factor of
~100x:

Before:

![Screenshot 2025-01-03 at 11 22
55 AM](https://github.com/user-attachments/assets/94b1c433-ce12-406f-b64c-ca7103badfe0)

After:

![Screenshot 2025-01-03 at 11 23
41 AM](https://github.com/user-attachments/assets/02d0deab-82e4-45ca-8cc7-a20b91a5b5db)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-03 19:29:01 +00:00
Eugene Evstafiev
a86904e735 docs: fix typo (#29012)
- **Description:** a minor fix of a typo
- **Issue:** NA
- **Dependencies:** NA
2025-01-03 09:52:24 -08:00
Erick Friis
919d1c7da6 box: remove box readme for api docs build (#29014) 2025-01-03 09:50:04 -08:00
Erick Friis
d8bc556c94 packages: update box location (#29013) 2025-01-03 09:45:13 -08:00
Amaan
8d7daa59fb docs: add langchain dappier retriever integration notebooks (#28931)
Add a retriever to interact with Dappier APIs with an example notebook.

The retriever can be invoked with:

```python
from langchain_dappier import DappierRetriever

retriever = DappierRetriever(
    data_model_id="dm_01jagy9nqaeer9hxx8z1sk1jx6",
    k=5
)

retriever.invoke("latest tech news")
```

This retrieves 5 documents related to the latest news in the tech sector.
The included notebook also goes into deeper detail about controlling
filters such as selecting a data model, the number of documents to
return, site domain reference, minimum articles from the reference
domain, and the search algorithm, as well as including the retriever in a
chain.

The integration package can be found over here -
https://github.com/DappierAI/langchain-dappier
2025-01-03 10:21:41 -05:00
ccurme
0185010b88 community[patch]: additional check for prompt caching support (#29008)
Prompt caching explicitly excludes `gpt-4o-2024-05-13`:
https://platform.openai.com/docs/guides/prompt-caching

Resolves https://github.com/langchain-ai/langchain/issues/28997
2025-01-03 10:14:07 -05:00
zzaebok
4de52e7891 docs: fix typo in callbacks_custom_events (#29005)
This PR corrects a simple typo (dipsatch -> dispatch) in the how-to
guide.
2025-01-03 09:36:21 -05:00
Andreas Motl
e493e227c9 docs: CrateDB: Educate readers about full and semantic cache components (#29000)
Dear @ccurme and @efriis,

following up on our initial patch adding documentation about CrateDB
[^1], with version 0.1.0, just released, the [CrateDB
provider](https://python.langchain.com/docs/integrations/providers/cratedb/)
starts providing `CrateDBCache` and `CrateDBSemanticCache` classes. This
little patch updates the documentation accordingly.

Happy New Year!

With kind regards,
Andreas.

[^1]: Thanks for merging
https://github.com/langchain-ai/langchain/pull/28877 so quickly.

/cc @kneth, @simonprickett


#### Preview
- [Full
Cache](https://langchain-git-fork-crate-workbench-docs-cratedb-cache-langchain.vercel.app/docs/integrations/providers/cratedb/#full-cache)
- [Semantic
Cache](https://langchain-git-fork-crate-workbench-docs-cratedb-cache-langchain.vercel.app/docs/integrations/providers/cratedb/#semantic-cache)
2025-01-03 09:31:05 -05:00
Ahmad Elmalah
b258ff1930 Docs: Add 'Optional' to installation section to fix an issue (#28902)
Problem:
"Optional" object is used in one example without importing, which raises
the following error when copying the example into IDE or Jupyter Lab

![image](https://github.com/user-attachments/assets/3a6c48cc-937f-4774-979b-b3da64ced247)

Solution:
Importing Optional from the typing_extensions module solves the problem!
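
The fix amounts to adding the missing import:

```python
from typing_extensions import Optional
```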

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-03 00:05:27 +00:00
Erick Friis
97dc906a18 docs: add stripe toolkit (#28122) 2025-01-02 16:03:37 -08:00
RuofanChen03
5c32307a7a docs: Add FAISS Filter with Advanced Query Operators Documentation & Demonstration (#28938)
## Description
This pull request updates the documentation for FAISS regarding filter
construction, following the changes made in commit `df5008f`.

## Issue
None. This is a follow-up PR for documentation of
[#28207](https://github.com/langchain-ai/langchain/pull/28207)
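
An illustrative sketch of an advanced filter (operator syntax assumed from #28207; see the updated docs for the exact forms):

```python
# assuming an existing FAISS `vector_store` whose documents carry a "page" field
results = vector_store.similarity_search(
    "foundational theories",
    k=4,
    filter={"page": {"$gte": 2}},  # advanced operator form, per #28207
)
```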

## Dependencies:
None.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-02 16:08:25 -05:00
Tari Yekorogha
ba9dfd9252 docs: Add FalkorDB Chat Message History and Update Package Registry (#28914)
This commit updates the documentation and package registry for the
FalkorDB Chat Message History integration.

**Changes:**

- Added a comprehensive example notebook
falkordb_chat_message_history.ipynb demonstrating how to use FalkorDB
for session-based chat message storage.

- Added a provider notebook for FalkorDB

- Updated libs/packages.yml to register FalkorDB as an integration
package, following LangChain's new guidelines for community
integrations.

**Notes:**

- This update aligns with LangChain's process for registering new
integrations via documentation updates and package registry
modifications.

- No functional or core package changes were made in this commit.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-02 15:46:47 -05:00
ccurme
39b35b3606 docs[patch]: fix link (#28994) 2025-01-02 15:38:31 -05:00
Ashvin
d26c102a5a community: Update azureml endpoint (#28953)
- In this PR, I have updated the AzureML Endpoint to use the latest
endpoint path.
- **Description:** I have changed the existing `/chat/completions` to
`/models/chat/completions` in
libs/community/langchain_community/llms/azureml_endpoint.py
    - **Issue:** #25702

---------

Co-authored-by: = <=>
2025-01-02 14:47:02 -05:00
ccurme
7c28321f04 core[patch]: fix deprecation admonition in API ref (#28992)
Before:

![Screenshot 2025-01-02 at 1 49
30 PM](https://github.com/user-attachments/assets/cb30526a-fc0b-439f-96d1-962c226d9dc7)

After:

![Screenshot 2025-01-02 at 1 49
38 PM](https://github.com/user-attachments/assets/32c747ea-6391-4dec-b778-df457695d197)
2025-01-02 14:37:55 -05:00
Yanzhong Su
d57f0c46da docs: fix typo in how-to guides (#28951)
This PR is to correct a simple typo in how-to guides section.
2025-01-02 14:11:25 -05:00
Mohammad Mohtashim
0e74757b0a (Community): DuckDuckGoSearchAPIWrapper backend changed from api to auto (#28961)
- **Description:** The `DuckDuckGoSearchAPIWrapper` default value for
`backend` has been changed from `api` to `auto` to avoid a UserWarning.
- **Issue:** #28957
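
A sketch of the new default:

```python
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper

wrapper = DuckDuckGoSearchAPIWrapper(backend="auto")  # previously backend="api"
print(wrapper.run("LangChain"))
```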
2025-01-02 14:08:22 -05:00
Saeed Hassanvand
273b2fe81e docs: Remove deprecated schema() usage in examples (#28956)
This pull request updates the documentation in
`docs/docs/how_to/custom_tools.ipynb` to reflect the recommended
approach for generating JSON schemas in Pydantic. Specifically, it
replaces instances of the deprecated `schema()` method with the newer
and more versatile `model_json_schema()`.
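
For example:

```python
from pydantic import BaseModel

class CalculatorInput(BaseModel):
    a: int
    b: int

# CalculatorInput.schema()           # deprecated in Pydantic v2
CalculatorInput.model_json_schema()  # recommended replacement
```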
2025-01-02 12:22:29 -05:00
Ikko Eltociear Ashimine
a092f5a607 docs: update multi_vector.ipynb (#28954)
accross -> across
2025-01-02 12:16:52 -05:00
Mohammad Mohtashim
aa551cbcee (Core) Small Change in Docstring for method partial for BasePromptTemplate (#28969)
- **Description:** Very small change in Docstring for
`BasePromptTemplate`
- **Issue:** #28966
2025-01-02 12:16:30 -05:00
minpeter
a873e0fbfb community: update documentation and model IDs for FriendliAI provider (#28984)
### Description  

- In the example, remove `llama-2-13b-chat` and
`mixtral-8x7b-instruct-v0-1`.
- Fix the Friendli LLM streaming implementation.
- Update examples in the documentation and remove duplicates.

### Issue  
N/A  

### Dependencies  
None  

### Twitter handle  
`@friendliai`
2025-01-02 12:15:59 -05:00
Hrishikesh Kalola
437ec53e29 langchain.agents: corrected documentation (#28986)
**Description:**
This PR updates the codebase to reflect the deprecation of the AgentType
feature. It includes the following changes:

Documentation update:
- Added a deprecation notice to the AgentType class comment.
- Provided a reference to the official LangChain migration guide for
transitioning to LangGraph agents.

Reference link: https://python.langchain.com/docs/how_to/migrate_agent/

**Twitter handle:** @hrrrriiiishhhhh

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-02 12:13:42 -05:00
Muhammad Magdy Abomouta
308825a6d5 docs: Update streaming.mdx (#28985)
Description:
Add a missing 'has' verb in the Streaming Conceptual Guide.
2025-01-02 11:54:32 -05:00
Mohammad Mohtashim
49a26c1fca (Community): Fix Keyword argument for AzureAIDocumentIntelligenceParser (#28959)
- **Description:** Fix the `body` keyword argument for
`AzureAIDocumentIntelligenceParser`
- **Issue:** #28948
2025-01-02 11:27:12 -05:00
ccurme
efc687a13b community[patch]: fix instantiation for Slack tools (#28990)
Believe the current implementation raises PydanticUserError following
[this](https://github.com/pydantic/pydantic/releases/tag/v2.10.1)
Pydantic release.

Resolves https://github.com/langchain-ai/langchain/issues/28989
2025-01-02 16:14:17 +00:00
Yunlin Mao
c59093d67f docs: add modelscope endpoint (#28941)
## Description

To integrate ModelScope inference API endpoints for both Embeddings,
LLMs and ChatModels, install the package
`langchain-modelscope-integration` (as discussed in issue #28928 ). This
is necessary because the package name `langchain-modelscope` was already
registered by another party.

ModelScope is a premier platform designed to connect model checkpoints
with model applications. It provides the necessary infrastructure to
share open models and promote model-centric development. For more
information, visit GitHub page:
[ModelScope](https://github.com/modelscope).
2025-01-02 10:08:41 -05:00
Sathesh Sivashanmugam
a37be6dc65 docs: Minor typo fixed, install necessary pip (#28976)
Description: Document update. A minor typo is fixed. Install lxml as
required.
    Issue: -
    Dependencies: -
    Twitter handle: @sathesh

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-02 04:21:29 +00:00
Yanzhong Su
b8aa8c86ba docs: Remove redundant word for improved sentence fluency (#28975)

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-02 04:14:08 +00:00
Bagatur
1c797ac68f infra: speed up unit tests (#28974)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-02 04:13:08 +00:00
Morgante Pell
79fc9b6b04 cli: bump gritql version (#28981)
**Description:**

bump gritql dependency, to use new binary names from
[here](https://github.com/getgrit/gritql/pull/565)

**Issue:**

fixes https://github.com/langchain-ai/langchain/issues/27822
2025-01-01 20:02:46 -08:00
Bagatur
edbe7d5f5e core,anthropic[patch]: fix with_structured_output typing (#28950) 2024-12-28 15:46:51 -05:00
Scott Hurrey
ccf69368b4 docs: Update documentation for BoxBlobLoader, extra_fields (#28942)
- **Description:** Update docs to add BoxBlobLoader and extra_fields to
all Box connectors.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @BoxPlatform


2024-12-27 12:06:58 -08:00
dabzr
ffbe5b2106 partners: fix default value for stop_sequences in ChatGroq (#28924)
- **Description:**  
This PR addresses an issue with the `stop_sequences` field in the
`ChatGroq` class. Currently, the field is defined as:
```python
stop: Optional[Union[List[str], str]] = Field(None, alias="stop_sequences")
```  
This causes the language server (LSP) to raise an error indicating that
the `stop_sequences` parameter must be implemented. The issue occurs
because `Field(None, alias="stop_sequences")` behaves differently from
`Field(default=None, alias="stop_sequences")`.


![image](https://github.com/user-attachments/assets/bfc34cb1-c664-4c31-b856-8f18419c7350)
To resolve the issue, the field is updated to:  
```python
stop: Optional[Union[List[str], str]] = Field(default=None, alias="stop_sequences")
```  
While this issue does not affect runtime behavior, it ensures
compatibility with LSPs and improves the development experience.
- **Issue:** N/A  
- **Dependencies:** None
2024-12-26 16:43:34 -05:00
Andy Wermke
5940ed3952 community: Fix error handling bug in ChatDeepInfra (#28918)
In the async ClientResponse, `response.text` is not a string property,
but an asynchronous function returning a string.
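
In aiohttp terms:

```python
async def read_body(response) -> str:
    # aiohttp's ClientResponse: text() is a coroutine method, not a property
    return await response.text()  # not `response.text`, which is the bound method
```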
2024-12-26 14:45:12 -05:00
Ahmad Elmalah
d46fddface Docs: Updating 'JSON Schema' code block output (#28909)
The output seems outdated. I ran the example several times and this is
the updated output; one key-value pair was missing!

![image](https://github.com/user-attachments/assets/95231ce7-714e-43ac-b07e-57debded4735)
2024-12-26 14:37:11 -05:00
Steve Kim
0fd4a68d34 docs: Update VectorStoreTabs.js (#28916)
- Title: Fix typo to correct "embedding" to "embeddings" in PGVector
initialization example

- Problem: There is a typo in the example code for initializing the
PGVector class. The current parameter "embedding" is incorrect as the
class expects "embeddings".

- Correction: The corrected code snippet is:

```python
vector_store = PGVector(
    embeddings=embeddings,
    collection_name="my_docs",
    connection="postgresql+psycopg://...",
)
```
2024-12-26 14:31:58 -05:00
Benjamin
8db2338e96 docs: Fix typo in Build a Retrieval Augmented Generation Part 1 section (#28921)
This PR fixes a typo in [Build a Retrieval Augmented Generation (RAG)
App: Part 1](https://python.langchain.com/docs/tutorials/rag/)
2024-12-26 14:29:35 -05:00
zep.hyr
7b4d2d5d44 Community: Add cost information for missing OpenAI model (#28882)
In the previous commit, the cached model key for this model was omitted.
When using the "gpt-4o-2024-11-20" model, the token count in the
callback appeared as 0, and the cost was recorded as 0.

We add model and cost information so that the token count and cost can
be displayed for the respective model.

- The message before modification is as follows.
```
Tokens Used: 0
Prompt Tokens: 0
Prompt Tokens Cached: 0 
Completion Tokens: 0  
Reasoning Tokens: 0
Successful Requests: 0
Total Cost (USD): $0.0
```

- The message after modification is as follows.
```
Tokens Used: 3783 
Prompt Tokens: 3625
Prompt Tokens Cached: 2560
Completion Tokens: 158
Reasoning Tokens: 0
Successful Requests: 1
Total Cost (USD): $0.010642500000000001
```
2024-12-26 14:28:31 -05:00
Erick Friis
5991b45a88 docs: change margin (#28908) 2024-12-24 21:04:08 +00:00
Erick Friis
17f1ec8610 docs: remove console log (#28894) 2024-12-23 21:22:21 +00:00
Erick Friis
3726a944c0 docs: sorted by downloads [wip] (#28869) 2024-12-23 13:13:35 -08:00
Andreas Motl
6352edf77f docs: CrateDB: Register package langchain-cratedb, and add minimal "provider" documentation (#28877)
Hi Erick. Coming back from a previous attempt, we now made a separate
package for the CrateDB adapter, called `langchain-cratedb`, as advised.
Other than registering the package within `libs/packages.yml`, this
patch includes a minimal amount of documentation to accompany the advent
of this new package. Let us know about any mistakes we made, or changes
you would like to see. Thanks, Andreas.

## About
- **Description:** Register a new database adapter package,
`langchain-cratedb`, providing traditional vector store, document
loader, and chat message history features for a start.
- **Addressed to:** @efriis, @eyurtsev
- **References:** GH-27710
- **Preview:** [Providers » More »
CrateDB](https://langchain-git-fork-crate-workbench-register-la-4bf945-langchain.vercel.app/docs/integrations/providers/cratedb/)

## Status
- **PyPI:** https://pypi.org/project/langchain-cratedb/
- **GitHub:** https://github.com/crate/langchain-cratedb
- **Documentation (CrateDB):**
https://cratedb.com/docs/guide/integrate/langchain/
- **Documentation (LangChain):** _This PR._

## Backlog?
Is this applicable for this kind of patch?
> - [ ] **Add tests and docs**: If you're adding a new integration,
please include
> 1. a test for the integration, preferably unit tests that do not rely
on network access,
> 2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

## Q&A
1. Notebooks that use the LangChain CrateDB adapter are currently at
[CrateDB LangChain
Examples](https://github.com/crate/cratedb-examples/tree/main/topic/machine-learning/llm-langchain),
and the documentation refers to them. Because they are derived from very
old blueprints coming from LangChain 0.0.x times, we guess they need a
refresh before adding them to `docs/docs/integrations`. Is it acceptable
to merge this minimal package registration + documentation patch, which
already includes valid code snippets in `cratedb.mdx`, and add the
corresponding notebooks in a subsequent patch later?

2. How would this package get into the tabular list of _Integration
Packages_ enumerated on the [documentation entrypoint page about
Providers](https://python.langchain.com/docs/integrations/providers/)?

/cc Please also review, @ckurze, @wierdvanderhaar, @kneth,
@simonprickett, if you can find the time. Thanks!
2024-12-23 10:55:44 -05:00
Wang Ran (汪然)
e5c9da3eb6 core[patch]: remove redundant imports (#28861)
`Graph` has already been imported at line 62.
2024-12-23 10:31:23 -05:00
Adrián Panella
8d9907088b community(azuresearch): allow to use any valid credential (#28873)
Add option to use any valid credential type.
Differentiates async cases needed by Azure Search.

This could replace the use of a static token
2024-12-23 10:05:48 -05:00
ZhangShenao
4b4d09f82b [Doc] Improvement: Fix docs of ChatMLX (#28884)
- `ChatMLX` doesn't support the system role.
- Fixes https://github.com/langchain-ai/langchain/issues/28532
2024-12-23 09:51:44 -05:00
Mohammad Mohtashim
41b6a86bbe Community: LlamaCppEmbeddings embed_documents and embed_query (#28827)
- **Description:** `embed_documents` and `embed_query` were throwing the
error stated in the issue. The `Llama` client returns the embeddings in a
nested list, which was not accounted for in the current implementation,
causing the stated error.
- **Issue:** #28813
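A minimal sketch of the kind of fix described (helper name hypothetical):

```python
def normalize_embedding(embedding: list) -> list[float]:
    # The Llama client may return a nested list (e.g. [[0.1, 0.2, ...]])
    # per input; unwrap one level before returning a flat list of floats.
    if embedding and isinstance(embedding[0], list):
        embedding = embedding[0]
    return [float(v) for v in embedding]
```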

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-23 09:50:22 -05:00
Darien Schettler
32917a0b98 Update dataframe.py (#28871)
community: optimize DataFrame document loader

**Description:**
Simplify the `lazy_load` method in the DataFrame document loader by
combining text extraction and metadata cleanup into a single operation.
This makes the code more concise while maintaining the same
functionality.

**Issue:** N/A

**Dependencies:** None

**Twitter handle:** N/A
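A hedged sketch of the combined operation (attribute names assumed from the loader):

```python
from langchain_core.documents import Document

def lazy_load(self):
    for _, row in self.data_frame.iterrows():
        # Series.pop removes the text column, leaving only metadata behind.
        text = row.pop(self.page_content_column)
        yield Document(page_content=str(text), metadata=row.to_dict())
```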
2024-12-22 19:16:16 -05:00
Erick Friis
cb4e6ac941 docs: frontmatter gen, colab/github links (#28852) 2024-12-21 17:38:31 +00:00
Mikhail Khludnev
2a7469e619 add langchain-localai link to Providers page localai.mdx (#28855)
follow up #28751

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-20 22:02:32 +00:00
yeounhak
f38fc89f35 community: Corrected aload func to be asynchronous from webBaseLoader (#28337)
- **Description:** The aload function, contrary to its name, was not an
asynchronous function, so it could not run concurrently with other
asynchronous functions.

- **Issue:** #28336 

- **Test:** Done

- **Docs:**
[here](e0a95e5646/docs/docs/integrations/document_loaders/web_base.ipynb (L201))

- **Lint:** All checks passed
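A hedged sketch of what a truly asynchronous `aload` can look like, built on the base class's async lazy loader:

```python
from langchain_core.documents import Document

async def aload(self) -> list[Document]:
    # A coroutine that consumes the async iterator, so it can be awaited
    # and scheduled concurrently with other async work.
    return [doc async for doc in self.alazy_load()]
```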


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-20 14:42:52 -05:00
Ahmad Elmalah
a08c76a6b2 Docs: Add langgraph to installation section for Rag tutorial (#28849)
**Issue**:
This tutorial depends on langgraph; however, langgraph is not mentioned
in the installation section for the tutorial, which raises an error when
copying and pasting the code snippets, as shown below:

![image](https://github.com/user-attachments/assets/829c9118-fcf8-4f17-9abb-32e005ebae07)

**Solution**:
Just add the langgraph package to the installation section, for both the
pip and Conda tabs, as this tutorial requires it.
2024-12-20 12:08:19 -05:00
Mohammad Mohtashim
8cf5f20bb5 required tool_choice added for ChatHuggingFace (#28851)
- **Description:** HuggingFace Inference Client v3 now supports
`required` as a `tool_choice` option, which has been added here.
- **Issue:** #28842
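A hedged usage sketch (model repo chosen arbitrarily):

```python
from langchain_core.tools import tool
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

llm = ChatHuggingFace(llm=HuggingFaceEndpoint(repo_id="meta-llama/Llama-3.1-8B-Instruct"))
# Force the model to call a tool now that "required" is supported:
llm_required = llm.bind_tools([add], tool_choice="required")
```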
2024-12-20 12:06:04 -05:00
Sylvain DEPARTE
fcba567a77 partners: allow to set Prefix in AIMessage (for MistralAI) (#28846)
**Description:**

Added the ability to set a `prefix` attribute to prevent this error:
```
httpx.HTTPStatusError: Error response 400 while fetching https://api.mistral.ai/v1/chat/completions: {"object":"error","message":"Expected last role User or Tool (or Assistant with prefix True) for serving but got assistant","type":"invalid_request_error","param":null,"code":null}
```
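A hedged sketch of the idea; the exact attribute the integration reads is an assumption here:

```python
from langchain_core.messages import AIMessage

# Mark the trailing assistant message as a prefix so the Mistral API
# accepts an assistant message in the last position.
partial = AIMessage(
    content="Here is the answer:",
    additional_kwargs={"prefix": True},  # assumed mechanism
)
```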

Co-authored-by: Sylvain DEPARTE <sylvain.departe@wizbii.com>
2024-12-20 11:09:45 -05:00
Jacob Mansdorfer
6d81137325 community: adding langchain-predictionguard partner package documentation (#28832)
- [x] **PR title**: "community: adding langchain-predictionguard
partner package documentation"

- [x] **PR message**:
- **Description:** This PR adds documentation for the
langchain-predictionguard package to the main langchain repo, along with
deprecating the current Prediction Guard LLMs package. The LLMs package
was previously broken, so I also updated it one final time to allow it to
continue working from this point onward. This enables users to chat
with LLMs through the Prediction Guard ecosystem.
    - **Package Links**: 
        -  [PyPI](https://pypi.org/project/langchain-predictionguard/)
- [Github
Repo](https://www.github.com/predictionguard/langchain-predictionguard)
    - **Issue:** None
    - **Dependencies:** None
- **Twitter handle:** [@predictionguard](https://x.com/predictionguard)

- [x] **Add tests and docs**: All docs have been added for the partner
package, and the current LLMs package test was updated to reflect
changes.


- [x] **Lint and test**: Linting tests are all passing.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-20 10:51:44 -05:00
Leonid Ganeline
5135bf1002 docs: integrations google packages (#28840)
Issue: several Google integrations are implemented in repos of the
[github.com/googleapis](https://github.com/googleapis) organization, and
these integrations are easy to overlook even though they are essential.
Change: added a list of all packages that have Google integrations, along
with a description of this situation.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-20 09:46:06 -05:00
Barry McCardel
5a351a133c fix tiny 'lil typo in tutorial page (#28839)
ez pz
2024-12-19 16:33:59 -08:00
ccurme
f0e858b4e3 core[patch]: release 0.3.28 (#28837) 2024-12-19 17:52:32 -05:00
ccurme
137d1e9564 langchain[patch]: fix test following update to langchain-openai (#28838) 2024-12-19 22:39:48 +00:00
Emmanuel Leroy
c8db5a19ce langchain_community.chat_models.oci_generative_ai: Fix a bug when using optional parameters in tools (#28829)
When using tools with optional parameters, the parameter `type` is no
longer available since the langchain 0.3 update (because of the pydantic
upgrade?) and there is now an `anyOf` field instead. This results in the
`type` being `None` in the chat request for the tool parameter, and the
LLM call fails with the error:

```
oci.exceptions.ServiceError: {'target_service': 'generative_ai_inference', 
'status': 400, 'code': '400', 
'opc-request-id': '...', 
'message': 'Parameter definition must have a type.', 
'operation_name': 'chat'
...
}
```

Example code that fails:

```
from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI
from langchain_core.tools import tool
from typing import Optional

llm = ChatOCIGenAI(
        model_id="cohere.command-r-plus",
        service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
        compartment_id="ocid1.compartment.oc1...",
        auth_profile="your_profile",
        auth_type="API_KEY",
        model_kwargs={"temperature": 0, "max_tokens": 3000},
)

@tool
def test(example: Optional[str] = None):
    """This is the tool to use to test things

    Args:
        example: example variable, defaults to None
    """
    return "this is a test"

llm_with_tools = llm.bind_tools([test])

result = llm_with_tools.invoke("can you make a test for g")
```

This PR sets the param type to `any` in that case, and fixes the
problem.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-19 22:17:34 +00:00
Bagatur
c3ccd93c12 patch openai json mode test (#28831) 2024-12-19 21:43:32 +00:00
Bagatur
ce6748dbfe xfail openai image token count test (#28828) 2024-12-19 21:23:30 +00:00
Anusha Karkhanis
26bdf40072 Langchain_Community: SQL LanguageParser (#28430)
## Description
(This PR has contributions from @khushiDesai, @ashvini8, and
@ssumaiyaahmed).

This PR addresses **Issue #11229**, which calls for SQL support in
document parsing. This is integrated into the generic TreeSitter parsing
library, allowing LangChain users to easily load SQL codebases into
smaller, manageable "documents."

This pull request adds a new ```SQLSegmenter``` class, which provides
the SQL integration.

## Issue
**Issue #11229**: Add support for a variety of languages to
LanguageParser

## Testing
We created a file ```test_sql.py``` with several tests to ensure the
```SQLSegmenter``` is functional. Below are the tests we added:

- ```def test_is_valid```: Checks SQL validity.
- ```def test_extract_functions_classes```: Extracts individual SQL
statements.
- ```def test_simplify_code```: Simplifies SQL code with comments.

---------

Co-authored-by: Syeda Sumaiya Ahmed <114104419+ssumaiyaahmed@users.noreply.github.com>
Co-authored-by: ashvini hunagund <97271381+ashvini8@users.noreply.github.com>
Co-authored-by: Khushi Desai <khushi.desai@advantawitty.com>
Co-authored-by: Khushi Desai <59741309+khushiDesai@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-19 20:30:57 +00:00
Bagatur
a7f2148061 openai[patch]: Release 0.2.14 (#28826) 2024-12-19 11:56:44 -08:00
Bagatur
1378ddfa5f openai[patch]: type reasoning_effort (#28825) 2024-12-19 19:36:49 +00:00
Erick Friis
6a37899b39 core: dont mutate tool_kwargs during tool run (#28824)
fixes https://github.com/langchain-ai/langchain/issues/24621
2024-12-19 18:11:56 +00:00
Qun
033ac41760 fix crash when using create_xml_agent with parameterless function as … (#26002)
When using `create_xml_agent` or `create_json_chat_agent` to create an
agent, if the function corresponding to a tool is a parameterless
function, the `XMLAgentOutputParser` or `JSONAgentOutputParser` will
parse the tool input into an empty string, and `BaseTool` will parse it
into a positional argument.
So the program ultimately crashes because we invoke a parameterless
function with a positional argument. Specifically, the code below will
raise StopIteration in
[_parse_input](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools/base.py#L419)
```python
from langchain import hub
from langchain.agents import AgentExecutor, create_json_chat_agent, create_xml_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def ping() -> str:
    """A parameterless example tool that triggers the bug."""
    return "pong"

tools = [ping]

prompt = hub.pull("hwchase17/react-chat-json")

llm = ChatOpenAI()

# agent = create_xml_agent(llm, tools, prompt)
agent = create_json_chat_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke(......)
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-19 13:00:46 -05:00
Luke
f69695069d text_splitters: Add HTMLSemanticPreservingSplitter (#25911)
**Description:** 

Current HTML splitters rely on secondary use of the
`RecursiveCharacterTextSplitter` to further chunk the document into
manageable chunks. The issue with this is that it fails to maintain
important structures such as tables and lists within HTML.

This implementation of an HTML splitter allows the user to define a
maximum chunk size, HTML elements to preserve in full, options to
preserve `<a>` href links in the output, and custom handlers.

The core splitting begins with headers, similar to `HTMLHeaderSplitter`.
If these sections exceed the length of the `max_chunk_size`, further
recursive splitting is triggered. During this splitting, elements listed
to preserve will be excluded from the splitting process. This can cause
chunks to be slightly larger than the max size, depending on preserved
length. However, all contextual relevance of the preserved item remains
intact.

**Custom Handlers**: Sometimes, companies such as Atlassian have custom
HTML elements that are not parsed by default with `BeautifulSoup`.
Custom handlers allow a user to provide a function to be run whenever a
specific HTML tag is encountered. This allows the user to preserve and
gather information within custom HTML tags that `bs4` would potentially
miss during extraction.

**Dependencies:** User will need to install `bs4` in their project to
utilise this class

I have also added in `how_to` and unit tests, which require `bs4` to
run, otherwise they will be skipped.
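A hedged usage sketch, with constructor parameters inferred from the description above:

```python
from langchain_text_splitters import HTMLSemanticPreservingSplitter

html = "<h1>Intro</h1><p>Some text.</p><table><tr><td>kept intact</td></tr></table>"

splitter = HTMLSemanticPreservingSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")],
    max_chunk_size=500,
    elements_to_preserve=["table", "ul", "ol"],  # never split inside these
    preserve_links=True,
)
chunks = splitter.split_text(html)
```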

Flowchart of process:


![HTMLSemanticPreservingSplitter](https://github.com/user-attachments/assets/20873c36-22ed-4c80-884b-d3c6f433f5a7)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-19 12:09:22 -05:00
Tommaso De Lorenzo
24bfa062bf langchain: add support for Google Anthropic Vertex AI model garden provider in init_chat_model (#28177)
Simple modification to add support for Anthropic models deployed in the
Google Vertex AI Model Garden to `init_chat_model`, by importing
`ChatAnthropicVertex`

- [x] **Lint and test**
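A hedged usage sketch; the provider key is assumed to be `google_anthropic_vertex` and requires `langchain-google-vertexai`:

```python
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "claude-3-5-sonnet-v2@20241022",  # model name chosen for illustration
    model_provider="google_anthropic_vertex",
    project="my-gcp-project",
    location="us-east5",
)
```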
2024-12-19 12:06:21 -05:00
Erick Friis
ff7b01af88 anthropic: less pydantic for client (#28823) 2024-12-19 08:00:02 -08:00
Erick Friis
f1d783748a anthropic: sdk bump (#28820) 2024-12-19 15:39:21 +00:00
Erick Friis
907f36a6e9 fireworks: fix lint (#28821) 2024-12-19 15:36:36 +00:00
Erick Friis
6526db4871 community: bump core (#28819) 2024-12-19 06:41:53 -08:00
Vignesh A
4c9acdfbf1 Community: Add OpenAI prompt caching and reasoning tokens tracking (#27135)
Added token tracking for OpenAI's prompt caching and reasoning tokens.
Costs updated from https://openai.com/api/pricing/

usage example
```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model_name="o1-mini",temperature=1)

with get_openai_callback() as cb:
    response = llm.invoke("hi "*1500)
    print(cb)
```
Output
```
Tokens Used: 1720
	Prompt Tokens: 1508
		Prompt Tokens Cached: 1408
	Completion Tokens: 212
		Reasoning Tokens: 192
Successful Requests: 1
Total Cost (USD): $0.0049559999999999995
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-19 09:31:13 -05:00
ScriptShi
97f1e1d39f community: tablestore vector store check the dimension of the embedding when writing it to store. (#28812)
Added some restrictions to a vector store I previously released in
community.
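A hedged sketch of such a check (names hypothetical):

```python
def check_embedding_dimension(embedding: list[float], expected_dim: int) -> None:
    # Reject writes whose embedding length does not match the store's
    # configured dimension, instead of failing later at query time.
    if len(embedding) != expected_dim:
        raise ValueError(
            f"embedding has dimension {len(embedding)}, expected {expected_dim}"
        )
```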
2024-12-19 09:30:43 -05:00
fzowl
024f020f04 docs: Adding VoyageAI to 'integrations/text_embedding/' dropdown (#28817)

**Description:** 
Adding VoyageAI's text_embedding to 'integrations/text_embedding/'


2024-12-19 09:29:30 -05:00
Wang Ran (汪然)
f48755d35b core: typo Utilities for tests. -> Utilities for pydantic. (#28814)
**Description:** typo
2024-12-19 09:26:17 -05:00
Wang Ran (汪然)
51b8ddaf10 core: typo in runnable (#28815)
**Description:** Typo
2024-12-19 09:25:57 -05:00
Leonid Ganeline
c823cc532d docs: integration providers index update (#28808)
Issue: integrations related to a provider can be spread across several
packages and classes, which makes it very hard to find a provider using
only ToCs.
Fix: we have a very useful and helpful tool for searching by provider
name: the `Search` field. So I've added recommendations for using this
field. It seems obvious, but it is not.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-19 02:28:37 +00:00
wangda
24c4af62b0 docs: Correcting spelling mistakes (#28780) 2024-12-18 21:11:50 -05:00
Erick Friis
3b036a1cf2 partners/fireworks: release 0.2.6 (#28805) 2024-12-18 22:48:35 +00:00
Erick Friis
4eb8bf7793 partners/anthropic: release 0.3.1 (#28801) 2024-12-18 22:45:38 +00:00
Lu Peng
50afa7c4e7 community: add new parameter default_headers (#28700)
- [x] **PR title**: "community: 1. add new parameter `default_headers`
for oci model deployments and oci chat model deployments. 2. updated k
parameter in OCIModelDeploymentLLM class."


- [x] **PR message**:
- **Description:** 1. add new parameters `default_headers` for oci model
deployments and oci chat model deployments. 2. updated k parameter in
OCIModelDeploymentLLM class.


- [x] **Add tests and docs**:
  1. unit tests
  2. notebook

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-18 22:33:23 +00:00
Christophe Bornet
1e88adaca7 all: Add pre-commit hook (#26993)
This calls `make format` on projects that have modified files.
So `poetry install --with lint` must have been done for those projects.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-18 22:22:58 +00:00
Erick Friis
cc616de509 partners/xai: release 0.1.1 (#28806) 2024-12-18 22:15:24 +00:00
Erick Friis
ba8c1b0d8c partners/groq: release 0.2.2 (#28804) 2024-12-18 22:12:02 +00:00
Erick Friis
a119cae5bd partners/mistralai: release 0.2.4 (#28803) 2024-12-18 22:11:48 +00:00
Erick Friis
514d78516b partners/ollama: release 0.2.2 (#28802) 2024-12-18 22:11:08 +00:00
Bagatur
68940dd0d6 openai[patch]: Release 0.2.13 (#28800) 2024-12-18 22:08:47 +00:00
Erick Friis
4dc28b43ac community: release 0.3.13 (#28798) 2024-12-18 21:58:46 +00:00
Bagatur
557f63c2e6 core[patch]: Release 0.3.27 (#28799) 2024-12-18 21:58:03 +00:00
Bagatur
4a531437bb core[patch], openai[patch]: Handle OpenAI developer msg (#28794)
- Convert developer openai messages to SystemMessage
- store additional_kwargs={"__openai_role__": "developer"} so that the
correct role can be reconstructed if needed
- update ChatOpenAI to read in openai_role
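A hedged sketch of the round trip described above:

```python
from langchain_core.messages import SystemMessage

# An OpenAI "developer" message is represented as a SystemMessage tagged
# with its original role, so ChatOpenAI can reconstruct it on the way out.
msg = SystemMessage(
    content="Formatting re-enabled.",
    additional_kwargs={"__openai_role__": "developer"},
)
```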

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-18 21:54:07 +00:00
bjoaquinc
43b0736a51 docs: added a link to the taxonomy of labels in the contributing guide for easy access (#28719)
- **Description:** Added a link to make it easier to organize github
issues for langchain.
- **Issue:** After reading that there was a taxonomy of labels I had to
figure out how to find it.
    - **Dependencies:** None
    - **Twitter handle:** None



---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 16:08:18 -05:00
Erick Friis
079f1d93ab langchain: release 0.3.13 (#28797) 2024-12-18 12:32:00 -08:00
Yuxin Chen
3256b5d6ae text-splitters: fix state persistence issue in ExperimentalMarkdownSyntaxTextSplitter (#28373)
- **Description:** 
This PR resolves an issue with the
`ExperimentalMarkdownSyntaxTextSplitter` class, which retained
internal state across multiple calls to the `split_text` method. This
behaviour caused an unintended accumulation of chunks in instance
variables, leading to incorrect outputs when processing multiple
Markdown files sequentially.

- Modified `libs\text-splitters\langchain_text_splitters\markdown.py` to
reset the relevant internal attributes at the start of each `split_text`
invocation. This ensures each call processes the input independently.
- Added unit tests in
`libs\text-splitters\tests\unit_tests\test_text_splitters.py` to verify
the fix and ensure the state does not persist across calls.

- **Issue:**  
Fixes [#26440](https://github.com/langchain-ai/langchain/issues/26440).

- **Dependencies:**
No additional dependencies are introduced with this change.


- [x] Unit tests were added to verify the changes.
- [x] Updated documentation where necessary.  
- [x] Ran `make format`, `make lint`, and `make test` to ensure
compliance with project standards.
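A hedged sketch of the reset (attribute and helper names hypothetical):

```python
def split_text(self, text: str):
    # Reset per-call state so chunks from an earlier input cannot leak
    # into the result for this one.
    self.chunks = []
    self.current_chunk = None
    self.current_header_stack = []
    return self._split_text(text)  # hypothetical internal helper
```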

---------

Co-authored-by: Angel Chen <angelchen396@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 20:27:59 +00:00
Mohammad Mohtashim
7c8f977695 Community: Fix with_structured_output for ChatSambaNovaCloud (#28796)
- **Description:** The `kwargs` was being checked against a None object,
which prevented the rest of the code in `with_structured_output` from
executing. The check has been fixed in this PR.
- **Issue:** #28776
2024-12-18 14:35:06 -05:00
V.Prasanna kumar
684b146b18 Fixed adding float values into DynamoDB (#26562)
- [x] **PR title**: community: add float message support to DynamoDB
chat message history

- [x] **PR message**:
- **Description:** Pushing float values into DynamoDB raised an error;
solved by converting them to str type.
    - **Issue:** Float values were not getting pushed.
    - **Twitter handle:** VpkPrasanna

I have added a utility function for str conversion; let me know where to
place it, happy to commit it.

This PR is from a discussion in #26543

@hwchase17 @baskaryan @efriis

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 13:45:00 -05:00
William FH
50ea1c3ea3 [Core] respect tracing project name cvar (#28792) 2024-12-18 10:02:02 -08:00
Martin Triska
e6b41d081d community: DocumentLoaderAsParser wrapper (#27749)
## Description

This pull request introduces the `DocumentLoaderAsParser` class, which
acts as an adapter to transform document loaders into parsers within the
LangChain framework. The class enables document loaders that accept a
`file_path` parameter to be utilized as blob parsers. This is
particularly useful for integrating various document loading
capabilities seamlessly into the LangChain ecosystem.

When merged together with PR
https://github.com/langchain-ai/langchain/pull/27716, it opens options
for `SharePointLoader` / `OneDriveLoader` to process any filetype that
has a document loader.

### Features

- **Flexible Parsing**: The `DocumentLoaderAsParser` class can adapt any
document loader that meets the criteria of accepting a `file_path`
argument, allowing for lazy parsing of documents.
- **Compatibility**: The class has been designed to work with various
document loaders, making it versatile for different use cases.

### Usage Example

To use the `DocumentLoaderAsParser`, you would initialize it with a
suitable document loader class and any required parameters. Here’s an
example of how to do this with the `UnstructuredExcelLoader`:

```python
from langchain_community.document_loaders.blob_loaders import Blob
from langchain_community.document_loaders.parsers.documentloader_adapter import DocumentLoaderAsParser
from langchain_community.document_loaders.excel import UnstructuredExcelLoader

# Initialize the parser adapter with UnstructuredExcelLoader
xlsx_parser = DocumentLoaderAsParser(UnstructuredExcelLoader, mode="paged")

# Use parser, for ex. pass it to MimeTypeBasedParser
MimeTypeBasedParser(
    handlers={
        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": xlsx_parser
    }
)
```


- **Dependencies:** None
- **Twitter handle:** @martintriska1


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 12:47:08 -05:00
Erick Friis
9b024d00c9 text-splitters: release 0.3.4 (#28795) 2024-12-18 09:44:36 -08:00
Erick Friis
5cf965004c core: release 0.3.26 (#28793) 2024-12-18 17:28:42 +00:00
Mohammad Mohtashim
d49df4871d [Community]: Image Extraction Fixed for PDFPlumberParser (#28491)
- **Description:** One-bit images were raising an error, which has been
fixed in this PR for `PDFPlumberParser`
 - **Issue:** #28480

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 11:45:48 -05:00
binhnd102
f723a8456e Fixes: community: fix LanceDB return no metadata (#27024)
- [x] Fix the case when LanceDB returns a table without a metadata column
- **Description:** Check the table schema; if it has no metadata column,
initialize the Document with an empty dict as the metadata argument
    - **Issue:** https://github.com/langchain-ai/langchain/issues/27005

- [x] **Add tests and docs**
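A hedged sketch of the fallback (field names assumed):

```python
from langchain_core.documents import Document

def to_document(item: dict, schema_names: list[str]) -> Document:
    # Default to an empty metadata dict when the table schema has no
    # metadata column, instead of raising.
    metadata = item.get("metadata") if "metadata" in schema_names else {}
    return Document(page_content=item["text"], metadata=metadata or {})
```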

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-18 15:21:28 +00:00
ANSARI MD AAQIB AHMED
91d28ef453 Add langchain-yt-dlp Document Loader Documentation (#28775)
## Overview
This PR adds documentation for the `langchain-yt-dlp` package, a YouTube
document loader that uses `yt-dlp` for YouTube video metadata
extraction.

## Changes
- Added documentation notebook for YoutubeLoader
- Updated packages.yml to include langchain-yt-dlp

## Motivation
The existing LangChain YoutubeLoader was unable to fetch YouTube
metadata due to changes in YouTube's structure. This package resolves
those issues by leveraging the `yt-dlp` library.

## Features
- Reliable YouTube metadata extraction

## Related
- Package Repository: https://github.com/aqib0770/langchain-yt-dlp
- PyPI Package: https://pypi.org/project/langchain-yt-dlp/

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 10:16:50 -05:00
GITHUBear
33b1fb95b8 partners: langchain-oceanbase Integration (#28782)
Hi, langchain team! I'm a maintainer of
[OceanBase](https://github.com/oceanbase/oceanbase).

Following the integration guidance, I created a Python lib named
[langchain-oceanbase](https://github.com/oceanbase/langchain-oceanbase)
to integrate the `OceanBase Vector Store` with `LangChain`.

So I'd like to add the required docs. I would appreciate your feedback.
Thank you!

---------

Signed-off-by: shanhaikang.shk <shanhaikang.shk@oceanbase.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-18 14:51:49 +00:00
Rave Harpaz
986b752fc8 Add OCI Generative AI new model and structured output support (#28754)
- [X] **PR title**: 
 community: Add new model and structured output support


- [X] **PR message**: 
- **Description:** add support for meta llama 3.2 image handling, and
JSON mode for structured output
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA


- [x] **Add tests and docs**: 
  1. we have updated our unit tests,
  2. no changes required for documentation.


- [x] **Lint and test**: 
make format, make lint and make test all run successfully

---------

Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-18 09:50:25 -05:00
David Pryce-Compson
ef24220d3f community: adding haiku 3.5 and opus callbacks (#28783)
**Description:** 
Adding new AWS Bedrock models and their respective costs to match
https://aws.amazon.com/bedrock/pricing/ for the Bedrock callback

**Issue:** 
Missing models for those that wish to try them out

**Dependencies:**
Nothing added

**Twitter handle:**
@David_Pryce and / or @JamfSoftware

2024-12-18 09:45:10 -05:00
Yudai Kotani
05a44797ee langchain_community: Add default None values to DocumentAttributeValue class properties (#28785)
**Description**: 
This PR addresses an issue where the DocumentAttributeValue class
properties did not have default values of None. By explicitly setting
the Optional attributes (DateValue, LongValue, StringListValue, and
StringValue) to default to None, this change ensures the class functions
as expected when no value is provided for these attributes.

**Changes Made**:
Added default None values to the following properties of the
DocumentAttributeValue class:
DateValue
LongValue
StringListValue
StringValue
Removed the invalid argument extra="allow" from the BaseModel
inheritance.
Dependencies: None.
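A hedged sketch of the resulting class (field types assumed):

```python
from typing import List, Optional
from pydantic import BaseModel

class DocumentAttributeValue(BaseModel):
    DateValue: Optional[str] = None
    LongValue: Optional[int] = None
    StringListValue: Optional[List[str]] = None
    StringValue: Optional[str] = None
```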

**Twitter handle (optional)**: @__korikori1021

**Checklist**
- [x] Verified that KendraRetriever works as expected after the changes.

Co-authored-by: y1u0d2a1i <y.kotani@raksul.com>
2024-12-18 09:43:04 -05:00
Satyam Kumar
90f7713399 refactor: improve docstring parsing logic for Google style (#28730)


Description:  
Improved the `_parse_google_docstring` function in `langchain/core` to
support parsing multi-paragraph descriptions before the `Args:` section
while maintaining compliance with Google-style docstring guidelines.
This change ensures better handling of docstrings with detailed function
descriptions.
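For illustration, a docstring of the shape the parser now handles:

```python
def my_tool(query: str) -> str:
    """Search the knowledge base.

    The description may now span several paragraphs
    before the argument section and still parse cleanly.

    Args:
        query: The search string.
    """
    return query
```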

Issue:  
Fixes #28628

Dependencies:  
None.

Twitter handle:  
@isatyamks

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 09:35:19 -05:00
Zapiron
85c3bc1bbd docs: Grammar and Typo update for Runnable Conceptual guide (#28777) 2024-12-17 21:18:58 -05:00
Dong Shin
0b1359801e community: add trust_env at web_base_loader (#28514)
- **Description:** I am working to address a similar issue to the one
mentioned in https://github.com/langchain-ai/langchain/pull/19499.
Specifically, there is a problem with the Webbase loader used in
open-webui, where it fails to load the proxy configuration. This PR aims
to resolve that issue.





---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-17 21:18:16 -05:00
Erick Friis
be738aa7de packages: enable vertex api build (#28773) 2024-12-17 11:31:14 -08:00
Bagatur
ac278cbe8b core[patch]: export InjectedToolCallId (#28772) 2024-12-17 19:29:20 +00:00
ccurme
5656702b8d docs: fix readme link (#28770)
SQL Llama2 Template -> LangChain Extract
2024-12-17 18:51:01 +00:00
Bagatur
e4d3ccf62f json mode standard test (#25497)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-17 18:47:34 +00:00
ccurme
24bf24270d docs: reference ExperimentalMarkdownTextSplitter (#28768)
Continuing https://github.com/langchain-ai/langchain/pull/27832

---------

Co-authored-by: promptless[bot] <179508745+promptless[bot]@users.noreply.github.com>
Co-authored-by: Frances Liu <francestfls@gmail.com>
2024-12-17 12:56:38 -05:00
Frank Dai
e81433497b community: support Confluence cookies (#28760)
**Description**: Some Confluence instances don't support personal access
tokens, so cookies are a convenient way to authenticate. This PR adds
support for Confluence cookies.
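A hedged usage sketch; the parameter name is assumed to be `cookies`:

```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://example.atlassian.net/wiki",
    cookies={"JSESSIONID": "<session-cookie>"},  # assumed parameter
)
docs = loader.load()
```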

**Twitter handle**: soulmachine
2024-12-17 12:16:36 -05:00
ccurme
b745281eec anthropic[patch]: increase timeouts for integration tests (#28767)
Some tests consistently ran into the 10s limit in CI.
2024-12-17 15:47:17 +00:00
Leonid Ganeline
6479fd8c1c docs: integrations cache table of content (#28755)
Issue: the current
[Cache](https://python.langchain.com/docs/integrations/llm_caching/)
page has inconsistent headings.
Mixed terms are used; mixed casing; and mixed `selecting`. Excessively
long titles make the right-side ToC hard to read and unnecessarily long.

Changes: consistent and more readable ToC
2024-12-17 10:12:49 -05:00
Vinit Kudva
a00258ec12 chroma: fix persistence if client_settings is passed in (#25199)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-17 10:03:02 -05:00
Omri Eliyahu Levy
f8883a1321 partners/voyageai: enable setting output dimension (#28740)
Voyage has introduced voyage-3-large and voyage-code-3, which feature
different output dimensions by leveraging a technique called "Matryoshka
Embeddings" (see blog -
https://blog.voyageai.com/2024/12/04/voyage-code-3/).
These two models are available in various sizes: [256, 512, 1024, 2048]
(https://docs.voyageai.com/docs/embeddings#model-choices).

This PR adds the option to set the required output dimension.
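A hedged usage sketch; the parameter name is assumed to be `output_dimension`:

```python
from langchain_voyageai import VoyageAIEmbeddings

embeddings = VoyageAIEmbeddings(model="voyage-3-large", output_dimension=512)
vector = embeddings.embed_query("hello")  # expected: len(vector) == 512
```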
2024-12-17 10:02:00 -05:00
Swastik-Swarup-Dash
0afc284920 fix:Agent reactgpt4 or gpt 4o as input model #28747 (#28764)
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-17 14:35:09 +00:00
Zapiron
0c11aee486 docs: small grammar changes for conceptual guide page (#28765)
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-17 14:27:55 +00:00
German Martin
3a1d05394d community: Apache AGE wrapper. Ensure Node Uniqueness by ID. (#28759)
**Description:**

The Apache AGE graph integration incorrectly handled node merging,
allowing duplicate nodes with different IDs but the same type and other
properties. Unlike
[Neo4j](cdf6202156/libs/community/langchain_community/graphs/neo4j_graph.py (L47)),
[Memgraph](cdf6202156/libs/community/langchain_community/graphs/memgraph_graph.py (L50)),
[Kuzu](cdf6202156/libs/community/langchain_community/graphs/kuzu_graph.py (L253)),
and
[Gremlin](cdf6202156/libs/community/langchain_community/graphs/gremlin_graph.py (L165)),
it did not use the node ID as the primary identifier for merging.

This inconsistency caused data integrity issues and unexpected behavior
when users expected updates to specific nodes by ID.

**Solution:**
This PR modifies the `node_insert_query` to `MERGE` nodes based on label
and ID *only* and updates properties with `SET`, aligning the behavior
with other graph database integrations. The `_format_properties` method
was also modified to handle id overrides.

**Impact:**

This fix ensures data integrity by preventing duplicate nodes, and
provides consistent behavior across graph database integrations.
2024-12-17 09:21:59 -05:00
gsa9989
cdf6202156 cosmosdbnosql: Added Cosmos DB NoSQL Semantic Cache Integration with tests and jupyter notebook (#24424)
* Added Cosmos DB NoSQL Semantic Cache Integration with tests and
jupyter notebook

---------

Co-authored-by: Aayush Kataria <aayushkataria3011@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 21:57:05 -05:00
Brian Burgin
27a9056725 community: Fix ChatLiteLLMRouter runtime issues (#28163)
**Description:** Fix ChatLiteLLMRouter ctor validation and model_name
parameter
**Issue:** #19356, #27455, #28077
**Twitter handle:** @bburgin_0
2024-12-16 18:17:39 -05:00
Eugene Yurtsev
234d49653a docs: Create custom embeddings (#20398)
Guidelines on how to create custom embeddings

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 17:57:51 -05:00
Mikhail Khludnev
00deacc67e docs, external: introduce langchain-localai (#28751)
Referring to https://github.com/mkhludnev/langchain-localai

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 22:22:37 +00:00
Erick Friis
d4b5e7ef22 community: recommend RedisVectorStore over Redis (#28749) 2024-12-16 21:08:30 +00:00
Hiros
8f5e72de05 community: Correctly handle multi-element rich text (#25762)
**Description:**

- Add _concatenate_rich_text method to combine all elements in rich text
arrays
- Update load_page method to use _concatenate_rich_text for rich text
properties
- Ensure all text content is captured, including inline code and
formatted text
- Add unit tests to verify correct handling of multi-element rich text
This fix prevents truncation of content after backticks or other
formatting elements.

 **Issue:**

Using the Notion DB loader, the text for `richtext` and `title` was
truncated after the first element, because the loader only read the
first element of the array.

**Dependencies:**
None.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 20:20:27 +00:00
Antonio Lanza
b2102b8cc4 text-splitters: Inconsistent results with NLTKTextSplitter's add_start_index=True (#27782)
This PR closes #27781

# Problem
The current implementation of `NLTKTextSplitter` uses
`sent_tokenize`. However, `sent_tokenize` doesn't preserve the characters
between two tokenized sentences; hence, this behavior throws errors when
we are using `add_start_index=True`, as described in issue #27781. In
particular:
```python
from nltk.tokenize import sent_tokenize

output1 = sent_tokenize("Innovation drives our success. Collaboration fosters creative solutions. Efficiency enhances data management.", language="english")
print(output1)
output2 = sent_tokenize("Innovation drives our success.        Collaboration fosters creative solutions. Efficiency enhances data management.", language="english")
print(output2)
>>> ['Innovation drives our success.', 'Collaboration fosters creative solutions.', 'Efficiency enhances data management.']
>>> ['Innovation drives our success.', 'Collaboration fosters creative solutions.', 'Efficiency enhances data management.']
```

# Solution
With this new `use_span_tokenize` parameter, we can use NLTK to create
sentences (with `span_tokenize`), but also account for the extra chars so
that we can still map the chunks back to the original text.
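A hedged usage sketch of the new parameter:

```python
from langchain_text_splitters import NLTKTextSplitter

# Span-based tokenization keeps character offsets aligned with the original
# text, so add_start_index produces correct indices.
splitter = NLTKTextSplitter(use_span_tokenize=True, add_start_index=True)
docs = splitter.create_documents(
    ["Innovation drives our success.        Collaboration fosters creative solutions."]
)
```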

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-12-16 19:53:15 +00:00
Tari Yekorogha
d262d41cc0 community: added FalkorDB vector store support i.e implementation, test, docs an… (#26245)
**Description:** Added support for FalkorDB Vector Store, including its
implementation, unit tests, documentation, and an example notebook. The
FalkorDB integration allows users to efficiently manage and query
embeddings in a vector database, with relevance scoring and maximal
marginal relevance search. The following components were implemented:

- Core implementation for FalkorDBVector store.
- Unit tests ensuring proper functionality and edge case coverage.
- Example notebook demonstrating an end-to-end setup, search, and
retrieval using FalkorDB.

**Twitter handle:** @tariyekorogha

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 19:37:55 +00:00
Aaron Pham
12fced13f4 chore(community): update to OpenLLM 0.6 (#24609)
Update to OpenLLM 0.6, where we decided to make use of OpenLLM's
OpenAI-compatible endpoint. Thus, the OpenLLM integration now becomes a
thin wrapper around the OpenAI wrapper.

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

---------

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-16 14:30:07 -05:00
Lvlvko
5c17a4ace9 community: support Hunyuan Embedding (#23160)
## description

- I refactored `ChatHunyuan` using the tencentcloud sdk, because I found
the original one couldn't work in my application
- I added `HunyuanEmbeddings` using the tencentcloud sdk
- Both of them extend the base classes of langchain. I have fully
tested them in my application

## Dependencies
- tencentcloud-sdk-python

---------

Co-authored-by: centonhuang <centonhuang@tencent.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 19:27:19 +00:00
Harrison Chase
de7996c2ca core: add kwargs support to VectorStore (#25934)
has been missing the passthrough until now

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 18:57:57 +00:00
Henry Tu
87c50f99e5 docs: cerebras Update Llama 3.1 70B to Llama 3.3 70B (#28746)
This PR updates the docs for the Cerebras integration to use Llama 3.3
70b instead of Llama 3.1 70b.

cc: @efriis
2024-12-16 18:43:35 +00:00
Lorenzo
b79a1156ed community: correct return type of get_files_from_directory in github tool (#27885)
### About:
- **Description:** the _get_files_from_directory_ method returns a
string, but it's used in other methods that expect a List[str]
- **Issue:** None
- **Dependencies:** None

This pull request introduces a new method _list_files_ with the old logic
of _get_files_from_directory_, but it returns a List[str] at the end.
The behavior of _get_files_from_directory_ is not changed.
2024-12-16 10:30:33 -08:00
Sheepsta300
580a8d53f9 community: Add configurable VisualFeatures to the AzureAiServicesImageAnalysisTool (#27444)
- [ ] **PR title**: community: Add configurable `VisualFeatures` to the
`AzureAiServicesImageAnalysisTool`


- [ ] **PR message**:  
- **Description:** The `AzureAiServicesImageAnalysisTool` is a good
service and utilises the Azure AI Vision package under the hood.
However, since the creation of this tool, new `VisualFeatures` have been
added to allow the user to request other image-specific information to
be returned. Currently, the tool offers neither configuration of which
features should be returned nor any of the newer feature types. The
aim of this PR is to address this and expose more of the Azure Service
in this integration.
- **Dependencies:** no new dependencies in the main class file,
azure.ai.vision.imageanalysis added to extra test dependencies file.


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. Although no tests exist for already implemented Azure Service tools,
I've created 3 unit tests for this class covering initialisation and
credentials, local file analysis, and the new features option.


- [ ] **Lint and test**: All linting has passed.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 18:30:04 +00:00
Erick Friis
1c120e9615 core: xml output parser tags docstring (#28745) 2024-12-16 18:25:16 +00:00
Ana
ebab2ea81b Fix Azure National Cloud authentication using token (RBAC) (Generated by Ana - AI SDE) (#25843)
This pull request addresses the issue with authenticating Azure National
Cloud using token (RBAC) in the AzureSearch vectorstore implementation.

## Changes

- Modified the `_get_search_client` method in `azuresearch.py` to pass
`additional_search_client_options` to the `SearchIndexClient` instance.

## Implementation Details

The patch updates the `SearchIndexClient` initialization to include the
`additional_search_client_options` parameter:

```python
index_client: SearchIndexClient = SearchIndexClient(
    endpoint=endpoint,
    credential=credential,
    user_agent=user_agent,
    **additional_search_client_options
)
```

This change allows the `audience` parameter to be correctly passed when
using Azure National Cloud, fixing the authentication issues with
GovCloud & RBAC.

This patch was generated by [Ana - AI SDE](https://openana.ai/), an
AI-powered software development assistant.

This is a fix for [Issue
25823](https://github.com/langchain-ai/langchain/issues/25823)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-16 18:22:24 +00:00
chenzimin
169d419581 community: Remove all other keys in ChatLiteLLM and add api_key (#28097)
- **PR title**: "community: Remove all other keys in ChatLiteLLM and add
api_key"


- **PR message**: Currently, no api_key is passed to LiteLLM, and
LiteLLM only takes an api_key parameter. Therefore I removed all the
current `*_api_key` attributes (they are not used), and added an
`api_key` that is passed to ChatLiteLLM.
  - Should fix issue #27826

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 17:54:29 +00:00
German Martin
d5d18c62b3 community: Apache AGE wrapper additional edge cases. (#28151)
Description: 
The current AGEGraph() implementation does some custom wrapping for graph
queries. The method here is _wrap_query(), as it parses fields from the
original query to add some SQL context to it.
This improves the current parsing logic to cover additional edge cases
that are added to the test coverage; basically, if any node property name
or value contains the literal "return", it will break the graph / SQL
query.
We discovered this while dealing with real-world datasets; it is not an
uncommon scenario and I think it needs to be covered.
2024-12-16 11:28:01 -05:00
Rock2z
768e4a7fd4 [community][fix] Compatibility support to bump up wikibase-rest-api-client version (#27316)
**Description:**

This PR addresses the `TypeError: sequence item 0: expected str
instance, FluentValue found` error when invoking `WikidataQueryRun`. The
root cause was an incompatible version of the
`wikibase-rest-api-client`, which caused the tool to fail when handling
`FluentValue` objects instead of strings.

The current implementation only supports `wikibase-rest-api-client<0.2`,
but the latest version is `0.2.1`, where the current implementation
breaks. Additionally, the error message advises users to install the
latest version: [code
reference](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wikidata.py#L125C25-L125C32).
Therefore, this PR updates the tool to support the latest version of
`wikibase-rest-api-client`.

Key changes:
- Updated the handling of `FluentValue` objects to ensure compatibility
with the latest `wikibase-rest-api-client`.
- Removed the restriction to `wikibase-rest-api-client<0.2` and updated
to support the latest version (`0.2.1`).

**Issue:**

Fixes [#24093](https://github.com/langchain-ai/langchain/issues/24093) –
`TypeError: sequence item 0: expected str instance, FluentValue found`.

**Dependencies:**

- Upgraded `wikibase-rest-api-client` to the latest version to resolve
the issue.

---------

Co-authored-by: peiwen_zhang <peiwen_zhang@email.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 16:22:18 +00:00
André Quintino
a26c786bc5 community: refactor opensearch query constructor to use wildcard instead of match in the contain comparator (#26653)
- **Description:** Changed the comparator to use a wildcard query
instead of match. This modification allows for partial text matching on
analyzed fields, which improves the flexibility of the search by
performing full-text searches that aren't limited to exact matches.
- **Issue:** The previous implementation used a match query, which
performs exact matches on analyzed fields. This approach limited the
search capabilities by requiring the query terms to align with the
indexed text. The modification to use a wildcard query instead addresses
this limitation. The wildcard query allows for partial text matching,
which means the search can return results even if only a portion of the
term matches the text. This makes the search more flexible and suitable
for use cases where exact matches aren't necessary or expected, enabling
broader full-text searches across analyzed fields.
In short, the problem was that match queries were too restrictive, and
the change to wildcard queries enhances the ability to perform partial
matches.
- **Dependencies:** none
- **Twitter handle:** @Andre_Q_Pereira

---------

Co-authored-by: André Quintino <andre.quintino@tui.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 11:16:34 -05:00
Davi Schumacher
0f9b4bf244 community[patch]: update dynamodb chat history to update instead of overwrite (#22397)
**Description:**
The current implementation of `DynamoDBChatMessageHistory` updates the
`History` attribute for a given chat history record by first extracting
the existing contents into memory, appending the new message, and then
using the `put_item` method to put the record back. This has the effect
of overwriting any additional attributes someone may want to include in
the record, like chat session metadata.

This PR suggests changing from using `put_item` to using `update_item`
instead which will keep any other attributes in the record untouched.
The change is backward compatible since
1. `update_item` is an "upsert" operation, creating the record if it
doesn't already exist, otherwise updating it
2. It only touches the db insert call and passes the exact same
information. The rest of the class is left untouched
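A hedged sketch of the new write path (table and key names illustrative):

```python
import boto3

table = boto3.resource("dynamodb").Table("SessionTable")
# update_item is an upsert that touches only the History attribute,
# leaving any other attributes on the record intact.
table.update_item(
    Key={"SessionId": "session-1"},
    UpdateExpression="SET #H = :h",
    ExpressionAttributeNames={"#H": "History"},
    ExpressionAttributeValues={":h": "[]"},
)
```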

**Dependencies:**
None

**Tests and docs:**
No unit tests currently exist for the `DynamoDBChatMessageHistory`
class. This PR adds the file
`libs/community/tests/unit_tests/chat_message_histories/test_dynamodb_chat_message_history.py`
to test the `add_message` and `clear` methods. I wanted to use the moto
library to mock DynamoDB calls but I could not get poetry to resolve it
so I mocked those calls myself in the test. Therefore, no test
dependencies were added.

The change was tested on a test DynamoDB table as well. The first three
images below show the current behavior. First a message is added to chat
history, then a value is inserted in the record in some other attribute,
and finally another message is added to the record, destroying the other
attribute.

![using_put_1_first_message](https://github.com/langchain-ai/langchain/assets/29493541/426acd62-fe29-42f4-b75f-863fb8b3fb21)

![using_put_2_add_attribute](https://github.com/langchain-ai/langchain/assets/29493541/f8a1c864-7114-4fe3-b487-d6f9252f8f92)

![using_put_3_second_message](https://github.com/langchain-ai/langchain/assets/29493541/8b691e08-755e-4877-8969-0e9769e5d28a)

The next three images show the new behavior. Once again a value is added
to an attribute other than the History attribute, but now when the
followup message is added it does not destroy that other attribute. The
History attribute itself is unaffected by this change.

![using_update_1_first_message](https://github.com/langchain-ai/langchain/assets/29493541/3e0d76ed-637e-41cd-82c7-01a86c468634)

![using_update_2_add_attribute](https://github.com/langchain-ai/langchain/assets/29493541/52585f9b-71a2-43f0-9dfc-9935aa59c729)

![using_update_3_second_message](https://github.com/langchain-ai/langchain/assets/29493541/f94c8147-2d6f-407a-9a0f-86b94341abff)

The doc located at `docs/docs/integrations/memory/aws_dynamodb.ipynb`
required no changes and was tested as well.
2024-12-16 10:38:00 -05:00
Christophe Bornet
6ddd5dbb1e community: Add FewShotSQLTool (#28232)
The `FewShotSQLTool` gets some SQL query examples from a
`BaseExampleSelector` for a given question.
This is useful to provide [few-shot
examples](https://python.langchain.com/docs/how_to/sql_prompting/#few-shot-examples)
capability to an SQL agent.

Example usage:
```python
# Imports shown for completeness; exact module paths may vary by version,
# and the FewShotSQLTool path in particular is assumed.
from langchain.agents.agent_toolkits.sql.prompt import SQL_PREFIX
from langchain_community.agent_toolkits.sql.base import create_sql_agent
from langchain_community.tools.sql_database.tool import FewShotSQLTool  # assumed path
from langchain_community.vectorstores import AstraDB
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

examples = [  # few-shot pairs; contents are illustrative
    {"input": "How many artists are there?", "query": "SELECT COUNT(*) FROM Artist;"},
]

example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    embeddings,
    AstraDB,
    k=5,
    input_keys=["input"],
    collection_name="lc_few_shots",
    token=ASTRA_DB_APPLICATION_TOKEN,
    api_endpoint=ASTRA_DB_API_ENDPOINT,
)

few_shot_sql_tool = FewShotSQLTool(
    example_selector=example_selector,
    description=(
        "Input to this tool is the input question, output is a few SQL query "
        "examples related to the input question. Always use this tool before "
        "checking the query with sql_db_query_checker!"
    ),
)

agent = create_sql_agent(
    llm=llm,
    db=db,
    prefix=SQL_PREFIX + "\nYou MUST get some example queries before creating the query.",
    extra_tools=[few_shot_sql_tool],
)

result = agent.invoke({"input": "How many artists are there?"})
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 15:37:21 +00:00
Mohammad Mohtashim
8d746086ab Added bind_tools support for ChatMLX along with small fix in _stream (#28743)
- **Description:** Added support for `bind_tools` as requested in the
issue. Two issues in `_stream` were also fixed:
    - Corrected the positional argument passing for `generate_step`
    - Handled the case where the `token` returned by `generate_step` is an integer
- **Issue:** #28692
2024-12-16 09:52:49 -05:00
Jorge Piedrahita Ortiz
558b65ea32 community: SambaStudio Tool Calling and Structured Output (#28025)
Description: Add tool calling and structured output support for
SambaStudio chat models, docs included

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 06:15:19 +00:00
Hristiyan Genchev
c87f24d85d docs: correct variable name from formatted_docs to docs_content (#28735)
- **Description:** Fixed incorrect variable name formatted_docs,
replacing it with docs_content to ensure correct functionality.
2024-12-15 22:09:58 -08:00
clairebehue
fb44e74ca4 community: fix AzureSearch Oauth with azure_ad_access_token (#26995)
**Description:**
AzureSearch vector store: create a wrapper class around
`azure.core.credentials.TokenCredential` (which is not instantiable) to
fix OAuth usage with the `azure_ad_access_token` argument

**Issue:** [the issue it
fixes](https://github.com/langchain-ai/langchain/issues/26216)

 **Dependencies:** None

- [x] **Lint and test**

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 05:56:45 +00:00
SirSmokeAlot
29305cd948 community: O365Toolkit - send_event - fixed timezone error (#25876)
**Description**: Fixed the formatting of start and end times
**Issue**: The old formatting always resulted in a timezone error
**Dependencies**: /
**Twitter handle**: /

---------

Co-authored-by: Yannick Opitz <yannick.opitz@gob.de>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 05:32:28 +00:00
Erick Friis
4f6ccb7080 text-splitters: extended-tests without socket (#28736) 2024-12-16 05:19:50 +00:00
Erick Friis
8ec1c72e03 text-splitters: test without socket (#28732) 2024-12-15 22:10:35 +00:00
Tibor Reiss
690aa02c31 docs[experimental]: Make docs clearer and add min_chunk_size (#26398)
Fixes #26171:

- added some clarification text for the keyword argument
`breakpoint_threshold_amount`
- added `min_chunk_size`: together with `breakpoint_threshold_amount`,
it helps avoid chunks that are too small or too big

Note: the langchain-experimental was moved to a separate repo, so only
the doc change stays here.
2024-12-15 13:43:48 -08:00
Aayush Kataria
d417e4b372 Community: Azure CosmosDB No Sql Vector Store: Full Text and Hybrid Search Support (#28716)
- Added [full
text](https://learn.microsoft.com/en-us/azure/cosmos-db/gen-ai/full-text-search)
and [hybrid
search](https://learn.microsoft.com/en-us/azure/cosmos-db/gen-ai/hybrid-search)
support for the Azure CosmosDB NoSql Vector Store
- Added a new enum called `CosmosDBQueryType` which supports the following
values:
    - VECTOR = "vector"
    - FULL_TEXT_SEARCH = "full_text_search"
    - FULL_TEXT_RANK = "full_text_rank"
    - HYBRID = "hybrid"
- Users now need to provide this `query_type` to the `similarity_search`
method for the vector store to make the correct query API call (see the
sketch below)
- Added a couple of workarounds, as the FULL_TEXT_RANK and HYBRID query
functions don't support parameterized queries right now. TODOs are in
place, and these workarounds will be removed by the end of January.
- Added necessary test cases and updated the
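A hedged usage sketch of the new parameter, assuming an existing vector store instance named `vector_store` (the import path may differ by version):

```python
from langchain_community.vectorstores.azure_cosmos_db_no_sql import (
    CosmosDBQueryType,  # enum added by this PR
)

docs = vector_store.similarity_search(
    "What is LangChain?",
    k=3,
    query_type=CosmosDBQueryType.HYBRID,  # or VECTOR / FULL_TEXT_SEARCH / FULL_TEXT_RANK
)
```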



---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-12-15 13:26:32 -08:00
Mohammad Mohtashim
4c1871d9a8 community: Passing the model_kwargs correctly while maintaining backward compatibility (#28439)
- **Description:** `model_kwargs` was not being passed correctly to
`sentence_transformers.SentenceTransformer`; this has been corrected
while maintaining backward compatibility
- **Issue:** #28436

---------

Co-authored-by: MoosaTae <sadhis.tae@gmail.com>
Co-authored-by: Sadit Wongprayon <101176694+MoosaTae@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-15 20:34:29 +00:00
nhols
a3851cb3bc community: FAISS vectorstore - consistent Document id field (#28728)
make sure the id field of Documents in the `FAISS` docstore has the same
value as in `index_to_docstore_id`, and implement the `get_by_ids` method
2024-12-15 12:23:49 -08:00
Bagatur
a0534ae62a community[patch]: Release 0.3.12 (#28725) 2024-12-14 22:13:20 +00:00
Bagatur
089e659e03 langchain[patch]: Release 0.3.12 (#28724) 2024-12-14 20:02:18 +00:00
Bagatur
679e3a9970 text-splitters[patch]: Release 0.3.3 (#28723) 2024-12-14 19:20:22 +00:00
ccurme
23b433f683 infra: fix notebook tests (#28722)
Bump unstructured to pick up resolution of
https://github.com/Unstructured-IO/unstructured/issues/3795
2024-12-14 15:13:19 +00:00
Erick Friis
387284c259 core: release 0.3.25 (#28718) 2024-12-14 02:22:28 +00:00
Nawaf Alharbi
decd77c515 community: fix an issue with deepinfra integration (#28715)
- **Description:** This PR introduces a `url` parameter to the
`ChatDeepInfra` class in LangChain, allowing users to specify a custom
URL. Previously, the URL for the DeepInfra API was hardcoded to
"https://stage.api.deepinfra.com/v1/openai/chat/completions", which
caused issues when the staging endpoint was not functional. The `_url`
method was updated to return the value from the `url` parameter,
enabling greater flexibility and addressing the problem.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-14 02:15:29 +00:00
Ben Chambers
008efada2c [community]: Render documents to graphviz (#24830)
- **Description:** Adds a helper that renders documents with the
GraphVectorStore metadata fields to Graphviz for visualization. This is
helpful for understanding and debugging.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-14 02:02:09 +00:00
Leonid Ganeline
fc8006121f docs: integrations W&B update (#28059)
Issue: There is an ambiguity about W&B integrations: two provider pages
already exist.
Fix: Added a "root" W&B provider page with references to the
documentation on the W&B site, cleaned up formatting on the existing
pages, and added one more integration reference.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-14 00:47:16 +00:00
Erick Friis
288f204758 docs, community: aerospike docs update (#28717)
Co-authored-by: Jesse Schumacher <jschumacher@aerospike.com>
Co-authored-by: Jesse S <jschmidt@aerospike.com>
Co-authored-by: dylan <dwelch@aerospike.com>
2024-12-14 00:27:37 +00:00
ronidas39
bd008baee0 docs: Update additional_resources/tutorials.mdx (#28005)
Added a complete LangChain tutorial playlist from the Total Technology
Zonne channel. Each video in the playlist focuses on one specific use
case with a hands-on demo, and the tutorials suit every level.

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-14 00:21:30 +00:00
Vimpas
337fed80a5 community: 🐛 PDF Filter Type Error (#27154)
- **Description:** fix PDF filter type error
- **Issue:** #27153
- **Dependencies:** none




---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-13 23:30:29 +00:00
Ryan Parker
12111cb922 community: fallback on core async atransform_documents method for MarkdownifyTransformer (#27866)
# Description
Implements the `atransform_documents` method for
`MarkdownifyTransformer` using the `asyncio` built-in library for
concurrency.

Note that this is mainly for API completeness when working with async
frameworks rather than for performance, since the `markdownify` function
is not I/O bound because it works with `Document` objects already in
memory.
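A minimal usage sketch of the async path (assumes the `markdownify` package is installed):

```python
import asyncio

from langchain_community.document_transformers import MarkdownifyTransformer
from langchain_core.documents import Document

async def main() -> None:
    docs = [Document(page_content="<h1>Title</h1><p>Some <b>bold</b> text.</p>")]
    transformer = MarkdownifyTransformer()
    converted = await transformer.atransform_documents(docs)
    print(converted[0].page_content)  # markdown: "# Title" plus "Some **bold** text."

asyncio.run(main())
```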

# Issue
Fixes #27865

# Dependencies
No new dependencies added, but
[`markdownify`](https://github.com/matthewwithanm/python-markdownify) is
required since this PR updates the `markdownify` integration.

# Tests and docs
- Tests added
- I did not modify the docstrings since they already described the basic
functionality, and [the API docs also already included a
description](https://python.langchain.com/api_reference/community/document_transformers/langchain_community.document_transformers.markdownify.MarkdownifyTransformer.html#langchain_community.document_transformers.markdownify.MarkdownifyTransformer.atransform_documents).
If it would be helpful, I would be happy to update the docstrings and/or
the API docs.

# Lint and test
- [x] format
- [x] lint
- [x] test

I ran formatting with `make format`, linting with `make lint`, and
confirmed that tests pass using `make test`. Note that some unit tests
pass in CI but may fail when running `make test`. Those unit tests are:
- `test_extract_html` (and `test_extract_html_async`)
- `test_strip_tags` (and `test_strip_tags_async`)
- `test_convert_tags` (and `test_convert_tags_async`)

The reason for the difference is that there are trailing spaces when the
tests are run in the CI checks, and no trailing spaces when run with
`make test`. I ensured that the tests pass in CI, but they may fail with
`make test` due to the addition of trailing spaces.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-13 22:32:22 +00:00
Manuel
af2e0a7ede partners: add 'model' alias for consistency in embedding classes (#28374)
**Description:** This PR introduces a `model` alias for the embedding
classes that contain the attribute `model_name`, to ensure consistency
across the codebase, as suggested by a moderator in a previous PR. The
change aligns the usage of attribute names across the project (see for
example
[here](65deeddd5d/libs/partners/groq/langchain_groq/chat_models.py (L304))).
**Issue:** This PR addresses the suggestion from the review of issue
#28269.
**Dependencies:**  None

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-13 22:30:00 +00:00
Erick Friis
3107d78517 huggingface: fix standard test lint (#28714) 2024-12-13 22:18:54 +00:00
Kaiwei Zhang
b909d54e70 chroma[patch]: Update logic for assigning ids 2024-12-13 21:58:34 +00:00
ccurme
9c55c75eb5 docs: dropdowns for embeddings and vector stores (#28713) 2024-12-13 16:48:02 -05:00
Karthik Bharadhwaj
498f0249e2 community[minor]: Opensearch hybridsearch implementation (#25375)
community: add hybrid search in opensearch

# Langchain OpenSearch Hybrid Search Implementation

## Implementation of Hybrid Search: 

I have taken LangChain's OpenSearch integration to the next level by
adding hybrid search capabilities. Building on the existing
OpenSearchVectorSearch class, I have implemented Hybrid Search
functionality (which combines the best of both keyword and semantic
search). This new functionality allows users to harness the power of
OpenSearch's advanced hybrid search features without leaving the
familiar LangChain ecosystem. By blending traditional text matching with
vector-based similarity, the enhanced class delivers more accurate and
contextually relevant results. It's designed to seamlessly fit into
existing LangChain workflows, making it easy for developers to upgrade
their search capabilities.

In implementing the hybrid search for OpenSearch within the LangChain
framework, I also incorporated filtering capabilities. It's important to
note that according to the OpenSearch hybrid search documentation, only
post-filtering is supported for hybrid queries. This means that the
filtering is applied after the hybrid search results are obtained,
rather than during the initial search process.

**Note:** For the implementation of hybrid search, I strictly followed
the official OpenSearch Hybrid search documentation and I took
inspiration from
https://github.com/AndreasThinks/langchain/tree/feature/opensearch_hybrid_search
Thanks Mate!  

### Experiments

I conducted a few experiments to verify that the hybrid search
implementation is accurate and capable of reproducing the results of
both plain keyword search and vector search.

Experiment - 1
Hybrid Search
Keyword_weight: 1, vector_weight: 0

I conducted an experiment to verify the accuracy of my hybrid search
implementation by comparing it to a plain keyword search. For this test,
I set the keyword_weight to 1 and the vector_weight to 0 in the hybrid
search, effectively giving full weightage to the keyword component. The
results from this hybrid search configuration matched those of a plain
keyword search, confirming that my implementation can accurately
reproduce keyword-only search results when needed. It's important to
note that while the results were the same, the scores differed between
the two methods. This difference is expected because the plain keyword
search in OpenSearch uses the BM25 algorithm for scoring, whereas the
hybrid search still performs both keyword and vector searches before
normalizing the scores, even when the vector component is given zero
weight. This experiment validates that my hybrid search solution
correctly handles the keyword search component and properly applies the
weighting system, demonstrating its accuracy and flexibility in
emulating different search scenarios.


Experiment - 2
Hybrid Search
keyword_weight = 0.0, vector_weight = 1.0

For experiment-2, I took the inverse approach to further validate my
hybrid search implementation. I set the keyword_weight to 0 and the
vector_weight to 1, effectively giving full weightage to the vector
search component (KNN search). I then compared these results with a pure
vector search. The outcome was consistent with my expectations: the
results from the hybrid search with these settings exactly matched those
from a standalone vector search. This confirms that my implementation
accurately reproduces vector search results when configured to do so. As
with the first experiment, I observed that while the results were
identical, the scores differed between the two methods. This difference
in scoring is expected and can be attributed to the normalization
process in hybrid search, which still considers both components even
when one is given zero weight. This experiment further validates the
accuracy and flexibility of my hybrid search solution, demonstrating its
ability to effectively emulate pure vector search when needed while
maintaining the underlying hybrid search structure.



Experiment - 3
Hybrid Search - balanced

keyword_weight = 0.5, vector_weight = 0.5

For experiment-3, I adopted a balanced approach to further evaluate the
effectiveness of my hybrid search implementation. In this test, I set
both the keyword_weight and vector_weight to 0.5, giving equal
importance to keyword-based and vector-based search components. This
configuration aims to leverage the strengths of both search methods
simultaneously. By setting both weights to 0.5, I intended to create a
scenario where the hybrid search would consider lexical matches and
semantic similarity equally. This balanced approach is often ideal for
many real-world applications, as it can capture both exact keyword
matches and contextually relevant results that might not contain the
exact search terms.

Kindly verify the notebook for the experiments conducted!  

**Notebook:**
https://github.com/karthikbharadhwajKB/Langchain_OpenSearch_Hybrid_search/blob/main/Opensearch_Hybridsearch.ipynb

### Instructions to follow for Performing Hybrid Search:

**Step-1: Instantiating OpenSearchVectorSearch Class:**
```python
opensearch_vectorstore = OpenSearchVectorSearch(
    index_name=os.getenv("INDEX_NAME"),
    embedding_function=embedding_model,
    opensearch_url=os.getenv("OPENSEARCH_URL"),
    http_auth=(os.getenv("OPENSEARCH_USERNAME"),os.getenv("OPENSEARCH_PASSWORD")),
    use_ssl=False,
    verify_certs=False,
    ssl_assert_hostname=False,
    ssl_show_warn=False
)
```

**Parameters:**
1. **index_name:** The name of the OpenSearch index to use.
2. **embedding_function:** The function or model used to generate
embeddings for the documents. It's assumed that embedding_model is
defined elsewhere in the code.
3. **opensearch_url:** The URL of the OpenSearch instance.
4. **http_auth:** A tuple containing the username and password for
authentication.
5. **use_ssl:** Set to False, indicating that the connection to
OpenSearch is not using SSL/TLS encryption.
6. **verify_certs:** Set to False, which means the SSL certificates are
not being verified. This is often used in development environments but
is not recommended for production.
7. **ssl_assert_hostname:** Set to False, disabling hostname
verification in SSL certificates.
8. **ssl_show_warn:** Set to False, suppressing SSL-related warnings.

**Step-2: Configure Search Pipeline:**

To initiate hybrid search functionality, you need to configure a search
pipeline first.

**Implementation Details:**

This method configures a search pipeline in OpenSearch that:
1. Normalizes the scores from both keyword and vector searches using the
min-max technique.
2. Applies the specified weights to the normalized scores.
3. Calculates the final score using an arithmetic mean of the weighted,
normalized scores.


**Parameters:**

* **pipeline_name (str):** A unique identifier for the search pipeline.
It's recommended to use a descriptive name that indicates the weights
used for keyword and vector searches.
* **keyword_weight (float):** The weight assigned to the keyword search
component. This should be a float value between 0 and 1. In this
example, 0.3 gives 30% importance to traditional text matching.
* **vector_weight (float):** The weight assigned to the vector search
component. This should be a float value between 0 and 1. In this
example, 0.7 gives 70% importance to semantic similarity.

```python
opensearch_vectorstore.configure_search_pipelines(
    pipeline_name="search_pipeline_keyword_0.3_vector_0.7",
    keyword_weight=0.3,
    vector_weight=0.7,
)
```

**Step-3: Performing Hybrid Search:**

After creating the search pipeline, you can perform a hybrid search
using the `similarity_search()` method (or) any methods that are
supported by `langchain`. This method combines both `keyword-based and
semantic similarity` searches on your OpenSearch index, leveraging the
strengths of both traditional information retrieval and vector embedding
techniques.

**Parameters:**
* **query:** The search query string.
* **k:** The number of top results to return (in this case, 3).
* **search_type:** Set to `hybrid_search` to use both keyword and vector
search capabilities.
* **search_pipeline:** The name of the previously created search
pipeline.

```python
query = "what are the country named in our database?"

top_k = 3

pipeline_name = "search_pipeline_keyword_0.3_vector_0.7"

matched_docs = opensearch_vectorstore.similarity_search_with_score(
                query=query,
                k=top_k,
                search_type="hybrid_search",
                search_pipeline = pipeline_name
            )

matched_docs
```

twitter handle: @iamkarthik98

---------

Co-authored-by: Karthik Kolluri <karthik.kolluri@eidosmedia.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-13 16:34:12 -05:00
Philippe PRADOS
f3fb5a9c68 community[minor]: Fix json._validate_metadata_func() (#22842)
JSONLoader, in `_validate_metadata_func()`, checks the consistency of the
`metadata_func()` function. To do this, it invokes it and makes sure it
receives a dictionary in response. However, the arguments it uses for
this validation call do not match those of real calls (see line 100),
which generates errors if, for example, the function is like this:
```python
        def generate_metadata(json_node:Dict[str,Any],kwargs:Dict[str,Any]) -> Dict[str,Any]:
             return {
                "source": url,
                "row": kwargs['seq_num'],
                "question":json_node.get("question"),
            }
        loader = JSONLoader(
            file_path=file_path,
            content_key="answer",
            jq_schema='.[]',
            metadata_func=generate_metadata,
            text_content=False)
```
To avoid this, the verification must comply with the specifications.
This patch does just that.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-13 21:24:20 +00:00
Keiichi Hirobe
67fd554512 core[patch]: throw exception indexing code if deletion fails in vectorstore (#28103)
The delete methods in the VectorStore and DocumentIndex interfaces
return a status indicating the result. Therefore, we can assume that
their implementations don't throw exceptions but instead return a result
indicating whether the delete operations have failed. The current
implementation doesn't check the returned value, so I modified it to
throw an exception when the operation fails.
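A sketch of the new behavior (illustrative, not the indexing module's exact code):

```python
from langchain_core.vectorstores import VectorStore

def delete_or_raise(vector_store: VectorStore, ids: list[str]) -> None:
    # VectorStore.delete returns an optional bool; False signals failure.
    if vector_store.delete(ids=ids) is False:
        # Previously a False status was silently ignored by the indexing code.
        raise ValueError("The delete operation to the vector store failed.")
```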

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-13 16:14:27 -05:00
Keiichi Hirobe
258b3be5ec core[minor]: add new clean up strategy "scoped_full" to indexing (#28505)
~Note that this PR is now Draft, so I didn't add change to `aindex`
function and didn't add test codes for my change.
After we have an agreement on the direction, I will add commits.~

`batch_size` is very difficult to decide because setting a large number
like >10000 will impact VectorDB and RecordManager, while setting a
small number will delete records unnecessarily, leading to redundant
work, as the `IMPORTANT` section says.
On the other hand, we can't use `full` because the loader returns just a
subset of the dataset in our use case.

I guess many people are in the same situation as us.

So, as one of the possible solutions for it, I would like to introduce a
new argument, `scoped_full_cleanup`.
This argument will be valid only when `cleanup` is Full. If True, Full
cleanup deletes all documents that haven't been updated AND that are
associated with source ids that were seen during indexing. Default is
False.

This change keeps backward compatibility.
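A hedged sketch of how the new strategy is used, per the description above (the document source, record manager, and vector store are placeholders):

```python
from langchain_core.indexing import index

result = index(
    docs_source,            # loader yielding only a subset of the full dataset
    record_manager,
    vector_store,
    cleanup="scoped_full",  # full cleanup, scoped to source ids seen in this run
    source_id_key="source",
)
# `result` reports counts of added / updated / deleted / skipped documents.
```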

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-13 20:35:25 +00:00
ccurme
4802c31a53 docs: update intro page (#28639) 2024-12-13 15:24:14 -05:00
Eugene Yurtsev
ce90b25313 core[patch]: Update error message in indexing code for unreachable code assertion (#28712)
Minor update for error message that should never be triggered
2024-12-13 20:21:14 +00:00
Keiichi Hirobe
da28cf1f54 core[patch]: Reverts PR #25754 and add unit tests (#28702)
I reported the bug 2 weeks ago here:
https://github.com/langchain-ai/langchain/issues/28447

I believe this is a critical bug for the indexer, so I submitted a PR to
revert the change and added unit tests to prevent similar bugs from
being introduced in the future.

@eyurtsev Could you check this?
2024-12-13 15:13:06 -05:00
ScriptShi
b0a298894d community[minor]: Add TablestoreVectorStore (#25767)
- **Description:** add TablestoreVectorStore
- **Dependencies:** none
- **Tests and docs:** an integration test and an example notebook are
included

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-12-13 11:17:28 -08:00
Erick Friis
86b3c6e81c community: make old stub for QuerySQLDataBaseTool private to skip api ref (#28711) 2024-12-13 10:43:23 -08:00
Martin Triska
05ebe1e66b Community: add modified_since argument to O365BaseLoader (#28708)
## What are we doing in this PR
We're adding an optional `modified_since` argument to `O365BaseLoader`.
When set, the O365 loader will only load documents newer than the
`modified_since` datetime.

## Why?
OneDrives / SharePoints can contain a large number of documents. The
current approach is to download and parse all files and let the indexer
deal with duplicates. This can be prohibitively time-consuming,
especially when using an OCR-based parser like
[zerox](fa06188834/libs/community/langchain_community/document_loaders/pdf.py (L948)).
This argument allows skipping documents that are older than the known
time of indexing. A usage sketch follows below.

_Q: What if a file was modified during the last indexing process?
A: Users can set `modified_since` conservatively and the indexer will
still take care of duplicates._
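A minimal usage sketch, assuming `SharePointLoader` (an `O365BaseLoader` subclass) and that `modified_since` takes a timezone-aware `datetime`:

```python
from datetime import datetime, timezone

from langchain_community.document_loaders import SharePointLoader

loader = SharePointLoader(
    document_library_id="...",  # your SharePoint library id
    modified_since=datetime(2024, 12, 1, tzinfo=timezone.utc),  # skip older documents
)
docs = loader.load()
```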





---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-13 17:30:17 +00:00
UV
c855d434c5 DOC: Fixed conflicting info on ChatOllama structured output support (#28701)
This PR resolves the conflicting information in the Chat models
documentation regarding structured output support for ChatOllama.

- The Featured Providers table has been updated to reflect the correct
status.
- Structured output support for ChatOllama was introduced on Dec 6,
2024.
- A note has been added to ensure users update to the latest Ollama
version for structured outputs.

**Issue:** Fixes #28691
2024-12-13 17:24:59 +00:00
Bagatur
fa06188834 community[patch]: fix QuerySQLDatabaseTool name (#28659)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-12 19:16:03 -08:00
Bagatur
94c22c3f48 rfc: dropdown for chat models (#28673) 2024-12-12 19:14:39 -08:00
Erick Friis
48ab91b520 docs: more useful vercel warnings (#28699) 2024-12-13 03:07:24 +00:00
Erick Friis
f60110107c docs: ganalytics in api ref (#28697) 2024-12-12 23:55:59 +00:00
Michael Chin
28cb2cefc6 docs: Fix stack diagram in community README (#28685)
- **Description:** The stack diagram illustration in the community
README fails to render due to an invalid branch reference. This PR
replaces the broken image link with a valid one referencing master
branch.
2024-12-12 13:33:50 -08:00
Botong Zhu
13c3c4a210 community: fixes json loader not getting texts with json standard (#27327)
This PR fixes `JSONLoader._get_text` not converting objects to JSON
strings correctly. If an object is serializable and is not a dict,
JSONLoader uses Python's built-in `str()` to convert it to a string.
This may produce strings that don't follow the JSON standard: for
example, a list is converted to a string with single quotes, and calling
`json.loads` on that string raises an error.
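For example, illustrating the difference:

```python
import json

value = ["a", "b"]
print(str(value))         # "['a', 'b']": single quotes, not valid JSON
print(json.dumps(value))  # '["a", "b"]': valid JSON, round-trips via json.loads
```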

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 19:33:45 +00:00
Lorenzo
4149c0dd8d community: add method to create branch and list files for gitlab tool (#27883)
### About

- **Description:** The Gitlab utilities used for the Gitlab tool have no
methods to create branches or to list branches and files, although this
already exists for Github
- **Issue:** None
- **Dependencies:** None

This pull request adds the methods:
- create_branch
- list_branches_in_repo
- set_active_branch
- list_files_in_main_branch
- list_files_in_bot_branch
- list_files_from_directory

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 19:11:35 +00:00
Prathamesh Nimkar
ca054ed1b1 community: ChatSnowflakeCortex - Add streaming functionality (#27753)
Description:

snowflake.py
- Added `_stream` and `_stream_content` methods to enable streaming
functionality
- Fixed pydantic issues and added functionality in line with the overall
langchain version upgrade
- Added a `bind_tools` method to support agentic workflows through
langgraph
- Updated the `_generate` method to account for agentic workflow support
through langgraph
- Cosmetic changes to comments and if conditions

snowflake.ipynb
- Added a `_stream` example
- Cosmetic changes to comments
- Fixed lint errors

check_pydantic.sh
- Decreased the counter from 126 to 125 as suggested when formatting
---------

Co-authored-by: Prathamesh Nimkar <prathamesh.nimkar@snowflake.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-11 18:35:40 -08:00
Wang, Yi
d834c6b618 huggingface: fix tool argument serialization in _convert_TGI_message_to_LC_message (#26075)
Currently `_convert_TGI_message_to_LC_message` replaces `'` with `"` in
the tool arguments, so an argument like "It's" is converted to `It"s`,
which can cause a JSON parser to fail.
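For illustration:

```python
import json

json.loads('{"arg": "It\'s"}')     # fine: a single quote is legal inside a JSON string
try:
    json.loads('{"arg": "It"s"}')  # the unescaped " terminates the string early
except json.JSONDecodeError as e:
    print(e)
```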

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Vadym Barda <vadym@langchain.dev>
2024-12-11 18:34:32 -08:00
Lakindu Boteju
5a31792bf1 community: Add support for cross-region inference profile IDs in Bedrock Anthropic Claude token cost calculation (#28167)
This change modifies the token cost calculation logic to support
cross-region inference profile IDs for Anthropic Claude models. Instead
of explicitly listing all regional variants of new inference profile IDs
in the cost dictionaries, the code now extracts a base model ID from the
input model ID (or inference profile ID), making it more maintainable
and automatically supporting new regional variants.

These inference profile IDs follow the format:
`<region>.<vendor>.<model-name>` (e.g.,
`us.anthropic.claude-3-haiku-xxx`, `eu.anthropic.claude-3-sonnet-xxx`).
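A sketch of the extraction idea described above (illustrative, not the PR's exact code; the set of region prefixes is an assumption):

```python
def base_model_id(model_id: str) -> str:
    """Strip a cross-region prefix: 'us.anthropic.claude-3-haiku-xxx' -> 'anthropic.claude-3-haiku-xxx'."""
    prefix, _, rest = model_id.partition(".")
    return rest if prefix in {"us", "eu", "apac"} else model_id
```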

Cross-region inference profiles are system-defined identifiers that
enable distributing model inference requests across multiple AWS
regions. They help manage unplanned traffic bursts and enhance
resilience during peak demands without additional routing costs.

References for Amazon Bedrock's cross-region inference profiles:-
-
https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html
-
https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 02:33:50 +00:00
fatmelon
d1e0ec7b55 community: VectorStores: Azure Cosmos DB Mongo vCore with DiskANN (#27329)
# Description
Add a new vector index type `diskann` to Azure Cosmos DB Mongo vCore
vector store. Paper of DiskANN can be found here [DiskANN: Fast Accurate
Billion-point Nearest Neighbor Search on a Single
Node](https://proceedings.neurips.cc/paper_files/paper/2019/file/09853c7fb1d3f8ee67a61b6bf4a7f8e6-Paper.pdf).

## Sample Usage
```python
from pymongo import MongoClient

# INDEX_NAME = "izzy-test-index-2"
# NAMESPACE = "izzy_test_db.izzy_test_collection"
# DB_NAME, COLLECTION_NAME = NAMESPACE.split(".")

client: MongoClient = MongoClient(CONNECTION_STRING)
collection = client[DB_NAME][COLLECTION_NAME]

model_deployment = os.getenv(
    "OPENAI_EMBEDDINGS_DEPLOYMENT", "smart-agent-embedding-ada"
)
model_name = os.getenv("OPENAI_EMBEDDINGS_MODEL_NAME", "text-embedding-ada-002")

vectorstore = AzureCosmosDBVectorSearch.from_documents(
    docs,
    openai_embeddings,
    collection=collection,
    index_name=INDEX_NAME,
)

# Read more about these variables in detail here. https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search
maxDegree = 40
dimensions = 1536
similarity_algorithm = CosmosDBSimilarityType.COS
kind = CosmosDBVectorSearchType.VECTOR_DISKANN
lBuild = 20

vectorstore.create_index(
    dimensions=dimensions,
    similarity=similarity_algorithm,
    kind=kind,
    max_degree=maxDegree,
    l_build=lBuild,
)
```

## Dependencies
No additional dependencies were added

---------

Co-authored-by: Yang Qiao (from Dev Box) <yangqiao@microsoft.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 01:54:04 +00:00
manukychen
ba9b95cd23 Community: Adding bulk_size as a setable param for OpenSearchVectorSearch (#28325)
Description:
When using `langchain.retrievers.parent_document_retriever` with an
`OpenSearchVectorSearch` vector store, the `bulk_size` param passed into
the `OpenSearchVectorSearch` class did not take effect in
`ParentDocumentRetriever.add_documents()`: it was overwritten with the
default of 500 by the `OpenSearchVectorSearch` methods (e.g.,
`add_texts()`, `add_embeddings()`, ...).

So I made this PR request to fix this, thanks!

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 01:45:22 +00:00
Erick Friis
0af5ad8262 docs: provider list from packages.yml (#28677) 2024-12-12 00:12:30 +00:00
Ayantunji Timilehin
a4713cab47 FIX: typos in docs (#28679)
- **Twitter handle:** @timi471
2024-12-11 16:06:04 -08:00
xintoteai
45f9c9ae88 langchain: fixed weaviate (v4) vectorstore import for self-query retriever (#28675)
Co-authored-by: Xin Heng <xin.heng@gmail.com>
2024-12-11 15:53:41 -08:00
Thomas van Dongen
ee640d6bd3 community: fixed bug in model2vec embedding code (#28670)
This PR fixes a bug in the current implementation of Model2Vec
embeddings where `embed_documents` does not work as expected.

- **Description**: the current implementation uses `encode_as_sequence`
for encoding documents. This is incorrect, as `encode_as_sequence`
creates token embeddings, not mean embeddings. The normal `encode`
function handles both single and batched inputs and should be used
instead. The return type was also incorrect, as `encode` returns a NumPy
array; this PR converts the embedding to a list so that the output is
consistent with the Embeddings ABC.
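A sketch of the corrected behavior using the underlying library directly (the model name is an example):

```python
from model2vec import StaticModel

model = StaticModel.from_pretrained("minishlab/potion-base-8M")
emb = model.encode(["hello world"])  # mean embeddings: a NumPy array of shape (1, dim)
vectors = emb.tolist()               # plain lists, matching the Embeddings ABC
```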
2024-12-11 15:50:56 -08:00
Brian Sharon
b20230c800 community: use correct id_key when deleting by id in LanceDB wrapper (#28655)
- **Description:** The current version of the `delete` method assumes
that the id field will always be called `id`.
- **Issue:** n/a
- **Dependencies:** n/a
- **Twitter handle:** ugh, Twitter :D 


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-11 23:49:35 +00:00
Huy Nguyen
8780f7a2ad Fix typo in doc for: Custom Functions & Pass Through Arguments pages (#28663)
- [x] Fix typo in Custom Output Parser doc
2024-12-11 15:47:14 -08:00
Mohammad Mohtashim
fa155a422f [Community]: requests_kwargs not being used in _fetch (#28646)
- **Description:** `requests_kwargs` was not being passed to `_fetch`,
which fetches pages asynchronously. This PR makes sure `requests_kwargs`
is passed to `_fetch` just as it is to `_scrape`.
- **Issue:** #28634

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-11 23:46:54 +00:00
bjoaquinc
8c37808d47 docs: added caution notes on Jina and LocalAI docs about openai sdk version compatibility (#28662)
- **Description:** Added notes on the Jina and LocalAI pages telling
users that they must use these integrations with openai SDK version 0.x,
because otherwise they will get an error saying that "openai has no
attribute error". This PR was recommended by @efriis
    - **Issue:** warns people about the issue in #28529
    - **Dependencies:** None
    - **Twitter handle:** JoaqCore



2024-12-11 15:46:32 -08:00
Mohammad Mohtashim
a37afbe353 mistral[minor]: Added Retrying Mechanism in case of Request Rate Limit Error for MistralAIEmbeddings (#27818)
- **Description:** In the event of a rate-limit error from the
MistralAI server, parsing the response JSON raises a KeyError. To address
this, a simple retry mechanism has been implemented to handle cases where
the request limit is exceeded.
  - **Issue:** #27790

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-11 17:53:42 -05:00
Vincent Zhang
df5008fe55 community[minor]: FAISS Filter Function Enhancement with Advanced Query Operators (#28207)
## Description
We are submitting as a team of four for a project. Other team members
are @RuofanChen03, @LikeWang10067, @TANYAL77.

This pull requests expands the filtering capabilities of the FAISS
vectorstore by adding MongoDB-style query operators indicated as
follows, while including comprehensive testing for the added
functionality.
- $eq (equals)
- $neq (not equals)
- $gt (greater than)
- $lt (less than)
- $gte (greater than or equal)
- $lte (less than or equal)
- $in (membership in list)
- $nin (not in list)
- $and (all conditions must match)
- $or (any condition must match)
- $not (negation of condition)


## Issue
This closes https://github.com/langchain-ai/langchain/issues/26379.


## Sample Usage
```python
import faiss
import asyncio
from langchain_community.vectorstores import FAISS
from langchain.schema import Document
from langchain_huggingface import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
documents = [
    Document(page_content="Process customer refund request", metadata={"schema_type": "financial", "handler_type": "refund",}),
    Document(page_content="Update customer shipping address", metadata={"schema_type": "customer", "handler_type": "update",}),
    Document(page_content="Process payment transaction", metadata={"schema_type": "financial", "handler_type": "payment",}),
    Document(page_content="Handle customer complaint", metadata={"schema_type": "customer","handler_type": "complaint",}),
    Document(page_content="Process invoice payment", metadata={"schema_type": "financial","handler_type": "payment",})
]

async def search(vectorstore, query, schema_type, handler_type, k=2):
    schema_filter = {"schema_type": {"$eq": schema_type}}
    handler_filter = {"handler_type": {"$eq": handler_type}}
    combined_filter = {
        "$and": [
            schema_filter,
            handler_filter,
        ]
    }
    base_retriever = vectorstore.as_retriever(
        search_kwargs={"k":k, "filter":combined_filter}
    )
    return await base_retriever.ainvoke(query)

async def main():
    vectorstore = FAISS.from_texts(
        texts=[doc.page_content for doc in documents],
        embedding=embeddings,
        metadatas=[doc.metadata for doc in documents]
    )
    
    def printt(title, documents):
        print(title)
        if not documents:
            print("\tNo documents found.")
            return
        for doc in documents:
            print(f"\t{doc.page_content}. {doc.metadata}")

    printt("Documents:", documents)
    printt('\nquery="process payment", schema_type="financial", handler_type="payment":', await search(vectorstore, query="process payment", schema_type="financial", handler_type="payment", k=2))
    printt('\nquery="customer update", schema_type="customer", handler_type="update":', await search(vectorstore, query="customer update", schema_type="customer", handler_type="update", k=2))
    printt('\nquery="refund process", schema_type="financial", handler_type="refund":', await search(vectorstore, query="refund process", schema_type="financial", handler_type="refund", k=2))
    printt('\nquery="refund process", schema_type="financial", handler_type="foobar":', await search(vectorstore, query="refund process", schema_type="financial", handler_type="foobar", k=2))
    print()

if __name__ == "__main__":
    asyncio.run(main())
```

## Output
```
Documents:
	Process customer refund request. {'schema_type': 'financial', 'handler_type': 'refund'}
	Update customer shipping address. {'schema_type': 'customer', 'handler_type': 'update'}
	Process payment transaction. {'schema_type': 'financial', 'handler_type': 'payment'}
	Handle customer complaint. {'schema_type': 'customer', 'handler_type': 'complaint'}
	Process invoice payment. {'schema_type': 'financial', 'handler_type': 'payment'}

query="process payment", schema_type="financial", handler_type="payment":
	Process payment transaction. {'schema_type': 'financial', 'handler_type': 'payment'}
	Process invoice payment. {'schema_type': 'financial', 'handler_type': 'payment'}

query="customer update", schema_type="customer", handler_type="update":
	Update customer shipping address. {'schema_type': 'customer', 'handler_type': 'update'}

query="refund process", schema_type="financial", handler_type="refund":
	Process customer refund request. {'schema_type': 'financial', 'handler_type': 'refund'}

query="refund process", schema_type="financial", handler_type="foobar":
	No documents found.

```

---------

Co-authored-by: ruofan chen <ruofan.is.awesome@gmail.com>
Co-authored-by: RickyCowboy <like.wang@mail.utoronto.ca>
Co-authored-by: Shanni Li <tanya.li@mail.utoronto.ca>
Co-authored-by: RuofanChen03 <114096642+ruofanchen03@users.noreply.github.com>
Co-authored-by: Like Wang <102838708+likewang10067@users.noreply.github.com>
2024-12-11 17:52:22 -05:00
Erick Friis
b9dd4f2985 docs: box to package table (#28676) 2024-12-11 13:01:00 -08:00
like
3048a9a26d community: tongyi multimodal response format fix to support langchain (#28645)
Description: The multimodal (Tongyi) response format `"message": {"role":
"assistant", "content": [{"text": "图像"}]}}]}` is not compatible with
LangChain.
Dependencies: No

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 21:13:26 +00:00
Bagatur
d0e662e43b community[patch]: Release 0.3.11 (#28658) 2024-12-10 20:51:13 +00:00
Bagatur
91227ad7fd langchain[patch]: Release 0.3.11 (#28657) 2024-12-10 12:28:14 -08:00
Bagatur
1fbd86a155 core[patch]: Release 0.3.24 (#28656) 2024-12-10 20:19:21 +00:00
Bagatur
e6a62d8422 core,langchain,community[patch]: allow langsmith 0.2 (#28598) 2024-12-10 18:50:58 +00:00
ccurme
bc4dc7f4b1 ollama[patch]: permit streaming for tool calls (#28654)
Resolves https://github.com/langchain-ai/langchain/issues/28543

Ollama recently
[released](https://github.com/ollama/ollama/releases/tag/v0.4.6) support
for streaming tool calls. Previously we would override the `stream`
parameter if tools were passed in.

Covered in standard tests here:
c1d348e95d/libs/standard-tests/langchain_tests/integration_tests/chat_models.py (L893-L897)

Before, the test generates one message chunk:
```python
[
    AIMessageChunk(
        content='',
        additional_kwargs={},
        response_metadata={
            'model': 'llama3.1',
            'created_at': '2024-12-10T17:49:04.468487Z',
            'done': True,
            'done_reason': 'stop',
            'total_duration': 525471208,
            'load_duration': 19701000,
            'prompt_eval_count': 170,
            'prompt_eval_duration': 31000000,
            'eval_count': 17,
            'eval_duration': 473000000,
            'message': Message(
                role='assistant',
                content='',
                images=None,
                tool_calls=[
                    ToolCall(
                        function=Function(name='magic_function', arguments={'input': 3})
                    )
                ]
            )
        },
        id='run-552bbe0f-8fb2-4105-ada1-fa38c1db444d',
        tool_calls=[
            {
                'name': 'magic_function',
                'args': {'input': 3},
                'id': 'b0a4dc07-7d7a-487b-bd7b-ad062c2363a2',
                'type': 'tool_call',
            },
        ],
        usage_metadata={
            'input_tokens': 170, 'output_tokens': 17, 'total_tokens': 187
        },
        tool_call_chunks=[
            {
                'name': 'magic_function',
                'args': '{"input": 3}',
                'id': 'b0a4dc07-7d7a-487b-bd7b-ad062c2363a2',
                'index': None,
                'type': 'tool_call_chunk',
            }
        ]
    )
]
```

After, it generates two (tool call in one, response metadata in
another):
```python
[
    AIMessageChunk(
        content='',
        additional_kwargs={},
        response_metadata={},
        id='run-9a3f0860-baa1-4bae-9562-13a61702de70',
        tool_calls=[
            {
                'name': 'magic_function',
                'args': {'input': 3},
                'id': '5bbaee2d-c335-4709-8d67-0783c74bd2e0',
                'type': 'tool_call',
            },
        ],
        tool_call_chunks=[
            {
                'name': 'magic_function',
                'args': '{"input": 3}',
                'id': '5bbaee2d-c335-4709-8d67-0783c74bd2e0',
                'index': None,
                'type': 'tool_call_chunk',
            },
        ],
    ),
    AIMessageChunk(
        content='',
        additional_kwargs={},
        response_metadata={
            'model': 'llama3.1',
            'created_at': '2024-12-10T17:46:43.278436Z',
            'done': True,
            'done_reason': 'stop',
            'total_duration': 514282750,
            'load_duration': 16894458,
            'prompt_eval_count': 170,
            'prompt_eval_duration': 31000000,
            'eval_count': 17,
            'eval_duration': 464000000,
            'message': Message(
                role='assistant', content='', images=None, tool_calls=None
            ),
        },
        id='run-9a3f0860-baa1-4bae-9562-13a61702de70',
        usage_metadata={
            'input_tokens': 170, 'output_tokens': 17, 'total_tokens': 187
        }
    ),
]
```
2024-12-10 12:54:37 -05:00
Tomaz Bratanic
704059466a Fix graph example documentation (#28653) 2024-12-10 17:46:50 +00:00
Johannes Mohren
c1d348e95d doc-loader: retain Azure Doc Intelligence API metadata in Document parser (#28382)
**Description**:
This PR modifies the doc_intelligence.py parser in the community package
to include all metadata returned by the Azure Doc Intelligence API in
the Document object. Previously, only the parsed content (markdown) was
retained, while other important metadata such as bounding boxes (bboxes)
for images and tables was discarded. These image bboxes are crucial for
supporting use cases like multi-modal RAG workflows when using Azure Doc
Intelligence.

The change ensures that all information returned by the Azure Doc
Intelligence API is preserved by setting the metadata attribute of the
Document object to the entire result returned by the API, rather than an
empty dictionary. This extends the parser's utility for complex use
cases without breaking existing functionality.

**Issue**:
This change does not address a specific issue number, but it resolves a
critical limitation in supporting multimodal workflows when using the
LangChain wrapper for the Azure API.

**Dependencies**:
No additional dependencies are required for this change.

---------

Co-authored-by: jmohren <johannes.mohren@aol.de>
2024-12-10 11:22:58 -05:00
Alex Tonkonozhenko
0d20c314dd Confluence Loader: Fix CQL loading (#27620)
fix #12082

2024-12-10 11:05:23 -05:00
Katarina Supe
aba2711e7f community: update Memgraph integration (#27017)
**Description:**
- **Memgraph** no longer relies on `Neo4jGraphStore` but **implements
`GraphStore`**, just like other graph databases.
- **Memgraph** no longer relies on `GraphQAChain`, but implements
`MemgraphQAChain`, just like other graph databases.
- The refresh schema procedure has been updated to try using `SHOW
SCHEMA INFO`. The fallback uses Cypher queries (a combination of schema
and Cypher) → **LangChain integration no longer relies on MAGE
library**.
- The **schema structure** has been reformatted. Regardless of the
procedures used to get schema, schema structure is the same.
- The `add_graph_documents()` method has been implemented. It transforms
`GraphDocument` into Cypher queries and creates a graph in Memgraph. It
implements the ability to use `baseEntityLabel` to improve speed
(`baseEntityLabel` has an index on the `id` property). It also
implements the ability to include sources by creating a `MENTIONS`
relationship to the source document.
- Jupyter Notebook for Memgraph has been updated.
- **Issue:** /
- **Dependencies:** /
- **Twitter handle:** supe_katarina (DX Engineer @ Memgraph)

Closes #25606
2024-12-10 10:57:21 -05:00
ccurme
5c6e2cbcda ollama[patch]: support structured output (#28629)
- Bump minimum version of `ollama` to 0.4.4 (which also addresses
https://github.com/langchain-ai/langchain/issues/28607).
- Support recently-released [structured
output](https://ollama.com/blog/structured-outputs) feature. This can be
accessed by calling `.with_structured_output` with
`method="json_schema"` (choice of name
[mirrors](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI.with_structured_output)
what we have for OpenAI's structured output feature).

`ChatOllama` previously implemented `.with_structured_output` via the
[base
implementation](ec9b41431e/libs/core/langchain_core/language_models/chat_models.py (L1117)).
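A minimal sketch of the new option (the model name is an example; requires the `ollama` Python package >= 0.4.4 per the description):

```python
from pydantic import BaseModel

from langchain_ollama import ChatOllama

class Joke(BaseModel):
    setup: str
    punchline: str

llm = ChatOllama(model="llama3.1")
structured_llm = llm.with_structured_output(Joke, method="json_schema")
joke = structured_llm.invoke("Tell me a joke about cats.")
```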
2024-12-10 10:36:00 -05:00
Bagatur
24292c4a31 core[patch]: Release 0.3.23 (#28648) 2024-12-10 10:01:16 +00:00
Bagatur
e24f86e55f core[patch]: return ToolMessage from tool (#28605) 2024-12-10 09:59:38 +00:00
hsm207
d0e95971f5 langchain-weaviate: Remove outdated docs (#28058)


Docs on how to do hybrid search with Weaviate are covered
[here](https://python.langchain.com/docs/integrations/vectorstores/weaviate/)

@efriis

---------

Co-authored-by: pookam90 <pookam@microsoft.com>
Co-authored-by: Pooja Kamath <60406274+Pookam90@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 05:00:07 +00:00
Erick Friis
ef2f875dfb core: deprecate PipelinePromptTemplate (#28644) 2024-12-10 03:56:48 +00:00
TamagoTorisugi
0f0df2df60 fix: Set default search_type to 'similarity' in as_retriever method of AzureSearch (#28376)
**Description**
This PR updates the `as_retriever` method in the `AzureSearch` to ensure
that the `search_type` parameter defaults to 'similarity' when not
explicitly provided.

Previously, if `search_type` was omitted it did not default to any
specific value; instead it was inherited from
`AzureSearchVectorStoreRetriever`, which defaults to 'hybrid'.

This change ensures that the intended default behavior aligns with the
expected usage.

**Issue**
No specific issue was found related to this change.

**Dependencies**
No new dependencies are introduced with this change.
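For illustration, a minimal sketch of the adjusted default (endpoint and key are placeholders; the fake embeddings stand in for a real model):

```python
from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_core.embeddings import DeterministicFakeEmbedding

embeddings = DeterministicFakeEmbedding(size=1536)  # stand-in for real embeddings
vector_store = AzureSearch(
    azure_search_endpoint="https://<service>.search.windows.net",  # placeholder
    azure_search_key="<api-key>",                                  # placeholder
    index_name="my-index",
    embedding_function=embeddings.embed_query,
)

retriever = vector_store.as_retriever()  # search_type now defaults to "similarity"
hybrid = vector_store.as_retriever(search_type="hybrid")  # explicit opt-in, as before
```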

---------

Co-authored-by: prrao87 <prrao87@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 03:40:04 +00:00
Prashanth Rao
8c6eec5f25 community: KuzuGraph needs allow_dangerous_requests, add graph documents via LLMGraphTransformer (#27949)
- [x] **PR title**: "community: Kuzu - Add graph documents via
LLMGraphTransformer"
- This PR adds a new method `add_graph_documents` to use the
`GraphDocument`s extracted by `LLMGraphTransformer` and store them in a
Kùzu graph backend.
- This allows users to transform unstructured text into a graph that
uses Kùzu as the graph store.

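For illustration, a minimal sketch of the flow described above, assuming `kuzu` is installed (the extracted `GraphDocument`s are a placeholder, and `include_source` is assumed to mirror other graph stores):

```python
import kuzu
from langchain_community.graphs import KuzuGraph

db = kuzu.Database("example_db")
graph = KuzuGraph(db, allow_dangerous_requests=True)

graph_documents = [...]  # GraphDocuments produced by LLMGraphTransformer (placeholder)
graph.add_graph_documents(graph_documents, include_source=True)
```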

---------

Co-authored-by: pookam90 <pookam@microsoft.com>
Co-authored-by: Pooja Kamath <60406274+Pookam90@users.noreply.github.com>
Co-authored-by: hsm207 <hsm207@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 03:15:28 +00:00
Pooja Kamath
9b7d49f7da docs: Adding Docs for new SQLServer Vector store package (#28173)
**Description:** Adding Documentation for new SQL Server Vector Store
Package.

Changed files:
- Added new vector store notebook: docs\docs\integrations\vectorstores\sqlserver.ipynb
- FeatureTables.js: docs\src\theme\FeatureTables.js
- microsoft.mdx: docs\docs\integrations\providers\microsoft.mdx

Detailed documentation on API -
https://python.langchain.com/api_reference/sqlserver/index.html

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 03:00:10 +00:00
Erick Friis
5afeb8b46c infra: merge queue allowed (#28641) 2024-12-09 17:11:15 -08:00
Filip Ratajczak
4e743b5427 Core: google docstring parsing fix (#28404)
- **Description:** Added a fix for invalid parsing of Google-style
docstrings such as:

      Args:
          net_annual_income (float): The user's net annual income (in current year
          dollars).

- **Issue:** Previous code would return arg = "net_annual_income (float)",
which caused an exception in `_validate_docstring_args_against_annotations`.
- **Dependencies:** None


Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 00:27:25 +00:00
Arnav Priyadarshi
b78b2f7a28 community[fix]: Update Perplexity to pass parameters into API calls (#28421)

- **Description:** I realized the invocation parameters were not being
passed into `_generate`, so I added them in. I then noticed the parameters
contained some old fields designed for an older OpenAI client, which I
removed. Parameters work fine now.
- **Issue:** Fixes #28229 
- **Dependencies:** No new dependencies.  
- **Twitter handle:** @arch_plane

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 00:23:31 +00:00
Erick Friis
34ca31e467 docs: integration contrib typo (#28642) 2024-12-09 23:46:31 +00:00
Clément Jumel
cf6d1c0ae7 docs: add Linkup integration documentation (#28366)
## Description

First of all, thanks for the great framework that is LangChain!

At [Linkup](https://www.linkup.so/) we're working on an API to connect
LLMs and agents to the internet and our partner sources. We'd be super
excited to see our API integrated in LangChain! This essentially
consists of adding a LangChain retriever and tool, which is done in our
own [package](https://pypi.org/project/langchain-linkup/). Here we're
simply following the [integration
documentation](https://python.langchain.com/docs/contributing/how_to/integrations/)
and updating the documentation of LangChain to mention the Linkup
integration.

We do have tests (both unit & integration) in our [source
code](https://github.com/LinkupPlatform/langchain-linkup), and tried to
follow as close as possible the [integration
documentation](https://python.langchain.com/docs/contributing/how_to/integrations/)
which specifically requests to focus on documentation changes for an
integration PR, so I'm not adding tests here, even though the PR
checklist seems to suggest so. Feel free to correct me if I got this
wrong!

By the way, we would be thrilled by being mentioned in the list of
providers which have standalone packages
[here](https://langchain-git-fork-linkupplatform-cj-doc-langchain.vercel.app/docs/integrations/providers/),
is there something in particular for us to do for that? 🙂

## Twitter handle

Linkup_platform
2024-12-09 14:36:25 -08:00
Amir Sadeghi
2c49f587aa community[fix]: could not locate runnable browser (#28289)
Set `open_browser` to false to resolve the "could not locate runnable browser"
error when the default browser is None.


Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 21:05:52 +00:00
Martin Triska
75bc6bb191 community: [bugfix] fix source path for office files in O365 (#28260)
# What problem are we fixing?

Currently documents loaded using `O365BaseLoader` fetch source from
`file.web_url` (where `file` is `<class 'O365.drive.File'>`). This works
well for `.pdf` documents. Unfortunately office documents (`.xlsx`,
`.docx` ...) pass their `web_url` in following format:

`https://sharepoint_address/sites/path/to/library/root/Doc.aspx?sourcedoc=%XXXXXXXX-1111-1111-XXXX-XXXXXXXXXX%7D&file=filename.xlsx&action=default&mobileredirect=true`

This obfuscates the path to the file. This PR utilizes the parent
folder's path and file name to reconstruct the actual location of the
file. Knowing the file's location can be crucial for some RAG
applications (the path to the file can carry information we don't want to
lose).

@vbarda Could you please look at this one? I'm @-mentioning you since
we've already closed some PRs together :-)

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 12:34:59 -08:00
Erick Friis
534b8f4364 standard-tests: release 0.3.7 (#28637) 2024-12-09 15:12:18 -05:00
Tomaz Bratanic
6815981578 Switch graphqa example in docs to langgraph (#28574)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-09 14:46:00 -05:00
Naka Masato
ce3b69aa05 community: add include_labels option to ConfluenceLoader (#28259)
## **Description:**

Enable `ConfluenceLoader` to include labels with the `include_labels` option
(`false` by default for backward compatibility). The labels are set in the
`Document` `metadata`, e.g. `{"labels": ["l1", "l2"]}`

## Notes

The Confluence API supports fetching labels by passing `metadata.labels` in the
`expand` query parameter

All of the following functions support `expand` in the same way:
- confluence.get_page_by_id
- confluence.get_all_pages_by_label
- confluence.get_all_pages_from_space
- cql (internally using
[/api/content/search](https://developer.atlassian.com/cloud/confluence/rest/v1/api-group-content/#api-wiki-rest-api-content-search-get))

## **Issue:**

No issue related to this PR.

## **Dependencies:** 

No changes.

## **Twitter handle:** 

[@gymnstcs](https://x.com/gymnstcs)
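For illustration, a minimal sketch of the new option (URL, credentials, and space key are placeholders):

```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://example.atlassian.net/wiki",  # placeholder
    username="me@example.com",                 # placeholder
    api_key="<api-token>",                     # placeholder
    space_key="SPACE",
    include_labels=True,  # False by default for backward compatibility
)
docs = loader.load()
print(docs[0].metadata.get("labels"))  # e.g. ["l1", "l2"]
```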



---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 19:35:01 +00:00
Rajendra Kadam
242fee11be community[minor] Pebblo: Support for new Pinecone class PineconeVectorStore (#28253)
- **Description:** Support for new Pinecone class PineconeVectorStore in
PebbloRetrievalQA.
- **Issue:** NA
- **Dependencies:** NA
- **Tests:** -

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 19:33:54 +00:00
Pranav Ramesh Lohar
85114b4f3a docs: Update sql-query doc by fixing spelling mistake of chinhook.db to chinook.db (#28465)
Link (of doc with mistake):
https://python.langchain.com/v0.1/docs/use_cases/sql/quickstart/#:~:text=Now%2C-,Chinhook.db,-is%20in%20our

  - **Description:** spelling mistake in the how-to docs of sql-db
  - **Issue:** just a spelling mistake.
  - **Dependencies:** NA
2024-12-09 14:15:29 -05:00
nikitajoyn
9fcd203556 partners/mistralai: Fix KeyError in Vertex AI stream (#28624)
- **Description:** Streaming response from a Mistral model using Vertex AI
raises a KeyError when trying to access the `choices` key, which the last
chunk doesn't have. The fix is to access the key safely using `get()`.
  - **Issue:** https://github.com/langchain-ai/langchain/issues/27886
  - **Dependencies:**
  - **Twitter handle:**
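For illustration, the defensive-access pattern described above (illustrative only, not the library's exact code):

```python
# the final chunk streamed from Vertex AI may omit "choices" entirely
chunk = {"usage": {"total_tokens": 42}}  # illustrative final chunk

for choice in chunk.get("choices") or []:  # .get() avoids the KeyError
    delta = choice.get("delta", {})
```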
2024-12-09 14:14:58 -05:00
Huy Nguyen
bdb4cf7cc0 Fix typo in Custom Output Parser doc (#28617)
- [x] Fix typo in Custom Output Parser doc
2024-12-09 14:14:00 -05:00
ccurme
b476fdb54a docs: update readme (#28631) 2024-12-09 13:50:12 -05:00
maang-h
b64d846347 docs: Standardize MoonshotChat docstring (#28159)
- **Description:** Add docstring

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 18:46:25 +00:00
Erick Friis
4c70ffff01 standard-tests: sync/async vectorstore tests conditional (#28636)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-09 18:02:55 +00:00
ccurme
ffb5c1905a openai[patch]: release 0.2.12 (#28633) 2024-12-09 12:38:13 -05:00
ccurme
6e6061fe73 openai[patch]: bump minimum SDK version (#28632)
Resolves https://github.com/langchain-ai/langchain/issues/28625
2024-12-09 11:28:05 -05:00
Mohammad Mohtashim
ec9b41431e [Core]: Small Docstring Clarification for BaseTool (#28148)
- **Description:** `kwargs` were not being passed to `run` of the
`BaseTool`, which has been fixed
- **Issue:** #28114

---------

Co-authored-by: Stevan Kapicic <kapicic.ste1@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 06:10:19 +00:00
Erick Friis
cef21a0b49 cli: warning on app add (#28619)
instead of #28128
2024-12-09 06:07:14 +00:00
Ankit Dangi
90f162efb6 text-splitters: add pydocstyle linting (#28127)
As seen in #23188, turned on Google-style docstrings by enabling
`pydocstyle` linting in the `text-splitters` package. Each resulting
linting error was addressed: some were ignored or suppressed, others were
resolved, and missing docstrings were added.

Fixes one of the checklist items from #25154, similar to #25939 in
`core` package. Ran `make format`, `make lint` and `make test` from the
root of the package `text-splitters` to ensure no issues were found.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 06:01:03 +00:00
Erick Friis
b53f07bfb9 docs: more integration contrib (#28618) 2024-12-09 05:41:08 +00:00
WGNW_MG
eabe587787 community[patch]:Fix for get_openai_callback() return token_cost=0.0 when model is gpt-4o-11-20 (#28408)
- **Description:** update MODEL_COST_PER_1K_TOKENS for the new gpt-4o-11-20.
- **Issue:** with the latest gpt-4o-11-20, the openai callback returns
token_cost=0.0
- **Dependencies:** None (just a simple dict fix.)
- **Twitter handle:** I Don't Use Twitter. 
- (However..., I have a YouTube channel. Could you upload this there, by
any chance?
https://www.youtube.com/@%EA%B2%9C%EC%B0%BD%EB%B6%80%EA%B3%A0%EB%AC%B8AI%EC%9E%90%EB%AC%B8%EC%84%BC%EC%84%B8)
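For illustration, a minimal sketch of the affected callback (the dated model string is an assumption; cost is non-zero once the table entry exists):

```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-2024-11-20")  # assumed full model name
with get_openai_callback() as cb:
    llm.invoke("Hello!")
print(cb.total_cost)  # previously 0.0 for this model; now computed from MODEL_COST_PER_1K_TOKENS
```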
2024-12-08 20:46:50 -08:00
Fahim Zaman
481c4bfaba core[patch]: Fixed trim functions, and added corresponding unit test for the solved issue (#28429)
- **Description:** 
- Trim functions were incorrectly deleting nodes with more than 1
outgoing/incoming edge, so an extra condition was added to check for
this directly. A unit test "test_trim_multi_edge" was written to cover
this case specifically.
- **Issue:** 
  - Fixes #28411 
  - Fixes https://github.com/langchain-ai/langgraph/issues/1676
- **Dependencies:** 
  - No changes were made to the dependencies

- [x] Unit tests were added to verify the changes.
- [x] Updated documentation where necessary.
- [x] Ran make format, make lint, and make test to ensure compliance
with project standards.

---------

Co-authored-by: Tasif Hussain <tasif006@gmail.com>
2024-12-08 20:45:28 -08:00
Inah Jeon
54fba7e520 docs: change upstage solar model descriptions (#28419)
- **Description:** We have launched the new **Solar Pro** model, and
the documentation has been updated to include its details and features.

2024-12-08 20:43:19 -08:00
funkyrailroad
079c7ea0fc docs: Fix typo in weaviate integration docs (#28425)
- [ ] "docs: Fix typo in weaviate integration docs"
2024-12-08 20:42:00 -08:00
Zapiron
e8508fb4c6 docs: Fixed mini typo in recommend and improve the phrasing (#28438)
Fixed a typo on the word "recommend" and generally improved the phrasing
2024-12-08 20:30:43 -08:00
Zapiron
220b33df7f docs: Fixed broken link in the warning message to @tool API Reference… (#28437)
Fixed the broken hyperlink in the docstring warning section to point to the
correct `@tool` API reference
2024-12-08 20:29:08 -08:00
Zapiron
1fc4ac32f0 docs: Resolve incorrect import of AttributeInfo for self-query retriever section (#28446)
Most of the notebooks in the self-query retriever section imported
`AttributeInfo` from `query_constructor.base` instead of
`query_constructor.schema`, as found in the API reference
[here](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.query_constructor.schema.AttributeInfo.html)

This PR resolves the wrong imports from most of the notebooks
2024-12-08 20:23:26 -08:00
Marco Perini
2354bb7bfa partners: 🕷️🦜 ScrapeGraph API Integration (#28559)
Hi Langchain team!

I'm the co-founder and maintainer of
[ScrapeGraphAI](https://scrapegraphai.com/).
By following the integration
[guide](https://python.langchain.com/docs/contributing/how_to/integrations/publish/)
on your site, I have created a new lib called
[langchain-scrapegraph](https://github.com/ScrapeGraphAI/langchain-scrapegraph).

With this PR I would like to integrate ScrapeGraph as a provider in
LangChain, adding the required documentation files.
Let me know if any changes are needed for it to be properly
integrated, both in the lib and in the documentation.

Thank you 🕷️🦜


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 02:38:21 +00:00
Abhinav
317a38b83e community[minor]: Add support for model2vec embeddings (#28507)
This PR adds an embeddings integration for model2vec, the
`Model2vecEmbeddings` class.

- **Description**: [Model2Vec](https://github.com/MinishLab/model2vec)
lets you turn any sentence transformer into a really small static model
and makes running the model faster.
- **Issue**:
- **Dependencies**: model2vec
([pypi](https://pypi.org/project/model2vec/))
- **Twitter handle:**

- [x] **Add tests and docs**: 
-
[Test](https://github.com/blacksmithop/langchain/blob/model2vec_embeddings/libs/community/langchain_community/embeddings/model2vec.py),
[docs](https://github.com/blacksmithop/langchain/blob/model2vec_embeddings/docs/docs/integrations/text_embedding/model2vec.ipynb)

- [x] **Lint and test**:
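For illustration, a minimal sketch of the new class (the import path and positional constructor argument are assumptions based on this PR; the model name is a placeholder):

```python
from langchain_community.embeddings import Model2vecEmbeddings

embeddings = Model2vecEmbeddings("minishlab/potion-base-8M")  # placeholder model
vector = embeddings.embed_query("hello world")
vectors = embeddings.embed_documents(["hello", "world"])
```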

---------

Co-authored-by: Abhinav KM <abhinav.m@zerone-consulting.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-12-09 02:17:22 +00:00
Mateusz Szewczyk
fbf0704e48 docs: Update IBM documentation (#28503)
2024-12-08 12:40:29 -08:00
Mohammad Mohtashim
524ee6d9ac Invalid tool_choice being passed to ChatLiteLLM (#28198)
- **Description:** An invalid `tool_choice` is given to `ChatLiteLLM`'s
`bind_tools` due to its parent class's default value being passed through
`with_structured_output`.
- **Issue:** #28176
2024-12-07 14:33:40 -05:00
Erick Friis
dd0085a9ff docs: standard tests to markdown, load templates from files (#28603) 2024-12-07 01:37:21 +00:00
Erick Friis
9b848491c8 docs: tool, retriever contributing docs (#28602) 2024-12-07 00:36:55 +00:00
Erick Friis
5e8553c31a standard-tests: retriever docstrings (#28596) 2024-12-07 00:32:19 +00:00
ccurme
d801c6ffc7 tests[patch]: nits (#28601) 2024-12-07 00:13:04 +00:00
Ikko Eltociear Ashimine
a32035d17d docs: update uptrain.ipynb (#28561)
evluate -> evaluate
2024-12-06 19:09:48 -05:00
Erick Friis
07c2ac765a community: release 0.3.10 (#28600) 2024-12-07 00:07:13 +00:00
Erick Friis
4a7dc6ec4c standard-tests: release 0.3.6 (#28599) 2024-12-07 00:05:04 +00:00
ccurme
80a88f8f04 tests[patch]: update API ref for chat models (#28594) 2024-12-06 19:00:14 -05:00
Erick Friis
0eb7ab65f1 multiple: fix xfailed signatures (#28597) 2024-12-06 15:39:47 -08:00
Erick Friis
b7c2029e84 standard-tests: root docstrings (#28595) 2024-12-06 15:14:52 -08:00
Erick Friis
925ca75ca5 docs: format (#28593) 2024-12-06 15:08:25 -08:00
Erick Friis
f943205ebf docs: dont document root init (#28592) 2024-12-06 15:07:53 -08:00
Erick Friis
9e2abcd152 standard-tests: show right classes in api docs (#28591) 2024-12-06 14:48:13 -08:00
Erick Friis
246c10a1cc standard-tests: private members and tools unit troubleshoot (#28590) 2024-12-06 13:52:58 -08:00
Erick Friis
1cedf401a7 docs: enable private docstring submembers sphinx (#28589) 2024-12-06 13:36:34 -08:00
Erick Friis
791d7e965e docs: enable private docstring modules sphinx (#28588) 2024-12-06 13:23:06 -08:00
Erick Friis
4f99952129 docs: enable private docstring members sphinx (#28586) 2024-12-06 13:19:52 -08:00
Bagatur
221ab03fe4 docs: readme/intro nits (#28581) 2024-12-06 12:52:15 -08:00
Erick Friis
e6663b69f3 langchain: release 0.3.10 (#28585) 2024-12-06 20:20:24 +00:00
Erick Friis
c38b845d7e core: fix path test (#28584) 2024-12-06 20:05:18 +00:00
ccurme
2c6bc74cb1 multiple: combine sync/async vector store standard test suites (#28580)
Breaking change in `langchain-tests`.
2024-12-06 14:55:06 -05:00
Bagatur
dda9f90047 core[patch]: Release 0.3.22 (#28582) 2024-12-06 19:36:53 +00:00
ccurme
15cbc36a23 docs[patch]: update contributor docs for integrations (#28576)
- Reformat tabs
- Add code snippets inline
- Add embeddings content
2024-12-06 13:33:24 -05:00
ccurme
f3dc142d3c cli[patch]: implement minimal starter vector store (#28577)
Basically the same as core's in-memory vector store. Removed some
optional methods.
2024-12-06 13:10:22 -05:00
Erick Friis
5277a021c1 docs: raw loader codeblock (#28548)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-06 09:26:34 -08:00
Erick Friis
18386c16c7 core, tests: more tolerant _aget_relevant_documents function (#28462) 2024-12-06 00:49:30 +00:00
Erick Friis
bc636ccc60 cli: release 0.0.35 (#28557) 2024-12-05 16:40:52 -08:00
Erick Friis
7ecf38f4fa cli: create specific files from template (#28556) 2024-12-06 00:32:47 +00:00
Erick Friis
a197e0ba3d docs: custom deprecated coloring, organize css a bit (#28555) 2024-12-05 23:57:54 +00:00
Erick Friis
478def8dcc core: deprecation doc removal (#28553)
![ScreenShot 2024-12-05 at 02 33
43PM@2x](https://github.com/user-attachments/assets/e1ce495b-90ca-41c7-9a65-b403a934675c)
2024-12-05 15:35:28 -08:00
cinqisap
482e8a7855 community: Add support for SAP HANA Vector hnsw index creation (#27884)
**Description:** Added support for creating indexes in the SAP HANA Vector
engine.
 
**Changes**: 
1. Introduced a new function `create_hnsw_index` in `hanavector.py` that
enables the creation of indexes for SAP HANA Vector.
2. Added integration tests for the index creation function to ensure
functionality.
3. Updated the documentation to reflect the new index creation feature,
including examples and output from the notebook.
4. Fixed the operator issue in the `_process_filter_object` function and
changed the array argument to a placeholder in the similarity search SQL
statement.
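For illustration, a minimal sketch of the new index creation (connection details are placeholders; fake embeddings stand in for a real model, and optional HNSW parameters are omitted):

```python
from hdbcli import dbapi
from langchain_community.vectorstores.hanavector import HanaDB
from langchain_core.embeddings import DeterministicFakeEmbedding

conn = dbapi.connect(address="<host>", port=443, user="<user>", password="<pw>")  # placeholders
db = HanaDB(connection=conn, embedding=DeterministicFakeEmbedding(size=768), table_name="LC_VECTORS")
db.create_hnsw_index()  # new: builds an HNSW index on the vector column
```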

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-05 23:29:08 +00:00
blaufink
28f8d436f6 mistral: fix of issue #26029 (#28233)
- Description: Azure AI takes an issue with the safe_mode parameter
being set to False instead of None. Therefore, this PR changes the
default value of safe_mode from False to None. This results in it being
filtered out before the request is sent - avoiding the extra-parameter
issue described below.

- Issue: #26029

- Dependencies: /

---------

Co-authored-by: blaufink <sebastian.brueckner@outlook.de>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-05 23:28:12 +00:00
Erick Friis
7a96ce1320 docs: deprecated styling (#28550) 2024-12-05 14:05:25 -08:00
ccurme
5519a1c1d3 docs[patch]: improve integration docs (#28547)
Alternative to https://github.com/langchain-ai/langchain/pull/28426

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-05 16:32:09 -05:00
Erick Friis
66f819c59e infra: run cli tests on test changes (#28542) 2024-12-05 13:09:25 -08:00
Erick Friis
3d5493593b docs: deprecated styling (#28546)
from

![image](https://github.com/user-attachments/assets/52565861-618b-407c-8bb4-f16dce3b5718)

to

![image](https://github.com/user-attachments/assets/fafeef00-6101-42cd-a8c6-ca15808d1e7c)
2024-12-05 12:41:41 -08:00
ccurme
ecdfc98ef6 tests[patch]: run standard tests for embeddings and populate embeddings API ref (#28545)
plus minor updates to chat models and vector store API refs
2024-12-05 19:39:03 +00:00
dwelch-spike
1581857e3d docs: add Aerospike to providers list (#28066)
**Description:** Adds Aerospike to the list of langchain providers and
points users to documentation for the vector store and Python SDK.


---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-05 17:29:00 +00:00
WEIQ-beepbeep
1e285cb5f3 docs: Updated incorrect type used for the multiply_by_max function (#28042)
In the `multiply_by_max()` tool, `a` is a scale factor but it is
annotated with a string type.
2024-12-05 09:15:59 -08:00
ccurme
b8e861a63b openai[patch]: add standard tests for embeddings (#28540) 2024-12-05 17:00:27 +00:00
ZhangShenao
d26555c682 [VectorStore] Improvement: Improve chroma vector store (#28524)
- Complete unit test
- Fix spelling error
2024-12-05 11:58:32 -05:00
John
7d44316d92 docs: Add link for how to install extras (#28537)
**Description:** Add link for how to install extras
2024-12-05 11:57:51 -05:00
ccurme
8f9b3b7498 chroma[patch]: fix bug (#28538)
Fix bug introduced in
https://github.com/langchain-ai/langchain/pull/27995

If all document IDs are `""`, the chroma SDK will raise
```
DuplicateIDError: Expected IDs to be unique
```

Caught by [docs
tests](https://github.com/langchain-ai/langchain/actions/runs/12180395579/job/33974633950),
but added a test to langchain-chroma as well.
2024-12-05 15:37:19 +00:00
Erick Friis
ecff9a01e4 cli: release 0.0.34 (#28525) 2024-12-05 15:35:49 +00:00
ccurme
d9e42a1517 langchain[patch]: fix deprecation warning (#28535) 2024-12-05 14:49:10 +00:00
Erick Friis
0f539f0246 standard-tests: release 0.3.5 (#28526) 2024-12-05 00:41:07 -08:00
Erick Friis
43c35d19d4 cli: standard tests in cli, test that they run, skip vectorstore tests (#28521) 2024-12-05 00:38:32 -08:00
Erick Friis
c5acedddc2 anthropic: timeout in tests (10s) (#28488) 2024-12-04 16:03:38 -08:00
ccurme
f459754470 tests[patch]: populate API reference for vector stores (#28520) 2024-12-05 00:02:31 +00:00
Erick Friis
2b360d6a2f infra: scheduled test fix (#28519) 2024-12-04 15:20:56 -08:00
ccurme
8bc2c912b8 chroma[patch]: (nit) simplify test (#28517)
Use `self.get_embeddings` on test class instead of importing embeddings
separately.
2024-12-04 20:22:55 +00:00
Tomaz Bratanic
a0130148bc switch graph semantic layer docs to langgraph (#28513)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-04 14:30:12 -05:00
ccurme
eec55c2550 chroma[patch]: add get_by_ids and fix bug (#28516)
- Run standard integration tests in Chroma
- Add `get_by_ids` method
- Fix bug in `add_texts`: if a list of `ids` is passed but any of them
are None, Chroma will raise an exception. Here we assign a uuid.
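For illustration, a minimal sketch of the new behavior (fake embeddings stand in for a real model):

```python
from langchain_chroma import Chroma
from langchain_core.embeddings import DeterministicFakeEmbedding

store = Chroma(embedding_function=DeterministicFakeEmbedding(size=64))
store.add_texts(["alpha", "beta"], ids=["id-1", None])  # a None id now gets a uuid instead of raising
docs = store.get_by_ids(["id-1"])  # new method
```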
2024-12-04 14:00:36 -05:00
Erick Friis
12d74d5bef docs: single security doc (#28515) 2024-12-04 18:15:34 +00:00
Erick Friis
e6a08355a3 docs: more api ref links, add linting step to prevent more (#28495) 2024-12-04 04:19:42 +00:00
wlleiiwang
6151ea78d5 community: implement _select_relevance_score_fn for tencent vectordb (#28036)
Implement `_select_relevance_score_fn` for Tencent VectorDB and
fix the use of external embeddings for Tencent VectorDB.

Co-authored-by: wlleiiwang <wlleiiwang@tencent.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-04 03:03:00 +00:00
Asi Greenholts
d34bf78f3b community: BM25Retriever preservation of document id (#27019)
Currently this retriever discards document ids

---------

Co-authored-by: asi-cider <88270351+asi-cider@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-04 00:36:00 +00:00
Erick Friis
a009249369 infra: release rely on local built in testing (#28492) 2024-12-03 16:35:38 -08:00
peterdhp
bc5ec63d67 community : allow using apikey for PubMedAPIWrapper (#27246)
**Description**: 

> Without an API key, any site (IP address) posting more than 3 requests
per second to the E-utilities will receive an error message. By
including an API key, a site can post up to 10 requests per second by
default.

quoted from A General Introduction to the E-utilities, NCBI:
https://www.ncbi.nlm.nih.gov/books/NBK25497/

I have simply added an `api_key` parameter to the PubMedAPIWrapper that can
be used to increase the number of requests per second from 3 to 10.

**Twitter handle** : @KORmaori
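For illustration, a minimal sketch of the new parameter (the key value is a placeholder):

```python
from langchain_community.utilities.pubmed import PubMedAPIWrapper

wrapper = PubMedAPIWrapper(api_key="<ncbi-api-key>")  # raises the limit from 3 to 10 req/s
summary = wrapper.run("large language models in medicine")
```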

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 16:21:22 -08:00
Eric Pinzur
eff8a54756 langchain_chroma: added document.id support (#27995)
Description:
* Added internal `Document.id` support to Chroma VectorStore

Dependencies:
* https://github.com/langchain-ai/langchain/pull/27968 should be merged
first and this PR should be re-based on top of those changes.

Tests:
* Modified/Added tests for `Document.id` support. All tests are passing.


Note: I am not a member of the Chroma team.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-04 00:04:27 +00:00
William Smith
15e7353168 langchain_community: updated query constructor for Databricks Vector Search due to LangChainDeprecationWarning: filters was deprecated since langchain-community 0.2.11 and will be removed in 0.3. Please use filter instead. (#27974)
- **Description:** Updated the kwargs for the structured query from
filters to filter due to deprecation of 'filters' for Databricks Vector
Search. Also changed the error messages, as the allowed operators and
comparators are different, which can cause issues with functions such as
`get_query_constructor_prompt()`.

- **Issue:** Fixes the Key Error for filters due to deprecation in favor
for 'filter':

LangChainDeprecationWarning: DatabricksVectorSearch received a key
`filters` in search_kwargs. `filters` was deprecated since
langchain-community 0.2.11 and will be removed in 0.3. Please use
`filter` instead.

- **Dependencies:** N/A
- **Twitter handle:** N/A
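For illustration, the corrected kwarg (the vector store instance is a placeholder):

```python
vector_store = ...  # a DatabricksVectorSearch instance (placeholder)
retriever = vector_store.as_retriever(
    search_kwargs={"filter": {"genre": "drama"}}  # was: {"filters": ...}
)
```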

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 16:03:53 -08:00
miri-bar
6e607bb237 docs: langchain-ai21 update ai21 docs (#28076)
Update docs to match latest langchain-ai21 release.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 16:01:36 -08:00
Jan Heimes
ef365543cb community: add Needle retriever and document loader integration (#28157)
- [x] **PR title**: "community: add Needle retriever and document loader
integration"
- Where "package" is whichever of langchain, community, core, etc. is
being modified. Use "docs: ..." for purely docs changes, "infra: ..."
for CI changes.
  - Example: "community: add foobar LLM"

- [x] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** This PR adds a new integration for Needle, which
includes:
- **NeedleRetriever**: A retriever for fetching documents from Needle
collections.
- **NeedleLoader**: A document loader for managing and loading documents
into Needle collections.
      - Example notebooks demonstrating usage have been added in:
        - `docs/docs/integrations/retrievers/needle.ipynb`
        - `docs/docs/integrations/document_loaders/needle.ipynb`.
- **Dependencies:** The `needle-python` package is required as an
external dependency for accessing Needle's API. It has been added to the
extended testing dependencies list.
- **Twitter handle:** Feel free to mention me if this PR gets announced:
[needlexai](https://x.com/NeedlexAI).

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. Unit tests have been added for both `NeedleRetriever` and
`NeedleLoader` in `libs/community/tests/unit_tests`. These tests mock
API calls to avoid relying on network access.
2. Example notebooks have been added to `docs/docs/integrations/`,
showcasing both retriever and loader functionality.

- [x] **Lint and test**: Run `make format`, `make lint`, and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
  - `make format`: Passed
  - `make lint`: Passed
- `make test`: Passed (requires `needle-python` to be installed locally;
this package is not added to LangChain dependencies).

Additional guidelines:
- [x] Optional dependencies are imported only within functions.
- [x] No dependencies have been added to pyproject.toml files except for
those required for unit tests.
- [x] The PR does not touch more than one package.
- [x] Changes are fully backwards compatible.
- [x] Community additions are not re-imported into LangChain core.


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 22:06:25 +00:00
prakashshan50
b0a83071df Update graph_constructing.ipynb (#28489)
Used the llm_transformer_tuple instance to create graph documents and
assigned them to the graph_documents_filtered variable.

2024-12-03 16:16:55 -05:00
ccurme
ab831ce05c tests[patch]: populate API reference for chat models (#28487)
Populate API reference for test class properties and test methods for
chat models.

Also:
- Make `standard_chat_model_params` private.
- `pytest.skip` some tests that were previously passed if features are
not supported.
2024-12-03 15:24:54 -05:00
Erick Friis
50ddf13692 infra: configurable scheduled tests (#28486) 2024-12-03 12:06:29 -08:00
Erick Friis
a220ee56cd infra: add 20min timeout to ci steps (#28483) 2024-12-03 10:35:57 -08:00
Erick Friis
c74f34cb41 pinecone: release 0.2.1 (version sequence) (#28485) 2024-12-03 10:22:16 -08:00
Audrey Sage Lorberfeld
926e452f44 partners: update version header for Pinecone integration (#28481)
Just need to update the version header used with Pinecone in the
recently merged method (from [this
PR](https://github.com/langchain-ai/langchain/pull/28320/files#r1867820929)).

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 18:08:56 +00:00
Erick Friis
7315360907 openai: dont populate logit_bias if None (#28482) 2024-12-03 17:54:53 +00:00
Erick Friis
ff675c11f6 partners/pinecone: release 0.2.2 (#28466) 2024-12-03 06:49:35 +00:00
Audrey Sage Lorberfeld
6b7e93d4c7 pinecone: update pinecone client (#28320)
This PR updates the Pinecone client to `5.4.0`, as well as its
dependencies (`pinecone-plugin-inference` and
`pinecone-plugin-interface`).

Note: `pinecone-client` is now simply called `pinecone`.

**Question for reviewer(s):** should this PR also update the `pinecone`
dep in [the root dir's `poetry.lock`
file](https://github.com/langchain-ai/langchain/blob/master/poetry.lock#L6729)?
Was unsure. (I don't believe so b/c it seems pinned to a lower version
likely based on 3rd-party deps (e.g. Unstructured).)

--
TW: @audrey_sage_


---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1208693659122374
2024-12-02 22:47:09 -08:00
Erick Friis
000be1f32c tests: init retriever standard tests (#28459) 2024-12-02 23:36:09 +00:00
Erick Friis
42d40d694b partners/openai: release 0.2.11 (#28461) 2024-12-02 23:35:18 +00:00
Erick Friis
9f04416768 openai: set logit_bias to none instead of empty dict by default (#28460) 2024-12-02 15:30:32 -08:00
William FH
ecee41ab72 fix: Handle response metadata in merge_messages_runs (#28453) 2024-12-02 13:56:23 -08:00
lucasiscovici
60021e54b5 community: Add the additional kwarg 'context' for openai (#28351)
- **Description:**
Add the additional kwarg 'context' for openai into the
`convert_dict_to_message` and `convert_message_to_dict` functions.
2024-12-02 16:43:30 -05:00
ccurme
28487597b2 ollama[patch]: release 0.2.1 (#28458)
We inadvertently skipped 0.2.1, so release pipeline
[failed](https://github.com/langchain-ai/langchain/actions/runs/12126964367/job/33810204551).
2024-12-02 21:17:51 +00:00
ccurme
88d6d02b59 ollama[patch]: release 0.2.2 (#28456) 2024-12-02 14:57:30 -05:00
Ülgen Sarıkavak
c953f93c54 infra: Update Poetry version, to current latest (1.8.4) (#28194)
Update all Poetry versions to the current latest, 1.8.4.

I was checking how lock files are managed and found out that even though
the files are generated and updated with the current latest version of
Poetry, the version used in CI and the Dockerfile was outdated.

*
https://github.com/langchain-ai/langchain/pull/28061/files#diff-e00422d37a73d07c174e7838ad7c30f642d06305aff8f9d71e1e84c6897efbffL1
*
https://github.com/langchain-ai/langchain/pull/28070/files#diff-55267c883e58892916d5316bc029725fdeeba5a77e2557cf7667793823d9d9c6L1
*
https://github.com/langchain-ai/langchain/pull/27991/files#diff-9f96b8fd39133c3f1d737e013c9042b065b42ae04b3da76902304f30cec136d8R1


Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-02 19:01:13 +00:00
Prithvi Kannan
e5b4f9ad75 docs: Add ChatDatabricks to llm models (#28398)
Add `ChatDatabricks` to the list of LLM models options.


---------

Signed-off-by: Prithvi Kannan <prithvi.kannan@databricks.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-02 13:19:30 -05:00
Riccardo Cocetta
58d2bfe310 docs: fixed a variable name in embeddings section (#28434)
This is a simple change for a variable name in the Embeddings section of
this document:
https://python.langchain.com/docs/tutorials/retrievers/#embeddings .

The variable name `embeddings_model` seems to be wrong and doesn't match
what follows in the template: `embeddings`.
2024-12-02 12:29:06 -05:00
chistokir
e294e7159a Update mistralai.ipynb (#28440)
Just fixing a broken link


---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-02 17:05:28 +00:00
Bagatur
47433485e7 mistral[patch]: Release 0.2.3 (#28452) 2024-12-02 08:26:28 -08:00
Bagatur
49914e959a community[patch]: Release 0.3.9 (#28451) 2024-12-02 16:23:37 +00:00
ccurme
c2f1d022a2 mistral[patch]: ensure tool call IDs in tool messages are correctly formatted (#28422)
Fixes tests for cross-provider compatibility:
https://github.com/langchain-ai/langchain/actions/runs/12085358877/job/33702420504#step:10:376
2024-11-29 13:56:06 +00:00
Alex Thomas
2813e86407 docs: Adds the langchain-neo4j package to the API docs (#28386)
This PR adds the `langchain-neo4j` package to the `libs/packages.yml` so
the API docs can be built.
2024-11-27 12:41:12 -08:00
Bagatur
b7e10bb199 langchain[patch]: Release 0.3.9 (#28399) 2024-11-27 20:06:11 +00:00
ccurme
a8b21afc08 qdrant[patch]: run python 3.13 in CI (#28394) 2024-11-27 12:22:17 -05:00
ccurme
ee6fc3f3f6 nomic[patch]: run python 3.13 in CI (#28393) 2024-11-27 17:08:15 +00:00
Massimiliano Pronesti
83586661d6 partners[chroma]: add retrieval of embedding vectors (#28290)
This PR adds an additional method to `Chroma` to retrieve the embedding
vectors, besides the most relevant Documents. This is sometimes of use
when you need to run a postprocessing algorithm on the retrieved results
based on the vectors, which has been the case for me lately.

Example issue (discussion) requesting this change:
https://github.com/langchain-ai/langchain/discussions/20383

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-27 16:34:02 +00:00
ccurme
733a6ad328 mistral[patch]: run python 3.13 in CI (#28392) 2024-11-27 11:29:04 -05:00
ccurme
b9bf7fd797 couchbase[patch]: run python 3.13 in CI (#28391) 2024-11-27 11:28:21 -05:00
Greg Hinch
5141f25a20 community[patch]: support numpy2 (#28184)
Follows on from #27991, updates the langchain-community package to
support numpy 2 versions

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-27 11:10:58 -05:00
LuisMSotamba
0901f11b0f community: add truncation params when an openai assistant's run is created (#28158)
**Description:** When an OpenAI assistant is invoked, it creates a run
by default, allowing users to set only a few request fields. The
truncation strategy is set to auto, which includes previous messages in
the thread along with the current question until the context length is
reached. This causes token usage to grow incrementally:
consumed_tokens = previous_consumed_tokens + current_consumed_tokens.

This PR adds support for user-defined truncation strategies, giving
better control over token consumption.

**Issue:** High token consumption.
2024-11-27 10:53:53 -05:00
Pratool Bharti
c09000f20e Building RAG agents locally using open source LLMs on Intel CPU (#28302)
**Description:** Added a cookbook that showcases how to build a RAG agent
pipeline locally using open-source LLM and embedding models on an Intel
Xeon CPU. It uses the Llama 3.1:8B model from Ollama as the LLM and
nomic-embed-text-v1.5 from NomicEmbeddings for embeddings. The whole
experiment was developed and tested on an Intel 4th Gen Xeon Scalable CPU.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-27 15:40:09 +00:00
TheDannyG
607c60a594 partners/ollama: fix tool calling with nested schemas (#28225)
## Description

This PR addresses the following:

**Fixes Issue #25343:**
- Adds additional logic to parse shallowly nested JSON-encoded strings
in tool call arguments, allowing for proper parsing of responses like
that of Llama3.1 and 3.2 with nested schemas.
 
**Adds Integration Test for Fix:**
- Adds an Ollama-specific integration test to ensure the issue is
resolved and to prevent regressions in the future.

**Fixes Failing Integration Tests:**
- Fixes failing integration tests (even prior to changes) caused by
`llama3-groq-tool-use` model. Previously,
tests`test_structured_output_async` and
`test_structured_output_optional_param` failed due to the model not
issuing a tool call in the response. Resolved by switching to
`llama3.1`.

## Issue
Fixes #25343.

## Dependencies
No dependencies.

____

Done in collaboration with @ishaan-upadhyay @mirajismail @ZackSteine.
2024-11-27 10:32:02 -05:00
ccurme
bb83abd037 community[patch]: remove sqlalchemy cap (#28389) 2024-11-27 10:20:36 -05:00
ccurme
51e98a5548 docs[patch]: fix typo in embeddings tab (#28388) 2024-11-27 10:05:06 -05:00
ccurme
42b8ad067d chroma[patch]: test python 3.13 in CI (#28387) 2024-11-27 15:02:40 +00:00
Kunal Pathak
85b8cecb6f docs: fix typo in embedding vectors documentation (#28378)
**Description:** Fixed a grammatical error in the documentation section
on embedding vectors. Replaced "Embedding vectors can be comparing" with
"Embedding vectors can be compared."

**Issue:** N/A (This is a minor documentation fix with no linked issue.)

**Dependencies:** None.
2024-11-27 09:52:40 -05:00
William FH
585da22752 Init embeddings (#28370) 2024-11-27 08:25:10 +00:00
Bagatur
ffe7bd4832 langchain[patch]: init_chat_model provider in model string (#28367)
```python
llm = init_chat_model("openai:gpt-4o")
```
2024-11-27 00:20:25 -08:00
ccurme
8adc4a5bcc langchain[patch]: update deprecation message for agent classes and constructors (#28369) 2024-11-26 16:07:13 -05:00
Kiril Buga
ec205fcee0 Updated docs for the BM25 preprocessing function (#28101)
- [x] **PR title**: "docs: add explanation for preprocessing function"


- [x] **PR message**: 
- **Description:** Extending the BM25 description and demonstrating the
preprocessing function
    - **Dependencies:** nltk
    - **Twitter handle:** @kirilbuga

@efriis
@baskaryan
@vbarda
@ccurme

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-26 14:59:15 -05:00
Mohammad Mohtashim
06fafc6651 Community: Marqo Index Setting GET Request Updated according to 2.x API version while keeping backward compatibility for 1.5.x (#28342)
- **Description:** `add_texts` was using the Marqo client's `get_setting`
according to the 1.5.x API version. This PR updates `add_texts` to account
for the updated response payload in 2.x and later while maintaining
backward compatibility. I have also verified this was the only place where
the Marqo client was not accounting for the updated API version.
  - **Issue:** #28323

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-26 18:26:56 +00:00
willtai
7d95a10ada langchain: Fix Neo4jVector vector store reference from partner package for self query (#28292)
_This should only be merged once neo4j is included under libs/partners._

# **Description:**

Neo4jVector from langchain-community is being moved to langchain-neo4j:
[see
link](https://github.com/langchain-ai/langchain-neo4j/blob/main/libs/neo4j/langchain_neo4j/vectorstores/neo4j_vector.py#L436).

To solve the issue below, this PR adds an attempt to import
`Neo4jVector` from the partner package `langchain-neo4j`, similarly to
the other partner packages.

# **Issue:**
When initializing `SelfQueryRetriever`, the following error is raised:

```
ValueError: Self query retriever with Vector Store type <class 'langchain_neo4j.vectorstores.neo4j_vector.Neo4jVector'> not supported.
```

[See related
issue](https://github.com/langchain-ai/langchain/issues/19748).

# **Dependencies:**
- langchain-neo4j
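For illustration, a minimal sketch of the now-supported combination (the LLM, vector store, and field info are placeholders):

```python
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever

llm = ...           # any chat model (placeholder)
vector_store = ...  # a langchain_neo4j.Neo4jVector instance (placeholder)

retriever = SelfQueryRetriever.from_llm(
    llm=llm,
    vectorstore=vector_store,
    document_contents="Movie plots",
    metadata_field_info=[
        AttributeInfo(name="year", description="Release year", type="integer")
    ],
)
```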
2024-11-26 13:21:04 -05:00
ccurme
a1c90794e1 ollama[patch]: bump to 0.4.1 in lock file (#28365) 2024-11-26 18:19:31 +00:00
ccurme
74d9d2cba1 ollama[patch]: support ollama 0.4 (#28364)
v0.4 of the Python SDK is already installed via the lock file in CI, but
our current implementation is not compatible with it.

This also addresses an issue introduced in
https://github.com/langchain-ai/langchain/pull/28299. @RyanMagnuson
would you mind explaining the motivation for that change? From what I
can tell the Ollama SDK [does not support
kwargs](6c44bb2729/ollama/_client.py (L286)).
Previously, unsupported kwargs were ignored, but they currently raise
`TypeError`.

Some of LangChain's standard test suite expects `tool_choice` to be
supported, so here we catch it in `bind_tools` so it is ignored and not
passed through to the client.
2024-11-26 12:45:59 -05:00
Bagatur
e9c16552fa openai[patch]: bump core dep (#28361) 2024-11-26 08:37:05 -08:00
Bagatur
e7dc26aefb openai[patch]: Release 0.2.10 (#28360) 2024-11-26 08:30:29 -08:00
ccurme
42b18824c2 openai[patch]: use max_completion_tokens in place of max_tokens (#26917)
`max_tokens` is deprecated:
https://platform.openai.com/docs/api-reference/chat/create#chat-create-max_tokens
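For illustration, nothing changes on the caller's side; the cap is simply sent under the new field:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", max_tokens=100)
# the request payload now carries max_completion_tokens=100 instead of the deprecated max_tokens
```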

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-11-26 16:30:19 +00:00
Greg Hinch
869c8f5879 langchain[patch]: support numpy 2 (#28183)
Follows on from #27991, updates the langchain package to support numpy 2
versions

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-26 11:20:02 -05:00
ccurme
7b9a0d9ed8 docs: update tutorials (#28219) 2024-11-26 10:43:12 -05:00
ccurme
a97c53e7c2 docs[patch]: fix broken anchor link (#28358) 2024-11-26 10:12:44 -05:00
Richard Hao
c161f7d46f docs(create_sql_agent): fix reStructured Text Markup (#28356)
- **Description:** Lines of code must be indented beneath `..
code-block::` for proper formatting.
https://devguide.python.org/documentation/markup/#showing-code-examples
- **Issue:** The example code block on the `create_sql_agent` document
page is not properly rendered.

https://python.langchain.com/api_reference/community/agent_toolkits/langchain_community.agent_toolkits.sql.base.create_sql_agent.html#langchain_community.agent_toolkits.sql.base.create_sql_agent

<img width="933" alt="image"
src="https://github.com/user-attachments/assets/d764bcad-e412-408b-ab0b-9a78a11188ee">
2024-11-26 09:52:16 -05:00
Mohammad Mohtashim
195ae7baa3 Community: Adding citations in AIMessage for ChatPerplexity (#28321)
**Description**: Adding Citation in response payload of ChatPerplexity
**Issue**: #28108
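For illustration, a minimal sketch (the model name is a placeholder, and the exact field carrying citations is an assumption based on this PR):

```python
from langchain_community.chat_models import ChatPerplexity

llm = ChatPerplexity(model="llama-3.1-sonar-small-128k-online")  # placeholder model
msg = llm.invoke("Who founded LangChain?")
print(msg.additional_kwargs.get("citations"))  # assumed location of the citations
```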
2024-11-26 09:45:47 -05:00
Ikko Eltociear Ashimine
aa2c17b56c docs: update azure_openai_whisper_parser.ipynb (#28327)
conjuction -> conjunction
2024-11-25 15:59:40 -05:00
ccurme
a5374952f8 community[patch]: fix import in test (#28339)
Library name was updated after
https://github.com/langchain-ai/langchain/pull/27879 branched off
master.
2024-11-25 19:28:01 +00:00
Alex Thomas
5867f25ff3 community[patch]: Neo4j community deprecation (#28130)
Adds deprecation notices for Neo4j components moving to the
`langchain_neo4j` partner package.

- Adds deprecation warnings to all Neo4j-related classes and functions
that have been migrated to the new `langchain_neo4j` partner package
- Updates documentation to reference the new `langchain_neo4j` package
instead of `langchain_community`
2024-11-25 10:34:22 -08:00
Yan
c60695a1c7 community: fixed critical bugs at Writer provider (#27879) 2024-11-25 12:03:37 -05:00
Yelin Zhang
6ed2d387bb docs: fix GOOGLE_API_KEY typo (#28322)
fix small GOOGLE_API_KEY markdown formatting typo
2024-11-25 09:45:22 -05:00
ccurme
a83357dc5a community[patch]: release 0.3.8 (#28316) 2024-11-23 08:21:21 -05:00
ccurme
82bb0cdfff langchain[patch]: release 0.3.8 (#28315) 2024-11-23 13:02:10 +00:00
ccurme
f5f1149257 core[patch]: release 0.3.21 (#28314) 2024-11-23 12:46:56 +00:00
Erick Friis
7170a4e3e1 docs: standard test api link (#28309) 2024-11-23 04:02:56 +00:00
Eugene Yurtsev
563587e14f langchain[patch]: Compat with pydantic 2.10 (#28307)
pydantic compat 2.10 for langchain
2024-11-23 03:21:27 +00:00
Eugene Yurtsev
a813d11c14 core[patch]: Compat pydantic 2.10 (#28308)
pydantic 2.10 compat for langchain-core
2024-11-22 21:44:55 -05:00
ZhangShenao
ed84d48eef [Doc] Improvement: fix import statement for qdrant (#28286)
- fix import statement for qdrant
- issue: https://github.com/langchain-ai/langchain/issues/28012

2024-11-22 21:43:01 -05:00
Erick Friis
a3296479a0 docs: integration asyncio mode (#28306) 2024-11-23 02:18:53 +00:00
Erick Friis
39fd0fd196 infra: more rst (#28305) 2024-11-22 17:50:42 -08:00
ccurme
25a636c597 langchain[patch]: update deprecation message for MapReduceChain (#28304)
Link migration guide first.
2024-11-23 00:47:52 +00:00
Erick Friis
242e9fc865 infra: install standard tests in docs build (#28303) 2024-11-22 15:49:10 -08:00
ccurme
203d20caa5 community[patch]: fix errors introduced by pydantic 2.10 (#28297) 2024-11-22 17:50:13 -05:00
Erick Friis
aa7fa80e1e partners/ollama: release 0.2.2rc1 (#28300) 2024-11-22 22:25:05 +00:00
Erick Friis
7277794a59 ollama: include kwargs in requests (#28299)
courtesy of @ryanmagnuson
2024-11-22 14:15:42 -08:00
Pat Patterson
2ee37a1c7b community: list valid values for LanceDB constructor's mode argument (#28296)
**Description:**

Currently, the docstring for `LanceDB.__init__()` provides the default
value for `mode`, but not the list of valid values. This PR adds that
list to the docstring.

**Issue:**

N/A

**Dependencies:**

N/A

**Twitter handle:**

`@metadaddy`

2024-11-22 15:40:06 -05:00
ccurme
697dda5052 core[patch]: release 0.3.20 (#28293) 2024-11-22 14:04:29 -05:00
ccurme
a433039a56 core[patch]: support final AIMessage responses in tool_example_to_messages (#28267)
We have a test
[test_structured_few_shot_examples](ad4333ca03/libs/standard-tests/langchain_tests/integration_tests/chat_models.py (L546))
in standard integration tests that implements a version of tool-calling
few shot examples that works with ~all tested providers. The formulation
supported by ~all providers is: `human message, tool call, tool message,
AI response`.

Here we update
`langchain_core.utils.function_calling.tool_example_to_messages` to
support this formulation.

The `tool_example_to_messages` util is undocumented outside of our API
reference. IMO, if we are testing that this function works across all
providers, it can be helpful to feature it in our guides. The structured
few-shot examples we document at the moment require users to implement
this function and can be simplified.
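
A sketch of that formulation (the `ai_response` keyword is assumed from this change; the tool schema is illustrative):

```python
from pydantic import BaseModel

from langchain_core.utils.function_calling import tool_example_to_messages


class Multiply(BaseModel):
    """Multiply two integers."""

    a: int
    b: int


# human message -> tool call -> tool message -> AI response
messages = tool_example_to_messages(
    "What is 3 times 4?",
    [Multiply(a=3, b=4)],
    tool_outputs=["12"],
    ai_response="3 times 4 is 12.",
)
```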
2024-11-22 15:38:49 +00:00
Manuel
a5fcbe69eb docs: correct HuggingFaceEmbeddings documentation model param (#28269)
- **Description:** Corrected the parameter name in the
HuggingFaceEmbeddings documentation under integrations/text_embedding/
from model to model_name to align with the actual code usage in the
langchain_huggingface package.
- **Issue:** Fixes #28231
- **Dependencies:** None
2024-11-22 09:59:33 -05:00
Erick Friis
65deeddd5d docs: poetry publish 3 (#28280) 2024-11-22 05:14:28 +00:00
Erick Friis
29f8a79ebe groq,openai,mistralai: fix unit tests (#28279) 2024-11-22 04:54:01 +00:00
Erick Friis
9a717c9b32 docs: poetry publish 2 (#28277)
- **docs: poetry publish**
2024-11-21 20:49:38 -08:00
Erick Friis
4ccb3e64c7 cli: release 0.0.33 (#28278) 2024-11-21 20:13:37 -08:00
Prithvi Kannan
2917f8573f docs: Update langchain docs to new Databricks package (#28274)
Ctrl+F to find instances of `langchain-databricks` and replace with
`databricks-langchain`.


Signed-off-by: Prithvi Kannan <prithvi.kannan@databricks.com>
2024-11-21 20:03:28 -08:00
Erick Friis
49254cde70 docs: poetry publish (#28275) 2024-11-22 03:10:03 +00:00
Erick Friis
f173b72e35 api-docs: add standard tests package to build 2 (#28273) 2024-11-21 15:40:48 -08:00
Erick Friis
45402d1a20 api-docs: add standard tests package to build (#28272) 2024-11-21 15:35:47 -08:00
Erick Friis
b3ee1f8713 core: add space at end of error message link (#28270) 2024-11-21 22:19:59 +00:00
Erick Friis
5bc2df3060 standard-tests: troubleshooting docstrings (#28268) 2024-11-21 22:05:31 +00:00
Erick Friis
ad4333ca03 infra: disable vertex api build (#28266) 2024-11-21 10:37:17 -08:00
Erick Friis
69a706adff infra: fix api docs build (#28264) 2024-11-21 10:25:52 -08:00
ccurme
56499cf58b openai[patch]: unskip test and relax tolerance in embeddings comparison (#28262)
From what I can tell, the response from the SDK is not deterministic:
```python
import numpy as np
import openai

documents = ["disallowed special token '<|endoftext|>'"]
model = "text-embedding-ada-002"

direct_output_1 = (
    openai.OpenAI()
    .embeddings.create(input=documents, model=model)
    .data[0]
    .embedding
)

for i in range(10):
    direct_output_2 = (
        openai.OpenAI()
        .embeddings.create(input=documents, model=model)
        .data[0]
        .embedding
    )
    print(f"{i}: {np.isclose(direct_output_1, direct_output_2).all()}")
```
```
0: True
1: True
2: True
3: True
4: False
5: True
6: True
7: True
8: True
9: True
```

See related discussion here:
https://community.openai.com/t/can-text-embedding-ada-002-be-made-deterministic/318054

Found the same result using `"text-embedding-3-small"`.
2024-11-21 10:23:10 -08:00
Priyanshi Garg
f5f53d1101 community: fix compatibility issue in kinetica chat model integration for Pydantic 2 (#28252)
Fixed a compatibility issue in the `load_messages_from_context()`
function for the Kinetica chat model integration. The issue was caused
by stricter validation introduced in Pydantic 2.
2024-11-21 09:33:00 -05:00
Erick Friis
96c67230aa docs: standard test version badge (#28247) 2024-11-21 04:00:04 +00:00
Erick Friis
d1108607f4 multiple: push deprecation removals to 1.0 (#28236) 2024-11-20 19:56:29 -08:00
Erick Friis
4f76246cf2 standard-tests: release 0.3.4 (#28245) 2024-11-20 19:35:58 -08:00
Erick Friis
4bdf1d7d1a standard-tests: fix decorator init test (#28246) 2024-11-21 03:35:43 +00:00
Erick Friis
60e572f591 standard-tests: tool tests (#28244) 2024-11-20 19:26:16 -08:00
Erick Friis
35e6052df5 infra: remove stale dockerfiles from repo (#28243)
Deleting the following Docker artifacts from the monorepo. They aren't
currently usable because of old dependencies, and I'd rather avoid
people using them / having to maintain them.

- /docker
- this folder has a compose file that spins up postgres, pgvector
(separate from postgres and a very stale version), and a mongo instance
with a default user/password that we've gotten security pings about
before. Not worth having.
- it also spins up a custom dockerfile with ontotext/graphdb - not even
sure what that is
- /libs/langchain/dockerfile + dev.dockerfile
  - super old poetry version; doesn't implement the right thing anymore
- .github/workflows/_release_docker.yml, langchain_release_docker.yml
  - not used anymore; not worth having an alternate release path
2024-11-21 00:05:01 +00:00
Erick Friis
161ab736ce standard-tests: release 0.3.3 (#28242) 2024-11-20 23:47:02 +00:00
Erick Friis
8738973267 docs: vectorstore standard tests (#28241) 2024-11-20 23:38:08 +00:00
Eugene Yurtsev
2acc83f146 mistralai[patch]: 0.2.2 release (#28240)
mistralai 0.2.2 release
2024-11-20 22:18:15 +00:00
Eugene Yurtsev
1a66175e38 mistral[patch]: Propagate tool call id (#28238)
mistralai-large-2411 requires tool call id

Older models accept a tool call id if it's provided:

mistral-large-2407 
mistral-large-2402
2024-11-20 17:02:30 -05:00
shroominic
dee72c46c1 community: Outlines integration (#27449)
In collaboration with @rlouf I built an
[outlines](https://dottxt-ai.github.io/outlines/latest/) integration for
langchain!

I think this is really useful for doing any type of structured output
locally.
[Dottxt](https://dottxt.co) has spent a lot of work optimizing this
process at a lower level
([outlines-core](https://pypi.org/project/outlines-core/0.1.14/) written
in Rust), so I think this is a better alternative to all current
approaches in langchain for doing structured output.
It also implements the `.with_structured_output` method, so it should be
a drop-in replacement for a lot of applications.

The integration includes:
- **Outlines LLM class**
- **ChatOutlines class**
- **Tutorial Cookbooks**
- **Documentation Page**
- **Validation and error messages** 
- **Exposes Outlines Structured output features**
- **Support for multiple backends**
- **Integration and Unit Tests**

Dependencies: `outlines` + additional (depending on backend used)

I am not sure if the unit tests comply with all requirements; if not, I
suggest just removing them, since I don't see a useful way to do it
differently.

### Quick overview:

Screenshots in the PR (omitted here) show the ChatOutlines chat model
and structured output in action.

---------

Co-authored-by: Vadym Barda <vadym@langchain.dev>
2024-11-20 16:31:31 -05:00
Mikelarg
2901fa20cc community: Add deprecation warning for GigaChat integration in langchain-community (#28022)
- **Description:** We have released the
[langchain-gigachat](https://github.com/ai-forever/langchain-gigachat?tab=readme-ov-file)
package with a new GigaChat integration that supports function/tool
calling. This PR deprecates the legacy GigaChat class in the community
package.
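
A minimal migration sketch (assuming the class name carries over unchanged to the partner package; the credential value is a placeholder):

```python
# Deprecated: from langchain_community.chat_models import GigaChat
from langchain_gigachat import GigaChat  # new partner package with tool calling

llm = GigaChat(credentials="<your-credentials>", verify_ssl_certs=False)
```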

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-20 21:03:47 +00:00
Renzo-vS
567dc1e422 community: fix duplicate content (#28003)
Thank you for reading my first PR!

**Description:**
Deduplicate content in AzureSearch vectorstore.
Currently, by default, the content of the retrieval is placed both in
metadata and page_content of a Document.
This PR removes the content from metadata, and leaves it in
page_content.

**Issue:**
Previously, the content was popped from the result before metadata was
populated.
In #25828, the order was changed, which leads to a response with
duplicated content.
This was not the intention of that PR and seems undesirable.

Looking forward to seeing my contribution in the next version!

Cheers, 
Renzo
2024-11-20 12:49:03 -08:00
Jorge Piedrahita Ortiz
abaea28417 community: SambaNovaCloud tool calling and Structured output (#27967)
**Description:** Add tool calling and structured output support for
SambaNovaCloud chat models, docs included

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-20 19:12:08 +00:00
ccurme
cb32bab69d docs: update notebook env dependencies (#28221) 2024-11-20 14:10:42 -05:00
af su
7c7ee07d30 huggingface[fix]: HuggingFaceEndpointEmbeddings model parameter passing error when async embed (#27953)
This change refines the handling of _model_kwargs in POST requests.
Instead of nesting _model_kwargs as a dictionary under the parameters
key, it is now directly unpacked and merged into the request's JSON
payload. This ensures that the model parameters are passed correctly and
avoids unnecessary nesting. E.g.:

```python
import asyncio

from langchain_huggingface.embeddings import HuggingFaceEndpointEmbeddings

embedding_input = ["This input will get multiplied" * 10000]

embeddings = HuggingFaceEndpointEmbeddings(
    model="http://127.0.0.1:8081/embed",
    model_kwargs={"truncate": True},
)

# Truncated parameters in synchronized methods are handled correctly
embeddings.embed_documents(texts=embedding_input)
# The truncate parameter is not handled correctly in the asynchronous method,
# and 413 Request Entity Too Large is returned.
asyncio.run(embeddings.aembed_documents(texts=embedding_input))
```

Co-authored-by: af su <saf@zjuici.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-20 19:08:56 +00:00
Eric Pinzur
923ef85105 langchain_chroma: fixed integration tests (#27968)
Description:
* I'm planning to add `Document.id` support to the Chroma VectorStore,
but first I wanted to make sure all the integration tests were passing.
They weren't. This PR fixes the broken tests.
* I found 2 issues:
* This change (from a year ago, exactly :) ) for supporting multi-modal
embeddings:
https://docs.trychroma.com/deployment/migration#migration-to-0.4.16---november-7,-2023
* This change https://github.com/langchain-ai/langchain/pull/27827 due
to an update in the chroma client.
  
Also ran `format` and `lint` on the changes.

Note: I am not a member of the Chroma team.
2024-11-20 11:05:02 -08:00
CLOVA Studio 개발
218b4e073e community: fix some features on Naver ChatModel & embedding model (#28228)
# Description

- adds stopReason to response_metadata when calling stream and astream
- excludes NCP_APIGW_API_KEY from required-input validation
- removes the warning: Field "model_name" has conflict with protected
namespace "model_".

cc. @vbarda
2024-11-20 10:35:41 -08:00
Erick Friis
4da35623af docs: formatting fix (#28235) 2024-11-20 18:08:47 +00:00
Erick Friis
43e24cd4a1 docs, standard-tests: property tags, support tool decorator (#28234) 2024-11-20 17:19:03 +00:00
Soham Das
4027da1b6e docs: fix typo in migration guide: migrate_agent.ipynb (#28227)
PR Title: `docs: fix typo in migration guide`

PR Message:
- **Description**: This PR fixes a small typo in the "How to Migrate
from Legacy LangChain Agents to LangGraph" guide. "In this cases" -> "In
this case"
- **Issue**: N/A (no issue linked for this typo fix)
- **Dependencies**: None
- **Twitter handle**: N/A
2024-11-20 11:44:22 -05:00
Erick Friis
16918842bf docs: add conceptual testing docs (#28205) 2024-11-19 22:46:26 +00:00
Lance Martin
6bda89f9a1 Clarify bind tools takes a list (#28222)
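The point being clarified, as a minimal sketch (provider, model name, and tool are illustrative, not from the PR):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI  # illustrative provider


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


llm = ChatOpenAI(model="gpt-4o-mini")
# bind_tools takes a *list* of tools, even when binding a single tool.
llm_with_tools = llm.bind_tools([add])
```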
2024-11-19 12:59:10 -08:00
William FH
197b885911 [CLI] Relax constraints (#28218) 2024-11-19 09:31:56 -08:00
Eugene Yurtsev
5599a0a537 core[minor]: Add other langgraph packages to sys_info (#28190)
Add other langgraph packages to sys_info output
2024-11-19 09:20:25 -05:00
Erick Friis
97f752c92d docs: more standard test stubs (#28202) 2024-11-19 03:59:52 +00:00
Erick Friis
0a06732d3e docs: links in integration contrib (#28200) 2024-11-19 03:25:47 +00:00
Erick Friis
0dbaf05bb7 standard-tests: rename langchain_standard_tests to langchain_tests, release 0.3.2 (#28203) 2024-11-18 19:10:39 -08:00
Erick Friis
24eea2e398 infra: allow non-langchainai packages (#28199) 2024-11-19 01:43:08 +00:00
Erick Friis
d9d689572a openai: release 0.2.9, o1 streaming (#28197) 2024-11-18 23:54:38 +00:00
Erick Friis
cbeb8601d6 docs: efficient rebuild (#28195)
If you run `make build start` in one tab, then start editing files, you
can efficiently rebuild notebooks with `make generate-files md-sync
render`.
2024-11-18 22:09:16 +00:00
ccurme
018f4102f4 docs: fix embeddings tabs (#28193)
- Update fake embeddings to deterministic fake embeddings
- Fix indentation
2024-11-18 16:00:20 -05:00
Mahdi Massahi
6dfea7e508 docs: fixed a typo (#28191)
**Description**: removed the redundant phrase (typo)
2024-11-18 15:46:47 -05:00
Eugene Yurtsev
3a63055ce2 docs[patch]: Add missing link to streaming concepts page (#28189)
Add missing streaming concept
2024-11-18 14:35:10 -05:00
ccurme
a1db744b20 docs: add component tabs to integration landing pages (#28142)
- Add to embedding model tabs
- Add tabs for vector stores
- Add "hello world" examples in integration landing pages using tabs
2024-11-18 13:34:35 -05:00
Erick Friis
c26b3575f8 docs: community integration guide clarification (#28186) 2024-11-18 17:58:07 +00:00
Erick Friis
093f24ba4d docs: standard test update (#28185) 2024-11-18 17:49:21 +00:00
Talha Munir
0c051e57e0 docs: fix grammatical error in delegation to sync methods (#28165)
### **Description**  
Fixed a grammatical error in the documentation section about the
delegation to synchronous methods to improve readability and clarity.

### **Issue**  
No associated issue.

### **Dependencies**  
No additional dependencies required.

### **Twitter handle**  
N/A

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-18 16:27:30 +00:00
DreamOfStars
22a8652ecc langchain: add missing punctuation in react_single_input.py (#28161)
- [x] **PR title**: "langchain: add missing punctuation in
react_single_input.py"

- [x] **PR message**: 
- **Description:** Add missing single quote to line 12: "Invalid Format:
Missing 'Action:' after 'Thought:"
2024-11-18 09:38:48 -05:00
Eugene Yurtsev
76e210a349 docs: link to langgraph platform (#28150)
Link to langgraph platform
2024-11-16 22:37:58 -05:00
Eric Pinzur
0a57fc0016 community: OpenSearchVectorStore: use engine set at init() time by default (#28147)
Description:
* Updated the OpenSearchVectorStore to use the `engine` parameter
captured at `init()` time as the default when adding documents to the
store.

Formatted, Linted, and Tested.
2024-11-16 17:07:42 -05:00
Zapiron
e6fe8cc2fb docs: Fix wrong import of AttributeInfo (#28155)
Fix wrong import of `AttributeInfo` from
`langchain.chains.query_constructor.base` to
`langchain.chains.query_constructor.schema`
2024-11-16 16:59:35 -05:00
Zapiron
0b2bea4c0e docs: Resolve incorrect import for AttributeInfo (#28154)
`AttributeInfo` is incorrectly imported from
`langchain.chains.query_constructor.base` instead of
`langchain.chains.query_constructor.schema`
2024-11-16 16:57:55 -05:00
Alexey Morozov
3b602d0453 docs: Added missing installation for required packages in tutorial notebooks (#28156)
**Description:** Some of the required packages are missing from the
installation cell in tutorial notebooks, so I added the required
packages to the installation cell, or created one if it was not present
in the notebook at all.

Tested in colab: "Kernel" -> "Run all cells". All the notebooks under
`docs/tutorials` run as expected without `ModuleNotFoundError` error.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-16 21:51:30 +00:00
Zapiron
2de59d0b3e docs: Fixed mini typo (#28149)
Fix mini typo from objets to objects
2024-11-16 16:31:31 -05:00
Erick Friis
709c418022 docs: how to contribute integrations (#28143) 2024-11-15 14:52:17 -08:00
Erick Friis
683644320b docs: reorg sidebar (#27978) 2024-11-15 14:28:18 -08:00
Piyush Jain
c48fdbba6a docs: Moved AWS tab ahead in the list as per integration telemetry (#28144)
Moving ahead per integration telemetry
2024-11-15 22:24:27 +00:00
Erick Friis
364fd5e17f infra: release standard test case (#28140) 2024-11-15 11:58:28 -08:00
Erick Friis
6d2004ee7d multiple: langchain-standard-tests -> langchain-tests (#28139) 2024-11-15 11:32:04 -08:00
Erick Friis
409c7946ac docs, standard-tests: how to standard test a custom tool, imports (#27931) 2024-11-15 10:49:14 -08:00
alex shengzhi li
39fcb476fd community: add reka chat model integration (#27379) 2024-11-15 13:37:14 -05:00
Erick Friis
d3252b7417 core: release 0.3.19 (#28137) 2024-11-15 18:15:28 +00:00
ccurme
585479e1ff docs: add legacy LLM page to concepts index (#28135)
This page was previously not discoverable.
2024-11-15 13:06:48 -05:00
Jorge Piedrahita Ortiz
39956a3ef0 community: sambanovacloud llm integration (#27526)
- **Description:** SambaNovaCloud LLM integration added; previously only
the chat model integration existed

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-15 16:58:11 +00:00
Elham Badri
d696728278 partners/ollama: Enabled Token Level Streaming when Using Bind Tools for ChatOllama (#27689)
**Description:** The issue concerns unexpected behavior observed when
using the bind_tools method in LangChain's ChatOllama. When tools are
not bound, the llm.stream() method works as expected, returning
incremental chunks of content, which is crucial for real-time
applications such as conversational agents and live feedback systems.
However, when bind_tools([]) is used, the streaming behavior changes,
causing the output to be delivered in full chunks rather than
incrementally. This change negatively impacts the user experience by
breaking the real-time nature of the streaming mechanism.
**Issue:** #26971
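
A sketch of the scenario this fixes (the model name is illustrative):

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")  # illustrative local model
llm_with_tools = llm.bind_tools([])  # previously disabled token-level streaming

for chunk in llm_with_tools.stream("Tell me a short story."):
    print(chunk.content, end="", flush=True)  # now streams incrementally again
```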

---------

Co-authored-by: 4meyDam1e <amey.damle@mail.utoronto.ca>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-15 11:36:27 -05:00
ccurme
776e3271e3 standard-tests[patch]: add test for async tool calling (#28133) 2024-11-15 16:09:50 +00:00
Vadym Barda
ed4952e475 core[patch]: add caching to get_function_nonlocals (#28131) 2024-11-15 07:53:53 -08:00
ccurme
74438f3ae8 docs: add links to concept guides in how-tos (#28118) 2024-11-15 09:44:11 -05:00
ccurme
ef2dc9eae5 docs: update "quickstart" tutorial (#28096)
- Update language / add links in places
- De-emphasize output parsers
- remove deployment section
2024-11-14 14:38:45 -05:00
ccurme
f1222739f8 core[patch]: support numpy 2 (#27991) 2024-11-14 13:08:57 -05:00
Zapiron
cff70c2d67 docs: Add hyperlink to immediately show the table at the bottom of th… (#28102)
Added a hyperlink which can be clicked so users can immediately see the
table and find out the various example selector methods
2024-11-14 09:52:18 -05:00
Zapiron
4b641f87ae English Update and fixed a duplicate "the" (#27981)
Fixed a duplicate "the" in the documentation and made the documentation
generally easier to understand
2024-11-13 14:36:56 -05:00
Erick Friis
f6d34585f0 docs: throw on broken anchors (#27773)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-11-13 14:29:27 -05:00
Zapiron
7bd9c8cba3 docs: Updated link to ensure reference to the correct header for ToolNode (#28088)
When `ToolNode` hyperlink is clicked, it does not automatically scroll
to the section due to incorrect reference to the heading / id in the
LangGraph documentation
2024-11-13 14:19:55 -05:00
ccurme
940e93e891 docs: add docs on StrOutputParser (#28089)
Think it's worth adding a quick guide and including in the table in the
concepts page. `StrOutputParser` can make it easier to deal with the
union type for message content. For example, ChatAnthropic with bound
tools will generate string content if there are no tool calls and
`list[dict]` content otherwise.

I'm also considering removing the output parser section from the
["quickstart"
tutorial](https://python.langchain.com/docs/tutorials/llm_chain/); we
can link to this guide instead.
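
A sketch of the pattern the guide documents (the model name is illustrative):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")  # illustrative model
chain = llm | StrOutputParser()
text: str = chain.invoke("Say hello")  # content normalized to a plain string
```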
2024-11-13 14:16:50 -05:00
Vadym Barda
6ec688cf2b xai[patch]: update core (#28092) 2024-11-13 17:51:51 +00:00
Artur Barseghyan
2ab5673eb1 docs: Add example using TypedDict in structured outputs how-to guide (#27415)
For me, the [Pydantic
example](https://python.langchain.com/docs/how_to/structured_output/#choosing-between-multiple-schemas)
does not work (tested on various Python versions from 3.10 to 3.12, and
`Pydantic` versions from 2.7 to 2.9).

The `TypedDict` example (added in this PR) does.
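
A sketch of the TypedDict pattern the PR adds (schema and provider are illustrative):

```python
from typing_extensions import Annotated, TypedDict

from langchain_openai import ChatOpenAI  # illustrative provider


class Person(TypedDict):
    """Information about a person."""

    name: Annotated[str, ..., "The person's name"]
    age: Annotated[int, ..., "The person's age"]


llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(Person)
result = structured_llm.invoke("Anna is 23 years old and she is 6 feet tall")
```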

----

Additionally, fixed an error in [Using PydanticOutputParser
example](https://python.langchain.com/docs/how_to/structured_output/#using-pydanticoutputparser).

Was:

```python
query = "Anna is 23 years old and she is 6 feet tall"

print(prompt.invoke(query).to_string())
```

Corrected to:

```python
query = "Anna is 23 years old and she is 6 feet tall"

print(prompt.invoke({"query": query}).to_string())
```

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-11-13 16:53:37 +00:00
Bharat Ramanathan
3e972faf81 community: chore warn deprecate the tracer (#27159)
- **Description:** This PR deprecates the wandb tracer in favor of the
new
[WeaveTracer](https://weave-docs.wandb.ai/guides/integrations/langchain#using-weavetracer)
in W&B
- **Dependencies:** No dependencies, just a deprecation warning.
- **Twitter handle:** @parambharat


@baskaryan
2024-11-13 11:33:34 -05:00
Erick Friis
76e0127539 core: release 0.3.18 (#28070) 2024-11-13 16:19:13 +00:00
Eric Pinzur
eadc2f6a90 core: added DeleteResponse to the module (#28069)
Description:
* added `DeleteResponse` to the `langchain_core.indexing` module, for
implementing DocumentIndex classes.
2024-11-13 11:08:08 -05:00
ZhangShenao
c89e7ce8b5 core[patch]: Update doc-strings in callbacks (#28073)
- Fix api docs
2024-11-13 11:07:15 -05:00
Tom Pham
965286db3e docs: fix spelling error (#28075)
Fix spelling error in docs
2024-11-13 11:06:13 -05:00
Zapiron
892694d735 docs: Fixed broken link for AI models introduction (#28079)
Fixed broken redirect to the introduction to AI models in the Forefront
platform
2024-11-13 11:03:40 -05:00
Vruddhi Shah
beef4c4d62 Proofreading and Editing Report for Migration Guide (#28084)
Corrections and Suggestions for Migrating LangChain Code Documentation

2024-11-13 11:03:09 -05:00
Zapiron
2cec957274 docs: Fix missing space between the words API Reference (#28087)
Added the expected space between the words in "APIReference"
2024-11-13 11:02:46 -05:00
Zapiron
da7c79b794 DOCS: Concept Section Improvements & Updates (#27733)
Edited mainly the `Concepts` section in the LangChain documentation.

Overview:
* Updated some explanations to make the points clearer / added missing
words in some documentation.
* Rephrased some sentences to make them shorter and more concise.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-11-13 11:01:27 -05:00
Zapiron
02de346f6d docs: Fixed additional 'the' and remove 'turns' to make explanation clearer (#28082)
Fixed an additional 'the' and removed the word 'turns' to make the
explanation clearer

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-13 15:15:39 +00:00
Zapiron
298ebeee4e docs: Fixed broken link for Cloudflare docs for the models available (#28080)
Fixed the broken redirect to see all the Cloudflare models
2024-11-13 10:07:33 -05:00
Zapiron
8241c0df23 docs: Fixed wrong link redirect from JS ToolMessage to Python ToolMes… (#28083)
Fixed the link to ToolMessage from the JS documentation to Python
documentation
2024-11-13 10:05:19 -05:00
Zapiron
77c8a5c70c docs: Fixed broken link to the Luminous model family introduction (#28078)
The Luminous model family hyperlink at the start of the page is broken.
Updated it with the latest link used by the integration.
2024-11-13 10:04:50 -05:00
Vadym Barda
09e85c7c4b xai[patch]: update dependencies (#28067) 2024-11-12 16:15:17 -05:00
am-kinetica
a646f1c383 Handled empty search result handling and updated the notebook (#27914)
- [ ] **PR title**: "community: updated Kinetica vectorstore"

  - **Description:** Handled empty search results
  - **Issue:** used to throw an error if the search results were empty

@efriis
2024-11-12 13:03:49 -08:00
ccurme
00e7b2dada anthropic[patch]: add examples to API ref (#28065) 2024-11-12 20:17:02 +00:00
Vadym Barda
48ee322a78 partners: add xAI chat integration (#28032) 2024-11-12 15:11:29 -05:00
ccurme
2898b95ca7 anthropic[major]: release 0.3.0 (#28063) 2024-11-12 14:58:00 -05:00
ccurme
5eaa0e8c45 openai[patch]: release 0.2.8 (#28062) 2024-11-12 14:57:11 -05:00
ccurme
15b7dd3ad7 community[patch]: release 0.3.7 (#28061) 2024-11-12 19:54:58 +00:00
ccurme
5460096086 core[patch]: release 0.3.17 (#28060) 2024-11-12 19:38:56 +00:00
ccurme
1538ee17f9 anthropic[major]: support python 3.13 (#27916)
Last week Anthropic released version 0.39.0 of its python sdk, which
enabled support for Python 3.13. This release deleted a legacy
`client.count_tokens` method, which we currently access during init of
the `Anthropic` LLM. Anthropic has replaced this functionality with the
[client.beta.messages.count_tokens()
API](https://github.com/anthropics/anthropic-sdk-python/pull/726).

To enable support for `anthropic >= 0.39.0` and Python 3.13, here we
drop support for the legacy token counting method, and add support for
the new method via `ChatAnthropic.get_num_tokens_from_messages`.

To fully support the token counting API, we update the signature of
`get_num_tokens_from_messages` to accept tools everywhere.
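
A sketch of the new counting path (the model name is illustrative):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

llm = ChatAnthropic(model="claude-3-5-haiku-20241022")  # illustrative model
# Now backed by Anthropic's client.beta.messages.count_tokens() API:
n_tokens = llm.get_num_tokens_from_messages([HumanMessage("What's the weather in SF?")])
```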

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-11-12 14:31:07 -05:00
Syed Hyder Zaidi
759b6ed17a docs: Fix typo in Tavily Search example (#28034)
Changed "demon" to "demo" in the code comment for clarity.

2024-11-12 13:58:13 -05:00
ZhangShenao
ca7375ac20 Improvement[Community] Improve Embeddings API (#28038)
- Fix `BaichuanTextEmbeddings` api url
- Remove unused params in api doc
- Fix word spelling
2024-11-12 13:57:35 -05:00
Aditya Anand
e290736696 Update streaming.mdx (#28055)
fix: correct grammar in documentation for streaming modes

Updated sentence to clarify usage of "choose" in "When using the stream
and astream methods with LangGraph, you can choose one or more streaming
modes..." for better readability.

2024-11-12 16:43:12 +00:00
Aditya Anand
f9212c77e7 DOC: Fix typo in documentation for streaming modes, correcting 'witte… (#28052)
…n' to 'written' in 'Emit custom output written using LangGraph’s
StreamWriter.'

### Changes:
- Corrected the typo in the phrase 'Emit custom output witten using
LangGraph’s StreamWriter.' to 'Emit custom output written using
LangGraph’s StreamWriter.'
- Enhanced the clarity of the documentation surrounding LangGraph’s
streaming modes, specifically around the StreamWriter functionality.
- Provided additional context and emphasis on the role of the
StreamWriter class in handling custom output.

### Issue Reference:
- GitHub issue: https://github.com/langchain-ai/langchain/issues/28051

This update addresses the issue raised regarding the incorrect spelling
and aims to improve the clarity of the streaming mode documentation for
better user understanding.

2024-11-12 11:42:30 -05:00
Bagatur
139881b108 openai[patch]: fix azure oai stream check (#28048) 2024-11-12 15:42:06 +00:00
Bagatur
9611f0b55d openai[patch]: Release 0.2.7 (#28047) 2024-11-12 15:16:15 +00:00
Bagatur
5c14e1f935 community[patch]: Release 0.3.6 (#28046) 2024-11-12 15:15:07 +00:00
Bagatur
9ebd7ebed8 core[patch]: Release 0.3.16 (#28045) 2024-11-12 14:57:15 +00:00
Changyong Um
9484cc0962 community[docs]: modify parameter for the LoRA adapter on the vllm page (#27930)
**Description:** 
This PR modifies the documentation regarding the configuration of VLLM
with the LoRA adapter. The updates aim to provide clear instructions for
users on how to set up the LoRA adapter when using VLLM.

- before
```python
VLLM(..., enable_lora=True)
```
- after
```python
VLLM(..., 
    vllm_kwargs={
        "enable_lora": True
    }
)
```
This change clarifies that users should use `vllm_kwargs` to enable the
LoRA adapter.

Co-authored-by: Um Changyong <changyong.um@sfa.co.kr>
2024-11-11 15:41:56 -05:00
Zapiron
0b85f9035b docs: Makes the phrasing more smooth and reasoning more clear (#28020)
Updated the phrasing and reasoning on the "abstraction not receiving
much development" part of the documentation

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-11 17:17:29 +00:00
Zapiron
f695b96484 docs: Fixed missing hyperlink and changed AI to LLMs for clarity (#28006)
Changed "AI" to "LLM" in a paragraph
Fixed missing hyperlink for the structured output point
2024-11-11 12:14:29 -05:00
Choy Fuguan
c0f3777657 docs: removed bolding from header (#28001)
removed extra ** after heading two
2024-11-11 12:13:02 -05:00
Salman Faroz
44df79cf52 Correcting AzureOpenAI initialization (#28014) 2024-11-11 12:10:59 -05:00
Hammad Randhawa
57fc62323a docs: Update sql_qa.ipynb (#28026)
Text Documentation Bug:

Changed DSL query to SQL query.
2024-11-11 12:04:09 -05:00
ccurme
922b6b0e46 docs: update some cassettes (#28010) 2024-11-09 21:04:18 +00:00
ccurme
8e91c7ceec docs: add cross-links (#28000)
Mainly to improve visibility of integration pages.
2024-11-09 08:57:58 -05:00
Bagatur
33dbfba08b openai[patch]: default to invoke on o1 stream() (#27983) 2024-11-08 19:12:59 -08:00
Bagatur
503f2487a5 docs: intro nit (#27998) 2024-11-08 11:51:17 -08:00
ccurme
ff2152b115 docs: update tutorials index and add get started guides (#27996) 2024-11-08 14:47:32 -05:00
Eric Pinzur
c421997caa community[patch]: Added type hinting to OpenSearch clients (#27946)
Description:
* When working with OpenSearchVectorSearch to make
OpenSearchGraphVectorStore (coming soon), I noticed that there wasn't
type hinting for the underlying OpenSearch clients. This fixes that
issue.
* Confirmed tests are still passing with code changes.

Note that there is some additional code duplication now, but I think
this approach is cleaner overall.
2024-11-08 11:04:57 -08:00
Zapiron
4c2392e55c docs: fix link in custom tools guide (#27975)
Fixed broken link in tools documentation for `BaseTool`
2024-11-08 09:40:15 -05:00
Zapiron
85925e3164 docs: fix link in tool-calling guide (#27976)
Fix broken BaseTool link in documentation
2024-11-08 09:39:27 -05:00
Zapiron
138f360b25 docs: fix typo in PDF loader guide (#27977)
Fixed duplicate "py" in hyperlink to `pypdf` docs
2024-11-08 09:38:32 -05:00
Saad Makrod
b509747c7f Community: Google Books API Tool (#27307)
## Description

As proposed in our earlier discussion #26977, we have introduced a Google
Books API Tool that leverages the Google Books API found at
[https://developers.google.com/books/docs/v1/using](https://developers.google.com/books/docs/v1/using)
to generate book recommendations.

### Sample Usage

```python
from langchain_community.tools import GoogleBooksQueryRun
from langchain_community.utilities import GoogleBooksAPIWrapper

api_wrapper = GoogleBooksAPIWrapper()
tool = GoogleBooksQueryRun(api_wrapper=api_wrapper)

tool.run('ai')
```

### Sample Output

```txt
Here are 5 suggestions based off your search for books related to ai:

1. "AI's Take on the Stigma Against AI-Generated Content" by Sandy Y. Greenleaf: In a world where artificial intelligence (AI) is rapidly advancing and transforming various industries, a new form of content creation has emerged: AI-generated content. However, despite its potential to revolutionize the way we produce and consume information, AI-generated content often faces a significant stigma. "AI's Take on the Stigma Against AI-Generated Content" is a groundbreaking book that delves into the heart of this issue, exploring the reasons behind the stigma and offering a fresh, unbiased perspective on the topic. Written from the unique viewpoint of an AI, this book provides readers with a comprehensive understanding of the challenges and opportunities surrounding AI-generated content. Through engaging narratives, thought-provoking insights, and real-world examples, this book challenges readers to reconsider their preconceptions about AI-generated content. It explores the potential benefits of embracing this technology, such as increased efficiency, creativity, and accessibility, while also addressing the concerns and drawbacks that contribute to the stigma. As you journey through the pages of this book, you'll gain a deeper understanding of the complex relationship between humans and AI in the realm of content creation. You'll discover how AI can be used as a tool to enhance human creativity, rather than replace it, and how collaboration between humans and machines can lead to unprecedented levels of innovation. Whether you're a content creator, marketer, business owner, or simply someone curious about the future of AI and its impact on our society, "AI's Take on the Stigma Against AI-Generated Content" is an essential read. With its engaging writing style, well-researched insights, and practical strategies for navigating this new landscape, this book will leave you equipped with the knowledge and tools needed to embrace the AI revolution and harness its potential for success. Prepare to have your assumptions challenged, your mind expanded, and your perspective on AI-generated content forever changed. Get ready to embark on a captivating journey that will redefine the way you think about the future of content creation.
Read more at https://play.google.com/store/books/details?id=4iH-EAAAQBAJ&source=gbs_api

2. "AI Strategies For Web Development" by Anderson Soares Furtado Oliveira: From fundamental to advanced strategies, unlock useful insights for creating innovative, user-centric websites while navigating the evolving landscape of AI ethics and security Key Features Explore AI's role in web development, from shaping projects to architecting solutions Master advanced AI strategies to build cutting-edge applications Anticipate future trends by exploring next-gen development environments, emerging interfaces, and security considerations in AI web development Purchase of the print or Kindle book includes a free PDF eBook Book Description If you're a web developer looking to leverage the power of AI in your projects, then this book is for you. Written by an AI and ML expert with more than 15 years of experience, AI Strategies for Web Development takes you on a transformative journey through the dynamic intersection of AI and web development, offering a hands-on learning experience.The first part of the book focuses on uncovering the profound impact of AI on web projects, exploring fundamental concepts, and navigating popular frameworks and tools. As you progress, you'll learn how to build smart AI applications with design intelligence, personalized user journeys, and coding assistants. Later, you'll explore how to future-proof your web development projects using advanced AI strategies and understand AI's impact on jobs. Toward the end, you'll immerse yourself in AI-augmented development, crafting intelligent web applications and navigating the ethical landscape.Packed with insights into next-gen development environments, AI-augmented practices, emerging realities, interfaces, and security governance, this web development book acts as your roadmap to staying ahead in the AI and web development domain. What you will learn Build AI-powered web projects with optimized models Personalize UX dynamically with AI, NLP, chatbots, and recommendations Explore AI coding assistants and other tools for advanced web development Craft data-driven, personalized experiences using pattern recognition Architect effective AI solutions while exploring the future of web development Build secure and ethical AI applications following TRiSM best practices Explore cutting-edge AI and web development trends Who this book is for This book is for web developers with experience in programming languages and an interest in keeping up with the latest trends in AI-powered web development. Full-stack, front-end, and back-end developers, UI/UX designers, software engineers, and web development enthusiasts will also find valuable information and practical guidelines for developing smarter websites with AI. To get the most out of this book, it is recommended that you have basic knowledge of programming languages such as HTML, CSS, and JavaScript, as well as a familiarity with machine learning concepts.
Read more at https://play.google.com/store/books/details?id=FzYZEQAAQBAJ&source=gbs_api

3. "Artificial Intelligence for Students" by Vibha Pandey: A multifaceted approach to develop an understanding of AI and its potential applications KEY FEATURES ● AI-informed focuses on AI foundation, applications, and methodologies. ● AI-inquired focuses on computational thinking and bias awareness. ● AI-innovate focuses on creative and critical thinking and the Capstone project. DESCRIPTION AI is a discipline in Computer Science that focuses on developing intelligent machines, machines that can learn and then teach themselves. If you are interested in AI, this book can definitely help you prepare for future careers in AI and related fields. The book is aligned with the CBSE course, which focuses on developing employability and vocational competencies of students in skill subjects. The book is an introduction to the basics of AI. It is divided into three parts – AI-informed, AI-inquired and AI-innovate. It will help you understand AI's implications on society and the world. You will also develop a deeper understanding of how it works and how it can be used to solve complex real-world problems. Additionally, the book will also focus on important skills such as problem scoping, goal setting, data analysis, and visualization, which are essential for success in AI projects. Lastly, you will learn how decision trees, neural networks, and other AI concepts are commonly used in real-world applications. By the end of the book, you will develop the skills and competencies required to pursue a career in AI. WHAT YOU WILL LEARN ● Get familiar with the basics of AI and Machine Learning. ● Understand how and where AI can be applied. ● Explore different applications of mathematical methods in AI. ● Get tips for improving your skills in Data Storytelling. ● Understand what is AI bias and how it can affect human rights. WHO THIS BOOK IS FOR This book is for CBSE class XI and XII students who want to learn and explore more about AI. Basic knowledge of Statistical concepts, Algebra, and Plotting of equations is a must. TABLE OF CONTENTS 1. Introduction: AI for Everyone 2. AI Applications and Methodologies 3. Mathematics in Artificial Intelligence 4. AI Values (Ethical Decision-Making) 5. Introduction to Storytelling 6. Critical and Creative Thinking 7. Data Analysis 8. Regression 9. Classification and Clustering 10. AI Values (Bias Awareness) 11. Capstone Project 12. Model Lifecycle (Knowledge) 13. Storytelling Through Data 14. AI Applications in Use in Real-World
Read more at https://play.google.com/store/books/details?id=ptq1EAAAQBAJ&source=gbs_api

4. "The AI Book" by Ivana Bartoletti, Anne Leslie and Shân M. Millie: Written by prominent thought leaders in the global fintech space, The AI Book aggregates diverse expertise into a single, informative volume and explains what artifical intelligence really means and how it can be used across financial services today. Key industry developments are explained in detail, and critical insights from cutting-edge practitioners offer first-hand information and lessons learned. Coverage includes: · Understanding the AI Portfolio: from machine learning to chatbots, to natural language processing (NLP); a deep dive into the Machine Intelligence Landscape; essentials on core technologies, rethinking enterprise, rethinking industries, rethinking humans; quantum computing and next-generation AI · AI experimentation and embedded usage, and the change in business model, value proposition, organisation, customer and co-worker experiences in today’s Financial Services Industry · The future state of financial services and capital markets – what’s next for the real-world implementation of AITech? · The innovating customer – users are not waiting for the financial services industry to work out how AI can re-shape their sector, profitability and competitiveness · Boardroom issues created and magnified by AI trends, including conduct, regulation & oversight in an algo-driven world, cybersecurity, diversity & inclusion, data privacy, the ‘unbundled corporation’ & the future of work, social responsibility, sustainability, and the new leadership imperatives · Ethical considerations of deploying Al solutions and why explainable Al is so important
Read more at http://books.google.ca/books?id=oE3YDwAAQBAJ&dq=ai&hl=&source=gbs_api

5. "Artificial Intelligence in Society" by OECD: The artificial intelligence (AI) landscape has evolved significantly from 1950 when Alan Turing first posed the question of whether machines can think. Today, AI is transforming societies and economies. It promises to generate productivity gains, improve well-being and help address global challenges, such as climate change, resource scarcity and health crises.
Read more at https://play.google.com/store/books/details?id=eRmdDwAAQBAJ&source=gbs_api
```

## Issue 

This closes #27276 

## Dependencies

No additional dependencies were added

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 15:29:35 -08:00
Massimiliano Pronesti
be3b7f9bae cookbook: add Anthropic's contextual retrieval (#27898)
Hi there, this PR adds a notebook implementing Anthropic's proposed
[Contextual
retrieval](https://www.anthropic.com/news/contextual-retrieval) to
langchain's cookbook.
2024-11-07 14:48:01 -08:00
Erick Friis
733e43eed0 docs: new stack diagram (#27972) 2024-11-07 22:46:56 +00:00
Erick Friis
a073c4c498 templates,docs: leave templates in v0.2 (#27952)
all template installs will now have to declare `--branch v0.2` to make
clear they aren't compatible with langchain 0.3 (most have a pydantic v1
setup). e.g.

```
langchain-cli app add pirate-speak --branch v0.2
```
2024-11-07 22:23:48 +00:00
Erick Friis
8807e6986c docs: ignore case production fork master (#27971) 2024-11-07 13:55:21 -08:00
Shawn Lee
6f368e9eab community: handle chatdeepinfra jsondecode error (#27603)
Fixes #27602 

Added error handling to return an empty dict if args is an empty string
or None.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 13:47:19 -08:00
CLOVA Studio 개발
0588bab33e community: fix ClovaXEmbeddings document API link address (#27957)
- **Description:** A 404 error occurs because the `API reference` link
path is incorrect in
`langchain/docs/docs/integrations/text_embedding/naver.ipynb`
- **Issue:** fix the `API reference` link to point to the correct path.

@vbarda @efriis
2024-11-07 13:46:01 -08:00
Akshata
05fd6a16a9 Add ChatModels wrapper for Cloudflare Workers AI (#27645)
- **Description:** Add chat models wrapper for Cloudflare Workers AI.
Enables LangGraph integration via ChatModel for tool usage and agentic
usage.



---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-07 15:34:24 -05:00
Erick Friis
8a5b9bf2ad box: migrate to repo (#27969) 2024-11-07 10:19:22 -08:00
ccurme
1ad49957f5 docs[patch]: update cassettes for sql/csv notebook (#27966) 2024-11-07 11:48:45 -05:00
ccurme
a747dbd24b anthropic[patch]: remove retired model from tests (#27965)
`claude-instant` was [retired
yesterday](https://docs.anthropic.com/en/docs/resources/model-deprecations).
2024-11-07 16:16:29 +00:00
Aksel Joonas Reedi
2cb39270ec community: bytes as a source to AzureAIDocumentIntelligenceLoader (#26618)
- **Description:** This PR adds functionality to pass in in-memory bytes
as a source to `AzureAIDocumentIntelligenceLoader`.
- **Issue:** I needed the functionality, so I added it.
- **Dependencies:** NA
- **Twitter handle:** @akseljoonas if this is a big enough change :)

---------

Co-authored-by: Aksel Joonas Reedi <aksel@klippa.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 03:40:21 +00:00
Martin Triska
7a9149f5dd community: ZeroxPDFLoader (#27800)
# OCR-based PDF loader

This implements the [Zerox](https://github.com/getomni-ai/zerox) PDF
document loader.
Zerox utilizes a simple but very powerful (even though slower and more
costly) approach to parsing PDF documents: it converts the PDF to a
series of images and passes them to a vision model, requesting the
contents in markdown.
It is especially suitable for complex PDFs that are not parsed well by
other alternatives.

## Example use:
```python
from langchain_community.document_loaders.pdf import ZeroxPDFLoader

os.environ["OPENAI_API_KEY"] = "" ## your-api-key

model = "gpt-4o-mini" ## openai model
pdf_url = "https://assets.ctfassets.net/f1df9zr7wr1a/soP1fjvG1Wu66HJhu3FBS/034d6ca48edb119ae77dec5ce01a8612/OpenAI_Sacra_Teardown.pdf"

loader = ZeroxPDFLoader(file_path=pdf_url, model=model)
docs = loader.load()
```

The Zerox library supports a wide range of providers/models. See the
Zerox documentation for details.

- **Dependencies:** `zerox`
- **Twitter handle:** @martintriska1


---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-11-07 03:14:57 +00:00
Dmitriy Prokopchuk
53b0a99f37 community: Memcached LLM Cache Integration (#27323)
## Description
This PR adds support for Memcached as a usable LLM model cache by adding
the ```MemcachedCache``` implementation relying on the
[pymemcache](https://github.com/pinterest/pymemcache) client.

Unit test-wise, the new integration is generally covered under existing
import testing. All new functionality depends on pymemcache if
instantiated and used, so to comply with the other cache implementations
the PR also adds optional integration tests for ```MemcachedCache```.

Since this is a new integration, documentation is added for Memcached as
an integration and as an LLM Cache.

## Issue
This PR closes #27275 which was originally raised as a discussion in
#27035

## Dependencies
There are no new required dependencies for langchain, but
[pymemcache](https://github.com/pinterest/pymemcache) is required to
instantiate the new ```MemcachedCache```.

## Example Usage
```python3
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI

from langchain_community.cache import MemcachedCache
from pymemcache.client.base import Client

llm = OpenAI(model="gpt-3.5-turbo-instruct", n=2, best_of=2)
set_llm_cache(MemcachedCache(Client('localhost')))

# The first time, it is not yet in cache, so it should take longer
llm.invoke("Which city is the most crowded city in the USA?")

# The second time it is, so it goes faster
llm.invoke("Which city is the most crowded city in the USA?")
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 03:07:59 +00:00
Siddharth Murching
cfff2a057e community: Update UC toolkit documentation to use LangGraph APIs (#26778)
- **Description:** Update UC toolkit documentation to show an example of
using recommended LangGraph agent APIs before the existing LangChain
AgentExecutor example. Tested by manually running the updated example
notebook
- **Dependencies:** No new dependencies

---------

Signed-off-by: Sid Murching <sid.murching@databricks.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 02:47:41 +00:00
ZhangShenao
c2072d909a Improvement[Partner] Improve qdrant vector store (#27251)
- Add static method decorator
- Add args for api doc
- Fix word spelling

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 02:42:41 +00:00
Baptiste Pasquier
81f7daa458 community: add InfinityRerank (#27043)
**Description:** 

- Add a Reranker for Infinity server.

**Dependencies:** 

This wrapper uses
[infinity_client](https://github.com/michaelfeil/infinity/tree/main/libs/client_infinity/infinity_client)
to connect to an Infinity server.

**Tests and docs**

- integration test: test_infinity_rerank.py
- example notebook: infinity_rerank.ipynb
[here](https://github.com/baptiste-pasquier/langchain/blob/feat/infinity-rerank/docs/docs/integrations/document_transformers/infinity_rerank.ipynb)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-06 17:26:30 -08:00
Erick Friis
2494deb2a4 infra: remove google creds from release and integration test workflows (#27950) 2024-11-07 00:31:10 +00:00
Martin Triska
90189f5639 community: Allow other than default parsers in SharePointLoader and OneDriveLoader (#27716)
## What this PR does?

### Currently `O365BaseLoader` (and consequently both derived loaders)
is limited to `pdf`, `doc`, `docx` files.
- **Solution: here we introduce a _handlers_ attribute that allows
custom handlers to be passed in. This is done in _dict_ form:**

**Example:**
```python
from langchain_community.document_loaders.parsers.documentloader_adapter import DocumentLoaderAsParser
# PR for DocumentLoaderAsParser here: https://github.com/langchain-ai/langchain/pull/27749
from langchain_community.document_loaders.excel import UnstructuredExcelLoader
from langchain_community.document_loaders.parsers.msword import MsWordParser
from langchain_community.document_loaders.parsers.pdf import PDFMinerParser
from langchain_community.document_loaders.parsers.txt import TextParser

xlsx_parser = DocumentLoaderAsParser(UnstructuredExcelLoader, mode="paged")

# create dictionary mapping file types to handlers (parsers)
handlers = {
    "doc": MsWordParser(),
    "pdf": PDFMinerParser(),
    "txt": TextParser(),
    "xlsx": xlsx_parser,
}
loader = SharePointLoader(document_library_id="...",
                            handlers=handlers # pass handlers to SharePointLoader
                            )
documents = loader.load()

# works the same in OneDriveLoader
loader = OneDriveLoader(document_library_id="...",
                            handlers=handlers
                            )
```
This dictionary is then passed to `MimeTypeBasedParser` same as in the
[current
implementation](5a2cfb49e0/libs/community/langchain_community/document_loaders/parsers/registry.py (L13)).


### Currently `SharePointLoader` and `OneDriveLoader` are separate
loaders that both inherit from `O365BaseLoader`
However both of these implement the same functionality. The only
differences are:
- `SharePointLoader` requires argument `document_library_id` whereas
`OneDriveLoader` requires `drive_id`. These are just different names for
the same thing.
  - `SharePointLoader` implements significantly more features.
- **Solution: `OneDriveLoader` is replaced with an empty shell just
renaming `drive_id` to `document_library_id` and inheriting from
`SharePointLoader`**

**Dependencies:** None
**Twitter handle:** @martintriska1

2024-11-06 17:44:34 -05:00
2024-11-06 17:44:34 -05:00
takahashi
482c168b3e langchain_core: add file_type option to make file type default as png (#27855)

**Description:**
`langchain_core.runnables.graph_mermaid.draw_mermaid_png` calls this
function, but the Mermaid API returns JPEG by default. To be consistent,
add the option `file_type` with the default `png` type.

**Add tests and docs:** With this small change, I didn't add tests and
docs.

**Lint and test:** One long sentence was divided into two.

2024-11-06 22:37:07 +00:00
Roman Solomatin
0f85dea8c8 langchain-huggingface: use separate kwargs for queries and docs (#27857)
Currently `encode_kwargs` is used both for documents and for queries,
and this leads to wrong embeddings. E.g.:
```python
    model_kwargs = {"device": "cuda", "trust_remote_code": True}
    encode_kwargs = {"normalize_embeddings": False, "prompt_name": "s2p_query"}

    model = HuggingFaceEmbeddings(
        model_name="dunzhang/stella_en_400M_v5",
        model_kwargs=model_kwargs,
        encode_kwargs=encode_kwargs,
    )

    query_embedding = np.array(
        model.embed_query("What are some ways to reduce stress?",)
    )
    document_embedding = np.array(
        model.embed_documents(
            [
                "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.",
                "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.",
            ]
        )
    )
    print(model._client.similarity(query_embedding, document_embedding)) # output: tensor([[0.8421, 0.3317]], dtype=torch.float64)
```
But per the [model
card](https://huggingface.co/dunzhang/stella_en_400M_v5#sentence-transformers),
the expected usage is like this:
```python
    model_kwargs = {"device": "cuda", "trust_remote_code": True}
    encode_kwargs = {"normalize_embeddings": False}
    query_encode_kwargs = {"normalize_embeddings": False, "prompt_name": "s2p_query"}

    model = HuggingFaceEmbeddings(
        model_name="dunzhang/stella_en_400M_v5",
        model_kwargs=model_kwargs,
        encode_kwargs=encode_kwargs,
        query_encode_kwargs=query_encode_kwargs,
    )

    query_embedding = np.array(
        model.embed_query("What are some ways to reduce stress?", )
    )
    document_embedding = np.array(
        model.embed_documents(
            [
                "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.",
                "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.",
            ]
        )
    )
    print(model._client.similarity(query_embedding, document_embedding)) # tensor([[0.8398, 0.2990]], dtype=torch.float64)
```
2024-11-06 17:35:39 -05:00
Bagatur
60123bef67 docs: fix trim_messages docstring (#27948) 2024-11-06 22:25:13 +00:00
murrlincoln
14f1827953 docs: Adding notebook for cdp agentkit toolkit (#27910)
- **Description:** Adding in the first pass of documentation for the CDP
Agentkit Toolkit
    - **Issue:** N/a
    - **Dependencies:** cdp-langchain
    - **Twitter handle:** @CoinbaseDev

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: John Peterson <john.peterson@coinbase.com>
2024-11-06 13:28:27 -08:00
Eric Pinzur
ea0ad917b0 community: added Document.id support to opensearch vectorstore (#27945)
Description:
* Added support for Document.id in the OpenSearch vector store
* Added test cases to match
2024-11-06 15:04:09 -05:00
Hammad Randhawa
75aa82fedc docs: Completed sentence under the heading "Instantiating a Browser Toolkit" in "playwright.ipynb" integration (#27944)

- Completed the incomplete sentence in the Langchain Playwright
documentation.

- Enhanced documentation clarity to guide users on best practices for
instantiating browser instances with Langchain Playwright.

Example before:
> "It's always recommended to instantiate using the from_browser method
so that the

Example after:
> "It's always recommended to instantiate using the `from_browser`
method so that the browser context is properly initialized and managed,
ensuring seamless interaction and resource optimization."

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-06 19:55:00 +00:00
Bagatur
67ce05a0a7 core[patch]: make oai tool description optional (#27756) 2024-11-06 18:06:47 +00:00
Bagatur
b2da3115ed docs: document init_chat_model standard params (#27812) 2024-11-06 09:50:07 -08:00
Dobiichi-Origami
395674d503 community: re-arrange function call message parse logic for Qianfan (#27935)
the [PR](https://github.com/langchain-ai/langchain/pull/26208) from two
months ago has a potential bug that causes `tool_call` to malfunction
for `QianfanChatEndpoint`; this fixes it
2024-11-06 09:58:16 -05:00
Erick Friis
41b7a5169d infra: starter codeowners file (#27929) 2024-11-05 16:43:11 -08:00
ccurme
66966a6e72 openai[patch]: release 0.2.6 (#27924)
Some additions in support of [predicted
outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs)
feature:
- Bump openai sdk version
- Add integration test
- Add example to integration docs

The `prediction` kwarg is already plumbed through model invocation.
2024-11-05 23:02:24 +00:00
Erick Friis
a8c473e114 standard-tests: ci pipeline (#27923) 2024-11-05 20:55:38 +00:00
Erick Friis
c3b75560dc infra: release note grep order of operations (#27922) 2024-11-05 12:44:36 -08:00
Erick Friis
b3c81356ca infra: release note compute 2 (#27921) 2024-11-05 12:04:41 -08:00
Erick Friis
bff2a8b772 standard-tests: add tools standard tests (#27899) 2024-11-05 11:44:34 -08:00
SHJUN
f6b2f82099 community: chroma error patch(attribute changed on chroma) (#27827)
The attribute "max_batch_size" was renamed: it is now the
"get_max_batch_size" method.
I want to use "create_batches", which is right below it.

Reference: https://github.com/chroma-core/chroma/pull/2305
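A hedged sketch of handling this rename (helper name hypothetical; based only on the attribute-to-method change described above):

```python
# Hypothetical compatibility helper: newer chromadb exposes a
# get_max_batch_size() method where older versions had an attribute.
def max_batch_size(client) -> int:
    if hasattr(client, "get_max_batch_size"):
        return client.get_max_batch_size()  # newer chromadb
    return client.max_batch_size  # older chromadb
```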

---------

Signed-off-by: Prithvi Kannan <prithvi.kannan@databricks.com>
Co-authored-by: Prithvi Kannan <46332835+prithvikannan@users.noreply.github.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Jun Yamog <jkyamog@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: ono-hiroki <86904208+ono-hiroki@users.noreply.github.com>
Co-authored-by: Dobiichi-Origami <56953648+Dobiichi-Origami@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Duy Huynh <vndee.huynh@gmail.com>
Co-authored-by: Rashmi Pawar <168514198+raspawar@users.noreply.github.com>
Co-authored-by: sifatj <26035630+sifatj@users.noreply.github.com>
Co-authored-by: Eric Pinzur <2641606+epinzur@users.noreply.github.com>
Co-authored-by: Daniel Vu Dao <danielvdao@users.noreply.github.com>
Co-authored-by: Ofer Mendelevitch <ofermend@gmail.com>
Co-authored-by: Stéphane Philippart <wildagsx@gmail.com>
2024-11-05 19:43:11 +00:00
Tomaz Bratanic
a3bbbe6a86 update llm graph transformer documentation (#27905) 2024-11-05 11:54:26 -05:00
Erick Friis
31f4fb790d standard-tests: release 0.3.0 (#27900) 2024-11-04 17:29:15 -08:00
Erick Friis
ba5cba04ff infra: get min versions (#27896) 2024-11-04 23:46:13 +00:00
Bagatur
6973f7214f docs: sidebar capitalization (#27894) 2024-11-04 22:09:32 +00:00
Stéphane Philippart
4b8cd7a09a community: Use new OVHcloud batch embedding (#26209)
- **Description:** change to perform the batch embedding server-side
instead of client-side
- **Twitter handle:** @wildagsx

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-04 16:40:30 -05:00
Erick Friis
a54f390090 infra: fix prev tag output (#27892) 2024-11-04 12:46:23 -08:00
Erick Friis
75f80c2910 infra: fix prev tag condition (#27891) 2024-11-04 12:42:22 -08:00
Ofer Mendelevitch
d7c39e6dbb community: update Vectara integration (#27869)
- **Description:** Updated Vectara integration
- **Issue:** refreshed descriptions across all demos and added the UDF
reranker
- **Dependencies:** None
- **Twitter handle:** @ofermend

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:40:39 +00:00
Erick Friis
14a71a6e77 infra: fix prev tag calculation (#27890) 2024-11-04 12:38:39 -08:00
Daniel Vu Dao
5745f3bf78 docs: Update messages.mdx (#27856)
### Description
Updates phrasing for the header of the `Messages` section.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:36:31 +00:00
sifatj
e02a5ee03e docs: Update VectorStore as_retriever method url in qa_chat_history_how_to.ipynb (#27844)
**Description**: Update VectorStore `as_retriever` method api reference
url in `qa_chat_history_how_to.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:34:50 +00:00
sifatj
dd1711f3c2 docs: Update max_marginal_relevance_search api reference url in multi_vector.ipynb (#27843)
**Description**: Update VectorStore `max_marginal_relevance_search` api
reference url in `multi_vector.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:31:36 +00:00
sifatj
aa1f46a03a docs: Update VectorStore .as_retriever method url in vectorstore_retriever.ipynb (#27842)
**Description**: Update VectorStore `.as_retriever` method url in
`vectorstore_retriever.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:28:11 +00:00
Eric Pinzur
8eb38622a6 community: fixed bug in GraphVectorStoreRetriever (#27846)
Description:

This fixes an issue that was mistakenly introduced in
https://github.com/langchain-ai/langchain/pull/27253. The issue
currently exists only in `langchain-community==0.3.4`.

Test cases were added to prevent this issue in the future.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:27:17 +00:00
sifatj
eecf95df9b docs: Update VectorStore api reference url in rag.ipynb (#27841)
**Description**: Update VectorStore api reference url in `rag.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:27:03 +00:00
sifatj
50563400fb docs: Update broken vectorstore urls in retrievers.ipynb (#27838)
**Description**: Update outdated `VectorStore` api reference urls in
`retrievers.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:26:03 +00:00
Bagatur
dfa83531ad qdrant,nomic[minor]: bump core deps (#27849) 2024-11-04 20:19:50 +00:00
Erick Friis
4e5cc84d40 infra: release tag compute (#27836) 2024-11-04 12:16:51 -08:00
Rashmi Pawar
f86a09f82c Add nvidia as provider for embedding, llm (#27810)
Documentation: Add NVIDIA as integration provider

cc: @mattf @dglogo

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 19:45:51 +00:00
Erick Friis
0c62684ce1 Revert "infra: add neo4j to package list" (#27887)
Reverts langchain-ai/langchain#27833

Wait for release
2024-11-04 18:18:38 +00:00
Erick Friis
bcf499df16 infra: add neo4j to package list (#27833) 2024-11-04 09:24:04 -08:00
Duy Huynh
a487ec47f4 community: set default output_token_limit value for PowerBIToolkit to fix validation error (#26308)
### Description:
This PR sets a default value of `output_token_limit = 4000` for the
`PowerBIToolkit` to fix an unintentional validation error.

### Problem:
When attempting to run a code snippet from [Langchain's PowerBI toolkit
documentation](https://python.langchain.com/v0.1/docs/integrations/toolkits/powerbi/)
to interact with a `PowerBIDataset`, the following error occurs:

```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for QueryPowerBITool
output_token_limit
  none is not an allowed value (type=type_error.none.not_allowed)
```

### Root Cause:
The issue arises because when creating a `QueryPowerBITool`, the
`output_token_limit` parameter is unintentionally set to `None`, which
is the current default for `PowerBIToolkit`. However, `QueryPowerBITool`
expects a default value of `4000` for `output_token_limit`. This
unintended override causes the error.


17659ca2cd/libs/community/langchain_community/agent_toolkits/powerbi/toolkit.py (L63)

17659ca2cd/libs/community/langchain_community/agent_toolkits/powerbi/toolkit.py (L72-L79)

17659ca2cd/libs/community/langchain_community/tools/powerbi/tool.py (L39)

### Solution:
To resolve this, the default value of `output_token_limit` is now
explicitly set to `4000` in `PowerBIToolkit` to prevent the accidental
assignment of `None`.
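
A minimal sketch of the pattern behind the fix (class name hypothetical; the toolkit itself uses pydantic's v1 compatibility layer):

```python
from pydantic import BaseModel

class ToolkitSketch(BaseModel):
    # Before: Optional[int] = None, which leaked None into QueryPowerBITool
    # and failed its validation. An explicit default avoids that.
    output_token_limit: int = 4000

print(ToolkitSketch().output_token_limit)  # 4000
```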

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-04 14:34:27 +00:00
Dobiichi-Origami
f7ced5b211 community: read function call from tool_calls for Qianfan (#26208)
I added one more `elif` branch to read the tool call message from `tool_calls`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-04 14:33:32 +00:00
ono-hiroki
b7d549ae88 docs: fix undefined 'data' variable in document_loader_csv.ipynb (#27872)
**Description:** 
This PR addresses an issue in the CSVLoader example where `data` is not
defined, causing a NameError. The line `data = loader.load()` is added
to correctly assign the output of `loader.load()` to the `data` variable.
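
For reference, the corrected pattern from the docs (file path hypothetical):

```python
from langchain_community.document_loaders.csv_loader import CSVLoader

loader = CSVLoader(file_path="example.csv")
data = loader.load()  # this assignment was missing, causing the NameError
print(len(data))
```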
2024-11-04 14:10:56 +00:00
Bagatur
3b0b7cfb74 chroma[minor]: release 0.2.0 (#27840) 2024-11-01 18:12:00 -07:00
Jun Yamog
830cad7bc0 core: fix CommaSeparatedListOutputParser to handle columns that may contain commas in it (#26365)
- **Description:**
Currently `CommaSeparatedListOutputParser` can't handle strings that
contain commas within a quoted column; it parses every comma as the
delimiter.
Ex.
"foo, foo2", "bar", "baz"

It will create 4 columns: "foo", "foo2", "bar", "baz"

This should be 3 columns:

"foo, foo2", "bar", "baz"

- **Dependencies:**
Added 2 additional imports, but they are built-in Python packages:

import csv
from io import StringIO

- **Twitter handle:** @jkyamog

- [ ] **Add tests and docs**: 
1. Added a simple unit test, `test_multiple_items_with_comma`.
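
A minimal sketch of the csv-based parsing this fix describes, using only the stdlib imports listed above (function name hypothetical):

```python
import csv
from io import StringIO

def parse_comma_separated(text: str) -> list[str]:
    # csv.reader respects quoting, so "foo, foo2" stays a single item
    reader = csv.reader(StringIO(text), delimiter=",", quotechar='"', skipinitialspace=True)
    return [item for row in reader for item in row]

print(parse_comma_separated('"foo, foo2", "bar", "baz"'))  # ['foo, foo2', 'bar', 'baz']
```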

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-11-01 22:42:24 +00:00
Erick Friis
9fedb04dd3 docs: INVALID_CHAT_HISTORY redirect (#27845) 2024-11-01 21:35:11 +00:00
Erick Friis
03a3670a5e infra: remove some special cases (#27839) 2024-11-01 21:13:43 +00:00
Bagatur
002e1c9055 airbyte: remove from master (#27837) 2024-11-01 13:59:34 -07:00
Bagatur
ee63d21915 many: use core 0.3.15 (#27834) 2024-11-01 20:35:55 +00:00
Prithvi Kannan
c3c638cd7b docs: Reference new databricks-langchain package (#27828)
Update the Databricks integration page to reference our new
partner package, databricks-langchain:
https://github.com/databricks/databricks-ai-bridge/tree/main/integrations/langchain


---------

Signed-off-by: Prithvi Kannan <prithvi.kannan@databricks.com>
2024-11-01 10:21:19 -07:00
sifatj
33d445550e docs: update VectorStore api reference url in retrievers.ipynb (#27814)
**Description:** Update outdated `VectorStore` api reference url in
Vector store subsection of `retrievers.ipynb`
2024-11-01 15:44:26 +00:00
sifatj
9a4a630e40 docs: Update Retrievers and Runnable links in Retrievers subsection of retrievers.ipynb (#27815)
**Description:** Update outdated links for `Retrievers` and `Runnable`
in Retrievers subsection of `retrievers.ipynb`
2024-11-01 15:42:30 +00:00
Zapiron
b0dfff4cd5 Fixed broken link for TokenTextSplitter (#27824)
Fixed the broken redirect link for `TokenTextSplitter` section
2024-11-01 11:32:07 -04:00
William FH
b4cb2089a2 langchain[patch]: Add warning in react agent (#26980) 2024-10-31 22:29:34 +00:00
Eugene Yurtsev
2f6254605d docs: fix more links (#27809)
Fix more broken links
2024-10-31 17:15:46 -04:00
Ant White
e3ea365725 core: use friendlier names for duplicated nodes in mermaid output (#27747)
- [x] **PR title**: "core: use friendlier names for duplicated nodes in
mermaid output"

- **Description:** When generating the Mermaid visualization of a chain,
if the chain had multiple nodes of the same type, the reid function
would replace their names with the UUID node_id. This made the generated
graph difficult to understand. This change deduplicates the nodes in a
chain by appending an index to their names.
- **Issue:** None
- **Discussion:**
https://github.com/langchain-ai/langchain/discussions/27714
- **Dependencies:** None

- [ ] **Add tests and docs**:  
- Currently this functionality is not covered by unit tests; happy to
add tests if you'd like


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


# Example Code:
```python
from langchain_core.runnables import RunnablePassthrough

def fake_llm(prompt: str) -> str: # Fake LLM for the example
    return "completion"

runnable = {
    'llm1':  fake_llm,
    'llm2':  fake_llm,
} | RunnablePassthrough.assign(
    total_chars=lambda inputs: len(inputs['llm1'] + inputs['llm2'])
)

print(runnable.get_graph().draw_mermaid(with_styles=False))
```

# Before
```mermaid
graph TD;
	Parallel_llm1_llm2_Input --> 0b01139db5ed4587ad37964e3a40c0ec;
	0b01139db5ed4587ad37964e3a40c0ec --> Parallel_llm1_llm2_Output;
	Parallel_llm1_llm2_Input --> a98d4b56bd294156a651230b9293347f;
	a98d4b56bd294156a651230b9293347f --> Parallel_llm1_llm2_Output;
	Parallel_total_chars_Input --> Lambda;
	Lambda --> Parallel_total_chars_Output;
	Parallel_total_chars_Input --> Passthrough;
	Passthrough --> Parallel_total_chars_Output;
	Parallel_llm1_llm2_Output --> Parallel_total_chars_Input;
```

# After
```mermaid
graph TD;
	Parallel_llm1_llm2_Input --> fake_llm_1;
	fake_llm_1 --> Parallel_llm1_llm2_Output;
	Parallel_llm1_llm2_Input --> fake_llm_2;
	fake_llm_2 --> Parallel_llm1_llm2_Output;
	Parallel_total_chars_Input --> Lambda;
	Lambda --> Parallel_total_chars_Output;
	Parallel_total_chars_Input --> Passthrough;
	Passthrough --> Parallel_total_chars_Output;
	Parallel_llm1_llm2_Output --> Parallel_total_chars_Input;
```
2024-10-31 16:52:00 -04:00
Eugene Yurtsev
71f590de50 docs: fix more broken links (#27806)
Fix some broken links
2024-10-31 19:46:39 +00:00
Neli Hateva
c572d663f9 docs: Ontotext GraphDB QA Chain Update Documentation (Fix versions of libraries) (#27783)
- **Description:** Update versions of libraries in the Ontotext GraphDB
QA Chain Documentation
 - **Issue:** N/A
 - **Dependencies:** N/A
 - **Twitter handle:** @OntotextGraphDB
2024-10-31 15:23:16 -04:00
L
8ef0df3539 feat: add batch request support for text-embedding-v3 model (#26375)
PR message:

- Description: This PR introduces batch request support for the
text-embedding-v3 model within LangChain. The new functionality allows
users to process multiple text inputs in a single request, improving
efficiency and performance for high-volume applications.
- Dependencies: No new external dependencies are required for this
change.

Add tests and docs:

1. Added unit tests to cover the batch request functionality, ensuring
it operates without requiring network access.
2. Included an example notebook demonstrating the batch request feature,
located in docs/docs/integrations.

Lint and test: All required formatting and linting checks have been
performed using make format and make lint. The changes have been
verified with make test to ensure compatibility.

Additional notes:

- The changes are fully backwards compatible.
- No modifications were made to pyproject.toml, ensuring no new
dependencies were added.
- The update only affects the langchain package and does not involve
other packages.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-10-31 18:56:22 +00:00
putao520
2545fbe709 fix "WARNING: Received notification from DBMS server: {severity: WARNING} {code: Neo.ClientNotification.Statement.FeatureDeprecationWarning} {category: DEPRECATION} {title: This feature is deprecated and will be removed in future versions.} {description: CALL subquery without a variable scope clause is now deprecated.}" warning (#27112)


Co-authored-by: putao520 <putao520@putao282.com>
2024-10-31 18:47:25 +00:00
Ankan Mahapatra
905f43377b Update word_document.py | Fixed metadata["source"] for web paths (#27220)
The metadata["source"] value for web paths was being set to the
temporary download path (/tmp).

Fixed by creating a new variable, self.original_file_path, which stores
the original path.
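
A hedged sketch of the idea; only `original_file_path` comes from the PR text, everything else is illustrative:

```python
import tempfile

class WebDocLoaderSketch:
    def __init__(self, file_path: str):
        self.original_file_path = file_path  # preserved for metadata["source"]
        if file_path.startswith(("http://", "https://")):
            # web paths get downloaded to a temp file; keep that path separate
            self.file_path = tempfile.NamedTemporaryFile(suffix=".docx", delete=False).name
        else:
            self.file_path = file_path

    def metadata(self) -> dict:
        return {"source": self.original_file_path}
```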

2024-10-31 18:37:41 +00:00
Daniel Birn
389771ccc0 community: fix @embeddingKey in azure cosmos db no sql (#27377)
I will keep this PR as small as the changes made.

**Description:** fixes a fatal syntax error in
`AzureCosmosDBNoSqlVectorSearch`
**Issue:** #27269 #25468
2024-10-31 18:36:02 +00:00
Bagatur
06420de2e7 integrations[patch]: bump core to 0.3.15 (#27805) 2024-10-31 11:27:05 -07:00
Erick Friis
54cb80c778 docs: experimental case, use yq action (#27798) 2024-10-31 11:21:48 -07:00
W. Gustavo Cevallos
f94125a325 community: Update Polygon.io API (#27552)
**Description:** 
Update the wrapper to support the current Polygon API; otherwise you get
an error. I kept `STOCKBUSINESS` for retro-compatibility with older
endpoints / other uses.
Old Code:
```
 if status not in ("OK", "STOCKBUSINESS"):
    raise ValueError(f"API Error: {data}")

```
API response:
```
API Error: {'results': {'P': 0.22, 'S': 0, 'T': 'ZOM', 'X': 5, 'p': 0.123, 'q': 0, 's': 200, 't': 1729614422813395456, 'x': 1, 'z': 1}, 'status': 'STOCKSBUSINESS', 'request_id': 'XXXXXX'}
```
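
A hypothetical sketch of the updated check (the accepted status strings are assumptions based on the response above):

```python
def check_status(status: str, data: dict) -> None:
    # Accept the new "STOCKSBUSINESS" status while keeping
    # "STOCKBUSINESS" for retro-compatibility with older endpoints.
    if status not in ("OK", "STOCKBUSINESS", "STOCKSBUSINESS"):
        raise ValueError(f"API Error: {data}")
```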

- **Issue:** N/A Polygon API update
- **Dependencies:** N/A
- **Twitter handle:** @wgcv

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-10-31 18:14:06 +00:00
Wang
621f78babd community: [fix] add missing tool_calls kwargs of delta message in openai adapter (#27492)
- **Description:** add missing `tool_calls` kwargs on the delta message
in the openai adapter, so that tool calls work correctly via the
adapter's streaming chat completion
- **Issue:** Fixes
https://github.com/langchain-ai/langchain/issues/25436
- **Dependencies:** None
2024-10-31 14:07:17 -04:00
Tao Wang
25a1031871 community: Fix a validation error for MoonshotChat (#27801)
- **Description:** Change `MoonshotCommon.client` type from
`_MoonshotClient` to `Any`.
- **Issue:** Fix the issue #27058
- **Dependencies:** No
- **Twitter handle:** TaoWang2218

In PR #17100, the implementation for Moonshot was added, which defined
two classes:

- `MoonshotChat(MoonshotCommon, ChatOpenAI)` in
`langchain_community.chat_models.moonshot`;
- Here, `validate_environment()` assigns **client** as
`openai.OpenAI().chat.completions`
- Note that **client** here is actually a member variable defined in
`ChatOpenAI`;
- `MoonshotCommon` in `langchain_community.llms.moonshot`;
- And here, `validate_environment()` assigns **_client** as
`_MoonshotClient`;
- Note that this is the underscored **_client**, which is defined within
`MoonshotCommon` itself;

At this time, there was no conflict between the two, one being `client`
and the other `_client`.

However, in PR #25878 which fixed #24390, `_client` in `MoonshotCommon`
was changed to `client`. Since then, a conflict in the definition of
`client` has arisen between `MoonshotCommon` and `MoonshotChat`, which
caused a `pydantic` validation error.

To fix this issue, the type of `client` in `MoonshotCommon` should be
changed to `Any`.

Signed-off-by: Tao Wang <twang2218@gmail.com>
2024-10-31 14:00:16 -04:00
Bagatur
e4e2aa0b78 core[patch]: update image util err msg (#27803) 2024-10-31 10:56:43 -07:00
Bagatur
181bcd0577 core[patch]: Release 0.3.15 (#27802) 2024-10-31 10:35:02 -07:00
Bagatur
c1e742347f core[patch]: rm image loading (#27797) 2024-10-31 10:34:51 -07:00
ZhangShenao
ad0387ac97 Improvement [docs] Improve api docs (#27787)
- Add missing param
- Remove unused param

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-10-31 16:56:44 +00:00
Changyong Um
d9163e7afa community[docs]: Add content for the Lora adapter in the VLLM page. (#27788)
**Description:**
I added code for `lora_request` in the community package, but I forgot
to add content to the VLLM page, so this PR does that now. See #27731.

---------

Co-authored-by: Um Changyong <changyong.um@sfa.co.kr>
2024-10-31 12:44:35 -04:00
ccurme
0172d938b4 community: add AzureOpenAIWhisperParser (#27796)
Commandeered from https://github.com/langchain-ai/langchain/pull/26757.

---------

Co-authored-by: Sheepsta300 <128811766+Sheepsta300@users.noreply.github.com>
2024-10-31 12:37:41 -04:00
ccurme
b631b0a596 community[patch]: cap SQLAlchemy and update deps (#27792)
SQLAlchemy 2.0.36 introduces a regression when creating a table in
DuckDB.

Relevant issues:
- In SQLAlchemy repo (resolution is to update DuckDB):
https://github.com/sqlalchemy/sqlalchemy/discussions/12011
- In DuckDB repo (PR is open):
https://github.com/Mause/duckdb_engine/issues/1128

Plan is to track these issues and remove cap when resolved.
2024-10-31 14:19:09 +00:00
Erick Friis
8ad7adad87 infra: build api docs from package listing (#27774) 2024-10-30 21:31:01 -07:00
JiaranI
3952ee31b8 ollama: add pydocstyle linting for ollama (#27686)
Description: add docstring linting for the ollama module
Issue: https://github.com/langchain-ai/langchain/issues/23188
@baskaryan

test: ruff check passed.
![ruff check output](https://github.com/user-attachments/assets/e96bf721-e0e3-44de-a50e-206603de398e)

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-31 03:06:55 +00:00
Aayush Kataria
a8a33b2dc6 LangChain-Community - AzureCosmos Mongo vCore: Bug Fix when the data doesn't contain metadata field (#27772)
- **Description:** Adding an empty metadata field when metadata is not
present in the data
- **Issue:** This PR fixes the issue where data items don't contain
the metadata field. This happens when there is already data in the
container, or the customer uses the CosmosDB Python SDK to insert data.
- **Dependencies:** No dependencies required

2024-10-30 20:05:25 -07:00
Rave Harpaz
8d8d85379f community: OCI Generative AI tool calling bug fix (#26910)
- [x] **PR title**:
  "community: OCI Generative AI tool calling bug fix"

- [x] **PR message**:
- **Description:** bug fix for streaming chat responses with tool calls.
Update to PR 24693
    - **Issue:** chat response content is repeated when streaming
    - **Dependencies:** NA
    - **Twitter handle:** NA

- [x] **Add tests and docs**: NA

- [x] **Lint and test**: make format, make lint and make test ran
successfully

---------

Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-31 02:35:25 +00:00
Erick Friis
128b07208e community: release 0.3.4 (#27769) 2024-10-30 17:48:03 -07:00
Bagatur
6691202998 anthropic[patch]: allow multiple sys not at start (#27725) 2024-10-30 23:56:47 +00:00
Erick Friis
1ed3cd252e langchain: release 0.3.6 (#27768) 2024-10-30 23:50:42 +00:00
Sergey Ryabov
8180637345 community[patch]: Fix Playwright Tools bug with Pydantic schemas (#27050)
- Add tests for Playwright tools schema serialization
- Introduce base empty args Input class for BaseBrowserTool

Test Plan: `poetry run pytest
tests/unit_tests/tools/playwright/test_all.py`

Fixes #26758

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-10-30 23:45:36 +00:00
Bagatur
92024d0d7d infra: turn off release attestations (#27765) 2024-10-30 15:22:31 -07:00
Bagatur
deb4320d29 core[patch]: Release 0.3.14 (#27764) 2024-10-30 21:47:33 +00:00
Bagatur
5d337326b0 core[patch]: make get_all_basemodel_annotations public (#27761) 2024-10-30 14:43:29 -07:00
Bagatur
94ea950c6c core[patch]: support bedrock converse -> openai tool (#27754) 2024-10-30 12:20:39 -07:00
Lorenzo
3dfdb3e6fb community: prevent gitlab commit on main branch for Gitlab tool (#27750)
### About

- **Description:** In the Gitlab utilities used for the Gitlab tool
there is no check to prevent pushing to the main branch, as this is
already done for Github (for example here:
5a2cfb49e0/libs/community/langchain_community/utilities/github.py (L587)).
This PR adds this check, as is already done for Github.
- **Issue:** None
- **Dependencies:** None
2024-10-30 18:50:13 +00:00
Sam Julien
0a472e2a2d community: Add Writer integration (#27646)
**Description:** Add support for Writer chat models   
**Issue:** N/A
**Dependencies:** Add `writer-sdk` to optional dependencies.
**Twitter handle:** Please tag `@samjulien` and `@Get_Writer`

**Tests and docs**
- [x] Unit test
- [x] Example notebook in `docs/docs/integrations` directory.

**Lint and test**
- [x] Run `make format` 
- [x] Run `make lint`
- [x] Run `make test`

---------

Co-authored-by: Johannes <tolstoy.work@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-30 18:06:05 +00:00
ccurme
595dc592c9 docs: run how-to guides in CI (#27615)
Add how-to guides to [Run notebooks
job](https://github.com/langchain-ai/langchain/actions/workflows/run_notebooks.yml)
and fix existing notebooks.

- As with tutorials, cassettes must be updated when HTTP calls in guides
change (by running existing
[script](https://github.com/langchain-ai/langchain/blob/master/docs/scripts/update_cassettes.sh)).
- Cassettes now total ~62mb over 474 files.
- `docs/scripts/prepare_notebooks_for_ci.py` lists a number of notebooks
that do not run (e.g., due to requiring additional infra, slowness,
requiring `input()`, etc.).
2024-10-30 12:35:38 -04:00
ccurme
88bfd60b03 infra: specify python max version of 3.12 for some integration packages (#27740) 2024-10-30 12:24:48 -04:00
fayvor
3b956b3a97 community: Update Replicate LLM and fix tests (#27655)
**Description:** 
- Fix bug in Replicate LLM class, where it was looking for parameter
names in a place where they no longer exist in pydantic 2, resulting in
the "Field required" validation error described in the issue.
- Fix Replicate LLM integration tests to:
  - Use active models on Replicate.
- Use the correct model parameter `max_new_tokens` as shown in the
[Replicate
docs](https://replicate.com/docs/guides/language-models/how-to-use#minimum-and-maximum-new-tokens).
  - Use callbacks instead of deprecated callback_manager.

**Issue:** #26937 

**Dependencies:** n/a

**Twitter handle:** n/a
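
A hedged usage sketch reflecting the fixed tests (model slug assumed, not taken from the PR):

```python
from langchain_community.llms import Replicate

llm = Replicate(
    model="meta/meta-llama-3-8b-instruct",  # assumed: any active Replicate model
    model_kwargs={"max_new_tokens": 128},  # the parameter name the tests now use
)
print(llm.invoke("Say hello"))
```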

---------

Signed-off-by: Fayvor Love <fayvor@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-10-30 16:07:08 +00:00
Yuki Watanabe
e593e017d2 Update compatibility table for ChatDatabricks (#27676)
`ChatDatabricks` added support for structured output and JSON mode in
the last release. This PR updates the feature table accordingly.

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
2024-10-30 11:56:55 -04:00
ccurme
bd5ea18a6c groq[patch]: update standard tests (#27744)
- Add xfail on integration test (fails [> 50% of the
time](https://github.com/langchain-ai/langchain/actions/workflows/scheduled_test.yml));
- Remove xfail on passing unit test.
2024-10-30 15:50:51 +00:00
hmn falahi
98bb3a02bd docs: Add OpenAIAssistantV2Runnable docstrings (#27402)
- **Description:** add/improve docstrings of OpenAIAssistantV2Runnable
- **Issue:** the issue #21983

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-10-30 15:35:51 +00:00
Luiz F. G. dos Santos
7a29ca6200 community: add new parameters to pass to OpenAIAssistantV2Runnable (#27372)
**Description:** Added model parameters that can be passed to the OpenAI
Assistant, enabled via the `OpenAIAssistantV2Runnable` class.
 **Issue:** NA
  **Dependencies:** None
  **Twitter handle:** luizf0992
2024-10-30 10:51:03 -04:00
Ankur Singh
0b97135da1 fix the grammar and markdown component (#27657)
## Before

![Screenshot from 2024-10-26 08-47-29](https://github.com/user-attachments/assets/d8ccead1-3ba3-4f67-a29f-ef8b352341cf)

## After

![image](https://github.com/user-attachments/assets/78f36d54-b2d7-4164-b334-8ac41000711e)

## Typo
`(either in PR summary of in a linked issue)` => `either in PR summary
or in a linked issue`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-10-30 14:47:26 +00:00
Abdesselam Benameur
8fb6708ac4 Fix typo (missing letter) in elasticsearch_retriever.ipynb (#27639)
Fixed a small typo (added a missing "t" in ElasticsearchRetriever docs
page)


https://python.langchain.com/docs/integrations/retrievers/elasticsearch_retriever/#:~:text=It%20is%20possible%20to%20cusomize%20the%20function%20tha%20maps%20an%20Elasticsearch%20result%20(hit)%20to%20a%20LangChain%20document.
2024-10-30 14:38:39 +00:00
随风枫叶
18cfb4c067 community: Add token_usage and model_name metadata to ChatZhipuAI stream() and astream() response (#27677)
- **Description:** Add token_usage and model_name metadata to
ChatZhipuAI stream() and astream() response
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** None
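
A hedged usage sketch (model name assumed; requires ZHIPUAI_API_KEY in the environment, and where exactly the metadata lands on each chunk is an assumption):

```python
from langchain_community.chat_models import ChatZhipuAI

llm = ChatZhipuAI(model="glm-4")
for chunk in llm.stream("Hello"):
    # token_usage and model_name now also appear on streamed chunks
    print(chunk.content, chunk.response_metadata)
```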



Co-authored-by: jianfehuang <jianfehuang@tencent.com>
2024-10-30 10:34:33 -04:00
Martin Gullbrandson
8a5807a6b4 docs: Update Milvus documentation to correctly show how to filter in similarity_search (#27723)
### Description/Issue:

I had problems filtering when setting up a local Milvus db and noticed
that the `filter` option in the `similarity_search` and
`similarity_search_with_score` appeared to do nothing. Instead, the
`expr` option should be used.

The `expr` option is correctly used in the retriever example further
down in the documentation.

The `expr` option seems to be correctly passed on, for example
[here](447c0dd2f0/libs/community/langchain_community/vectorstores/milvus.py (L701))

### Solution:

Update the documentation for the functions mentioned to show intended
behavior.
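
A hedged sketch of the documented behavior (store construction elided into the function argument; expression string hypothetical):

```python
from langchain_community.vectorstores import Milvus

def filtered_search(store: Milvus, query: str):
    # `expr` takes a Milvus boolean-expression string; `filter` is not used
    return store.similarity_search(query, k=4, expr='source == "docs"')
```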

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-10-30 14:15:11 +00:00
tkubo-heroz
028e0253d8 community: Added anthropic.claude-3-5-sonnet-20241022-v2:0 cost detials (#27728)
Added anthropic.claude-3-5-sonnet-20241022-v2:0 cost details
2024-10-30 14:01:01 +00:00
Changyong Um
dc171221b3 community[patch]: Fix vLLM integration to apply lora_request (#27731)
**Description:**
- Add the `lora_request` parameter to the VLLM class to support LoRA
model configurations. This enhancement allows users to specify LoRA
requests directly when using VLLM, enabling more flexible and efficient
model customization.

**Issue:**
- No existing issue for `lora_adapter` in VLLM. This PR addresses the
need for configuring LoRA requests within the VLLM framework.
- Reference : [Using LoRA Adapters in
vLLM](https://docs.vllm.ai/en/stable/models/lora.html#using-lora-adapters)


**Example Code :**
Before this change, the `lora_request` parameter was not applied
correctly:

```python
from langchain_community.llms import VLLM
from vllm.lora.request import LoRARequest

ADAPTER_PATH = "/path/of/lora_adapter"

llm = VLLM(model="Bllossom/llama-3.2-Korean-Bllossom-3B",
           max_new_tokens=512,
           top_k=2,
           top_p=0.90,
           temperature=0.1,
           vllm_kwargs={
               "gpu_memory_utilization":0.5, 
               "enable_lora":True, 
               "max_model_len":1024,
           }
)

print(llm.invoke(
    ["...prompt_content..."], 
    lora_request=LoRARequest("lora_adapter", 1, ADAPTER_PATH)
    ))
```
**Before Change Output:**
```bash
response was not applied lora_request
```
So, I attempted to apply the lora_adapter to
langchain_community.llms.vllm.VLLM.

**current output:**
```bash
response applied lora_request
```

**Dependencies:**
- None

**Lint and test:**
- All tests and lint checks have passed.

---------

Co-authored-by: Um Changyong <changyong.um@sfa.co.kr>
2024-10-30 13:59:34 +00:00
Nawaz Haider
9d2f6701e1 DOCS: Fixed import of langchain instead of langchain_nvidia_ai_endpoints for ChatNVIDIA (#27734)
* **PR title**: "docs: Replaced langchain import with
langchain-nvidia-ai-endpoints in NVIDIA Endpoints Tab"

* **PR message**:
+ **Description:** Replaced the import of `langchain` with
`langchain-nvidia-ai-endpoints` in the NVIDIA Endpoints Tab to resolve
an error caused by the documentation attempting to import the generic
`langchain` module despite the targeted import.
	+ **Issue:** 
+ **Dependencies:** No additional dependencies introduced; simply
updated the existing import to a more specific module.
	+ **Twitter handle:** https://x.com/nawaz0x1

* **Add tests and docs**:
+ **Applicability:** Not applicable in this case, as the change is a fix
to an existing integration rather than the addition of a new one.
+ **Rationale:** No new functionality or integrations are introduced,
only a corrective import change.

* **Lint and test**:
	+ **Status:** Completed
	+ **Outcome:** 
		- `make format`: **Passed**
		- `make lint`: **Passed**
		- `make test`: **Passed**


![image](https://github.com/user-attachments/assets/fbc1b597-5083-4461-875a-d32ab8ed933c)
2024-10-30 13:57:37 +00:00
Qier LU
8d8e38b090 community[pathch]: Add missing custom content_key handling in Redis vector store (#27736)
This fixes an error caused by missing custom content_key handling in the
Redis vector store's similarity_search_with_score function.
2024-10-30 13:57:20 +00:00
William FH
5a2cfb49e0 Support message trimming on single messages (#27729)
Permit trimming message lists of length 1
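
A minimal sketch of the newly permitted case (toy token counter; not from the PR):

```python
from langchain_core.messages import HumanMessage, trim_messages

msgs = [HumanMessage("hello world")]  # a list of length 1 now works
trimmed = trim_messages(
    msgs,
    max_tokens=5,
    token_counter=len,  # toy counter: one "token" per message
    strategy="last",
)
print(trimmed)
```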
2024-10-30 04:27:52 +00:00
Bagatur
5111063af2 langchain[patch]: Release 0.3.5 (#27727) 2024-10-29 17:06:23 -07:00
Bagatur
8f4423e042 text-splitters[patch]: Release 0.3.1 (#27726) 2024-10-30 00:04:48 +00:00
Prithvi Kannan
0433b114bb docs: Add databricks-langchain package consolidation notice (#27703)
Add notice of upcoming package consolidation of `langchain-databricks`
into `databricks-langchain`.

![image](https://github.com/user-attachments/assets/18eaa394-4e82-444b-85d5-7812be322674)



Signed-off-by: Prithvi Kannan <prithvi.kannan@databricks.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-29 22:00:27 +00:00
Zapiron
447c0dd2f0 docs: Fixed Grammar & Improve reading (#27672)
Updated the documentation to fix some grammar errors

    - **Description:** Some language errors exist in the documentation
    - **Issue:** N/A; changed the structure of some sentences
2024-10-29 20:19:00 +00:00
Soham Das
913ff1b152 docs: fix typo in query analysis documentation (#27721)
**PR Title**: `docs: fix typo in query analysis documentation`

**Description**: This PR corrects a typo on line 68 in the query
analysis documentation, changing **"pharsings"** to **"phrasings"** for
clarity and accuracy. Only one instance of the typo was fixed in the
last merge, and this PR fixes the second instance.

**Issue**: N/A

**Dependencies**: None

**Additional Notes**: No functional changes were made; this is a
documentation fix only.
2024-10-29 16:15:37 -04:00
Erick Friis
8396ca2990 docs: redis in api docs (#27722) 2024-10-29 20:13:53 +00:00
Mateusz Szewczyk
0606aabfa3 docs: Added WatsonxRerank documentation (#27424)
Changes:
- docs: Added `WatsonxRerank` documentation 
- docs Updated `WatsonxEmbeddings` with docs template
- docs: Updated `ChatWatsonx` with docs template
- docs: Updated `WatsonxLLM` with docs template
- docs: Added `ChatWatsonx` to list with Chat models providers. Added
[test_chat_models_standard](https://github.com/langchain-ai/langchain-ibm/blob/main/libs/ibm/tests/integration_tests/test_chat_models_standard.py)
to `langchain_ibm` tests suite.
- docs: Added `IBM` to list with Embedding models providers. Added
[test_embeddings_standard](https://github.com/langchain-ai/langchain-ibm/blob/main/libs/ibm/tests/integration_tests/test_embeddings_standard.py)
to `langchain_ibm` tests suite.
- docs: Updated `langchain_ibm` recommended versions compatible with
`LangChain v0.3`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-29 16:57:47 +00:00
Zapiron
9ccd4a6ffb DOC: Tutorial Section Updates (#27675)
Edited various notebooks in the tutorial section to fix:
* Grammatical errors
* Readability, by changing sentence structure and reducing repeated
words that bear the same meaning
* A code block, edited to follow the PEP 8 standard
* Some sentences, expanded with more information to make the concepts
clearer and reduce ambiguity

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-10-29 14:51:34 +00:00
Harsimran-19
c1d8c33df6 core: JsonOutputParser UTF characters bug (#27306)
**Description:**
This PR fixes an issue where non-ASCII characters in Pydantic field
descriptions were being escaped to their Unicode representations when
using `JsonOutputParser`. The change allows non-ASCII characters to be
preserved in the output, which is especially important for multilingual
support and when working with non-English languages.

**Issue:** Fixes #27256

**Example Code:**
```python
from pydantic import BaseModel, Field
from langchain_core.output_parsers import JsonOutputParser

class Article(BaseModel):
    title: str = Field(description="科学文章的标题")

output_data_structure = Article
parser = JsonOutputParser(pydantic_object=output_data_structure)
print(parser.get_format_instructions())
```
**Previous Output**:
```... "title": {"description": "\\u79d1\\u5b66\\u6587\\u7ae0\\u7684\\u6807\\u9898", "title": "Title", "type": "string"}} ...```

**Current Output**:
```... "title": {"description": "科学文章的标题", "title": "Title", "type":
"string"}} ...```

**Changes made**:
- Modified `json.dumps()` call in
`langchain_core/output_parsers/json.py` to use `ensure_ascii=False`
- Added a unit test to verify Unicode handling

Co-authored-by: Harsimran-19 <harsimran1869@gmail.com>
2024-10-29 14:48:53 +00:00
Andrew Effendi
49517cc1e7 partners/huggingface[patch]: fix HuggingFacePipeline model_id parameter (#27514)
**Description:** Fixes issue with model parameter not getting
initialized correctly when passing transformers pipeline
**Issue:** https://github.com/langchain-ai/langchain/issues/25915
2024-10-29 14:34:46 +00:00
Jeong-Minju
0a465b8032 docs: Fix typo in _action_agent docs section (#27698)
PR Title: docs: Fix typo in _action_agent function docs section

Description: In line 1185, _action_agent function's docs, changing
**".agent"** to **"self.agent"**.

Issue: N/A

Dependencies: None

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-10-29 14:16:42 +00:00
Soham Das
c3021e9322 docs: fix typo in query analysis documentation (#27697)
**PR Title**: `docs: fix typo in query analysis documentation`

**Description**: This PR corrects a typo on line 68 in the query
analysis documentation, changing **"pharsings"** to **"phrasings"** for
clarity and accuracy.

**Issue**: N/A

**Dependencies**: None

**Additional Notes**: No functional changes were made; this is a
documentation fix only.
2024-10-29 14:07:22 +00:00
Neil Vachharajani
eec35672a4 core[patch]: Improve type checking for the tool decorator (#27460)
**Description:**

When annotating a function with the @tool decorator, the symbol should
have type BaseTool. The previous type annotations did not convey that to
type checkers. This patch creates 4 overloads for the tool function for
the 4 different use cases.

1. @tool decorator with no arguments
2. @tool decorator with only keyword arguments
3. @tool decorator with a name argument (and possibly keyword arguments)
4. Invoking tool as function with a name and runnable positional
arguments

The main function is updated to match the overloads. The changes are
100% backwards compatible (all existing calls should continue to work,
just with better type annotations).
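
A hedged sketch of the four forms (tool bodies are toy examples, not from the PR):

```python
from langchain_core.tools import BaseTool, tool

@tool  # 1. decorator with no arguments
def search(query: str) -> str:
    """Look up a query."""
    return "result"

@tool(return_direct=True)  # 2. keyword arguments only
def calculator(expression: str) -> str:
    """Evaluate an expression."""
    return "42"

@tool("web-search", return_direct=False)  # 3. name plus keyword arguments
def named_search(query: str) -> str:
    """Look up a query."""
    return "result"

# 4. is invoking tool as a plain function with a name and a runnable
# (omitted here); in every form the decorated symbol is a BaseTool:
assert isinstance(search, BaseTool) and isinstance(calculator, BaseTool)
```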

**Twitter handle:** @nvachhar

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-10-29 13:59:56 +00:00
Erick Friis
94e5765416 docs: packages in homepage (#27693) 2024-10-28 20:44:30 +00:00
Erick Friis
583808a7b8 partners/huggingface: release 0.1.1 (#27691) 2024-10-28 13:39:38 -07:00
Erick Friis
6d524e9566 partners/box: release 0.2.2 (#27690) 2024-10-28 12:54:20 -07:00
yahya-mouman
6803cb4f34 openai[patch]: add check for none values when summing token usage (#27585)
**Description:** Fixes None addition issues when an empty value is
passed on
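
A minimal sketch of the guard (helper name hypothetical):

```python
def add_token_counts(a: int | None, b: int | None) -> int | None:
    # Treat a missing count as 0 instead of summing None + int
    if a is None and b is None:
        return None
    return (a or 0) + (b or 0)

assert add_token_counts(None, 5) == 5
```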

2024-10-28 12:49:43 -07:00
2024-10-28 12:49:43 -07:00
Bagatur
ede953d617 openai[patch]: fix schema formatting util (#27685) 2024-10-28 15:46:47 +00:00
Baptiste Pasquier
440c162b8b community: Fix closed session in Infinity (#26933)
**Description:** 

The `aiohttp.ClientSession` is closed at the end of the with statement,
which causes an error during a second call.

The implemented fix is to define the session directly within the with
block, exactly like in the textembed code:


c6350d636e/libs/community/langchain_community/embeddings/textembed.py (L335-L346)
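
For context, a minimal sketch of the pattern the fix adopts (the URL and payload are illustrative, not Infinity's actual endpoint):

```python
import aiohttp

async def fetch_embeddings(payload: dict) -> dict:
    # Creating the ClientSession inside the `async with` gives every call
    # a fresh, open session instead of reusing one that was closed at the
    # end of a previous call.
    async with aiohttp.ClientSession() as session:
        async with session.post("https://example.com/embeddings", json=payload) as resp:
            return await resp.json()
```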
 
**Issue:** Fix #26932

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-10-27 11:37:21 -04:00
Jorge Piedrahita Ortiz
8895d468cb community: sambastudio llm refactor (#27215)
**Description:**
- SambaStudio LLM refactor
- SambaStudio OpenAI-compatible API support added
- docs updated
2024-10-27 11:08:15 -04:00
ccurme
fe87e411f2 groq: fix unit test (#27660) 2024-10-26 14:57:23 -04:00
Erick Friis
cdb4b1980a docs: reorganize contributing docs (#27649) 2024-10-25 22:41:54 +00:00
Erick Friis
fbfc6bdade core: test runner improvements (#27654)
When running core tests locally, this:
- prevents langsmith tracing from being enabled by env vars
- prevents network calls
2024-10-25 15:06:59 -07:00
Gabriel Faundez
ef27ce7a45 docs: add missing import for tools docs (#27650)
## Description

Added missing import from `pydantic` in the tools docs
2024-10-25 21:14:40 +00:00
Vincent Min
7bc4e320f1 core[patch]: improve performance of InMemoryVectorStore (#27538)
**Description:** We improve the performance of the InMemoryVectorStore.
**Issue:** Originally, similarity was computed document by document:
```
for doc in self.store.values():
    vector = doc["vector"]
    similarity = float(cosine_similarity([embedding], [vector]).item(0))
```
This is inefficient and does not make use of numpy vectorization.
This PR computes the similarity in one vectorized go:
```
docs = list(self.store.values())
similarity = cosine_similarity([embedding], [doc["vector"] for doc in docs])
```
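As a rough illustration of the vectorization win (shapes and data are illustrative, not the store's actual code):

```python
import numpy as np

# One matrix-vector product replaces a Python-level loop of
# per-document cosine similarities.
rng = np.random.default_rng(0)
embedding = rng.random(128)
doc_vectors = rng.random((10_000, 128))

norms = np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(embedding)
similarities = (doc_vectors @ embedding) / norms  # cosine similarities, vectorized
print(similarities.shape)  # (10000,)
```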
**Dependencies:** None
**Twitter handle:** @b12_consulting, @Vincent_Min

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-10-25 17:07:04 -04:00
Bagatur
d5306899d3 openai[patch]: Release 0.2.4 (#27652) 2024-10-25 20:26:21 +00:00
Erick Friis
247d6bb09d infra: test doc imports 3.12 (#27653) 2024-10-25 13:23:06 -07:00
Erick Friis
600b7bdd61 all: test 3.13 ci (#27197)
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-10-25 12:56:58 -07:00
Bagatur
06df15c9c0 core[patch]: Release 0.3.13 (#27651) 2024-10-25 19:22:44 +00:00
Erick Friis
2683f814f4 docs: contributing index page (#27647) 2024-10-25 17:06:55 +00:00
Rashmi Pawar
83eebf549f docs: Add NVIDIA as provider in v3 integrations (#27254)
### Add NVIDIA as provider in langchain v3 integrations

cc: @sumitkbh @mattf @dglogo

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-25 16:21:22 +00:00
Steve Moss
24605bcdb6 community[patch]: Fix missing protected_namespaces(). (#27610)
- **Description:** Fixes warning messages raised due to missing `protected_namespaces` parameter in `ConfigDict`.
- **Issue:** https://github.com/langchain-ai/langchain/issues/27609
- **Dependencies:** No dependencies
- **Twitter handle:** @gawbul
2024-10-25 02:16:26 +00:00
Eugene Yurtsev
7667ee126f core: remove mustache in extended deps (#27629)
Remove mustache from extended deps -- we vendor the mustache
implementation
2024-10-24 22:12:49 -04:00
Erick Friis
265e0a164a core: add flake8-bandit (S) ruff rules to core (#27368)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-10-24 22:33:41 +00:00
hippopond
bcff458ae3 DOC: Added notes in ipynb file to advise user to upgrade package langchain_openai. For issue: https://github.com/langchain-ai/langchain/issues/26616 (#27621)
**Description:** Added notes (from the issue report) advising users to upgrade `langchain_openai`.

**Issue:** https://github.com/langchain-ai/langchain/issues/26616
---------

Co-authored-by: Libby Lin <libbylin@Libbys-MacBook-Pro.local>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-24 21:54:12 +00:00
Nithish Raghunandanan
0623c74560 couchbase: Add document id to vector search results (#27622)
**Description:** Returns the document id along with the Vector Search
results

**Issue:** Fixes https://github.com/langchain-ai/langchain/issues/26860
for CouchbaseVectorStore



Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-24 21:47:36 +00:00
ZhangShenao
455ab7d714 Improvement[Community] Improve Document Loaders and Splitters (#27568)
- Fix word spelling error
- Add static method decorator
- Fix language splitter

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-24 21:42:16 +00:00
Ed Branch
7345470669 docs: add aws support to how-to-guides (#27450)
This PR adds support to the how-to documentation for using AWS Bedrock
and Sagemaker Endpoints.

Because the AWS services above don't presently use API keys to access
LLMs, I've amended more of the source code than would normally be expected.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-24 14:23:32 -07:00
CLOVA Studio 개발
846a75284f community: Add Naver chat model & embeddings (#25162)
Reopened as a personal repo outside the organization.

## Description
- Naver HyperCLOVA X community package 
  - Add chat model & embeddings
  - Add unit test & integration test
  - Add chat model & embeddings docs
- I changed the partner
package (https://github.com/langchain-ai/langchain/pull/24252) to a
community package in this PR
- Could this embeddings class
(https://github.com/langchain-ai/langchain/pull/21890) be deprecated? We
are trying to replace it with the embedding
model (**ClovaXEmbeddings**) in this PR.

Twitter handle: None. (if needed, contact with
joonha.jeon@navercorp.com)

---
you can check our previous discussion below:

> one question on namespaces - would it make sense to have these in
.clova namespaces instead of .naver?

I would like to keep it as is, unless it is essential to unify the
package name.
(ClovaX is a branding for the model, and I plan to add other models and
components. They need to be managed as separate classes.)

> also, could you clarify the difference between ClovaEmbeddings and
ClovaXEmbeddings?

There are 3 models that are being serviced by embedding, and all are
supported in the current PR. In addition, all the functionality of CLOVA
Studio that serves actual models, such as distinguishing between test
apps and service apps, is supported. The existing PR does not support
this content because it is hard-coded.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Vadym Barda <vadym@langchain.dev>
2024-10-24 20:54:13 +00:00
Hyejun An
6227396e20 partners/HuggingFacePipeline[stream]: Change to use pipeline instead of pipeline.model.generate in stream() (#26531)
## Description

I encountered an error while using the `gemma-2-2b-it` model with the
`HuggingFacePipeline` class and have implemented a fix to resolve this
issue.

### What is Problem

```python
model_id="google/gemma-2-2b-it"


gemma_2_model = AutoModelForCausalLM.from_pretrained(model_id)
gemma_2_tokenizer = AutoTokenizer.from_pretrained(model_id)

gen = pipeline(
    task='text-generation',
    model=gemma_2_model,
    tokenizer=gemma_2_tokenizer,
    max_new_tokens=1024,
    device=0 if torch.cuda.is_available() else -1,
    temperature=0.5,
    top_p=0.7,
    repetition_penalty=1.1,
    do_sample=True,
)

llm = HuggingFacePipeline(pipeline=gen)

for chunk in llm.stream("Hello World. Hello World. Hello World. Hello World. Hello World. Hello World. Hello World. Hello World. Hello World. Hello World."):
    print(chunk, end="", flush=True)
```

This code outputs the following error message:

```
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1258: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.
  warnings.warn(
Exception in thread Thread-19 (generate):
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 1874, in generate
    self._validate_generated_length(generation_config, input_ids_length, has_default_max_length)
  File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 1266, in _validate_generated_length
    raise ValueError(
ValueError: Input length of input_ids is 31, but `max_length` is set to 20. This can lead to unexpected behavior. You should consider increasing `max_length` or, better yet, setting `max_new_tokens`.
```

In addition, the following error occurs when the number of tokens is
reduced.

```python
for chunk in llm.stream("Hello World"):
    print(chunk, end="", flush=True)
```

```
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1258: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1885: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.
  warnings.warn(
Exception in thread Thread-20 (generate):
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 2024, in generate
    result = self._sample(
  File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 2982, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/gemma2/modeling_gemma2.py", line 994, in forward
    outputs = self.model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/gemma2/modeling_gemma2.py", line 803, in forward
    inputs_embeds = self.embed_tokens(input_ids)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/sparse.py", line 164, in forward
    return F.embedding(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 2267, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
```

On the other hand, in the case of invoke, the output is normal:

```
llm.invoke("Hello World. Hello World. Hello World. Hello World. Hello World. Hello World. Hello World. Hello World. Hello World. Hello World.")
```
```
'Hello World. Hello World. Hello World. Hello World. Hello World. Hello World. Hello World. Hello World. Hello World. Hello World.\n\nThis is a simple program that prints the phrase "Hello World" to the console. \n\n**Here\'s how it works:**\n\n* **`print("Hello World")`**: This line of code uses the `print()` function, which is a built-in function in most programming languages (like Python). The `print()` function takes whatever you put inside its parentheses and displays it on the screen.\n* **`"Hello World"`**:  The text within the double quotes (`"`) is called a string. It represents the message we want to print.\n\n\nLet me know if you\'d like to explore other programming concepts or see more examples! \n'
```

### Problem Analysis

- The kwargs I passed when creating the pipeline are applied by
`invoke()`, but not by `stream()`.
- When streaming, `inputs = self.pipeline.tokenizer(prompt,
return_tensors="pt")` creates the input tensors on the CPU.
  - This can crash when the model is on a GPU.

### Solution

Just use `self.pipeline` instead of `self.pipeline.model.generate`.

- **Original Code**

```python
stopping_criteria = StoppingCriteriaList([StopOnTokens()])

inputs = self.pipeline.tokenizer(prompt, return_tensors="pt")
streamer = TextIteratorStreamer(
    self.pipeline.tokenizer,
    timeout=60.0,
    skip_prompt=skip_prompt,
    skip_special_tokens=True,
)
generation_kwargs = dict(
    inputs,
    streamer=streamer,
    stopping_criteria=stopping_criteria,
    **pipeline_kwargs,
)
t1 = Thread(target=self.pipeline.model.generate, kwargs=generation_kwargs)
t1.start()
```

- **Updated Code**

```python
stopping_criteria = StoppingCriteriaList([StopOnTokens()])

streamer = TextIteratorStreamer(
    self.pipeline.tokenizer,
    timeout=60.0,
    skip_prompt=skip_prompt,
    skip_special_tokens=True,
)
generation_kwargs = dict(
    text_inputs=prompt,
    streamer=streamer,
    stopping_criteria=stopping_criteria,
    **pipeline_kwargs,
)
t1 = Thread(target=self.pipeline, kwargs=generation_kwargs)
t1.start()
```

By using the `pipeline` directly, the `kwargs` of the pipeline are
applied, and there is no need to consider the `device` of the `tensor`
made with the `tokenizer`.

> In line with the change to call `pipeline` directly, the code was
modified to put `text_inputs=prompt` directly into `generation_kwargs`.

## Issue

None

## Dependencies

None

## Twitter handle

None

---------

Co-authored-by: Vadym Barda <vadym@langchain.dev>
2024-10-24 16:49:43 -04:00
Bagatur
655ced84d7 openai[patch]: accept json schema response format directly (#27623)
fix #25460
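
A hedged usage sketch (the payload shape follows OpenAI's `json_schema` response format; the model name and schema are illustrative):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini").bind(
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "article",
            "schema": {
                "type": "object",
                "properties": {"title": {"type": "string"}},
                "required": ["title"],
            },
        },
    }
)
# llm.invoke("Suggest a title for a paper on Unicode handling")
```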

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-24 18:19:15 +00:00
Tibor Reiss
20b56a0233 core[patch]: fix repr and str for Serializable (#26786)
Fixes #26499

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-10-24 08:36:35 -07:00
Adarsh Sahu
2d58a8a08d docs: Update structured_outputs.mdx (#27613)
`strightforward` => `straightforward`
`adavanced` => `advanced`
`There a few challenges` => `There are a few challenges`

Documentation Correction:

*
[`docs/docs/concepts/structured_output.mdx`]:
Corrected several typos in the sentence directing users to the API
reference.
2024-10-24 15:13:28 +00:00
Daniel Vu Dao
da6b526770 docs: Update Runnable documentation (#27606)
**Description**
Adds better code formatting for one of the docs.
2024-10-24 15:05:43 +00:00
QiQi
133c1b4f76 docs: Update passthrough.ipynb -- Grammar correction (#27601)
Grammar correction needed in passthrough.ipynb
The sentence is:

"Now you've learned how to pass data through your chains to help to help
format the data flowing through your chains."

There's a redundant "to help", and it could be more succinctly written
as:

"Now you've learned how to pass data through your chains to help format
the data flowing through your chains."
2024-10-24 15:05:06 +00:00
hippopond
61897aef90 docs: Fix for spelling mistake (#27599)
Fixes #26009

**Description:** Corrected spelling from "trianed" to "trained".

Co-authored-by: Libby Lin <libbylin@Libbys-MacBook-Pro.local>
2024-10-24 15:04:18 +00:00
Eugene Yurtsev
d081a5400a docs: fix more links (#27598)
Fix more links
2024-10-23 21:26:38 -04:00
Lei Zhang
f203229b51 community: Fix the failure of ChatSparkLLM after upgrading to Pydantic V2 (#27418)
**Description:**

The test_sparkllm.py can reproduce this issue.


https://github.com/langchain-ai/langchain/blob/master/libs/community/tests/integration_tests/chat_models/test_sparkllm.py#L66

```
Testing started at 18:27 ...
Launching pytest with arguments test_sparkllm.py::test_chat_spark_llm --no-header --no-summary -q in /Users/zhanglei/Work/github/langchain/libs/community/tests/integration_tests/chat_models

============================= test session starts ==============================
collecting ... collected 1 item

test_sparkllm.py::test_chat_spark_llm 

============================== 1 failed in 0.45s ===============================
FAILED                             [100%]
tests/integration_tests/chat_models/test_sparkllm.py:65 (test_chat_spark_llm)
def test_chat_spark_llm() -> None:
>       chat = ChatSparkLLM(
            spark_app_id="your spark_app_id",
            spark_api_key="your spark_api_key",
            spark_api_secret="your spark_api_secret",
        )  # type: ignore[call-arg]

test_sparkllm.py:67: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../../../core/langchain_core/load/serializable.py:111: in __init__
    super().__init__(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

cls = <class 'langchain_community.chat_models.sparkllm.ChatSparkLLM'>
values = {'spark_api_key': 'your spark_api_key', 'spark_api_secret': 'your spark_api_secret', 'spark_api_url': 'wss://spark-api.xf-yun.com/v3.5/chat', 'spark_app_id': 'your spark_app_id', ...}

    @model_validator(mode="before")
    @classmethod
    def validate_environment(cls, values: Dict) -> Any:
        values["spark_app_id"] = get_from_dict_or_env(
            values,
            ["spark_app_id", "app_id"],
            "IFLYTEK_SPARK_APP_ID",
        )
        values["spark_api_key"] = get_from_dict_or_env(
            values,
            ["spark_api_key", "api_key"],
            "IFLYTEK_SPARK_API_KEY",
        )
        values["spark_api_secret"] = get_from_dict_or_env(
            values,
            ["spark_api_secret", "api_secret"],
            "IFLYTEK_SPARK_API_SECRET",
        )
        values["spark_api_url"] = get_from_dict_or_env(
            values,
            "spark_api_url",
            "IFLYTEK_SPARK_API_URL",
            SPARK_API_URL,
        )
        values["spark_llm_domain"] = get_from_dict_or_env(
            values,
            "spark_llm_domain",
            "IFLYTEK_SPARK_LLM_DOMAIN",
            SPARK_LLM_DOMAIN,
        )
    
        # put extra params into model_kwargs
        default_values = {
            name: field.default
            for name, field in get_fields(cls).items()
            if field.default is not None
        }
>       values["model_kwargs"]["temperature"] = default_values.get("temperature")
E       KeyError: 'model_kwargs'

../../../langchain_community/chat_models/sparkllm.py:368: KeyError
``` 

I found that when upgrading to Pydantic v2, @root_validator was changed
to @model_validator. When a class declares multiple
@model_validator(mode="before") validators, the execution order in V1 and
V2 is opposite. This is the reason for ChatSparkLLM's failure.

The correct execution order is to execute build_extra first.


https://github.com/langchain-ai/langchain/blob/langchain%3D%3D0.2.16/libs/community/langchain_community/chat_models/sparkllm.py#L302

And then execute validate_environment.


https://github.com/langchain-ai/langchain/blob/langchain%3D%3D0.2.16/libs/community/langchain_community/chat_models/sparkllm.py#L329

The Pydantic community also discusses it, but there hasn't been a
conclusion yet. https://github.com/pydantic/pydantic/discussions/7434
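
A self-contained sketch of the ordering difference under Pydantic v2, per the behavior described above:

```python
from pydantic import BaseModel, model_validator

class Demo(BaseModel):
    x: int = 0

    @model_validator(mode="before")
    @classmethod
    def declared_first(cls, values):
        print("declared_first")
        return values

    @model_validator(mode="before")
    @classmethod
    def declared_second(cls, values):
        print("declared_second")
        return values

# Pydantic v2 runs the later-declared "before" validator first
# (declared_second, then declared_first) -- the reverse of v1.
Demo()
```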

**Issue:** #27416

**Twitter handle:** coolbeevip

---------

Co-authored-by: vbarda <vadym@langchain.dev>
2024-10-23 21:17:10 -04:00
Andrew Effendi
8f151223ad Community: Fix DuckDuckGo search tool Output Format (#27479)
**Issue:** : https://github.com/langchain-ai/langchain/issues/22961
   **Description:** 

Previously, the documentation for `DuckDuckGoSearchResults` said that it
returns a JSON string, however the code returns a regular string that
can't be parsed as-is. For example, running

```python
from langchain_community.tools import DuckDuckGoSearchResults

# Create a DuckDuckGo search instance
search = DuckDuckGoSearchResults()

# Invoke the search
result = search.invoke("Obama")

# Print the result
print(result)
# Print the type of the result
print("Result Type:", type(result))
```
will return
```
snippet: Harris will hold a campaign event with former President Barack Obama in Georgia next Thursday, the first time the pair has campaigned side by side, a senior campaign official said. A week from ..., title: Obamas to hit the campaign trail in first joint appearances with Harris, link: https://www.nbcnews.com/politics/2024-election/obamas-hit-campaign-trail-first-joint-appearances-harris-rcna176034, snippet: Item 1 of 3 Former U.S. first lady Michelle Obama and her husband, former U.S. President Barack Obama, stand on stage during Day 2 of the Democratic National Convention (DNC) in Chicago, Illinois ..., title: Obamas set to hit campaign trail with Kamala Harris for first time, link: https://www.reuters.com/world/us/obamas-set-hit-campaign-trail-with-kamala-harris-first-time-2024-10-18/, snippet: Barack and Michelle Obama will make their first campaign appearances alongside Kamala Harris at rallies in Georgia and Michigan. By Reid J. Epstein Reporting from Ashwaubenon, Wis. Here come the ..., title: Harris Will Join Michelle Obama and Barack Obama on Campaign Trail, link: https://www.nytimes.com/2024/10/18/us/politics/kamala-harris-michelle-obama-barack-obama.html, snippet: Obama's leaving office was "a turning point," Mirsky said. "That was the last time anybody felt normal." A few feet over, a 64-year-old physics professor named Eric Swanson who had grown ..., title: Obama's reemergence on the campaign trail for Harris comes as he ..., link: https://www.cnn.com/2024/10/13/politics/obama-campaign-trail-harris-biden/index.html
Result Type: <class 'str'>
```

After the change in this PR, `DuckDuckGoSearchResults` takes an
additional `output_format = "list" | "json" | "string"` parameter
("string" = current behavior, default). For example, invoking
`DuckDuckGoSearchResults(output_format="list")` returns a list of
dictionaries in the format
```
[{'snippet': '...', 'title': '...', 'link': '...'}, ...]
```
e.g.

```
[{'snippet': "Obama has in a sense been wrestling with Trump's impact since the real estate magnate broke onto the political stage in 2015. Trump's victory the next year, defeating Obama's secretary of ...", 'title': "Obama's fears about Trump drive his stepped-up campaigning", 'link': 'https://www.washingtonpost.com/politics/2024/10/18/obama-trump-anxiety-harris-campaign/'}, {'snippet': 'Harris will hold a campaign event with former President Barack Obama in Georgia next Thursday, the first time the pair has campaigned side by side, a senior campaign official said. A week from ...', 'title': 'Obamas to hit the campaign trail in first joint appearances with Harris', 'link': 'https://www.nbcnews.com/politics/2024-election/obamas-hit-campaign-trail-first-joint-appearances-harris-rcna176034'}, {'snippet': 'Item 1 of 3 Former U.S. first lady Michelle Obama and her husband, former U.S. President Barack Obama, stand on stage during Day 2 of the Democratic National Convention (DNC) in Chicago, Illinois ...', 'title': 'Obamas set to hit campaign trail with Kamala Harris for first time', 'link': 'https://www.reuters.com/world/us/obamas-set-hit-campaign-trail-with-kamala-harris-first-time-2024-10-18/'}, {'snippet': 'Barack and Michelle Obama will make their first campaign appearances alongside Kamala Harris at rallies in Georgia and Michigan. By Reid J. Epstein Reporting from Ashwaubenon, Wis. Here come the ...', 'title': 'Harris Will Join Michelle Obama and Barack Obama on Campaign Trail', 'link': 'https://www.nytimes.com/2024/10/18/us/politics/kamala-harris-michelle-obama-barack-obama.html'}]
Result Type: <class 'list'>
```
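
A short usage sketch of the new parameter (assuming `langchain-community` with the `duckduckgo-search` dependency installed):

```python
from langchain_community.tools import DuckDuckGoSearchResults

search = DuckDuckGoSearchResults(output_format="list")
results = search.invoke("Obama")  # list of {"snippet", "title", "link"} dicts
for item in results:
    print(item["title"], "->", item["link"])
```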

---------

Co-authored-by: vbarda <vadym@langchain.dev>
2024-10-23 20:18:11 -04:00
Erick Friis
5e5647b5dd docs: render api ref urls in search (#27594) 2024-10-23 16:18:21 -07:00
Bagatur
948e2e6322 docs: concept nits (#27586) 2024-10-23 14:52:44 -07:00
Eugene Yurtsev
562cf416c2 docs: Update messages.mdx (#27592)
Add missing `.`
2024-10-23 20:18:27 +00:00
Ankur Singh
71e0f4cd62 docs: Fix spelling mistake in concepts (#27589)
`Fore` => `For`

Documentation Correction:

*
[`docs/docs/concepts/async.mdx`](diffhunk://#diff-4959e81c20607c20c7a9c38db4405a687c5d94f24fc8220377701afeee7562b0L40-R40):
Corrected a typo from "Fore" to "For" in the sentence directing users to
the API reference.
2024-10-23 16:10:21 -04:00
Bagatur
968dccee04 core[patch]: convert_to_openai_tool Anthropic support (#27591) 2024-10-23 12:27:06 -07:00
Bagatur
217de4e6a6 langchain[patch]: de-beta init_chat_model (#27558) 2024-10-23 08:35:15 -07:00
Eugene Yurtsev
4466caadba concepts: update llm stub page and re-link (#27567)
Update text llm stub page and re-link content
2024-10-22 23:03:36 -04:00
Eugene Yurtsev
f2dbf01d4a Docs: Re-organize conceptual docs (#27047)
Reorganization of conceptual documentation

---------

Co-authored-by: Lance Martin <122662504+rlancemartin@users.noreply.github.com>
Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-10-22 22:08:20 -04:00
Kwan Kin Chan
6d2a76ac05 langchain_huggingface: Fix multiple GPU usage bug in from_model_id function (#23628)
**Description:**
- Pass the `device_map` into `model_kwargs`.
- Remove the unused `device_map` variable in the `hf_pipeline` function
call.

**Issue:** #13128
When using the from_model_id function to load a Hugging Face model for
text generation across multiple GPUs, the model defaults to loading on
the CPU despite multiple GPUs being available using the expected format
``` python
llm = HuggingFacePipeline.from_model_id(
    model_id="model-id",
    task="text-generation",
    device_map="auto",
)
```
Currently, to enable multiple GPU , we have to pass in variable in this
format instead
``` python
llm = HuggingFacePipeline.from_model_id(
    model_id="model-id",
    task="text-generation",
    device=None,
    model_kwargs={
        "device_map": "auto",
    }
)
```
This issue arises due to improper handling of the device and device_map
parameters.

**Explanation:**
1. In from_model_id, the model is created using model_kwargs and passed
as the model variable of the pipeline function. So at this moment, to
load the model with multiple GPUs, "device_map" needs to be set to
"auto" within model_kwargs. Otherwise, the model defaults to loading on
the CPU.
2. The device_map variable in from_model_id is not utilized correctly.
In the pipeline function's source code in transformers:
- The device_map variable is stored in the model_kwargs dictionary
(lines 867-878 of transformers/src/transformers/pipelines/\__init__.py).
```python
    if device_map is not None:
        ......
        model_kwargs["device_map"] = device_map
```
- The model is constructed with model_kwargs containing the device_map
value ONLY IF it is a string (lines 893-903 of
transformers/src/transformers/pipelines/\__init__.py).
```python
    if isinstance(model, str) or framework is None:
        model_classes = {"tf": targeted_task["tf"], "pt": targeted_task["pt"]}
        framework, model = infer_framework_load_model( ... , **model_kwargs, )
```
- Consequently, since a model object is already passed to the pipeline
function, the device_map variable from from_model_id is never used.

3. The device_map variable in from_model_id not only appears unused but
also causes errors. Without explicitly setting device=None, attempting
to load the model on multiple GPUs may result in the following error:
 ```
Device has 2 GPUs available. Provide device={deviceId} to
`from_model_id` to use available GPUs for execution. deviceId is -1
(default) for CPU and can be a positive integer associated with CUDA
device id.
  Traceback (most recent call last):
    File "foo.py", line 15, in <module>
      llm = HuggingFacePipeline.from_model_id(
File
"foo\site-packages\langchain_huggingface\llms\huggingface_pipeline.py",
line 217, in from_model_id
      pipeline = hf_pipeline(
File "foo\lib\site-packages\transformers\pipelines\__init__.py", line
1108, in pipeline
return pipeline_class(model=model, framework=framework, task=task,
**kwargs)
File "foo\lib\site-packages\transformers\pipelines\text_generation.py",
line 96, in __init__
      super().__init__(*args, **kwargs)
File "foo\lib\site-packages\transformers\pipelines\base.py", line 835,
in __init__
      raise ValueError(
ValueError: The model has been loaded with `accelerate` and therefore
cannot be moved to a specific device. Please discard the `device`
argument when creating your pipeline object.
```
This error occurs because the default values in `from_model_id` for `device` and `device_map` are `-1` and `None`, respectively. Execution passes the check (`device_map is not None and device < 0`) and keeps the device as `-1`, so the pipeline function later raises an error when trying to move a GPU-loaded model back to the CPU.
19eb82e68b/libs/community/langchain_community/llms/huggingface_pipeline.py (L204-L213)

---------

Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: vbarda <vadym@langchain.dev>
2024-10-22 21:41:47 -04:00
Prakul
031d0e4725 docs:update to MongoDB Docs (#27531)
**Description:** Update to MongoDB docs

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-23 00:21:37 +00:00
Fernando de Oliveira
ab205e7389 partners/openai + community: Async Azure AD token provider support for Azure OpenAI (#27488)
This PR introduces a new `azure_ad_async_token_provider` attribute to
the `AzureOpenAI` and `AzureChatOpenAI` classes in `partners/openai` and
`community` packages, given it's currently supported on `openai` package
as
[AsyncAzureADTokenProvider](https://github.com/openai/openai-python/blob/main/src/openai/lib/azure.py#L33)
type.

The reason for creating a new attribute is to avoid breaking changes.
Let's say you have an existing code that uses a `AzureOpenAI` or
`AzureChatOpenAI` instance to perform both sync and async operations.
The `azure_ad_token_provider` will work exactly as it is today, while
`azure_ad_async_token_provider` will override it for async requests.
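
A hedged usage sketch (attribute names per this PR; the token-provider helpers come from the `azure-identity` package, and the endpoint/deployment values are placeholders):

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from azure.identity.aio import (
    DefaultAzureCredential as AsyncDefaultAzureCredential,
    get_bearer_token_provider as get_async_bearer_token_provider,
)
from langchain_openai import AzureChatOpenAI

scope = "https://cognitiveservices.azure.com/.default"

llm = AzureChatOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder
    azure_deployment="my-deployment",                       # placeholder
    api_version="2024-02-01",
    # Sync calls keep using the existing provider...
    azure_ad_token_provider=get_bearer_token_provider(
        DefaultAzureCredential(), scope
    ),
    # ...while async calls use the new attribute introduced here.
    azure_ad_async_token_provider=get_async_bearer_token_provider(
        AsyncDefaultAzureCredential(), scope
    ),
)
```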


2024-10-22 21:43:06 +00:00
Bagatur
34684423bf docs: rm Legacy API ref link (#27559) 2024-10-22 14:12:38 -07:00
Savar Bhasin
0cae37b0a9 docs: fix docker command for RedisChatMessageHistory (#27484)
docs: "fix docker command"

- **Description**: The Redis chat message history component requires the
Redis Stack to create indexes. When using only Redis, the following
error occurs: "Unknown command 'FT.INFO', with args beginning with:
'chat_history'".
- **Twitter handle**: savar_bhasin

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-22 19:42:51 +00:00
orkhank
9a277cbe00 community: Update file_path type in JSONLoader.__init__() signature (#27535)
- **Description:** Change the type of the `file_path` argument from `str
| pathlib.Path` to `str | os.PathLike`, since the latter is more widely
used: https://stackoverflow.com/a/58541858
  
This is a very minor fix. I was just annoyed to see the red underline
displayed by Pylance in VS Code: `reportArgumentType`.

![image](https://github.com/user-attachments/assets/719a7f8e-acca-4dfa-89df-925e1d938c71)
  
  The changes do not affect the behavior of the code.
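
A tiny sketch of what now type-checks (the `jq_schema` value is illustrative):

```python
from pathlib import Path

from langchain_community.document_loaders import JSONLoader

# os.PathLike covers pathlib.Path and anything implementing __fspath__,
# so both spellings satisfy the annotation.
loader_from_path = JSONLoader(file_path=Path("data.json"), jq_schema=".messages[]")
loader_from_str = JSONLoader(file_path="data.json", jq_schema=".messages[]")
```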
2024-10-22 11:18:36 -07:00
Eric Pinzur
f636c83321 community: Cassandra Vector Store: modernize implementation (#27253)
**Description:** 

This PR updates `CassandraGraphVectorStore` to be based off
`CassandraVectorStore`, instead of using a custom CQL implementation.
This allows users using a `CassandraVectorStore` to upgrade to a
`GraphVectorStore` without having to change their database schema or
re-embed documents.

This PR also updates the documentation of the `GraphVectorStore` base
class and contains native async implementations for the standard graph
methods: `traversal_search` and `mmr_traversal_search` in
`CassandraVectorStore`.

**Issue:** No issue number.

**Dependencies:** https://github.com/langchain-ai/langchain/pull/27078
(already-merged)

**Lint and test**: 
- Lint and tests all pass, including existing
`CassandraGraphVectorStore` tests.
- Also added numerous additional tests based of the tests in
`langchain-astradb` which cover many more scenarios than the existing
tests for `Cassandra` and `CassandraGraphVectorStore`

**BREAKING CHANGE**

Note that this is a breaking change for existing users of
`CassandraGraphVectorStore`. They will need to wipe their database table
and restart.

However:
- The interfaces have not changed. Just the underlying storage
mechanism.
- Any one using `langchain_community.vectorstores.Cassandra` can instead
use `langchain_community.graph_vectorstores.CassandraGraphVectorStore`
and they will gain Graph capabilities without having to re-embed their
existing documents. This is the primary goal of this PR.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-22 18:11:11 +00:00
Vadym Barda
0640cbf2f1 huggingface[patch]: hide client field in HuggingFaceEmbeddings (#27522) 2024-10-21 17:37:07 -04:00
Chun Kang Lu
380449a7a9 core: fix Image prompt template hardcoded template format (#27495)
Fixes #27411 

**Description:** Adds `template_format` to the `ImagePromptTemplate`
class and updates passing in the `template_format` parameter from
ChatPromptTemplate instead of the hardcoded "f-string".
Also updated docs and typing related to `template_format` to be more
up-to-date and specific.

**Dependencies:** None

**Add tests and docs**: Added unit tests to validate fix. Needed to
update `test_chat` snapshot due to adding new attribute
`template_format` in `ImagePromptTemplate`.
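
A hedged sketch of the effect (the message payload shape follows LangChain's multimodal prompt conventions; the variable name is illustrative):

```python
from langchain_core.prompts import ChatPromptTemplate

# template_format now propagates to the image prompt part instead of
# being hardcoded to "f-string", so mustache placeholders work here too.
prompt = ChatPromptTemplate.from_messages(
    [("human", [{"type": "image_url", "image_url": {"url": "{{image_url}}"}}])],
    template_format="mustache",
)
print(prompt.invoke({"image_url": "https://example.com/cat.png"}))
```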

---------

Co-authored-by: Vadym Barda <vadym@langchain.dev>
2024-10-21 17:31:40 -04:00
bbaltagi-dtsl
403c0ea801 community: fix DallE hidden open_api_key (#26996)
- **Issue:** #26941

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-21 19:46:56 +00:00
Erick Friis
c6d088bc15 docs: giscus component strict (#27515) 2024-10-21 11:36:51 -07:00
Erick Friis
6ed92f13d0 infra: azure/mongo api docs build (#27512) 2024-10-21 08:27:46 -07:00
Radi
689e8b7e66 docs: Update chatbot.ipynb (#27422)
Typo fix.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-21 15:06:28 +00:00
venkatram-dev
2678cda83b docs:tutorials:sql_qa.ipynb: fix typo (#27405)
Fix typo in `docs/docs/tutorials/sql_qa.ipynb`.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-21 15:01:23 +00:00
Erez Zohar
8f80dd28d9 docs: typo fix athena.ipynb and glue_catalog.ipynb (#27435)
**Description:** This PR fixes typos in 
```
docs/docs/integrations/document_loaders/athena.ipynb
docs/docs/integrations/document_loaders/glue_catalog.ipynb
```
2024-10-21 15:01:13 +00:00
nodfans
cfcf783cb5 community: fix a typo in planner_prompt.py (#27489)
Description: Fix typo in planner_prompt.py.
2024-10-21 14:59:33 +00:00
Seungha Jeon
edfe35c2a8 docs: fix typo on friendli.ipynb (#27412)
This PR fixes typos in `chat/friendli.ipynb` and `llms/friendli.ipynb`
docs.
2024-10-21 14:58:49 +00:00
Connor Park
e62e390ca0 docs: update API Reference Link in /docs/how_to/vectorstore_retriever/ (#27477)
Description: updated docs
[here](https://python.langchain.com/docs/how_to/vectorstore_retriever/#:~:text=VectorStoreRetriever)
for creating VectorStoreRetrievers. The URL was missing a `.base`, and
now works as expected.

This was a fix for Issue #27196
2024-10-19 00:44:58 +00:00
Erick Friis
97a819d578 community: fix lint from new mypy (#27474) 2024-10-18 20:08:03 +00:00
Erick Friis
c397baa85f community: release 0.3.3 (#27472) 2024-10-18 12:52:15 -07:00
Erick Friis
4ceb28009a mongodb: migrate to repo (#27467) 2024-10-18 12:35:12 -07:00
Erick Friis
a562c54f7d azure-dynamic-sessions: migrate to repo (#27468) 2024-10-18 12:30:48 -07:00
Erick Friis
30660786b3 langchain: release 0.3.4 (#27458) 2024-10-18 11:59:54 -07:00
Erick Friis
b468552859 docs: langgraph error code redirects (#27465) 2024-10-18 10:39:32 -07:00
Erick Friis
82242dfbb1 docs: openai audio docs (#27459) 2024-10-18 17:06:55 +00:00
Erick Friis
2cf2cefe39 partners/openai: release 0.2.3 (#27457) 2024-10-18 08:16:01 -07:00
Erick Friis
7d65a32ee0 openai: audio modality, remove sockets from unit tests (#27436) 2024-10-18 08:02:09 -07:00
Mateusz Szewczyk
97dc578d47 docs: Update custom name for IBM (#27437)
Update custom name for IBM in the api_reference docs.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-17 19:41:24 +00:00
Isaac Francisco
6e228c84a8 docs: update mongo table (#27434) 2024-10-17 18:29:29 +00:00
Erick Friis
2a27234a77 docs: fix error reference header (#27431) 2024-10-17 15:51:33 +00:00
Erick Friis
322ca84812 infra: add ibm api build (#27425)
test build:
https://github.com/langchain-ai/langchain/actions/runs/11386155179
2024-10-17 07:47:29 -07:00
Erick Friis
4d11211c89 infra: schedule triggers monorepo only by default (#27428)
fixes https://github.com/langchain-ai/langchain/issues/27426
2024-10-17 14:31:14 +00:00
Erick Friis
f9cc9bdcf3 core: release 0.3.12 (#27410) 2024-10-17 06:32:40 -07:00
Erick Friis
0ebddabf7d docs, core: error messaging [wip] (#27397) 2024-10-17 03:39:36 +00:00
Eugene Yurtsev
202d7f6c4a core[patch]: 0.3.11 release (#27403)
Core bump to 0.3.11
2024-10-16 15:39:37 -04:00
Erick Friis
a38e903360 docs: platforms -> providers (#27285) 2024-10-16 18:27:07 +00:00
ccurme
fdb7f951c8 monorepo: add script for updating notebook cassettes (#27399)
1. Move dependencies for running notebooks into monorepo poetry test
deps;
2. Add script to update cassettes for a single notebook;
3. Add cassettes for some how-to guides.

---

To update cassettes for a single notebook, run
`docs/scripts/update_cassettes.sh`. For example:
```
./docs/scripts/update_cassettes.sh docs/docs/how_to/binding.ipynb
```
Requires:
1. monorepo dev and test dependencies installed;
2. env vars required by notebook are set.

Note: How-to guides are not currently run in [scheduled
job](https://github.com/langchain-ai/langchain/actions/workflows/run_notebooks.yml).
Will add cassettes for more how-to guides in subsequent PRs before
adding them to scheduled job.
2024-10-16 13:46:49 -04:00
Artur Barseghyan
88d71f6986 docs: Cosmetic documentation fix. Update llm_chain.ipynb. (#27394)
ATM [LLM chain
docs](https://python.langchain.com/docs/tutorials/llm_chain/#server)
say:

```
# 3. Create parser
parser = StrOutputParser()

# 4. Create chain
chain = prompt_template | model | parser


# 4. App definition
app = FastAPI(
  title="LangChain Server",
  version="1.0",
  description="A simple API server using LangChain's Runnable interfaces",
)

# 5. Adding chain route
add_routes(
    app,
    chain,
    path="/chain",
)
```

I corrected it to:


```
# 3. Create parser
parser = StrOutputParser()

# 4. Create chain
chain = prompt_template | model | parser

# 5. App definition
app = FastAPI(
  title="LangChain Server",
  version="1.0",
  description="A simple API server using LangChain's Runnable interfaces",
)

# 6. Adding chain route
add_routes(
    app,
    chain,
    path="/chain",
)
```
2024-10-16 17:42:52 +00:00
Bagatur
a4392b070d core[patch]: add convert_to_openai_messages util (#27263)
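A quick sketch of the new utility (name per the commit title; assuming a current `langchain-core`):

```python
from langchain_core.messages import AIMessage, HumanMessage, convert_to_openai_messages

messages = [HumanMessage("hi"), AIMessage("hello!")]
print(convert_to_openai_messages(messages))
# [{'role': 'user', 'content': 'hi'}, {'role': 'assistant', 'content': 'hello!'}]
```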
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-16 17:10:10 +00:00
sByteman
31e7664afd community[minor]: add proxy support to RecursiveUrlLoader (#27364)
**Description**
This PR introduces the proxies parameter to the RecursiveUrlLoader
class, allowing the user to specify proxy servers for requests. This
update enables crawling through proxy servers, providing enhanced
flexibility for network configurations.
The key changes include:
1. Added an optional `proxies` parameter to the constructor (`__init__`).
2. Updated the documentation to explain the `proxies` parameter usage with
an example.
3. Modified the `_get_child_links_recursive` method to pass the `proxies`
parameter to the `requests.get` function.



**Sample Usage**

```python
from bs4 import BeautifulSoup as Soup
from langchain_community.document_loaders.recursive_url_loader import RecursiveUrlLoader

proxies = {
    "http": "http://localhost:1080",
    "https": "http://localhost:1080",
}
url = "https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel"
loader = RecursiveUrlLoader(
    url=url, max_depth=1, extractor=lambda x: Soup(x, "html.parser").text,proxies=proxies
)
docs = loader.load()
```

---------

Co-authored-by: root <root@thb>
2024-10-16 16:29:59 +00:00
Leonid Ganeline
3165415369 docs: integrations updates 21 (#27380)
Added missing provider pages. Added descriptions and links. Fixed
inconsistencies in text formatting.

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-16 02:41:06 +00:00
Mateusz Szewczyk
591a3db4fb docs: Update IBM ChatWatsonx documentation (#27358)
We would like to update the IBM ChatWatsonx documentation in the
LangChain documentation.

Changes:
- Added support for `JSON mode`
- Added support for `Image Input`
- Added support for `Logprobs`

Chat Standard tests ->
https://github.com/langchain-ai/langchain-ibm/blob/main/libs/ibm/tests/integration_tests/test_chat_models_standard.py

Integration_tests job  ->
https://github.com/langchain-ai/langchain-ibm/actions/runs/11327509188

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-16 02:27:56 +00:00
Yuki Watanabe
b8bfebd382 community: Add deprecation notice for Databricks integration in langchain-community (#27355)
We have released the
[langchain-databricks](https://github.com/langchain-ai/langchain-databricks)
package for Databricks integration. This PR deprecates the legacy
classes within `langchain-community`.

---------

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-16 02:20:40 +00:00
xsai9101
15c1ddaf99 community: Add support for clob datatype in oracle database (#27330)
**Description**:
This PR adds support for the CLOB/BLOB data types in the Oracle document
loader. CLOB/BLOB values can only be read by the oracledb package while
the connection is open, so the code was reworked to process the data
before the connection closes.

**Dependencies**:
oracledb package, same as before: pip install oracledb

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-16 02:19:20 +00:00
Leonid Ganeline
8e66822100 docs: integrations google update (#27218)
I've made several titles more compact, which makes the menu more compact as well.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-15 23:08:52 +00:00
Enes Bol
3f74dfc3d8 community[patch]: Fix vLLM integration to filter SamplingParams (#27367)
**Description:**
- This pull request addresses a bug in Langchain's VLLM integration,
where the use_beam_search parameter was erroneously passed to
SamplingParams. The SamplingParams class in vLLM does not support the
use_beam_search argument, which caused a TypeError.

- This PR introduces logic to filter out unsupported parameters,
ensuring that only valid parameters are passed to SamplingParams. As a
result, the integration now functions as expected without errors.

- The bug was reproduced by running the code sample from Langchain’s
documentation, which triggered the error due to the invalid parameter.
This fix resolves that error by implementing proper parameter filtering.

**VLLM Sampling Params Class:**
https://github.com/vllm-project/vllm/blob/main/vllm/sampling_params.py
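
A minimal sketch of the filtering idea (hypothetical helper, not the PR's exact code; `inspect.signature` may not work on every class, so the PR's actual mechanism can differ):

```python
import inspect

def filter_kwargs_for(target, kwargs: dict) -> dict:
    """Drop keyword arguments the target callable does not accept."""
    allowed = set(inspect.signature(target).parameters)
    return {k: v for k, v in kwargs.items() if k in allowed}

# e.g. filter_kwargs_for(SamplingParams, {"temperature": 0.8, "use_beam_search": True})
# would drop "use_beam_search" if SamplingParams does not accept it.
```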

**Issue:**
I could not find an existing issue for this. Fixes the "TypeError:
Unexpected keyword argument 'use_beam_search'" error when using vLLM
from LangChain.

**Dependencies:**
None.

**Tests and Documentation**:
Tests:
No new functionality was added, but I tested the changes by running
multiple prompts through the VLLM integration with various parameter
configurations. All tests passed successfully without breaking
compatibility.

Docs
No documentation changes were necessary as this is a bug fix.

**Reproducing the Error:**

https://python.langchain.com/docs/integrations/llms/vllm/

The code sample from the original documentation can be used to reproduce
the error I got.

```python
from langchain_community.llms import VLLM

llm = VLLM(
    model="mosaicml/mpt-7b",
    trust_remote_code=True,  # mandatory for hf models
    max_new_tokens=128,
    top_k=10,
    top_p=0.95,
    temperature=0.8,
)
print(llm.invoke("What is the capital of France ?"))
```

![image](https://github.com/user-attachments/assets/3782d6ac-1f7b-4acc-bf2c-186216149de5)


This PR resolves the issue by ensuring that only valid parameters are
passed to SamplingParams.
2024-10-15 21:57:50 +00:00
Erick Friis
edf6d0a0fb partners/couchbase: release 0.2.0 (attempt 2) (#27375) 2024-10-15 14:51:05 -07:00
Erick Friis
d2cd43601b infra: add databricks api build (#27374) 2024-10-15 20:11:23 +00:00
Jorge Piedrahita Ortiz
12fea5b868 community: sambastudio chat model integration minor fix (#27238)
**Description:** SambaStudio chat model integration minor fixes:
- fix default params
- fix usage metadata when streaming
2024-10-15 13:24:36 -04:00
Leonid Ganeline
fead4749b9 docs: integrations updates 20 (#27210)
Added missing provider pages. Added descriptions and links.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-15 16:38:12 +00:00
ZhangShenao
f3925d71b9 community: Fix word spelling in Text2vecEmbeddings (#27183)
Fix word spelling in `Text2vecEmbeddings`
2024-10-15 09:28:48 -07:00
Erick Friis
92ae61bcc8 multiple: rely on asyncio_mode auto in tests (#27200) 2024-10-15 16:26:38 +00:00
William FH
0a3e089827 [Anthropic] Shallow Copy (#27105)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-15 15:50:48 +00:00
Matthew Peveler
c6533616b6 docs: fix community pgvector deprecation warning formatting (#27094)
**Description**: PR fixes some formatting errors in deprecation message
in the `langchain_community.vectorstores.pgvector` module, where it was
missing spaces between a few words, and one word was misspelled.
**Issue**: n/a
**Dependencies**: n/a

Signed-off-by: mpeveler@timescale.com
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-15 15:45:53 +00:00
Erick Friis
3fa5ce3e5f community: clear mypy syntax warning in openapi (#27370)
not completely clear the regex is functional
2024-10-15 15:43:53 +00:00
Ahmet Yasin Aytar
443b37403d community: refactor Arxiv search logic (#27084)
PR message:

Description:
This PR refactors the Arxiv API wrapper by extracting the Arxiv search
logic into a helper function (_fetch_results) to reduce code duplication
and improve maintainability. The helper function is used in methods like
get_summaries_as_docs, run, and lazy_load, streamlining the code and
making it easier to maintain in the future.

Issue:
This is a minor refactor, so no specific issue is being fixed.

Dependencies:
No new dependencies are introduced with this change.

Add tests and docs:
No new integrations were added, so no additional tests or docs are
necessary for this PR.
Lint and test:
I have run make format, make lint, and make test to ensure all checks
pass successfully.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-15 08:43:03 -07:00
Qiu Qin
57fbc6bdf1 community: Update OCI data science integration (#27083)
This PR updates the integration with OCI data science model deployment
service.

- Update LLM to support streaming and async calls.
- Added chat model.
- Updated tests and docs.
- Updated `libs/community/scripts/check_pydantic.sh` since the use of
`@pre_init` is removed from existing integration.
- Updated `libs/community/extended_testing_deps.txt` as this integration
requires `langchain_openai`.

---------

Co-authored-by: MING KANG <ming.kang@oracle.com>
Co-authored-by: Dmitrii Cherkasov <dmitrii.cherkasov@oracle.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-15 08:32:54 -07:00
Rafael Miller
fc14f675f1 Community: Updated Firecrawl Document Loader to v1 (#26548)
This PR updates the Firecrawl Document Loader to use the recently
released V1 API of Firecrawl.

**Key Updates:**

**Firecrawl V1 Integration:** Updated the document loader to leverage
the new Firecrawl V1 API for improved performance, reliability, and
developer experience.

**Map Functionality Added:** Introduced the map mode for more flexible
document loading options.

These updates enhance the integration and provide access to the latest
features of Firecrawl.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-10-15 13:13:28 +00:00
Max Tran
8fea07f92e community: fixed KeyError: 'client' (#27345)
**Twitter handle:** @MaxHTran

No new tests or docs needed due to the small size of the change.

---------

Co-authored-by: Max Tran <maxtra@amazon.com>
2024-10-14 20:51:13 +00:00
Martin Triska
8dc4bec947 [community] [Bugfix] base_o365 document loader metadata needs to be JSON serializable (#26322)
In order for indexer to work, all metadata in the documents need to be
JSON serializable. Timestamps are not.
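
A standard-library illustration of why this matters:

```python
import json
from datetime import datetime, timezone

metadata = {"created": datetime.now(timezone.utc)}
try:
    json.dumps(metadata)
except TypeError as err:
    print(err)  # Object of type datetime is not JSON serializable

metadata["created"] = metadata["created"].isoformat()  # store as a string
print(json.dumps(metadata))
```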

See here:

https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/indexing/api.py#L83-L89

@eyurtsev could you please review? It's a tiny PR :-)
2024-10-14 12:48:31 -04:00
YangZhaoo
de62d02102 docs: Maybe there is a missing closing bracket. (#27317)

Co-authored-by: yangzhao <yzahha980122@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-10-14 12:46:56 -04:00
Trayan Azarov
59bbda9ba3 chroma: Deprecating versions 0.5.7 thru 0.5.12 (#27305)
**Description:** Deprecated version of Chroma >=0.5.5 <0.5.12 due to a
serious correctness issue that caused some embeddings for deployments
with multiple collections to be lost (read more on the issue in Chroma
repo)
**Issue:** chroma-core/chroma#2922 (fixed by chroma-core/chroma#2923
and released in
[0.5.13](https://github.com/chroma-core/chroma/releases/tag/0.5.13))
**Dependencies:** N/A
**Twitter handle:** `@t_azarov`
2024-10-14 11:56:05 -04:00
Erick Friis
2197958366 docs: add discussions with giscus (#27172) 2024-10-11 15:14:45 -07:00
Marcelo Nunes Alves
5647276998 community: Problem with embeddings in new versions of clickhouse. (#26041)
Starting with ClickHouse 24.8, a different configuration style was
introduced for vectorized data ingestion, and when that configuration is
present, an error occurs while generating the table. As can be seen
below:

![Screenshot from 2024-09-04
11-48-00](https://github.com/user-attachments/assets/70840a93-1001-490c-921a-26924c51d9eb)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-11 18:54:50 +00:00
Sir Qasim
2a1029c53c Update chatbot.ipynb (#27243)
Async invocation:
remove the `:` from the end of line 441, since there is no structured
block after it.

2024-10-10 18:03:10 +00:00
Eugene Yurtsev
5b9b8fe80f core[patch]: Ignore ASYNC110 to upgrade to newest ruff version (#27229)
Ignoring ASYNC110 with explanation
2024-10-09 11:25:58 -04:00
Vittorio Rigamonti
7da2efd9d3 community[minor]: VectorStore Infinispan. Adding TLS and authentication (#23522)
**Description**:
this PR enable VectorStore TLS and authentication (digest, basic) with
HTTP/2 for Infinispan server.
Based on httpx.

Added docker-compose facilities for testing
Added documentation

**Dependencies:**
requires `pip install httpx[http2]` if HTTP2 is needed

**Twitter handle:**
https://twitter.com/infinispan
2024-10-09 10:51:39 -04:00
Luke Jang
ff925d2ddc docs: fixed broken API reference link for StructuredTool.from_function (#27181)
Fix broken API reference link for StructuredTool.from_function
2024-10-09 10:05:22 -04:00
Diao Zihao
4553573acb core[patch],langchain[patch],community[patch]: Bump version dependency of tenacity to >=8.1.0,!=8.4.0,<10 (#27201)
This should fix the compatibility issue with graphrag, as discussed in

- https://github.com/langchain-ai/langchain/discussions/25595

Here are the release notes for tenacity 9
(https://github.com/jd/tenacity/releases/tag/9.0.0)

---------

Signed-off-by: Zihao Diao <hi@ericdiao.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-10-09 14:00:45 +00:00
Stefano Lottini
d05fdd97dd community: Cassandra Vector Store: extend metadata-related methods (#27078)
**Description:** this PR adds a set of methods to deal with metadata
associated to the vector store entries. These, while essential to the
Graph-related extension of the `Cassandra` vector store, are also useful
in themselves. These are (all come in their sync+async versions):

- `[a]delete_by_metadata_filter`
- `[a]replace_metadata`
- `[a]get_by_document_id`
- `[a]metadata_search`

Additionally, a `[a]similarity_search_with_embedding_id_by_vector`
method is introduced to better serve the store's internal working (esp.
related to reranking logic).

**Issue:** no issue number, but now all Document's returned bear their
`.id` consistently (as a consequence of a slight refactoring in how the
raw entries read from DB are made back into `Document` instances).

**Dependencies:** (no new deps: packaging comes through langchain-core
already; `cassio` is now required to be version 0.1.10+)


**Add tests and docs**
Added integration tests for the relevant newly-introduced methods.
(Docs will be updated in a separate PR).

**Lint and test** Lint and (updated) test all pass.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-09 06:41:34 +00:00
Erick Friis
84c05b031d community: release 0.3.2 (#27214) 2024-10-08 23:33:55 -07:00
Serena Ruan
a7c1ce2b3f [community] Add timeout control and retry for UC tool execution (#26645)
Add timeout at client side for UCFunctionToolkit and add retry logic.
Users could specify environment variable
`UC_TOOL_CLIENT_EXECUTION_TIMEOUT` to increase the timeout value for
retrying to get the execution response if the status is pending. Default
timeout value is 120s.
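A sketch of the described polling behavior (the client object and call names are hypothetical, not the toolkit's actual API):

```
import os
import time

timeout = float(os.environ.get("UC_TOOL_CLIENT_EXECUTION_TIMEOUT", "120"))
deadline = time.monotonic() + timeout
while time.monotonic() < deadline:
    response = client.get_execution_response(handle)  # hypothetical call
    if response.status != "pending":
        break
    time.sleep(1)  # retry while the execution is still pending
```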



Tested in Databricks:
<img width="1200" alt="image"
src="https://github.com/user-attachments/assets/54ab5dfc-5e57-4941-b7d9-bfe3f8ad3f62">




---------

Signed-off-by: serena-ruan <serena.rxy@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-09 06:31:48 +00:00
Tomaz Bratanic
481bd25d29 community: Fix database connections for neo4j (#27190)
Fixes https://github.com/langchain-ai/langchain/issues/27185

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-08 23:47:55 +00:00
Erick Friis
cedf4d9462 langchain: release 0.3.3 (#27213) 2024-10-08 16:39:42 -07:00
Jorge Piedrahita Ortiz
6c33124c72 docs: minor fix sambastudio chat model docs (#27212)
- **Description:**  minor fix sambastudio chat model docs

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-08 23:34:29 +00:00
Erick Friis
7264fb254c core: release 0.3.10 (#27209) 2024-10-08 16:21:42 -07:00
Bagatur
ce33c4fa40 openai[patch]: default temp=1 for o1 (#27206) 2024-10-08 15:45:21 -07:00
Mateusz Szewczyk
b298d0337e docs: Update IBM ChatWatsonx documentation (#27189) 2024-10-08 21:10:18 +00:00
RIdham Golakiya
73ad7f2e7a langchain_chroma[patch]: updated example for get documents with where clause (#26767)
Example updated for vectorstore ChromaDB.

If we want to apply multiple filters then ChromaDB supports filters like
this:
Reference: [ChromaDB
filters](https://cookbook.chromadb.dev/core/filters/)

Thank you.
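For example, a multi-condition `where` clause looks like this (a sketch against a `langchain_chroma` store; the metadata fields are made up):

```
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings  # assumes OPENAI_API_KEY is set

vector_store = Chroma(
    collection_name="books",
    embedding_function=OpenAIEmbeddings(),
)
# multiple conditions must be wrapped in an explicit $and
results = vector_store.get(
    where={"$and": [{"author": "tolkien"}, {"year": {"$gte": 1950}}]}
)
```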
2024-10-08 20:21:58 +00:00
Bagatur
e3e9ee8398 core[patch]: utils for adding/subtracting usage metadata (#27203) 2024-10-08 13:15:33 -07:00
ccurme
e3920f2320 community[patch]: fix structured_output in llamacpp integration (#27202)
Resolves https://github.com/langchain-ai/langchain/issues/25318.
2024-10-08 15:16:59 -04:00
Leonid Ganeline
c3cb56a9e8 docs: integrations updates 18 (#27054)
Added missing provider pages. Added descriptions and links. Fixed
inconsistency in text formatting.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-08 19:05:07 +00:00
Leonid Ganeline
b716d808ba docs: integrations/providers/microsoft update (#27055)
Added reference to the AzureCognitiveServicesToolkit.
Fixed titles.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-08 19:04:40 +00:00
Mathias Colpaert
feb4be82aa docs: in chatbot tutorial, make docs consistent with code sample (#27042)
**Docs Chatbot Tutorial**

The docs state that you can omit the language parameter, but the code
sample that demonstrates this still contains it.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-08 18:38:15 +00:00
Ikko Eltociear Ashimine
c10e1f70fe docs: update passio_nutrition_ai.ipynb (#27041)
initalize -> initialize



Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-08 18:35:48 +00:00
Erick Friis
b84e00283f standard-tests: test that only one chunk sets input_tokens (#27177) 2024-10-08 11:35:32 -07:00
Ajayeswar Reddy
9b7bdf1a26 Fixed typo in libs/community/langchain_community/storage/sql.py (#27029)
- [ ] **PR title**: docs: fix typo in SQLStore import path

- [ ] **PR message**: 
- **Description:** This PR corrects a typo in the docstrings for the
class `SQLStore(BaseStore[str, bytes])`. The import path in the docstring
currently reads `from langchain_rag.storage import SQLStore`, which should
be changed to `from langchain_community.storage import SQLStore`. This typo is
also reflected in the official documentation.
    - **Issue:** N/A
    - **Dependencies:** None
    - **Twitter handle:** N/A

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-08 17:51:26 +00:00
Nihal Chaudhary
0b36ed09cf DOC:Changed /docs/integrations/tools/jira/ (#27023)
- [x] - **Description:** replaced `%pip install -qU langchain-community`
with `%pip install -qU langchain-community langchain_openai` in doc
\langchain\docs\docs\integrations\tools\jira.ipynb
- [x] - **Issue:** the issue #27013 
- [x] Add docs

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-08 17:48:08 +00:00
Jacob Lee
0ec74fbc14 docs: 👥 Update LangChain people data (#27022)
👥 Update LangChain people data

---------

Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-08 17:09:07 +00:00
Leonid Ganeline
ea9a59bcf5 docs: integrations updates 17 (#27015)
Added missing provider pages. Added missing descriptions and links.
Fixed the Ipex-LLM titles, so the ToC is now sorted properly for these
titles.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-08 17:03:18 +00:00
Vadym Barda
8d27325dbc core[patch]: support ValidationError from pydantic v1 in tools (#27194) 2024-10-08 10:19:04 -04:00
Christophe Bornet
16f5fdb38b core: Add various ruff rules (#26836)
Adds
- ASYNC
- COM
- DJ
- EXE
- FLY
- FURB
- ICN
- INT
- LOG
- NPY
- PD
- Q
- RSE
- SLOT
- T10
- TID
- YTT

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-07 22:30:27 +00:00
Erick Friis
5c826faece core: update make format to fix all autofixable things (#27174) 2024-10-07 15:20:47 -07:00
Christophe Bornet
d31ec8810a core: Add ruff rules for error messages (EM) (#26965)
All auto-fixes

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-07 22:12:28 +00:00
Oleksii Pokotylo
37ca468d03 community: AzureSearch: fix reranking for empty lists (#27104)
**Description:** 
  Fix reranking for empty lists 

**Issue:** 
```
ValueError: not enough values to unpack (expected 3, got 0)
    documents, scores, vectors = map(list, zip(*docs))
  File langchain_community/vectorstores/azuresearch.py", line 1680, in _reorder_results_with_maximal_marginal_relevance
```
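A sketch of the guard (not the exact diff; the signature is abbreviated):

```
def _reorder_results_with_maximal_marginal_relevance(docs, **kwargs):
    # an empty result set must not reach the unpacking line below
    if not docs:
        return []
    documents, scores, vectors = map(list, zip(*docs))
    ...
```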

Co-authored-by: Oleksii Pokotylo <oleksii.pokotylo@pwc.com>
2024-10-07 15:27:09 -04:00
Bhadresh Savani
8454a742d7 Update README.md for Tutorial to Usecase url (#27099)
Fixed the tutorial URL: it previously pointed to the use-case page,
which does not have any detail; it now redirects to the correct page.
2024-10-07 15:24:33 -04:00
Christophe Bornet
c4ebccfec2 core[minor]: Improve support for id in VectorStore (#26660)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-10-07 15:01:08 -04:00
Bharat Ramanathan
931ce8d026 core[patch]: Update AsyncCallbackManager to honor run_inline attribute and prevent context loss (#26885)
## Description

This PR fixes the context loss issue in `AsyncCallbackManager`,
specifically in `on_llm_start` and `on_chat_model_start` methods. It
properly honors the `run_inline` attribute of callback handlers,
preventing race conditions and ordering issues.

Key changes:
1. Separate handlers into inline and non-inline groups.
2. Execute inline handlers sequentially for each prompt.
3. Execute non-inline handlers concurrently across all prompts.
4. Preserve context for stateful handlers.
5. Maintain performance benefits for non-inline handlers.

**These changes are implemented in `AsyncCallbackManager` rather than
`ahandle_event` because the issue occurs at the prompt and message_list
levels, not within individual events.**

## Testing

- Test case implemented in #26857 now passes, verifying execution order
for inline handlers.

## Related Issues

- Fixes issue discussed in #23909 

## Dependencies

No new dependencies are required.

---

@eyurtsev: This PR implements the discussed changes to respect
`run_inline` in `AsyncCallbackManager`. Please review and advise on any
needed changes.

Twitter handle: @parambharat

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-10-07 14:59:29 -04:00
Aleksandar Petrov
c61b9daef5 docs: Grammar fix in concepts.mdx (#27149)
Missing "is" in a sentence about the Tool usage.
2024-10-07 18:55:25 +00:00
Eugene Yurtsev
8f8392137a Update MIGRATE.md (#27169)
Upgrade the content of MIGRATE.md so it's in sync
2024-10-07 14:53:40 -04:00
João Carlos Ferra de Almeida
780ce00dea core[minor]: add **kwargs to index and aindex functions for custom vector_field support (#26998)
Added `**kwargs` parameters to the `index` and `aindex` functions in
`libs/core/langchain_core/indexing/api.py`. This allows users to pass
additional arguments to the `add_documents` and `aadd_documents`
methods, enabling the specification of a custom `vector_field`. For
example, users can now use `vector_field="embedding"` when indexing
documents in `OpenSearchVectorStore`
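Hypothetical usage (the docs, record manager, and store are assumed to exist; `vector_field` is the store-specific kwarg being forwarded):

```
from langchain_core.indexing import index

result = index(
    docs,
    record_manager,
    vector_store,
    cleanup="incremental",
    source_id_key="source",
    vector_field="embedding",  # forwarded via **kwargs to add_documents
)
```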

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-10-07 14:52:50 -04:00
Jorge Piedrahita Ortiz
14de81b140 community: sambastudio chat model (#27056)
**Description:** SambaStudio chat model integration added; previously only
the LLM integration existed. Includes docs and tests.

---------

Co-authored-by: luisfucros <luisfucros@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-10-07 14:31:39 -04:00
Aditya Anand
f70650f67d core[patch]: correct typo doc-string for astream_events method (#27108)
This commit addresses a typographical error in the documentation for the
async astream_events method. The word 'evens' was incorrectly used in
the introductory sentence for the reference table, which could lead to
confusion for users.

### Changes Made
- Corrected 'Below is a table that illustrates some evens that might be
emitted by various chains.' to 'Below is a table that illustrates some
events that might be emitted by various chains.'

This enhancement improves the clarity of the documentation and ensures
accurate terminology is used throughout the reference material.

Issue Reference: #27107
2024-10-07 14:12:42 -04:00
Bagatur
38099800cc docs: fix anthropic max_tokens docstring (#27166) 2024-10-07 16:51:42 +00:00
ogawa
07dd8dd3d7 community[patch]: update gpt-4o cost (#27038)
updated OpenAI cost definition according to the following:
https://openai.com/api/pricing/
2024-10-07 09:06:30 -04:00
Averi Kitsch
7a07196df6 docs: update Google Spanner Vector Store documentation (#27124)
**Description:** Update Spanner VS integration doc

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-04 23:59:10 +00:00
Bagatur
06ce5d1d5c anthropic[patch]: Release 0.2.3 (#27126) 2024-10-04 22:38:03 +00:00
Bagatur
0b8416bd2e anthropic[patch]: fix input_tokens when cached (#27125) 2024-10-04 22:35:51 +00:00
Erick Friis
64a16f2cf0 infra: add nvidia and astradb back to api build (#27115)
test build
https://github.com/langchain-ai/langchain/actions/runs/11185115845
2024-10-04 14:41:41 -07:00
Bagatur
bd5b335cb4 standard-tests[patch]: fix oai usage metadata test (#27122) 2024-10-04 20:00:48 +00:00
Bagatur
827bdf4f51 fireworks[patch]: Release 0.2.1 (#27120) 2024-10-04 18:59:15 +00:00
Bagatur
98942edcc9 openai[patch]: Release 0.2.2 (#27119) 2024-10-04 11:54:01 -07:00
Bagatur
414fe16071 anthropic[patch]: Release 0.2.2 (#27118) 2024-10-04 11:53:53 -07:00
Bagatur
11df1b2b8d core[patch]: Release 0.3.9 (#27117) 2024-10-04 18:35:33 +00:00
Scott Hurrey
558fb4d66d box: Add citation support to langchain_box.retrievers.BoxRetriever when used with Box AI (#27012)

**Description:** Box AI can return responses, but it can also be
configured to return citations. This change allows the developer to
decide if they want the answer, the citations, or both. Regardless of
the combination, this is returned as a single List[Document] object.

**Dependencies:** Updated to the latest Box Python SDK, v1.5.1
**Twitter handle:** BoxPlatform



Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-04 18:32:34 +00:00
Bagatur
1e768a9ec7 anthropic[patch]: correctly handle tool msg with empty list (#27109) 2024-10-04 11:30:50 -07:00
Bagatur
4935a14314 core,integrations[minor]: Dont error on fields in model_kwargs (#27110)
Given the current erroring behavior, every time we've moved a kwarg out of
model_kwargs and made it its own field, that was a breaking change.
Updating this behavior to support the old instantiations /
serializations.

This assumes build_extra_kwargs is not itself being used
externally and so does not need to be kept backwards compatible.
2024-10-04 11:30:27 -07:00
Bagatur
0495b7f441 anthropic[patch]: add usage_metadata details (#27087)
fixes https://github.com/langchain-ai/langchain/pull/27087
2024-10-04 08:46:49 -07:00
Erick Friis
e8e5d67a8d openai: fix None token detail (#27091)
happens in Azure
2024-10-04 01:25:38 +00:00
Vadym Barda
2715bed70e docs[patch]: update links w/ new langgraph API ref (#26961)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-03 23:52:01 +00:00
Rashmi Pawar
47142eb6ee docs: Integrations NVIDIA llm documentation (#26934)
**Description:**

Add Notebook for NVIDIA prompt completion llm class.

cc: @sumitkbh @mattf @dglogo

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-03 23:32:45 +00:00
Erick Friis
ab4dab9a0c core: fix batch race condition in FakeListChatModel (#26924)
fixed #26273
2024-10-03 23:14:31 +00:00
Bagatur
87fc5ce688 core[patch]: exclude model cache from ser (#27086) 2024-10-03 22:00:31 +00:00
Bagatur
c09da53978 openai[patch]: add usage metadata details (#27080) 2024-10-03 14:01:03 -07:00
Bagatur
546dc44da5 core[patch]: add UsageMetadata details (#27072) 2024-10-03 20:36:17 +00:00
Sean
cc1b8b3d30 docs: Documentation update for Document Parse (#26844)
Renamed `Layout Analysis` to `Document Parser` in the doc as we have
recently renamed it!

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-03 20:36:04 +00:00
Erick Friis
7f730ce8b2 docs: remove spaces in percent pip (#27082) 2024-10-03 20:34:24 +00:00
Tibor Reiss
47a9199fa6 community[patch]: Fix missing protected_namespaces (#27076)
Fixes #26861

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-10-03 20:12:11 +00:00
Bagatur
2a54448a0a langchain[patch]: Release 0.3.2 (#27073) 2024-10-03 18:13:23 +00:00
Bharat Ramanathan
103e573f9b community[patch]: chore warn deprecate the wandb callback handler (#27062)
- **Description:** This PR deprecates the wandb callback handler in
favor of the new
[WeaveTracer](https://weave-docs.wandb.ai/guides/integrations/langchain#using-weavetracer)
in W&B
- **Dependencies:** No dependencies, just a deprecation warning.
- **Twitter handle:** @parambharat


@baskaryan
2024-10-03 11:59:20 -04:00
Vadym Barda
907c758d67 docs[patch]: add long-term memory agent tutorial (#27057) 2024-10-02 23:02:44 -04:00
Eugene Yurtsev
635c55c039 core[patch]: Release 0.3.8 (#27046)
0.3.8 release for core
2024-10-02 16:58:38 +00:00
Eugene Yurtsev
74bf620e97 core[patch]: Support injected tool args that are arbitrary types (#27045)
This adds support for inject tool args that are arbitrary types when
used with pydantic 2.

We'll need to add similar logic on the v1 path, and potentially mirror
the config from the original model when we're doing the subset.
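A minimal sketch of what this enables, using `InjectedToolArg` from `langchain_core.tools` (the `Session` type and tool are illustrative):

```
from typing import Annotated
from langchain_core.tools import InjectedToolArg, tool

class Session:
    """An arbitrary, non-pydantic type supplied by the caller at runtime."""

@tool
def lookup(query: str, session: Annotated[Session, InjectedToolArg]) -> str:
    """Look up a query using an injected session."""
    return f"{query} via {type(session).__name__}"

# the injected arg is hidden from the model-facing schema
# but accepted at call time
print(lookup.invoke({"query": "tides", "session": Session()}))
```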
2024-10-02 12:50:58 -04:00
Erick Friis
e806e9de38 infra: fix api docs build checkout 2 (#27033) 2024-10-01 14:49:35 -07:00
Bagatur
099235da01 Revert "huggingface[patch]: make HuggingFaceEndpoint serializable (#2… (#27032)
…7027)"

This reverts commit b5e28d3a6d.
2024-10-01 21:26:38 +00:00
Bagatur
5f2e93ffea huggingface[patch]: xfail test (#27031) 2024-10-01 21:14:07 +00:00
Bagatur
b5e28d3a6d huggingface[patch]: make HuggingFaceEndpoint serializable (#27027) 2024-10-01 13:16:10 -07:00
ccurme
9d10151123 core[patch]: fix init of RunnableAssign (#26903)
Example in API ref currently raises ValidationError.

Resolves https://github.com/langchain-ai/langchain/issues/26862
2024-10-01 14:21:54 -04:00
Erick Friis
f7583194de docs: build new api docs (#26951) 2024-10-01 09:18:54 -07:00
Erick Friis
95a87291fd community: deprecate community ollama integrations (#26733) 2024-10-01 09:18:07 -07:00
ZhangShenao
e317d457cf Bug-Fix[Community] Fix FastEmbedEmbeddings (#26764)
#26759 

- Fix https://github.com/langchain-ai/langchain/issues/26759 
- Change the `model` param from private to public, as it may otherwise not
be initialized.
- Add test case
2024-09-30 21:23:08 -04:00
Erick Friis
a8e1577f85 milvus: mv to external repo (#26920) 2024-10-01 00:38:30 +00:00
Erick Friis
35f6393144 unstructured: mv to external repo (#26923) 2024-09-30 17:38:21 -07:00
Erick Friis
7ecd720120 multiple: update docs urls to latest 2 (#26837) 2024-09-30 17:37:07 -07:00
Erika Cardenas
4a32cc3c66 Update FeatureTables.js to add Weaviate (#26824)
Add Weaviate to the vector store list.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-30 23:05:37 +00:00
William FH
6a861b0ad9 [Doc] Name variable langgraph_agent_executor (#26799) 2024-09-30 15:52:23 -07:00
Ayodele Aransiola
5346c7b27e doc: grammar fix on index.mdx (#26771)
A few grammar adjustments on the page.
**Twitter handle:** @leomofthings

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-30 21:52:39 +00:00
Tomaz Bratanic
446144e7c6 Update neo4j vector procedures (#26775) 2024-09-30 14:45:09 -07:00
Arun Prakash
870bd42b0d docs: GremlinGraph Remove = in the URL (#26705)
- **Description:** the URL is appended with `=`, which makes it invalid
    - **Issue:** removing the `=` symbol makes the URL valid
    - **Twitter handle:** @arunprakash_com

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-30 21:36:30 +00:00
federico-pisanu
2538963945 core[patch]: improve index/aindex api when batch_size<n_docs (#25754)
- **Description:** prevent the index function from re-indexing the entire
source document even if nothing has changed.
- **Issue:** #22135

I worked on a solution to this issue that is a compromise between being
cheap and being fast.
In the previous code, when batch_size is greater than the number of docs
from a certain source almost the entire source is deleted (all documents
from that source except for the documents in the first batch)
My solution deletes documents from vector store and record manager only
if at least one document has changed for that source.

Hope this can help!

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-30 20:57:41 +00:00
Eugene Yurtsev
7fde2791dc core[patch]: Add kwargs to Runnable (#27008)
Fixes #26685

---------

Co-authored-by: Tibor Reiss <tibor.reiss@gmail.com>
2024-09-30 16:45:29 -04:00
Christophe Bornet
2a6abd3f0a community[patch]: Add docstring for Links (#25969)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-30 20:33:50 +00:00
Ronan Amicel
19ed3165fb docs: Fix typo in list of PDF loaders (#26774)
Description: Fix typo in list of PDF loaders.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-30 20:04:18 +00:00
Mohammad Mohtashim
e12f570ead Merge pull request #26794
* [chore]: Agent Observation should be cast to string to avoid errors

* Merge branch 'master' into fix_observation_type_streaming

* [chore]: Using json.dumps

* [chore]: Exact same logic as when casting agent observation to string
2024-09-30 15:54:51 -04:00
Bagatur
34bd718fe1 core[patch]: Release 0.3.7 (#27004) 2024-09-30 18:52:42 +00:00
Bagatur
248be02259 core[patch]: fix structured prompt template format (#27003)
template_format is an init argument on ChatPromptTemplate but not an
attribute on the object, so it was getting shoved into
StructuredPrompt.structured_output_kwargs
2024-09-30 11:47:46 -07:00
Bagatur
0078493a80 fireworks[patch]: allow tool_choice with multiple tools (#26999)
https://docs.fireworks.ai/api-reference/post-chatcompletions
2024-09-30 11:28:43 -07:00
Bagatur
c7120d87dd groq[patch]: support tool_choice=any/required (#27000)
https://console.groq.com/docs/api-reference#chat-create
2024-09-30 11:28:35 -07:00
Christophe Bornet
db8845a62a core: Add ruff rules for pycodestyle Warning (W) (#26964)
All auto-fixes.
2024-09-30 09:31:43 -04:00
Bagatur
9404e7af9d openai[patch]: exclude http client (#26891)
httpx clients aren't serializable
2024-09-29 11:16:27 -07:00
Andrew Benton
ce2669cb56 docs: update code interpreter tool table to reflect riza file upload support (#26960)
**Description:** Update the code interpreter tools feature table to
reflect Riza file upload support (blog announcement here:
https://riza.io/blog/adding-support-for-input-files-and-http-credentials)
**Issue:** N/A
**Dependencies:** N/A
2024-09-29 12:04:07 -04:00
Erick Friis
b2c315997c infra: custom commit to external repo (#26962) 2024-09-27 16:39:28 -07:00
Ben Chambers
29bf89db25 community: Add conversions from GVS to networkx (#26906)
These allow converting linked documents (such as those used with
GraphVectorStore) to networkx for rendering and/or in-memory graph
algorithms such as community detection.
2024-09-27 16:48:55 -04:00
Christophe Bornet
7809b31b95 core[patch]: Add ruff rules for flake8-simplify (SIM) (#26848)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-27 20:13:23 +00:00
Eugene Yurtsev
de0b48c41a docs: Upgrade examples with RunnableWithMessageHistory to langgraph memory (#26855)
This PR updates the documentation examples that used
RunnableWithMessageHistory to show how to achieve the same
implementation with langgraph memory.

Some of the underlying PRs (not all of them):

- docs[patch]: update chatbot tutorial and migration guide (#26780)
- docs[patch]: update chatbot memory how-to (#26790)
- docs[patch]: update chatbot tools how-to (#26816)
- docs: update chat history in rag how-to (#26821)
- docs: update trim messages notebook (#26793)
- docs: clean up imports in how to guide for rag qa with chat history
(#26825)
- docs[patch]: update conversational rag tutorial (#26814)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Vadym Barda <vadym@langchain.dev>
Co-authored-by: mercyspirit <ziying.qiu@gmail.com>
Co-authored-by: aqiu7 <aqiu7@gatech.edu>
Co-authored-by: John <43506685+Coniferish@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Subhrajyoty Roy <subhrajyotyroy@gmail.com>
Co-authored-by: Rajendra Kadam <raj.725@outlook.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Devin Gaffney <itsme@devingaffney.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-09-27 20:04:30 +00:00
ccurme
44eddd39d6 infra[patch]: update notebooks workflow (#26956)
Addressing some lingering comments from
https://github.com/langchain-ai/langchain/pull/26944, adding parameters
for
- python version
- working directory

![Screenshot 2024-09-27 at 3 33
21 PM](https://github.com/user-attachments/assets/dfa45772-fddb-4489-a148-c9ed83d844d0)
2024-09-27 15:39:14 -04:00
ccurme
67df944dfb infra: add CI job for running tutorial notebooks (#26944) 2024-09-27 18:29:49 +00:00
Erick Friis
9eb26c5f9d infra: api docs build ref experimental (#26950) 2024-09-27 10:21:07 -07:00
Erick Friis
135164e1ee infra: api docs build ref update (#26949) 2024-09-27 10:12:10 -07:00
Erick Friis
c38ea7a069 infra: api docs build (#26948) 2024-09-27 09:47:43 -07:00
Christophe Bornet
f4e738bb40 core: Add ruff rules for PIE (#26939)
All auto-fixes.
2024-09-27 12:08:35 -04:00
ccurme
836c2a4ae0 docs: update memory integrations page (#26912) 2024-09-27 10:02:09 -04:00
ccurme
39987ebd91 openai[patch]: update deprecation target in API ref (#26921) 2024-09-27 08:42:31 -04:00
Subhrajyoty Roy
7f37fd8b80 community[patch]: callback before yield for cloudflare (#26927)
**Description:** Moves yield to after callback for `_stream` function
for the cloudflare workersai model in the community llm package
**Issue:** #16913
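The shared pattern behind this series of #16913 patches, sketched generically (`self._client` is hypothetical, not any one model's code):

```
def _stream(self, prompt, run_manager=None, **kwargs):
    for chunk in self._client.stream(prompt, **kwargs):
        # fire the callback before handing the chunk to the caller,
        # so handlers observe tokens in order
        if run_manager:
            run_manager.on_llm_new_token(chunk.text, chunk=chunk)
        yield chunk
```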
2024-09-27 08:42:01 -04:00
Youshin Kim
2d9a09dfa4 Fix typo in mlflow code example in mlflow.py (#26931)
- [x] PR title: Fix typo in code example in mlflow.py
- In libs/community/langchain_community/chat_models/mlflow.py
2024-09-27 12:41:39 +00:00
Subhrajyoty Roy
7037ba0f06 community[patch]: callback before yield for mlx pipeline (#26928)
**Description:** Moves yield to after callback for `_stream` function
for the MLX pipeline model in the community llm package
**Issue:** #16913
2024-09-27 08:41:34 -04:00
Subhrajyoty Roy
adcfecdb67 community[patch]: callback before yield for textgen (#26929)
**Description:** Moves callback to before yield for `_stream` and
`_astream` function for the textgen model in the community llm package
**Issue:** #16913
2024-09-27 08:41:13 -04:00
Subhrajyoty Roy
5f2cc4ecb2 community[patch]: callback before yield for titan takeoff (#26930)
**Description:** Moves yield to after callback for `_stream` function
for the titan takeoff model in the community llm package
**Issue:** #16913
2024-09-27 08:40:22 -04:00
Emmanuel Sciara
c6350d636e core[fix]: using async rate limiter methods in async code (#26914)
**Description:** Replaced blocking (sync) rate_limiter code in async
methods.

**Issue:** #26913

**Dependencies:** N/A

**Twitter handle:** no need 🤗
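A sketch of the corrected usage with `InMemoryRateLimiter` (the surrounding function is illustrative):

```
import asyncio
from langchain_core.rate_limiters import InMemoryRateLimiter

limiter = InMemoryRateLimiter(requests_per_second=2)

async def call_model() -> None:
    # async code paths should await the async acquire rather than
    # calling the sync, thread-blocking acquire()
    await limiter.aacquire()
    ...  # issue the request here

asyncio.run(call_model())
```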
2024-09-26 20:44:28 +00:00
Eugene Yurtsev
02f5962cf1 docs: add api referencs to langgraph (#26877)
Add api references to langgraph
2024-09-26 15:21:10 -04:00
Abhi Agarwal
696114e145 community: add sqlite-vec vectorstore (#25003)
**Description**:

Adds a vector store integration with
[sqlite-vec](https://alexgarcia.xyz/sqlite-vec/), the successor to
sqlite-vss that is a single C file with no external dependencies.

Pretty straightforward, just copy-pasted the sqlite-vss integration and
made a few tweaks and added integration tests. Only question is whether
all documentation should be directed away from sqlite-vss if it is
de facto deprecated (cc @asg017).

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: philippe-oger <philippe.oger@adevinta.com>
2024-09-26 17:37:10 +00:00
Erick Friis
8bc12df2eb voyageai: new models (#26907)
Co-authored-by: fzowl <zoltan@voyageai.com>
Co-authored-by: fzowl <160063452+fzowl@users.noreply.github.com>
2024-09-26 17:07:10 +00:00
Eugene Yurtsev
2a0d9d05fb docs: Fix trim_messages invocations in the memory migration guide (#26902)
Should only be start_on="human", not start_on=("human", "ai")
2024-09-26 17:02:30 +00:00
Erick Friis
7a99a4d4f8 infra: fix experimental in dco imports check (#26905) 2024-09-26 09:51:58 -07:00
Subhrajyoty Roy
ba467f1a36 community[patch]: callback before yield for gigachat (#26881)
**Description:** Moves yield to after callback for `_stream` and
`_astream` function for the gigachat model in the community llm package
**Issue:** #16913
2024-09-26 12:47:28 -04:00
Subhrajyoty Roy
11e703a97e community[patch]: callback before yield for google palm (#26882)
**Description:** Moves yield to after callback for `_stream` function
for the google palm model in the community package
**Issue:** #16913
2024-09-26 12:47:05 -04:00
Julius Stopforth
121e79b1f0 core: Fix IndexError when trim_messages invoked with empty list (#26896)
This prevents `trim_messages` from raising an `IndexError` when invoked
with `include_system=True`, `strategy="last"`, and an empty message
list.

Fixes #26895

Dependencies: none
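A minimal repro of the now-fixed call (the `token_counter` is a toy):

```
from langchain_core.messages import trim_messages

# previously raised IndexError; now simply returns an empty list
trimmed = trim_messages(
    [],
    strategy="last",
    include_system=True,
    max_tokens=100,
    token_counter=len,  # toy counter: one "token" per message
)
assert trimmed == []
```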
2024-09-26 11:29:58 -04:00
ccurme
7091a1a798 openai[patch]: increase token limit in azure integration tests (#26901)
`test_json_mode` occasionally runs into this
2024-09-26 14:31:33 +00:00
Erick Friis
2ea5f60cc5 experimental: migrate to external repo (#26879)
security scanners can't distinguish monorepo sources from each other.
this will resolve issues for folks trying to use e.g. langchain-core but
getting security issues from experimental flagged!
2024-09-25 19:02:19 -07:00
Bagatur
c750600d3d infra: update release secrets (#26878) 2024-09-26 00:12:31 +00:00
Jack Peplinski
edf879d321 docs: update extraction_examples.ipynb (#26874)
The `Without examples 😿` and `With examples 😻` should have different
outputs to illustrate their point.

See v0.2 docs.
https://python.langchain.com/docs/how_to/extraction_examples/#without-examples-

2024-09-25 17:26:42 -04:00
Erick Friis
6f3c8313ba community: bump langchain version (#26876) 2024-09-25 12:58:24 -07:00
Erick Friis
e068407f18 community: bump core version (#26875) 2024-09-25 12:57:16 -07:00
Eugene Yurtsev
25cb44c9ee 0.3.1 release community (#26872)
Release for 0.3.1

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-25 19:38:53 +00:00
Erick Friis
9a31ad6f60 langchain: release 0.3.1 (#26868) 2024-09-25 11:43:54 -07:00
Erick Friis
ef2ab26113 core: release 0.3.6 (#26863) 2024-09-25 11:05:53 -07:00
ccurme
87e21493f7 docs[patch]: remove deprecated loaders from feature tables (#26709) 2024-09-25 12:53:32 -04:00
ccurme
a0010063e8 docs[patch]: add guide for loading web pages (#26708) 2024-09-25 12:03:42 -04:00
Bagatur
eaffa92c1d openai[patch]: Release 0.2.1 (#26858) 2024-09-25 15:55:49 +00:00
Rajendra Kadam
51c4393298 community[patch]: Fix validation error in SettingsConfigDict across multiple Langchain modules (#26852)
- **Description:** This pull request addresses the validation error in
`SettingsConfigDict` due to extra fields in the `.env` file. The issue
is prevalent across multiple Langchain modules. This fix ensures that
extra fields in the `.env` file are ignored, preventing validation
errors.
  **Changes include:**
    - Applied fixes to modules using `SettingsConfigDict`.

- **Issue:** NA, similar
https://github.com/langchain-ai/langchain/issues/26850
- **Dependencies:** NA
2024-09-25 10:02:14 -04:00
Devin Gaffney
d502858412 Update main README.md to reference latest version of documentation (#26854)
Update README.md to point at latest docs
2024-09-25 09:44:18 -04:00
Eugene Yurtsev
27c12146c8 docs[patch]: In conceptual docs explain constraints on ToolMessage (#26792)
Minor clarification
2024-09-25 09:34:45 -04:00
Christophe Bornet
3a1b9259a7 core: Add ruff rules for comprehensions (C4) (#26829) 2024-09-25 09:34:17 -04:00
Rajendra Kadam
7e5a9c317f community[minor]: [Pebblo] Enhance PebbloSafeLoader to take anonymize flag (#26812)
- **Description:** The flag is named `anonymize_snippets`. When set to
true, the Pebblo server will anonymize snippets by redacting all
personally identifiable information (PII) from the snippets going into
VectorDB and the generated reports
- **Issue:** NA
- **Dependencies:** NA
- **docs**: Updated
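Hypothetical usage (the wrapped loader and identifiers are assumptions, not the PR's example):

```
from langchain_community.document_loaders import CSVLoader, PebbloSafeLoader

loader = PebbloSafeLoader(
    CSVLoader("corp_data.csv"),
    name="acme-rag-app",
    owner="Data Team",
    description="demo app",
    anonymize_snippets=True,  # redact PII from snippets sent onward
)
docs = loader.load()
```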
2024-09-25 09:33:06 -04:00
Rajendra Kadam
92003b3724 community[patch]: [SharePointLoader] Fix validation error in _O365Settings due to extra fields in .env file (#26851)
**Description:** Fix validation error in _O365Settings by ignoring extra
fields in .env file
**Issue:** https://github.com/langchain-ai/langchain/issues/26850
**Dependencies:** NA
2024-09-25 09:31:59 -04:00
Subhrajyoty Roy
b61fb98466 community[patch]: callback before yield for friendli (#26842)
**Description:** Moves yield to after callback for `_stream` and
`_astream` function for the friendli model in the community package
**Issue:** #16913
2024-09-25 09:31:12 -04:00
ccurme
13acf9e6b0 langchain[patch]: add deprecation warnings (#26853) 2024-09-25 09:26:44 -04:00
William FH
82b5b77940 [Core] Add more interops tests (#26841)
To test that the client propagates both ways
2024-09-24 20:18:20 -07:00
William FH
9b6ac41442 [Core] Inherit tracing metadata & tags (#26838) 2024-09-24 19:33:12 -07:00
Erick Friis
3796e143f8 docs: remove one more print from build (#26834) 2024-09-24 22:40:16 +00:00
Erick Friis
95269366ae docs: make build less verbose (#26833) 2024-09-24 22:30:05 +00:00
Erick Friis
425c0f381f experimental: release 0.3.1 (#26830) 2024-09-24 15:03:05 -07:00
John
6c3ea262c8 partners/unstructured: release 0.1.5 (#26831)
**Description:** update package version to support loading URLs #26670
**Issue:**  #26697
2024-09-24 15:02:53 -07:00
mercyspirit
0414be4b80 experimental[major]: CVE-2024-46946 fix (#26783)
Description: Resolve CVE-2024-46946 by replacing sympify with
parse_expr, restricted to a very specific allowed set of operations.

https://nvd.nist.gov/vuln/detail/cve-2024-46946

Sympify uses eval which makes it vulnerable to code execution.
parse_expr is limited to specific expressions.
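The gist of the safer call (a sketch; the PR's actual allowed-operation list is more specific):

```
from sympy.parsing.sympy_parser import parse_expr

# parse_expr with a restricted local namespace, instead of sympify's
# eval-based path that could execute arbitrary code
expr = parse_expr("2**8 + 7", local_dict={}, evaluate=True)
print(expr)  # 263
```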

Bandit results

![image](https://github.com/user-attachments/assets/170a6376-7028-4e70-a7ef-9acfb49c1d8a)

---------

Co-authored-by: aqiu7 <aqiu7@gatech.edu>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-24 21:37:56 +00:00
Erick Friis
f9ef688b3a docs: upgrade to docusaurus v3 (#26803) 2024-09-24 11:28:13 -07:00
Subhrajyoty Roy
b1da532522 community[patch]: callback before yield for deepsparse llm (#26822)
**Description:** Moves yield to after callback for `_stream` and
`_astream` function for the deepsparse model in the community package
**Issue:** #16913
2024-09-24 13:55:52 -04:00
Nuno Campos
de70a64e3a core: Run LangChainTracer inline (#26797)
- this flag ensures the tracer always runs in the same thread as the run
being traced for both sync and async runs
- pro: less chance for ordering bugs and other oddities
- blocking the event loop is not a concern given all code in the tracer
holds the GIL anyway
2024-09-24 08:31:18 -07:00
Jorge Piedrahita Ortiz
408a930d55 community: Add Sambanova Cloud Chat model community integration (#26333)
**Description:** Add SambaNova Cloud Chat model community integration.
Includes:
- chat model integration (following Standardize ChatModel docstrings)
- tests
- docs usage notebook (following Standardize ChatModel integration docs)

https://cloud.sambanova.ai/

---------

Co-authored-by: luisfucros <luisfucros@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-09-24 14:11:32 +00:00
Tom
2b83c7c3ab community[patch]: Fix tool_calls parsing when streaming from DeepInfra (#26813)
- **Description:** This PR fixes the response parsing logic for
`ChatDeepInfra`, more specifically `_convert_delta_to_message_chunk()`,
which is invoked when streaming via `ChatDeepInfra` (see the sketch after
this list).
- **Issue:** Streaming from DeepInfra via `ChatDeepInfra` is currently
broken because the response parsing logic doesn't handle that
`tool_calls` can be `None`. (There is no GitHub issue for this problem
yet.)
- **Dependencies:** –
- **Twitter handle:** –
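A sketch of the guard (variable names are illustrative, not the exact diff):

```
# streamed deltas may carry tool_calls=None
raw_tool_calls = dct.get("tool_calls") or []
for raw_tool_call in raw_tool_calls:
    ...  # build tool call chunks as before
```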

2024-09-24 13:47:36 +00:00
Subhrajyoty Roy
997d95c8f8 community[patch]: callback before yield for bedrock llm (#26804)
**Description:** Moves yield to after callback for
`_prepare_input_and_invoke_stream` and
`_aprepare_input_and_invoke_stream` for bedrock llm in community
package.
**Issue:** #16913
2024-09-24 12:14:59 +00:00
Erick Friis
e40a2b8bbf docs: fix mdx codefences (#26802)
```
git grep -l -E '"```\{=mdx\}\\n",' | xargs perl -0777 -i -pe 's/"```\{=mdx\}\\n",\n    (\W.*?),\n\s*"```\\?n?"/$1/s'
```
2024-09-24 06:06:13 +00:00
Erick Friis
35081d2765 docs: fix admonition formatting (#26801) 2024-09-23 21:55:17 -07:00
Erick Friis
603d38f06d docs: make docs mdxv2 compatible (#26798)
prep for docusaurus migration
2024-09-23 21:24:23 -07:00
ccurme
2a4c5713cd openai[patch]: fix azure integration tests (#26791) 2024-09-23 17:49:15 -04:00
ccurme
1ce056d1b2 docs[patch]: add memory migration guides to sidebar (#26711)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-23 15:31:27 -04:00
Mohammad Mohtashim
154a5ff7ca core[patch]: On Chain Start Fix for Chain Class (#26593)
- **Issue:** #26588

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-23 19:30:59 +00:00
ccurme
bba7af903b core[patch]: set default on Blob (#26787)
Resolves https://github.com/langchain-ai/langchain/issues/26781
2024-09-23 18:55:56 +00:00
ccurme
97b27f0930 langchain[patch]: fix extended tests (#26788)
Broken by addition of `disabled_params`
2024-09-23 18:52:09 +00:00
Brace Sproul
fb9ac8da2f fix(docs): Drop announcement bar (#26782) 2024-09-23 18:03:59 +00:00
Bagatur
e1e4f88b3e openai[patch]: enable Azure structured output, parallel_tool_calls=Fa… (#26599)
…lse, tool_choice=required

response_format=json_schema, tool_choice=required, parallel_tool_calls
are all supported for gpt-4o on azure.
2024-09-22 22:25:22 -07:00
Gabriel Altay
bb40a0fb32 Remove pydantic restricted namespaces from HuggingFaceInferenceAPIEmbeddings (#26744)
Without this `model_config`, importing this package produces warnings
about `model_name` conflicting with the protected namespace `model_`.


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-09-22 08:05:37 -04:00
Gor Hayrapetyan
f97ac92f00 community[patch]: Handle empty PR body in get_pull_request in Github utility (#26739)
**Description:**
When the PR body is empty, the `get_pull_request` method fails with the
exception below.


**Issue:**
```
TypeError('expected string or buffer')Traceback (most recent call last):


  File ".../.venv/lib/python3.9/site-packages/langchain_core/tools/base.py", line 661, in run
    response = context.run(self._run, *tool_args, **tool_kwargs)


  File ".../.venv/lib/python3.9/site-packages/langchain_community/tools/github/tool.py", line 52, in _run
    return self.api_wrapper.run(self.mode, query)


  File ".../.venv/lib/python3.9/site-packages/langchain_community/utilities/github.py", line 816, in run
    return json.dumps(self.get_pull_request(int(query)))


  File ".../.venv/lib/python3.9/site-packages/langchain_community/utilities/github.py", line 495, in get_pull_request
    add_to_dict(response_dict, "body", pull.body)


  File ".../.venv/lib/python3.9/site-packages/langchain_community/utilities/github.py", line 487, in add_to_dict
    tokens = get_tokens(value)


  File ".../.venv/lib/python3.9/site-packages/langchain_community/utilities/github.py", line 483, in get_tokens
    return len(tiktoken.get_encoding("cl100k_base").encode(text))


  File "....venv/lib/python3.9/site-packages/tiktoken/core.py", line 116, in encode
    if match := _special_token_regex(disallowed_special).search(text):


TypeError: expected string or buffer
```
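A sketch of a guard at the `get_tokens` frame (mirroring the traceback above; not the exact fix):

```
import tiktoken

def get_tokens(text) -> int:
    if not text:  # an empty PR body arrives as None
        return 0
    return len(tiktoken.get_encoding("cl100k_base").encode(text))
```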

**Twitter:**  __gorros__
2024-09-22 01:56:24 +00:00
Erick Friis
238a31bbd9 core: release 0.3.5 (#26737) 2024-09-21 00:26:39 +00:00
William FH
55af6fbd02 [LangChainTracer] Omit Chunk (#26602)
in events / new llm token
2024-09-20 17:10:34 -07:00
Anton Dubovik
3e2cb4e8a4 openai: embeddings: supported chunk_size when check_embedding_ctx_length is disabled (#23767)
Chunking of the input array, controlled by `self.chunk_size`, is being
ignored when `self.check_embedding_ctx_length` is disabled. Effectively,
the chunk size is assumed to be equal to 1 in such a case. This is
surprising.

The PR takes into account `self.chunk_size` passed by the user.
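For example, with checking disabled the user-supplied chunk size is now respected:

```
from langchain_openai import OpenAIEmbeddings  # assumes OPENAI_API_KEY is set

# with ctx-length checking off, inputs are still batched in groups of
# chunk_size rather than effectively one input per request
embeddings = OpenAIEmbeddings(check_embedding_ctx_length=False, chunk_size=16)
vectors = embeddings.embed_documents(["first doc", "second doc"])
```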

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-20 16:58:45 -07:00
William FH
864020e592 [Tracer] add project name to run from tracer (#26736) 2024-09-20 16:48:37 -07:00
Nithish Raghunandanan
2d21274bf6 couchbase: Add ttl support to caches & chat_message_history (#26214)
**Description:** Add support to delete documents automatically from the
caches & chat message history by adding a new optional parameter, `ttl`.


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Nithish Raghunandanan <nithishr@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-20 23:44:29 +00:00
Krishna Kulkarni
c6c508ee96 Refining Skip Count Calculation by Filtering Documents with session_id (#26020)
In the previous implementation, `skip_count` was counting all the
documents in the collection. Instead, we want to filter the documents by
`session_id` and calculate `skip_count` by subtracting `history_size`
from the filtered count.
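A hypothetical pymongo sketch of the corrected calculation (the `SessionId` field name is an assumption):

```
# count only this session's documents before computing the skip
filtered_count = collection.count_documents({"SessionId": session_id})
skip_count = max(filtered_count - history_size, 0)
cursor = collection.find({"SessionId": session_id}).skip(skip_count)
```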

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-09-20 23:40:56 +00:00
Tibor Reiss
a8b24135a2 fix[experimental]: Fix text splitter with gradient (#26629)
Fixes #26221

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-20 23:35:50 +00:00
Alejandro Rodríguez
4ac9a6f52c core: fix "template" not allowed as prompt param (#26060)
- **Description:**  fix "template" not allowed as prompt param
- **Issue:** #26058
- **Dependencies:** none



---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-20 23:33:06 +00:00
Christophe Bornet
58f339a67c community: Fix links in GraphVectorStore pydoc (#25959)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-20 23:17:53 +00:00
Christophe Bornet
e49c413977 core: Add docstring for GraphVectorStoreRetriever (#26224)
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-09-20 23:16:37 +00:00
Lucain
a2023a1e96 huggingface: fix huggingface_endpoint.py (initialize clients only with supported kwargs) (#26378)
## Description

By default, `HuggingFaceEndpoint` instantiates both the
`InferenceClient` and the `AsyncInferenceClient` with the
`"server_kwargs"` passed as input. This is an issue as both clients
might not support exactly the same kwargs. This has been highlighted in
https://github.com/huggingface/huggingface_hub/issues/2522 by
@morgandiverrez with the `trust_env` parameter. In order to make
`langchain` integration future-proof, I do think it's wiser to forward
only the supported parameters to each client. Parameters that are not
supported are simply ignored with a warning to the user. From a
`huggingface_hub` maintenance perspective, this allows us much more
flexibility as we are not constrained to support the exact same kwargs
in both clients.
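A sketch of the forwarding idea (the helper name and warning mechanism are assumptions, not the PR's code):

```
import inspect

def _filter_kwargs(client_cls, kwargs: dict) -> dict:
    """Keep only the kwargs that the client's __init__ accepts."""
    accepted = inspect.signature(client_cls.__init__).parameters
    ignored = set(kwargs) - set(accepted)
    if ignored:
        print(f"Ignoring unsupported kwargs: {sorted(ignored)}")
    return {k: v for k, v in kwargs.items() if k in accepted}
```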

## Issue

https://github.com/huggingface/huggingface_hub/issues/2522

## Dependencies

None

## Twitter 

https://x.com/Wauplin

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-20 16:05:24 -07:00
ccurme
f2285376a5 community[patch]: add web loader tests (#26728) 2024-09-20 18:29:54 -04:00
Erick Friis
4a2745064a core: release 0.3.4 (#26729) 2024-09-20 14:47:15 -07:00
Nuno Campos
345edeb1f0 core: In astream_events propagate cancellation reason to inner task (#26727)
2024-09-20 14:42:10 -07:00
Erick Friis
465e43cd43 core: release 0.3.3 (#26713) 2024-09-20 13:54:19 -07:00
Eugene Yurtsev
4fc69d61ad core[patch]: Fix defusedxml import (#26718)
Fix defusedxml import. Haven't investigated what's actually going on
under the hood -- defusedxml probably does some weird things in `__init__`.
2024-09-20 16:53:24 -04:00
Eugene Yurtsev
79b224f6f3 core/langchain: fix version used in deprecation (#26724)
In core, the deprecation should use version 0.3.3 instead of 0.3.4.
In langchain, the deprecation should use version 0.3.1 instead of 0.3.4.
2024-09-20 16:47:18 -04:00
Eugene Yurtsev
8a9f7091c0 docs: Update trim message usage in migrating_memory (#26722)
Make sure we don't end up with a ToolMessage that precedes an AIMessage
2024-09-20 20:20:27 +00:00
Eugene Yurtsev
91f4711e53 core[patch],langchain[patch]: deprecate memory and entity abstractions and implementations (#26717)
This PR deprecates the old memory and entity abstractions and their implementations.
2024-09-20 15:06:25 -04:00
William FH
19ce95d3c9 Avoid copying runs (#26689)
Also, re-unify run trees. Use a single shared client.
2024-09-20 10:57:41 -07:00
Eric
90031b1b3e support epsilla cloud vector database in langchain (#26065)
Description

- support epsilla cloud in langchain

---------

Co-authored-by: Leonid Ganeline <leo.gan.57@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-09-20 17:14:23 +00:00
ZhangShenao
baef7639fd Improvement[text-splitter] Fix import of ExperimentalMarkdownSyntaxTextSplitter (#26703)
#26028 

Export `ExperimentalMarkdownSyntaxTextSplitter` in `__init__`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-20 17:04:31 +00:00
Eugene Yurtsev
acf8c2c13e docs: Add migration instructions for v0.0.x memory abstractions (#26668)
This PR adds a migration guide for any code that relies on old memory
abstractions.
2024-09-20 15:09:23 +00:00
ccurme
eeab6a688c docs[patch]: update PDF loader docs (#26627)
Docs preview:
https://langchain-git-cc-pdfdocs-langchain.vercel.app/docs/how_to/document_loader_pdf/
2024-09-20 11:07:06 -04:00
stein1988
91594928c5 fix: fix ChatZhipuAI tool call bug (#26693)
**Description:** The ZhipuAI API responds as follows:
{'id': '20240920132549e379a9152a6a4d7c', 'created': 1726809949, 'model':
'glm-4-flash', 'choices': [{'index': 0, 'finish_reason': 'tool_calls',
'delta': {'role': 'assistant', 'tool_calls': [{'id':
'call_20240920132549e379a9152a6a4d7c', 'index': 0, 'type': 'function',
'function': {'name': 'get_datetime_offline', 'arguments': '{}'}}]}}]}
so `tool_calls = dct.get("tool_call", None)` in
`_convert_delta_to_message_chunk` should look up `"tool_calls"` instead.
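
A minimal sketch of the corrected lookup (a standalone stand-in, since the real `_convert_delta_to_message_chunk` has a different signature):

```python
def extract_tool_calls(chunk: dict) -> list:
    """Read tool calls from a ZhipuAI streaming response chunk."""
    delta = chunk["choices"][0]["delta"]
    # The key is "tool_calls" (plural); looking up "tool_call" always misses it.
    return delta.get("tool_calls", [])
```

Applied to the sample response above, this returns the `get_datetime_offline` call.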
2024-09-20 13:06:42 +00:00
guoqiang0401
8f0c04f47e Update tool_calling.ipynb (#26699)
There is a small bug in the "TypedDict class" sample source.

2024-09-20 13:04:50 +00:00
Bagatur
f7bb3640f1 core[patch]: support js chat model namespaces (#26688) 2024-09-19 16:14:20 -07:00
Bagatur
c453b76579 core[patch]: Release 0.3.2 (#26686) 2024-09-19 14:58:45 -07:00
Piyush Jain
f087ab43fd core[patch]: Fix load of ChatBedrock (#26679)
Complementary PR to master for #26643.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-09-19 21:57:20 +00:00
Bagatur
409f35363b core[patch]: support load from path for default namespaces (#26675) 2024-09-19 14:47:27 -07:00
Eugene Yurtsev
e8236e58f2 ci: restore qa template that was known to work (#26684)
Restore qa template that was working
2024-09-19 17:20:42 -04:00
ccurme
eef18dec44 unstructured[patch]: support loading URLs (#26670)
`unstructured.partition.auto.partition` supports a `url` kwarg, but
`url` in `UnstructuredLoader.__init__` is reserved for the server URL.
Here we add a `web_url` kwarg that is passed to the partition kwargs:
```python
self.unstructured_kwargs["url"] = web_url
```
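
A hedged usage sketch (assuming the `langchain_unstructured` package and an illustrative URL):

```python
from langchain_unstructured import UnstructuredLoader

# web_url is forwarded to unstructured's partition(url=...) as shown above.
loader = UnstructuredLoader(web_url="https://example.com/article.html")
docs = loader.load()
```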
2024-09-19 11:40:25 -07:00
Erick Friis
311f861547 core, community: move graph vectorstores to community (#26678)
remove beta namespace from core, add to community
2024-09-19 11:38:14 -07:00
Serena Ruan
c77c28e631 [community] Fix WorkspaceClient error with pydantic validation (#26649)
Fix error like
<img width="1167" alt="image"
src="https://github.com/user-attachments/assets/2e219b26-ec7e-48ef-8111-e0ff2f5ac4c0">

After the fix:
<img width="584" alt="image"
src="https://github.com/user-attachments/assets/48f36fe7-628c-48b6-81b2-7fe741e4ca85">



---------

Signed-off-by: serena-ruan <serena.rxy@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-19 18:25:33 +00:00
ccurme
7d49ee9741 unstructured[patch]: add to integration tests (#26666)
- Add to tests on parsed content;
- Add tests for async + lazy loading;
- Add a test for `strategy="hi_res"`.
2024-09-19 13:43:34 -04:00
Erick Friis
28dd6564db docs: highlight styling (#26636)
MERGE ME PLEASE
2024-09-19 17:12:59 +00:00
ccurme
f91bdd12d2 community[patch]: add to pypdf tests and run in CI (#26663) 2024-09-19 14:45:49 +00:00
ice yao
4d3d62c249 docs: fix nomic link error (#26642) 2024-09-19 14:41:45 +00:00
Rajendra Kadam
60dc19da30 [community] Added PebbloTextLoader for loading text data in PebbloSafeLoader (#26582)
- **Description:** Added PebbloTextLoader for loading text in
PebbloSafeLoader.
- Since PebbloSafeLoader wraps document loaders, this new loader enables
direct loading of text into Documents using PebbloSafeLoader.
- **Issue:** NA
- **Dependencies:** NA
- [x] **Tests**: Added/Updated tests
2024-09-19 09:59:04 -04:00
Jorge Piedrahita Ortiz
55b641b761 community: fix error in sambastudio embeddings (#26260)
Fix error in SambaStudio embeddings result unpacking.
2024-09-19 09:57:04 -04:00
Jorge Piedrahita Ortiz
37b72023fe community: remove sambaverse (#26265)
Removing the Sambaverse LLM model and references, given it is not available
after Sep 10, 2024.

<img width="1781" alt="image"
src="https://github.com/user-attachments/assets/4dcdb5f7-5264-4a03-b8e5-95c88304e059">
2024-09-19 09:56:30 -04:00
Martin Triska
3fc0ea510e community : [bugfix] Use document ids as keys in AzureSearch vectorstore (#25486)
# Description
[Vector store base
class](4cdaca67dc/libs/core/langchain_core/vectorstores/base.py (L65))
currently expects `ids` to be passed in and that is what it passes along
to the AzureSearch vector store when attempting to `add_texts()`.
However, AzureSearch expects `keys` to be passed in. When they are not
present, AzureSearch `add_embeddings()` makes up new uuids. This is a
problem when trying to run indexing. [Indexing code
expects](b297af5482/libs/core/langchain_core/indexing/api.py (L371))
the documents to be uploaded using the provided ids. Currently AzureSearch
ignores the `ids` passed from `indexing` and makes up new ones. Later, when
the `indexer` attempts to delete a removed file, it uses the `id` it had
stored when uploading the document; however, the document was uploaded under a
different `id`.
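
A hedged sketch of the fix this implies (names are assumed, not the actual AzureSearch code): reuse caller-provided `ids` as the AzureSearch `keys` instead of minting fresh uuids:

```python
import uuid

def resolve_keys(ids: list | None, n: int) -> list:
    """Use ids supplied by the indexing API as document keys; only fall back
    to fresh uuids when the caller did not provide any."""
    if ids is not None:
        return ids
    return [str(uuid.uuid4()) for _ in range(n)]
```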

**Twitter handle: @martintriska1**
2024-09-19 09:37:18 -04:00
Tomaz Bratanic
a8561bc303 Fix async parsing for llm graph transformer (#26650) 2024-09-19 09:15:33 -04:00
Erik
4e0a6ebe7d community: Add warning when page_content is empty (#25955)
Page content is sometimes empty when PyMuPDF cannot find text on a page.
For example, this can happen when the text of the PDF is not copyable
"by hand". Then an OCR solution is needed, which is not integrated here.

This warning should accurately warn the user that some pages are lost
during this process.
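
A minimal sketch of the warning's shape (helper name and message are illustrative):

```python
import warnings

def warn_if_page_empty(page_content: str, page_number: int, source: str) -> None:
    """Warn when a parsed PDF page yields no extractable text."""
    if not page_content.strip():
        warnings.warn(
            f"Page {page_number} of {source} produced no text; "
            "the PDF may require OCR, which is not integrated here."
        )
```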


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-19 05:22:09 +00:00
Christophe Bornet
fd21ffe293 core: Add N(naming) ruff rules (#25362)
Public classes/functions are not renamed, and the rule is ignored for them.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-19 05:09:39 +00:00
Daniel Cooke
7835c0651f langchain_chroma: Pass through kwargs to Chroma collection.delete (#25970)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-19 04:21:24 +00:00
Tibor Reiss
85caaa773f docs[community]: Fix raw string in docstring (#26350)
Fixes #26212: replaced the raw string with escaped backslashes. Alternative:
make the full docstring a raw string.

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-09-19 04:18:56 +00:00
Erick Friis
8fb643a6e8 partners/box: release 0.2.1 (#26644) 2024-09-19 04:02:06 +00:00
Tomaz Bratanic
03b9aca55d community: Retry retriable errors in Neo4j (#26211)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-19 04:01:07 +00:00
Scott Hurrey
acbb4e4701 box: Add searchoptions for BoxRetriever, documentation for BoxRetriever as agent tool (#26181)
Added search options for BoxRetriever and added documentation to
demonstrate how to use BoxRetriever as an agent tool - @BoxPlatform


2024-09-18 21:00:06 -07:00
Erick Friis
e0c36afc3e docs: v0.3 link redirect (#26632) 2024-09-18 14:28:56 -07:00
Erick Friis
9909354cd0 core: use ruff.target-version instead (#26634)
tested on one of the replacement cases and seems to work! 
![ScreenShot 2024-09-18 at 02 02
43PM](https://github.com/user-attachments/assets/7170975a-2542-43ed-a203-d4126c6a2c81)
2024-09-18 21:06:14 +00:00
Erick Friis
84b831356c core: remove [project] tag from pyproject (#26633)
The [project] tag made core incompatible with uv installs.
2024-09-18 20:39:49 +00:00
Christophe Bornet
a47b332841 core: Put Python version as a project requirement so it is considered by ruff (#26608)
Ruff doesn't know about the python version in
`[tool.poetry.dependencies]`. It can get it from
`project.requires-python`.

Notes:
* poetry seems to have issues getting the python constraints from
`requires-python` and using `python` in per dependency constraints. So I
had to duplicate the info. I will open an issue on poetry.
* `inspect.isclass()` doesn't work correctly with `GenericAlias`
(`list[...]`, `dict[..., ...]`) on Python <3.11 so I added some `not
isinstance(type, GenericAlias)` checks:

Python 3.11
```pycon
>>> import inspect
>>> inspect.isclass(list)
True
>>> inspect.isclass(list[str])
False
```

Python 3.9
```pycon
>>> import inspect
>>> inspect.isclass(list)
True
>>> inspect.isclass(list[str])
True
```
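
A sketch of the resulting guard (a generic helper under the stated assumption, not the exact check added in core):

```python
import inspect
from types import GenericAlias

def is_plain_class(obj) -> bool:
    # On Python < 3.11, inspect.isclass(list[str]) is True, so parameterized
    # generics must be excluded explicitly before treating obj as a class.
    return inspect.isclass(obj) and not isinstance(obj, GenericAlias)
```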

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-18 14:37:57 +00:00
Patrick McGleenon
0f07cf61da docs: fixed typo in XML document loader (#26613)
Fixed typo `Unstrucutred`
2024-09-18 14:26:57 +00:00
Erick Friis
d158401e73 infra: master release checkout ref for release note (#26605) 2024-09-18 01:51:54 +00:00
Bagatur
de58942618 docs: consolidate dropdowns (#26600) 2024-09-18 01:24:10 +00:00
Bagatur
df38d5250f docs: cleanup nav (#26546) 2024-09-17 17:49:46 -07:00
sanjay920
b246052184 docs: fix typo in clickhouse vectorstore doc (#26598)
- **Description:** typo in clickhouse vectorstore doc
- **Issue:** #26597
- **Dependencies:** none
- **Twitter handle:** sanjay920

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-17 23:33:22 +00:00
Miguel Grinberg
52729ac0be docs: update hybrid search example with Elasticsearch retriever (#26328)
- **Description:** the example to perform hybrid search with the
Elasticsearch retriever is out of date
- **Issue:** N/A
- **Dependencies:** N/A

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-17 23:15:27 +00:00
Marco Rossi IT
f62d454f36 docs: fix typo on amazon_textract.ipynb (#26493)
- **Description:** fixed a typo on amazon textract page

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-17 22:27:45 +00:00
gbaian10
6fe2536c5a docs: fix the ImportError in google_speech_to_text.ipynb (#26522)
Fix #26370.

`GoogleSpeechToTextLoader` is a deprecated method in
`langchain_community.document_loaders.google_speech_to_text`.

The new recommended usage is to use `SpeechToTextLoader` from
`langchain_google_community`.

When importing from `langchain_google_community`, use the name
`SpeechToTextLoader` instead of the old `GoogleSpeechToTextLoader`.


![image](https://github.com/user-attachments/assets/3a8bd309-9858-4938-b7db-872f51b9542e)

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-17 22:18:57 +00:00
Zhanwei Zhang
418b170f94 docs: Fix typo in conda environment code block in rag.ipynb (#26487)
Fix typo in the conda environment code block in docs/tutorials/rag.ipynb.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-17 22:13:55 +00:00
ZhangShenao
c3b3f46cb8 Improvement[Community] Improve api doc of BeautifulSoupTransformer (#26423)
- Add missing args

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-17 22:00:07 +00:00
ogawa
e2245fac82 community[patch]: o1-preview and o1-mini costs (#26411)
updated OpenAI cost definitions according to the following:
https://openai.com/api/pricing/

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-17 21:59:46 +00:00
ZhangShenao
1a8e9023de Improvement[Community] Improve streamlit_callback_handler (#26373)
- add decorator for static methods

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-17 21:54:37 +00:00
Bagatur
1a62f9850f anthropic[patch]: Release 0.2.1 (#26592) 2024-09-17 14:44:21 -07:00
Harutaka Kawamura
6ed50e78c9 community: Rename deployments server to AI gateway (#26368)
We recently renamed `MLflow Deployments Server` to `MLflow AI Gateway`
in mlflow. This PR updates the relevant notebooks to use `MLflow AI
Gateway`.


---------

Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-17 21:36:04 +00:00
Bagatur
5ced41bf50 anthropic[patch]: fix tool call and tool res image_url handling (#26587)
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-09-17 14:30:07 -07:00
Christophe Bornet
c6bdd6f482 community: Fix references in link extractors docstrings (#26314)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-17 21:26:25 +00:00
Christophe Bornet
3a99467ccb core[patch]: Add ruff rule UP006(use PEP585 annotations) (#26574)
* Added rule `UP006` now that Pydantic is v2+

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-17 21:22:50 +00:00
wlleiiwang
2ef4c9466f community: modify document links for tencent vectordb (#26316)
- Modify document links for creating a Tencent VectorDB database instance.

Co-authored-by: wlleiiwang <wlleiiwang@tencent.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-17 21:11:10 +00:00
Erick Friis
194adc485c docs: pypi readme image links (#26590) 2024-09-17 20:41:34 +00:00
Bagatur
97b05d70e6 docs: anthropic api ref nit (#26591) 2024-09-17 20:39:53 +00:00
Bagatur
e1d113ea84 core,openai,grow,fw[patch]: deprecate bind_functions, update chat model api ref (#26584)
2024-09-17 11:32:39 -07:00
ccurme
7c05f71e0f milvus[patch]: fix vectorstore integration tests (#26583)
Resolves https://github.com/langchain-ai/langchain/issues/26564
2024-09-17 14:17:05 -04:00
Bagatur
145a49cca2 core[patch]: Release 0.3.1 (#26581) 2024-09-17 17:34:09 +00:00
Nuno Campos
5fc44989bf core[patch]: Fix "argument of type 'NoneType' is not iterable" error in LangChainTracer (#26576)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-17 10:29:46 -07:00
Erick Friis
f4a65236ee infra: only force reinstall on release (#26580) 2024-09-17 17:12:17 +00:00
Isaac Francisco
06cde06a20 core[minor]: remove beta from RemoveMessage (#26579) 2024-09-17 17:09:58 +00:00
Erick Friis
3e51fdc840 infra: more skip if pull request libs (#26578) 2024-09-17 09:48:02 -07:00
RUO
0a177ec2cc community: Enhance MongoDBLoader with flexible metadata and optimized field extraction (#23376)
### Description:
This pull request significantly enhances the MongodbLoader class in the
LangChain community package by adding robust metadata customization and
improved field extraction capabilities. The updated class now allows
users to specify additional metadata fields through the metadata_names
parameter, enabling the extraction of both top-level and deeply nested
document attributes as metadata. This flexibility is crucial for users
who need to include detailed contextual information without altering the
database schema.

Moreover, the include_db_collection_in_metadata flag offers optional
inclusion of database and collection names in the metadata, allowing for
even greater customization depending on the user's needs.

The loader's field extraction logic has been refined to handle missing
or nested fields more gracefully. It now employs a safe access mechanism
that avoids the KeyError previously encountered when a specified nested
field was absent in a document. This update ensures that the loader can
handle diverse and complex data structures without failure, making it
more resilient and user-friendly.
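
A minimal sketch of such a safe-access mechanism (`get_nested_field` is a hypothetical helper, not necessarily the loader's implementation):

```python
from functools import reduce

def get_nested_field(document: dict, field_path: str, default=None):
    """Resolve a dotted path like 'author.name' without raising KeyError."""
    def step(obj, key):
        return obj.get(key, default) if isinstance(obj, dict) else default
    return reduce(step, field_path.split("."), document)
```

For example, `get_nested_field({"author": {"name": "John Doe"}}, "author.name")` returns `"John Doe"`, while a missing path returns the default instead of raising.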

### Issue:
This pull request addresses a critical issue where the MongodbLoader
class in the LangChain community package could throw a KeyError when
attempting to access nested fields that may not exist in some documents.
The previous implementation did not handle the absence of specified
nested fields gracefully, leading to runtime errors and interruptions in
data processing workflows.

This enhancement ensures robust error handling by safely accessing
nested document fields, using default values for missing data, thus
preventing KeyError and ensuring smoother operation across various data
structures in MongoDB. This improvement is crucial for users working
with diverse and complex data sets, ensuring the loader can adapt to
documents with varying structures without failing.

### Dependencies: 
Requires motor for asynchronous MongoDB interaction.

### Twitter handle: 
N/A

### Add tests and docs
Tests: Unit tests have been added to verify that the metadata inclusion
toggle works as expected and that the field extraction correctly handles
nested fields.
Docs: An example notebook demonstrating the use of the enhanced
MongodbLoader is included in the docs/docs/integrations directory. This
notebook includes setup instructions, example usage, and outputs.
(Here is the notebook link : [colab
link](https://colab.research.google.com/drive/1tp7nyUnzZa3dxEFF4Kc3KS7ACuNF6jzH?usp=sharing))
### Lint and test
Before submitting, I ran make format, make lint, and make test as per
the contribution guidelines. All tests pass, and the code style adheres
to the LangChain standards.

```python
import unittest
from unittest.mock import patch, MagicMock
import asyncio
from langchain_community.document_loaders.mongodb import MongodbLoader

class TestMongodbLoader(unittest.TestCase):
    def setUp(self):
        """Setup the MongodbLoader test environment by mocking the motor client 
        and database collection interactions."""
        # Mocking the AsyncIOMotorClient
        self.mock_client = MagicMock()
        self.mock_db = MagicMock()
        self.mock_collection = MagicMock()

        self.mock_client.get_database.return_value = self.mock_db
        self.mock_db.get_collection.return_value = self.mock_collection

        # Initialize the MongodbLoader with test data
        self.loader = MongodbLoader(
            connection_string="mongodb://localhost:27017",
            db_name="testdb",
            collection_name="testcol"
        )

    @patch('langchain_community.document_loaders.mongodb.AsyncIOMotorClient', return_value=MagicMock())
    def test_constructor(self, mock_motor_client):
        """Test if the constructor properly initializes with the correct database and collection names."""
        loader = MongodbLoader(
            connection_string="mongodb://localhost:27017",
            db_name="testdb",
            collection_name="testcol"
        )
        self.assertEqual(loader.db_name, "testdb")
        self.assertEqual(loader.collection_name, "testcol")

    def test_aload(self):
        """Test the aload method to ensure it correctly queries and processes documents."""
        # Setup mock data and responses for the database operations
        self.mock_collection.count_documents.return_value = asyncio.Future()
        self.mock_collection.count_documents.return_value.set_result(1)
        self.mock_collection.find.return_value = [
            {"_id": "1", "content": "Test document content"}
        ]

        # Run the aload method and check responses
        loop = asyncio.get_event_loop()
        results = loop.run_until_complete(self.loader.aload())
        self.assertEqual(len(results), 1)
        self.assertEqual(results[0].page_content, "Test document content")

    def test_construct_projection(self):
        """Verify that the projection dictionary is constructed correctly based on field names."""
        self.loader.field_names = ['content', 'author']
        self.loader.metadata_names = ['timestamp']
        expected_projection = {'content': 1, 'author': 1, 'timestamp': 1}
        projection = self.loader._construct_projection()
        self.assertEqual(projection, expected_projection)

if __name__ == '__main__':
    unittest.main()
```


### Additional Example for Documentation
Sample Data:

```json
[
    {
        "_id": "1",
        "title": "Artificial Intelligence in Medicine",
        "content": "AI is transforming the medical industry by providing personalized medicine solutions.",
        "author": {
            "name": "John Doe",
            "email": "john.doe@example.com"
        },
        "tags": ["AI", "Healthcare", "Innovation"]
    },
    {
        "_id": "2",
        "title": "Data Science in Sports",
        "content": "Data science provides insights into player performance and strategic planning in sports.",
        "author": {
            "name": "Jane Smith",
            "email": "jane.smith@example.com"
        },
        "tags": ["Data Science", "Sports", "Analytics"]
    }
]
```
Example Code:

```python
loader = MongodbLoader(
    connection_string="mongodb://localhost:27017",
    db_name="example_db",
    collection_name="articles",
    filter_criteria={"tags": "AI"},
    field_names=["title", "content"],
    metadata_names=["author.name", "author.email"],
    include_db_collection_in_metadata=True
)

documents = loader.load()

for doc in documents:
    print("Page Content:", doc.page_content)
    print("Metadata:", doc.metadata)
```
Expected Output:

```
Page Content: Artificial Intelligence in Medicine AI is transforming the medical industry by providing personalized medicine solutions.
Metadata: {'author_name': 'John Doe', 'author_email': 'john.doe@example.com', 'database': 'example_db', 'collection': 'articles'}
```

Thank you.


---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-09-17 10:23:17 -04:00
ccurme
6758894af1 docs: update v0.3 integrations table (#26571) 2024-09-17 09:56:04 -04:00
venkatram-dev
6ba3c715b7 doc_fix_chroma_integration (#26565)
- **Description:** Fix typo in docs:integrations:vectorstores:chroma
https://python.langchain.com/docs/integrations/vectorstores/chroma/
- **Issue:** https://github.com/langchain-ai/langchain/issues/26561

2024-09-17 08:17:54 -04:00
Bagatur
d8952b8e8c langchain[patch]: infer mistral provider in init_chat_model (#26557) 2024-09-17 00:35:54 +00:00
Bagatur
31f61d4d7d docs: v0.3 nits (#26556) 2024-09-17 00:14:47 +00:00
Bagatur
99abd254fb docs: clean up init_chat_model (#26551) 2024-09-16 22:08:22 +00:00
Tomaz Bratanic
3bcd641bc1 Add check for prompt based approach in llm graph transformer (#26519) 2024-09-16 15:01:09 -07:00
Bagatur
0bd98c99b3 docs: add sema4 to release table (#26549) 2024-09-16 14:59:13 -07:00
Eugene Yurtsev
8a2f2fc30b docs: what langchain-cli migrate can do (#26547) 2024-09-16 20:10:40 +00:00
SQpgducray
724a53711b docs: Fix missing self argument in _get_docs_with_query method of `CustomSelfQueryRetriever` (#26312)

This commit corrects an issue in the `_get_docs_with_query` method of
the `CustomSelfQueryRetriever` class. The method was incorrectly using
`self.vectorstore.similarity_search_with_score(query, **search_kwargs)`
without including the `self` argument, which is required for proper
method invocation.

The `self` argument is necessary for calling instance methods and
accessing instance attributes. By including `self` in the method call,
we ensure that the method is correctly executed in the context of the
current instance, allowing it to function as intended.

No other changes were made to the method's logic or functionality.


Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-16 20:02:30 +00:00
Eugene Yurtsev
c6a78132d6 docs: show how to use langchain-cli for migration (#26535)
Update v0.3 instructions a bit

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-09-16 15:53:05 -04:00
Bagatur
a319a0ff1d docs: add redirects for tools and lcel (#26541) 2024-09-16 18:06:15 +00:00
Eugene Yurtsev
63c3cc1f1f ci: updates issue and discussion templates (#26542)
Update issue and discussion templates
2024-09-16 17:43:04 +00:00
ccurme
0154c586d3 docs: update integrations table in 0.3 guide (#26536) 2024-09-16 17:41:56 +00:00
Eugene Yurtsev
c2588b334f unstructured: release 0.1.4 (#26540)
Release to work with langchain 0.3
2024-09-16 17:38:38 +00:00
Eugene Yurtsev
8b985a42e9 milvus: 0.1.6 release (#26538)
Release to work with langchain 0.3
2024-09-16 13:33:09 -04:00
Eugene Yurtsev
5b4206acd8 box: 0.2.0 release (#26539)
Release to work with langchain 0.3
2024-09-16 13:32:59 -04:00
ccurme
0592c29e9b qdrant[patch]: release 0.1.4 (#26534)
`langchain-qdrant` imports pydantic but was importing pydantic proper
before 0.3 release:
042e84170b/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py (L5-L8)
2024-09-16 13:04:12 -04:00
Eugene Yurtsev
88891477eb langchain-cli: release 0.0.31 (#26533)
langchain-cli 0.0.31 release
2024-09-16 12:57:24 -04:00
ccurme
88bc15d69b standard-tests[patch]: add async test for structured output (#26527) 2024-09-16 11:15:23 -04:00
Erick Friis
1ab181f514 voyageai: release 0.1.2 (#26512) 2024-09-16 03:11:15 +00:00
Erick Friis
ee4e11379f nomic: release 0.1.3, core 0.3 compat but not required (#26511) 2024-09-15 20:10:25 -07:00
Yoshitaka Fujii
bd42344b0a docs: Update concepts.mdx (#26496)
- Fix comments in Python
- Fix repeated sentences
2024-09-16 01:46:15 +00:00
Erick Friis
9f5960a0aa docs: new algolia index (#26508) 2024-09-15 18:33:42 -07:00
Erick Friis
135afdf4fb docs: most 0.1 redirects too (#26494)
takes redirects from 0.1 docs and factors them into suggested redirects
in 0.3 docs
2024-09-15 18:29:58 +00:00
Erick Friis
4131be63af multiple: 0.3.0 not dev version (#26502) 2024-09-15 18:26:50 +00:00
Bhadresh Savani
f66b7ba32d Update google_search.ipynb (#26420)
Added changes for pip installation
2024-09-14 15:08:40 -07:00
jessicaou
9c6aa3f0b7 broken LangGraph docs link (#26438)
Update broken langgraph link in the README.md file

Co-authored-by: Jess Ou <jessou@jesss-mbp.local.meter>
2024-09-14 15:07:51 -07:00
Nicolas
2240ca2979 docs: Fix Firecrawl v0 version (#26452)
Firecrawl integration is currently on v0 - which is supported until
version 0.0.20.

@rafaelsideguide is working on a pr for v1 but meanwhile we should fix
the docs.
2024-09-14 15:06:15 -07:00
Eugene Yurtsev
77ccb4b1cf cli[patch]: Update the migration script message (#26490)
Update the migration script message
2024-09-14 14:40:35 -04:00
Bagatur
b47f4cfe51 mongodb[minor]: Release 0.2.0 (#26484) 2024-09-13 19:17:36 -07:00
Bagatur
779a008d4e docs: update v3 versions (#26483) 2024-09-14 02:16:54 +00:00
Bagatur
4e6620ecdd chroma[patch]: Release 0.1.4 (#26470) 2024-09-13 17:31:34 -07:00
Bagatur
543a80569c prompty[minor]: Release 0.1.0 (#26481) 2024-09-13 23:32:01 +00:00
ccurme
9c88037dbc huggingface[patch]: xfail test (#26479) 2024-09-13 23:16:06 +00:00
Bagatur
a2bfa41216 azure-dynamic-sessions[minor]: Release 0.2.0 (#26478) 2024-09-13 23:09:48 +00:00
ccurme
8abc7ff55a experimental: release 0.3 (#26477) 2024-09-13 23:07:35 +00:00
Bagatur
6abb23ca97 exa[minor]: Release 0.2.0 (#26476) 2024-09-13 23:04:10 +00:00
ccurme
900115a568 community: release 0.3 (#26472) 2024-09-13 22:55:56 +00:00
Bagatur
17b397ef93 pinecone[minor]: Release 0.2.0 (#26474) 2024-09-13 22:55:35 +00:00
Erick Friis
ca304ae046 robocorp: rm package (now langchain-sema4) (#26471) 2024-09-13 15:54:00 -07:00
Erick Friis
537f6924dc partners/ollama: release 0.2.0 (#26468) 2024-09-13 15:48:48 -07:00
Erick Friis
995dfc6b05 partners/fireworks: release 0.2.0 (#26467) 2024-09-13 22:48:16 +00:00
Erick Friis
832bc834b1 partners/anthropic: release 0.2.0 (#26469)
0.3.0 version was a mistake! not released - bumping version back to
0.2.0 here
2024-09-13 22:47:09 +00:00
Erick Friis
6997731729 partners/anthropic: release 0.3.0 (#26466) 2024-09-13 22:44:11 +00:00
Bagatur
64bfe1ff23 groq[minor]: Release 0.2.0 (#26465) 2024-09-13 22:43:11 +00:00
Erick Friis
58c7414e10 langchain: release 0.3.0 (#26462) 2024-09-13 22:40:37 +00:00
ccurme
125c9896a8 huggingface: release 0.1 (#26463) 2024-09-13 22:39:49 +00:00
Bagatur
f7ae12fa1f openai[minor]: Release 0.2.0 (#26464) 2024-09-13 15:38:10 -07:00
ccurme
d1462badaf text-splitters: release 0.3 (#26460)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-13 22:31:06 +00:00
ccurme
9b30bdceb6 mistralai: release 0.2 (#26458)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-09-13 18:27:51 -04:00
Bagatur
3125a89198 infra: fix min version (#26461) 2024-09-13 22:25:22 +00:00
Bagatur
44791ce131 infra: rm pydantic from min version test (#26459) 2024-09-13 15:22:28 -07:00
Bagatur
fa8e0d90de docs: update version docs (#26457) 2024-09-13 22:20:24 +00:00
Bagatur
222caaebdd infra: fix release (#26455) 2024-09-13 15:01:36 -07:00
Erick Friis
d46ab19954 core: release 0.3.0 (#26453) 2024-09-13 21:45:45 +00:00
Erick Friis
c2a3021bb0 multiple: pydantic 2 compatibility, v0.3 (#26443)
Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Dan O'Donovan <dan.odonovan@gmail.com>
Co-authored-by: Tom Daniel Grande <tomdgrande@gmail.com>
Co-authored-by: Grande <Tom.Daniel.Grande@statsbygg.no>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: ZhangShenao <15201440436@163.com>
Co-authored-by: Friso H. Kingma <fhkingma@gmail.com>
Co-authored-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Nuno Campos <nuno@langchain.dev>
Co-authored-by: Morgante Pell <morgantep@google.com>
2024-09-13 14:38:45 -07:00
Bagatur
d9813bdbbc openai[patch]: Release 0.1.25 (#26439) 2024-09-13 12:00:01 -07:00
liuhetian
7fc9e99e21 openai[patch]: get output_type when using with_structured_output (#26307)
- This allows pydantic to correctly resolve the annotations necessary when
using OpenAI's new `json_schema` param.

Resolves issue: #26250

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-09-13 11:42:01 -07:00
Bagatur
0f2b32ffa9 core[patch]: Release 0.2.40 (#26435) 2024-09-13 09:57:09 -07:00
Bagatur
e32adad17a community[patch]: Release 0.2.17 (#26432) 2024-09-13 09:56:39 -07:00
langchain-infra
8a02fd9c01 core: add additional import mappings to loads (#26406)
Support using additional import mapping. This allows users to override
old mappings/add new imports to the loads function.
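
A hedged usage sketch (the parameter name comes from this PR's description; the mapping paths shown are illustrative):

```python
from langchain_core.load import loads

def load_with_custom_mapping(serialized_json: str):
    # additional_import_mappings maps a serialized namespace path to the
    # (module, class) it should be imported from; the names below are made up.
    return loads(
        serialized_json,
        additional_import_mappings={
            ("my_pkg", "chat_models", "MyChatModel"): ("my_pkg.chat_models", "MyChatModel"),
        },
    )
```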

2024-09-13 09:39:58 -07:00
Erick Friis
1d98937e8d partners/openai: release 0.1.24 (#26417) 2024-09-12 21:54:13 -07:00
Harrison Chase
28ad244e77 community, openai: support nested dicts (#26414)
needed for thinking tokens

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-12 21:47:47 -07:00
Erick Friis
c0dd293f10 partners/groq: release 0.1.10 (#26393) 2024-09-12 17:41:11 +00:00
Erick Friis
54c85087e2 groq: add back streaming tool calls (#26391)
The API no longer throws an error.

https://console.groq.com/docs/tool-use#streaming
2024-09-12 10:29:45 -07:00
jessicaou
396c0aee4d docs: Adding LC Academy links (#26164)

---------

Co-authored-by: Jess Ou <jessou@jesss-mbp.local.meter>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-11 23:37:17 +00:00
Bagatur
feb351737c core[patch]: fix empty OpenAI tools when strict=True (#26287)
Fix #26232

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-11 16:06:03 -07:00
William FH
d87feb1b04 [Docs] Correct the admonition explaining min langchain-anthropic version in doc (#26359)
0.1.15 instead of just 0.1.5
2024-09-11 23:03:42 +00:00
ccurme
398718e1cb core[patch]: fix regression in convert_to_openai_tool with instances of Tool (#26327)
```python
from langchain_core.tools import Tool
from langchain_core.utils.function_calling import convert_to_openai_tool

def my_function(x: int) -> int:
    return x + 2

tool = Tool(
    name="tool_name",
    func=my_function,
    description="test description",
)
convert_to_openai_tool(tool)
```

Current:
```
{'type': 'function',
 'function': {'name': 'tool_name',
  'description': 'test description',
  'parameters': {'type': 'object',
   'properties': {'args': {'type': 'array', 'items': {}},
    'config': {'type': 'object',
     'properties': {'tags': {'type': 'array', 'items': {'type': 'string'}},
      'metadata': {'type': 'object'},
      'callbacks': {'anyOf': [{'type': 'array', 'items': {}}, {}]},
      'run_name': {'type': 'string'},
      'max_concurrency': {'type': 'integer'},
      'recursion_limit': {'type': 'integer'},
      'configurable': {'type': 'object'},
      'run_id': {'type': 'string', 'format': 'uuid'}}},
    'kwargs': {'type': 'object'}},
   'required': ['config']}}}
```

With this fix:
```
{'type': 'function',
 'function': {'name': 'tool_name',
  'description': 'test description',
  'parameters': {'properties': {'__arg1': {'title': '__arg1',
     'type': 'string'}},
   'required': ['__arg1'],
   'type': 'object'}}}
```
2024-09-11 15:51:10 -04:00
이규민
7feae62ad7 core[patch]: Support non ASCII characters in tool output if user doesn't output string (#26319)
### Simple modification
core: add support for non-English characters in tool output

Target issue is #26315; the same issue exists on langgraph:
https://github.com/langchain-ai/langgraph/issues/1504
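
A hedged guess at the mechanism (the actual patch may differ): serializing non-string tool output with `ensure_ascii=False` keeps non-English characters readable instead of escaping them:

```python
import json

result = {"인사": "안녕하세요"}  # a non-string tool output containing Korean text
content = json.dumps(result, ensure_ascii=False)
# '{"인사": "안녕하세요"}' rather than '{"\\uc778\\uc0ac": ...}'
```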
2024-09-11 15:21:00 +00:00
William FH
b993172702 Keyword-like runnable config (#26295) 2024-09-11 07:44:47 -07:00
Bagatur
17659ca2cd core[patch]: Release 0.2.39 (#26279) 2024-09-10 20:11:27 +00:00
Nuno Campos
212c688ee0 core[minor]: Remove serialized manifest from tracing requests for non-llm runs (#26270)
- This takes a long time to compute, isn't used, and is currently called on
every invocation of every chain/retriever/etc.
2024-09-10 12:58:24 -07:00
ccurme
979232257b huggingface[patch]: add integration tests for embeddings (#26272) 2024-09-10 14:57:16 -04:00
ccurme
4ffd27c4d0 huggingface[patch]: add integration tests (#26269)
Add standard tests for ChatHuggingFace. About half of these fail
currently.
2024-09-10 18:31:51 +00:00
Emad Rad
16d41eab1e docs: typos fixed (#26234)
While going through the chatbot tutorial, I noticed a couple of typos
and grammatical issues. Also, the pip install command for
langchain_community was commented out, but the document mentions
installing it.

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-09-10 00:52:20 +00:00
venkatram-dev
fa229d6c02 docs: fix_typo_llm_chain_tutorial (#26229)
Fix typo in the LLM chain tutorial (docs:tutorials:llm_chain).


Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-10 00:39:29 +00:00
Christophe Bornet
9cf7ae0a52 community: Add docstring for HtmlLinkExtractor (#26213)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-10 00:27:37 +00:00
Christophe Bornet
56580b5fff community: Add docstring for GLiNERLinkExtractor (#26218)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-10 00:27:23 +00:00
Christophe Bornet
e235a572a0 community: Add docstring for KeybertLinkExtractor (#26210)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-10 00:26:29 +00:00
Vadym Barda
bab9de581c core[patch]: wrap mermaid node names w/ markdown in <p> tag (#26235)
This fixes the issue where `__start__` and `__end__` node labels are
being interpreted as markdown, as of the most recent Mermaid update
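
A minimal sketch of the escaping idea (the function name is illustrative):

```python
def escape_mermaid_label(label: str) -> str:
    # Wrapping the raw label in a <p> tag makes Mermaid render it literally,
    # so __start__ is not interpreted as Markdown bold.
    return f"<p>{label}</p>"
```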
2024-09-09 20:11:00 -04:00
miri-bar
3e48c728d5 docs: add ai21 tool calling example (#26199)
Add tool calling example to AI21 docs
2024-09-09 09:34:54 -07:00
Geoffrey HARRAZI
76bce42629 docs: Update Google BigQuery Vector Search with new SQL filter feature introduce in langchain-google-community 1.0.9 (#26184)
Hello,

fix: https://github.com/langchain-ai/langchain/issues/26183

Adding documentation regarding the SQL-like filter for Google BigQuery
Vector Search coming in the next langchain-google-community release (1.0.9).
Note: langchain-google-community==1.0.9 is not yet released.

Question: Is there a way to warn the user in the docs that a feature is
only available after a specific package version?

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-08 18:58:28 +00:00
Matt Hull
bca51ca164 docs: Update func doc strings in tools_human (#26149)
- **Description:** Fix docstrings for two functions that appear to have
docstrings carried over from other functions.
- **Issue:** No issue found reporting the misleading docstrings.
- **Dependencies:** None



---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-08 18:48:24 +00:00
Qasim Khan
fa17b145bb docs: fix typo in graph_constructing tutorial (#26134)
Changed 

> "At a high-level, the steps of constructing a knowledge are from text
are:"

to 

> "At a high-level, the steps of constructing a knowledge graph from
text are:"

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-08 18:46:49 +00:00
Tomaz Bratanic
181e4fc0e0 Add session expired retry to neo4j graph (#26182)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-08 11:40:43 -07:00
Sebastian Cherny
b3c7ed4913 Adding bind_tools in ChatOctoAI (#26168)
The object extends from
langchain_community.chat_models.openai.ChatOpenAI which doesn't have
`bind_tools` defined. I tried extending from
`langchain_openai.ChatOpenAI` in
https://github.com/langchain-ai/langchain/pull/25975 but that PR got
closed because this is not correct.
So adding our own `bind_tools` (for which copying from ChatOpenAI is good
enough for now) will solve the tool calling issue we are having.
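
A hedged sketch of that approach (`convert_to_openai_tool` is real; the method body is illustrative, not the exact ChatOctoAI code):

```python
from langchain_core.utils.function_calling import convert_to_openai_tool

def bind_tools(self, tools, **kwargs):
    """Format tools as OpenAI-style tool schemas and bind them to the model."""
    formatted_tools = [convert_to_openai_tool(tool) for tool in tools]
    return self.bind(tools=formatted_tools, **kwargs)
```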

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-08 18:38:43 +00:00
Malik Ashar Khan
042e84170b Fix typo in ScrapflyLoader documentation (#26117)
This PR fixes a minor typo in the ScrapflyLoader documentation. The word
"passigng" was changed to "passing."

Before: passigng
After: passing

This change improves the clarity and professionalism of the
documentation.

Co-authored-by: Ashar <asharmalik.ds193@gmail.com>
2024-09-08 18:33:01 +00:00
John
97a8e365ec partners/unstructured: update unstructured client version (#26105)
Users are having version conflicts with `unstructured-client` as
described here:

https://unstructuredw-kbe4326.slack.com/archives/C06JJHC9G4U/p1725557970546199?thread_ts=1725035247.162819&cid=C06JJHC9G4U

This PR fixes that issue and bumps the version to "0.1.3" as well, giving
users a clean-slate version to install.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-08 18:32:34 +00:00
Vadym Barda
1b3bd52e0e core[patch]: fix edge labels for mermaid graphs (#26201) 2024-09-08 14:35:25 +00:00
Marcelo Machado
9bd4f1dfa8 docs: small improvement ChatOllama setup description (#26043)
Small improvement on ChatOllama description

---------

Co-authored-by: Marcelo Machado <mmachado@ibm.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-08 00:15:05 +00:00
Leonid Ganeline
2f80d67dc1 docs: integrations reference updates 16 (#26059)
Added missed provider pages and links. Fixed inconsistent formatting.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-08 00:13:53 +00:00
Ikko Eltociear Ashimine
ffdc370200 docs: update agent_executor.ipynb (#26035)
initalize -> initialize

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-08 00:07:26 +00:00
Leonid Ganeline
5052e87d7c docs: integrations reference updates 15 (#25994)
Added missed provider pages and links. Fixed inconsistent formatting.
2024-09-07 16:51:47 -07:00
Erick Friis
6e82d2184b partners/mongodb: release 0.1.9 (#26193) 2024-09-07 23:20:25 +00:00
William FH
262e19b15d infra: Clear cache for env-var checks (#26073) 2024-09-06 21:29:29 +00:00
Brace Sproul
854f37be87 docs[minor]: Add state of agents survey to docs announcement bar (#26167) 2024-09-06 14:28:08 -07:00
ChengZi
a03141ac51 partners[milvus]: fix integration test issues (#26136)
fix some integration test issues:
https://github.com/langchain-ai/langchain/actions/runs/10688447230/job/29628412258

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-06 16:52:36 +00:00
Erick Friis
5c1ebd3086 partners/unstructured: release 0.1.3 (#26119) 2024-09-06 16:22:53 +00:00
Bagatur
de97d50644 core,standard-tests[patch]: add Ser/Des test and update serialization mapping (#26042) 2024-09-04 11:58:36 -07:00
Bagatur
1241a004cb fmt 2024-09-04 11:44:59 -07:00
Bagatur
4ba14ae9e5 fmt 2024-09-04 11:34:59 -07:00
Bagatur
dba308447d fmt 2024-09-04 11:28:04 -07:00
Bagatur
fdf6fbde18 fmt 2024-09-04 11:12:11 -07:00
Bagatur
576574c82c fmt 2024-09-04 11:05:36 -07:00
Bagatur
7bf54636ff make 2024-09-04 10:24:42 -07:00
Bagatur
3ec93c2817 standard-tests[patch]: add Ser/Des test 2024-09-04 10:24:06 -07:00
Friso H. Kingma
af11fbfbf6 langchain_openai: Make sure the response from the async client in the astream method of ChatOpenAI is properly awaited in case of "include_response_headers=True" (#26031)
- **Description:** This is a **one line change**. The
`self.async_client.with_raw_response.create(**payload)` call is not
properly awaited within the `_astream` method. In `_agenerate` this is
already done, but it was likely forgotten in the other method.
  - **Issue:** Not applicable
  - **Dependencies:** No dependencies required.
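
A minimal before/after sketch of the change, abridged from the method body:

```python
# before: inside _astream, the coroutine was never awaited
response = self.async_client.with_raw_response.create(**payload)

# after: properly awaited, matching what _agenerate already does
response = await self.async_client.with_raw_response.create(**payload)
```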


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-09-04 13:26:48 +00:00
ZhangShenao
c812237217 Improvement[Community] Improve args description in api doc of DocArrayInMemorySearch (#26024)
- Add missing arg
- Remove redundant arg
2024-09-04 09:26:26 -04:00
Tomaz Bratanic
c649b449d7 Add the option to ignore structured output method to LLM graph transf… (#26013)
Open source models like Llama 3.1 have function calling, but it's not
that great. Therefore, we introduce the option to ignore the model's
function calling and just use the prompt-based approach.
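
A minimal sketch of opting out; note the flag name `ignore_tool_usage` is our assumption here, not stated in this message:

```python
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# ignore_tool_usage=True (assumed flag name) skips function calling and
# uses the prompt-based extraction path instead
transformer = LLMGraphTransformer(llm=llm, ignore_tool_usage=True)
```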
2024-09-04 09:15:43 -04:00
Bagatur
34fc00aff1 openai[patch]: add back azure embeddings api_version alias (#26003) 2024-09-03 17:27:10 -07:00
Bagatur
4b99426a4f openai[patch]: add back azure embeddings api_version alias 2024-09-03 17:25:03 -07:00
Eugene Yurtsev
bc3b851f08 openai[patch]: Upgrade @root_validators in preparation for pydantic 2 migration (#25491)
* Upgrade @root_validator in openai pkg
* Ran notebooks for all but AzureAI embeddings

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-09-03 14:42:24 -07:00
Tom Daniel Grande
0207dc1431 community: delta in openai choice can be None, creates handler for that (#25954)
Thank you for contributing to LangChain!

- [x] **PR title**

- [x] **PR message**:

     **Description:** adds a handler for the case when the delta choice is None

     **Issue:** Fixes #25951
     **Dependencies:** Not applicable


- [x] **Add tests and docs**: Not applicable

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
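
A minimal sketch of the kind of guard described, not the exact patch:

```python
def iter_deltas(stream):
    """Yield streaming deltas, skipping chunks whose delta is None."""
    for chunk in stream:
        choice = chunk["choices"][0]
        if choice.get("delta") is None:
            continue  # previously this raised; the chunk is now skipped
        yield choice["delta"]
```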


Co-authored-by: Grande <Tom.Daniel.Grande@statsbygg.no>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-03 20:30:03 +00:00
Bagatur
9eb9ff52c0 experimental[patch]: Release 0.0.65 (#25987) 2024-09-03 19:15:48 +00:00
Bagatur
bc3b02651c standard-tests[patch]: test init from env vars (#25983) 2024-09-03 19:05:39 +00:00
Bagatur
ac922105ad infra: rm ai21 from CI (#25984) 2024-09-03 11:47:27 -07:00
Bagatur
0af447c90b community[patch]: Release 0.2.16 (#25982) 2024-09-03 18:34:18 +00:00
Dan O'Donovan
f49da71e87 community[patch]: change default Neo4j username/password (#25226)
**Description:**

Change the default Neo4j username/password (when not supplied as
environment variable or in code) from `None` to `""`.

Neo4j has an option to [disable
auth](https://neo4j.com/docs/operations-manual/current/configuration/configuration-settings/#config_dbms.security.auth_enabled)
which is helpful when developing. When auth is disabled, the username /
password through the `neo4j` module should be `""` (ie an empty string).

Empty strings get marked as false in
`langchain_core.utils.env.get_from_dict_or_env` -- changing this code /
behaviour would have a wide impact and is undesirable.

In order to both _allow_ access to Neo4j with auth disabled and _not_
impact `langchain_core` this patch is presented. The downside would be
that if a user forgets to set NEO4J_USERNAME or NEO4J_PASSWORD they
would see an invalid credentials error rather than missing credentials
error. This could be mitigated but would result in a less elegant patch!

**Issue:**
Fix issue where langchain cannot communicate with Neo4j if Neo4j auth is
disabled.
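
A minimal sketch of the resulting behavior, assuming a local server with auth disabled (URL illustrative):

```python
from langchain_community.graphs import Neo4jGraph

# With dbms.security.auth_enabled=false on the server, username and
# password now default to "" and the connection succeeds without
# explicit credentials.
graph = Neo4jGraph(url="bolt://localhost:7687")
```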
2024-09-03 11:24:18 -07:00
Bagatur
035d8cf51b milvus[patch]: Release 0.1.5 (#25981) 2024-09-03 18:19:51 +00:00
Bagatur
1dfc8c01af langchain[patch]: Release 0.2.16 (#25977) 2024-09-03 18:10:21 +00:00
Bagatur
fb642e1e27 text-splitters[patch]: Release 0.2.4 (#25979) 2024-09-03 18:09:43 +00:00
Bagatur
7457949619 mistralai[patch]: Release 0.1.13 (#25978) 2024-09-03 18:03:15 +00:00
Bagatur
0c69c9fb3f core[patch]: Release 0.2.38 (#25974) 2024-09-03 17:31:41 +00:00
Eugene Yurtsev
fa8402ea09 core[minor]: Add support for multiple env keys for secrets_from_env (#25971)
- Add support to look up secret using more than one env variable
- Add overload to help mypy

Needed for https://github.com/langchain-ai/langchain/pull/25491
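
A minimal sketch of the new capability, assuming the `secret_from_env` factory (class and env var names illustrative):

```python
from pydantic import BaseModel, Field, SecretStr

from langchain_core.utils import secret_from_env


class MyClient(BaseModel):
    # Env vars are tried in order; the first one that is set wins.
    api_key: SecretStr = Field(
        default_factory=secret_from_env(
            ["MY_API_KEY", "MY_LEGACY_API_KEY"], default=""
        )
    )
```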
2024-09-03 11:39:54 -04:00
Maximilian Schulz
fdeaff4149 langchain-mistralai - make base URL possible to set via env variable for ChatMistralAI (#25956)
Thank you for contributing to LangChain!


**Description:** 

Similar to other packages (`langchain_openai`, `langchain_anthropic`), it
would be beneficial if the `ChatMistralAI` model could fetch the API
base URL from the environment.

This PR allows this via the following order:
- provided value
- then whatever `MISTRAL_API_URL` is set to
- then whatever `MISTRAL_BASE_URL` is set to
- if `None`, then the default is `"https://api.mistral.com/v1"`
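
A minimal sketch of the lookup in practice (the proxy URL is illustrative):

```python
import os

from langchain_mistralai import ChatMistralAI

# No endpoint is passed in code, so the env var is picked up instead.
os.environ["MISTRAL_BASE_URL"] = "https://mistral-proxy.example.com/v1"
llm = ChatMistralAI(model="mistral-large-latest")
```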


- [x] **Add tests and docs**:

Added unit tests. Docs feel unnecessary, as this just aligns with
other packages that do the same.


- [x] **Lint and test**: 


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-09-03 14:32:35 +00:00
Jorge Piedrahita Ortiz
c7154a4045 community: sambastudio llms api v2 support (#25063)
- **Description:** SambaStudio GenericV2 API support
2024-09-03 10:18:15 -04:00
ZhangShenao
8d784db107 docs: Add missing args in api doc of WebResearchRetriever (#25949)
Add missing args in api doc of `WebResearchRetriever`
2024-09-03 01:24:23 -07:00
Bagatur
da113f6363 docs: ChatOpenAI.with_structured_output nits (#25952) 2024-09-03 08:20:58 +00:00
Bagatur
5b99bb2437 docs: fix bullet list spacing (#25950)
Fix #25935
2024-09-03 08:12:58 +00:00
Yuki Watanabe
ef329f6819 docs: Fix databricks doc (#25941)
https://github.com/langchain-ai/langchain/pull/25929 broke the layout
because of missing `:::` for the caution clause.

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
2024-09-02 18:17:47 -07:00
Bagatur
f872c50b3f docs: installation nits (#24484) 2024-09-03 01:05:08 +00:00
Isaac Francisco
4833375200 community[patch]: added option to change how duckduckgosearchresults tool converts api outputs into string (#22580)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-09-02 22:42:19 +00:00
JonZeolla
78ff51ce83 community[patch]: update the default hf bge embeddings (#22627)
**Description:** This updates the langchain_community > huggingface >
default bge embeddings ([the current default recommends this
change](https://huggingface.co/BAAI/bge-large-en))
**Issue:** None
**Dependencies:** None
**Twitter handle:** @jonzeolla

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-09-02 22:10:21 +00:00
Leonid Ganeline
150251fd49 docs: integrations reference updates 13 (#25711)
Added missed provider pages and links. Fixed inconsistent formatting.
Added arxiv references to docstrings.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-09-02 22:08:50 +00:00
Yuki Watanabe
64dfdaa924 docs: Add Databricks integration (#25929)
Updating the gateway pages in the documentation to name the
`langchain-databricks` integration.

---------

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-09-02 22:05:40 +00:00
Bagatur
933bc0d6ff core[patch]: support additional kwargs on StructuredPrompt (#25645) 2024-09-02 14:55:26 -07:00
Yash Parmar
51dae57357 community[minor]: jina search tools integrating (jina reader) (#23339)
- **PR title**: "community: add Jina Search tool"
- **Description:** Added the Jina Search tool for querying the Jina
search API. This includes the implementation of the JinaSearchAPIWrapper
and the JinaSearch tool, along with a Jupyter notebook example
demonstrating its usage.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** [Twitter
handle](https://x.com/yashp3020?t=7wM0gQ7XjGciFoh9xaBtqA&s=09)


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-09-02 14:52:14 -07:00
Matthew DeGenaro
66828f4ecc text-splitters[patch]: Modified SpacyTextSplitter to fully keep whitespace when strip_whitespace is false (#23272)
Previously, regardless of whether strip_whitespace was set to
true or false, the text-splitting method in the SpacyTextSplitter class used
`sent.text` to get each sentence. I modified this to include a ternary
so that if strip_whitespace is false, it uses `sent.text_with_ws`.
I also modified the pyproject.toml to include the spacy pipeline package
and to lock the numpy version, as higher versions break spacy.
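
A minimal sketch of the behavior difference, assuming spacy and its `en_core_web_sm` pipeline are installed:

```python
from langchain_text_splitters import SpacyTextSplitter

text = "First sentence.   Second sentence.\n"

# With the fix, inter-sentence whitespace is fully preserved when
# strip_whitespace=False (previously it was dropped regardless).
splitter = SpacyTextSplitter(strip_whitespace=False)
chunks = splitter.split_text(text)
```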

- **Issue:** N/a
- **Dependencies:** None
2024-09-02 21:15:56 +00:00
Qingchuan Hao
3145995ed9 community[patch]: BingSearchResults returns raw snippets as artifact (#23304)
Returns an array of results, which is more structured and easier to use
downstream.

Tested locally:
```
resp = tool.invoke("what's the weather like in Shanghai?")
for item in resp:
    print(item)
```
returns
```
{'snippet': '<b>Shanghai</b>, <b>Shanghai</b>, China <b>Weather</b> Forecast, with current conditions, wind, air quality, and what to expect for the next 3 days.', 'title': 'Shanghai, Shanghai, China Weather Forecast | AccuWeather', 'link': 'https://www.accuweather.com/en/cn/shanghai/106577/weather-forecast/106577'}
{'snippet': '5. 99 / 87 °F. 6. 99 / 86 °F. 7. Detailed forecast for 14 days. Need some help? Current <b>weather</b> <b>in Shanghai</b> and forecast for today, tomorrow, and next 14 days.', 'title': 'Weather for Shanghai, Shanghai Municipality, China - timeanddate.com', 'link': 'https://www.timeanddate.com/weather/china/shanghai'}
{'snippet': '<b>Shanghai</b> - <b>Weather</b> warnings issued 14-day forecast. <b>Weather</b> warnings issued. Forecast - <b>Shanghai</b>. Day by day forecast. Last updated Friday at 01:05. Tonight, ... Temperature feels <b>like</b> 34 ...', 'title': 'Shanghai - BBC Weather', 'link': 'https://www.bbc.com/weather/1796236'}
{'snippet': 'Current <b>weather</b> <b>in Shanghai</b>, <b>Shanghai</b>, China. Check current conditions <b>in Shanghai</b>, <b>Shanghai</b>, China with radar, hourly, and more.', 'title': 'Shanghai, Shanghai, China Current Weather | AccuWeather', 'link': 'https://www.accuweather.com/en/cn/shanghai/106577/current-weather/106577'}
13-Day Beijing, Xi&#39;an, Chengdu, <b>Shanghai</b> Chinese Language and Culture Immersion Tour. <b>Shanghai</b> in September. Average daily temperature range: 23–29°C (73–84°F) Average rainy days: 10. Average sunny days: 20. September ushers in pleasant autumn <b>weather</b>, making it one of the best months to visit <b>Shanghai</b>. <b>Weather</b> in <b>Shanghai</b>: Climate, Seasons, and Average Monthly Temperature. <b>Shanghai</b> has a subtropical maritime monsoon climate, meaning high humidity and lots of rain. Hot muggy summers, cool falls, cold winters with little snow, and warm springs are the norm. Midsummer through early fall is the best time to visit <b>Shanghai</b>. <b>Shanghai</b>, <b>Shanghai</b>, China <b>Weather</b> Forecast, with current conditions, wind, air quality, and what to expect for the next 3 days. 1165. 45.9. 121. Winter, from December to February, is quite cold: the average January temperature is 5 °C (41 °F). There may be cold periods, with highs around 5 °C (41 °F) or below, and occasionally, even snow can fall. The temperature dropped to -10 °C (14 °F) in January 1977 and to -7 °C (19.5 °F) in January 2016. 5. 99 / 87 °F. 6. 99 / 86 °F. 7. Detailed forecast for 14 days. Need some help? Current <b>weather</b> in <b>Shanghai</b> and forecast for today, tomorrow, and next 14 days. Everything you need to know about today&#39;s <b>weather</b> in <b>Shanghai</b>, <b>Shanghai</b>, China. High/Low, Precipitation Chances, Sunrise/Sunset, and today&#39;s Temperature History. <b>Shanghai</b> - <b>Weather</b> warnings issued 14-day forecast. <b>Weather</b> warnings issued. Forecast - <b>Shanghai</b>. Day by day forecast. Last updated Friday at 01:05. Tonight, ... Temperature feels <b>like</b> 34 ... <b>Shanghai</b> 14 Day Extended Forecast. <b>Weather</b> Today <b>Weather</b> Hourly 14 Day Forecast Yesterday/Past <b>Weather</b> Climate (Averages) Currently: 84 °F. Passing clouds. (<b>Weather</b> station: <b>Shanghai</b> Hongqiao Airport, China). See more current <b>weather</b>. Current <b>weather</b> in <b>Shanghai</b>, <b>Shanghai</b>, China. Check current conditions in <b>Shanghai</b>, <b>Shanghai</b>, China with radar, hourly, and more. <b>Shanghai</b> <b>Weather</b> Forecasts. <b>Weather Underground</b> provides local &amp; long-range <b>weather</b> forecasts, weatherreports, maps &amp; tropical <b>weather</b> conditions for the <b>Shanghai</b> area.
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-09-02 21:11:32 +00:00
venkatram-dev
a09e2afee4 typo_summarization_tutorial (#25938)
- **PR title**:
docs: fix typo in summarization_tutorial

- **PR message**:
docs: fix a couple of typos in summarization_tutorial

2024-09-02 13:44:11 -07:00
Alexander KIRILOV
6a8f8a56ac community[patch]: added content_columns option to CSVLoader (#23809)
**Description:** 
Adding a new option to the CSVLoader that allows us to explicitly
specify the columns used for generating the Document content. Currently
these are implicitly set to "all fields not part of the
metadata_columns".

In some cases, however, it is useful to have a field both as metadata
and as part of the document content.
2024-09-02 20:25:53 +00:00
Bruno Alvisio
ab527027ac community: Resolve refs recursively when generating openai_fn from OpenAPI spec (#19002)
- **Description:** This PR is intended to improve the generation of
payloads for OpenAI functions when converting from an OpenAPI spec file.
The solution is to recursively resolve `$refs`.
Currently when converting OpenAPI specs into OpenAI functions using
`openapi_spec_to_openai_fn`, if the schemas have nested references, the
generated functions contain `$ref` that causes the LLM to generate
payloads with an incorrect schema.
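
A minimal sketch of the idea (illustrative, not the PR's code; it assumes there are no circular `$ref`s):

```python
def resolve_refs(node, root):
    """Recursively inline '#/...' JSON pointers found under '$ref' keys."""
    if isinstance(node, dict):
        if "$ref" in node:
            target = root
            for part in node["$ref"].lstrip("#/").split("/"):
                target = target[part]
            return resolve_refs(target, root)
        return {key: resolve_refs(value, root) for key, value in node.items()}
    if isinstance(node, list):
        return [resolve_refs(item, root) for item in node]
    return node
```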

For example, for the following OpenAPI spec:

```
text = """
{
  "openapi": "3.0.3",
  "info": {
    "title": "Swagger Petstore - OpenAPI 3.0",
    "termsOfService": "http://swagger.io/terms/",
    "contact": {
      "email": "apiteam@swagger.io"
    },
    "license": {
      "name": "Apache 2.0",
      "url": "http://www.apache.org/licenses/LICENSE-2.0.html"
    },
    "version": "1.0.11"
  },
  "externalDocs": {
    "description": "Find out more about Swagger",
    "url": "http://swagger.io"
  },
  "servers": [
    {
      "url": "https://petstore3.swagger.io/api/v3"
    }
  ],
  "tags": [
    {
      "name": "pet",
      "description": "Everything about your Pets",
      "externalDocs": {
        "description": "Find out more",
        "url": "http://swagger.io"
      }
    },
    {
      "name": "store",
      "description": "Access to Petstore orders",
      "externalDocs": {
        "description": "Find out more about our store",
        "url": "http://swagger.io"
      }
    },
    {
      "name": "user",
      "description": "Operations about user"
    }
  ],
  "paths": {
    "/pet": {
      "post": {
        "tags": [
          "pet"
        ],
        "summary": "Add a new pet to the store",
        "description": "Add a new pet to the store",
        "operationId": "addPet",
        "requestBody": {
          "description": "Create a new pet in the store",
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/Pet"
              }
            }
          },
          "required": true
        },
        "responses": {
          "200": {
            "description": "Successful operation",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/Pet"
                }
              }
            }
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "Tag": {
        "type": "object",
        "properties": {
          "id": {
            "type": "integer",
            "format": "int64"
          },
          "model_type": {
            "type": "number"
          }
        }
      },
      "Category": {
        "type": "object",
        "required": [
          "model",
          "year",
          "age"
        ],
        "properties": {
          "year": {
            "type": "integer",
            "format": "int64",
            "example": 1
          },
          "model": {
            "type": "string",
            "example": "Ford"
          },
          "age": {
            "type": "integer",
            "example": 42
          }
        }
      },
      "Pet": {
        "required": [
          "name"
        ],
        "type": "object",
        "properties": {
          "id": {
            "type": "integer",
            "format": "int64",
            "example": 10
          },
          "name": {
            "type": "string",
            "example": "doggie"
          },
          "category": {
            "$ref": "#/components/schemas/Category"
          },
          "tags": {
            "type": "array",
            "items": {
              "$ref": "#/components/schemas/Tag"
            }
          },
          "status": {
            "type": "string",
            "description": "pet status in the store",
            "enum": [
              "available",
              "pending",
              "sold"
            ]
          }
        }
      }
    }
  }
}
"""
```

Executing:
```
spec = OpenAPISpec.from_text(text)
pet_openai_functions, pet_callables = openapi_spec_to_openai_fn(spec)
response = model.invoke("Create a pet named Scott", functions=pet_openai_functions)
```

`pet_openai_functions` contains unresolved `$refs`:

```
[
  {
    "name": "addPet",
    "description": "Add a new pet to the store",
    "parameters": {
      "type": "object",
      "properties": {
        "json": {
          "properties": {
            "id": {
              "type": "integer",
              "schema_format": "int64",
              "example": 10
            },
            "name": {
              "type": "string",
              "example": "doggie"
            },
            "category": {
              "ref": "#/components/schemas/Category"
            },
            "tags": {
              "items": {
                "ref": "#/components/schemas/Tag"
              },
              "type": "array"
            },
            "status": {
              "type": "string",
              "enum": [
                "available",
                "pending",
                "sold"
              ],
              "description": "pet status in the store"
            }
          },
          "type": "object",
          "required": [
            "name",
            "photoUrls"
          ]
        }
      }
    }
  }
]
```

and the generated JSON has an incorrect schema (e.g. category is filled
with `id` and `name` instead of `model`, `year` and `age`):

```
{
  "id": 1,
  "name": "Scott",
  "category": {
    "id": 1,
    "name": "Dogs"
  },
  "tags": [
    {
      "id": 1,
      "name": "tag1"
    }
  ],
  "status": "available"
}
```

With this change, `pet_openai_functions` becomes:

```
[
  {
    "name": "addPet",
    "description": "Add a new pet to the store",
    "parameters": {
      "type": "object",
      "properties": {
        "json": {
          "properties": {
            "id": {
              "type": "integer",
              "schema_format": "int64",
              "example": 10
            },
            "name": {
              "type": "string",
              "example": "doggie"
            },
            "category": {
              "properties": {
                "year": {
                  "type": "integer",
                  "schema_format": "int64",
                  "example": 1
                },
                "model": {
                  "type": "string",
                  "example": "Ford"
                },
                "age": {
                  "type": "integer",
                  "example": 42
                }
              },
              "type": "object",
              "required": [
                "model",
                "year",
                "age"
              ]
            },
            "tags": {
              "items": {
                "properties": {
                  "id": {
                    "type": "integer",
                    "schema_format": "int64"
                  },
                  "model_type": {
                    "type": "number"
                  }
                },
                "type": "object"
              },
              "type": "array"
            },
            "status": {
              "type": "string",
              "enum": [
                "available",
                "pending",
                "sold"
              ],
              "description": "pet status in the store"
            }
          },
          "type": "object",
          "required": [
            "name"
          ]
        }
      }
    }
  }
]
```

and the JSON generated by the LLM is:
```
{
  "id": 1,
  "name": "Scott",
  "category": {
    "year": 2022,
    "model": "Dog",
    "age": 42
  },
  "tags": [
    {
      "id": 1,
      "model_type": 1
    }
  ],
  "status": "available"
}
```

which has the intended schema.

    - **Twitter handle:** @brunoalvisio

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-09-02 13:17:39 -07:00
Nuno Campos
464dae8ac2 core: Include global variables in variables found by get_function_nonlocals (#25936)
2024-09-02 11:49:25 -07:00
Luiz F. G. dos Santos
36bbdc776e community: fix bug to support for file_search tool from OpenAI (#25927)
- **Description:** The function `_is_assistants_builtin_tool` didn't have
support for `file_search` from OpenAI. This was creating a conflict and
blocking its usage. The OpenAI Assistants API changed from `retrieval` to
`file_search`.
  
  The following code
  
  ```
              agent = OpenAIAssistantV2Runnable.create_assistant(
                name="Data Analysis Assistant",
                instructions=prompt[0].content,
                tools={'type': 'file_search'},
                model=self.chat_config.connection.deployment_name,
                client=llm,
                as_agent=True,
                tool_resources={
                    "file_search": {
                        "vector_store_ids": vector_store_id
                        }
                    }
                )
```

Was throwing the following error

```
Traceback (most recent call last):
File
"/Users/l.guedesdossantos/Documents/codes/shellai-nlp-backend/app/chat/chat_decorators.py",
line 500, in get_response
    return await super().get_response(post, context)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/Users/l.guedesdossantos/Documents/codes/shellai-nlp-backend/app/chat/chat_decorators.py",
line 96, in get_response
    response = await self.inner_chat.get_response(post, context)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/Users/l.guedesdossantos/Documents/codes/shellai-nlp-backend/app/chat/chat_decorators.py",
line 96, in get_response
    response = await self.inner_chat.get_response(post, context)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/Users/l.guedesdossantos/Documents/codes/shellai-nlp-backend/app/chat/chat_decorators.py",
line 96, in get_response
    response = await self.inner_chat.get_response(post, context)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  [Previous line repeated 4 more times]
File
"/Users/l.guedesdossantos/Documents/codes/shellai-nlp-backend/app/chat/azure_open_ai_chat.py",
line 147, in get_response
chain = chain_factory.get_chain(prompts, post.conversation.id,
overrides, context)

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/Users/l.guedesdossantos/Documents/codes/shellai-nlp-backend/app/llm_connections/chains.py",
line 1324, in get_chain
    agent = OpenAIAssistantV2Runnable.create_assistant(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/Users/l.guedesdossantos/anaconda3/envs/shell-e/lib/python3.11/site-packages/langchain_community/agents/openai_assistant/base.py",
line 256, in create_assistant
tools=[_get_assistants_tool(tool) for tool in tools], # type: ignore
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/Users/l.guedesdossantos/anaconda3/envs/shell-e/lib/python3.11/site-packages/langchain_community/agents/openai_assistant/base.py",
line 256, in <listcomp>
tools=[_get_assistants_tool(tool) for tool in tools], # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/Users/l.guedesdossantos/anaconda3/envs/shell-e/lib/python3.11/site-packages/langchain_community/agents/openai_assistant/base.py",
line 119, in _get_assistants_tool
    return convert_to_openai_tool(tool)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/Users/l.guedesdossantos/anaconda3/envs/shell-e/lib/python3.11/site-packages/langchain_core/utils/function_calling.py",
line 255, in convert_to_openai_tool
    function = convert_to_openai_function(tool)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File
"/Users/l.guedesdossantos/anaconda3/envs/shell-e/lib/python3.11/site-packages/langchain_core/utils/function_calling.py",
line 230, in convert_to_openai_function
    raise ValueError(
ValueError: Unsupported function

{'type': 'file_search'}

Functions must be passed in as Dict, pydantic.BaseModel, or Callable. If
they're a dict they must either be in OpenAI function format or valid
JSON schema with top-level 'title' and 'description' keys.
```

With the proposed changes, this is fixed and the function supports `file_search`.
  This was the only place missing that support.
  
  Reference doc
  https://platform.openai.com/docs/assistants/tools/file-search
  
  
  - **Twitter handle:** luizf0992

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-09-02 18:21:51 +00:00
Jacob Lee
f49cce739b 👥 Update LangChain people data (#25917)
👥 Update LangChain people data

Co-authored-by: github-actions <github-actions@github.com>
2024-09-02 11:14:35 -07:00
Leonid Ganeline
96b99a5022 docs: integrations google missed references (#25923)
Added missed integration links. Fixed inconsistent formatting.
2024-09-02 11:14:18 -07:00
Leonid Ganeline
086556d466 docs: integrations reference updates 14 (#25928)
Added missed provider pages and links. Fixed inconsistent formatting.
2024-09-02 11:07:45 -07:00
Tyler Wray
1ff8c36aa6 docs: fix pgvector link (#25930)
- **Description:** pg_vector link is 404'ing. This fixes it.
2024-09-02 18:03:19 +00:00
xander-art
6cd452d985 Feature/update hunyuan (#25779)
Description: 
    - Add system templates and user templates in integration testing
    - Initialize the response id field value to request_id
    - Adjust the default model to hunyuan-pro
    - Remove the default values of Temperature and TopP
    - Add SystemMessage

All the integration tests have passed.
1. First run of the integration tests:
<img width="1359" alt="71ca77a2-e9be-4af6-acdc-4d665002bd9b"
src="https://github.com/user-attachments/assets/9298dc3a-aa26-4bfa-968b-c011a4e699c9">

2. Second run of the integration tests:
<img width="1501" alt="image"
src="https://github.com/user-attachments/assets/61335416-4a67-4840-bb89-090ba668e237">

Issue: None
Dependencies: None
Twitter handle: None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-09-02 12:55:08 +00:00
Yuwen Hu
566e9ba164 community: add Intel GPU support to ipex-llm llm integration (#22458)
**Description:** [IPEX-LLM](https://github.com/intel-analytics/ipex-llm)
is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local
PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low
latency. This PR adds Intel GPU support to `ipex-llm` llm integration.
**Dependencies:** `ipex-llm`
**Contribution maintainer**: @ivy-lv11 @Oscilloscope98
**tests and docs**: 
- Add: langchain/docs/docs/integrations/llms/ipex_llm_gpu.ipynb
- Update: langchain/docs/docs/integrations/llms/ipex_llm_gpu.ipynb
- Update: langchain/libs/community/tests/llms/test_ipex_llm.py

---------

Co-authored-by: ivy-lv11 <zhicunlv@gmail.com>
2024-09-02 08:49:08 -04:00
Bagatur
d19e074374 core[patch]: handle serializable fields that cant be converted to bool (#25903) 2024-09-01 16:44:33 -07:00
Kirushikesh DB
7f857a02d5 docs: HuggingFace pipeline returns the prompt if return_full_text is not set (#25916)
**Description:**
The current documentation for using HuggingFace pipelines with LangChain
needs to set `return_full_text=False`; otherwise, the pipeline by default
returns both the prompt and the response as output.


Code to reproduce:
```python
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline
from langchain_core.messages import (
    HumanMessage,
    SystemMessage,
)

llm = HuggingFacePipeline.from_model_id(
    model_id="microsoft/Phi-3.5-mini-instruct",
    task="text-generation",
    pipeline_kwargs=dict(
        max_new_tokens=512,
        do_sample=False,
        repetition_penalty=1.03,
        # return_full_text=False
    ),
    device=0
)

chat_model = ChatHuggingFace(llm=llm)

messages = [
    SystemMessage(content="You're a helpful assistant"),
    HumanMessage(
        content="What happens when an unstoppable force meets an immovable object?"
    ),
]

ai_msg = chat_model.invoke(messages)
print(ai_msg.content)
```
Output:
```
<|system|>
You're a helpful assistant<|end|>
<|user|>
What happens when an unstoppable force meets an immovable object?<|end|>
<|assistant|>
 The scenario of an "unstoppable force" meeting an "immovable object" is a classic paradox that has puzzled philosophers, scientists, and thinkers for centuries. In physics, however, there are no such things as truly unstoppable forces or immovable objects because all physical entities have mass and interact with other masses through fundamental forces (like gravity).

When we consider the laws of motion, particularly Newton's third law which states that for every action, there is an equal and opposite reaction, it becomes clear that if one were to exist, the other would necessarily be negated by the interaction. For example, if you push against a solid wall with great force, the wall exerts an equal and opposite force back on you, preventing your movement.

In theoretical discussions, this paradox often serves as a thought experiment to explore concepts like determinism versus free will, the limits of physical laws, and the nature of reality itself. However, in practical terms, any force applied to an object will result in some form of deformation, transfer of energy, or movement, depending on the properties of both the force and the object.

So while the idea of an unstoppable force and an immovable object remains a fascinating philosophical conundrum, it does not hold up under the scrutiny of physical laws as we understand them.
```

---------

Co-authored-by: Kirushikesh D B kirushi@ibm.com <kirushi@cccxl012.pok.ibm.com>
2024-09-01 13:52:20 -07:00
Yuxi Zheng
38dfde6946 docs: fix typo in Cassandra for ./cookbook/cql_agent.ipynb (#25922)
Co-authored-by: "syd" <zheng.yuxi@outlook.com>
2024-09-01 20:51:47 +00:00
Borahm Lee
9cdb99bd60 docs: remove unused imports in Tutorials Basics (#25919)
## Description

- `List` is not used, so the unnecessary import is removed.
2024-09-01 20:51:00 +00:00
Erick Friis
8732cfc6ef docs: review process gh discussion (#25921) 2024-09-01 17:20:46 +00:00
Erick Friis
08b9715845 docs: pr review process (#25899)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-09-01 16:51:12 +00:00
ccurme
60054db1c4 infra[patch]: remove together from scheduled tests (#25909)
These now run in https://github.com/langchain-ai/langchain-together
2024-08-31 18:43:16 +00:00
Emmanuel Leroy
654da27255 improve llamacpp embeddings (#12972)
- **Description:**
Improve the llamacpp embedding class by adding the `device` parameter so it
can be passed to the model and used with `gpu`, `cpu`, or Apple Metal
(`mps`); see the sketch below.
Improve performance by making use of the bulk client API to compute
embeddings in batches.
  
  - **Dependencies:** none
  - **Tag maintainer:** 
@hwchase17
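
A minimal sketch of the new parameter (the model path is illustrative):

```python
from langchain_community.embeddings import LlamaCppEmbeddings

embeddings = LlamaCppEmbeddings(
    model_path="models/llama-2-7b.Q4_K_M.gguf",
    device="mps",  # or "cpu" / "gpu"
)
# embed_documents now goes through the bulk client API in batches
vectors = embeddings.embed_documents(["first text", "second text"])
```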

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-31 18:27:59 +00:00
Sandeep Bhandari
f882824eac Update tool_choice.ipynb spelling mistake of select (#25907)
2024-08-31 12:36:32 +00:00
ℍ𝕠𝕝𝕝𝕠𝕨 𝕄𝕒𝕟
64b62f6ae4 community[neo4j_vector]: make embedding dimension check optional (#25737)
**Description:**

Starting from Neo4j 5.23 (22 August 2024), with vector-2.0 indexes,
`vector.dimensions` is no longer required to be set, which leads to a
missing-key error in the index config when it isn't set.

Since the presence of `vector.dimensions` only enables additional
checks, this commit makes the embedding dimension check optional and
performs it only when the value exists (is not None).

https://neo4j.com/release-notes/database/neo4j-5/

**Twitter handle:** @HollowM186

Signed-off-by: Hollow Man <hollowman@opensuse.org>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-31 12:36:20 +00:00
Christophe Bornet
0a752a74cc community[patch], docs: Add API reference doc for GraphVectorStore (#25751) 2024-08-30 17:42:00 -07:00
Bagatur
28e2ec7603 ollama[patch]: Release 0.1.3 (#25902) 2024-08-31 00:11:45 +00:00
Bagatur
ca1c3bd9c0 community[patch]: bump + fix core dep (#25901) 2024-08-30 15:54:07 -07:00
Bagatur
fabe32c06d core[patch]: Release 0.2.37 (#25900) 2024-08-30 22:29:12 +00:00
Richmond Alake
9992a1db43 cookbook: AI Agent Built With LangChain and FireWorksAI (#22609)
- **AI Agent Built With LangChain and FireWorksAI**: "community
notebook"
- **Description:** Added a new AI agent in the cookbook folder that
integrates prompt compression using LLMLingua and arXiv retrieval tools.
The agent is designed to optimize the efficiency and performance of
research tasks by compressing lengthy prompts and retrieving relevant
academic papers. The agent also makes use of MongoDB to store
conversation history and as its knowledge base, using the MongoDB vector
store.
    - **Twitter handle:** https://x.com/richmondalake

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-30 22:19:17 +00:00
mehdiosa
c6f00e6bdc community: Fix branch not being considered when using GithubFileLoader (#20075)
- **Description:** Added a `ref` query parameter so data can be loaded
from any branch passed, not only the default branch.
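
A minimal sketch (repo, branch, and token values are illustrative):

```python
from langchain_community.document_loaders import GithubFileLoader

loader = GithubFileLoader(
    repo="langchain-ai/langchain",
    branch="v0.2",  # previously only the default branch was fetched
    access_token="ghp_...",
    file_filter=lambda path: path.endswith(".md"),
)
docs = loader.load()
```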

---------

Co-authored-by: Osama Mehdi <mehdi@hm.edu>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-30 21:47:11 +00:00
Leonid Ganeline
54d2b861f6 docs: integrations reference updates 12 (#25676)
Added missed provider pages and links. Fixed inconsistent formatting.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-30 21:25:42 +00:00
Aditya
c8b1c3a7e7 docs: update documentation for Vertex Embeddings Models (#25745)
- **Description:** update documentation for Vertex Embeddings Models
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-08-30 13:58:21 -07:00
Alex Sherstinsky
617a4e617b community: Fix a bug in handling kwargs overwrites in Predibase integration, and update the documentation. (#25893)
2024-08-30 12:41:42 -07:00
Erick Friis
28f6ff6fcd docs: remove incorrect vectorstore local column (#25895) 2024-08-30 18:54:51 +00:00
Anush
ade4bfdff1 qdrant: Updated class check in Self-Query Retriever factory (#25877)
## Description

- Updates the self-query retriever factory to check for the new Qdrant
vector store class, i.e. `langchain_qdrant.QdrantVectorStore`.
- Deprecates `QdrantSparseVectorRetriever`, since the vector store
implementation now supports sparse retrieval natively.

Resolves #25798
2024-08-30 12:11:55 -04:00
Djordje
862ef32fdc community: Fixed infinity embeddings async request (#25882)
**Description:** Fix async infinity embeddings
**Issue:** #24942  

@baskaryan, @ccurme
2024-08-30 12:10:34 -04:00
rainsubtime
f75d5621e2 community:Fix a bug of LLM in moonshot (#25878)
- **Description:** When using the Moonshot LLM integration, the error
"'Moonshot' object has no attribute '_client'" occurs because `_client`
is treated as private by Pydantic v1 and therefore can't be used.
Renaming `_client` to `client` resolves the error.
- **Issue:** the issue #24390 
- **Dependencies:** none
- **Twitter handle:** @Rainsubtime




- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Co-authored-by: Cyue <Cyue_work2001@163.com>
2024-08-30 16:09:39 +00:00
ZhangShenao
fd0f147df3 Improvement[Community] Add tool-calling test case for ChatZhipuAI (#25884)
- Add tool-calling test case for `ChatZhipuAI`
2024-08-30 12:05:43 -04:00
k.muto
5bb810c5c6 docs: updated args_schema to be required when using callback handlers in custom tools. (#25887)
- **Description:** If you use callback handlers with a custom tool,
`run_manager` is added to the input, so you need to explicitly specify
`args_schema`. I was confused because this wasn't documented, so I added
it. Also, the type does not seem to work with `pydantic.BaseModel`.
- **Issue:** None
- **Dependencies:** None
2024-08-30 12:04:40 -04:00
默奕
6377185291 add neo4j query constructor for self query (#25288)
- [x] **PR title - community: add neo4j query constructor for self
query**

- [x] **PR message**
- **Description:** adds a Neo4jTranslator so that the Neo4j vector
database can be used with SelfQueryRetriever
    - **Issue:** this issue was raised before in #19748
    - **Dependencies:** none
    - **Twitter handle:** @moyi_dang
- P.S. I have not added the query constructor to BUILTIN_TRANSLATORS in
this PR; I want to make changes to only one package at a time.

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-30 14:54:33 +00:00
Ohad Eytan
b5d670498f partners/milvus: allow creating a vectorstore with sparse embeddings (#25284)
# Description
Milvus (and `pymilvus`) recently added the option to use [sparse
vectors](https://milvus.io/docs/sparse_vector.md#Sparse-Vector) with
appropriate search methods (e.g., `SPARSE_INVERTED_INDEX`) and
embeddings (e.g., `BM25`, `SPLADE`).

This PR allows creating a vector store using langchain's `Milvus` class,
setting the matching vector field type to `DataType.SPARSE_FLOAT_VECTOR`
and the default index type to `SPARSE_INVERTED_INDEX`.

It only extends functionality and is backward compatible.
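
A minimal sketch, assuming the `BM25SparseEmbedding` helper and a local Milvus instance (both illustrative):

```python
from langchain_milvus import Milvus
from langchain_milvus.utils.sparse import BM25SparseEmbedding

corpus = ["the quick brown fox", "jumped over the lazy dog"]
sparse_embeddings = BM25SparseEmbedding(corpus=corpus)

# The vector field is created as DataType.SPARSE_FLOAT_VECTOR and indexed
# with SPARSE_INVERTED_INDEX by default.
store = Milvus.from_texts(
    corpus,
    embedding=sparse_embeddings,
    connection_args={"uri": "http://localhost:19530"},
)
```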

## Note
I am also interested in extending the Milvus class further to support
multi-vector search (aka hybrid search) and will be happy to discuss
that. See
[here](https://github.com/langchain-ai/langchain/discussions/19955),
[here](https://github.com/langchain-ai/langchain/pull/20375), and
[here](https://github.com/langchain-ai/langchain/discussions/22886) for
similar needs.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-30 02:30:23 +00:00
Erick Friis
09b04c7e3b "community: release 0.2.15" (#25867) 2024-08-30 02:18:48 +00:00
Erick Friis
f7e62754a1 community: undo azure_ad_access_token breaking change (#25818) 2024-08-30 02:06:14 +00:00
Leonid Ganeline
6047138379 docs: arxiv reference updates (#24949)
Added: arxiv references to the concepts page.
Regenerated: arxiv references page.
Improved: formatting of the concepts page (moved the Partner packages
section after langchain_community)
2024-08-29 18:51:18 -07:00
Bagatur
1759ff5836 infra: rm together lagnchain test dp (#25866) 2024-08-30 00:59:53 +00:00
Erick Friis
24f0c232fe docs: elastic feature (#25865) 2024-08-30 00:55:16 +00:00
Erick Friis
1640872059 together: mv to external repo (#25863) 2024-08-29 16:42:59 -07:00
Michael Paciullo
e7c856c298 langchain_openai: Add "strict" parameter to OpenAIFunctionsAgent (#25862)
- **Description:** OpenAI recently introduced a "strict" parameter for
[structured outputs in their
API](https://openai.com/index/introducing-structured-outputs-in-the-api/).
An optional `strict` parameter has been added to
`create_openai_functions_agent()` and `create_openai_tools_agent()` so
developers can use this feature in those agents.
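
A minimal sketch of the new parameter (model, tool, and prompt are illustrative):

```python
from langchain.agents import create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)
# strict=True asks OpenAI to enforce the tool JSON schemas exactly
agent = create_openai_tools_agent(llm, [add], prompt, strict=True)
```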

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-29 22:27:07 +00:00
Bagatur
fabd3295fa core[patch]: dont mutate merged lists/dicts (#25858)
Update merging utils to
- not mutate objects
- have special handling for 'type' keys in dicts
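
A minimal sketch of both behaviors, using the internal helper `merge_dicts`:

```python
from langchain_core.utils._merge import merge_dicts

left = {"type": "text", "text": "Hello"}
right = {"type": "text", "text": " world"}

merged = merge_dicts(left, right)

# "type" values that match are kept as-is instead of being concatenated,
# while other string values are still merged.
assert merged == {"type": "text", "text": "Hello world"}
# The inputs are no longer mutated in place.
assert left == {"type": "text", "text": "Hello"}
```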
2024-08-29 20:34:54 +00:00
Kyle Winkelman
09c2d8faca langchain_openai: Cleanup OpenAIEmbeddings validate_environment. (#25855)
**Description:** [This portion of
code](https://github.com/langchain-ai/langchain/blob/v0.1.16/libs/partners/openai/langchain_openai/embeddings/base.py#L189-L196)
has no use, as a couple of lines later a [`ValueError` is
thrown](https://github.com/langchain-ai/langchain/blob/v0.1.16/libs/partners/openai/langchain_openai/embeddings/base.py#L209-L213).
**Issue:** A follow-up to #25852.
2024-08-29 13:54:43 -04:00
Kyle Winkelman
201bdf7148 community: Cap AzureOpenAIEmbeddings chunk_size at 2048 instead of 16. (#25852)
**Description:** Within AzureOpenAIEmbeddings there is a validation to
cap `chunk_size` at 16. The value of 16 is either an old limitation or
was erroneously chosen. I have checked all of the `preview` and `stable`
releases to ensure that the `embeddings` endpoint can handle 2048
entries
[Azure/azure-rest-api-specs](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference).
I have also found many locations that confirm this limit should be 2048:
-
https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#embeddings
-
https://learn.microsoft.com/en-us/azure/ai-services/openai/quotas-limits

**Issue:** fixes #25462
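
A minimal sketch of the effect, shown with the `langchain_openai` class (deployment name illustrative; credentials come from env vars):

```python
from langchain_openai import AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings(azure_deployment="text-embedding-3-large")

texts = [f"document {i}" for i in range(5000)]
# Requests are now batched up to 2048 inputs each instead of 16,
# dramatically reducing the number of round trips.
vectors = embeddings.embed_documents(texts)
```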
2024-08-29 16:48:04 +00:00
Leonid Ganeline
08c9c683a7 docs: integrations reference updates 6 (#25188)
Added missed provider pages. Added missed references to the integration
components.
2024-08-29 09:17:41 -07:00
Allan Ascencio
a8af396a82 added octoai test (#21793)
- [ ] **PR title**: community: add tests for ChatOctoAI

- [ ] **PR message**: 
Description: Added unit tests for the ChatOctoAI class in the community
package to ensure proper validation and default values. These tests
verify the correct initialization of fields, the handling of missing
required parameters, and the proper setting of aliases.
Issue: N/A
Dependencies: None

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-08-29 15:07:27 +00:00
Param Singh
69f9acb60f premai[patch]: Standardize premai params (#21513)
community:premai[patch]: standardize init args

- updated `temperature` with Pydantic Field, updated the unit test.
- updated `max_tokens` with Pydantic Field, updated the unit test.
- updated `max_retries` with Pydantic Field, updated the unit test.

Related to #20085

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-29 11:01:28 -04:00
Guangdong Liu
fcf9230257 community(sparkllm): Add function call support in Sparkllm chat model. (#20607)
- **Description:** Add function call support in Sparkllm chat model.
Related documents
https://www.xfyun.cn/doc/spark/Web.html#_2-function-call%E8%AF%B4%E6%98%8E
- @baskaryan

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-29 14:38:39 +00:00
ChengZi
37f5ba416e partners[milvus]: fix issue when metadata_schema is None (#25836)
fix issue when metadata_schema is None

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2024-08-29 10:11:09 -04:00
ccurme
426333ff6f infra[patch]: remove AI21 from scheduled tests (#25847)
These now run in https://github.com/langchain-ai/langchain-ai21
2024-08-29 14:03:20 +00:00
Jorge Piedrahita Ortiz
9ac953a948 Community: sambastudio embeddings GenericV2 API support (#25064)
- **Description:** 
        SambaStudio GenericV2 API support 
        Minor changes for requests error handling
2024-08-29 09:52:49 -04:00
Sam Jove
bdce9a47d0 community[patch]: callback before yield for _astream (gigachat) (#25834)
Description: Moves the yield to after the callback in `_astream` for
gigachat in the community package.
Issue: #16913
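
The general shape of the fix, as a minimal sketch (names illustrative):

```python
from langchain_core.messages import AIMessageChunk
from langchain_core.outputs import ChatGenerationChunk


async def stream_chunks(stream, run_manager=None):
    async for text in stream:
        chunk = ChatGenerationChunk(message=AIMessageChunk(content=text))
        if run_manager:
            # The callback now fires before the chunk is yielded...
            await run_manager.on_llm_new_token(text, chunk=chunk)
        yield chunk  # ...so handlers see each token as soon as it arrives
```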

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-29 13:29:28 +00:00
Jinoos Lee
703af9ffe3 Patch enable to use Amazon OpenSearch Serverless(aoss) for Semantic Cache store (#25833)
- [x] **PR title**: "community: Patch enable to use Amazon OpenSearch
Serverless for Semantic Cache store"

- [x] **PR message**: 
- **Description:** OpenSearchSemanticCache class support Amazon
OpenSearch Serverless for Semantic Cache store, it's only required to
pass auth(http_auth) parameter to initializer
    - **Dependencies:** none

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Jinoos Lee <jinoos@amazon.com>
2024-08-29 13:28:22 +00:00
William FH
1ad621120d docs: Update langgraph 0.2.0 checkpointer import path (#25205)
And fix the description for timeout
2024-08-28 19:32:08 -07:00
Andrew Benton
c410545075 docs: add self-hosting row to code interpreter tools table (#25303)
**Description:** Add information about self-hosting support to the code
interpreter tools table.
**Issue:** N/A
**Dependencies:** N/A
2024-08-28 19:30:12 -07:00
Eugene Yurtsev
83327ac43a docs: Fix typo in openai llm integration notebook (#25492)
Fix typo in openai LLM integration notebook.
2024-08-28 19:22:57 -07:00
Leonid Ganeline
31f55781b3 docs: added ColBERT reference (#25452)
Added references to the source papers.
Fixed URL verification code.
Improved arXive page formatting.
Regenerated arXiv page.
2024-08-28 19:05:44 -07:00
Mikhail Khludnev
a017f49fd3 community[patch]: fix #25575 YandexGPTs for _grpc_metadata (#25617)
it fixes two issues:

### YGPTs are broken #25575

```
File ....conda/lib/python3.11/site-packages/langchain_community/embeddings/yandex.py:211, in _make_request(self, texts, **kwargs)
..
--> 211 res = stub.TextEmbedding(request, metadata=self._grpc_metadata)  # type: ignore[attr-defined]

AttributeError: 'YandexGPTEmbeddings' object has no attribute '_grpc_metadata'
```
My gut feeling is that #23841 is the cause.

I had to drop the leading underscore from `_grpc_metadata` as a quick fix,
but I just don't know how to do it in a properly _pydantic_ way.

### minor issue:

if we use `api_key`, which is not the best practice the code fails with 

```
File ~/git/...../python3.11/site-packages/langchain_community/embeddings/yandex.py:119, in YandexGPTEmbeddings.validate_environment(cls, values)
...

AttributeError: 'tuple' object has no attribute 'append'
```

- Added a new integration test, but it requires a YGPT environment and an
active account; I don't know how integration tests are enabled or disabled in CI.
- Added small unit tests with mocks. Should be fine.

---------

Co-authored-by: mikhail-khludnev <mikhail_khludnev@rntgroup.com>
2024-08-28 18:48:10 -07:00
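A minimal sketch of why the underscored attribute vanished and one pydantic-style way to keep it; field names are illustrative, not the actual YandexGPTEmbeddings code:

```python
from pydantic import BaseModel, PrivateAttr


class Embeddings(BaseModel):
    api_key: str = ""
    # Undeclared underscore attributes are dropped by pydantic; declaring a
    # PrivateAttr (or renaming the field) keeps it through validation.
    _grpc_metadata: list = PrivateAttr(default_factory=list)

    def model_post_init(self, __context) -> None:
        # Populated after validation, mirroring the gRPC auth header setup.
        self._grpc_metadata.append(("authorization", f"Api-Key {self.api_key}"))


emb = Embeddings(api_key="secret")
print(emb._grpc_metadata)  # [('authorization', 'Api-Key secret')]
```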
Serena Ruan
850bf89e48 community[patch]: Support passing extra params for executing functions in UCFunctionToolkit (#25652)
Support passing extra params when executing UC functions. The params
should be a dictionary keyed by EXECUTE_FUNCTION_ARG_NAME; the assumption
is that the function itself doesn't use such a variable name (starting
and ending with double underscores), and if it does we raise an exception.
We also raise an exception if invalid params are passed to execute_statement.



---------

Signed-off-by: Serena Ruan <serena.rxy@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-28 18:47:32 -07:00
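A minimal sketch of the reserved-name convention described above; the constant and function here are illustrative, not the library's actual symbols:

```python
EXECUTE_FUNCTION_ARG_NAME = "__execution_args__"  # hypothetical reserved key


def execute_function(func_params: dict, user_args: dict) -> dict:
    # Extra execution params ride along under the reserved key.
    extra = user_args.pop(EXECUTE_FUNCTION_ARG_NAME, {})
    if EXECUTE_FUNCTION_ARG_NAME in func_params:
        # The function itself must not declare the reserved name.
        raise ValueError(f"{EXECUTE_FUNCTION_ARG_NAME!r} is reserved")
    return {"params": func_params, "args": user_args, "extra": extra}


print(execute_function({"x": "int"}, {"x": 1, "__execution_args__": {"timeout": 30}}))
```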
崔浩
3555882a0d community[patch]: optimize xinference llm import (#25809)
- **Description:** import RESTfulClient from xinference_client when
xinference itself cannot be imported.
- **Dependencies:** xinference_client
- **Why:** the full xinference package (pip install xinference[all]) is
too heavy to install, and everything in it except RESTfulClient is
useless to LangChain users. The modification maintains consistency with
the xinference embeddings
[embeddings/xinference](../blob/master/libs/community/langchain_community/embeddings/xinference.py#L89).
2024-08-29 01:41:43 +00:00
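A minimal sketch of the lighter import path, assuming the module names described in the PR:

```python
def _import_restful_client():
    """Return RESTfulClient without requiring the heavy xinference extras."""
    try:
        from xinference.client import RESTfulClient
    except ImportError:
        try:
            from xinference_client import RESTfulClient  # slim dependency
        except ImportError as e:
            raise ImportError(
                "Could not import RESTfulClient. Install `xinference` or "
                "`xinference-client`."
            ) from e
    return RESTfulClient
```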
Michael Rubél
9decd0b243 langchain[patch]: fix moderation chain init (#25778)
[This
commit](d3ca2cc8c3)
has broken the moderation chain so we've faced a crash when migrating
the LangChain from v0.1 to v0.2.

The issue appears to be that the class attribute the code refers to doesn't
hold the value processed in the `validate_environment` method. We had
`extras={}` in this attribute, and it was cast to `True` when it
should've been `False`. Adding a simple assignment seems to resolve the
issue, though I'm not sure it's the right way.

---

---------

Co-authored-by: Michael Rubél <mrubel@oroinc.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-28 18:41:31 -07:00
Madhu Shantan
63a1569d5f docs: fixed syntax error in ChatAnthropic Example - rag app tutorial notebook (#25824)
- **Description:** Fixed a syntax error in the ChatAnthropic
initialization example in the RAG tutorial notebook. The original code
had an extra set of quotation marks around the model parameter, which
would cause a Python syntax error. The corrected version removes these
unnecessary quotes.
- **Dependencies:** No new dependencies required for this documentation
fix. I've verified that the corrected code is syntactically valid and matches
the expected format for initializing a ChatAnthropic instance in
LangChain.
- **Twitter handle:** madhu_shantan
- **Lint and test:** As this is a documentation-only change, I have
not run the full test suite. However, I have verified that the corrected
code example is syntactically valid and matches the expected usage of
the ChatAnthropic class.
2024-08-29 01:31:01 +00:00
Erick Friis
e5ae988505 prompty: bump core version (#25831) 2024-08-28 23:06:13 +00:00
Erick Friis
c8b8335b82 core: prompt variable error msg (#25787) 2024-08-28 22:54:00 +00:00
ccurme
ff168aaec0 prompty: release 0.0.3 (#25830) 2024-08-28 15:52:17 -07:00
Matthieu
783397eacb community: avoid double templating in langchain_prompty (#25777)
## Description

In `langchain_prompty`, messages are templated by Prompty. However, a
call to `ChatPromptTemplate` was initiating a second templating. We now
convert parsed messages to `Message` objects before calling
`ChatPromptTemplate`, signifying clearly that they are already
templated.

We also revert #25739 , which applied to this second templating, which
we now avoid, and did not fix the original issue.

## Issue

Closes #25703
2024-08-28 18:18:18 -04:00
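A minimal sketch of the idea: already-rendered strings wrapped in Message objects pass through ChatPromptTemplate untouched, so braces are not re-parsed as variables:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate

rendered = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Literal braces stay literal: {not_a_variable}"),
]
prompt = ChatPromptTemplate.from_messages(rendered)
print(prompt.format_messages())  # no second templating pass is applied
```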
ccurme
afe8ccaaa6 community[patch]: Add ID field back to Azure AI Search results (#25828)
Commandeering https://github.com/langchain-ai/langchain/pull/23243 as
maintainers don't have ability to modify that PR.

Fixes https://github.com/langchain-ai/langchain/issues/22827

---------

Co-authored-by: Ming Quah <fleetadmiralbutter@icloud.com>
2024-08-28 17:56:50 -04:00
rbrugaro
9fa172bc26 add links in example nb with tei/tgi references (#25821)
I have validated langchain interface with tei/tgi works as expected when
TEI and TGI running on Intel Gaudi2. Adding some references to notebooks
to help users find relevant info.

---------

Co-authored-by: Rita Brugarolas <rbrugaro@idc708053.jf.intel.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-28 21:33:25 +00:00
Erick Friis
8fb594fd2a ai21: migrate to external repo (#25827) 2024-08-28 14:24:07 -07:00
Erick Friis
095b712a26 ollama: bump core version (#25826) 2024-08-28 12:31:16 -07:00
Erick Friis
5db6c6d96d community: release 0.2.14 (#25822) 2024-08-28 19:05:53 +00:00
Erick Friis
d6c4803ab0 core: release 0.2.36 (#25819) 2024-08-28 18:04:51 +00:00
Erick Friis
5186325bc7 partners/ollama: release 0.1.2 (#25817)
release for #25697
2024-08-28 17:47:32 +00:00
Rohit Gupta
aff50a1e6f milvus: add array data type for collection create (#23219)
Add array data type for milvus vector store collection create



---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Rohit Gupta <rohit.gupta2@walmart.com>
Co-authored-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-28 16:55:57 +00:00
Cillian Berragan
754f3c41f9 community: add score to PineconeHybridSearchRetriever (#25781)
**Description:**

Adds the 'score' returned by Pinecone to the
`PineconeHybridSearchRetriever` list of returned Documents.

There is currently no way to return the score when using Pinecone hybrid
search, so in this PR I include it by default.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-28 13:11:06 +00:00
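A minimal sketch of surfacing the backend score on each returned Document via metadata; the hit structure here is illustrative:

```python
from langchain_core.documents import Document

hits = [{"text": "alpha", "score": 0.91}, {"text": "beta", "score": 0.42}]
docs = [
    Document(page_content=hit["text"], metadata={"score": hit["score"]})
    for hit in hits
]
print(docs[0].metadata["score"])  # 0.91
```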
ZhangShenao
3f1d652f15 Improvement[Community] Improve api doc for PineconeHybridSearchRetriever (#25803)
- Complete missing args in api doc
2024-08-28 08:38:56 -04:00
Moritz Schlager
555f97becb community[patch]: fix model initialization bug for deepinfra (#25727)
### Description
Adds an init method to ChatDeepInfra that sets the model_name attribute
according to the argument.
### Issue
Currently, the model_name specified by the user during initialization of
the ChatDeepInfra class is never set; therefore, it always chooses the
default model (meta-llama/Llama-2-70b-chat-hf; however, since this is
deprecated, it probably always uses meta-llama/Llama-3-70b-Instruct). We
stumbled across this issue and fixed it as proposed in this pull
request. Feel free to change the fix according to your coding guidelines
and style; this is just a proposal, and we want to draw attention to this
problem.
### Dependencies
no additional dependencies required

Feel free to contact me or @timo282 and @finitearth if you have any
questions.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-08-28 02:02:35 -07:00
Bagatur
a052173b55 together[patch]: Release 0.1.6 (#25805) 2024-08-28 02:01:49 -07:00
Bagatur
b0ac6fe8d3 community[patch]: Release 0.2.13 (#25806) 2024-08-28 08:57:49 +00:00
Bagatur
85aef7641c openai[patch]: Release 0.1.23 (#25804) 2024-08-28 08:52:08 +00:00
Bagatur
0d3fd0aeb9 langchain[patch]: Release 0.2.15 (#25802) 2024-08-28 08:35:00 +00:00
zysoong
25a6790e1a community[patch]: Minor Improvement of extract hyperlinks tool output (#25728)
**Description:** Make each hyperlink appear only once in the
extract_hyperlinks tool output. (For some websites the output contains
meaningless '#' hyperlinks multiple times, which extends the context
window's token count without any advantage.)
**Issue:** None
**Dependencies:** None
2024-08-28 08:02:40 +00:00
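A minimal sketch of order-preserving de-duplication, so '#' anchors appear at most once:

```python
def dedupe_links(links):
    seen = set()
    out = []
    for link in links:
        if link not in seen:  # keep only the first occurrence
            seen.add(link)
            out.append(link)
    return out


print(dedupe_links(["#", "/docs", "#", "#", "/docs"]))  # ['#', '/docs']
```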
Christophe Bornet
ff0df5ea15 core[patch]: Add B(bugbear) ruff rules (#25520)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-28 07:09:29 +00:00
Isaac Francisco
d5ddaac1fc docs minor fix (#25794) 2024-08-28 04:14:36 +00:00
ccurme
3c784e10a8 docs: improve docs for InMemoryVectorStore (#25786)
Closes https://github.com/langchain-ai/langchain/issues/25775
2024-08-27 21:12:32 -07:00
Erick Friis
1023fbc98a databricks: mv to partner repo (#25788) 2024-08-27 18:51:17 -07:00
ccurme
2e5c379632 openai[patch]: fix get_num_tokens for function calls (#25785)
Closes https://github.com/langchain-ai/langchain/issues/25784

See additional discussion
[here](0a4ee864e9 (r145147380)).
2024-08-27 20:18:19 +00:00
Erick Friis
2aa35d80a0 docs, infra: cerebras docs, update docs template linting with better error (#25782) 2024-08-27 17:19:59 +00:00
venkatram-dev
48b579f6b5 date_time_parser (#25763)
- **Description:** add a date-time parser to the langchain Chains
query_constructor
- **Issue:** https://github.com/langchain-ai/langchain/issues/25526


2024-08-27 13:18:52 -04:00
Tomaz Bratanic
f359e6b0a5 Add mmr to neo4j vector (#25765) 2024-08-27 08:55:19 -04:00
pazshalev
995305fdd5 test: fix tool calling integration tests for AI21 Jamba models (#25771)
Ignore specific integration tests that handle specific tool-calling
cases that will soon be fixed.
2024-08-27 08:54:51 -04:00
Luis Valencia
99f9a664a5 community: Azure Search Vector Store is missing Access Token Authentication (#24330)
Added Azure Search Access Token Authentication instead of API KEY auth.
Fixes Issue: https://github.com/langchain-ai/langchain/issues/24263
Dependencies: None
Twitter: @levalencia

@baskaryan

Could you please review? First time creating a PR that fixes some code.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-26 15:41:50 -07:00
Leonid Ganeline
49b0bc7b5a docs: integrations reference updates 5 (#25151)
Added missed references. Added missed provider pages.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-26 15:12:39 -04:00
ZhangShenao
44e3e2391c Improvement[Community] Improve methods in IMessageChatLoader (#25746)
- Add @staticmethod to static methods in `IMessageChatLoader`.
- Format args name.
2024-08-26 14:20:22 -04:00
Erick Friis
815f59dba5 partners/ai21: release 0.1.8 (#25759) 2024-08-26 18:02:43 +00:00
amirai21
17dffd9741 AI21: tools calling support in Langchain (#25635)
This pull request introduces support for the AI21 tool-calling feature,
available in the Jamba-1.5 models. When Jamba-1.5 detects the necessity
to invoke a provided tool, as indicated by the 'tools' parameter passed
to the model:

```
class ToolDefinition(TypedDict, total=False):
    type: Required[Literal["function"]]
    function: Required[FunctionToolDefinition]

class FunctionToolDefinition(TypedDict, total=False):
    name: Required[str]
    description: str
    parameters: ToolParameters

class ToolParameters(TypedDict, total=False):
    type: Literal["object"]
    properties: Required[Dict[str, Any]]
    required: List[str]
```

It will respond with a list of tool calls structured as follows:

```
class ToolCall(AI21BaseModel):
    id: str
    function: ToolFunction
    type: Literal["function"] = "function"

class ToolFunction(AI21BaseModel):
    name: str
    arguments: str
```

This pull request incorporates the necessary modifications to integrate
this functionality into the ai21-langchain library.

---------

Co-authored-by: asafg <asafg@ai21.com>
Co-authored-by: pazshalev <111360591+pazshalev@users.noreply.github.com>
Co-authored-by: Paz Shalev <pazs@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-26 10:50:30 -07:00
maang-h
a566a15930 Fix MoonshotChat instantiate with alias (#25755)
- **Description:**
   -  Fix `MoonshotChat` instantiate with alias
   - Add `MoonshotChat` to `__init__.py`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-26 17:33:22 +00:00
venkatram-dev
ec99f0d193 milvus: add_db_milvus_connection (#25627)
- **Description:** add the db name to the Milvus connection check
- **Issue:** https://github.com/langchain-ai/langchain/issues/25277




---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-26 17:29:12 +00:00
Ashvin
af3b3a4474 Update endpoint for AzureMLEndpointApiType class. (#25725)
This addresses the issue mentioned in #25702

I have updated the endpoint used in validating the endpoint API type in
the AzureMLBaseEndpoint class from `/v1/completions` to `/completions`
and `/v1/chat/completions` to `/chat/completions`.

Co-authored-by: = <=>
2024-08-26 08:50:02 -04:00
Mohammad Mohtashim
dcf2278a05 [Community]: Added Template Format Parameter in create_chat_prompt for Langchain Prompty (#25739)
- **Description:** Added a `template_format` parameter to
`create_chat_prompt` to allow `.prompty` files to handle variables in
different template formats.
- **Issue:** #25703

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-26 12:48:21 +00:00
Dristy Srivastava
7205057c3e [Community][minor]: Added langchain_version while calling discover API (#24428)
- **Description:** Added langchain version while calling discover API
during both ingestion and retrieval
- **Issue:** NA
- **Dependencies:** NA
- **Tests:** NA
- **Docs** NA

---------

Co-authored-by: dristy.cd <dristy@clouddefense.io>
2024-08-26 08:47:48 -04:00
Dristy Srivastava
fbb4761199 [Community][minor]: Updating source path, and file path for SharePoint loader in PebbloSafeLoader (#25592)
- **Description:** Updating source path and file path in Pebblo safe
loader for SharePoint apps during loading
- **Issue:** NA
- **Dependencies:** NA
- **Tests:** NA
- **Docs** NA

---------

Co-authored-by: dristy.cd <dristy@clouddefense.io>
2024-08-26 08:38:40 -04:00
Rajendra Kadam
745d1c2b8d community[minor]: [Pebblo] Fix URL construction in newer Python versions (#25747)
- **PR message**: **Fix URL construction in newer Python versions**
- **Description:** 
- Update the URL construction logic to use the .value attribute for
Routes enum members.
- This adjustment resolves an issue where the code worked correctly in
Python 3.9 but failed in Python 3.11.
  - Clean up unused routes.
- **Issue:** NA
- **Dependencies:** NA
2024-08-26 07:27:30 -04:00
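A minimal sketch of the version-safe construction; the route values and host are illustrative, not the actual Pebblo endpoints:

```python
from enum import Enum


class Routes(Enum):
    discover = "/v1/app/discover"  # hypothetical route value
    prompt = "/v1/prompt"


base_url = "https://pebblo.example"  # hypothetical host
# Interpolating the member itself can render as "Routes.discover" (and the
# behavior of str-mixin enums changed in Python 3.11), so build the URL
# from the explicit .value instead.
url = f"{base_url}{Routes.discover.value}"
print(url)  # https://pebblo.example/v1/app/discover
```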
Rajendra Kadam
58a98c7d8a community: [PebbloRetrievalQA] Implemented Async support for prompt APIs (#25748)
- **Description:** PebbloRetrievalQA: Implemented Async support for
prompt APIs (classification and governance)
- **Issue:** NA
- **Dependencies:** NA
2024-08-26 07:27:05 -04:00
Tomaz Bratanic
6703d795c5 Handle Ollama tool raw schema in llmgraphtransformer (#25752) 2024-08-26 07:26:26 -04:00
Bagatur
30f1bf24ac core[patch]: Release 0.2.35 (#25729) 2024-08-25 16:49:27 -07:00
Christophe Bornet
038c287b3a all: Improve make lint command (#25344)
* Removed `ruff check --select I` as `I` is already selected and checked
in the main `ruff check` command
* Added checks for non-empty `PYTHON_FILES`
* Run `ruff check` only on `PYTHON_FILES`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-23 18:23:52 -07:00
Mohammad Mohtashim
9a29398fe6 huggingface: fix model param population (#24743)
- **Description:** Fix the validation error for `endpoint_url` for
HuggingFaceEndpoint. I have given a detailed description of the issue in
the ticket that I created.
- **Issue:** #24742

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-24 00:45:28 +00:00
Yuki Watanabe
c7a8af2e75 databricks: add vector search and embeddings (#25648)
### Summary

Add `DatabricksVectorSearch` and `DatabricksEmbeddings` classes to the
`langchain-databricks` partner package. Core functionality is
unchanged, but the vector search class is largely refactored for
readability and maintainability.

This PR does not add integration tests yet. This will be added once the
Databricks test workspace is ready.

Tagging @efriis as POC


### Tracker
- [ ] Create a package and migrate ChatDatabricks
- [✍️] Migrate DatabricksVectorSearch, DatabricksEmbeddings, and their docs
- ~~[ ] Migrate UCFunctionToolkit and its doc~~
- [ ] Add provider document and update README.md
- [ ] Add integration tests and set up secrets (after moving to an external package)
- [ ] Add deprecation note to the community implementations.

---------

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-24 00:40:21 +00:00
Erick Friis
71c039571a docs: remove deprecated nemo embed docs (#25720) 2024-08-24 00:36:33 +00:00
Hyman
58e72febeb openai: compatible with other LLM usage metadata (#24500)
- **Description:** Compatible with usage metadata from other LLMs (e.g. deepseek-chat, glm-4)
- **Issue:** N/A
- **Dependencies:** no new dependencies added

Tests (libs/partners/openai/tests/unit_tests/chat_models/test_base.py):
```shell
cd libs/partners/openai
poetry run pytest tests/unit_tests/chat_models/test_base.py::test_openai_astream
poetry run pytest tests/unit_tests/chat_models/test_base.py::test_openai_stream
poetry run pytest tests/unit_tests/chat_models/test_base.py::test_deepseek_astream
poetry run pytest tests/unit_tests/chat_models/test_base.py::test_deepseek_stream
poetry run pytest tests/unit_tests/chat_models/test_base.py::test_glm4_astream
poetry run pytest tests/unit_tests/chat_models/test_base.py::test_glm4_stream
```

---------

Co-authored-by: hyman <hyman@xiaozancloud.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-23 16:59:14 -07:00
Erick Friis
3dc7d447aa infra: reenable min version testing 2, ci ignore ai21 (#25709) 2024-08-23 23:28:42 +00:00
Erick Friis
f6491ceb7d community: remove integration test deps (#24460)
they arent used
2024-08-23 23:25:17 +00:00
Erick Friis
0022ae1b31 docs: remove templates (#25717)
- [x] check redirect works at template root
- [x] check redirect works within individual template page
2024-08-23 15:51:12 -07:00
Sharmistha S. Gupta
90439b12f6 Added support for Nebula Chat model (#21925)
Description: Added support for Nebula Chat model in addition to Nebula
Instruct
Dependencies: N/A
Twitter handle: @Symbldotai

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-23 22:34:32 +00:00
James Espichan Vilca
080741d336 core[patch]: Fix type for inner input in base prompts (#25713)
The file modified is /libs/core/langchain_core/prompts/base.py.

- **Description:** The change broadens the type of the inner input
variable from dict to Any. This is required since the _validate_input
method expects values that are not only dictionaries.
- **Dependencies:** There are no dependencies for this change.

A test is not needed: this error occurs when overriding a portion of the
_validate_input method, which causes 'beartype' to raise an error.
2024-08-23 14:06:39 -07:00
John
5ce9a716a7 docs, langchain-unstructured: update langchain-unstructured docs and update unstructured-client dependency (#25451)
Be more explicit in the docs about creating an instance of the
UnstructuredClient if you want to customize it versus using sdk
parameters with the UnstructuredLoader.

Bump the unstructured-client dependency as discussed
[here](https://github.com/langchain-ai/langchain/discussions/25328#discussioncomment-10350949)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-23 20:36:41 +00:00
Scott Hurrey
92abf62292 box[patch]: fix bugs in docs (#25699) 2024-08-23 19:36:23 +00:00
Ian
64ace25eb8 <Community>: tidb vector support vector index (#19984)
This PR introduces adjustments to ensure compatibility with the recently
released preview version of [TiDB Serverless Vector
Search](https://tidb.cloud/ai), aiming to prevent user confusion.

- TiDB Vector now supports vector indexing with cosine and l2 distance
strategies, although inner_product remains unsupported.
- Changing the distance strategy is currently not supported, so the test
cases have been adjusted accordingly.
2024-08-23 13:59:23 -04:00
Austin Burdette
f355a98bb6 community:yuan2[patch]: standardize init args (#21462)
Updated stop and request_timeout so they are aliased to stop_sequences
and timeout, respectively. Added a test that both continue to set the
same underlying attributes.

Related to
[20085](https://github.com/langchain-ai/langchain/issues/20085)

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-23 17:56:19 +00:00
ccurme
bc557a5663 text-splitters[patch]: fix typing for keep_separator (#25706) 2024-08-23 17:22:02 +00:00
Erick Friis
8170bd636f Revert "infra: reenable min version testing" (#25708)
Reverts langchain-ai/langchain#24640
2024-08-23 10:20:23 -07:00
Erick Friis
3d5808ef27 infra: reenable min version testing (#24640)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-08-23 10:17:41 -07:00
Erick Friis
b365ee996b community: remove unused verify_ssl kwarg from aiohttp request (#25707)
it's not a valid kwarg in aiohttp request
2024-08-23 17:14:04 +00:00
Erick Friis
580fbd9ada docs: api ref to new site somewheres (#25679)
```
https://api\.python\.langchain\.com/en/latest/([^/]*)/langchain_([^.]*)\.(.*)\.html([^"]*)
https://python.langchain.com/v0.2/api_reference/$2/$1/langchain_$2.$3.html$4
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-23 10:01:16 -07:00
Bagatur
6a60a2a435 infra: gitignore api_ref mds (#25705) 2024-08-23 09:50:30 -07:00
Ashvin
2cd77a53a3 docs: Add docstrings for CassandraChatMessageHistory class and package namespace function. (#24222)
- Modified docstring for CassandraChatMessageHistory in
libs/community/langchain_community/chat_message_history/cassandra.py.

- Added docstring for _package_namespace function in
docs/api_reference/create_api_rst.py

---------

Co-authored-by: ashvin <ashvin.anilkumar@qburst.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-23 15:49:41 +00:00
Leonid Ganeline
8788a34bfa community: NeptuneGraph fix (#23281)
Issue: the `service` optional parameter was mentioned but not used.
Fix: added this parameter.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-23 15:34:26 +00:00
Djordje
22f9ae489f community: Opensearch - added score function for similarity_score_threshold (#23928)
This PR resolves the NotImplemented error for the
similarity_score_threshold search type for OpenSearch.
2024-08-23 11:30:04 -04:00
ZhangShenao
b38c83ff93 patch[Community] Optimize methods in several ChatLoaders (#24806)
Some methods in the ChatLoaders are static; added the @staticmethod
decorator to them.
2024-08-23 11:00:41 -04:00
James Espichan Vilca
644e0d3463 Use extend method for embeddings concatenation in mlflow_gateway (#14358)
## Description
There is a bug in the concatenation of embeddings obtained from MLflow
that does not conform to the type hint requested by the function.
``` python  
def _query(self, texts: List[str]) -> List[List[float]]:
```
It is logical to expect a **List[List[float]]** for a **List[str]**.
However, the append method nests each response inside an extra outer
list. To avoid this, the extend method should be used, which adds the
embeddings of all strings at the same list level.

## Testing
I have tried using OpenAI-ADA to obtain the embeddings, and the result
of executing this snippet is as follows:

``` python  
embeds = await MlflowAIGatewayEmbeddings().aembed_documents(texts=["hi", "how are you?"])
print(embeds)
```  

``` python  
[[[-0.03512698, -0.020624293, -0.015343423, ...], [-0.021260535, -0.011461929, -0.00033121882, ...]]]
```
When in reality, the expected result should be:

``` python  
[[-0.03512698, -0.020624293, -0.015343423, ...], [-0.021260535, -0.011461929, -0.00033121882, ...]]
```
The above result complies with the expected type hint:
**List[List[float]]** . As I mentioned, we can achieve that by using the
extend method instead of the append method.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-23 14:43:43 +00:00
Christophe Bornet
7f1e444efa partners: Use simsimd types (#25299)
The simsimd package [now has
types](https://github.com/ashvardanian/SimSIMD/releases/tag/v5.0.0)
2024-08-23 10:41:39 -04:00
clement.l
642f9530cd community: add supported blockchains to Blockchain Document Loader (#25428)
- Remove deprecated chains.
- Add more supported chains.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-23 14:39:42 +00:00
conjuncts
818267bbc3 community: allow chroma DB delete() to use "where" argument (#19826)
Description: Simply pass kwargs to allow arguments like "where" to be
propagated
Issue: Previously, db.delete(where={}) wouldn't work for chroma
vectorstores
Dependencies: N/A
Twitter handle: N/A

2024-08-23 10:10:57 -04:00
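A minimal sketch of the kwargs pass-through, with a stand-in for chromadb's collection object:

```python
class FakeCollection:
    def delete(self, ids=None, where=None):
        print(f"deleting ids={ids}, where={where}")


class VectorStore:
    def __init__(self, collection):
        self._collection = collection

    def delete(self, ids=None, **kwargs):
        # Forward kwargs instead of swallowing them, so filters like
        # `where` reach the underlying collection.
        self._collection.delete(ids=ids, **kwargs)


VectorStore(FakeCollection()).delete(where={"source": "old.pdf"})
```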
Kevin Engelke
3c7f12cbf5 community[minor]: Fix missing 'keep_newlines' parameter forward-pass to 'process_pages' function in confluence loader (#20086) (#20087)
- **Description:** Fixed missing `keep_newlines` parameter forward-pass
in confluence-loader
- **Issue:** #20086 
- **Dependencies:** None

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-23 12:59:38 +00:00
Erik Lindgren
583b0449eb community[patch]: Fix Hybrid Search for non-Databricks managed embeddings (#25590)
Description: Send both the query and query_embedding to the Databricks
index for hybrid search.

Issue: When using hybrid search with non-Databricks managed embedding we
currently don't pass both the embedding and query_text to the index.
Hybrid search requires both of these. This change fixes this issue for
both `similarity_search` and `similarity_search_by_vector`.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-23 08:57:13 +00:00
Alejandro Companioni
bcd5842b5d community[patch]: Updating default PPLX model to supported llama-3.1 model. (#25643)
# Issue

As of late July, Perplexity [no longer supports Llama 3
models](https://docs.perplexity.ai/changelog/introducing-new-and-improved-sonar-models).

# Description

This PR updates the default model and doc examples to reflect their
latest supported model. (Mostly updating the same places changed by
#23723.)

# Twitter handle

`@acompa_` on behalf of the team at Not Diamond. Check us out
[here](https://notdiamond.ai).

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-08-23 08:33:30 +00:00
Leonid Ganeline
163ef35dd1 docs: templates updated titles (#25646)
Updated titles into a consistent format. 
Fixed links to the diagrams.
Fixed typos.
Note: The Templates menu in the navbar is now sorted by the file names.
I'll try sorting the navbar menus by the page titles, not the page file
names.
2024-08-23 01:19:38 -07:00
Parsa Abbasi
1b2ae40d45 docs: Updated WikipediaLoader documentation (#25647)
- Output of the cells was not included in the documentation. I have
added them.
- There is another parameter in the `WikipediaLoader` class called
`doc_content_chars_max` (Based on
[this](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.wikipedia.WikipediaLoader.html)).
I have included this in the list of parameters.
- I put the list of parameters under a new section called "Parameters"
in the documentation.
- I also included the `langchain_community` package in the installation
command.
- Some minor formatting/spelling issues were fixed.
2024-08-23 01:19:03 -07:00
Jakub W.
b865ee49a0 community[patch]: Dynamodb history messages key (#25658)
- **Description:** adding the history_messages_key so you don't have to
use "History" as a key in langchain
2024-08-23 08:05:28 +00:00
Erick Friis
b28bc252c4 core[patch]: mmr util (#25689) 2024-08-22 21:31:17 -07:00
ZhangShenao
ba89933c2c Doc[Embeddings] Add docs for ZhipuAIEmbeddings (#25662)
- Add docs for `ZhipuAIEmbeddings`.
- Using integration doc template.
- Source api reference: https://bigmodel.cn/dev/api#vector

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-23 01:33:43 +00:00
Erick Friis
6096c80b71 core: pydantic output parser streaming fix (#24415)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-22 18:00:09 -07:00
Eugene Yurtsev
c316361115 core[patch]: Add _api.rename_parameter to support renaming of parameters in functions (#25101)
Add the ability to rename parameters in function signatures:

```python

    @rename_parameter(since="2.0.0", removal="3.0.0", old="old_name", new="new_name")
    def foo(new_name: str) -> str:
        """original doc"""
        return new_name
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-08-22 17:16:31 -07:00
Yusuke Fukasawa
0258cb96fa core[patch]: add additionalProperties recursively to oai function if strict (#25169)
Hello. 
First of all, thank you for maintaining such a great project.

## Description
In https://github.com/langchain-ai/langchain/pull/25123, support for
structured_output is added. However, `"additionalProperties": false`
needs to be included at all levels when a nested object is generated.

error from current code:
https://gist.github.com/fufufukakaka/e9b475300e6934853d119428e390f204
```
BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for response_format 'JokeWithEvaluation': In context=('properties', 'self_evaluation'), 'additionalProperties' is required to be supplied and to be false", 'type': 'invalid_request_error', 'param': 'response_format', 'code': None}}
```

Reference: [Introducing Structured Outputs in the
API](https://openai.com/index/introducing-structured-outputs-in-the-api/)

```json
{
  "model": "gpt-4o-2024-08-06",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful math tutor."
    },
    {
      "role": "user",
      "content": "solve 8x + 31 = 2"
    }
  ],
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "math_response",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "steps": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "explanation": {
                  "type": "string"
                },
                "output": {
                  "type": "string"
                }
              },
              "required": ["explanation", "output"],
              "additionalProperties": false
            }
          },
          "final_answer": {
            "type": "string"
          }
        },
        "required": ["steps", "final_answer"],
        "additionalProperties": false
      }
    }
  }
}
```

In the current code, `"additionalProperties": false` is only added at
the last level.
This PR introduces the `_add_additional_properties_key` function, which
recursively adds `"additionalProperties": false` to the entire JSON
schema for the request.

Twitter handle: `@fukkaa1225`

Thank you!

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-23 00:08:58 +00:00
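A minimal sketch of the recursive stamping; the helper name mirrors the PR's description, but the implementation here is illustrative:

```python
def add_additional_properties_false(schema):
    """Recursively set "additionalProperties": false on every object node."""
    if isinstance(schema, dict):
        if schema.get("type") == "object":
            schema.setdefault("additionalProperties", False)
        for value in schema.values():
            add_additional_properties_false(value)
    elif isinstance(schema, list):
        for item in schema:
            add_additional_properties_false(item)
    return schema


nested = {
    "type": "object",
    "properties": {
        "steps": {
            "type": "array",
            "items": {"type": "object", "properties": {"out": {"type": "string"}}},
        }
    },
}
add_additional_properties_false(nested)
assert nested["properties"]["steps"]["items"]["additionalProperties"] is False
```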
Bagatur
b35ee09b3f infra: xfail pydantic v2 arg to py function (#25686)
Issue to track: #25687
2024-08-22 23:52:57 +00:00
Christophe Bornet
ee98da4f4e core[patch]: Add UP(upgrade) ruff rules (#25358) 2024-08-22 16:29:22 -07:00
William FH
294f7fcb38 core[patch]: Remove different parent run id warning (#25683) 2024-08-22 16:10:35 -07:00
Vadym Barda
46d344c33d core[patch]: support drawing nested subgraphs in draw_mermaid (#25581)
Previously the code was able to only handle a single level of nesting
for subgraphs in mermaid. This change adds support for arbitrary nesting
of subgraphs.
2024-08-22 16:08:49 -07:00
Manuel Jaiczay
1c31234eed community: fix HuggingFacePipeline pipeline_kwargs (#19920)
Fix handling of pipeline_kwargs to prioritize class attribute defaults.

#19770

Co-authored-by: jaizo <manuel.jaiczay@polygons.at>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-08-22 18:29:46 -04:00
Nobuhiko Otoba
4b63a217c2 "community: Fix GithubFileLoader source code", "docs: Fix GithubFileLoader code sample" (#19943)
This PR adds tiny improvements to the `GithubFileLoader` document loader
and its code sample, addressing the following issues:

1. Currently, the `file_extension` argument of `GithubFileLoader` does
not change its behavior at all.
1. The `GithubFileLoader` sample code in
`docs/docs/integrations/document_loaders/github.ipynb` does not work as
it stands.

The respective solutions I propose are the following:

1. Remove `file_extension` argument from `GithubFileLoader`.
1. Specify the branch as `master` (not the default `main`) and rename
`documents` as `document`.

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-08-22 18:24:57 -04:00
Leonid Ganeline
e7abee034e docs: integrations reference updates 4 (#25118)
Added missed references; missed provider pages.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-22 22:16:45 +00:00
Erick Friis
5fb8aa82b9 docs: api ref to new site links in featuretable (#25678) 2024-08-22 21:52:50 +00:00
Bagatur
cf9c484715 standard-tests[patch]: test Message.name (#25677)
Tests:
https://github.com/langchain-ai/langchain/actions/runs/10516092584
2024-08-22 14:47:31 -07:00
Nada Amin
ac7b71e0d7 langchain_community.graphs: Neo4JGraph: prop min_size might be None (#23944)
When I used the Neo4JGraph enhanced_schema=True option, I ran into an
error because a prop min_size of None was compared numerically with an
int.

The fix I applied is similar to the pattern of skipping embeddings
elsewhere in the file.

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-22 20:29:52 +00:00
CastaChick
7d13a2f958 core[patch]: add option to specify the chunk separator in merge_message_runs (#24783)
**Description:**
LLM will stop generating text even in the middle of a sentence if
`finish_reason` is `length` (for OpenAI) or `stop_reason` is
`max_tokens` (for Anthropic).
To obtain longer outputs from LLM, we should call the message generation
API multiple times and merge the results into the text to circumvent the
API's output token limit.
The extra line breaks forced by the `merge_message_runs` function when
seamlessly merging messages can be annoying, so I added the option to
specify the chunk separator.

**Issue:**
No corresponding issues.

**Dependencies:**
No dependencies required.

**Twitter handle:**
@hanama_chem
https://x.com/hanama_chem

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-22 19:46:25 +00:00
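A minimal sketch of the new option, assuming the `chunk_separator` keyword described above:

```python
from langchain_core.messages import AIMessage, merge_message_runs

# Two continuation chunks from separate generation calls.
parts = [AIMessage(content="The answer is"), AIMessage(content=" forty-two.")]

# An empty separator joins them seamlessly instead of with a forced "\n".
merged = merge_message_runs(parts, chunk_separator="")
print(merged[0].content)  # "The answer is forty-two."
```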
basirsedighi
0f3fe44e44 parsed_json is expected to be a list of dictionaries, but it seems to… (#24018)
parsed_json is expected to be a list of dictionaries, but it seems to
be a single dictionary instead. This occurs in
libs/experimental/langchain_experimental/graph_transformers/llm.py,
in process_response.

---------

Co-authored-by: based <basir.sedighi@nris.no>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-22 19:09:43 +00:00
ZhangShenao
8bde04079b patch[experimental] Fix start_index in SemanticChunker (#24761)
- Because chunks are joined by a space, they can't be found in the
original text, so the final `start_index` is very likely to be -1.
- The simplest fix is to use the natural index of the chunk as
`start_index`.
2024-08-22 14:59:40 -04:00
Sanjay Parajuli
6fbd53bc60 docs: Update tool_calling.ipynb (#25434)
**Description:** This part of the documentation didn't explain the
`required` property of function calling. I added an additional line as a
note.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-22 18:55:24 +00:00
William FH
fad6fc866a Rm DeepInfra Breakpoint Comment (#25206)
tbh should rm the print statement too
2024-08-22 14:43:44 -04:00
yahya-mouman
e5bb4cb646 langchain-pinecone: add id to similarity documents results (#25630)
- **Description:** This change adds the ID field, which Pinecone
requires, to the result documents of the similarity search method.
- **Issue:** Lack of document metadata, namely the ID field


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-22 18:33:26 +00:00
Eric Pinzur
01ded5e2f9 community: add metadata filter to CassandraGraphVectorStore (#25663)
- **Description:** 
- Added metadata filtering support to
`langchain_community.graph_vectorstores.cassandra.CassandraGraphVectorStore`
  - Also fixed type conversion issues highlighted by mypy.
- **Dependencies:** 
  - `ragstack-ai-knowledge-store 0.2.0` (released July 23, 2024)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-22 14:27:16 -04:00
Ivan
5b9290a449 Fix UnionType type var replacement (#25566)
[langchain_core] Fix UnionType type var replacement

- Added a types.UnionType to typing.Union mapping

Type replacement caused `TypeError: 'type' object is not subscriptable`
when a union arrived as `types.UnionType`, because the function
`_py_38_safe_origin` returned `types.UnionType` instead of `typing.Union`:

```python
>>> from types import UnionType
>>> from typing import Union, get_origin
>>> type_ = get_origin(str | None)
>>> type_
<class 'types.UnionType'>
>>> UnionType[(str, None)]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'type' object is not subscriptable
>>> Union[(str, None)]
typing.Optional[str]
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-22 14:22:09 -04:00
William FH
8230ba47f3 core[patch]: Improve some error messages and add another test for checking RunnableWithMessageHistory (#25209)
Also add more useful error messages.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-08-22 18:14:27 +00:00
Hasan Kumar
b4fcda7657 langchain: Fix type warnings when passing Runnable as agent to AgentExecutor (#24750)
Fix for https://github.com/langchain-ai/langchain/issues/13075

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-22 14:02:02 -04:00
sslee
61228da1c4 fix typo (#25673) 2024-08-22 17:33:53 +00:00
Leonid Ganeline
d886f4e107 docs: integrations reference update 9 (#25511)
Added missed provider pages. Added missed references and descriptions.
2024-08-22 10:25:41 -07:00
Maurits Bos
3da752c7bb Update pyproject.toml of package openai-functions-agent-gmail to prevent ModuleOrPackageNotFound error (#25597)
I was trying to add this package using langchain-cli (`langchain app add
openai-functions-agent-gmail`), but when I then try to build the whole
project using poetry or pip, it fails with the following
error: `poetry.core.masonry.utils.module.ModuleOrPackageNotFound: No
file/folder found for package openai-functions-agent-gmail`

This was fixed by modifying the pyproject.toml as in this commit.
2024-08-22 17:22:50 +00:00
Leonid Ganeline
624e0747b9 docs: integrations reference updates 10 (#25556)
Added missed provider pages. Added descriptions, links.
2024-08-22 10:21:54 -07:00
Erick Friis
9447925d94 cli: release 0.0.30 (#25672) 2024-08-22 10:21:19 -07:00
Leonid Ganeline
47adc7f32b docs: integrations reference updates 11 (#25598)
Added missed provider pages and links.
2024-08-22 10:19:17 -07:00
Dylan
16fc0a866e docs: Change Pull Request to Merge Request in GitLab notebook (#25649)
- **Description:** In GitLab we call these "merge requests" rather than
"pull requests" so I thought I'd go ahead and update the notebook.
- **Issue:** N/A
- **Dependencies:** none
- **Twitter handle:** N/A

Thanks for creating the tools and notebook to help people work with
GitLab. I thought I'd contribute some minor docs updates here.
2024-08-22 17:15:45 +00:00
mschoenb97IL
e499caa9cd community: Give more context on DeepInfra 500 errors (#25671)
Description: DeepInfra 500 errors have useful information in the text
field that isn't being exposed to the user. I updated the error message
to fix this.

As an example, this code

```
from langchain_community.chat_models import ChatDeepInfra
from langchain_core.messages import HumanMessage

model = "meta-llama/Meta-Llama-3-70B-Instruct"
deepinfra_api_token = "..."

model = ChatDeepInfra(model=model, deepinfra_api_token=deepinfra_api_token)

messages = [HumanMessage("All work and no play makes Jack a dull boy\n" * 9000)]
response = model.invoke(messages)
```

Currently gives this error:
```
langchain_community.chat_models.deepinfra.ChatDeepInfraException: DeepInfra Server: Error 500
```

This change would give the following error:
```
langchain_community.chat_models.deepinfra.ChatDeepInfraException: DeepInfra Server error status 500: {"error":{"message":"Requested input length 99009 exceeds maximum input length 8192"}}
```
2024-08-22 10:10:51 -07:00
Brian Sam-Bodden
29c873dd69 [docs]: update Redis (langchain-redis) documentation notebooks (vectorstore, llm caching, chat message history) (#25113)
- **Description:** Adds notebooks for Redis Partner Package
(langchain-redis)
- **Issue:** N/A
- **Dependencies:** None
- **Twitter handle:** `@bsbodden` and `@redis`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-22 11:53:02 -04:00
Rajendra Kadam
4ff2f4499e community: Refactor PebbloRetrievalQA (#25583)
**Refactor PebbloRetrievalQA**
  - Created `APIWrapper` and moved API logic into it.
  - Created smaller functions/methods for better readability.
  - Properly read environment variables.
  - Removed unused code.
  - Updated models

**Issue:** NA
**Dependencies:** NA
**tests**:  NA
2024-08-22 11:51:21 -04:00
Rajendra Kadam
1f1679e960 community: Refactor PebbloSafeLoader (#25582)
**Refactor PebbloSafeLoader**
  - Created `APIWrapper` and moved API logic into it.
  - Moved helper functions to the utility file.
  - Created smaller functions and methods for better readability.
  - Properly read environment variables.
  - Removed unused code.

**Issue:** NA
**Dependencies:** NA
**tests**:  Updated
2024-08-22 11:46:52 -04:00
maang-h
5e3a321f71 docs: Add ChatZhipuAI tool calling and structured output docstring (#25669)
- **Description:** Add `ChatZhipuAI` tool calling and structured output
docstring.
2024-08-22 10:34:41 -04:00
Krishna Kulkarni
820da64983 limit the most recent documents to fetch from MongoDB database. (#25435)
- **Description:** Added a doc_limit parameter, which limits the number
of most recent documents fetched from the MongoDB database.
- **Dependencies:** None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-22 10:33:45 -04:00
ccurme
67b6e6c2e3 docs[patch]: update AWS integration docs (#25631)
De-beta ChatBedrockConverse.
2024-08-22 09:22:03 -04:00
Swastik-Swarup-Dash
6247259438 update_readme (#25665)
Updated the LangChain Expression Language (LCEL) section for easier understanding.
2024-08-22 13:18:32 +00:00
Noah Mayerhofer
0091947efd community: add retry for session expired exception in neo4j (#25660)
Description: The neo4j driver can raise a SessionExpired error, which is
considered a retriable error. If a query fails with a SessionExpired
error, this change retries every query once. This change will make the
neo4j integration less flaky.
Twitter handle: noahmay_
2024-08-22 13:07:36 +00:00
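A minimal sketch of the retry-once behavior, with stand-ins for the driver's session and exception:

```python
class SessionExpired(Exception):
    """Stand-in for neo4j.exceptions.SessionExpired."""


class FlakySession:
    def __init__(self):
        self.calls = 0

    def run(self, query):
        self.calls += 1
        if self.calls == 1:  # fail the first attempt to show the retry
            raise SessionExpired("session expired")
        return [{"ok": True}]


def query_with_retry(session, query):
    try:
        return session.run(query)
    except SessionExpired:
        # Retry exactly once; a second failure propagates to the caller.
        return session.run(query)


print(query_with_retry(FlakySession(), "MATCH (n) RETURN n LIMIT 1"))
```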
Erick Friis
e958f76160 docs: migration guide nits (#25600) 2024-08-22 04:24:34 +00:00
Yuki Watanabe
3981d736df databricks: Add partner package directory and ChatDatabricks implementation (#25430)
### Summary

Create `langchain-databricks` as a new partner package. This PR does
not migrate all of the existing Databricks integrations, but the package
will eventually contain:

* `ChatDatabricks` (implemented in this PR)
* `DatabricksVectorSearch`
* `DatabricksEmbeddings`
* ~`UCFunctionToolkit`~ (will be done after UC SDK work, which
drastically simplifies the implementation)

Also, this PR does not add integration tests yet. This will be added
once the Databricks test workspace is ready.

Tagging @efriis as POC


### Tracker
[✍️] Create a package and migrate ChatDatabricks
[ ] Migrate DatabricksVectorSearch, DatabricksEmbeddings, and their docs
~[ ] Migrate UCFunctionToolkit and its doc~
[ ] Add provider document and update README.md
[ ] Add integration tests and set up secrets (after moved to an external
package)
[ ] Add deprecation note to the community implementations.

---------

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-21 17:19:28 -07:00
Scott Hurrey
fb1d67edf6 box: add retrievers and fix docs (#25633)

**Description:** Adding `BoxRetriever` for langchain_box. This retriever
handles two use cases:
* Retrieve all documents that match a full-text search
* Retrieve the answer to a Box AI prompt as a Document

**Twitter handle:** @BoxPlatform


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-21 22:40:40 +00:00
Bagatur
4f347cbcb9 docs: link Versions in intro (#25640) 2024-08-21 21:02:25 +00:00
jakerachleff
4591bc0b01 Use 1.101 instead of 1.100 bc 1.100 was yanked (#25638) 2024-08-21 21:02:01 +00:00
Bagatur
f535e8a99e docs: ls similar examples header (#25642) 2024-08-21 20:50:24 +00:00
Erick Friis
766b650fdc chroma: add back fastapi optional dep (#25641) 2024-08-21 20:00:47 +00:00
Bagatur
9daff60698 docs: fix openai api ref (#25639) 2024-08-21 12:55:17 -07:00
Erick Friis
c8be0a9f70 partners/unstructured: release 0.1.2 (#25637) 2024-08-21 12:53:55 -07:00
Bagatur
f4b3c90886 docs: add prereq commas (#25626) 2024-08-21 12:38:53 -07:00
Christophe Bornet
b71ae52e65 [unstructured][security] Bump unstructured version (#25364)
This ensures version 0.15.7+ is pulled. 
This version of unstructured uses a version of NLTK >= 3.8.2 that has a
fix for a critical CVE:
https://github.com/advisories/GHSA-cgvx-9447-vcch
2024-08-21 12:25:24 -07:00
Bagatur
39c44817ae infra: test convert_message (#25632) 2024-08-21 18:24:06 +00:00
Bagatur
4feda41ab6 docs: ls how to link (#25624) 2024-08-21 10:18:08 -07:00
Bagatur
71c2ec6782 docs: langsmith few shot prereq (#25623) 2024-08-21 16:44:25 +00:00
Bagatur
628574b9c2 core[patch]: Release 0.2.34 (#25622) 2024-08-21 16:26:51 +00:00
Bagatur
0bc3845e1e core[patch]: support oai dicts as messages (#25621)
and update langsmtih example selector docs
2024-08-21 16:13:15 +00:00
Bagatur
a78843bb77 docs: how to use langsmith few shot (#25601)
Requires langsmith 0.1.101 release
2024-08-21 08:12:42 -07:00
ccurme
10a2ce2a26 together[patch]: use mixtral in standard integration tests (#25619)
Mistral 7B occasionally fails tool-calling tests. Updating to Mixtral
appears to improve this.
2024-08-21 14:26:25 +00:00
Mikhail Khludnev
d457d7d121 docs: Update qdrant.ipynb "BM25".lower() (#25616)
Otherwise a `KeyError` is raised from `fastembed`.
2024-08-21 09:45:00 -04:00
Dristy Srivastava
b002702af6 [Community][minor]: Updating metadata with full_path in SharePoint loader (#25593)
- **Description:** Updating metadata for the SharePoint loader with the full
path, i.e., `webUrl`
- **Issue:** NA
- **Dependencies:** NA
- **Tests:** NA
- **Docs** NA

Co-authored-by: dristy.cd <dristy@clouddefense.io>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-21 13:10:14 +00:00
ZhangShenao
34d0417eb5 Improvement[Doc] Improve api doc of PineconeVectorStore (#25605)
Complete missing arguments in api doc of `PineconeVectorStore`.
2024-08-21 08:58:00 -04:00
wangda
e7d6b25653 docs:Correcting spelling mistakes (#25612) 2024-08-21 08:49:12 -04:00
Scott Hurrey
55fd2e2158 box: add langchain box package and DocumentLoader (#25506)
- Description: Adding new package: `langchain-box`:

* `langchain_box.document_loaders.BoxLoader` — DocumentLoader
functionality
* `langchain_box.utilities.BoxAPIWrapper` — Box-specific code
* `langchain_box.utilities.BoxAuth` — Helper class for Box
authentication
* `langchain_box.utilities.BoxAuthType` — enum used by BoxAuth class

- Twitter handle: @boxplatform


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-21 02:23:43 +00:00
Bagatur
be27e1787f docs: few-shot conceptual guide (#25596)
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: jakerachleff <jake@langchain.dev>
2024-08-21 00:39:50 +00:00
Erick Friis
f878df404f partners/chroma: release 0.1.3 (#25599) 2024-08-20 23:24:32 +00:00
Erick Friis
60cf49a618 chroma: ban chromadb sdk versions 0.5.4 and 0.5.5 due to pydantic bug (#25586)
also remove some unused dependencies (fastapi) and unused test/lint/dev
dependencies (community, openai, textsplitters)

chromadb 0.5.4 introduced usage of `model_fields`, which is pydantic v2
specific; it is also present in 0.5.5.
2024-08-20 23:21:38 +00:00
Erick Friis
e37caa9b9a core: fix fallback context overwriting (#25550)
fixes #25337
2024-08-20 16:07:12 -07:00
Bagatur
3e296e39c8 docs: update examples in api ref (#25589) 2024-08-20 11:08:24 -07:00
Isaac Francisco
d40bdd6257 docs: more indexing of document loaders (#25500)
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-08-20 17:54:42 +00:00
Bagatur
8a71f1b41b core[minor]: add langsmith document loader (#25493)
needs tests
2024-08-20 10:22:14 -07:00
Bob Merkus
8e3e532e7d docs: ollama doc update (toolcalling, install, notebook examples) (#25549)
The new `langchain-ollama` package seems pretty well implemented, but I
noticed the docs were still outdated, so I decided to fix them up a bit.

- Llama3.1 was released on the 23rd of July;
https://ai.meta.com/blog/meta-llama-3-1/
- Ollama supports tool calling since 25th of July;
https://ollama.com/blog/tool-support
- LangChain Ollama partner package was released 1st of august;
https://pypi.org/project/langchain-ollama/

**Problem**: Docs note langchain-community instead of langchain-ollama

**Solution**: Update docs to
https://python.langchain.com/v0.2/docs/integrations/chat/ollama/


**Problem**: OllamaFunctions is deprecated, as noted on
[Integrations](https://python.langchain.com/v0.2/docs/integrations/chat/ollama_functions/):
This was an experimental wrapper that attempts to bolt-on tool calling
support to models that do not natively support it. The [primary Ollama
integration](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/) now
supports tool calling, and should be used instead.

**Solution**: Delete old notebook from repo, update the existing one
with @tool decorator + pydantic examples to the notebook


**Problem**: Llama3.1 was released, while the llama3-groq-tool-call fine-tune
is noted in the notebooks.

**Solution**: update docs + notebooks to llama3.1 (which has improved
tool calling support)


**Problem**: Install instructions are incomplete; there is no
information on downloading a model and/or running the Ollama server

**Solution**: Add simple instructions to start the Ollama service and
pull a model (for tool calling)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-20 09:20:59 -04:00
Jabir
12e490ea56 Update azuresearch.py (#25577)
This allows complex-type metadata to be returned. The current
implementation throws an error when dealing with nested metadata.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-20 12:53:30 +00:00
Abraham Omorogbe
498a482e76 docs: Adding Azure Database for PostgreSQL docs (#25560)
This PR shows support for the Azure Database for PostgreSQL Vector
Store and Memory.

[Azure Database for PostgreSQL - Flexible
Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/service-overview)
[Azure Database for PostgreSQL pgvector
extension](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-use-pgvector)

**Description:** Added vector store and memory usage documentation for
Azure Database for PostgreSQL
 **Twitter handle:** [@_aiabe](https://x.com/_aiabe)

---------

Co-authored-by: Abeomor <{ID}+{username}@users.noreply.github.com>
2024-08-20 12:01:32 +00:00
Leonid Ganeline
d324fd1821 docs: added Constitutional AI references (#25553)
Added reference to the source paper.
2024-08-20 08:00:58 -04:00
Bagatur
4bd005adb6 core[patch]: Allow bound models as token_counter in trim_messages (#25563) 2024-08-20 00:21:22 -07:00
Erick Friis
e01c6789c4 core,community: add beta decorator to missed GraphVectorStore extensions (#25562) 2024-08-19 17:29:09 -07:00
Erick Friis
dd2d094adc infra: remove huggingface from ci tree (#25559) 2024-08-19 22:48:26 +00:00
Bagatur
6b98207eda infra: test chat prompt ser/des (#25557) 2024-08-19 15:27:36 -07:00
ccurme
c5bf114c0f together, standard-tests: specify tool_choice in standard tests (#25548)
Here we allow standard tests to specify a value for `tool_choice` via a
`tool_choice_value` property, which defaults to None.

Chat models [available in
Together](https://docs.together.ai/docs/chat-models) have issues passing
standard tool calling tests:
- llama 3.1 models currently [appear to rely on user-side
parsing](https://docs.together.ai/docs/llama-3-function-calling) in
Together;
- Mixtral-8x7B and Mistral-7B (currently tested) consistently do not
call tools in some tests.

Specifying tool_choice also lets us remove an existing `xfail` and use a
smaller model in Groq tests.
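
A rough sketch of how a test class might pin the new property, assuming the standard-tests base class of the time; the class name and the `"any"` value are illustrative:

```python
from langchain_standard_tests.integration_tests import ChatModelIntegrationTests

class TestMixtralStandard(ChatModelIntegrationTests):
    @property
    def tool_choice_value(self):
        # Force the model to call a tool instead of leaving the choice to it
        # (the property defaults to None, i.e. no tool_choice is passed).
        return "any"
```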
2024-08-19 16:37:36 -04:00
maang-h
015ab91b83 community[patch]: Add ToolMessage for ChatZhipuAI (#25547)
- **Description:** Add ToolMessage for `ChatZhipuAI` to solve the issue
#25490
2024-08-19 11:26:38 -04:00
ccurme
5a3aaae6dc groq[patch]: update model used for llama tests (#25542)
`llama-3.1-8b-instant` often fails some of the tool calling standard
tests. Here we update to `llama-3.1-70b-versatile`.
2024-08-19 13:58:06 +00:00
Mohammad Mohtashim
75c3c81b8c [Community]: Fix - Open AI Whisper client.audio.transcriptions returning Text Object which raises error (#25271)
- **Description:** The following
[line](fd546196ef/libs/community/langchain_community/document_loaders/parsers/audio.py (L117))
in `OpenAIWhisperParser` returns a text object for some odd reason, despite
the official documentation saying it should return a `Transcript` instance,
which should have the text attribute. But for the example given in the issue,
and even when I tried running it on my own, I was directly getting the text.
This small PR accounts for that.
 - **Issue:** #25218
 

I was able to replicate the error even without the GenericLoader, as shown
below; the issue was with `OpenAIWhisperParser`:

```python
parser = OpenAIWhisperParser(api_key="sk-fxxxxxxxxx",
                                            response_format="srt",
                                            temperature=0)

list(parser.lazy_parse(Blob.from_path('path_to_file.m4a')))
```
2024-08-19 09:36:42 -04:00
Thin red line 未来产品经理
0f7b8adddf fix issue: cannot use document_variable_name to override context in create_stuff_documents_chain (#25531)
- **Description:** add `document_variable_name` to the function
`_validate_prompt` in `create_stuff_documents_chain`.
- **Issue:** according to the description of the
`create_stuff_documents_chain` function, the parameter
`document_variable_name` can be used to override "context" in the
prompt. But in `_validate_prompt`, `DOCUMENTS_KEY` (whose value is always
"context") was still used to check whether the prompt is valid, so even
when a user overrides it via `document_variable_name`, the code still
looks for "context" in the prompt and finally reports an error. This PR
uses `document_variable_name` in place of `DOCUMENTS_KEY`; its default
value is "context", the same as `DOCUMENTS_KEY`, but it can be
overridden by users.
    - **Twitter handle:** https://x.com/xjr199703


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-19 13:33:19 +00:00
ccurme
09c0823c3a docs: update summarization guides (#25408) 2024-08-19 13:29:25 +00:00
maang-h
32f5147523 docs: Fix QianfanLLMEndpoint and Tongyi input text (#25529)
- **Description:** Fix `QianfanLLMEndpoint` and `Tongyi` input text.
2024-08-19 09:23:09 -04:00
ZhangShenao
4255a30f20 Improvement[Community] Improve api doc for SingleFileFacebookMessengerChatLoader (#25536)
Delete redundant args in api doc
2024-08-19 09:00:21 -04:00
Bagatur
49dea06af1 docs: fix Agent deprecation msg (#25464) 2024-08-18 19:15:52 +00:00
Hassan El Mghari
937b3904eb together[patch]: update base url (#25524)
Updated the Together base URL from `.ai` to `.xyz` since some customers
have reported problems with `.ai`.
2024-08-18 10:48:30 -07:00
gbaian10
bda3becbe7 docs: add prompt to install beautifulsoup4. (#25518)
fix: #25482

- **Description:**
Add a prompt to install beautifulsoup4 in places where `from
langchain_community.document_loaders import WebBaseLoader` is used.
- **Issue:** #25482
2024-08-17 23:23:24 -07:00
gbaian10
f6e6a17878 docs: add prompt to install nltk (#25519)
fix: #25473 

- **Description:** add prompt to install nltk
- **Issue:** #25473
2024-08-17 23:22:49 -07:00
Chengzu Ou
c1bd4e05bc docs: fix Databricks Vector Search demo notebook (#25504)
**Description:** This PR fixes an issue in the demo notebook of
Databricks Vector Search in "Work with Delta Sync Index" section.

**Issue:** N/A

**Dependencies:** N/A

---------

Co-authored-by: Chengzu Ou <chengzu.ou@databrick.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-16 20:24:30 +00:00
Isaac Francisco
a2e90a5a43 add embeddings integration tests (#25508) 2024-08-16 13:20:37 -07:00
Bagatur
a06818a654 openai[patch]: update core dep (#25502) 2024-08-16 18:30:17 +00:00
Bagatur
df98552b6f core[patch]: Release 0.2.33 (#25498) 2024-08-16 11:18:54 -07:00
ccurme
b83f1eb0d5 core, partners: implement standard tracing params for LLMs (#25410) 2024-08-16 13:18:09 -04:00
Bagatur
9f0c76bf89 openai[patch]: Release 0.1.22 (#25496) 2024-08-16 16:53:04 +00:00
ccurme
01ecd0acba openai[patch]: fix json mode for Azure (#25488)
https://github.com/langchain-ai/langchain/issues/25479
https://github.com/langchain-ai/langchain/issues/25485

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-16 09:50:50 -07:00
Eugene Yurtsev
1fd1c1dca5 docs: use .invoke rather than __call__ in openai integration notebook (#25494)
Documentation should be using .invoke()
2024-08-16 15:59:18 +00:00
Bagatur
253ceca76a docs: fix mimetype parser docstring (#25463) 2024-08-15 16:16:52 -07:00
Eugene Yurtsev
e18511bb22 core[minor], anthropic[patch]: Upgrade @root_validator usage to be consistent with pydantic 2 (#25457)
anthropic: Upgrade `@root_validator` usage to be consistent with
pydantic 2
core: support looking up multiple keys from env in from_env factory
2024-08-15 20:09:34 +00:00
Eugene Yurtsev
34da8be60b pinecone[patch]: Upgrade @root_validators to be consistent with pydantic 2 (#25453)
Upgrade root validators for pydantic 2 migration
2024-08-15 19:45:14 +00:00
Eugene Yurtsev
b297af5482 voyageai[patch]: Upgrade root validators for pydantic 2 (#25455)
Update @root_validators to be consistent with pydantic 2 semantics
2024-08-15 15:30:41 -04:00
Eugene Yurtsev
4cdaca67dc ai21[patch]: Upgrade @root_validators for pydantic 2 migration (#25454)
Upgrade @root_validators usage to match pydantic 2 semantics
2024-08-15 14:54:08 -04:00
Eugene Yurtsev
d72a08a60d groq[patch]: Update root validators for pydantic 2 migration (#25402) 2024-08-15 18:46:52 +00:00
Leonid Ganeline
8eb63a609e docs: arxiv page update (#25450)
Added arXiv papers that use `LangGraph` or `LangSmith`. Improved the
page formatting.
2024-08-15 14:30:35 -04:00
Isaac Francisco
5150ec3a04 [experimental]: minor fix to open assistants code (#24682) 2024-08-15 17:50:57 +00:00
Bagatur
2b4fbcb4b4 docs: format oai embeddings docstring (#25448) 2024-08-15 16:57:54 +00:00
Eugene Yurtsev
eb3870e9d8 fireworks[patch]: Upgrade @root_validators to be pydantic 2 compliant (#25443)
Update @root_validators to be pydantic 2 compliant
2024-08-15 16:56:48 +00:00
William FH
75ae585deb Merge support for group manager (#25360) 2024-08-15 09:56:31 -07:00
Eugene Yurtsev
b7c070d437 docs[patch]: Update code that checks API keys (#25444)
Check whether the API key is already in the environment

Update:

```python
import getpass
import os

os.environ["DATABRICKS_HOST"] = "https://your-workspace.cloud.databricks.com"
os.environ["DATABRICKS_TOKEN"] = getpass.getpass("Enter your Databricks access token: ")
```

To:

```python
import getpass
import os

os.environ["DATABRICKS_HOST"] = "https://your-workspace.cloud.databricks.com"
if "DATABRICKS_TOKEN" not in os.environ:
    os.environ["DATABRICKS_TOKEN"] = getpass.getpass(
        "Enter your Databricks access token: "
    )
```

grit migration:

```
engine marzano(0.1)
language python

`os.environ[$Q] = getpass.getpass("$X")` as $CHECK where {
    $CHECK <: ! within if_statement(),
    $CHECK => `if $Q not in os.environ:\n    $CHECK`
}
```
2024-08-15 12:52:37 -04:00
Bagatur
60b65528c5 docs: fix api ref mod links in pkg page (#25447) 2024-08-15 16:52:12 +00:00
Eugene Yurtsev
2ef9d12372 mistralai[patch]: Update more @root_validators for pydantic 2 compatibility (#25446)
Update @root_validators in mistralai integration for pydantic 2 compatibility
2024-08-15 12:44:42 -04:00
Eugene Yurtsev
6910b0b3aa docs[patch]: Fix integration notebook for Fireworks llm (#25442)
Fix integration notebook
2024-08-15 12:42:33 -04:00
Eugene Yurtsev
831708beb7 together[patch]: Update @root_validator for pydantic 2 compatibility (#25423)
This PR updates usage of @root_validator to be compatible with pydantic 2.
2024-08-15 11:27:42 -04:00
Eugene Yurtsev
a114255b82 ai21[patch]: Update @root_validators for pydantic2 migration (#25401)
Update @root_validators for pydantic 2 migration.
2024-08-15 11:26:44 -04:00
Eugene Yurtsev
6f68c8d6ab mistralai[patch]: Update root validator for compatibility with pydantic 2 (#25403) 2024-08-15 11:26:24 -04:00
ccurme
8afbab4cf6 langchain[patch]: deprecate various chains (#25310)
- [x] NatbotChain: move to community, deprecate langchain version.
Update to use `prompt | llm | output_parser` instead of LLMChain.
- [x] LLMMathChain: deprecate + add langgraph replacement example to API
ref
- [x] HypotheticalDocumentEmbedder (retriever): update to use `prompt |
llm | output_parser` instead of LLMChain
- [x] FlareChain: update to use `prompt | llm | output_parser` instead
of LLMChain
- [x] ConstitutionalChain: deprecate + add langgraph replacement example
to API ref
- [x] LLMChainExtractor (document compressor): update to use `prompt |
llm | output_parser` instead of LLMChain
- [x] LLMChainFilter (document compressor): update to use `prompt | llm
| output_parser` instead of LLMChain
- [x] RePhraseQueryRetriever (retriever): update to use `prompt | llm |
output_parser` instead of LLMChain
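
A minimal sketch of the `prompt | llm | output_parser` composition used throughout the checklist above; the prompt and model choice are illustrative, not taken from any one of the chains:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Rephrase this search query: {question}")
llm = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | llm | StrOutputParser()  # replaces LLMChain(prompt=..., llm=...)

print(chain.invoke({"question": "weather sf tomorrow?"}))
```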
2024-08-15 10:49:26 -04:00
Luke
66e30efa61 experimental: Fix divide by 0 error (#25439)
Within the semantic chunker, when calling `_threshold_from_clusters`
there is the possibility of a divide-by-zero error if
`number_of_chunks` is equal to the length of `distances`.

The fix simply checks whether these values match, preventing the error
and enabling chunking to continue.
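
A simplified, illustrative sketch of the guard (not the library's exact code): linear interpolation divides by the difference of its reference points, so coinciding points must be handled explicitly:

```python
def interpolate_threshold(x: float, x1: float, y1: float, x2: float, y2: float) -> float:
    if x2 == x1:
        # The reference points coincide; interpolation is undefined here,
        # so return a known endpoint instead of dividing by zero.
        return y2
    return y1 + ((y2 - y1) / (x2 - x1)) * (x - x1)
```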
2024-08-15 14:46:30 +00:00
ccurme
ba167dc158 community[patch]: update connection string in azure cosmos integration test (#25438) 2024-08-15 14:07:54 +00:00
Eugene Yurtsev
44f69063b1 docs[patch]: Fix a few typos in the chat integration docs for TogetherAI (#25424)
Fix a few minor typos
2024-08-15 09:48:36 -04:00
Isaac Francisco
f18b77fd59 [docs]: pdf loaders (#25425) 2024-08-14 21:44:57 -07:00
Isaac Francisco
966b408634 [docs]: doc loader changes (#25417) 2024-08-14 19:46:33 -07:00
ccurme
bd261456f6 langchain: bump core to 0.2.32 (#25421) 2024-08-15 00:00:42 +00:00
Bagatur
ec8ffc8f40 core[patch]: Release 0.2.32 (#25420) 2024-08-14 15:56:56 -07:00
Bagatur
2494cecabf core[patch]: tool import fix (#25419) 2024-08-14 22:54:13 +00:00
ccurme
df632b8cde langchain: bump min core version (#25418) 2024-08-14 22:51:35 +00:00
ccurme
1050e890c6 langchain: release 0.2.14 (#25416)
Fixes https://github.com/langchain-ai/langchain/issues/25413
2024-08-14 22:29:39 +00:00
Isaac Francisco
c4779f5b9c [docs]: sitemaploader update (#25363) 2024-08-14 15:27:40 -07:00
gbaian10
0a99935794 docs: remove the extra period in docstring (#25414)
Remove the period after the hyperlink in the docstring of
BaseChatOpenAI.with_structured_output.

I have repeatedly copied the extra period at the end of the hyperlink,
which results in a "Page not found" page when pasted into the browser.
2024-08-14 18:07:15 -04:00
Isaac Francisco
63aba3fe5b [docs]: link fix directory loader (#25411) 2024-08-14 20:58:54 +00:00
Bagatur
dc80be5efe docs: fix deprecated functions table (#25409) 2024-08-14 12:25:39 -07:00
Erick Friis
ab29ee79a3 docs: fix tool index (#25404) 2024-08-14 18:36:41 +00:00
Werner van der Merwe
1d3f7231b8 fix: typo where github should be gitlab (#25397)
**PR title**: "GitLabToolkit: fix typo"
    - **Description:** fix typo where GitHub should have been GitLab
    - **Dependencies:** None
2024-08-14 18:36:25 +00:00
Bagatur
a58d4ba340 core[patch]: Release 0.2.31 (#25388) 2024-08-14 11:26:49 -07:00
Bagatur
d178fb9dc3 docs: fix api ref package tables (#25400) 2024-08-14 10:40:16 -07:00
Bagatur
414154fa59 experimental[patch]: refactor rl chain structure (#25398)
Can't have a class and a function with the same name but different
capitalization in the same file for API reference building.
2024-08-14 17:09:43 +00:00
Flávio Knob
94c9cb7321 Update document_loader_custom.ipynb (#25393)
Fix typo

2024-08-14 12:33:21 -04:00
Jacob Lee
012929551c docs[patch]: Hide deprecated integration pages (#25389) 2024-08-14 09:17:39 -07:00
Bagatur
63c483ea01 standard-tests: import fix (#25395) 2024-08-14 09:13:56 -07:00
Bagatur
eec7bb4f51 anthropic[patch]: Release 0.1.23 (#25394) 2024-08-14 09:03:39 -07:00
Flávio Knob
f0f125dac7 Update document_loader_custom.ipynb (#25391)
Fix typo and some `callout` tags

2024-08-14 15:07:42 +00:00
Eugene Yurtsev
f4196f1fb8 ollama[patch]: Update extra in ollama package (#25383)
Backwards compatible change that converts pydantic extras to literals
which is consistent with pydantic 2 usage.
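
A before/after sketch of the conversion on a toy model, assuming pydantic v1-style `Config` (as vendored via `pydantic.v1` on pydantic 2 installs):

```python
from pydantic.v1 import BaseModel, Extra

class Before(BaseModel):
    class Config:
        extra = Extra.forbid  # enum form

class After(BaseModel):
    class Config:
        extra = "forbid"  # string literal, consistent with pydantic 2 usage
```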
2024-08-14 10:30:01 -04:00
Chengyu Yan
d0ad713937 core: fix issue #24660, solve error messages about ValueError when using model with history (#25183)
- **Description:**
This PR resolves the `ValueError` error messages that appear when using a
model with history.
Details in #24660.
#22933 caused
`langchain_core.runnables.history.RunnableWithMessageHistory._get_output_messages`
to miss the type check of `output_val` when `output_val` is `False`. After
running `RunnableWithMessageHistory._is_not_async`, `output` is `False`.

249945a572/libs/core/langchain_core/runnables/history.py (L323-L334)

15a36dd0a2/libs/core/langchain_core/runnables/history.py (L461-L471)
~~I suggest that `_get_output_messages` return empty list when
`output_val == False`.~~
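
An illustrative sketch of the pitfall (not LangChain's actual code): a bare truthiness check conflates a legitimate `False` output with "no output", so the type check below it never runs:

```python
def get_output_messages_buggy(output_val):
    if not output_val:  # False, None, [] and "" are all conflated here
        return []
    ...

def get_output_messages_fixed(output_val):
    if not isinstance(output_val, (str, list)):  # an explicit check surfaces False
        raise ValueError(f"Expected str or list, got {type(output_val)}")
    ...
```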

- **Issue**:
  - #24660

- **Dependencies:** No change.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-08-14 14:26:22 +00:00
Jacob Lee
ddd7919f6a docs[patch]: Add conceptual guide links to integration index pages (#25387) 2024-08-14 07:14:24 -07:00
Bagatur
493e474063 docs: updated api reference (#25172)
- Move the API reference into the vercel build
- Update api reference organization and styling
2024-08-14 07:00:17 -07:00
Leonid Ganeline
4a812e3193 docs: integrations references update (#25217)
Added missing provider pages. Fixed formatting and added descriptions and
links.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-14 13:58:38 +00:00
Eugene Yurtsev
5f5e8c9a60 huggingface[patch], pinecone[patch], fireworks[patch], mistralai[patch], voyageai[patch], togetherai[patch]: convert Pydantic extras to literals (#25384)
Backwards compatible change that converts pydantic extras to literals
which is consistent with pydantic 2 usage.

- fireworks
- voyage ai
- mistralai
- together ai
- hugging face
- pinecone
2024-08-14 09:55:30 -04:00
Eugene Yurtsev
d00176e523 openai[patch]: Update extra to match pydantic 2 (#25382)
Backwards compatible change that converts pydantic extras to literals
which is consistent with pydantic 2 usage.
2024-08-14 09:55:18 -04:00
Eugene Yurtsev
dc51cc5690 core[minor]: Prevent PydanticOutputParser from encoding schema as ASCII (#25386)
This allows users to provide parameter descriptions in the pydantic
models in other languages.

Continuing this PR: https://github.com/langchain-ai/langchain/pull/24809
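
An illustrative reduction of the change, assuming it hinges on `json.dumps`'s `ensure_ascii` flag: non-ASCII descriptions survive only when ASCII escaping is disabled:

```python
import json

schema = {"description": "用户的名字"}

print(json.dumps(schema))                      # {"description": "\u7528\u6237\u7684\u540d\u5b57"}
print(json.dumps(schema, ensure_ascii=False))  # {"description": "用户的名字"}
```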
2024-08-14 13:54:31 +00:00
ccurme
27690506d0 multiple: update removal targets (#25361) 2024-08-14 09:50:39 -04:00
Ikko Eltociear Ashimine
4029f5650c docs: update clarifai.ipynb (#25373)
Intialize -> Initialize
2024-08-14 09:20:17 -04:00
Erick Friis
10e6725a7e docs: tools index table (#25370) 2024-08-14 02:38:03 +00:00
Harrison Chase
967b6f21f6 docs: improve document loaders index (#25365)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-14 01:48:48 +00:00
Erick Friis
4a78be7861 docs: remove sidebar comment (#25369) 2024-08-14 01:47:12 +00:00
Eugene Yurtsev
d6c180996f docs[patch]: Fix typo in CohereEmbeddings integration docs (#25367)
Fix typo
2024-08-14 01:18:54 +00:00
Eugene Yurtsev
93dcc47463 docs: Partial integration update for cohere embeddings (#25250)
This can be finished after the following issue is resolved:

https://github.com/langchain-ai/langchain-cohere/issues/81

Related to: https://github.com/langchain-ai/langchain/issues/24856

```json
[
  {
    "provider": "cohere",
    "js": true,
    "local": false,
    "serializable": false
  }
]
```

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-08-14 00:53:13 +00:00
Eugene Yurtsev
27def6bddb docs[patch]: Update integration docs for AzureOpenAIEmbeddings (#25311)
https://github.com/langchain-ai/langchain/issues/24856

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:33:13 +00:00
Eugene Yurtsev
b4e3bdb714 docs: Update nomic AI embeddings integration docs (#25308)
Issue: https://github.com/langchain-ai/langchain/issues/24856

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:32:07 +00:00
Eugene Yurtsev
f82c3f622a docs: Update AI21Embeddings Integration docs (#25298)
Update AI21 Integration docs

Issue: https://github.com/langchain-ai/langchain/issues/24856

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:30:16 +00:00
Eugene Yurtsev
d55d99222b docs: update integration docs for mistral ai embedding model (#25253)
Related issue: https://github.com/langchain-ai/langchain/issues/24856

```json
[
  {
    "provider": "mistralai",
    "js": true,
    "local": false,
    "serializable": false,
    "native_async": true
  }
]
```

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:25:36 +00:00
Eugene Yurtsev
0f6217f507 docs: together ai embeddings integration docs (#25252)
Update together AI embedding integration docs

Related issue: https://github.com/langchain-ai/langchain/issues/24856

```json
[
  {
    "provider": "together",
    "js": true,
    "local": false,
    "serializable": false
  }
]
```

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:24:02 +00:00
Eugene Yurtsev
8645a49f31 docs: Update integration docs for OllamaEmbeddingsModel (#25314)
Issue: https://github.com/langchain-ai/langchain/issues/24856

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:23:05 +00:00
Eugene Yurtsev
a4ef830480 docs: update integration docs for openai embeddings (#25249)
Related issue: https://github.com/langchain-ai/langchain/issues/24856

```json
{
  "provider": "openai",
  "js": true,
  "local": false,
  "serializable": false,
  "async_native": true
}
```

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-08-14 00:21:36 +00:00
Eugene Yurtsev
b1aed44540 docs: Updating integration docs for Fireworks Embeddings (#25247)
Providers:
* fireworks

See related issue:
* https://github.com/langchain-ai/langchain/issues/24856

Features:

```json
[
  {
    "provider": "fireworks",
    "js": true,
    "local": false,
    "serializable": false
  }
]
```

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-08-13 17:04:18 -07:00
Isaac Francisco
f4ffd692a3 [docs]: standardize doc loader doc strings (#25325) 2024-08-13 23:18:56 +00:00
Isaac Francisco
e0bbb81d04 [docs]: standardize tool docstrings (#25351) 2024-08-13 16:10:00 -07:00
Erick Friis
d5b548b4ce docs: index pages, sidebars (#25316) 2024-08-13 15:52:51 -07:00
Isaac Francisco
0478f7f5e4 [docs]: LLM integration pages (#25005) 2024-08-13 14:50:45 -07:00
thedavgar
9d08369442 community: fix AzureSearch vectorstore asynchronous methods (#24921)
**Description**
Fix the asynchronous methods that retrieve documents from the AzureSearch
VectorStore. The previous changes from [this
commit](ffe6ca986e)
created similar code for the synchronous and asynchronous methods, but the
asynchronous client returns an asynchronous iterator,
"AsyncSearchItemPaged", as described in issue #24740.
To solve this issue, the synchronous iterators in the asynchronous methods
were changed to asynchronous iterators.

@chrislrobert said in [this
comment](https://github.com/langchain-ai/langchain/issues/24740#issuecomment-2254168302)
that there was still a flaw due to `with` blocks that close the client
after each call. I removed these `with` blocks from the `async_client`,
following the same pattern as the sync `client`.

In order to close the connections, a `__del__` method is included to
gently close the clients once the vectorstore object is destroyed.

**Issue:** #24740 and #24064
**Dependencies:** No new dependencies for this change

**Example notebook:** I created a notebook just to test that the changes
work and give the same results as the synchronous methods for vector and
hybrid search. With these changes, the asynchronous methods in the
retriever work as well.

![image](https://github.com/user-attachments/assets/697e431b-9d7f-4d0d-b205-59d051ac2b67)


**Lint and test**: Passes the tests and the linter
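
An illustrative sketch of the async-iteration pattern the fix relies on (not the vectorstore's exact code), assuming an `azure-search-documents` async `SearchClient`:

```python
async def collect_results(async_client, query: str) -> list:
    results = await async_client.search(search_text=query)
    docs = []
    async for item in results:  # AsyncSearchItemPaged must be consumed with `async for`
        docs.append(item)
    return docs
```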
2024-08-13 14:20:51 -07:00
Isaac Francisco
6bc451b942 [docs]: merge tool/toolkit duplicates (#25197) 2024-08-13 12:19:17 -07:00
Fedor Nikolaev
2b15518c5f community: add args_schema to SearxSearchResults tool (#25350)
This adds `args_schema` member to `SearxSearchResults` tool. This member
is already present in the `SearxSearchRun` tool in the same file.

I was getting `TypeError: Type is not JSON serializable:
AsyncCallbackManagerForToolRun` thrown in the langserve playground when I
was using the `SearxSearchResults` tool as part of a chain there. This
fixes the issue, so the error is no longer raised.

This is an example langserve app that was giving me the error, but it
works properly after the proposed fix:
```python
#!/usr/bin/env python

from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
from langchain_community.utilities import SearxSearchWrapper
from langchain_community.tools.searx_search.tool import SearxSearchResults
from langserve import add_routes

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()

s = SearxSearchWrapper(searx_host="http://localhost:8080")

search = SearxSearchResults(wrapper=s)

search_chain = (
    {"context": search, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

app = FastAPI()

add_routes(
    app,
    search_chain,
    path="/chain",
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
```
2024-08-13 18:26:09 +00:00
Matt Kandler
b6df3405fb docs: Fix broken link to Runhouse documentation (#25349)
- **Description:** Runhouse recently migrated from Read the Docs to a
self-hosted solution. This PR updates a broken link from the old docs to
www.run.house/docs. Also changed "The Runhouse" to "Runhouse" (it's
cleaner).
- **Issue:** None
- **Dependencies:** None
2024-08-13 18:18:19 +00:00
maang-h
089f5e6cad Standardize SparkLLM (#25239)
- **Description:** Standardize SparkLLM, including:
  - docs, for issue #24803
  - streaming support
  - an updated api url
  - model init arg names, for issue #20085
2024-08-13 09:50:12 -04:00
Leonid Ganeline
35e2230f56 docs: integrationsreferences update (#25322)
Added missing provider pages. Fixed formatting and added descriptions and
links.
2024-08-13 09:29:51 -04:00
Chen Xiabin
24155aa1ac qianfan generate/agenerate with usage_metadata (#25332) 2024-08-13 09:24:41 -04:00
Christophe Bornet
ebbe609193 Add README for astradb package (#25345)
Similar to
https://github.com/langchain-ai/langchain/blob/master/libs/partners/ibm/README.md
2024-08-13 09:17:23 -04:00
Eugene Yurtsev
f679ed72ca ollama[patch]: Update API Reference for ollama embeddings (#25315)
Update API reference for OllamaEmbeddings
Issue: https://github.com/langchain-ai/langchain/issues/24856
2024-08-12 21:31:48 -04:00
Erick Friis
2907ab2297 community: release 0.2.12 (#25324) 2024-08-12 23:30:27 +00:00
Erick Friis
06f8bd9946 langchain: release 0.2.13 (#25323) 2024-08-12 22:24:06 +00:00
Erick Friis
252f0877d1 core: release 0.2.30 (#25321) 2024-08-12 22:01:24 +00:00
Eugene Yurtsev
217a915b29 openai: Update API Reference docs for AzureOpenAI Embeddings (#25312)
Update AzureOpenAI Embeddings docs
2024-08-12 19:41:18 +00:00
Eugene Yurtsev
056c7c2983 core[patch]: Update API reference for fake embeddings (#25313)
Issue: https://github.com/langchain-ai/langchain/issues/24856

Using the same template for the fake embeddings in langchain_core as
used in the integrations.
2024-08-12 19:40:05 +00:00
Ben Chambers
1adc161642 community: kwargs for CassandraGraphVectorStore (#25300)
- **Description:** pass kwargs from CassandraGraphVectorStore to
underlying store

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-12 18:01:29 +00:00
Hassan-Memon
deb27d8970 docs: remove unused imports in Conversational RAG tutorial (#25297)
Cleaned up the "Tying it Together" section of the Conversational RAG
tutorial by removing imports that were not used. This reduces confusion
and makes the code more concise.

2024-08-12 13:49:55 -04:00
gbaian10
5efd0fe9ae docs: Change SqliteSaver to MemorySaver (#25306)
fix: #25137

`SqliteSaver.from_conn_string()` has been changed to a `contextmanager`
method in `langgraph >= 0.2.0`, so the original usage is no longer
applicable.

Following the modification method in
<https://github.com/langchain-ai/langgraph/pull/1271#issue-2454736415>,
replace `SqliteSaver` with `MemorySaver`.
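
A minimal sketch of the replacement, assuming a compiled LangGraph `workflow` (a `StateGraph`) is already defined elsewhere:

```python
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
app = workflow.compile(checkpointer=memory)  # instead of SqliteSaver.from_conn_string(":memory:")
```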
2024-08-12 13:45:32 -04:00
Eugene Yurtsev
1c9917dfa2 fireworks[patch]: Fix doc-string for API Reference (#25304) 2024-08-12 17:16:13 +00:00
Eugene Yurtsev
ccff1ba8b8 ai21[patch]: Update API reference documentation (#25302)
Issue: https://github.com/langchain-ai/langchain/issues/24856
2024-08-12 13:15:27 -04:00
Eugene Yurtsev
53ee5770d3 fireworks: Add APIReference for the FireworksEmbeddings model (#25292)
Add API Reference documentation for the FireworksEmbedding model.

Issue: https://github.com/langchain-ai/langchain/issues/24856
2024-08-12 13:13:43 -04:00
Eugene Yurtsev
8626abf8b5 togetherai[patch]: Update API Reference for together AI embeddings model (#25295)
Issue: https://github.com/langchain-ai/langchain/issues/24856
2024-08-12 17:12:28 +00:00
Eugene Yurtsev
1af8456a2c mistralai[patch]: Docs Update APIReference for MistralAIEmbeddings (#25294)
Update API Reference for MistralAI embeddings

Issue: https://github.com/langchain-ai/langchain/issues/24856
2024-08-12 15:25:37 +00:00
Eugene Yurtsev
0a3500808d openai[patch]: Docs fix RST formatting in OpenAIEmbeddings (#25293) 2024-08-12 11:24:35 -04:00
Eugene Yurtsev
ee8a585791 openai[patch]: Add API Reference docs to OpenAIEmbeddings (#25290)
Issue: [24856](https://github.com/langchain-ai/langchain/issues/24856)
2024-08-12 14:53:51 +00:00
ccurme
e77eeee6ee core[patch]: add standard tracing params for retrievers (#25240) 2024-08-12 14:51:59 +00:00
Mohammad Mohtashim
9927a4866d [Community] - Added bind_tools and with_structured_output for ChatZhipuAI (#23887)
- **Description:** This PR implements the `bind_tools` functionality for
ChatZhipuAI, as requested by the user. ChatZhipuAI models support tool
calling according to the `OpenAI` tool format, as outlined in their
official documentation [here](https://open.bigmodel.cn/dev/api#glm-4).
- **Issue:** #23868
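
A minimal sketch of the new capability; the tool and model name are illustrative, and a `ZHIPUAI_API_KEY` is assumed to be set in the environment:

```python
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"Sunny in {city}"

llm = ChatZhipuAI(model="glm-4")
llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What's the weather in Beijing?")
print(response.tool_calls)
```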

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-12 14:11:43 +00:00
Hassan-Memon
420534c8ca docs: Replaced SqliteSaver with MemorySaver and updated installation instru… (#25285)
…ctions to match LangGraph v2 documentation. Corrected code snippet to
prevent validation errors.

- **Description:** Updated the Conversational RAG tutorial to correct
the checkpointer example by replacing `SqliteSaver` with `MemorySaver`.
Added installation instructions for `langgraph-checkpoint-memory` to
match LangGraph v2 documentation and prevent validation errors.
    - **Issue:** N/A
    - **Dependencies:** `langgraph-checkpoint-memory`
    - **Twitter handle:** N/A

- [ ] **Add tests and docs**: 
  1. No new integration tests are required.
  2. Updated documentation in the Conversational RAG tutorial.

2024-08-12 09:24:51 -04:00
Yunus Emre Özdemir
794f28d4e2 docs: document upstash vector namespaces (#25289)
**Description:** This PR rearranges the examples in Upstash Vector
integration documentation to describe how to use namespaces and improve
the description of metadata filtering.
2024-08-12 09:17:11 -04:00
JasonJ
f28ae20b81 docs: pip install bug fixed (#25287)
- **Description:** Fixing package install bug in cookbook
- **Issue:** zsh:1: no matches found: unstructured[all-docs]
- **Dependencies:** N/A
2024-08-12 05:12:44 +00:00
Soichi Sumi
9f0eda6a18 docs: Fix link for API reference of Gmail Toolkit (#25286)
- **Description:** Fix link for API reference of Gmail Toolkit
- **Issue:** I've just found this issue while I'm reading the doc
- **Dependencies:** N/A
- **Twitter handle:** [@soichisumi](https://x.com/soichisumi)

2024-08-12 05:12:31 +00:00
Anush
472527166f qdrant: Update API reference link and install command (#25245)
## Description

As the title says. The current API reference links to the deprecated
class.
2024-08-11 16:54:14 -04:00
Aryan Singh
074fa0db73 docs: Fixed grammar error in functions.ipynb (#25255)
**Description**: Grammar error in functions.ipynb
**Issue**: #25222
2024-08-11 20:53:27 +00:00
gbaian10
4fd1efc48f docs: update "Build an Agent" Installation Hint in agents.ipynb (#25263)
fix #25257
2024-08-11 16:51:34 -04:00
gbaian10
aa2722cbe2 docs: update numbering of items in docstring (#25267)
A problem similar to #25093.

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-11 20:50:24 +00:00
Maddy Adams
a82c0533f2 langchain: default to langsmith sdk for pulling prompts, fallback to langchainhub (#24156)
**Description:** Deprecating langchainhub, replacing with langsmith sdk
2024-08-11 13:30:52 -07:00
maang-h
bc60cddc1b docs: Fix ChatBaichuan, QianfanChatEndpoint, ChatSparkLLM, ChatZhipuAI docs (#25265)
- **Description:** Fix some chat model docs, including:
  - ChatBaichuan
  - QianfanChatEndpoint
  - ChatSparkLLM
  - ChatZhipuAI
2024-08-11 16:23:55 -04:00
ZhangShenao
43deed2a95 Improvement[Embeddings] Add dimension support to ZhipuAIEmbeddings (#25274)
- In the `embedding-3` and later models of Zhipu AI, specifying the
`dimensions` parameter of the embedding is supported. Ref:
https://bigmodel.cn/dev/api#text_embedding-3.
- Add a test case for the `embedding-3` model by assigning dimensions.
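
A minimal sketch based on the description above; the dimension value is illustrative, and a `ZHIPUAI_API_KEY` is assumed to be set in the environment:

```python
from langchain_community.embeddings import ZhipuAIEmbeddings

embeddings = ZhipuAIEmbeddings(
    model="embedding-3",
    dimensions=1024,  # only supported by embedding-3 and later models
)
vector = embeddings.embed_query("hello")
```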
2024-08-11 16:20:37 -04:00
maang-h
9cd608efb3 docs: Standardize OpenAI Docs (#25280)
- **Description:** Standardize OpenAI Docs
- **Issue:** the issue #24803

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-11 20:20:16 +00:00
Bagatur
fd546196ef openai[patch]: Release 0.1.21 (#25269) 2024-08-10 16:37:31 -07:00
Eugene Yurtsev
6dd9f053e3 core[patch]: Deprecating beta upsert APIs in vectorstore (#25069)
This PR deprecates the beta upsert APIs in vectorstore.

We'll introduce them in a V2 abstraction instead to keep the existing
vectorstore implementations lighter weight.

The main problem with the existing APIs is that it's a bit more
challenging to implement the correct behavior with respect to IDs, since
the ID can be present both in the function signature and as an optional
attribute on the document object.

But VectorStores that pass the standard tests should have implemented
the semantics properly!

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-09 17:17:36 -04:00
Bagatur
ca9dcee940 standard-tests[patch]: test ToolMessage.status="error" (#25210) 2024-08-09 13:00:14 -07:00
Eugene Yurtsev
dadb6f1445 cli[patch]: Update integration template for embedding models (#25248)
Update integration template for embedding models
2024-08-09 14:28:57 -04:00
Eugene Yurtsev
b6f0174bb9 community[patch],core[patch]: Update EdenaiTool root_validator and add unit test in core (#25233)
This PR gets rid of the `root_validators(allow_reuse=True)` logic used in
the EdenAI Tool, in preparation for the pydantic 2 upgrade.
- add another test to `secret_from_env_factory`
2024-08-09 15:59:27 +00:00
blueoom
c3ced4c6ce core[patch]: use time.monotonic() instead of time.time() in InMemoryRateLimiter
**Description:**

The time-point lookup in the `_consume()` method of
`core.rate_limiters.InMemoryRateLimiter` uses `time.time()`, which can be
affected by the system clock moving backwards. Therefore, it is
recommended to use the monotonically increasing `time.monotonic()` to
obtain the time:

```python
        with self._consume_lock:
            now = time.time()  # time.time() -> time.monotonic()

            # initialize on first call to avoid a burst
            if self.last is None:
                self.last = now

            elapsed = now - self.last  # with time.time(), elapsed may be negative if the system clock moves backwards

```
2024-08-09 11:31:20 -04:00
Eugene Yurtsev
bd6c31617e community[patch]: Remove more @allow_reuse=True validators (#25236)
Remove some additional allow_reuse=True usage in @root_validators.
2024-08-09 11:10:27 -04:00
Eugene Yurtsev
6e57aa7c36 community[patch]: Remove usage of @root_validator(allow_reuse=True) (#25235)
Remove usage of @root_validator(allow_reuse=True)
2024-08-09 10:57:42 -04:00
thiswillbeyourgithub
a2b4c33bd6 community[patch]: FAISS: ValueError mentions normalize_score_fn instead of relevance_score_fn (#25225)
- **Description:** when faiss.py has a `None` relevance_score_fn, it
raises a ValueError that says a `normalize_fn_score` argument is needed.

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-09 14:40:29 +00:00
ccurme
4825dc0d76 langchain[patch]: add deprecations (#24792) 2024-08-09 10:34:43 -04:00
ccurme
02300471be langchain[patch]: extended-tests: drop logprobs from OAI expected config (#25234)
Following https://github.com/langchain-ai/langchain/pull/25229
2024-08-09 14:23:11 +00:00
Shivendra Soni
66b7206ab6 community: Add llm-extraction option to FireCrawl Document Loader (#25231)
**Description:** This minor PR aims to add `llm_extraction` to the
Firecrawl loader. This feature is supported by the API and Python SDK,
but the langchain loader omits adding it to the response.
**Twitter handle:** [scalable_pizza](https://x.com/scalablepizza)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-09 13:59:10 +00:00
blaufink
c81c77b465 partners: fix for issue #24880 (#25229)
- Description: As described in the related issue: there is an error
occurring when using langchain-openai>=0.1.17, which can be attributed to
the following PR: #23691
Here, the parameter logprobs is added to requests by default.
However, AzureOpenAI takes issue with this parameter, as stated here:
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chatgpt?tabs=python-new&pivots=programming-language-chat-completions
-> "If you set any of these parameters, you get an error."
Therefore, this PR changes the default value of the logprobs parameter to
None instead of False, which results in it being filtered out before the
request is sent.
- Issue: #24880
- Dependencies: /

Co-authored-by: blaufink <sebastian.brueckner@outlook.de>
2024-08-09 13:21:37 +00:00
ccurme
3b7437d184 docs: update integration api refs (#25195)
- [x] toolkits
- [x] retrievers (in this repo)
2024-08-09 12:27:32 +00:00
8932 changed files with 295455 additions and 1011826 deletions


@@ -5,26 +5,31 @@ This project includes a [dev container](https://containers.dev/), which lets you
You can use the dev container configuration in this folder to build and run the app without needing to install any of its tools locally! You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).
## GitHub Codespaces
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
You may use the button above, or follow these steps to open this repo in a Codespace:
1. Click the **Code** drop-down menu at the top of https://github.com/langchain-ai/langchain.
1. Click the **Code** drop-down menu at the top of <https://github.com/langchain-ai/langchain>.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master**.
For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).
## VS Code Dev Containers
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
Note: If you click the link above you will open the main repo (langchain-ai/langchain) and not your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
```
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>
> [!NOTE]
> If you click the link above you will open the main repo (`langchain-ai/langchain`) and *not* your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
```txt
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<YOUR_USERNAME>/<YOUR_CLONED_REPO_NAME>
```
Then you will have a local cloned repo where you can contribute and then create pull requests.
If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
If you already have VS Code and Docker installed, you can use the button above to get started. This will use VSCode to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
Alternatively you can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:
@@ -40,5 +45,5 @@ You can learn more in the [Dev Containers documentation](https://code.visualstud
## Tips and tricks
* If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.
- If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
- If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.

View File

@@ -1,36 +1,58 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-docker-compose
{
// Name for the dev container
"name": "langchain",
// Point to a Docker Compose file
"dockerComposeFile": "./docker-compose.yaml",
// Required when using Docker Compose. The name of the service to connect to once running
"service": "langchain",
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/langchain",
// Prevent the container from shutting down
"overrideCommand": true
// Features to add to the dev container. More info: https://containers.dev/features
// "features": {
// "ghcr.io/devcontainers-contrib/features/poetry:2": {}
// }
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Uncomment the next line to run commands after the container is created.
// "postCreateCommand": "cat /etc/os-release",
// Configure tool-specific properties.
// "customizations": {},
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
// Name for the dev container
"name": "langchain",
// Point to a Docker Compose file
"dockerComposeFile": "./docker-compose.yaml",
// Required when using Docker Compose. The name of the service to connect to once running
"service": "langchain",
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/langchain",
"mounts": [
"source=langchain-workspaces,target=/workspaces/langchain,type=volume"
],
// Prevent the container from shutting down
"overrideCommand": true,
// Features to add to the dev container. More info: https://containers.dev/features
"features": {
"ghcr.io/devcontainers/features/git:1": {},
"ghcr.io/devcontainers/features/github-cli:1": {}
},
"containerEnv": {
"UV_LINK_MODE": "copy"
},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Run commands after the container is created
"postCreateCommand": "uv sync && echo 'LangChain (Python) dev environment ready!'",
// Configure tool-specific properties.
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-python.debugpy",
"ms-python.mypy-type-checker",
"ms-python.isort",
"unifiedjs.vscode-mdx",
"davidanson.vscode-markdownlint",
"ms-toolsai.jupyter",
"GitHub.copilot",
"GitHub.copilot-chat"
],
"settings": {
"python.defaultInterpreterPath": ".venv/bin/python",
"python.formatting.provider": "none",
"[python]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports": true
}
}
}
}
}
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
}

View File

@@ -4,26 +4,9 @@ services:
build:
dockerfile: libs/langchain/dev.Dockerfile
context: ..
volumes:
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/workspaces/langchain:cached
networks:
- langchain-network
# environment:
# MONGO_ROOT_USERNAME: root
# MONGO_ROOT_PASSWORD: example123
# depends_on:
# - mongo
# mongo:
# image: mongo
# restart: unless-stopped
# environment:
# MONGO_INITDB_ROOT_USERNAME: root
# MONGO_INITDB_ROOT_PASSWORD: example123
# ports:
# - "27017:27017"
# networks:
# - langchain-network
networks:
langchain-network:

52
.editorconfig Normal file
View File

@@ -0,0 +1,52 @@
# top-most EditorConfig file
root = true
# All files
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
# Python files
[*.py]
indent_style = space
indent_size = 4
max_line_length = 88
# JSON files
[*.json]
indent_style = space
indent_size = 2
# YAML files
[*.{yml,yaml}]
indent_style = space
indent_size = 2
# Markdown files
[*.md]
indent_style = space
indent_size = 2
trim_trailing_whitespace = false
# Configuration files
[*.{toml,ini,cfg}]
indent_style = space
indent_size = 4
# Shell scripts
[*.sh]
indent_style = space
indent_size = 2
# Makefile
[Makefile]
indent_style = tab
indent_size = 4
# Jupyter notebooks
[*.ipynb]
# Jupyter may include trailing whitespace in cell
# outputs that's semantically meaningful
trim_trailing_whitespace = false

3
.github/CODEOWNERS vendored Normal file
View File

@@ -0,0 +1,3 @@
/.github/ @baskaryan @ccurme @eyurtsev
/libs/core/ @eyurtsev
/libs/partners/ @ccurme @mdrxy

View File

@@ -129,4 +129,4 @@ For answers to common questions about this code of conduct, see the FAQ at
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
[translations]: https://www.contributor-covenant.org/translations

View File

@@ -3,4 +3,4 @@
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
To learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/).
To learn how to contribute to LangChain, please follow the [contribution guide here](https://docs.langchain.com/oss/python/contributing).

View File

@@ -1,38 +0,0 @@
labels: [idea]
body:
- type: checkboxes
id: checks
attributes:
label: Checked
description: Please confirm and check all the following options.
options:
- label: I searched existing ideas and did not find a similar one
required: true
- label: I added a very descriptive title
required: true
- label: I've clearly described the feature request and motivation for it
required: true
- type: textarea
id: feature-request
validations:
required: true
attributes:
label: Feature request
description: |
A clear and concise description of the feature proposal. Please provide links to any relevant GitHub repos, papers, or other resources if relevant.
- type: textarea
id: motivation
validations:
required: true
attributes:
label: Motivation
description: |
Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too.
- type: textarea
id: proposal
validations:
required: false
attributes:
label: Proposal (If applicable)
description: |
If you would like to propose a solution, please describe it here.

View File

@@ -1,122 +0,0 @@
labels: [Question]
body:
- type: markdown
attributes:
value: |
Thanks for your interest in LangChain 🦜️🔗!
Please follow these instructions, fill every question, and do every step. 🙏
We're asking for this because answering questions and solving problems in GitHub takes a lot of time --
this is time that we cannot spend on adding new features, fixing bugs, writing documentation or reviewing pull requests.
By asking questions in a structured way (following this) it will be much easier for us to help you.
There's a high chance that by following this process, you'll find the solution on your own, eliminating the need to submit a question and wait for an answer. 😎
As there are many questions submitted every day, we will **DISCARD** and close the incomplete ones.
That will allow us (and others) to focus on helping people like you that follow the whole process. 🤓
Relevant links to check before opening a question to see if your question has already been answered, fixed or
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://api.python.langchain.com/en/stable/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this question.
required: true
- label: I searched the LangChain documentation with the integrated search.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- type: checkboxes
id: help
attributes:
label: Commit to Help
description: |
After submitting this, I commit to one of:
* Read open questions until I find 2 where I can help someone and add a comment to help there.
* I already hit the "watch" button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.
* Once my question is answered, I will mark the answer as "accepted".
options:
- label: I commit to help with one of those options 👆
required: true
- type: textarea
id: example
attributes:
label: Example Code
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
**Important!**
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
from langchain_core.runnables import RunnableLambda
def bad_code(inputs) -> int:
raise NotImplementedError('For demo purpose')
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
render: python
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
What is the problem, question, or error?
Write a short description explaining what you are doing, what you expect to happen, and what is currently happening.
placeholder: |
* I'm trying to use the `langchain` library to do X.
* I expect to see Y.
* Instead, it does Z.
validations:
required: true
- type: textarea
id: system-info
attributes:
label: System Info
description: |
Please share your system info with us.
"pip freeze | grep langchain"
platform (windows / linux / mac)
python version
OR if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
placeholder: |
"pip freeze | grep langchain"
platform
python version
Alternatively, if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
These will only surface LangChain packages; don't forget to include any other relevant
packages you're using (if you're not sure what's relevant, you can paste the entire output of `pip freeze`).
validations:
required: true

View File

@@ -1,35 +1,31 @@
name: "\U0001F41B Bug Report"
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the GitHub Discussions.
labels: ["02 Bug Report"]
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the LangChain forum.
labels: ["bug"]
type: bug
body:
- type: markdown
attributes:
value: >
Thank you for taking the time to file a bug report.
Use this to report bugs in LangChain.
If you're not certain that your issue is due to a bug in LangChain, please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions)
to ask for help with your issue.
Relevant links to check before filing a bug report to see if your issue has already been reported, fixed or
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://api.python.langchain.com/en/stable/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
value: |
Thank you for taking the time to file a bug report.
For usage questions, feature requests and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Check these before submitting to see if your issue has already been reported, fixed or if there's another way to solve your problem:
* [Documentation](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference Documentation](https://reference.langchain.com/python/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
* [LangChain Forum](https://forum.langchain.com/),
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this issue.
- label: This is a bug, not a usage question.
required: true
- label: I searched the LangChain documentation with the integrated search.
- label: I added a clear and descriptive title that summarizes this issue.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
@@ -37,46 +33,78 @@ body:
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- label: This is not related to the langchain-community package.
required: true
- label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
required: true
- type: checkboxes
id: package
attributes:
label: Package (Required)
description: |
Which `langchain` package(s) is this bug related to? Select at least one.
Note that if the package you are reporting for is not listed here, it is not in this repository (e.g. `langchain-google-genai` is in [`langchain-ai/langchain-google`](https://github.com/langchain-ai/langchain-google/)).
Please report issues for other packages to their respective repositories.
options:
- label: langchain
- label: langchain-openai
- label: langchain-anthropic
- label: langchain-classic
- label: langchain-core
- label: langchain-cli
- label: langchain-model-profiles
- label: langchain-tests
- label: langchain-text-splitters
- label: langchain-chroma
- label: langchain-deepseek
- label: langchain-exa
- label: langchain-fireworks
- label: langchain-groq
- label: langchain-huggingface
- label: langchain-mistralai
- label: langchain-nomic
- label: langchain-ollama
- label: langchain-perplexity
- label: langchain-prompty
- label: langchain-qdrant
- label: langchain-xai
- label: Other / not sure / general
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Example Code
label: Example Code (Python)
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
**Important!**
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
**Important!**
* Avoid screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
* Reduce your code to the minimum required to reproduce the issue if possible.
(This will be automatically formatted into code, so no need for backticks.)
render: python
placeholder: |
The following code:
```python
from langchain_core.runnables import RunnableLambda
def bad_code(inputs) -> int:
raise NotImplementedError('For demo purpose')
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
```
- type: textarea
id: error
validations:
required: false
attributes:
label: Error Message and Stack Trace (if applicable)
description: |
If you are reporting an error, please include the full error message and stack trace.
placeholder: |
Exception + full stack trace
If you are reporting an error, please copy and paste the full error message and
stack trace.
(This will be automatically formatted into code, so no need for backticks.)
render: shell
- type: textarea
id: description
attributes:
@@ -96,25 +124,19 @@ body:
attributes:
label: System Info
description: |
Please share your system info with us.
"pip freeze | grep langchain"
platform (windows / linux / mac)
python version
OR if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
Please share your system info with us.
Run the following command in your terminal and paste the output here:
`python -m langchain_core.sys_info`
or if you have an existing python interpreter running:
```python
from langchain_core import sys_info
sys_info.print_sys_info()
```
placeholder: |
"pip freeze | grep langchain"
platform
python version
Alternatively, if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
These will only surface LangChain packages; don't forget to include any other relevant
packages you're using (if you're not sure what's relevant, you can paste the entire output of `pip freeze`).
validations:
required: true

View File

@@ -1,12 +1,18 @@
blank_issues_enabled: false
version: 2.1
contact_links:
- name: 🤔 Question or Problem
about: Ask a question or ask about a problem in GitHub Discussions.
url: https://www.github.com/langchain-ai/langchain/discussions/categories/q-a
- name: Feature Request
url: https://www.github.com/langchain-ai/langchain/discussions/categories/ideas
about: Suggest a feature or an idea
- name: Show and tell
about: Show what you built with LangChain
url: https://www.github.com/langchain-ai/langchain/discussions/categories/show-and-tell
- name: 📚 Documentation issue
url: https://github.com/langchain-ai/docs/issues/new?template=01-langchain.yml
about: Report an issue related to the LangChain documentation
- name: 💬 LangChain Forum
url: https://forum.langchain.com/
about: General community discussions and support
- name: 📚 LangChain Documentation
url: https://docs.langchain.com/oss/python/langchain/overview
about: View the official LangChain documentation
- name: 📚 API Reference Documentation
url: https://reference.langchain.com/python/
about: View the official LangChain API reference documentation
- name: 💬 LangChain Forum
url: https://forum.langchain.com/
about: Ask questions and get help from the community

View File

@@ -1,58 +0,0 @@
name: Documentation
description: Report an issue related to the LangChain documentation.
title: "DOC: <Please write a comprehensive title after the 'DOC: ' prefix>"
labels: [03 - Documentation]
body:
- type: markdown
attributes:
value: >
Thank you for taking the time to report an issue in the documentation.
Only report issues with documentation here, explain if there are
any missing topics or if you found a mistake in the documentation.
Do **NOT** use this to ask usage questions or to report issues with your code.
If you have usage questions or need help solving some problem,
please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions).
If you're in the wrong place, here are some helpful links to find a better
place to ask your question:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://api.python.langchain.com/en/stable/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: input
id: url
attributes:
label: URL
description: URL to documentation
validations:
required: false
- type: checkboxes
id: checks
attributes:
label: Checklist
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I included a link to the documentation page I am referring to (if applicable).
required: true
- type: textarea
attributes:
label: "Issue with current documentation:"
description: >
Please make sure to leave a reference to the document/code you're
referring to. Feel free to include names of classes, functions, methods
or concepts you'd like to see documented more.
- type: textarea
attributes:
label: "Idea or request for content:"
description: >
Please describe as clearly as possible what topics you think are missing
from the current documentation.

View File

@@ -0,0 +1,152 @@
name: "✨ Feature Request"
description: Request a new feature or enhancement for LangChain. For questions, please use the LangChain forum.
labels: ["feature request"]
type: feature
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to request a new feature.
Use this to request NEW FEATURES or ENHANCEMENTS in LangChain. For bug reports, please use the bug report template. For usage questions and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Relevant links to check before filing a feature request to see if your request has already been made or
if there's another way to achieve what you want:
* [Documentation](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference Documentation](https://reference.langchain.com/python/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
* [LangChain Forum](https://forum.langchain.com/),
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: This is a feature request, not a bug report or usage question.
required: true
- label: I added a clear and descriptive title that summarizes the feature request.
required: true
- label: I used the GitHub search to find a similar feature request and didn't find it.
required: true
- label: I checked the LangChain documentation and API reference to see if this feature already exists.
required: true
- label: This is not related to the langchain-community package.
required: true
- type: checkboxes
id: package
attributes:
label: Package (Required)
description: |
Which `langchain` package(s) is this request related to? Select at least one.
Note that if the package you are requesting a feature for is not listed here, it is not in this repository (e.g. `langchain-google-genai` is in [`langchain-ai/langchain-google`](https://github.com/langchain-ai/langchain-google/)).
Please submit feature requests for other packages to their respective repositories.
options:
- label: langchain
- label: langchain-openai
- label: langchain-anthropic
- label: langchain-classic
- label: langchain-core
- label: langchain-cli
- label: langchain-model-profiles
- label: langchain-tests
- label: langchain-text-splitters
- label: langchain-chroma
- label: langchain-deepseek
- label: langchain-exa
- label: langchain-fireworks
- label: langchain-groq
- label: langchain-huggingface
- label: langchain-mistralai
- label: langchain-nomic
- label: langchain-ollama
- label: langchain-perplexity
- label: langchain-prompty
- label: langchain-qdrant
- label: langchain-xai
- label: Other / not sure / general
- type: textarea
id: feature-description
validations:
required: true
attributes:
label: Feature Description
description: |
Please provide a clear and concise description of the feature you would like to see added to LangChain.
What specific functionality are you requesting? Be as detailed as possible.
placeholder: |
I would like LangChain to support...
This feature would allow users to...
- type: textarea
id: use-case
validations:
required: true
attributes:
label: Use Case
description: |
Describe the specific use case or problem this feature would solve.
Why do you need this feature? What problem does it solve for you or other users?
placeholder: |
I'm trying to build an application that...
Currently, I have to work around this by...
This feature would help me/users to...
- type: textarea
id: proposed-solution
validations:
required: false
attributes:
label: Proposed Solution
description: |
If you have ideas about how this feature could be implemented, please describe them here.
This is optional but can be helpful for maintainers to understand your vision.
placeholder: |
I think this could be implemented by...
The API could look like...
```python
# Example of how the feature might work
```
- type: textarea
id: alternatives
validations:
required: false
attributes:
label: Alternatives Considered
description: |
Have you considered any alternative solutions or workarounds?
What other approaches have you tried or considered?
placeholder: |
I've tried using...
Alternative approaches I considered:
1. ...
2. ...
But these don't work because...
- type: textarea
id: additional-context
validations:
required: false
attributes:
label: Additional Context
description: |
Add any other context, screenshots, examples, or references that would help explain your feature request.
placeholder: |
Related issues: #...
Similar features in other libraries:
- ...
Additional context or examples:
- ...

View File

@@ -4,12 +4,7 @@ body:
- type: markdown
attributes:
value: |
Thanks for your interest in LangChain! 🚀
If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation in a [Question in GitHub Discussions](https://github.com/langchain-ai/langchain/discussions/categories/q-a) instead.
You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
or are a regular contributor to LangChain with previous merged pull requests.
If you are not a LangChain maintainer, employee, or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.
- type: checkboxes
id: privileged
attributes:
@@ -23,3 +18,33 @@ body:
attributes:
label: Issue Content
description: Add the content of the issue here.
- type: checkboxes
id: package
attributes:
label: Package (Required)
description: |
Please select package(s) that this issue is related to.
options:
- label: langchain
- label: langchain-openai
- label: langchain-anthropic
- label: langchain-classic
- label: langchain-core
- label: langchain-cli
- label: langchain-model-profiles
- label: langchain-tests
- label: langchain-text-splitters
- label: langchain-chroma
- label: langchain-deepseek
- label: langchain-exa
- label: langchain-fireworks
- label: langchain-groq
- label: langchain-huggingface
- label: langchain-mistralai
- label: langchain-nomic
- label: langchain-ollama
- label: langchain-perplexity
- label: langchain-prompty
- label: langchain-qdrant
- label: langchain-xai
- label: Other / not sure / general

121
.github/ISSUE_TEMPLATE/task.yml vendored Normal file
View File

@@ -0,0 +1,121 @@
name: "📋 Task"
description: Create a task for project management and tracking by LangChain maintainers. If you are not a maintainer, please use other templates or the forum.
labels: ["task"]
type: task
body:
- type: markdown
attributes:
value: |
Thanks for creating a task to help organize LangChain development.
This template is for **maintainer tasks** such as project management, development planning, refactoring, documentation updates, and other organizational work.
If you are not a LangChain maintainer or were not asked directly by a maintainer to create a task, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead or use the appropriate bug report or feature request templates on the previous page.
- type: checkboxes
id: maintainer
attributes:
label: Maintainer task
description: Confirm that you are allowed to create a task here.
options:
- label: I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create a task here.
required: true
- type: textarea
id: task-description
attributes:
label: Task Description
description: |
Provide a clear and detailed description of the task.
What needs to be done? Be specific about the scope and requirements.
placeholder: |
This task involves...
The goal is to...
Specific requirements:
- ...
- ...
validations:
required: true
- type: textarea
id: acceptance-criteria
attributes:
label: Acceptance Criteria
description: |
Define the criteria that must be met for this task to be considered complete.
What are the specific deliverables or outcomes expected?
placeholder: |
This task will be complete when:
- [ ] ...
- [ ] ...
- [ ] ...
validations:
required: true
- type: textarea
id: context
attributes:
label: Context and Background
description: |
Provide any relevant context, background information, or links to related issues/PRs.
Why is this task needed? What problem does it solve?
placeholder: |
Background:
- ...
Related issues/PRs:
- #...
Additional context:
- ...
validations:
required: false
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: |
List any dependencies or blockers for this task.
Are there other tasks, issues, or external factors that need to be completed first?
placeholder: |
This task depends on:
- [ ] Issue #...
- [ ] PR #...
- [ ] External dependency: ...
Blocked by:
- ...
validations:
required: false
- type: checkboxes
id: package
attributes:
label: Package (Required)
description: |
Please select package(s) that this task is related to.
options:
- label: langchain
- label: langchain-openai
- label: langchain-anthropic
- label: langchain-classic
- label: langchain-core
- label: langchain-cli
- label: langchain-model-profiles
- label: langchain-tests
- label: langchain-text-splitters
- label: langchain-chroma
- label: langchain-deepseek
- label: langchain-exa
- label: langchain-fireworks
- label: langchain-groq
- label: langchain-huggingface
- label: langchain-mistralai
- label: langchain-nomic
- label: langchain-ollama
- label: langchain-perplexity
- label: langchain-prompty
- label: langchain-qdrant
- label: langchain-xai
- label: Other / not sure / general

View File

@@ -1,29 +1,30 @@
Thank you for contributing to LangChain!
(Replace this entire block of text)
- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, experimental, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
- Example: "community: add foobar LLM"
Read the full contributing guidelines: https://docs.langchain.com/oss/python/contributing/overview
Thank you for contributing to LangChain! Follow these steps to have your pull request considered ready for review.
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
- **Description:** a description of the change
- **Issue:** the issue # it fixes, if applicable
- **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!
1. PR title: Should follow the format: TYPE(SCOPE): DESCRIPTION
- Examples:
- fix(anthropic): resolve flag parsing error
- feat(core): add multi-tenant support
- test(openai): update API usage tests
- Allowed TYPE and SCOPE values: https://github.com/langchain-ai/langchain/blob/master/.github/workflows/pr_lint.yml#L15-L33
- [ ] **Add tests and docs**: If you're adding a new integration, please include
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.
2. PR description:
- Write 1-2 sentences summarizing the change.
- If this PR addresses a specific issue, please include "Fixes #ISSUE_NUMBER" in the description to automatically close the issue when the PR is merged.
- If there are any breaking changes, please clearly describe them.
- If this PR depends on another PR being merged first, please include "Depends on #PR_NUMBER" in the description.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/
3. Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified.
- We will not consider a PR unless these three are passing in CI.
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
- We ask that if you use generative AI for your contribution, you include a disclaimer.
- PRs should not touch more than one package unless absolutely necessary.
- Do not update the `uv.lock` files or add dependencies to `pyproject.toml` files (even optional ones) unless you have explicit permission to do so from a maintainer.

View File

@@ -1,7 +0,0 @@
FROM python:3.9
RUN pip install httpx PyGithub "pydantic==2.0.2" pydantic-settings "pyyaml>=5.3.1,<6.0.0"
COPY ./app /app
CMD ["python", "/app/main.py"]

View File

@@ -1,11 +0,0 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/action.yml
name: "Generate LangChain People"
description: "Generate the data for the LangChain People page"
author: "Jacob Lee <jacob@langchain.dev>"
inputs:
token:
description: 'User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}'
required: true
runs:
using: 'docker'
image: 'Dockerfile'

View File

@@ -1,646 +0,0 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/app/main.py
import logging
import subprocess
import sys
from collections import Counter
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Container, Dict, List, Set, Union
import httpx
import yaml
from github import Github
from pydantic import BaseModel, SecretStr
from pydantic_settings import BaseSettings
github_graphql_url = "https://api.github.com/graphql"
questions_category_id = "DIC_kwDOIPDwls4CS6Ve"
# discussions_query = """
# query Q($after: String, $category_id: ID) {
# repository(name: "langchain", owner: "langchain-ai") {
# discussions(first: 100, after: $after, categoryId: $category_id) {
# edges {
# cursor
# node {
# number
# author {
# login
# avatarUrl
# url
# }
# title
# createdAt
# comments(first: 100) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# isAnswer
# replies(first: 10) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# }
# }
# }
# }
# }
# }
# }
# }
# }
# """
# issues_query = """
# query Q($after: String) {
# repository(name: "langchain", owner: "langchain-ai") {
# issues(first: 100, after: $after) {
# edges {
# cursor
# node {
# number
# author {
# login
# avatarUrl
# url
# }
# title
# createdAt
# state
# comments(first: 100) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# }
# }
# }
# }
# }
# }
# }
# """
prs_query = """
query Q($after: String) {
repository(name: "langchain", owner: "langchain-ai") {
pullRequests(first: 100, after: $after, states: MERGED) {
edges {
cursor
node {
changedFiles
additions
deletions
number
labels(first: 100) {
nodes {
name
}
}
author {
login
avatarUrl
url
... on User {
twitterUsername
}
}
title
createdAt
state
reviews(first:100) {
nodes {
author {
login
avatarUrl
url
... on User {
twitterUsername
}
}
state
}
}
}
}
}
}
}
"""
class Author(BaseModel):
login: str
avatarUrl: str
url: str
twitterUsername: Union[str, None] = None
# Issues and Discussions
class CommentsNode(BaseModel):
createdAt: datetime
author: Union[Author, None] = None
class Replies(BaseModel):
nodes: List[CommentsNode]
class DiscussionsCommentsNode(CommentsNode):
replies: Replies
class Comments(BaseModel):
nodes: List[CommentsNode]
class DiscussionsComments(BaseModel):
nodes: List[DiscussionsCommentsNode]
class IssuesNode(BaseModel):
number: int
author: Union[Author, None] = None
title: str
createdAt: datetime
state: str
comments: Comments
class DiscussionsNode(BaseModel):
number: int
author: Union[Author, None] = None
title: str
createdAt: datetime
comments: DiscussionsComments
class IssuesEdge(BaseModel):
cursor: str
node: IssuesNode
class DiscussionsEdge(BaseModel):
cursor: str
node: DiscussionsNode
class Issues(BaseModel):
edges: List[IssuesEdge]
class Discussions(BaseModel):
edges: List[DiscussionsEdge]
class IssuesRepository(BaseModel):
issues: Issues
class DiscussionsRepository(BaseModel):
discussions: Discussions
class IssuesResponseData(BaseModel):
repository: IssuesRepository
class DiscussionsResponseData(BaseModel):
repository: DiscussionsRepository
class IssuesResponse(BaseModel):
data: IssuesResponseData
class DiscussionsResponse(BaseModel):
data: DiscussionsResponseData
# PRs
class LabelNode(BaseModel):
name: str
class Labels(BaseModel):
nodes: List[LabelNode]
class ReviewNode(BaseModel):
author: Union[Author, None] = None
state: str
class Reviews(BaseModel):
nodes: List[ReviewNode]
class PullRequestNode(BaseModel):
number: int
labels: Labels
author: Union[Author, None] = None
changedFiles: int
additions: int
deletions: int
title: str
createdAt: datetime
state: str
reviews: Reviews
# comments: Comments
class PullRequestEdge(BaseModel):
cursor: str
node: PullRequestNode
class PullRequests(BaseModel):
edges: List[PullRequestEdge]
class PRsRepository(BaseModel):
pullRequests: PullRequests
class PRsResponseData(BaseModel):
repository: PRsRepository
class PRsResponse(BaseModel):
data: PRsResponseData
class Settings(BaseSettings):
input_token: SecretStr
github_repository: str
httpx_timeout: int = 30
def get_graphql_response(
*,
settings: Settings,
query: str,
after: Union[str, None] = None,
category_id: Union[str, None] = None,
) -> Dict[str, Any]:
headers = {"Authorization": f"token {settings.input_token.get_secret_value()}"}
# category_id is only used by one query, but GraphQL allows unused variables, so
# keep it here for simplicity
variables = {"after": after, "category_id": category_id}
response = httpx.post(
github_graphql_url,
headers=headers,
timeout=settings.httpx_timeout,
json={"query": query, "variables": variables, "operationName": "Q"},
)
if response.status_code != 200:
logging.error(
f"Response was not 200, after: {after}, category_id: {category_id}"
)
logging.error(response.text)
raise RuntimeError(response.text)
data = response.json()
if "errors" in data:
logging.error(f"Errors in response, after: {after}, category_id: {category_id}")
logging.error(data["errors"])
logging.error(response.text)
raise RuntimeError(response.text)
return data
# def get_graphql_issue_edges(*, settings: Settings, after: Union[str, None] = None):
# data = get_graphql_response(settings=settings, query=issues_query, after=after)
# graphql_response = IssuesResponse.model_validate(data)
# return graphql_response.data.repository.issues.edges
# def get_graphql_question_discussion_edges(
# *,
# settings: Settings,
# after: Union[str, None] = None,
# ):
# data = get_graphql_response(
# settings=settings,
# query=discussions_query,
# after=after,
# category_id=questions_category_id,
# )
# graphql_response = DiscussionsResponse.model_validate(data)
# return graphql_response.data.repository.discussions.edges
def get_graphql_pr_edges(*, settings: Settings, after: Union[str, None] = None):
if after is None:
print("Querying PRs...")
else:
print(f"Querying PRs with cursor {after}...")
data = get_graphql_response(settings=settings, query=prs_query, after=after)
graphql_response = PRsResponse.model_validate(data)
return graphql_response.data.repository.pullRequests.edges
# def get_issues_experts(settings: Settings):
# issue_nodes: List[IssuesNode] = []
# issue_edges = get_graphql_issue_edges(settings=settings)
# while issue_edges:
# for edge in issue_edges:
# issue_nodes.append(edge.node)
# last_edge = issue_edges[-1]
# issue_edges = get_graphql_issue_edges(settings=settings, after=last_edge.cursor)
# commentors = Counter()
# last_month_commentors = Counter()
# authors: Dict[str, Author] = {}
# now = datetime.now(tz=timezone.utc)
# one_month_ago = now - timedelta(days=30)
# for issue in issue_nodes:
# issue_author_name = None
# if issue.author:
# authors[issue.author.login] = issue.author
# issue_author_name = issue.author.login
# issue_commentors = set()
# for comment in issue.comments.nodes:
# if comment.author:
# authors[comment.author.login] = comment.author
# if comment.author.login != issue_author_name:
# issue_commentors.add(comment.author.login)
# for author_name in issue_commentors:
# commentors[author_name] += 1
# if issue.createdAt > one_month_ago:
# last_month_commentors[author_name] += 1
# return commentors, last_month_commentors, authors
# def get_discussions_experts(settings: Settings):
# discussion_nodes: List[DiscussionsNode] = []
# discussion_edges = get_graphql_question_discussion_edges(settings=settings)
# while discussion_edges:
# for discussion_edge in discussion_edges:
# discussion_nodes.append(discussion_edge.node)
# last_edge = discussion_edges[-1]
# discussion_edges = get_graphql_question_discussion_edges(
# settings=settings, after=last_edge.cursor
# )
# commentors = Counter()
# last_month_commentors = Counter()
# authors: Dict[str, Author] = {}
# now = datetime.now(tz=timezone.utc)
# one_month_ago = now - timedelta(days=30)
# for discussion in discussion_nodes:
# discussion_author_name = None
# if discussion.author:
# authors[discussion.author.login] = discussion.author
# discussion_author_name = discussion.author.login
# discussion_commentors = set()
# for comment in discussion.comments.nodes:
# if comment.author:
# authors[comment.author.login] = comment.author
# if comment.author.login != discussion_author_name:
# discussion_commentors.add(comment.author.login)
# for reply in comment.replies.nodes:
# if reply.author:
# authors[reply.author.login] = reply.author
# if reply.author.login != discussion_author_name:
# discussion_commentors.add(reply.author.login)
# for author_name in discussion_commentors:
# commentors[author_name] += 1
# if discussion.createdAt > one_month_ago:
# last_month_commentors[author_name] += 1
# return commentors, last_month_commentors, authors
# def get_experts(settings: Settings):
# (
# discussions_commentors,
# discussions_last_month_commentors,
# discussions_authors,
# ) = get_discussions_experts(settings=settings)
# commentors = discussions_commentors
# last_month_commentors = discussions_last_month_commentors
# authors = {**discussions_authors}
# return commentors, last_month_commentors, authors
def _logistic(x, k):
return x / (x + k)
def get_contributors(settings: Settings):
pr_nodes: List[PullRequestNode] = []
pr_edges = get_graphql_pr_edges(settings=settings)
while pr_edges:
for edge in pr_edges:
pr_nodes.append(edge.node)
last_edge = pr_edges[-1]
pr_edges = get_graphql_pr_edges(settings=settings, after=last_edge.cursor)
contributors = Counter()
contributor_scores = Counter()
recent_contributor_scores = Counter()
reviewers = Counter()
authors: Dict[str, Author] = {}
for pr in pr_nodes:
pr_reviewers: Set[str] = set()
for review in pr.reviews.nodes:
if review.author:
authors[review.author.login] = review.author
pr_reviewers.add(review.author.login)
for reviewer in pr_reviewers:
reviewers[reviewer] += 1
if pr.author:
authors[pr.author.login] = pr.author
contributors[pr.author.login] += 1
files_changed = pr.changedFiles
lines_changed = pr.additions + pr.deletions
score = _logistic(files_changed, 20) + _logistic(lines_changed, 100)
contributor_scores[pr.author.login] += score
three_months_ago = datetime.now(timezone.utc) - timedelta(days=3 * 30)
if pr.createdAt > three_months_ago:
recent_contributor_scores[pr.author.login] += score
return (
contributors,
contributor_scores,
recent_contributor_scores,
reviewers,
authors,
)
def get_top_users(
*,
counter: Counter,
min_count: int,
authors: Dict[str, Author],
skip_users: Container[str],
):
users = []
for commentor, count in counter.most_common():
if commentor in skip_users:
continue
if count >= min_count:
author = authors[commentor]
users.append(
{
"login": commentor,
"count": count,
"avatarUrl": author.avatarUrl,
"twitterUsername": author.twitterUsername,
"url": author.url,
}
)
return users
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
settings = Settings()
logging.info(f"Using config: {settings.model_dump_json()}")
g = Github(settings.input_token.get_secret_value())
repo = g.get_repo(settings.github_repository)
# question_commentors, question_last_month_commentors, question_authors = get_experts(
# settings=settings
# )
(
contributors,
contributor_scores,
recent_contributor_scores,
reviewers,
pr_authors,
) = get_contributors(settings=settings)
# authors = {**question_authors, **pr_authors}
authors = {**pr_authors}
maintainers_logins = {
"hwchase17",
"agola11",
"baskaryan",
"hinthornw",
"nfcampos",
"efriis",
"eyurtsev",
"rlancemartin",
"ccurme",
"vbarda",
}
hidden_logins = {
"dev2049",
"vowelparrot",
"obi1kenobi",
"langchain-infra",
"jacoblee93",
"isahers1",
"dqbd",
"bracesproul",
"akira",
}
bot_names = {"dosubot", "github-actions", "CodiumAI-Agent"}
maintainers = []
for login in maintainers_logins:
user = authors[login]
maintainers.append(
{
"login": login,
"count": contributors[login], # + question_commentors[login],
"avatarUrl": user.avatarUrl,
"twitterUsername": user.twitterUsername,
"url": user.url,
}
)
# min_count_expert = 10
# min_count_last_month = 3
min_score_contributor = 1
min_count_reviewer = 5
skip_users = maintainers_logins | bot_names | hidden_logins
# experts = get_top_users(
# counter=question_commentors,
# min_count=min_count_expert,
# authors=authors,
# skip_users=skip_users,
# )
# last_month_active = get_top_users(
# counter=question_last_month_commentors,
# min_count=min_count_last_month,
# authors=authors,
# skip_users=skip_users,
# )
top_recent_contributors = get_top_users(
counter=recent_contributor_scores,
min_count=min_score_contributor,
authors=authors,
skip_users=skip_users,
)
top_contributors = get_top_users(
counter=contributor_scores,
min_count=min_score_contributor,
authors=authors,
skip_users=skip_users,
)
top_reviewers = get_top_users(
counter=reviewers,
min_count=min_count_reviewer,
authors=authors,
skip_users=skip_users,
)
people = {
"maintainers": maintainers,
# "experts": experts,
# "last_month_active": last_month_active,
"top_recent_contributors": top_recent_contributors,
"top_contributors": top_contributors,
"top_reviewers": top_reviewers,
}
people_path = Path("./docs/data/people.yml")
people_old_content = people_path.read_text(encoding="utf-8")
new_people_content = yaml.dump(
people, sort_keys=False, width=200, allow_unicode=True
)
if people_old_content == new_people_content:
logging.info("The LangChain People data hasn't changed, finishing.")
sys.exit(0)
people_path.write_text(new_people_content, encoding="utf-8")
logging.info("Setting up GitHub Actions git user")
subprocess.run(["git", "config", "user.name", "github-actions"], check=True)
subprocess.run(
["git", "config", "user.email", "github-actions@github.com"], check=True
)
branch_name = "langchain/langchain-people"
logging.info(f"Creating a new branch {branch_name}")
subprocess.run(["git", "checkout", "-B", branch_name], check=True)
logging.info("Adding updated file")
subprocess.run(["git", "add", str(people_path)], check=True)
logging.info("Committing updated file")
message = "👥 Update LangChain people data"
result = subprocess.run(["git", "commit", "-m", message], check=True)
logging.info("Pushing branch")
subprocess.run(["git", "push", "origin", branch_name, "-f"], check=True)
logging.info("Creating PR")
pr = repo.create_pull(title=message, body=message, base="master", head=branch_name)
logging.info(f"Created PR: {pr.number}")
logging.info("Finished")

View File

@@ -1,93 +0,0 @@
# An action for setting up poetry install with caching.
# Using a custom action since the default action does not
# take poetry install groups into account.
# Action code from:
# https://github.com/actions/setup-python/issues/505#issuecomment-1273013236
name: poetry-install-with-caching
description: Poetry install with support for caching of dependency groups.
inputs:
python-version:
description: Python version, supporting MAJOR.MINOR only
required: true
poetry-version:
description: Poetry version
required: true
cache-key:
description: Cache key to use for manual handling of caching
required: true
working-directory:
description: Directory whose poetry.lock file should be cached
required: true
runs:
using: composite
steps:
- uses: actions/setup-python@v5
name: Setup python ${{ inputs.python-version }}
id: setup-python
with:
python-version: ${{ inputs.python-version }}
- uses: actions/cache@v4
id: cache-bin-poetry
name: Cache Poetry binary - Python ${{ inputs.python-version }}
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "1"
with:
path: |
/opt/pipx/venvs/poetry
# This step caches the poetry installation, so make sure it's keyed on the poetry version as well.
key: bin-poetry-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-${{ inputs.poetry-version }}
- name: Refresh shell hashtable and fixup softlinks
if: steps.cache-bin-poetry.outputs.cache-hit == 'true'
shell: bash
env:
POETRY_VERSION: ${{ inputs.poetry-version }}
PYTHON_VERSION: ${{ inputs.python-version }}
run: |
set -eux
# Refresh the shell hashtable, to ensure correct `which` output.
hash -r
# `actions/cache@v3` doesn't always seem able to correctly unpack softlinks.
# Delete and recreate the softlinks pipx expects to have.
rm /opt/pipx/venvs/poetry/bin/python
cd /opt/pipx/venvs/poetry/bin
ln -s "$(which "python$PYTHON_VERSION")" python
chmod +x python
cd /opt/pipx_bin/
ln -s /opt/pipx/venvs/poetry/bin/poetry poetry
chmod +x poetry
# Ensure everything got set up correctly.
/opt/pipx/venvs/poetry/bin/python --version
/opt/pipx_bin/poetry --version
- name: Install poetry
if: steps.cache-bin-poetry.outputs.cache-hit != 'true'
shell: bash
env:
POETRY_VERSION: ${{ inputs.poetry-version }}
PYTHON_VERSION: ${{ inputs.python-version }}
# Install poetry using the python version installed by setup-python step.
run: pipx install "poetry==$POETRY_VERSION" --python '${{ steps.setup-python.outputs.python-path }}' --verbose
- name: Restore pip and poetry cached dependencies
uses: actions/cache@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "4"
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
with:
path: |
~/.cache/pip
~/.cache/pypoetry/virtualenvs
~/.cache/pypoetry/cache
~/.cache/pypoetry/artifacts
${{ env.WORKDIR }}/.venv
key: py-deps-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-poetry-${{ inputs.poetry-version }}-${{ inputs.cache-key }}-${{ hashFiles(format('{0}/**/poetry.lock', env.WORKDIR)) }}

39
.github/actions/uv_setup/action.yml vendored Normal file
View File

@@ -0,0 +1,39 @@
# Helper to set up Python and uv with caching
name: uv-install
description: Set up Python and uv with caching
inputs:
python-version:
description: Python version, supporting MAJOR.MINOR only
required: true
enable-cache:
description: Enable caching for uv dependencies
required: false
default: "true"
cache-suffix:
description: Custom cache key suffix for cache invalidation
required: false
default: ""
working-directory:
description: Working directory for cache glob scoping
required: false
default: "**"
env:
UV_VERSION: "0.5.25"
runs:
using: composite
steps:
- name: Install uv and set the python version
uses: astral-sh/setup-uv@v6
with:
version: ${{ env.UV_VERSION }}
python-version: ${{ inputs.python-version }}
enable-cache: ${{ inputs.enable-cache }}
cache-dependency-glob: |
${{ inputs.working-directory }}/pyproject.toml
${{ inputs.working-directory }}/uv.lock
${{ inputs.working-directory }}/requirements*.txt
cache-suffix: ${{ inputs.cache-suffix }}

.github/copilot-instructions.md vendored Normal file

@@ -0,0 +1,330 @@
# Global Development Guidelines for LangChain Projects
## Core Development Principles
### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL
**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**
**Bad - Breaking Change:**
```python
def get_user(id, verbose=False): # Changed from `user_id`
pass
```
**Good - Stable Interface:**
```python
def get_user(user_id: str, verbose: bool = False) -> User:
"""Retrieve user by ID with optional verbose output."""
pass
```
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring admonitions (using MkDocs Material, like `!!! warning`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
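For instance, a minimal sketch of extending the `get_user` example with a new option without breaking existing callers (the `timeout` parameter is hypothetical):
```python
def get_user(user_id: str, verbose: bool = False, *, timeout: float = 30.0) -> User:
    """Retrieve user by ID with optional verbose output.

    Args:
        user_id: Identifier of the user to fetch.
        verbose: Whether to include extended output.
        timeout: Hypothetical new option; keyword-only, so existing
            positional call sites keep working.
    """
    ...
```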
### 2. Code Quality Standards
**All Python code MUST include type hints and return types.**
**Bad:**
```python
def p(u, d):
return [x for x in u if x not in d]
```
**Good:**
```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
"""Filter out users that are not in the known users set.
Args:
users: List of user identifiers to filter.
known_users: Set of known/valid user identifiers.
Returns:
List of users that are not in the known_users set.
"""
return [user for user in users if user not in known_users]
```
**Style Requirements:**
- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying
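A short sketch of the guideline on breaking up complex functions (all names are illustrative):
```python
def import_records(rows: list[dict]) -> int:
    """Orchestrate the import; each helper stays small and testable."""
    cleaned = _normalize_rows(rows)
    valid = _drop_invalid(cleaned)
    return _store(valid)


def _normalize_rows(rows: list[dict]) -> list[dict]:
    """Lowercase and strip the keys of every row."""
    return [{key.strip().lower(): value for key, value in row.items()} for row in rows]


def _drop_invalid(rows: list[dict]) -> list[dict]:
    """Keep only rows that carry an `id` field."""
    return [row for row in rows if row.get("id")]


def _store(rows: list[dict]) -> int:
    """Persistence is elided in this sketch; report how many rows were kept."""
    return len(rows)
```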
### 3. Testing Requirements
**Every new feature or bugfix MUST be covered by unit tests.**
**Test Organization:**
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework
**Test Quality Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
```python
def test_filter_unknown_users():
"""Test filtering unknown users from a list."""
users = ["alice", "bob", "charlie"]
known_users = {"alice", "bob"}
result = filter_unknown_users(users, known_users)
assert result == ["charlie"]
assert len(result) == 1
```
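Complementing the example above, a sketch of the fixtures/mocks checklist item (the `fetch_profile` function and its client are illustrative):
```python
from unittest.mock import Mock


def fetch_profile(api_client, user_id: str) -> dict:
    """Hypothetical function under test that calls an external API."""
    return api_client.get(f"/users/{user_id}")


def test_fetch_profile_uses_api_client():
    """No network call is made; the client is replaced by a mock."""
    api_client = Mock()
    api_client.get.return_value = {"name": "alice"}

    profile = fetch_profile(api_client, user_id="alice")

    api_client.get.assert_called_once_with("/users/alice")
    assert profile == {"name": "alice"}
```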
### 4. Security and Risk Assessment
**Security Checklist:**
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`); build error messages in a `msg` variable (sketched below)
- Remove unreachable/commented code before committing
- Check for race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)
**Bad:**
```python
def load_config(path):
with open(path) as f:
return eval(f.read()) # ⚠️ Never eval config
```
**Good:**
```python
import json
def load_config(path: str) -> dict:
with open(path) as f:
return json.load(f)
```
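And an illustrative sketch of the `msg` convention from the checklist above (no bare `except:`, message built before raising):
```python
import json


def read_settings(path: str) -> dict:
    """Load settings, raising a clear error instead of failing silently."""
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError) as e:
        msg = f"Could not load settings from {path}: {e}"
        raise ValueError(msg) from e
```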
### 5. Documentation Standards
**Use Google-style docstrings with Args and Returns sections for all public functions.**
**Insufficient Documentation:**
```python
def send_email(to, msg):
"""Send an email to a recipient."""
```
**Complete Documentation:**
```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
"""
Send an email to a recipient with specified priority.
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level.
Returns:
True if email was sent successfully, False otherwise.
Raises:
InvalidEmailError: If the email address format is invalid.
SMTPConnectionError: If unable to connect to email server.
"""
```
**Documentation Guidelines:**
- Types go in function signatures, NOT in docstrings
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
📌 *Tip:* Only document return values if non-obvious.
### 6. Architectural Improvements
**When you encounter code that could be improved, suggest better designs:**
**Poor Design:**
```python
def process_data(data, db_conn, email_client, logger):
# Function doing too many things
validated = validate_data(data)
result = db_conn.save(validated)
email_client.send_notification(result)
logger.log(f"Processed {len(data)} items")
return result
```
**Better Design:**
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessingResult:
"""Result of data processing operation."""
items_processed: int
success: bool
errors: List[str] = field(default_factory=list)
class DataProcessor:
"""Handles data validation, storage, and notification."""
def __init__(self, db_conn: Database, email_client: EmailClient):
self.db = db_conn
self.email = email_client
def process(self, data: List[dict]) -> ProcessingResult:
"""Process and store data with notifications.
Args:
data: List of data items to process.
Returns:
ProcessingResult with details of the operation.
"""
validated = self._validate_data(data)
result = self.db.save(validated)
self._notify_completion(result)
return result
```
**Design Improvement Areas:**
If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:
- Reduce code duplication through shared utilities
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection
- Add clarity without adding complexity
- Prefer dataclasses for structured data
## Development Tools & Commands
### Package Management
```bash
# Add package
uv add package-name
# Sync project dependencies
uv sync
uv lock
```
### Testing
```bash
# Run unit tests (no network)
make test
# Don't run integration tests here; they require API keys to be set
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
### Code Quality
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
### Dependency Management Patterns
**Local Development Dependencies:**
```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```
**For tools, use the `@tool` decorator from `langchain_core.tools`:**
```python
from langchain_core.tools import tool
@tool
def search_database(query: str) -> str:
"""Search the database for relevant information.
Args:
query: The search query string.
"""
    # Implementation here; query the database and return formatted results
    results = f"Results for: {query}"
    return results
```
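As a usage sketch, the decorated function becomes a runnable tool that can be invoked directly (the query string is illustrative):
```python
result = search_database.invoke({"query": "recent orders"})
print(result)  # "Results for: recent orders"
```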
## Commit Standards
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `!fix(cli): resolve flag parsing error` (breaking change uses exclamation mark)
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain_core` for base abstractions
- Implement proper streaming support where applicable
- Avoid deprecated components
### Partner Integrations
- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.)
- Include comprehensive integration tests
- Document API key requirements and authentication
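As a minimal sketch of the standard-interface item above, a toy echo model (not a real provider integration) implementing `BaseChatModel`:
```python
from typing import Any, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatResult


class ChatEcho(BaseChatModel):
    """Toy chat model that echoes the last message back."""

    @property
    def _llm_type(self) -> str:
        return "chat-echo"

    def _generate(
        self,
        messages: list[BaseMessage],
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # A real integration would call the provider API here.
        content = messages[-1].content if messages else ""
        return ChatResult(generations=[ChatGeneration(message=AIMessage(content=content))])
```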
---
## Quick Reference Checklist
Before submitting code changes:
- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format

.github/dependabot.yml vendored Normal file

@@ -0,0 +1,11 @@
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
# and
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"

.github/images/logo-dark.svg vendored Normal file

@@ -0,0 +1,25 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1584.81 250">
<defs>
<style>
.cls-1 {
fill: #1c3c3c;
stroke-width: 0px;
}
</style>
</defs>
<g id="LanChain-logo">
<g id="LangChain-logotype">
<polygon class="cls-1" points="596.33 49.07 596.33 200.67 700.76 200.67 700.76 177.78 620.04 177.78 620.04 49.07 596.33 49.07"/>
<path class="cls-1" d="M1126.83,49.07c-20.53,0-37.95,7.4-50.38,21.41-12.32,13.88-18.82,33.36-18.82,56.33,0,47.23,27.25,77.75,69.41,77.75,29.71,0,52.71-15.54,61.54-41.56l2.14-6.31-23.53-8.94-2.17,7.03c-5.26,17.01-18.75,26.38-37.99,26.38-27.48,0-44.55-20.82-44.55-54.34s17.23-54.34,44.97-54.34c19.23,0,30.31,7.54,35.95,24.44l2.46,7.37,22.91-10.75-2.1-5.9c-8.96-25.22-29.65-38.56-59.85-38.56Z"/>
<path class="cls-1" d="M756.43,85.05c-22.76,0-39.78,10.67-46.69,29.27-.44,1.19-1.77,4.78-1.77,4.78l19.51,12.62,2.65-6.91c4.52-11.78,12.88-17.27,26.3-17.27s21.1,6.51,20.96,19.33c0,.52-.04,2.09-.04,2.09,0,0-17.76,2.88-25.08,4.43-31.23,6.6-44.31,18.52-44.31,38.02,0,10.39,5.77,21.64,16.3,27.95,6.32,3.78,14.57,5.21,23.68,5.21,5.99,0,11.81-.89,17.2-2.53,12.25-4.07,15.67-12.07,15.67-12.07v10.46h20.29v-74.78c0-25.42-16.7-40.6-44.67-40.6ZM777.46,164.85c0,7.86-8.56,18.93-28.5,18.93-5.63,0-9.62-1.49-12.28-3.71-3.56-2.97-4.73-7.24-4.24-11.01.21-1.64,1.2-5.17,4.87-8.23,3.75-3.13,10.38-5.37,20.62-7.6,8.42-1.83,19.54-3.85,19.54-3.85v15.48Z"/>
<path class="cls-1" d="M876.11,85.04c-2.82,0-5.57.2-8.24.57-18.17,2.73-23.49,11.96-23.49,11.96l.02-9.31h-22.74s0,112.19,0,112.19h23.71v-62.18c0-21.13,15.41-30.75,29.73-30.75,15.48,0,23,8.32,23,25.45v67.48h23.71v-70.74c0-27.56-17.51-44.67-45.69-44.67Z"/>
<path class="cls-1" d="M1539.12,85.04c-2.82,0-5.57.2-8.24.57-18.17,2.73-23.49,11.96-23.49,11.96v-9.32h-22.72v112.2h23.71v-62.18c0-21.13,15.41-30.75,29.73-30.75,15.48,0,23,8.32,23,25.45v67.48h23.71v-70.74c0-27.56-17.51-44.67-45.69-44.67Z"/>
<path class="cls-1" d="M1020.76,88.26v11.55s-5.81-14.77-32.24-14.77c-32.84,0-53.24,22.66-53.24,59.15,0,20.59,6.58,36.8,18.19,47.04,9.03,7.96,21.09,12.04,35.45,12.32,9.99.19,16.46-2.53,20.5-5.1,7.76-4.94,10.64-9.63,10.64-9.63,0,0-.33,3.67-.93,8.64-.43,3.6-1.24,6.13-1.24,6.13h0c-3.61,12.85-14.17,20.28-29.57,20.28s-24.73-5.07-26.58-15.06l-23.05,6.88c3.98,19.2,22,30.66,48.2,30.66,17.81,0,31.77-4.84,41.5-14.4,9.81-9.64,14.79-23.53,14.79-41.29v-102.41h-22.42ZM1019.26,145.21c0,22.44-10.96,35.84-29.32,35.84-19.67,0-30.95-13.44-30.95-36.86s11.28-36.66,30.95-36.66c17.92,0,29.15,13.34,29.32,34.82v2.86Z"/>
<path class="cls-1" d="M1259.01,85.04c-2.6,0-5.13.17-7.59.49-17.88,2.79-23.14,11.9-23.14,11.9v-2.67h-.01s0-45.69,0-45.69h-23.71v151.39h23.71v-62.18c0-21.27,15.41-30.95,29.73-30.95,15.48,0,23,8.32,23,25.45v67.68h23.71v-70.94c0-27.01-17.94-44.47-45.69-44.47Z"/>
<circle class="cls-1" cx="1450.93" cy="64.47" r="15.37"/>
<path class="cls-1" d="M1439.14,88.2v56.94h0c-6.75-5.56-14.6-9.75-23.5-12.26v-7.23c0-25.42-16.7-40.6-44.67-40.6-22.76,0-39.78,10.67-46.69,29.27-.44,1.19-1.77,4.78-1.77,4.78l19.51,12.62,2.65-6.91c4.52-11.78,12.88-17.27,26.3-17.27s21.1,6.51,20.96,19.33c0,.08,0,1.15,0,2.86-10.04-.28-19.38.69-27.77,2.66,0,0,0,0,0,0-11.06,2.5-31.6,8.85-38.94,25.36-.05.11-1.13,2.96-1.13,2.96-1.06,3.28-1.59,6.84-1.59,10.7,0,10.39,5.77,21.64,16.3,27.95,6.32,3.78,14.57,5.21,23.68,5.21,5.88,0,11.6-.86,16.91-2.44,12.49-4.04,15.96-12.16,15.96-12.16v10.47h20.29v-34.27c-5.7-3.56-14.26-5.66-23.65-5.64,0,2.65,0,4.33,0,4.33,0,7.86-8.56,18.93-28.5,18.93-5.63,0-9.62-1.49-12.28-3.71-3.56-2.97-4.73-7.24-4.24-11.01.21-1.64,1.2-5.17,4.87-8.23l-.04-.11c8.42-6.89,24.97-9.64,40.17-9.04v.03c12.94.47,22.62,3.01,29.53,7.77,1.88,1.19,3.65,2.52,5.28,3.98,6.94,6.23,9.73,13.9,10.93,18.38,1.95,7.31,1.43,18.57,1.43,18.57h23.59v-112.2h-23.59Z"/>
</g>
<path id="LangChain-symbol" class="cls-1" d="M393.52,75.2c9.66,9.66,9.66,25.38,0,35.04l-21.64,21.29-.22-1.22c-1.58-8.75-5.74-16.69-12.02-22.97-4.73-4.72-10.32-8.21-16.62-10.37-3.91,3.93-6.06,9.08-6.06,14.5,0,1.1.1,2.24.3,3.38,3.47,1.25,6.54,3.18,9.12,5.76,9.66,9.66,9.66,25.38,0,35.04l-18.84,18.84c-4.83,4.83-11.17,7.24-17.52,7.24s-12.69-2.41-17.52-7.24c-9.66-9.66-9.66-25.38,0-35.04l21.64-21.28.22,1.22c1.57,8.73,5.73,16.67,12.03,22.96,4.74,4.74,9.99,7.89,16.28,10.04l1.16-1.16c3.52-3.52,5.45-8.2,5.45-13.19,0-1.11-.1-2.22-.29-3.31-3.63-1.2-6.62-2.91-9.34-5.63-3.92-3.92-6.36-8.93-7.04-14.48-.05-.4-.08-.79-.12-1.19-.54-7.23,2.07-14.29,7.16-19.37l18.84-18.84c4.67-4.67,10.89-7.25,17.52-7.25s12.85,2.57,17.52,7.25ZM491.9,125c0,68.93-56.08,125-125,125H125C56.08,250,0,193.93,0,125S56.08,0,125,0h241.9c68.93,0,125,56.08,125,125ZM240.9,187.69c1.97-2.39-7.13-9.12-8.99-11.59-3.78-4.1-3.8-10-6.35-14.79-6.24-14.46-13.41-28.81-23.44-41.05-10.6-13.39-23.68-24.47-35.17-37.04-8.53-8.77-10.81-21.26-18.34-30.69-10.38-15.33-43.2-19.51-48.01,2.14.02.68-.19,1.11-.78,1.54-2.66,1.93-5.03,4.14-7.02,6.81-4.87,6.78-5.62,18.28.46,24.37.2-3.21.31-6.24,2.85-8.54,4.7,4.03,11.8,5.46,17.25,2.45,12.04,17.19,9.04,40.97,18.6,59.49,2.64,4.38,5.3,8.85,8.69,12.69,2.75,4.28,12.25,9.33,12.81,13.29.1,6.8-.7,14.23,3.76,19.92,2.1,4.26-3.06,8.54-7.22,8.01-5.4.74-11.99-3.63-16.72-.94-1.67,1.81-4.94-.19-6.38,2.32-.5,1.3-3.2,3.13-1.59,4.38,1.79-1.36,3.45-2.78,5.86-1.97-.36,1.96,1.19,2.24,2.42,2.81-.04,1.33-.82,2.69.2,3.82,1.19-1.2,1.9-2.9,3.79-3.4,6.28,8.37,12.67-8.47,26.26-.89-2.76-.14-5.21.21-7.07,2.48-.46.51-.85,1.11-.04,1.77,7.33-4.73,7.29,1.62,12.05-.33,3.66-1.91,7.3-4.3,11.65-3.62-4.23,1.22-4.4,4.62-6.88,7.49-.42.44-.62.94-.13,1.67,8.78-.74,9.5-3.66,16.59-7.24,5.29-3.23,10.56,4.6,15.14.14,1.01-.97,2.39-.64,3.64-.77-1.6-8.53-19.19,1.56-18.91-9.88,5.66-3.85,4.36-11.22,4.74-17.17,6.51,3.61,13.75,5.71,20.13,9.16,3.22,5.2,8.27,12.07,15,11.62.18-.52.34-.98.53-1.51,2.04.35,4.66,1.7,5.78-.88,3.05,3.19,7.53,3.03,11.52,2.21,2.95-2.4-5.55-5.82-6.69-8.29ZM419.51,92.72c0-11.64-4.52-22.57-12.73-30.78-8.21-8.21-19.14-12.73-30.79-12.73s-22.58,4.52-30.79,12.73l-18.84,18.84c-4.4,4.4-7.74,9.57-9.93,15.36l-.13.33-.34.1c-6.84,2.11-12.87,5.73-17.92,10.78l-18.84,18.84c-16.97,16.98-16.97,44.6,0,61.57,8.21,8.21,19.14,12.73,30.78,12.73h0c11.64,0,22.58-4.52,30.79-12.73l18.84-18.84c4.38-4.38,7.7-9.53,9.89-15.31l.13-.33.34-.11c6.72-2.06,12.92-5.8,17.95-10.82l18.84-18.84c8.21-8.21,12.73-19.14,12.73-30.79ZM172.38,173.6c-1.62,6.32-2.15,17.09-10.37,17.4-.68,3.65,2.53,5.02,5.44,3.85,2.89-1.33,4.26,1.05,5.23,3.42,4.46.65,11.06-1.49,11.31-6.77-6.66-3.84-8.72-11.14-11.62-17.9Z"/>
</g>
</svg>

.github/images/logo-light.svg vendored Normal file

@@ -0,0 +1,25 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1584.81 250">
<defs>
<style>
.cls-1 {
fill: #fff;
stroke-width: 0px;
}
</style>
</defs>
<g id="LanChain-logo">
<g id="LangChain-logotype">
<polygon class="cls-1" points="596.33 49.07 596.33 200.67 700.76 200.67 700.76 177.78 620.04 177.78 620.04 49.07 596.33 49.07"/>
<path class="cls-1" d="M1126.83,49.07c-20.53,0-37.95,7.4-50.38,21.41-12.32,13.88-18.82,33.36-18.82,56.33,0,47.23,27.25,77.75,69.41,77.75,29.71,0,52.71-15.54,61.54-41.56l2.14-6.31-23.53-8.94-2.17,7.03c-5.26,17.01-18.75,26.38-37.99,26.38-27.48,0-44.55-20.82-44.55-54.34s17.23-54.34,44.97-54.34c19.23,0,30.31,7.54,35.95,24.44l2.46,7.37,22.91-10.75-2.1-5.9c-8.96-25.22-29.65-38.56-59.85-38.56Z"/>
<path class="cls-1" d="M756.43,85.05c-22.76,0-39.78,10.67-46.69,29.27-.44,1.19-1.77,4.78-1.77,4.78l19.51,12.62,2.65-6.91c4.52-11.78,12.88-17.27,26.3-17.27s21.1,6.51,20.96,19.33c0,.52-.04,2.09-.04,2.09,0,0-17.76,2.88-25.08,4.43-31.23,6.6-44.31,18.52-44.31,38.02,0,10.39,5.77,21.64,16.3,27.95,6.32,3.78,14.57,5.21,23.68,5.21,5.99,0,11.81-.89,17.2-2.53,12.25-4.07,15.67-12.07,15.67-12.07v10.46h20.29v-74.78c0-25.42-16.7-40.6-44.67-40.6ZM777.46,164.85c0,7.86-8.56,18.93-28.5,18.93-5.63,0-9.62-1.49-12.28-3.71-3.56-2.97-4.73-7.24-4.24-11.01.21-1.64,1.2-5.17,4.87-8.23,3.75-3.13,10.38-5.37,20.62-7.6,8.42-1.83,19.54-3.85,19.54-3.85v15.48Z"/>
<path class="cls-1" d="M876.11,85.04c-2.82,0-5.57.2-8.24.57-18.17,2.73-23.49,11.96-23.49,11.96l.02-9.31h-22.74s0,112.19,0,112.19h23.71v-62.18c0-21.13,15.41-30.75,29.73-30.75,15.48,0,23,8.32,23,25.45v67.48h23.71v-70.74c0-27.56-17.51-44.67-45.69-44.67Z"/>
<path class="cls-1" d="M1539.12,85.04c-2.82,0-5.57.2-8.24.57-18.17,2.73-23.49,11.96-23.49,11.96v-9.32h-22.72v112.2h23.71v-62.18c0-21.13,15.41-30.75,29.73-30.75,15.48,0,23,8.32,23,25.45v67.48h23.71v-70.74c0-27.56-17.51-44.67-45.69-44.67Z"/>
<path class="cls-1" d="M1020.76,88.26v11.55s-5.81-14.77-32.24-14.77c-32.84,0-53.24,22.66-53.24,59.15,0,20.59,6.58,36.8,18.19,47.04,9.03,7.96,21.09,12.04,35.45,12.32,9.99.19,16.46-2.53,20.5-5.1,7.76-4.94,10.64-9.63,10.64-9.63,0,0-.33,3.67-.93,8.64-.43,3.6-1.24,6.13-1.24,6.13h0c-3.61,12.85-14.17,20.28-29.57,20.28s-24.73-5.07-26.58-15.06l-23.05,6.88c3.98,19.2,22,30.66,48.2,30.66,17.81,0,31.77-4.84,41.5-14.4,9.81-9.64,14.79-23.53,14.79-41.29v-102.41h-22.42ZM1019.26,145.21c0,22.44-10.96,35.84-29.32,35.84-19.67,0-30.95-13.44-30.95-36.86s11.28-36.66,30.95-36.66c17.92,0,29.15,13.34,29.32,34.82v2.86Z"/>
<path class="cls-1" d="M1259.01,85.04c-2.6,0-5.13.17-7.59.49-17.88,2.79-23.14,11.9-23.14,11.9v-2.67h-.01s0-45.69,0-45.69h-23.71v151.39h23.71v-62.18c0-21.27,15.41-30.95,29.73-30.95,15.48,0,23,8.32,23,25.45v67.68h23.71v-70.94c0-27.01-17.94-44.47-45.69-44.47Z"/>
<circle class="cls-1" cx="1450.93" cy="64.47" r="15.37"/>
<path class="cls-1" d="M1439.14,88.2v56.94h0c-6.75-5.56-14.6-9.75-23.5-12.26v-7.23c0-25.42-16.7-40.6-44.67-40.6-22.76,0-39.78,10.67-46.69,29.27-.44,1.19-1.77,4.78-1.77,4.78l19.51,12.62,2.65-6.91c4.52-11.78,12.88-17.27,26.3-17.27s21.1,6.51,20.96,19.33c0,.08,0,1.15,0,2.86-10.04-.28-19.38.69-27.77,2.66,0,0,0,0,0,0-11.06,2.5-31.6,8.85-38.94,25.36-.05.11-1.13,2.96-1.13,2.96-1.06,3.28-1.59,6.84-1.59,10.7,0,10.39,5.77,21.64,16.3,27.95,6.32,3.78,14.57,5.21,23.68,5.21,5.88,0,11.6-.86,16.91-2.44,12.49-4.04,15.96-12.16,15.96-12.16v10.47h20.29v-34.27c-5.7-3.56-14.26-5.66-23.65-5.64,0,2.65,0,4.33,0,4.33,0,7.86-8.56,18.93-28.5,18.93-5.63,0-9.62-1.49-12.28-3.71-3.56-2.97-4.73-7.24-4.24-11.01.21-1.64,1.2-5.17,4.87-8.23l-.04-.11c8.42-6.89,24.97-9.64,40.17-9.04v.03c12.94.47,22.62,3.01,29.53,7.77,1.88,1.19,3.65,2.52,5.28,3.98,6.94,6.23,9.73,13.9,10.93,18.38,1.95,7.31,1.43,18.57,1.43,18.57h23.59v-112.2h-23.59Z"/>
</g>
<path id="LangChain-symbol" class="cls-1" d="M393.52,75.2c9.66,9.66,9.66,25.38,0,35.04l-21.64,21.29-.22-1.22c-1.58-8.75-5.74-16.69-12.02-22.97-4.73-4.72-10.32-8.21-16.62-10.37-3.91,3.93-6.06,9.08-6.06,14.5,0,1.1.1,2.24.3,3.38,3.47,1.25,6.54,3.18,9.12,5.76,9.66,9.66,9.66,25.38,0,35.04l-18.84,18.84c-4.83,4.83-11.17,7.24-17.52,7.24s-12.69-2.41-17.52-7.24c-9.66-9.66-9.66-25.38,0-35.04l21.64-21.28.22,1.22c1.57,8.73,5.73,16.67,12.03,22.96,4.74,4.74,9.99,7.89,16.28,10.04l1.16-1.16c3.52-3.52,5.45-8.2,5.45-13.19,0-1.11-.1-2.22-.29-3.31-3.63-1.2-6.62-2.91-9.34-5.63-3.92-3.92-6.36-8.93-7.04-14.48-.05-.4-.08-.79-.12-1.19-.54-7.23,2.07-14.29,7.16-19.37l18.84-18.84c4.67-4.67,10.89-7.25,17.52-7.25s12.85,2.57,17.52,7.25ZM491.9,125c0,68.93-56.08,125-125,125H125C56.08,250,0,193.93,0,125S56.08,0,125,0h241.9C435.82,0,491.9,56.08,491.9,125ZM240.9,187.69c1.97-2.39-7.13-9.12-8.99-11.59-3.78-4.1-3.8-10-6.35-14.79-6.24-14.46-13.41-28.81-23.44-41.05-10.6-13.39-23.68-24.47-35.17-37.04-8.53-8.77-10.81-21.26-18.34-30.69-10.38-15.33-43.2-19.51-48.01,2.14.02.68-.19,1.11-.78,1.54-2.66,1.93-5.03,4.14-7.02,6.81-4.87,6.78-5.62,18.28.46,24.37.2-3.21.31-6.24,2.85-8.54,4.7,4.03,11.8,5.46,17.25,2.45,12.04,17.19,9.04,40.97,18.6,59.49,2.64,4.38,5.3,8.85,8.69,12.69,2.75,4.28,12.25,9.33,12.81,13.29.1,6.8-.7,14.23,3.76,19.92,2.1,4.26-3.06,8.54-7.22,8.01-5.4.74-11.99-3.63-16.72-.94-1.67,1.81-4.94-.19-6.38,2.32-.5,1.3-3.2,3.13-1.59,4.38,1.79-1.36,3.45-2.78,5.86-1.97-.36,1.96,1.19,2.24,2.42,2.81-.04,1.33-.82,2.69.2,3.82,1.19-1.2,1.9-2.9,3.79-3.4,6.28,8.37,12.67-8.47,26.26-.89-2.76-.14-5.21.21-7.07,2.48-.46.51-.85,1.11-.04,1.77,7.33-4.73,7.29,1.62,12.05-.33,3.66-1.91,7.3-4.3,11.65-3.62-4.23,1.22-4.4,4.62-6.88,7.49-.42.44-.62.94-.13,1.67,8.78-.74,9.5-3.66,16.59-7.24,5.29-3.23,10.56,4.6,15.14.14,1.01-.97,2.39-.64,3.64-.77-1.6-8.53-19.19,1.56-18.91-9.88,5.66-3.85,4.36-11.22,4.74-17.17,6.51,3.61,13.75,5.71,20.13,9.16,3.22,5.2,8.27,12.07,15,11.62.18-.52.34-.98.53-1.51,2.04.35,4.66,1.7,5.78-.88,3.05,3.19,7.53,3.03,11.52,2.21,2.95-2.4-5.55-5.82-6.69-8.29ZM419.51,92.72c0-11.64-4.52-22.57-12.73-30.78-8.21-8.21-19.14-12.73-30.79-12.73s-22.58,4.52-30.79,12.73l-18.84,18.84c-4.4,4.4-7.74,9.57-9.93,15.36l-.13.33-.34.1c-6.84,2.11-12.87,5.73-17.92,10.78l-18.84,18.84c-16.97,16.98-16.97,44.6,0,61.57,8.21,8.21,19.14,12.73,30.78,12.73h0c11.64,0,22.58-4.52,30.79-12.73l18.84-18.84c4.38-4.38,7.7-9.53,9.89-15.31l.13-.33.34-.11c6.72-2.06,12.92-5.8,17.95-10.82l18.84-18.84c8.21-8.21,12.73-19.14,12.73-30.79ZM172.38,173.6c-1.62,6.32-2.15,17.09-10.37,17.4-.68,3.65,2.53,5.02,5.44,3.85,2.89-1.33,4.26,1.05,5.23,3.42,4.46.65,11.06-1.49,11.31-6.77-6.66-3.84-8.72-11.14-11.62-17.9Z"/>
</g>
</svg>

.github/pr-file-labeler.yml vendored Normal file

@@ -0,0 +1,163 @@
# Label PRs (config)
# Automatically applies labels based on changed files and branch patterns
# Core packages
core:
- changed-files:
- any-glob-to-any-file:
- "libs/core/**/*"
langchain-classic:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain/**/*"
langchain:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain_v1/**/*"
cli:
- changed-files:
- any-glob-to-any-file:
- "libs/cli/**/*"
standard-tests:
- changed-files:
- any-glob-to-any-file:
- "libs/standard-tests/**/*"
model-profiles:
- changed-files:
- any-glob-to-any-file:
- "libs/model-profiles/**/*"
text-splitters:
- changed-files:
- any-glob-to-any-file:
- "libs/text-splitters/**/*"
# Partner integrations
integration:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/**/*"
anthropic:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/anthropic/**/*"
chroma:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/chroma/**/*"
deepseek:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/deepseek/**/*"
exa:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/exa/**/*"
fireworks:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/fireworks/**/*"
groq:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/groq/**/*"
huggingface:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/huggingface/**/*"
mistralai:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/mistralai/**/*"
nomic:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/nomic/**/*"
ollama:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/ollama/**/*"
openai:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/openai/**/*"
perplexity:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/perplexity/**/*"
prompty:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/prompty/**/*"
qdrant:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/qdrant/**/*"
xai:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/xai/**/*"
# Infrastructure and DevOps
infra:
- changed-files:
- any-glob-to-any-file:
- ".github/**/*"
- "Makefile"
- ".pre-commit-config.yaml"
- "scripts/**/*"
- "docker/**/*"
- "Dockerfile*"
github_actions:
- changed-files:
- any-glob-to-any-file:
- ".github/workflows/**/*"
- ".github/actions/**/*"
dependencies:
- changed-files:
- any-glob-to-any-file:
- "**/pyproject.toml"
- "uv.lock"
- "**/requirements*.txt"
- "**/poetry.lock"
# Documentation
documentation:
- changed-files:
- any-glob-to-any-file:
- "**/*.md"
- "**/*.rst"
- "**/README*"
# Security related changes
security:
- changed-files:
- any-glob-to-any-file:
- "**/*security*"
- "**/*auth*"
- "**/*credential*"
- "**/*secret*"
- "**/*token*"
- ".github/workflows/security*"


@@ -1,19 +1,54 @@
"""Analyze git diffs to determine which directories need to be tested.
Intelligently determines which LangChain packages and directories need to be tested,
linted, or built based on the changes. Handles dependency relationships between
packages, maps file changes to appropriate CI job configurations, and outputs JSON
configurations for GitHub Actions.
- Maps changed files to affected package directories (libs/core, libs/partners/*, etc.)
- Builds dependency graph to include dependent packages when core components change
- Generates test matrix configurations with appropriate Python versions
- Handles special cases for Pydantic version testing and performance benchmarks
Used as part of the check_diffs workflow.
"""
import glob
import json
import os
import sys
import tomllib
from collections import defaultdict
from typing import Dict, List, Set
from pathlib import Path
from typing import Dict, List, Set
import tomllib
from get_min_versions import get_min_version_from_toml
from packaging.requirements import Requirement
LANGCHAIN_DIRS = [
"libs/core",
"libs/text-splitters",
"libs/langchain",
"libs/community",
"libs/experimental",
"libs/langchain_v1",
"libs/model-profiles",
]
# When set to True, we are ignoring core dependents
# in order to be able to get CI to pass for each individual
# package that depends on core
# e.g. if you touch core, we don't then add textsplitters/etc to CI
IGNORE_CORE_DEPENDENTS = False
# ignored partners are removed from dependents
# but still run if directly edited
IGNORED_PARTNERS = [
# remove huggingface from dependents because of CI instability
# specifically in huggingface jobs
# https://github.com/langchain-ai/langchain/issues/25558
"huggingface",
# prompty exhibiting issues with numpy for Python 3.13
# https://github.com/langchain-ai/langchain/actions/runs/12651104685/job/35251034969?pr=29065
"prompty",
]
@@ -26,9 +61,9 @@ def all_package_dirs() -> Set[str]:
def dependents_graph() -> dict:
"""
Construct a mapping of package -> dependents, such that we can
run tests on all dependents of a package when a change is made.
"""Construct a mapping of package -> dependents
Done such that we can run tests on all dependents of a package when a change is made.
"""
dependents = defaultdict(set)
@@ -38,15 +73,17 @@ def dependents_graph() -> dict:
# load regular and test deps from pyproject.toml
with open(path, "rb") as f:
pyproject = tomllib.load(f)["tool"]["poetry"]
pyproject = tomllib.load(f)
pkg_dir = "libs" + "/".join(path.split("libs")[1].split("/")[:-1])
for dep in [
*pyproject["dependencies"].keys(),
*pyproject["group"]["test"]["dependencies"].keys(),
*pyproject["project"]["dependencies"],
*pyproject["dependency-groups"]["test"],
]:
requirement = Requirement(dep)
package_name = requirement.name
if "langchain" in dep:
dependents[dep].add(pkg_dir)
dependents[package_name].add(pkg_dir)
continue
# load extended deps from extended_testing_deps.txt
@@ -58,9 +95,9 @@ def dependents_graph() -> dict:
for depline in extended_deps:
if depline.startswith("-e "):
# editable dependency
assert depline.startswith(
"-e ../partners/"
), "Extended test deps should only editable install partner packages"
assert depline.startswith("-e ../partners/"), (
"Extended test deps should only editable install partner packages"
)
partner = depline.split("partners/")[1]
dep = f"langchain-{partner}"
else:
@@ -68,6 +105,11 @@ def dependents_graph() -> dict:
if "langchain" in dep:
dependents[dep].add(pkg_dir)
for k in dependents:
for partner in IGNORED_PARTNERS:
if f"libs/partners/{partner}" in dependents[k]:
dependents[k].remove(f"libs/partners/{partner}")
return dependents
@@ -85,49 +127,93 @@ def add_dependents(dirs_to_eval: Set[str], dependents: dict) -> List[str]:
def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
if dir_ == "libs/core":
return [
{"working-directory": dir_, "python-version": f"3.{v}"}
for v in range(8, 13)
]
min_python = "3.8"
max_python = "3.12"
if job == "test-pydantic":
return _get_pydantic_test_configs(dir_)
if job == "codspeed":
py_versions = ["3.13"]
elif dir_ == "libs/core":
py_versions = ["3.10", "3.11", "3.12", "3.13", "3.14"]
# custom logic for specific directories
if dir_ == "libs/partners/milvus":
# milvus poetry doesn't allow 3.12 because they
# declare deps in funny way
max_python = "3.11"
elif dir_ in {"libs/partners/chroma"}:
py_versions = ["3.10", "3.13"]
else:
py_versions = ["3.10", "3.14"]
if dir_ in ["libs/community", "libs/langchain"] and job == "extended-tests":
# community extended test resolution in 3.12 is slow
# even in uv
max_python = "3.11"
return [{"working-directory": dir_, "python-version": py_v} for py_v in py_versions]
if dir_ == "libs/community" and job == "compile-integration-tests":
# community integration deps are slow in 3.12
max_python = "3.11"
return [
{"working-directory": dir_, "python-version": min_python},
{"working-directory": dir_, "python-version": max_python},
def _get_pydantic_test_configs(
dir_: str, *, python_version: str = "3.12"
) -> List[Dict[str, str]]:
with open("./libs/core/uv.lock", "rb") as f:
core_uv_lock_data = tomllib.load(f)
for package in core_uv_lock_data["package"]:
if package["name"] == "pydantic":
core_max_pydantic_minor = package["version"].split(".")[1]
break
with open(f"./{dir_}/uv.lock", "rb") as f:
dir_uv_lock_data = tomllib.load(f)
for package in dir_uv_lock_data["package"]:
if package["name"] == "pydantic":
dir_max_pydantic_minor = package["version"].split(".")[1]
break
core_min_pydantic_version = get_min_version_from_toml(
"./libs/core/pyproject.toml", "release", python_version, include=["pydantic"]
)["pydantic"]
core_min_pydantic_minor = (
core_min_pydantic_version.split(".")[1]
if "." in core_min_pydantic_version
else "0"
)
dir_min_pydantic_version = get_min_version_from_toml(
f"./{dir_}/pyproject.toml", "release", python_version, include=["pydantic"]
).get("pydantic", "0.0.0")
dir_min_pydantic_minor = (
dir_min_pydantic_version.split(".")[1]
if "." in dir_min_pydantic_version
else "0"
)
max_pydantic_minor = min(
int(dir_max_pydantic_minor),
int(core_max_pydantic_minor),
)
min_pydantic_minor = max(
int(dir_min_pydantic_minor),
int(core_min_pydantic_minor),
)
configs = [
{
"working-directory": dir_,
"pydantic-version": f"2.{v}.0",
"python-version": python_version,
}
for v in range(min_pydantic_minor, max_pydantic_minor + 1)
]
return configs
def _get_configs_for_multi_dirs(
job: str, dirs_to_run: List[str], dependents: dict
job: str, dirs_to_run: Dict[str, Set[str]], dependents: dict
) -> List[Dict[str, str]]:
if job == "lint":
dirs = add_dependents(
dirs_to_run["lint"] | dirs_to_run["test"] | dirs_to_run["extended-test"],
dependents,
)
elif job in ["test", "compile-integration-tests", "dependencies"]:
elif job in ["test", "compile-integration-tests", "dependencies", "test-pydantic"]:
dirs = add_dependents(
dirs_to_run["test"] | dirs_to_run["extended-test"], dependents
)
elif job == "extended-tests":
dirs = list(dirs_to_run["extended-test"])
elif job == "codspeed":
dirs = list(dirs_to_run["codspeed"])
else:
raise ValueError(f"Unknown job: {job}")
@@ -143,6 +229,7 @@ if __name__ == "__main__":
"lint": set(),
"test": set(),
"extended-test": set(),
"codspeed": set(),
}
docs_edited = False
@@ -151,6 +238,7 @@ if __name__ == "__main__":
dirs_to_run["lint"] = all_package_dirs()
dirs_to_run["test"] = all_package_dirs()
dirs_to_run["extended-test"] = set(LANGCHAIN_DIRS)
for file in files:
if any(
file.startswith(dir_)
@@ -161,33 +249,47 @@ if __name__ == "__main__":
".github/scripts/check_diff.py",
)
):
# add all LANGCHAIN_DIRS for infra changes
# Infrastructure changes (workflows, actions, CI scripts) trigger tests on
# all core packages as a safety measure. This ensures that changes to CI/CD
# infrastructure don't inadvertently break package testing, even if the change
# appears unrelated (e.g., documentation build workflows). This is intentionally
# conservative to catch unexpected side effects from workflow modifications.
#
# Example: A PR modifying .github/workflows/api_doc_build.yml will trigger
# lint/test jobs for libs/core, libs/text-splitters, libs/langchain, and
# libs/langchain_v1, even though the workflow may only affect documentation.
dirs_to_run["extended-test"].update(LANGCHAIN_DIRS)
dirs_to_run["lint"].add(".")
if file.startswith("libs/core"):
dirs_to_run["codspeed"].add("libs/core")
if any(file.startswith(dir_) for dir_ in LANGCHAIN_DIRS):
# add that dir and all dirs after in LANGCHAIN_DIRS
# for extended testing
found = False
for dir_ in LANGCHAIN_DIRS:
if dir_ == "libs/core" and IGNORE_CORE_DEPENDENTS:
dirs_to_run["extended-test"].add(dir_)
continue
if file.startswith(dir_):
found = True
if found:
dirs_to_run["extended-test"].add(dir_)
elif file.startswith("libs/standard-tests"):
# TODO: update to include all packages that rely on standard-tests (all partner packages)
# note: won't run on external repo partners
# Note: won't run on external repo partners
dirs_to_run["lint"].add("libs/standard-tests")
dirs_to_run["test"].add("libs/standard-tests")
dirs_to_run["test"].add("libs/partners/mistralai")
dirs_to_run["test"].add("libs/partners/openai")
dirs_to_run["test"].add("libs/partners/anthropic")
dirs_to_run["test"].add("libs/partners/ai21")
dirs_to_run["test"].add("libs/partners/fireworks")
dirs_to_run["test"].add("libs/partners/groq")
elif file.startswith("libs/cli"):
# todo: add cli makefile
pass
dirs_to_run["lint"].add("libs/cli")
dirs_to_run["test"].add("libs/cli")
elif file.startswith("libs/partners"):
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}") and [
@@ -196,22 +298,30 @@ if __name__ == "__main__":
if not filename.startswith(".")
] != ["README.md"]:
dirs_to_run["test"].add(f"libs/partners/{partner_dir}")
# Skip codspeed for partners without benchmarks or in IGNORED_PARTNERS
if partner_dir not in IGNORED_PARTNERS:
dirs_to_run["codspeed"].add(f"libs/partners/{partner_dir}")
# Skip if the directory was deleted or is just a tombstone readme
elif file.startswith("libs/"):
# Check if this is a root-level file in libs/ (e.g., libs/README.md)
file_parts = file.split("/")
if len(file_parts) == 2:
# Root-level file in libs/, skip it (no tests needed)
continue
raise ValueError(
f"Unknown lib: {file}. check_diff.py likely needs "
"an update for this new library!"
)
elif any(file.startswith(p) for p in ["docs/", "templates/", "cookbook/"]):
if file.startswith("docs/"):
docs_edited = True
dirs_to_run["lint"].add(".")
elif file in [
"pyproject.toml",
"uv.lock",
]: # root uv files
docs_edited = True
dependents = dependents_graph()
# we now have dirs_by_job
# todo: clean this up
map_job_to_configs = {
job: _get_configs_for_multi_dirs(job, dirs_to_run, dependents)
for job in [
@@ -220,11 +330,10 @@ if __name__ == "__main__":
"extended-tests",
"compile-integration-tests",
"dependencies",
"test-pydantic",
"codspeed",
]
}
map_job_to_configs["test-doc-imports"] = (
[{"python-version": "3.12"}] if docs_edited else []
)
for key, value in map_job_to_configs.items():
json_output = json.dumps(value)


@@ -1,35 +1,36 @@
"""Check that no dependencies allow prereleases unless we're releasing a prerelease."""
import sys
import tomllib
if __name__ == "__main__":
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
# read toml file
with open(toml_file, "rb") as file:
toml_data = tomllib.load(file)
# see if we're releasing an rc
version = toml_data["tool"]["poetry"]["version"]
releasing_rc = "rc" in version
# See if we're releasing an rc or dev version
version = toml_data["project"]["version"]
releasing_rc = "rc" in version or "dev" in version
# if not, iterate through dependencies and make sure none allow prereleases
# If not, iterate through dependencies and make sure none allow prereleases
if not releasing_rc:
dependencies = toml_data["tool"]["poetry"]["dependencies"]
for lib in dependencies:
dep_version = dependencies[lib]
dependencies = toml_data["project"]["dependencies"]
for dep_version in dependencies:
dep_version_string = (
dep_version["version"] if isinstance(dep_version, dict) else dep_version
)
if "rc" in dep_version_string:
raise ValueError(
f"Dependency {lib} has a prerelease version. Please remove this."
f"Dependency {dep_version} has a prerelease version. Please remove this."
)
if isinstance(dep_version, dict) and dep_version.get(
"allow-prereleases", False
):
raise ValueError(
f"Dependency {lib} has allow-prereleases set to true. Please remove this."
f"Dependency {dep_version} has allow-prereleases set to true. Please remove this."
)


@@ -1,77 +1,150 @@
"""Get minimum versions of dependencies from a pyproject.toml file."""
import sys
from collections import defaultdict
if sys.version_info >= (3, 11):
import tomllib
else:
# for python 3.10 and below, which doesnt have stdlib tomllib
# For Python 3.10 and below, which doesn't have stdlib tomllib
import tomli as tomllib
from packaging.version import parse as parse_version
import re
from typing import List
import requests
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.version import Version, parse
MIN_VERSION_LIBS = [
"langchain-core",
"langchain-community",
"langchain",
"langchain-text-splitters",
"numpy",
"SQLAlchemy",
]
SKIP_IF_PULL_REQUEST = ["langchain-core"]
# some libs only get checked on release because of simultaneous changes in
# multiple libs
SKIP_IF_PULL_REQUEST = [
"langchain-core",
"langchain-text-splitters",
"langchain",
]
def get_min_version(version: str) -> str:
# base regex for x.x.x with cases for rc/post/etc
# valid strings: https://peps.python.org/pep-0440/#public-version-identifiers
vstring = r"\d+(?:\.\d+){0,2}(?:(?:a|b|rc|\.post|\.dev)\d+)?"
# case ^x.x.x
_match = re.match(f"^\\^({vstring})$", version)
if _match:
return _match.group(1)
def get_pypi_versions(package_name: str) -> List[str]:
"""Fetch all available versions for a package from PyPI.
# case >=x.x.x,<y.y.y
_match = re.match(f"^>=({vstring}),<({vstring})$", version)
if _match:
_min = _match.group(1)
_max = _match.group(2)
assert parse_version(_min) < parse_version(_max)
return _min
Args:
package_name: Name of the package
# case x.x.x
_match = re.match(f"^({vstring})$", version)
if _match:
return _match.group(1)
Returns:
List of all available versions
raise ValueError(f"Unrecognized version format: {version}")
Raises:
requests.exceptions.RequestException: If PyPI API request fails
KeyError: If package not found or response format unexpected
"""
pypi_url = f"https://pypi.org/pypi/{package_name}/json"
response = requests.get(pypi_url)
response.raise_for_status()
return list(response.json()["releases"].keys())
def get_min_version_from_toml(toml_path: str, versions_for: str):
def get_minimum_version(package_name: str, spec_string: str) -> str | None:
"""Find the minimum published version that satisfies the given constraints.
Args:
package_name: Name of the package
spec_string: Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")
Returns:
Minimum compatible version or None if no compatible version found
"""
# Rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
spec_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec_string)
# Rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
for y in range(1, 10):
spec_string = re.sub(
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}", spec_string
)
# Rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
spec_string = re.sub(
rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x + 1}", spec_string
)
spec_set = SpecifierSet(spec_string)
all_versions = get_pypi_versions(package_name)
valid_versions = []
for version_str in all_versions:
try:
version = parse(version_str)
if spec_set.contains(version):
valid_versions.append(version)
except ValueError:
continue
return str(min(valid_versions)) if valid_versions else None
def _check_python_version_from_requirement(
requirement: Requirement, python_version: str
) -> bool:
if not requirement.marker:
return True
else:
marker_str = str(requirement.marker)
if "python_version" in marker_str or "python_full_version" in marker_str:
python_version_str = "".join(
char
for char in marker_str
if char.isdigit() or char in (".", "<", ">", "=", ",")
)
return check_python_version(python_version, python_version_str)
return True
def get_min_version_from_toml(
toml_path: str,
versions_for: str,
python_version: str,
*,
include: list | None = None,
):
# Parse the TOML file
with open(toml_path, "rb") as file:
toml_data = tomllib.load(file)
# Get the dependencies from tool.poetry.dependencies
dependencies = toml_data["tool"]["poetry"]["dependencies"]
dependencies = defaultdict(list)
for dep in toml_data["project"]["dependencies"]:
requirement = Requirement(dep)
dependencies[requirement.name].append(requirement)
# Initialize a dictionary to store the minimum versions
min_versions = {}
# Iterate over the libs in MIN_VERSION_LIBS
for lib in MIN_VERSION_LIBS:
for lib in set(MIN_VERSION_LIBS + (include or [])):
if versions_for == "pull_request" and lib in SKIP_IF_PULL_REQUEST:
# some libs only get checked on release because of simultaneous
# changes
# changes in multiple libs
continue
# Check if the lib is present in the dependencies
if lib in dependencies:
# Get the version string
version_string = dependencies[lib]
if isinstance(version_string, dict):
version_string = version_string["version"]
if include and lib not in include:
continue
requirements = dependencies[lib]
for requirement in requirements:
if _check_python_version_from_requirement(requirement, python_version):
version_string = str(requirement.specifier)
break
# Use parse_version to get the minimum supported version from version_string
min_version = get_min_version(version_string)
min_version = get_minimum_version(lib, version_string)
# Store the minimum version in the min_versions dictionary
min_versions[lib] = min_version
@@ -79,13 +152,48 @@ def get_min_version_from_toml(toml_path: str, versions_for: str):
return min_versions
def check_python_version(version_string, constraint_string):
"""Check if the given Python version matches the given constraints.
Args:
version_string: A string representing the Python version (e.g. "3.8.5").
constraint_string: A string representing the package's Python version
constraints (e.g. ">=3.6, <4.0").
Returns:
True if the version matches the constraints
"""
# Rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
constraint_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", constraint_string)
# Rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
for y in range(1, 10):
constraint_string = re.sub(
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}.0", constraint_string
)
# Rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
constraint_string = re.sub(
rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x + 1}.0.0", constraint_string
)
try:
version = Version(version_string)
constraints = SpecifierSet(constraint_string)
return version in constraints
except Exception as e:
print(f"Error: {e}")
return False
if __name__ == "__main__":
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
versions_for = sys.argv[2]
python_version = sys.argv[3]
assert versions_for in ["release", "pull_request"]
# Call the function to get the minimum versions
min_versions = get_min_version_from_toml(toml_file, versions_for)
min_versions = get_min_version_from_toml(toml_file, versions_for, python_version)
print(" ".join([f"{lib}=={version}" for lib, version in min_versions.items()]))


@@ -81,56 +81,93 @@ import time
__version__ = "2022.12+dev"
# Update symlinks only if the platform supports not following them
UPDATE_SYMLINKS = bool(os.utime in getattr(os, 'supports_follow_symlinks', []))
UPDATE_SYMLINKS = bool(os.utime in getattr(os, "supports_follow_symlinks", []))
# Call os.path.normpath() only if not in a POSIX platform (Windows)
NORMALIZE_PATHS = (os.path.sep != '/')
NORMALIZE_PATHS = os.path.sep != "/"
# How many files to process in each batch when re-trying merge commits
STEPMISSING = 100
# (Extra) keywords for the os.utime() call performed by touch()
UTIME_KWS = {} if not UPDATE_SYMLINKS else {'follow_symlinks': False}
UTIME_KWS = {} if not UPDATE_SYMLINKS else {"follow_symlinks": False}
# Command-line interface ######################################################
def parse_args():
parser = argparse.ArgumentParser(
description=__doc__.split('\n---')[0])
parser = argparse.ArgumentParser(description=__doc__.split("\n---")[0])
group = parser.add_mutually_exclusive_group()
group.add_argument('--quiet', '-q', dest='loglevel',
action="store_const", const=logging.WARNING, default=logging.INFO,
help="Suppress informative messages and summary statistics.")
group.add_argument('--verbose', '-v', action="count", help="""
group.add_argument(
"--quiet",
"-q",
dest="loglevel",
action="store_const",
const=logging.WARNING,
default=logging.INFO,
help="Suppress informative messages and summary statistics.",
)
group.add_argument(
"--verbose",
"-v",
action="count",
help="""
Print additional information for each processed file.
Specify twice to further increase verbosity.
""")
""",
)
parser.add_argument('--cwd', '-C', metavar="DIRECTORY", help="""
parser.add_argument(
"--cwd",
"-C",
metavar="DIRECTORY",
help="""
Run as if %(prog)s was started in directory %(metavar)s.
This affects how --work-tree, --git-dir and PATHSPEC arguments are handled.
See 'man 1 git' or 'git --help' for more information.
""")
""",
)
parser.add_argument('--git-dir', dest='gitdir', metavar="GITDIR", help="""
parser.add_argument(
"--git-dir",
dest="gitdir",
metavar="GITDIR",
help="""
Path to the git repository, by default auto-discovered by searching
the current directory and its parents for a .git/ subdirectory.
""")
""",
)
parser.add_argument('--work-tree', dest='workdir', metavar="WORKTREE", help="""
parser.add_argument(
"--work-tree",
dest="workdir",
metavar="WORKTREE",
help="""
Path to the work tree root, by default the parent of GITDIR if it's
automatically discovered, or the current directory if GITDIR is set.
""")
""",
)
parser.add_argument('--force', '-f', default=False, action="store_true", help="""
parser.add_argument(
"--force",
"-f",
default=False,
action="store_true",
help="""
Force updating files with uncommitted modifications.
Untracked files and uncommitted deletions, renames and additions are
always ignored.
""")
""",
)
parser.add_argument('--merge', '-m', default=False, action="store_true", help="""
parser.add_argument(
"--merge",
"-m",
default=False,
action="store_true",
help="""
Include merge commits.
Leads to more recent times and more files per commit, thus with the same
time, which may or may not be what you want.
@@ -138,71 +175,130 @@ def parse_args():
are found sooner, which can improve performance, sometimes substantially.
But as merge commits are usually huge, processing them may also take longer.
By default, merge commits are only used for files missing from regular commits.
""")
""",
)
parser.add_argument('--first-parent', default=False, action="store_true", help="""
parser.add_argument(
"--first-parent",
default=False,
action="store_true",
help="""
Consider only the first parent, the "main branch", when evaluating merge commits.
Only effective when merge commits are processed, either when --merge is
used or when finding missing files after the first regular log search.
See --skip-missing.
""")
""",
)
parser.add_argument('--skip-missing', '-s', dest="missing", default=True,
action="store_false", help="""
parser.add_argument(
"--skip-missing",
"-s",
dest="missing",
default=True,
action="store_false",
help="""
Do not try to find missing files.
If merge commits were not evaluated with --merge and some files were
not found in regular commits, by default %(prog)s searches for these
files again in the merge commits.
This option disables this retry, so files found only in merge commits
will not have their timestamp updated.
""")
""",
)
parser.add_argument('--no-directories', '-D', dest='dirs', default=True,
action="store_false", help="""
parser.add_argument(
"--no-directories",
"-D",
dest="dirs",
default=True,
action="store_false",
help="""
Do not update directory timestamps.
By default, use the time of its most recently created, renamed or deleted file.
Note that just modifying a file will NOT update its directory time.
""")
""",
)
parser.add_argument('--test', '-t', default=False, action="store_true",
help="Test run: do not actually update any file timestamp.")
parser.add_argument(
"--test",
"-t",
default=False,
action="store_true",
help="Test run: do not actually update any file timestamp.",
)
parser.add_argument('--commit-time', '-c', dest='commit_time', default=False,
action='store_true', help="Use commit time instead of author time.")
parser.add_argument(
"--commit-time",
"-c",
dest="commit_time",
default=False,
action="store_true",
help="Use commit time instead of author time.",
)
parser.add_argument('--oldest-time', '-o', dest='reverse_order', default=False,
action='store_true', help="""
parser.add_argument(
"--oldest-time",
"-o",
dest="reverse_order",
default=False,
action="store_true",
help="""
Update times based on the oldest, instead of the most recent commit of a file.
This reverses the order in which the git log is processed to emulate a
file "creation" date. Note this will be inaccurate for files deleted and
re-created at later dates.
""")
""",
)
parser.add_argument('--skip-older-than', metavar='SECONDS', type=int, help="""
parser.add_argument(
"--skip-older-than",
metavar="SECONDS",
type=int,
help="""
Ignore files that are currently older than %(metavar)s.
Useful in workflows that assume such files already have a correct timestamp,
as it may improve performance by processing fewer files.
""")
""",
)
parser.add_argument('--skip-older-than-commit', '-N', default=False,
action='store_true', help="""
parser.add_argument(
"--skip-older-than-commit",
"-N",
default=False,
action="store_true",
help="""
Ignore files older than the timestamp it would be updated to.
Such files may be considered "original", likely in the author's repository.
""")
""",
)
parser.add_argument('--unique-times', default=False, action="store_true", help="""
parser.add_argument(
"--unique-times",
default=False,
action="store_true",
help="""
Set the microseconds to a unique value per commit.
Allows telling apart changes that would otherwise have identical timestamps,
as git's time accuracy is in seconds.
""")
""",
)
parser.add_argument('pathspec', nargs='*', metavar='PATHSPEC', help="""
parser.add_argument(
"pathspec",
nargs="*",
metavar="PATHSPEC",
help="""
Only modify paths matching %(metavar)s, relative to current directory.
By default, update all but untracked files and submodules.
""")
""",
)
parser.add_argument('--version', '-V', action='version',
version='%(prog)s version {version}'.format(version=get_version()))
parser.add_argument(
"--version",
"-V",
action="version",
version="%(prog)s version {version}".format(version=get_version()),
)
args_ = parser.parse_args()
if args_.verbose:
@@ -212,17 +308,18 @@ def parse_args():
def get_version(version=__version__):
if not version.endswith('+dev'):
if not version.endswith("+dev"):
return version
try:
cwd = os.path.dirname(os.path.realpath(__file__))
return Git(cwd=cwd, errors=False).describe().lstrip('v')
return Git(cwd=cwd, errors=False).describe().lstrip("v")
except Git.Error:
return '-'.join((version, "unknown"))
return "-".join((version, "unknown"))
# Helper functions ############################################################
def setup_logging():
"""Add TRACE logging level and corresponding method, return the root logger"""
logging.TRACE = TRACE = logging.DEBUG // 2
@@ -255,11 +352,13 @@ def normalize(path):
if path and path[0] == '"':
# Python 2: path = path[1:-1].decode("string-escape")
# Python 3: https://stackoverflow.com/a/46650050/624066
path = (path[1:-1] # Remove enclosing double quotes
.encode('latin1') # Convert to bytes, required by 'unicode-escape'
.decode('unicode-escape') # Perform the actual octal-escaping decode
.encode('latin1') # 1:1 mapping to bytes, UTF-8 encoded
.decode('utf8', 'surrogateescape')) # Decode from UTF-8
path = (
path[1:-1] # Remove enclosing double quotes
.encode("latin1") # Convert to bytes, required by 'unicode-escape'
.decode("unicode-escape") # Perform the actual octal-escaping decode
.encode("latin1") # 1:1 mapping to bytes, UTF-8 encoded
.decode("utf8", "surrogateescape")
) # Decode from UTF-8
if NORMALIZE_PATHS:
# Make sure the slash matches the OS; for Windows we need a backslash
path = os.path.normpath(path)
@@ -282,12 +381,12 @@ def touch_ns(path, mtime_ns):
def isodate(secs: int):
# time.localtime() accepts floats, but discards fractional part
return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(secs))
return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(secs))
def isodate_ns(ns: int):
# for integers fromtimestamp() is equivalent and ~16% slower than isodate()
return datetime.datetime.fromtimestamp(ns / 1000000000).isoformat(sep=' ')
return datetime.datetime.fromtimestamp(ns / 1000000000).isoformat(sep=" ")
def get_mtime_ns(secs: int, idx: int):
@@ -305,35 +404,49 @@ def get_mtime_path(path):
# Git class and parse_log(), the heart of the script ##########################
class Git:
def __init__(self, workdir=None, gitdir=None, cwd=None, errors=True):
self.gitcmd = ['git']
self.gitcmd = ["git"]
self.errors = errors
self._proc = None
if workdir: self.gitcmd.extend(('--work-tree', workdir))
if gitdir: self.gitcmd.extend(('--git-dir', gitdir))
if cwd: self.gitcmd.extend(('-C', cwd))
if workdir:
self.gitcmd.extend(("--work-tree", workdir))
if gitdir:
self.gitcmd.extend(("--git-dir", gitdir))
if cwd:
self.gitcmd.extend(("-C", cwd))
self.workdir, self.gitdir = self._get_repo_dirs()
def ls_files(self, paths: list = None):
return (normalize(_) for _ in self._run('ls-files --full-name', paths))
return (normalize(_) for _ in self._run("ls-files --full-name", paths))
def ls_dirty(self, force=False):
return (normalize(_[3:].split(' -> ', 1)[-1])
for _ in self._run('status --porcelain')
if _[:2] != '??' and (not force or (_[0] in ('R', 'A')
or _[1] == 'D')))
return (
normalize(_[3:].split(" -> ", 1)[-1])
for _ in self._run("status --porcelain")
if _[:2] != "??" and (not force or (_[0] in ("R", "A") or _[1] == "D"))
)
def log(self, merge=False, first_parent=False, commit_time=False,
reverse_order=False, paths: list = None):
cmd = 'whatchanged --pretty={}'.format('%ct' if commit_time else '%at')
if merge: cmd += ' -m'
if first_parent: cmd += ' --first-parent'
if reverse_order: cmd += ' --reverse'
def log(
self,
merge=False,
first_parent=False,
commit_time=False,
reverse_order=False,
paths: list = None,
):
cmd = "whatchanged --pretty={}".format("%ct" if commit_time else "%at")
if merge:
cmd += " -m"
if first_parent:
cmd += " --first-parent"
if reverse_order:
cmd += " --reverse"
return self._run(cmd, paths)
def describe(self):
return self._run('describe --tags', check=True)[0]
return self._run("describe --tags", check=True)[0]
def terminate(self):
if self._proc is None:
@@ -345,18 +458,22 @@ class Git:
pass
def _get_repo_dirs(self):
return (os.path.normpath(_) for _ in
self._run('rev-parse --show-toplevel --absolute-git-dir', check=True))
return (
os.path.normpath(_)
for _ in self._run(
"rev-parse --show-toplevel --absolute-git-dir", check=True
)
)
def _run(self, cmdstr: str, paths: list = None, output=True, check=False):
cmdlist = self.gitcmd + shlex.split(cmdstr)
if paths:
cmdlist.append('--')
cmdlist.append("--")
cmdlist.extend(paths)
popen_args = dict(universal_newlines=True, encoding='utf8')
popen_args = dict(universal_newlines=True, encoding="utf8")
if not self.errors:
popen_args['stderr'] = subprocess.DEVNULL
log.trace("Executing: %s", ' '.join(cmdlist))
popen_args["stderr"] = subprocess.DEVNULL
log.trace("Executing: %s", " ".join(cmdlist))
if not output:
return subprocess.call(cmdlist, **popen_args)
if check:
@@ -379,30 +496,26 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
mtime = 0
datestr = isodate(0)
for line in git.log(
merge,
args.first_parent,
args.commit_time,
args.reverse_order,
filterlist
merge, args.first_parent, args.commit_time, args.reverse_order, filterlist
):
stats['loglines'] += 1
stats["loglines"] += 1
# Blank line between Date and list of files
if not line:
continue
# Date line
if line[0] != ':': # Faster than `not line.startswith(':')`
stats['commits'] += 1
if line[0] != ":": # Faster than `not line.startswith(':')`
stats["commits"] += 1
mtime = int(line)
if args.unique_times:
mtime = get_mtime_ns(mtime, stats['commits'])
mtime = get_mtime_ns(mtime, stats["commits"])
if args.debug:
datestr = isodate(mtime)
continue
# File line: three tokens if it describes a renaming, otherwise two
tokens = line.split('\t')
tokens = line.split("\t")
# Possible statuses:
# M: Modified (content changed)
@@ -411,7 +524,7 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
# T: Type changed: to/from regular file, symlinks, submodules
# R099: Renamed (moved), with % of unchanged content. 100 = pure rename
# Not possible in log: C=Copied, U=Unmerged, X=Unknown, B=pairing Broken
status = tokens[0].split(' ')[-1]
status = tokens[0].split(" ")[-1]
file = tokens[-1]
# Handles non-ASCII chars and OS path separator
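For illustration, a file line produced by `git whatchanged` splits as follows (hashes and paths made up); renames have three tab-separated tokens, other statuses two:

```
line = ":100644 100644 abc1234 def5678 R100\told/name.py\tnew/name.py"
tokens = line.split("\t")
status = tokens[0].split(" ")[-1]  # "R100": rename, 100% content unchanged
file = tokens[-1]                  # always the current (post-rename) path
assert status == "R100" and file == "new/name.py"
```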
@@ -419,56 +532,76 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
def do_file():
if args.skip_older_than_commit and get_mtime_path(file) <= mtime:
stats['skip'] += 1
stats["skip"] += 1
return
if args.debug:
log.debug("%d\t%d\t%d\t%s\t%s",
stats['loglines'], stats['commits'], stats['files'],
datestr, file)
log.debug(
"%d\t%d\t%d\t%s\t%s",
stats["loglines"],
stats["commits"],
stats["files"],
datestr,
file,
)
try:
touch(os.path.join(git.workdir, file), mtime)
stats['touches'] += 1
stats["touches"] += 1
except Exception as e:
log.error("ERROR: %s: %s", e, file)
stats['errors'] += 1
stats["errors"] += 1
def do_dir():
if args.debug:
log.debug("%d\t%d\t-\t%s\t%s",
stats['loglines'], stats['commits'],
datestr, "{}/".format(dirname or '.'))
log.debug(
"%d\t%d\t-\t%s\t%s",
stats["loglines"],
stats["commits"],
datestr,
"{}/".format(dirname or "."),
)
try:
touch(os.path.join(git.workdir, dirname), mtime)
stats['dirtouches'] += 1
stats["dirtouches"] += 1
except Exception as e:
log.error("ERROR: %s: %s", e, dirname)
stats['direrrors'] += 1
stats["direrrors"] += 1
if file in filelist:
stats['files'] -= 1
stats["files"] -= 1
filelist.remove(file)
do_file()
if args.dirs and status in ('A', 'D'):
if args.dirs and status in ("A", "D"):
dirname = os.path.dirname(file)
if dirname in dirlist:
dirlist.remove(dirname)
do_dir()
# All files done?
if not stats['files']:
if not stats["files"]:
git.terminate()
return
# Main Logic ##################################################################
def main():
start = time.time() # yes, Wall time. CPU time is not realistic for users.
stats = {_: 0 for _ in ('loglines', 'commits', 'touches', 'skip', 'errors',
'dirtouches', 'direrrors')}
stats = {
_: 0
for _ in (
"loglines",
"commits",
"touches",
"skip",
"errors",
"dirtouches",
"direrrors",
)
}
logging.basicConfig(level=args.loglevel, format='%(message)s')
logging.basicConfig(level=args.loglevel, format="%(message)s")
log.trace("Arguments: %s", args)
# First things first: Where and Who are we?
@@ -499,13 +632,16 @@ def main():
# Symlink (to file, to dir or broken - git handles the same way)
if not UPDATE_SYMLINKS and os.path.islink(fullpath):
log.warning("WARNING: Skipping symlink, no OS support for updates: %s",
path)
log.warning(
"WARNING: Skipping symlink, no OS support for updates: %s", path
)
continue
# skip files which are older than given threshold
if (args.skip_older_than
and start - get_mtime_path(fullpath) > args.skip_older_than):
if (
args.skip_older_than
and start - get_mtime_path(fullpath) > args.skip_older_than
):
continue
# Always add files relative to worktree root
@@ -519,15 +655,17 @@ def main():
else:
dirty = set(git.ls_dirty())
if dirty:
log.warning("WARNING: Modified files in the working directory were ignored."
"\nTo include such files, commit your changes or use --force.")
log.warning(
"WARNING: Modified files in the working directory were ignored."
"\nTo include such files, commit your changes or use --force."
)
filelist -= dirty
# Build dir list to be processed
dirlist = set(os.path.dirname(_) for _ in filelist) if args.dirs else set()
stats['totalfiles'] = stats['files'] = len(filelist)
log.info("{0:,} files to be processed in work dir".format(stats['totalfiles']))
stats["totalfiles"] = stats["files"] = len(filelist)
log.info("{0:,} files to be processed in work dir".format(stats["totalfiles"]))
if not filelist:
# Nothing to do. Exit silently and without errors, just like git does
@@ -544,10 +682,18 @@ def main():
if args.missing and not args.merge:
filterlist = list(filelist)
missing = len(filterlist)
log.info("{0:,} files not found in log, trying merge commits".format(missing))
log.info(
"{0:,} files not found in log, trying merge commits".format(missing)
)
for i in range(0, missing, STEPMISSING):
parse_log(filelist, dirlist, stats, git,
merge=True, filterlist=filterlist[i:i + STEPMISSING])
parse_log(
filelist,
dirlist,
stats,
git,
merge=True,
filterlist=filterlist[i : i + STEPMISSING],
)
# Still missing some?
for file in filelist:
@@ -556,29 +702,33 @@ def main():
# Final statistics
# Suggestion: use git-log --before=mtime to brag about skipped log entries
def log_info(msg, *a, width=13):
ifmt = '{:%d,}' % (width,) # not using 'n' for consistency with ffmt
ffmt = '{:%d,.2f}' % (width,)
ifmt = "{:%d,}" % (width,) # not using 'n' for consistency with ffmt
ffmt = "{:%d,.2f}" % (width,)
# %-formatting lacks a thousand separator, must pre-render with .format()
log.info(msg.replace('%d', ifmt).replace('%f', ffmt).format(*a))
log.info(msg.replace("%d", ifmt).replace("%f", ffmt).format(*a))
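A quick demonstration of the pre-render trick above, since it is easy to misread: `%d` is swapped for a width-13 thousands-separated format spec and rendered with `str.format()` before reaching the logger:

```
msg = "%d log lines processed"
ifmt = "{:%d,}" % (13,)                    # -> "{:13,}"
rendered = msg.replace("%d", ifmt).format(1234567)
assert rendered == "    1,234,567 log lines processed"
```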
log_info(
"Statistics:\n"
"%f seconds\n"
"%d log lines processed\n"
"%d commits evaluated",
time.time() - start, stats['loglines'], stats['commits'])
"Statistics:\n%f seconds\n%d log lines processed\n%d commits evaluated",
time.time() - start,
stats["loglines"],
stats["commits"],
)
if args.dirs:
if stats['direrrors']: log_info("%d directory update errors", stats['direrrors'])
log_info("%d directories updated", stats['dirtouches'])
if stats["direrrors"]:
log_info("%d directory update errors", stats["direrrors"])
log_info("%d directories updated", stats["dirtouches"])
if stats['touches'] != stats['totalfiles']:
log_info("%d files", stats['totalfiles'])
if stats['skip']: log_info("%d files skipped", stats['skip'])
if stats['files']: log_info("%d files missing", stats['files'])
if stats['errors']: log_info("%d file update errors", stats['errors'])
if stats["touches"] != stats["totalfiles"]:
log_info("%d files", stats["totalfiles"])
if stats["skip"]:
log_info("%d files skipped", stats["skip"])
if stats["files"]:
log_info("%d files missing", stats["files"])
if stats["errors"]:
log_info("%d file update errors", stats["errors"])
log_info("%d files updated", stats['touches'])
log_info("%d files updated", stats["touches"])
if args.test:
log.info("TEST RUN - No files modified!")


@@ -1,7 +0,0 @@
libs/community/langchain_community/llms/yuan2.py
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx
self.assertIn(
from trulens_eval import Tru
tru = Tru()


@@ -1,4 +1,12 @@
name: compile-integration-test
# Validates that a package's integration tests compile without syntax or import errors.
#
# (If an integration test fails to compile, it won't run.)
#
# Called as part of check_diffs.yml workflow
#
# Runs pytest with compile marker to check syntax/imports.
name: "🔗 Compile Integration Tests"
on:
workflow_call:
@@ -12,8 +20,11 @@ on:
type: string
description: "Python version to use"
permissions:
contents: read
env:
POETRY_VERSION: "1.7.1"
UV_FROZEN: "true"
jobs:
build:
@@ -21,27 +32,27 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: "poetry run pytest -m compile tests/integration_tests #${{ inputs.python-version }}"
timeout-minutes: 20
name: "Python ${{ inputs.python-version }}"
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: "🐍 Set up Python ${{ inputs.python-version }} + UV"
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
cache-suffix: compile-integration-tests-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
cache-key: compile-integration
- name: Install integration dependencies
- name: "📦 Install Integration Dependencies"
shell: bash
run: poetry install --with=test_integration,test
run: uv sync --group test --group test_integration
- name: Check integration tests compile
- name: "🔗 Check Integration Tests Compile"
shell: bash
run: poetry run pytest -m compile tests/integration_tests
run: uv run pytest -m compile tests/integration_tests
- name: Ensure the tests did not create any additional files
- name: "🧹 Verify Clean Working Directory"
shell: bash
run: |
set -eu


@@ -1,114 +0,0 @@
name: dependencies
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
langchain-location:
required: false
type: string
description: "Relative path to the langchain library folder"
python-version:
required: true
type: string
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
jobs:
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: dependency checks ${{ inputs.python-version }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: pydantic-cross-compat
- name: Install dependencies
shell: bash
run: poetry install
- name: Check imports with base dependencies
shell: bash
run: poetry run make check_imports
- name: Install test dependencies
shell: bash
run: poetry install --with test
- name: Install langchain editable
working-directory: ${{ inputs.working-directory }}
if: ${{ inputs.langchain-location }}
env:
LANGCHAIN_LOCATION: ${{ inputs.langchain-location }}
run: |
poetry run pip install -e "$LANGCHAIN_LOCATION"
- name: Install the opposite major version of pydantic
# If normal tests use pydantic v1, here we'll use v2, and vice versa.
shell: bash
# airbyte currently doesn't support pydantic v2
if: ${{ !startsWith(inputs.working-directory, 'libs/partners/airbyte') }}
run: |
# Determine the major part of pydantic version
REGULAR_VERSION=$(poetry run python -c "import pydantic; print(pydantic.__version__)" | cut -d. -f1)
if [[ "$REGULAR_VERSION" == "1" ]]; then
PYDANTIC_DEP=">=2.1,<3"
TEST_WITH_VERSION="2"
elif [[ "$REGULAR_VERSION" == "2" ]]; then
PYDANTIC_DEP="<2"
TEST_WITH_VERSION="1"
else
echo "Unexpected pydantic major version '$REGULAR_VERSION', cannot determine which version to use for cross-compatibility test."
exit 1
fi
# Install via `pip` instead of `poetry add` to avoid changing lockfile,
# which would prevent caching from working: the cache would get saved
# to a different key than where it gets loaded from.
poetry run pip install "pydantic${PYDANTIC_DEP}"
# Ensure that the correct pydantic is installed now.
echo "Checking pydantic version... Expecting ${TEST_WITH_VERSION}"
# Determine the major part of pydantic version
CURRENT_VERSION=$(poetry run python -c "import pydantic; print(pydantic.__version__)" | cut -d. -f1)
# Check that the major part of pydantic version is as expected, if not
# raise an error
if [[ "$CURRENT_VERSION" != "$TEST_WITH_VERSION" ]]; then
echo "Error: expected pydantic version ${CURRENT_VERSION} to have been installed, but found: ${TEST_WITH_VERSION}"
exit 1
fi
echo "Found pydantic version ${CURRENT_VERSION}, as expected"
- name: Run pydantic compatibility tests
# airbyte currently doesn't support pydantic v2
if: ${{ !startsWith(inputs.working-directory, 'libs/partners/airbyte') }}
shell: bash
run: make test
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -1,100 +0,0 @@
name: Integration tests
on:
workflow_dispatch:
inputs:
working-directory:
required: true
type: string
python-version:
required: true
type: string
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
jobs:
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: Python ${{ inputs.python-version }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: core
- name: Install dependencies
shell: bash
run: poetry install --with test,test_integration
- name: Install deps outside pyproject
if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
shell: bash
run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Run integration tests
shell: bash
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
run: |
make integration_tests
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -1,4 +1,11 @@
name: lint
# Runs linting.
#
# Uses the package's Makefile to run the checks, specifically the
# `lint_package` and `lint_tests` targets.
#
# Called as part of check_diffs.yml workflow.
name: "🧹 Linting"
on:
workflow_call:
@@ -7,114 +14,62 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
langchain-location:
required: false
type: string
description: "Relative path to the langchain library folder"
python-version:
required: true
type: string
description: "Python version to use"
permissions:
contents: read
env:
POETRY_VERSION: "1.7.1"
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
# This env var allows us to get inline annotations when ruff has complaints.
RUFF_OUTPUT_FORMAT: github
jobs:
build:
name: "make lint #${{ inputs.python-version }}"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
UV_FROZEN: "true"
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
jobs:
# Linting job - runs quality checks on package and test code
build:
name: "Python ${{ inputs.python-version }}"
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- name: "📋 Checkout Code"
uses: actions/checkout@v6
- name: "🐍 Set up Python ${{ inputs.python-version }} + UV"
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
cache-suffix: lint-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
cache-key: lint-with-extras
- name: Check Poetry File
shell: bash
- name: "📦 Install Lint & Typing Dependencies"
working-directory: ${{ inputs.working-directory }}
run: |
poetry check
uv sync --group lint --group typing
- name: Check lock file
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
poetry lock --check
- name: Install dependencies
# Also installs dev/lint/test/typing dependencies, to ensure we have
# type hints for as many of our libraries as possible.
# This helps catch errors that require dependencies to be spotted, for example:
# https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
#
# If you change this configuration, make sure to change the `cache-key`
# in the `poetry_setup` action above to stop using the old cache.
# It doesn't matter how you change it, any change will cause a cache-bust.
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with lint,typing
- name: Install langchain editable
working-directory: ${{ inputs.working-directory }}
if: ${{ inputs.langchain-location }}
env:
LANGCHAIN_LOCATION: ${{ inputs.langchain-location }}
run: |
poetry run pip install -e "$LANGCHAIN_LOCATION"
- name: Get .mypy_cache to speed up mypy
uses: actions/cache@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
with:
path: |
${{ env.WORKDIR }}/.mypy_cache
key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ inputs.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
- name: Analysing the code with our lint
- name: "🔍 Analyze Package Code with Linters"
working-directory: ${{ inputs.working-directory }}
run: |
make lint_package
- name: Install unit test dependencies
# Also installs dev/lint/test/typing dependencies, to ensure we have
# type hints for as many of our libraries as possible.
# This helps catch errors that require dependencies to be spotted, for example:
# https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
#
# If you change this configuration, make sure to change the `cache-key`
# in the `poetry_setup` action above to stop using the old cache.
# It doesn't matter how you change it, any change will cause a cache-bust.
- name: "📦 Install Test Dependencies (non-partners)"
# (For directories NOT starting with libs/partners/)
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with test
- name: Install unit+integration test dependencies
uv sync --inexact --group test
- name: "📦 Install Test Dependencies"
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with test,test_integration
uv sync --inexact --group test --group test_integration
- name: Get .mypy_cache_test to speed up mypy
uses: actions/cache@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
with:
path: |
${{ env.WORKDIR }}/.mypy_cache_test
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ inputs.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
- name: Analysing the code with our lint
- name: "🔍 Analyze Test Code with Linters"
working-directory: ${{ inputs.working-directory }}
run: |
make lint_tests


@@ -1,5 +1,11 @@
name: release
run-name: Release ${{ inputs.working-directory }} by @${{ github.actor }}
# Builds and publishes LangChain packages to PyPI.
#
# Manually triggered, though can be used as a reusable workflow (workflow_call).
#
# Handles version bumping, building, and publishing to PyPI with authentication.
name: "🚀 Package Release"
run-name: "Release ${{ inputs.working-directory }} ${{ inputs.release-version }}"
on:
workflow_call:
inputs:
@@ -12,42 +18,53 @@ on:
working-directory:
required: true
type: string
default: 'libs/langchain'
description: "From which folder this pipeline executes"
default: "libs/langchain"
release-version:
required: true
type: string
default: "0.1.0"
description: "New version of package being released"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
description: "Release from a non-master branch (danger!) - Only use for hotfixes"
env:
PYTHON_VERSION: "3.11"
POETRY_VERSION: "1.7.1"
UV_FROZEN: "true"
UV_NO_SYNC: "true"
permissions:
contents: write # Required for creating GitHub releases
jobs:
# Build the distribution package and extract version info
# Runs in isolated environment with minimal permissions for security
build:
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
environment: Scheduled testing
runs-on: ubuntu-latest
permissions:
contents: read
outputs:
pkg-name: ${{ steps.check-version.outputs.pkg-name }}
version: ${{ steps.check-version.outputs.version }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: release
# We want to keep this build stage *separate* from the release stage,
# so that there's no sharing of permissions between them.
# The release stage has trusted publishing and GitHub repo contents write access,
# and we want to keep the scope of that access limited just to the release job.
# (Release stage has trusted publishing and GitHub repo contents write access, and we want to keep the scope of that access limited just to the release job.)
#
# Otherwise, a malicious `build` step (e.g. via a compromised dependency)
# could get access to our GitHub or PyPI credentials.
#
@@ -56,38 +73,47 @@ jobs:
# > from the publish job.
# https://github.com/pypa/gh-action-pypi-publish#non-goals
- name: Build project for distribution
run: poetry build
run: uv build
working-directory: ${{ inputs.working-directory }}
- name: Upload build
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Check Version
- name: Check version
id: check-version
shell: bash
shell: python
working-directory: ${{ inputs.working-directory }}
run: |
echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
import os
import tomllib
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
pkg_name = data["project"]["name"]
version = data["project"]["version"]
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
f.write(f"pkg-name={pkg_name}\n")
f.write(f"version={version}\n")
release-notes:
needs:
- build
runs-on: ubuntu-latest
permissions:
contents: read
outputs:
release-body: ${{ steps.generate-release-body.outputs.release-body }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
with:
repository: langchain-ai/langchain
path: langchain
sparse-checkout: | # this only grabs files for relevant dir
${{ inputs.working-directory }}
ref: master # this scopes to just master branch
ref: ${{ github.ref }} # this scopes to just ref'd branch
fetch-depth: 0 # this fetches entire commit history
- name: Check Tags
- name: Check tags
id: check-tags
shell: bash
working-directory: langchain/${{ inputs.working-directory }}
@@ -95,9 +121,47 @@ jobs:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | grep -P $REGEX || true | head -1)
# Handle regular versions and pre-release versions differently
if [[ "$VERSION" == *"-"* ]]; then
# This is a pre-release version (contains a hyphen)
# Extract the base version without the pre-release suffix
BASE_VERSION=${VERSION%%-*}
# Look for the latest release of the same base version
REGEX="^$PKG_NAME==$BASE_VERSION\$"
PREV_TAG=$(git tag --sort=-creatordate | (grep -P "$REGEX" || true) | head -1)
# If no exact base version match, look for the latest release of any kind
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
PREV_TAG=$(git tag --sort=-creatordate | (grep -P "$REGEX" || true) | head -1)
fi
else
# Regular version handling
PREV_TAG="$PKG_NAME==${VERSION%.*}.$(( ${VERSION##*.} - 1 ))"; [[ "${VERSION##*.}" -eq 0 ]] && PREV_TAG=""
# backup case if releasing e.g. 0.3.0, looks up last release
# note if last release (chronologically) was e.g. 0.1.47 it will get
# that instead of the last 0.2 release
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | (grep -P $REGEX || true) | head -1)
fi
fi
# if PREV_TAG is empty or came out to 0.0.0, let it be empty
if [ -z "$PREV_TAG" ] || [ "$PREV_TAG" = "$PKG_NAME==0.0.0" ]; then
echo "No previous tag found - first release"
else
# confirm prev-tag actually exists in git repo with git tag
GIT_TAG_RESULT=$(git tag -l "$PREV_TAG")
if [ -z "$GIT_TAG_RESULT" ]; then
echo "Previous tag $PREV_TAG not found in git repo"
exit 1
fi
fi
TAG="${PKG_NAME}==${VERSION}"
if [ "$TAG" == "$PREV_TAG" ]; then
echo "No new version to release"
@@ -115,8 +179,8 @@ jobs:
PREV_TAG: ${{ steps.check-tags.outputs.prev-tag }}
run: |
PREAMBLE="Changes since $PREV_TAG"
# if PREV_TAG is empty, then we are releasing the first version
if [ -z "$PREV_TAG" ]; then
# if PREV_TAG is empty or 0.0.0, then we are releasing the first version
if [ -z "$PREV_TAG" ] || [ "$PREV_TAG" = "$PKG_NAME==0.0.0" ]; then
PREAMBLE="Initial release"
PREV_TAG=$(git rev-list --max-parents=0 HEAD)
fi
@@ -132,13 +196,36 @@ jobs:
needs:
- build
- release-notes
uses:
./.github/workflows/_test_release.yml
permissions: write-all
with:
working-directory: ${{ inputs.working-directory }}
dangerous-nonmaster-release: ${{ inputs.dangerous-nonmaster-release }}
secrets: inherit
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
#
# Trusted publishing has to also be configured on PyPI for each package:
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
id-token: write
steps:
- uses: actions/checkout@v6
- uses: actions/download-artifact@v6
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Publish to test PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
repository-url: https://test.pypi.org/legacy/
# We overwrite any existing distributions with the same name and version.
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false
pre-release-checks:
needs:
@@ -146,8 +233,11 @@ jobs:
- release-notes
- test-pypi-publish
runs-on: ubuntu-latest
permissions:
contents: read
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
# We explicitly *don't* set up caching here. This ensures our tests are
# maximally sensitive to catching breakage.
@@ -162,14 +252,18 @@ jobs:
# - The package is published, and it breaks on the missing dependency when
# used in the real world.
- name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
id: setup-python
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
- name: Import published package
- uses: actions/download-artifact@v6
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Import dist package
shell: bash
working-directory: ${{ inputs.working-directory }}
env:
@@ -185,27 +279,21 @@ jobs:
# - attempt install again after 5 seconds if it fails because there is
# sometimes a delay in availability on test pypi
run: |
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION" || \
( \
sleep 15 && \
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION" \
)
uv venv
VIRTUAL_ENV=.venv uv pip install dist/*.whl
# Replace all dashes in the package name with underscores,
# since that's how Python imports packages with dashes in the name.
IMPORT_NAME="$(echo "$PKG_NAME" | sed s/-/_/g)"
# also remove _official suffix
IMPORT_NAME="$(echo "$PKG_NAME" | sed s/-/_/g | sed s/_official//g)"
poetry run python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
uv run python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
- name: Import test dependencies
run: poetry install --with test
run: uv sync --group test
working-directory: ${{ inputs.working-directory }}
# Overwrite the local version of the package with the test PyPI version.
# Overwrite the local version of the package with the built version
- name: Import published package (again)
working-directory: ${{ inputs.working-directory }}
shell: bash
@@ -213,25 +301,27 @@ jobs:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION"
VIRTUAL_ENV=.venv uv pip install dist/*.whl
- name: Check for prerelease versions
# Block release if any dependencies allow prerelease versions
# (unless this is itself a prerelease version)
working-directory: ${{ inputs.working-directory }}
run: |
uv run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
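The contents of `check_prerelease_dependencies.py` are not shown here; a hedged sketch of what such a check could look like using `packaging` (an assumption, not the actual script):

```
import tomllib
from packaging.requirements import Requirement

def prerelease_deps(pyproject_path: str) -> list[str]:
    with open(pyproject_path, "rb") as f:
        deps = tomllib.load(f)["project"].get("dependencies", [])
    # flag specifiers that explicitly allow pre-releases, e.g. ">=1.0.0rc1"
    return [d for d in deps if Requirement(d).specifier.prereleases]
```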
- name: Run unit tests
run: make tests
working-directory: ${{ inputs.working-directory }}
- name: Check for prerelease versions
working-directory: ${{ inputs.working-directory }}
run: |
poetry run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
- name: Get minimum versions
# Find the minimum published versions that satisfies the given constraints
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
poetry run pip install packaging
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml release)"
VIRTUAL_ENV=.venv uv pip install packaging requests
python_version="$(uv run python --version | awk '{print $2}')"
min_versions="$(uv run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml release $python_version)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
@@ -240,21 +330,17 @@ jobs:
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
poetry run pip install --force-reinstall $MIN_VERSIONS --editable .
VIRTUAL_ENV=.venv uv pip install --force-reinstall --editable .
VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Import integration test dependencies
run: poetry install --with test,test_integration
run: uv sync --group test --group test_integration
working-directory: ${{ inputs.working-directory }}
- name: Run integration tests
# Uses the Makefile's `integration_tests` target for the specified package
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
@@ -267,40 +353,132 @@ jobs:
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
UNSTRUCTURED_API_KEY: ${{ secrets.UNSTRUCTURED_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
LANGCHAIN_TESTS_USER_AGENT: ${{ secrets.LANGCHAIN_TESTS_USER_AGENT }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
publish:
# Test select published packages against the new core.
# Runs only when code changes are made to langchain-core.
test-prior-published-packages-against-new-core:
# Installs the new, unreleased core alongside the previously published
# partner packages and runs the partners' integration tests
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
runs-on: ubuntu-latest
permissions:
contents: read
strategy:
matrix:
partner: [anthropic]
fail-fast: false # Continue testing other partners if one fails
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
LANGCHAIN_TESTS_USER_AGENT: ${{ secrets.LANGCHAIN_TESTS_USER_AGENT }}
steps:
- uses: actions/checkout@v6
# We implement this conditional as GitHub Actions does not have good support
# for conditionally needing steps. https://github.com/actions/runner/issues/491
# TODO: this seems to be resolved upstream, so we can probably remove this workaround
- name: Check if libs/core
run: |
if [ "${{ startsWith(inputs.working-directory, 'libs/core') }}" != "true" ]; then
echo "Not in libs/core. Exiting successfully."
exit 0
fi
- name: Set up Python + uv
if: startsWith(inputs.working-directory, 'libs/core')
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v6
if: startsWith(inputs.working-directory, 'libs/core')
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Test against ${{ matrix.partner }}
if: startsWith(inputs.working-directory, 'libs/core')
run: |
# Identify latest tag, excluding pre-releases
LATEST_PACKAGE_TAG="$(
git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
| awk '{print $2}' \
| sed 's|refs/tags/||' \
| grep -E '[0-9]+\.[0-9]+\.[0-9]+$' \
| sort -Vr \
| head -n 1
)"
echo "Latest package tag: $LATEST_PACKAGE_TAG"
# Shallow-fetch just that single tag
git fetch --depth=1 origin tag "$LATEST_PACKAGE_TAG"
# Checkout the latest package files
rm -rf $GITHUB_WORKSPACE/libs/partners/${{ matrix.partner }}/*
rm -rf $GITHUB_WORKSPACE/libs/standard-tests/*
cd $GITHUB_WORKSPACE/libs/
git checkout "$LATEST_PACKAGE_TAG" -- standard-tests/
git checkout "$LATEST_PACKAGE_TAG" -- partners/${{ matrix.partner }}/
cd partners/${{ matrix.partner }}
# Print as a sanity check
echo "Version number from pyproject.toml: "
cat pyproject.toml | grep "version = "
# Run tests
uv sync --group test --group test_integration
uv pip install ../../core/dist/*.whl
make integration_tests
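The tag selection in that step (grep for final `x.y.z` releases, `sort -Vr`, take the first) is equivalent to the following Python sketch with made-up tag names:

```
import re
from packaging.version import Version

tags = [
    "langchain-anthropic==0.3.0",
    "langchain-anthropic==0.3.2rc1",   # pre-release: filtered out
    "langchain-anthropic==0.3.1",
]
finals = [t for t in tags if re.search(r"[0-9]+\.[0-9]+\.[0-9]+$", t)]
latest = max(finals, key=lambda t: Version(t.split("==")[-1]))
assert latest == "langchain-anthropic==0.3.1"
```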
publish:
# Publishes the package to PyPI
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- test-prior-published-packages-against-new-core
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
@@ -314,17 +492,14 @@ jobs:
working-directory: ${{ inputs.working-directory }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: release
- uses: actions/download-artifact@v4
- uses: actions/download-artifact@v6
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -335,8 +510,11 @@ jobs:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false
mark-release:
# Marks the GitHub release with the new version tag
needs:
- build
- release-notes
@@ -346,7 +524,7 @@ jobs:
runs-on: ubuntu-latest
permissions:
# This permission is needed by `ncipollo/release-action` to
# create the GitHub release.
# create the GitHub release/tag
contents: write
defaults:
@@ -354,21 +532,18 @@ jobs:
working-directory: ${{ inputs.working-directory }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: release
- uses: actions/download-artifact@v4
- uses: actions/download-artifact@v6
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Create Tag
uses: ncipollo/release-action@v1
with:


@@ -1,62 +0,0 @@
name: release_docker
on:
workflow_call:
inputs:
dockerfile:
required: true
type: string
description: "Path to the Dockerfile to build"
image:
required: true
type: string
description: "Name of the image to build"
env:
TEST_TAG: ${{ inputs.image }}:test
LATEST_TAG: ${{ inputs.image }}:latest
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Get git tag
uses: actions-ecosystem/action-get-latest-tag@v1
id: get-latest-tag
- name: Set docker tag
env:
VERSION: ${{ steps.get-latest-tag.outputs.tag }}
run: |
echo "VERSION_TAG=${{ inputs.image }}:${VERSION#v}" >> $GITHUB_ENV
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build for Test
uses: docker/build-push-action@v5
with:
context: .
file: ${{ inputs.dockerfile }}
load: true
tags: ${{ env.TEST_TAG }}
- name: Test
run: |
docker run --rm ${{ env.TEST_TAG }} python -c "import langchain"
- name: Build and Push to Docker Hub
uses: docker/build-push-action@v5
with:
context: .
file: ${{ inputs.dockerfile }}
# We can only build for the intersection of platforms supported by
# QEMU and base python image, for now build only for
# linux/amd64 and linux/arm64
platforms: linux/amd64,linux/arm64
tags: ${{ env.LATEST_TAG }},${{ env.VERSION_TAG }}
push: true


@@ -1,4 +1,7 @@
name: test
# Runs unit tests with both current and minimum supported dependency versions
# to ensure compatibility across the supported range.
name: "🧪 Unit Testing"
on:
workflow_call:
@@ -7,54 +10,69 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
langchain-location:
required: false
type: string
description: "Relative path to the langchain library folder"
python-version:
required: true
type: string
description: "Python version to use"
permissions:
contents: read
env:
POETRY_VERSION: "1.7.1"
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
# Main test job - runs unit tests with current deps, then retests with minimum versions
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: "make test #${{ inputs.python-version }}"
timeout-minutes: 20
name: "Python ${{ inputs.python-version }}"
steps:
- uses: actions/checkout@v4
- name: "📋 Checkout Code"
uses: actions/checkout@v6
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: "🐍 Set up Python ${{ inputs.python-version }} + UV"
uses: "./.github/actions/uv_setup"
id: setup-python
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
cache-suffix: test-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
cache-key: core
- name: Install dependencies
- name: "📦 Install Test Dependencies"
shell: bash
run: poetry install --with test
run: uv sync --group test --dev
- name: Install langchain editable
working-directory: ${{ inputs.working-directory }}
if: ${{ inputs.langchain-location }}
env:
LANGCHAIN_LOCATION: ${{ inputs.langchain-location }}
run: |
poetry run pip install -e "$LANGCHAIN_LOCATION"
- name: Run core tests
- name: "🧪 Run Core Unit Tests"
shell: bash
run: |
make test
- name: Ensure the tests did not create any additional files
- name: "🔍 Calculate Minimum Dependency Versions"
working-directory: ${{ inputs.working-directory }}
id: min-version
shell: bash
run: |
VIRTUAL_ENV=.venv uv pip install packaging tomli requests
python_version="$(uv run python --version | awk '{print $2}')"
min_versions="$(uv run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml pull_request $python_version)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
- name: "🧪 Run Tests with Minimum Dependencies"
if: ${{ steps.min-version.outputs.min-versions != '' }}
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
VIRTUAL_ENV=.venv uv pip install $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
- name: "🧹 Verify Clean Working Directory"
shell: bash
run: |
set -eu
@@ -65,22 +83,3 @@ jobs:
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
- name: Get minimum versions
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
poetry run pip install packaging tomli
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml pull_request)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
# Temporarily disabled until we can get the minimum versions working
# - name: Run unit tests with minimum dependency versions
# if: ${{ steps.min-version.outputs.min-versions != '' }}
# env:
# MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
# run: |
# poetry run pip install --force-reinstall $MIN_VERSIONS --editable .
# make tests
# working-directory: ${{ inputs.working-directory }}


@@ -1,51 +0,0 @@
name: test_doc_imports
on:
workflow_call:
inputs:
python-version:
required: true
type: string
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
jobs:
build:
runs-on: ubuntu-latest
name: "check doc imports #${{ inputs.python-version }}"
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
cache-key: core
- name: Install dependencies
shell: bash
run: poetry install --with test
- name: Install langchain editable
run: |
poetry run pip install -e libs/core libs/langchain libs/community libs/experimental
- name: Check doc imports
shell: bash
run: |
poetry run python docs/scripts/check_imports.py
- name: Ensure the test did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'

.github/workflows/_test_pydantic.yml

@@ -0,0 +1,73 @@
# Facilitate unit testing against different Pydantic versions for a provided package.
name: "🐍 Pydantic Version Testing"
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
python-version:
required: false
type: string
description: "Python version to use"
default: "3.12"
pydantic-version:
required: true
type: string
description: "Pydantic version to test."
permissions:
contents: read
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: "Pydantic ~=${{ inputs.pydantic-version }}"
steps:
- name: "📋 Checkout Code"
uses: actions/checkout@v6
- name: "🐍 Set up Python ${{ inputs.python-version }} + UV"
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-pydantic-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: "📦 Install Test Dependencies"
shell: bash
run: uv sync --group test
- name: "🔄 Install Specific Pydantic Version"
shell: bash
env:
PYDANTIC_VERSION: ${{ inputs.pydantic-version }}
run: VIRTUAL_ENV=.venv uv pip install "pydantic~=$PYDANTIC_VERSION"
- name: "🧪 Run Core Tests"
shell: bash
run: |
make test
- name: "🧹 Verify Clean Working Directory"
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -1,100 +0,0 @@
name: test-release
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
env:
POETRY_VERSION: "1.7.1"
PYTHON_VERSION: "3.10"
jobs:
build:
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
runs-on: ubuntu-latest
outputs:
pkg-name: ${{ steps.check-version.outputs.pkg-name }}
version: ${{ steps.check-version.outputs.version }}
steps:
- uses: actions/checkout@v4
- name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: release
# We want to keep this build stage *separate* from the release stage,
# so that there's no sharing of permissions between them.
# The release stage has trusted publishing and GitHub repo contents write access,
# and we want to keep the scope of that access limited just to the release job.
# Otherwise, a malicious `build` step (e.g. via a compromised dependency)
# could get access to our GitHub or PyPI credentials.
#
# Per the trusted publishing GitHub Action:
# > It is strongly advised to separate jobs for building [...]
# > from the publish job.
# https://github.com/pypa/gh-action-pypi-publish#non-goals
- name: Build project for distribution
run: poetry build
working-directory: ${{ inputs.working-directory }}
- name: Upload build
uses: actions/upload-artifact@v4
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
- name: Check Version
id: check-version
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
publish:
needs:
- build
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
#
# Trusted publishing has to also be configured on PyPI for each package:
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
id-token: write
steps:
- uses: actions/checkout@v4
- uses: actions/download-artifact@v4
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
- name: Publish to test PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
repository-url: https://test.pypi.org/legacy/
# We overwrite any existing distributions with the same name and version.
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true


@@ -0,0 +1,107 @@
name: Auto Label Issues by Package
on:
issues:
types: [opened, edited]
jobs:
label-by-package:
permissions:
issues: write
runs-on: ubuntu-latest
steps:
- name: Sync package labels
uses: actions/github-script@v8
with:
script: |
const body = context.payload.issue.body || "";
// Extract text under "### Package"
const match = body.match(/### Package\s+([\s\S]*?)\n###/i);
if (!match) return;
const packageSection = match[1].trim();
// Mapping table for package names to labels
const mapping = {
"langchain": "langchain",
"langchain-openai": "openai",
"langchain-anthropic": "anthropic",
"langchain-classic": "langchain-classic",
"langchain-core": "core",
"langchain-cli": "cli",
"langchain-model-profiles": "model-profiles",
"langchain-tests": "standard-tests",
"langchain-text-splitters": "text-splitters",
"langchain-chroma": "chroma",
"langchain-deepseek": "deepseek",
"langchain-exa": "exa",
"langchain-fireworks": "fireworks",
"langchain-groq": "groq",
"langchain-huggingface": "huggingface",
"langchain-mistralai": "mistralai",
"langchain-nomic": "nomic",
"langchain-ollama": "ollama",
"langchain-perplexity": "perplexity",
"langchain-prompty": "prompty",
"langchain-qdrant": "qdrant",
"langchain-xai": "xai",
};
// All possible package labels we manage
const allPackageLabels = Object.values(mapping);
const selectedLabels = [];
// Check if this is checkbox format (multiple selection)
const checkboxMatches = packageSection.match(/- \[x\]\s+([^\n\r]+)/gi);
if (checkboxMatches) {
// Handle checkbox format
for (const match of checkboxMatches) {
const packageName = match.replace(/- \[x\]\s+/i, '').trim();
const label = mapping[packageName];
if (label && !selectedLabels.includes(label)) {
selectedLabels.push(label);
}
}
} else {
// Handle dropdown format (single selection)
const label = mapping[packageSection];
if (label) {
selectedLabels.push(label);
}
}
// Get current issue labels
const issue = await github.rest.issues.get({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number
});
const currentLabels = issue.data.labels.map(label => label.name);
const currentPackageLabels = currentLabels.filter(label => allPackageLabels.includes(label));
// Determine labels to add and remove
const labelsToAdd = selectedLabels.filter(label => !currentPackageLabels.includes(label));
const labelsToRemove = currentPackageLabels.filter(label => !selectedLabels.includes(label));
// Add new labels
if (labelsToAdd.length > 0) {
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: labelsToAdd
});
}
// Remove old labels
for (const label of labelsToRemove) {
await github.rest.issues.removeLabel({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
name: label
});
}
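For readers more at home in Python, the body parsing at the top of that script reduces to two regexes; a hedged re-creation (sample issue body made up):

```
import re

body = "### Package\n- [x] langchain-core\n- [ ] langchain\n### Other"
section_match = re.search(r"### Package\s+([\s\S]*?)\n###", body, re.I)
section = section_match.group(1).strip() if section_match else ""
# checkbox format: collect every "- [x] <name>" selection
checked = [m.strip() for m in re.findall(r"- \[x\]\s+([^\n\r]+)", section, re.I)]
assert checked == ["langchain-core"]
```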


@@ -1,25 +0,0 @@
name: Check Broken Links
on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *'
jobs:
check-links:
if: github.repository_owner == 'langchain-ai'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Use Node.js 18.x
uses: actions/setup-node@v3
with:
node-version: 18.x
cache: "yarn"
cache-dependency-path: ./docs/yarn.lock
- name: Install dependencies
run: yarn install --immutable --mode=skip-build
working-directory: ./docs
- name: Check broken links
run: yarn check-broken-links
working-directory: ./docs


@@ -0,0 +1,51 @@
# Ensures version numbers in pyproject.toml and version.py stay in sync.
#
# (Prevents releases with mismatched version numbers)
name: "🔍 Check Version Equality"
on:
pull_request:
paths:
- "libs/core/pyproject.toml"
- "libs/core/langchain_core/version.py"
permissions:
contents: read
jobs:
check_version_equality:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- name: "✅ Verify pyproject.toml & version.py Match"
run: |
# Check core versions
CORE_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
CORE_VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
# Compare core versions
if [ "$CORE_PYPROJECT_VERSION" != "$CORE_VERSION_PY_VERSION" ]; then
echo "langchain-core versions in pyproject.toml and version.py do not match!"
echo "pyproject.toml version: $CORE_PYPROJECT_VERSION"
echo "version.py version: $CORE_VERSION_PY_VERSION"
exit 1
else
echo "Core versions match: $CORE_PYPROJECT_VERSION"
fi
# Check langchain_v1 versions
LANGCHAIN_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/langchain_v1/pyproject.toml)
LANGCHAIN_INIT_PY_VERSION=$(grep -Po '(?<=^__version__ = ")[^"]*' libs/langchain_v1/langchain/__init__.py)
# Compare langchain_v1 versions
if [ "$LANGCHAIN_PYPROJECT_VERSION" != "$LANGCHAIN_INIT_PY_VERSION" ]; then
echo "langchain_v1 versions in pyproject.toml and __init__.py do not match!"
echo "pyproject.toml version: $LANGCHAIN_PYPROJECT_VERSION"
echo "__init__.py version: $LANGCHAIN_INIT_PY_VERSION"
exit 1
else
echo "Langchain v1 versions match: $LANGCHAIN_PYPROJECT_VERSION"
fi
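
The `grep -Po` lookbehind extraction above can be hard to read; here is a minimal Python equivalent of the same check (a sketch for illustration, not what CI runs):

```python
import re
import sys

def read_version(path: str, key: str) -> str:
    """Extract the quoted version assigned to `key` (e.g. 'version = "1.2.3"')."""
    with open(path) as f:
        match = re.search(rf'^{key} = "([^"]*)"', f.read(), re.MULTILINE)
    if match is None:
        sys.exit(f"no {key} found in {path}")
    return match.group(1)

pyproject = read_version("libs/core/pyproject.toml", "version")
version_py = read_version("libs/core/langchain_core/version.py", "VERSION")
if pyproject != version_py:
    sys.exit(f"version mismatch: {pyproject} != {version_py}")
print(f"Core versions match: {pyproject}")
```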


@@ -1,36 +1,64 @@
---
name: CI
# Primary CI workflow.
#
# Only runs against packages that have changed files.
#
# Runs:
# - Linting (_lint.yml)
# - Unit Tests (_test.yml)
# - Pydantic compatibility tests (_test_pydantic.yml)
# - Integration test compilation checks (_compile_integration_test.yml)
# - Extended test suites that require additional dependencies
# - Codspeed benchmarks (if not labeled 'codspeed-ignore')
#
# Reports status to GitHub checks and PR status.
name: "🔧 CI"
on:
push:
branches: [master]
pull_request:
merge_group:
# Optimizes CI performance by canceling redundant workflow runs
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
# a limited number of job runners to be active at the same time, so it's better to
# cancel pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
env:
POETRY_VERSION: "1.7.1"
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
# This job analyzes which files changed and creates a dynamic test matrix
# to only run tests/lints for the affected packages, improving CI efficiency
build:
name: "Detect Changes & Set Matrix"
runs-on: ubuntu-latest
if: ${{ !contains(github.event.pull_request.labels.*.name, 'ci-ignore') }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- name: "📋 Checkout Code"
uses: actions/checkout@v6
- name: "🐍 Setup Python 3.11"
uses: actions/setup-python@v6
with:
python-version: '3.11'
- id: files
uses: Ana06/get-changed-files@v2.2.0
- id: set-matrix
python-version: "3.11"
- name: "📂 Get Changed Files"
id: files
uses: Ana06/get-changed-files@v2.3.0
- name: "🔍 Analyze Changed Files & Generate Build Matrix"
id: set-matrix
run: |
python -m pip install packaging requests
python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
outputs:
lint: ${{ steps.set-matrix.outputs.lint }}
@@ -38,105 +66,100 @@ jobs:
extended-tests: ${{ steps.set-matrix.outputs.extended-tests }}
compile-integration-tests: ${{ steps.set-matrix.outputs.compile-integration-tests }}
dependencies: ${{ steps.set-matrix.outputs.dependencies }}
test-doc-imports: ${{ steps.set-matrix.outputs.test-doc-imports }}
test-pydantic: ${{ steps.set-matrix.outputs.test-pydantic }}
codspeed: ${{ steps.set-matrix.outputs.codspeed }}
# Run linting only on packages that have changed files
lint:
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
needs: [build]
if: ${{ needs.build.outputs.lint != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.lint) }}
fail-fast: false
uses: ./.github/workflows/_lint.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Run unit tests only on packages that have changed files
test:
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
needs: [build]
if: ${{ needs.build.outputs.test != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.test) }}
fail-fast: false
uses: ./.github/workflows/_test.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
test-doc-imports:
needs: [ build ]
if: ${{ needs.build.outputs.test-doc-imports != '[]' }}
# Test compatibility with different Pydantic versions for affected packages
test-pydantic:
needs: [build]
if: ${{ needs.build.outputs.test-pydantic != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.test-doc-imports) }}
uses: ./.github/workflows/_test_doc_imports.yml
secrets: inherit
job-configs: ${{ fromJson(needs.build.outputs.test-pydantic) }}
fail-fast: false
uses: ./.github/workflows/_test_pydantic.yml
with:
python-version: ${{ matrix.job-configs.python-version }}
working-directory: ${{ matrix.job-configs.working-directory }}
pydantic-version: ${{ matrix.job-configs.pydantic-version }}
secrets: inherit
# Verify integration tests compile without actually running them (faster feedback)
compile-integration-tests:
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
name: "Compile Integration Tests"
needs: [build]
if: ${{ needs.build.outputs.compile-integration-tests != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.compile-integration-tests) }}
fail-fast: false
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
dependencies:
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dependencies != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.dependencies) }}
uses: ./.github/workflows/_dependencies.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Run extended test suites that require additional dependencies
extended-tests:
name: "cd ${{ matrix.job-configs.working-directory }} / make extended_tests #${{ matrix.job-configs.python-version }}"
needs: [ build ]
name: "Extended Tests"
needs: [build]
if: ${{ needs.build.outputs.extended-tests != '[]' }}
strategy:
matrix:
# note different variable for extended test dirs
job-configs: ${{ fromJson(needs.build.outputs.extended-tests) }}
fail-fast: false
runs-on: ubuntu-latest
timeout-minutes: 20
defaults:
run:
working-directory: ${{ matrix.job-configs.working-directory }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Set up Python ${{ matrix.job-configs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: "🐍 Set up Python ${{ matrix.job-configs.python-version }} + UV"
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ matrix.job-configs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
cache-suffix: extended-tests-${{ matrix.job-configs.working-directory }}
working-directory: ${{ matrix.job-configs.working-directory }}
cache-key: extended
- name: Install dependencies
- name: "📦 Install Dependencies & Run Extended Tests"
shell: bash
run: |
echo "Running extended tests, installing dependencies with poetry..."
poetry install --with test
poetry run pip install uv
poetry run uv pip install -r extended_testing_deps.txt
echo "Running extended tests, installing dependencies with uv..."
uv venv
uv sync --group test
VIRTUAL_ENV=.venv uv pip install -r extended_testing_deps.txt
VIRTUAL_ENV=.venv make extended_tests
- name: Run extended tests
run: make extended_tests
- name: Ensure the tests did not create any additional files
- name: "🧹 Verify Clean Working Directory"
shell: bash
run: |
set -eu
@@ -147,9 +170,81 @@ jobs:
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
# Run codspeed benchmarks only on packages that have changed files
codspeed:
name: "⚡ CodSpeed Benchmarks"
needs: [build]
if: ${{ needs.build.outputs.codspeed != '[]' && !contains(github.event.pull_request.labels.*.name, 'codspeed-ignore') }}
runs-on: ubuntu-latest
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.codspeed) }}
fail-fast: false
steps:
- uses: actions/checkout@v6
- name: "📦 Install UV Package Manager"
uses: astral-sh/setup-uv@v7
with:
python-version: "3.13"
- uses: actions/setup-python@v6
with:
python-version: "3.13"
- name: "📦 Install Test Dependencies"
run: uv sync --group test
working-directory: ${{ matrix.job-configs.working-directory }}
- name: "⚡ Run Benchmarks: ${{ matrix.job-configs.working-directory }}"
uses: CodSpeedHQ/action@v4
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd ${{ matrix.job-configs.working-directory }}
if [ "${{ matrix.job-configs.working-directory }}" = "libs/core" ]; then
uv run --no-sync pytest ./tests/benchmarks --codspeed
else
uv run --no-sync pytest ./tests/ --codspeed
fi
mode: ${{ matrix.job-configs.working-directory == 'libs/core' && 'walltime' || 'instrumentation' }}
# Final status check - ensures all required jobs passed before allowing merge
ci_success:
name: "CI Success"
needs: [build, lint, test, compile-integration-tests, dependencies, extended-tests, test-doc-imports]
name: "CI Success"
needs:
[
build,
lint,
test,
compile-integration-tests,
extended-tests,
test-pydantic,
codspeed,
]
if: |
always()
runs-on: ubuntu-latest
@@ -158,7 +253,7 @@ jobs:
RESULTS_JSON: ${{ toJSON(needs.*.result) }}
EXIT_CODE: ${{!contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && '0' || '1'}}
steps:
- name: "CI Success"
- name: "🎉 All Checks Passed"
run: |
echo $JOBS_JSON
echo $RESULTS_JSON
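
The `EXIT_CODE` expression above encodes the gate: because `ci_success` runs with `if: always()`, it must inspect the upstream results itself and fail unless none of them failed or were cancelled (a skipped job passes, since skipping means no relevant files changed). A sketch of that predicate in Python:

```python
def ci_success(results: list[str]) -> int:
    """Mirror of the EXIT_CODE expression: 0 unless any job failed or was cancelled."""
    return 1 if any(r in ("failure", "cancelled") for r in results) else 0

assert ci_success(["success", "skipped", "success"]) == 0
assert ci_success(["success", "cancelled"]) == 1
```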


@@ -1,36 +0,0 @@
---
name: Integration docs lint
on:
push:
branches: [master]
pull_request:
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.10'
- id: files
uses: Ana06/get-changed-files@v2.2.0
with:
filter: |
*.ipynb
*.md
*.mdx
- name: Check new docs
run: |
python docs/scripts/check_templates.py ${{ steps.files.outputs.added }}


@@ -1,37 +0,0 @@
---
name: CI / cd . / make spell_check
on:
push:
branches: [master, v0.1]
pull_request:
branches: [master, v0.1]
permissions:
contents: read
jobs:
codespell:
name: (Check for spelling errors)
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install Dependencies
run: |
pip install toml
- name: Extract Ignore Words List
run: |
# Use a Python script to extract the ignore words list from pyproject.toml
python .github/workflows/extract_ignored_words_list.py
id: extract_ignore_words
# - name: Codespell
# uses: codespell-project/actions-codespell@v2
# with:
# skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
# ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
# exclude_file: ./.github/workflows/codespell-exclude


@@ -1,10 +0,0 @@
import toml
pyproject_toml = toml.load("pyproject.toml")
# Extract the ignore words list (adjust the key as per your TOML structure)
ignore_words_list = (
pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
)
print(f"::set-output name=ignore_words_list::{ignore_words_list}")
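
Note that this deleted helper used the `::set-output` workflow command, which GitHub has deprecated. The supported replacement appends `key=value` lines to the file named by `$GITHUB_OUTPUT`; a sketch of the modern equivalent:

```python
import os

def set_output(key: str, value: str) -> None:
    """Write a step output the supported way (fall back to stdout locally)."""
    output_file = os.environ.get("GITHUB_OUTPUT")
    if output_file is None:  # e.g. running outside GitHub Actions
        print(f"{key}={value}")
        return
    with open(output_file, "a") as f:
        f.write(f"{key}={value}\n")

set_output("ignore_words_list", "aadd,crate")  # illustrative value
```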

181
.github/workflows/integration_tests.yml vendored Normal file

@@ -0,0 +1,181 @@
# Routine integration tests against partner libraries with live API credentials.
#
# Uses `make integration_tests` for each library in the matrix.
#
# Runs daily. Can also be triggered manually for immediate updates.
name: "⏰ Integration Tests"
run-name: "Run Integration Tests - ${{ inputs.working-directory-force || 'all libs' }} (Python ${{ inputs.python-version-force || '3.10, 3.13' }})"
on:
workflow_dispatch:
inputs:
working-directory-force:
type: string
description: "From which folder this pipeline executes - defaults to all in matrix - example value: libs/partners/anthropic"
python-version-force:
type: string
description: "Python version to use - defaults to 3.10 and 3.13 in matrix - example value: 3.11"
schedule:
- cron: "0 13 * * *" # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
permissions:
contents: read
env:
UV_FROZEN: "true"
DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/xai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
jobs:
# Generate dynamic test matrix based on input parameters or defaults
# Only runs on the main repo (for scheduled runs) or when manually triggered
compute-matrix:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
name: "📋 Compute Test Matrix"
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: "🔢 Generate Python & Library Matrix"
id: set-matrix
env:
DEFAULT_LIBS: ${{ env.DEFAULT_LIBS }}
WORKING_DIRECTORY_FORCE: ${{ github.event.inputs.working-directory-force || '' }}
PYTHON_VERSION_FORCE: ${{ github.event.inputs.python-version-force || '' }}
run: |
# echo "matrix=..." where matrix is a json formatted str with keys python-version and working-directory
# python-version should default to 3.10 and 3.13, but is overridden to [PYTHON_VERSION_FORCE] if set
# working-directory should default to DEFAULT_LIBS, but is overridden to [WORKING_DIRECTORY_FORCE] if set
python_version='["3.10", "3.13"]'
working_directory="$DEFAULT_LIBS"
if [ -n "$PYTHON_VERSION_FORCE" ]; then
python_version="[\"$PYTHON_VERSION_FORCE\"]"
fi
if [ -n "$WORKING_DIRECTORY_FORCE" ]; then
working_directory="[\"$WORKING_DIRECTORY_FORCE\"]"
fi
matrix="{\"python-version\": $python_version, \"working-directory\": $working_directory}"
echo $matrix
echo "matrix=$matrix" >> $GITHUB_OUTPUT
# Run integration tests against partner libraries with live API credentials
build:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
name: "🐍 Python ${{ matrix.python-version }}: ${{ matrix.working-directory }}"
runs-on: ubuntu-latest
needs: [compute-matrix]
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
python-version: ${{ fromJSON(needs.compute-matrix.outputs.matrix).python-version }}
working-directory: ${{ fromJSON(needs.compute-matrix.outputs.matrix).working-directory }}
steps:
- uses: actions/checkout@v6
with:
path: langchain
- uses: actions/checkout@v6
with:
repository: langchain-ai/langchain-google
path: langchain-google
- uses: actions/checkout@v6
with:
repository: langchain-ai/langchain-aws
path: langchain-aws
- name: "📦 Organize External Libraries"
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai
mv langchain-google/libs/genai langchain/libs/partners/google-genai
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-aws/libs/aws langchain/libs/partners/aws
- name: "🐍 Set up Python ${{ matrix.python-version }} + UV"
uses: "./langchain/.github/actions/uv_setup"
with:
python-version: ${{ matrix.python-version }}
- name: "🔐 Authenticate to Google Cloud"
id: "auth"
uses: google-github-actions/auth@v3
with:
credentials_json: "${{ secrets.GOOGLE_CREDENTIALS }}"
- name: "🔐 Configure AWS Credentials"
uses: aws-actions/configure-aws-credentials@v5
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: "📦 Install Dependencies"
run: |
echo "Running scheduled tests, installing dependencies with uv..."
cd langchain/${{ matrix.working-directory }}
uv sync --group test --group test_integration
- name: "🚀 Run Integration Tests"
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
LANGCHAIN_TESTS_USER_AGENT: ${{ secrets.LANGCHAIN_TESTS_USER_AGENT }}
run: |
cd langchain/${{ matrix.working-directory }}
make integration_tests
- name: "🧹 Clean up External Libraries"
# Clean up external libraries to avoid affecting the following git status check
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/aws
- name: "🧹 Verify Clean Working Directory"
working-directory: langchain
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
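
The bash in `compute-matrix` is compact; the same override logic in Python, as a rough sketch (library list trimmed for brevity):

```python
import json

DEFAULT_LIBS = ["libs/partners/openai", "libs/partners/anthropic"]  # trimmed

def compute_matrix(python_force: str = "", workdir_force: str = "") -> str:
    """Force-inputs, when set, collapse an axis to a single value."""
    python_versions = [python_force] if python_force else ["3.10", "3.13"]
    working_dirs = [workdir_force] if workdir_force else DEFAULT_LIBS
    return json.dumps({"python-version": python_versions,
                       "working-directory": working_dirs})

print(compute_matrix())  # full cross-product matrix
print(compute_matrix(python_force="3.11", workdir_force="libs/partners/anthropic"))
```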


@@ -1,14 +0,0 @@
---
name: docker/langchain/langchain Release
on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
workflow_call: # Allows triggering from another workflow
jobs:
release:
uses: ./.github/workflows/_release_docker.yml
with:
dockerfile: docker/Dockerfile.base
image: langchain/langchain
secrets: inherit


@@ -1,37 +0,0 @@
name: LangChain People
on:
schedule:
- cron: "0 14 1 * *"
push:
branches: [jacob/people]
workflow_dispatch:
inputs:
debug_enabled:
description: 'Run the build with tmate debugging enabled (https://github.com/marketplace/actions/debugging-with-tmate)'
required: false
default: 'false'
jobs:
langchain-people:
if: github.repository_owner == 'langchain-ai'
runs-on: ubuntu-latest
permissions: write-all
steps:
- name: Dump GitHub context
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- uses: actions/checkout@v4
# Ref: https://github.com/actions/runner/issues/2033
- name: Fix git safe.directory in container
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
# Allow debugging with tmate
- name: Setup tmate session
uses: mxschmitt/action-tmate@v3
if: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.debug_enabled == 'true' }}
with:
limit-access-to-actor: true
- uses: ./.github/actions/people
with:
token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}

28
.github/workflows/pr_labeler_file.yml vendored Normal file

@@ -0,0 +1,28 @@
# Label PRs based on changed files.
#
# See `.github/pr-file-labeler.yml` to see rules for each label/directory.
name: "🏷️ Pull Request Labeler"
on:
# Safe since we're not checking out or running the PR's code
# Never check out the PR's head in a pull_request_target job
pull_request_target:
types: [opened, synchronize, reopened, edited]
jobs:
labeler:
name: "label"
permissions:
contents: read
pull-requests: write
issues: write
runs-on: ubuntu-latest
steps:
- name: Label Pull Request
uses: actions/labeler@v6
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/pr-file-labeler.yml
sync-labels: false
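
Conceptually, the labeler matches each changed file path against per-label glob rules. A hypothetical sketch of that matching (the real rules live in `.github/pr-file-labeler.yml` and are applied by `actions/labeler`; the labels and globs here are illustrative, not the repo's actual config):

```python
from fnmatch import fnmatch

RULES = {
    "core": ["libs/core/*"],
    "openai": ["libs/partners/openai/*"],
}

def labels_for(changed_files: list[str]) -> set[str]:
    # Note: fnmatch's `*` also crosses `/`, unlike the action's minimatch globs.
    return {
        label
        for label, globs in RULES.items()
        for path in changed_files
        if any(fnmatch(path, glob) for glob in globs)
    }

print(labels_for(["libs/partners/openai/pyproject.toml"]))  # {'openai'}
```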

44
.github/workflows/pr_labeler_title.yml vendored Normal file

@@ -0,0 +1,44 @@
# Label PRs based on their titles.
#
# Uses conventional commit types from PR titles to apply labels.
# Note: Scope-based labeling (e.g., integration labels) is handled by pr_labeler_file.yml
name: "🏷️ PR Title Labeler"
on:
# Safe since we're not checking out or running the PR's code
# Never check out the PR's head in a pull_request_target job
pull_request_target:
types: [opened, edited]
jobs:
pr-title-labeler:
name: "label"
permissions:
contents: read
pull-requests: write
issues: write
runs-on: ubuntu-latest
steps:
- name: Label PR based on title
uses: bcoe/conventional-release-labels@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
type_labels: >-
{
"feat": "feature",
"fix": "fix",
"docs": "documentation",
"style": "linting",
"refactor": "refactor",
"perf": "performance",
"test": "tests",
"build": "infra",
"ci": "infra",
"chore": "infra",
"revert": "revert",
"release": "release",
"breaking": "breaking"
}
ignored_types: '[]'
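
Conceptually, the action parses the conventional-commit type out of the PR title and looks it up in the table above. A hypothetical Python sketch of that mapping (table trimmed):

```python
import re

TYPE_LABELS = {"feat": "feature", "fix": "fix", "docs": "documentation",
               "ci": "infra", "chore": "infra"}  # trimmed from the table above

def label_for_title(title: str) -> str | None:
    match = re.match(r"^(\w+)(?:\([^)]*\))?!?:", title)
    return TYPE_LABELS.get(match.group(1)) if match else None

assert label_for_title("feat(core): add multitenant support") == "feature"
assert label_for_title("ci: bump checkout action") == "infra"
```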

111
.github/workflows/pr_lint.yml vendored Normal file

@@ -0,0 +1,111 @@
# PR title linting.
#
# FORMAT (Conventional Commits 1.0.0):
#
# <type>[optional scope]: <description>
# [optional body]
# [optional footer(s)]
#
# Examples:
# feat(core): add multitenant support
# fix(cli): resolve flag parsing error
# docs: update API usage examples
# docs(openai): update API usage examples
#
# Allowed Types:
# * feat — a new feature (MINOR)
# * fix — a bug fix (PATCH)
# * docs — documentation only changes
# * style — formatting, linting, etc.; no code change or typing refactors
# * refactor — code change that neither fixes a bug nor adds a feature
# * perf — code change that improves performance
# * test — adding tests or correcting existing
# * build — changes that affect the build system/external dependencies
# * ci — continuous integration/configuration changes
# * chore — other changes that don't modify source or test files
# * revert — reverts a previous commit
# * release — prepare a new release
#
# Allowed Scope(s) (optional):
# core, cli, langchain, langchain_v1, langchain-classic, model-profiles,
# standard-tests, text-splitters, docs, anthropic, chroma, deepseek, exa,
# fireworks, groq, huggingface, mistralai, nomic, ollama, openai,
# perplexity, prompty, qdrant, xai, infra, deps
#
# Multiple scopes can be used by separating them with a comma.
#
# Rules:
# 1. The 'Type' must start with a lowercase letter.
# 2. Breaking changes: append "!" after type/scope (e.g., feat!: drop x support)
# 3. When releasing (updating the pyproject.toml and uv.lock), the commit message
# should be: `release(scope): x.y.z` (e.g., `release(core): 1.2.0` with no
# body, footer, or preceding/following text).
#
# Enforces Conventional Commits format for pull request titles to maintain a clear and
# machine-readable change history.
name: "🏷️ PR Title Lint"
permissions:
pull-requests: read
on:
pull_request:
types: [opened, edited, synchronize]
jobs:
# Validates that PR title follows Conventional Commits 1.0.0 specification
lint-pr-title:
name: "validate format"
runs-on: ubuntu-latest
steps:
- name: "✅ Validate Conventional Commits Format"
uses: amannn/action-semantic-pull-request@v6
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
types: |
feat
fix
docs
style
refactor
perf
test
build
ci
chore
revert
release
scopes: |
core
cli
langchain
langchain-classic
model-profiles
standard-tests
text-splitters
docs
anthropic
chroma
deepseek
exa
fireworks
groq
huggingface
mistralai
nomic
ollama
openai
perplexity
prompty
qdrant
xai
infra
deps
requireScope: false
disallowScopes: |
release
[A-Z]+
ignoreLabels: |
ignore-lint-pr-title
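
The actual validation is done by `amannn/action-semantic-pull-request`; a rough local approximation of the rules above (lowercase type, optional comma-separated scopes, allowed lists trimmed):

```python
import re

ALLOWED_TYPES = {"feat", "fix", "docs", "style", "refactor", "perf",
                 "test", "build", "ci", "chore", "revert", "release"}
ALLOWED_SCOPES = {"core", "cli", "langchain", "anthropic", "openai",
                  "infra", "deps"}  # trimmed

def title_ok(title: str) -> bool:
    m = re.match(r"^([a-z]+)(?:\(([^)]+)\))?!?: .+", title)
    if not m or m.group(1) not in ALLOWED_TYPES:
        return False
    if m.group(2):  # multiple scopes are comma-separated
        return all(s.strip() in ALLOWED_SCOPES for s in m.group(2).split(","))
    return True  # scope is optional (requireScope: false)

assert title_ok("feat(core): add multitenant support")
assert not title_ok("Feat: type must start lowercase")
```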


@@ -1,124 +0,0 @@
name: Scheduled tests
on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
schedule:
- cron: '0 13 * * *'
env:
POETRY_VERSION: "1.7.1"
jobs:
build:
if: github.repository_owner == 'langchain-ai'
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version:
- "3.8"
- "3.11"
working-directory:
- "libs/partners/openai"
- "libs/partners/anthropic"
- "libs/partners/ai21"
- "libs/partners/fireworks"
- "libs/partners/groq"
- "libs/partners/mistralai"
- "libs/partners/together"
- "libs/partners/google-vertexai"
- "libs/partners/google-genai"
- "libs/partners/aws"
steps:
- uses: actions/checkout@v4
with:
path: langchain
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-google
path: langchain-google
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-aws
path: langchain-aws
- name: Move libs
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai
mv langchain-google/libs/genai langchain/libs/partners/google-genai
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-aws/libs/aws langchain/libs/partners/aws
- name: Set up Python ${{ matrix.python-version }}
uses: "./langchain/.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: langchain/${{ matrix.working-directory }}
cache-key: scheduled
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Install dependencies
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
cd langchain/${{ matrix.working-directory }}
poetry install --with=test_integration,test
- name: Run integration tests
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
run: |
cd langchain/${{ matrix.working-directory }}
make integration_tests
- name: Remove external libraries
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/aws
- name: Ensure the tests did not create any additional files
working-directory: langchain
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'

164
.github/workflows/v03_api_doc_build.yml vendored Normal file

@@ -0,0 +1,164 @@
# Build the API reference documentation for v0.3 branch.
#
# Manual trigger only.
#
# Built HTML pushed to langchain-ai/langchain-api-docs-html.
#
# Looks for langchain-ai org repos in packages.yml and checks them out.
# Calls prep_api_docs_build.py.
name: "📚 API Docs (v0.3)"
run-name: "Build & Deploy API Reference (v0.3)"
on:
workflow_dispatch:
env:
PYTHON_VERSION: "3.11"
jobs:
build:
if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- uses: actions/checkout@v6
with:
ref: v0.3
path: langchain
- uses: actions/checkout@v6
with:
repository: langchain-ai/langchain-api-docs-html
path: langchain-api-docs-html
token: ${{ secrets.TOKEN_GITHUB_API_DOCS_HTML }}
- name: "📋 Extract Repository List with yq"
id: get-unsorted-repos
uses: mikefarah/yq@master
with:
cmd: |
# Extract repos from packages.yml that are in the langchain-ai org
# (excluding 'langchain' itself)
yq '
.packages[]
| select(
(
(.repo | test("^langchain-ai/"))
and
(.repo != "langchain-ai/langchain")
)
or
(.include_in_api_ref // false)
)
| .repo
' langchain/libs/packages.yml
- name: "📋 Parse YAML & Checkout Repositories"
env:
REPOS_UNSORTED: ${{ steps.get-unsorted-repos.outputs.result }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Get unique repositories
REPOS=$(echo "$REPOS_UNSORTED" | sort -u)
# Checkout each unique repository
for repo in $REPOS; do
# Validate repository format (allow any org with proper format)
if [[ ! "$repo" =~ ^[a-zA-Z0-9_.-]+/[a-zA-Z0-9_.-]+$ ]]; then
echo "Error: Invalid repository format: $repo"
exit 1
fi
REPO_NAME=$(echo $repo | cut -d'/' -f2)
# Additional validation for repo name
if [[ ! "$REPO_NAME" =~ ^[a-zA-Z0-9_.-]+$ ]]; then
echo "Error: Invalid repository name: $REPO_NAME"
exit 1
fi
echo "Checking out $repo to $REPO_NAME"
# Special handling for langchain-tavily: checkout by commit hash
if [[ "$REPO_NAME" == "langchain-tavily" ]]; then
git clone https://github.com/$repo.git $REPO_NAME
cd $REPO_NAME
git checkout f3515654724a9e87bdfe2c2f509d6cdde646e563
cd ..
else
git clone --depth 1 --branch v0.3 https://github.com/$repo.git $REPO_NAME
fi
done
- name: "🐍 Setup Python ${{ env.PYTHON_VERSION }}"
uses: actions/setup-python@v6
id: setup-python
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: "📦 Install Initial Python Dependencies using uv"
working-directory: langchain
run: |
python -m pip install -U uv
python -m uv pip install --upgrade --no-cache-dir pip setuptools pyyaml
- name: "📦 Organize Library Directories"
# Places cloned partner packages into libs/partners structure
run: python langchain/.github/scripts/prep_api_docs_build.py
- name: "🧹 Clear Prior Build"
run: |
# Remove artifacts from prior docs build
rm -rf langchain-api-docs-html/api_reference_build/html
- name: "📦 Install Documentation Dependencies using uv"
working-directory: langchain
run: |
# Install all partner packages in editable mode with overrides
python -m uv pip install $(ls ./libs/partners | grep -v azure-ai | xargs -I {} echo "./libs/partners/{}") --overrides ./docs/vercel_overrides.txt --prerelease=allow
# Install langchain-azure-ai with tools extra
python -m uv pip install "./libs/partners/azure-ai[tools]" --overrides ./docs/vercel_overrides.txt --prerelease=allow
# Install core langchain and other main packages
python -m uv pip install libs/core libs/langchain libs/text-splitters libs/community libs/experimental libs/standard-tests
# Install Sphinx and related packages for building docs
python -m uv pip install -r docs/api_reference/requirements.txt
- name: "🔧 Configure Git Settings"
working-directory: langchain
run: |
git config --local user.email "actions@github.com"
git config --local user.name "Github Actions"
- name: "📚 Build API Documentation"
working-directory: langchain
run: |
# Generate the API reference RST files
python docs/api_reference/create_api_rst.py
# Build the HTML documentation using Sphinx
# -T: show full traceback on exception
# -E: don't use cached environment (force rebuild, ignore cached doctrees)
# -b html: build HTML docs (vs. PDF, etc.)
# -d: path for the cached environment (parsed document trees / doctrees)
# - Separate from output dir for faster incremental builds
# -c: path to conf.py
# -j auto: parallel build using all available CPU cores
python -m sphinx -T -E -b html -d ../langchain-api-docs-html/_build/doctrees -c docs/api_reference docs/api_reference ../langchain-api-docs-html/api_reference_build/html -j auto
# Post-process the generated HTML
python docs/api_reference/scripts/custom_formatter.py ../langchain-api-docs-html/api_reference_build/html
# Default index page is blank so we copy in the actual home page.
cp ../langchain-api-docs-html/api_reference_build/html/{reference,index}.html
# Removes Sphinx's intermediate build artifacts after the build is complete.
rm -rf ../langchain-api-docs-html/_build/
# Commit and push changes to langchain-api-docs-html repo
- uses: EndBug/add-and-commit@v9
with:
cwd: langchain-api-docs-html
message: "Update API docs build from v0.3 branch"
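
The yq filter above decides which repos get checked out; the same selection in Python, as a sketch against the (assumed) shape of `libs/packages.yml` after parsing:

```python
packages = [  # illustrative entries, not the real file
    {"repo": "langchain-ai/langchain"},
    {"repo": "langchain-ai/langchain-google"},
    {"repo": "other-org/langchain-foo", "include_in_api_ref": True},
]

selected = sorted({  # sorted unique mirrors the later `sort -u`
    p["repo"]
    for p in packages
    if (p["repo"].startswith("langchain-ai/")
        and p["repo"] != "langchain-ai/langchain")
    or p.get("include_in_api_ref", False)
})
print(selected)  # ['langchain-ai/langchain-google', 'other-org/langchain-foo']
```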

8
.github/workflows/v1_changes.md vendored Normal file

@@ -0,0 +1,8 @@
With the deprecation of v0 docs, the following files will need to be migrated/supported
in the new docs repo:
- run_notebooks.yml: New repo should run Integration tests on code snippets?
- people.yml: Need to fix and somehow display on the new docs site
- Subsequently, `.github/actions/people/`
- _test_doc_imports.yml
- check-broken-links.yml

28
.gitignore vendored

@@ -1,6 +1,8 @@
.vs/
.vscode/
.claude/
.idea/
#Emacs backup
*~
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
@@ -59,6 +61,7 @@ coverage.xml
*.py,cover
.hypothesis/
.pytest_cache/
.codspeed/
# Translations
*.mo
@@ -77,10 +80,6 @@ instance/
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
docs/docs/_build/
# PyBuilder
target/
@@ -161,22 +160,9 @@ data_map*
*replit*
node_modules
docs/.yarn/
docs/node_modules/
docs/.docusaurus/
docs/.cache-loader/
docs/_dist
docs/api_reference/*api_reference.rst
docs/api_reference/_build
docs/api_reference/*/
!docs/api_reference/_static/
!docs/api_reference/templates/
!docs/api_reference/themes/
docs/docs/build
docs/docs/node_modules
docs/docs/yarn.lock
_dist
docs/docs/templates
prof
virtualenv/
scratch/
.langgraph_api/

14
.markdownlint.json Normal file

@@ -0,0 +1,14 @@
{
"MD013": false,
"MD024": {
"siblings_only": true
},
"MD025": false,
"MD033": false,
"MD034": false,
"MD036": false,
"MD041": false,
"MD046": {
"style": "fenced"
}
}

8
.mcp.json Normal file

@@ -0,0 +1,8 @@
{
"mcpServers": {
"docs-langchain": {
"type": "http",
"url": "https://docs.langchain.com/mcp"
}
}
}

99
.pre-commit-config.yaml Normal file

@@ -0,0 +1,99 @@
repos:
- repo: local
hooks:
- id: core
name: format and lint core
language: system
entry: make -C libs/core format lint
files: ^libs/core/
pass_filenames: false
- id: langchain
name: format and lint langchain
language: system
entry: make -C libs/langchain format lint
files: ^libs/langchain/
pass_filenames: false
- id: standard-tests
name: format and lint standard-tests
language: system
entry: make -C libs/standard-tests format lint
files: ^libs/standard-tests/
pass_filenames: false
- id: text-splitters
name: format and lint text-splitters
language: system
entry: make -C libs/text-splitters format lint
files: ^libs/text-splitters/
pass_filenames: false
- id: anthropic
name: format and lint partners/anthropic
language: system
entry: make -C libs/partners/anthropic format lint
files: ^libs/partners/anthropic/
pass_filenames: false
- id: chroma
name: format and lint partners/chroma
language: system
entry: make -C libs/partners/chroma format lint
files: ^libs/partners/chroma/
pass_filenames: false
- id: exa
name: format and lint partners/exa
language: system
entry: make -C libs/partners/exa format lint
files: ^libs/partners/exa/
pass_filenames: false
- id: fireworks
name: format and lint partners/fireworks
language: system
entry: make -C libs/partners/fireworks format lint
files: ^libs/partners/fireworks/
pass_filenames: false
- id: groq
name: format and lint partners/groq
language: system
entry: make -C libs/partners/groq format lint
files: ^libs/partners/groq/
pass_filenames: false
- id: huggingface
name: format and lint partners/huggingface
language: system
entry: make -C libs/partners/huggingface format lint
files: ^libs/partners/huggingface/
pass_filenames: false
- id: mistralai
name: format and lint partners/mistralai
language: system
entry: make -C libs/partners/mistralai format lint
files: ^libs/partners/mistralai/
pass_filenames: false
- id: nomic
name: format and lint partners/nomic
language: system
entry: make -C libs/partners/nomic format lint
files: ^libs/partners/nomic/
pass_filenames: false
- id: ollama
name: format and lint partners/ollama
language: system
entry: make -C libs/partners/ollama format lint
files: ^libs/partners/ollama/
pass_filenames: false
- id: openai
name: format and lint partners/openai
language: system
entry: make -C libs/partners/openai format lint
files: ^libs/partners/openai/
pass_filenames: false
- id: prompty
name: format and lint partners/prompty
language: system
entry: make -C libs/partners/prompty format lint
files: ^libs/partners/prompty/
pass_filenames: false
- id: qdrant
name: format and lint partners/qdrant
language: system
entry: make -C libs/partners/qdrant format lint
files: ^libs/partners/qdrant/
pass_filenames: false


@@ -1,29 +0,0 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
formats:
- pdf
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.11"
commands:
- mkdir -p $READTHEDOCS_OUTPUT
- cp -r api_reference_build/* $READTHEDOCS_OUTPUT
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/api_reference/conf.py
# If using Sphinx, optionally build your docs in additional formats such as PDF
# formats:
# - pdf
# Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: docs/api_reference/requirements.txt

21
.vscode/extensions.json vendored Normal file
View File

@@ -0,0 +1,21 @@
{
"recommendations": [
"ms-python.python",
"charliermarsh.ruff",
"ms-python.mypy-type-checker",
"ms-toolsai.jupyter",
"ms-toolsai.jupyter-keymap",
"ms-toolsai.jupyter-renderers",
"ms-toolsai.vscode-jupyter-cell-tags",
"ms-toolsai.vscode-jupyter-slideshow",
"yzhang.markdown-all-in-one",
"davidanson.vscode-markdownlint",
"bierner.markdown-mermaid",
"bierner.markdown-preview-github-styles",
"eamodio.gitlens",
"github.vscode-pull-request-github",
"github.vscode-github-actions",
"redhat.vscode-yaml",
"editorconfig.editorconfig",
],
}

78
.vscode/settings.json vendored Normal file
View File

@@ -0,0 +1,78 @@
{
"python.analysis.include": [
"libs/**",
],
"python.analysis.exclude": [
"**/node_modules",
"**/__pycache__",
"**/.pytest_cache",
"**/.*",
],
"python.analysis.autoImportCompletions": true,
"python.analysis.typeCheckingMode": "basic",
"python.testing.cwd": "${workspaceFolder}",
"python.linting.enabled": true,
"python.linting.ruffEnabled": true,
"[python]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports.ruff": "explicit",
"source.fixAll": "explicit"
},
"editor.defaultFormatter": "charliermarsh.ruff"
},
"editor.rulers": [
88
],
"editor.tabSize": 4,
"editor.insertSpaces": true,
"editor.trimAutoWhitespace": true,
"files.trimTrailingWhitespace": true,
"files.insertFinalNewline": true,
"files.exclude": {
"**/__pycache__": true,
"**/.pytest_cache": true,
"**/*.pyc": true,
"**/.mypy_cache": true,
"**/.ruff_cache": true,
"_dist/**": true,
"**/node_modules": true,
"**/.git": false
},
"search.exclude": {
"**/__pycache__": true,
"**/*.pyc": true,
"_dist/**": true,
"**/node_modules": true,
"**/.git": true,
"uv.lock": true,
"yarn.lock": true
},
"git.autofetch": true,
"git.enableSmartCommit": true,
"jupyter.askForKernelRestart": false,
"jupyter.interactiveWindow.textEditor.executeSelection": true,
"[markdown]": {
"editor.wordWrap": "on",
"editor.quickSuggestions": {
"comments": "off",
"strings": "off",
"other": "off"
}
},
"[yaml]": {
"editor.tabSize": 2,
"editor.insertSpaces": true
},
"[json]": {
"editor.tabSize": 2,
"editor.insertSpaces": true
},
"python.terminal.activateEnvironment": false,
"python.defaultInterpreterPath": "./.venv/bin/python",
"github.copilot.chat.commitMessageGeneration.instructions": [
{
"file": ".github/workflows/pr_lint.yml"
}
]
}

181
AGENTS.md Normal file

@@ -0,0 +1,181 @@
# Global development guidelines for the LangChain monorepo
This document provides context to understand the LangChain Python project and assist with development.
## Project architecture and context
### Monorepo structure
This is a Python monorepo with multiple independently versioned packages that use `uv`.
```txt
langchain/
├── libs/
│ ├── core/ # `langchain-core` primitives and base abstractions
│ ├── langchain/ # `langchain-classic` (legacy, no new features)
│ ├── langchain_v1/ # Actively maintained `langchain` package
│ ├── partners/ # Third-party integrations
│ │ ├── openai/ # OpenAI models and embeddings
│ │ ├── anthropic/ # Anthropic (Claude) integration
│ │ ├── ollama/ # Local model support
│ │ └── ... (other integrations maintained by the LangChain team)
│ ├── text-splitters/ # Document chunking utilities
│ ├── standard-tests/ # Shared test suite for integrations
│ ├── model-profiles/ # Model configuration profiles
│ └── cli/ # Command-line interface tools
├── .github/ # CI/CD workflows and templates
├── .vscode/ # VSCode IDE standard settings and recommended extensions
└── README.md # Information about LangChain
```
- **Core layer** (`langchain-core`): Base abstractions, interfaces, and protocols. Users should not need to know about this layer directly.
- **Implementation layer** (`langchain`): Concrete implementations and high-level public utilities
- **Integration layer** (`partners/`): Third-party service integrations. Note that this monorepo is not exhaustive of all LangChain integrations; some are maintained in separate repos, such as `langchain-ai/langchain-google` and `langchain-ai/langchain-aws`. Usually these repos are cloned at the same level as this monorepo, so if needed, you can refer to their code directly by navigating to `../langchain-google/` from this monorepo.
- **Testing layer** (`standard-tests/`): Standardized integration tests for partner integrations
### Development tools & commands
- `uv` Fast Python package installer and resolver (replaces pip/poetry)
- `make` Task runner for common development commands. Feel free to look at the `Makefile` for available commands and usage patterns.
- `ruff` Fast Python linter and formatter
- `mypy` Static type checking
- `pytest` Testing framework
This monorepo uses `uv` for dependency management. Local development uses editable installs: `[tool.uv.sources]`
Each package in `libs/` has its own `pyproject.toml` and `uv.lock`.
```bash
# Run unit tests (no network)
make test
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
#### Key config files
- pyproject.toml: Main workspace configuration with dependency groups
- uv.lock: Locked dependencies for reproducible builds
- Makefile: Development tasks
#### Commit standards
Suggest PR titles that follow Conventional Commits format. Refer to `.github/workflows/pr_lint.yml` for allowed types and scopes.
#### Pull request guidelines
- Always add a disclaimer to the PR description mentioning how AI agents are involved with the contribution.
- Describe the "why" of the changes, why the proposed solution is the right one. Limit prose.
- Highlight areas of the proposed changes that require careful review.
## Core development principles
### Maintain stable public interfaces
CRITICAL: Always attempt to preserve function signatures, argument positions, and names for exported/public methods. Do not make breaking changes.
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using MkDocs Material admonitions, like `!!! warning`)
Ask: "Would this change break someone's code if they used it last week?"
### Code quality standards
All Python code MUST include type hints and return types.
```python title="Example"
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
"""Single line description of the function.
Any additional context about the function can go here.
Args:
users: List of user identifiers to filter.
known_users: Set of known/valid user identifiers.
Returns:
List of users that are not in the known_users set.
"""
```
- Use descriptive, self-explanatory variable names.
- Follow existing patterns in the codebase you're modifying
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
### Testing requirements
Every new feature or bugfix MUST be covered by unit tests.
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- We use `pytest` as the testing framework; if in doubt, check other existing tests for examples.
- The testing file structure should mirror the source code structure.
**Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
- [ ] Does the test suite fail if your new logic is broken?
### Security and risk assessment
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Use proper exception handling (no bare `except:`) and assign error messages to a `msg` variable
- Remove unreachable/commented code before committing
- Watch for race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)
### Documentation standards
Use Google-style docstrings with Args section for all public functions.
```python title="Example"
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
"""Send an email to a recipient with specified priority.
Any additional context about the function can go here.
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level.
Returns:
`True` if email was sent successfully, `False` otherwise.
Raises:
InvalidEmailError: If the email address format is invalid.
SMTPConnectionError: If unable to connect to email server.
"""
```
- Types go in function signatures, NOT in docstrings
- If a default is present, DO NOT repeat it in the docstring unless there is post-processing or it is set conditionally.
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Ensure American English spelling (e.g., "behavior", not "behaviour")
## Additional resources
- **Documentation:** https://docs.langchain.com/oss/python/langchain/overview and source at https://github.com/langchain-ai/docs or `../docs/`. Prefer the local install and use file search tools for best results. If needed, use the docs MCP server as defined in `.mcp.json` for programmatic access.
- **Contributing Guide:** [`.github/CONTRIBUTING.md`](https://docs.langchain.com/oss/python/contributing/overview)
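
To make the keyword-only-parameter guideline above concrete, here is a sketch (not from the repo) of evolving the earlier example function without breaking existing call sites:

```python
def filter_unknown_users(
    users: list[str],
    known_users: set[str],
    *,
    case_insensitive: bool = False,  # hypothetical new option, keyword-only
) -> list[str]:
    """Return users that are not in the known_users set."""
    if case_insensitive:
        known = {u.lower() for u in known_users}
        return [u for u in users if u.lower() not in known]
    return [u for u in users if u not in known_users]

assert filter_unknown_users(["Ann", "bob"], {"bob"}) == ["Ann"]  # old call site
assert filter_unknown_users(["Ann"], {"ann"}, case_insensitive=True) == []
```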

181
CLAUDE.md Normal file

@@ -0,0 +1,181 @@
# Global development guidelines for the LangChain monorepo
This document provides context to understand the LangChain Python project and assist with development.
## Project architecture and context
### Monorepo structure
This is a Python monorepo with multiple independently versioned packages that use `uv`.
```txt
langchain/
├── libs/
│ ├── core/ # `langchain-core` primitives and base abstractions
│ ├── langchain/ # `langchain-classic` (legacy, no new features)
│ ├── langchain_v1/ # Actively maintained `langchain` package
│ ├── partners/ # Third-party integrations
│ │ ├── openai/ # OpenAI models and embeddings
│ │ ├── anthropic/ # Anthropic (Claude) integration
│ │ ├── ollama/ # Local model support
│ │ └── ... (other integrations maintained by the LangChain team)
│ ├── text-splitters/ # Document chunking utilities
│ ├── standard-tests/ # Shared test suite for integrations
│ ├── model-profiles/ # Model configuration profiles
│ └── cli/ # Command-line interface tools
├── .github/ # CI/CD workflows and templates
├── .vscode/ # VSCode IDE standard settings and recommended extensions
└── README.md # Information about LangChain
```
- **Core layer** (`langchain-core`): Base abstractions, interfaces, and protocols. Users should not need to know about this layer directly.
- **Implementation layer** (`langchain`): Concrete implementations and high-level public utilities
- **Integration layer** (`partners/`): Third-party service integrations. Note that this monorepo is not exhaustive of all LangChain integrations; some are maintained in separate repos, such as `langchain-ai/langchain-google` and `langchain-ai/langchain-aws`. Usually these repos are cloned at the same level as this monorepo, so if needed, you can refer to their code directly by navigating to `../langchain-google/` from this monorepo.
- **Testing layer** (`standard-tests/`): Standardized integration tests for partner integrations
### Development tools & commands
- `uv` Fast Python package installer and resolver (replaces pip/poetry)
- `make` Task runner for common development commands. Feel free to look at the `Makefile` for available commands and usage patterns.
- `ruff` Fast Python linter and formatter
- `mypy` Static type checking
- `pytest` Testing framework
This monorepo uses `uv` for dependency management. Local development uses editable installs: `[tool.uv.sources]`
Each package in `libs/` has its own `pyproject.toml` and `uv.lock`.
```bash
# Run unit tests (no network)
make test
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
#### Key config files
- pyproject.toml: Main workspace configuration with dependency groups
- uv.lock: Locked dependencies for reproducible builds
- Makefile: Development tasks
#### Commit standards
Suggest PR titles that follow Conventional Commits format. Refer to `.github/workflows/pr_lint.yml` for allowed types and scopes.
#### Pull request guidelines
- Always add a disclaimer to the PR description mentioning how AI agents are involved with the contribution.
- Describe the "why" of the changes, why the proposed solution is the right one. Limit prose.
- Highlight areas of the proposed changes that require careful review.
## Core development principles
### Maintain stable public interfaces
CRITICAL: Always attempt to preserve function signatures, argument positions, and names for exported/public methods. Do not make breaking changes.
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using MkDocs Material admonitions, like `!!! warning`)
Ask: "Would this change break someone's code if they used it last week?"
### Code quality standards
All Python code MUST include type hints and return types.
```python title="Example"
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
"""Single line description of the function.
Any additional context about the function can go here.
Args:
users: List of user identifiers to filter.
known_users: Set of known/valid user identifiers.
Returns:
List of users that are not in the known_users set.
"""
```
- Use descriptive, self-explanatory variable names.
- Follow existing patterns in the codebase you're modifying
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
### Testing requirements
Every new feature or bugfix MUST be covered by unit tests.
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- We use `pytest` as the testing framework; if in doubt, check other existing tests for examples.
- The testing file structure should mirror the source code structure.
**Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
- [ ] Does the test suite fail if your new logic is broken?
### Security and risk assessment
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Use proper exception handling (no bare `except:`) and assign error messages to a `msg` variable
- Remove unreachable/commented code before committing
- Watch for race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)
### Documentation standards
Use Google-style docstrings with Args section for all public functions.
```python title="Example"
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
"""Send an email to a recipient with specified priority.
Any additional context about the function can go here.
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level.
Returns:
`True` if email was sent successfully, `False` otherwise.
Raises:
InvalidEmailError: If the email address format is invalid.
SMTPConnectionError: If unable to connect to email server.
"""
```
- Types go in function signatures, NOT in docstrings
- If a default is present, DO NOT repeat it in the docstring unless there is post-processing or it is set conditionally.
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Ensure American English spelling (e.g., "behavior", not "behaviour")
## Additional resources
- **Documentation:** https://docs.langchain.com/oss/python/langchain/overview and source at https://github.com/langchain-ai/docs or `../docs/`. Prefer the local install and use file search tools for best results. If needed, use the docs MCP server as defined in `.mcp.json` for programmatic access.
- **Contributing Guide:** [`.github/CONTRIBUTING.md`](https://docs.langchain.com/oss/python/contributing/overview)


@@ -1,70 +0,0 @@
# Migrating
## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28/23
In an effort to make `langchain` leaner and safer, we are moving select chains to `langchain_experimental`.
This migration has already started, but we are remaining backwards compatible until 7/28.
On that date, we will remove functionality from `langchain`.
Read more about the motivation and the progress [here](https://github.com/langchain-ai/langchain/discussions/8043).
### Migrating to `langchain_experimental`
We are moving any experimental components of LangChain, or components with vulnerability issues, into `langchain_experimental`.
This guide covers how to migrate.
### Installation
Previously:
`pip install -U langchain`
Now (only if you want to access things in experimental):
`pip install -U langchain langchain_experimental`
### Things in `langchain.experimental`
Previously:
`from langchain.experimental import ...`
Now:
`from langchain_experimental import ...`
### PALChain
Previously:
`from langchain.chains import PALChain`
Now:
`from langchain_experimental.pal_chain import PALChain`
### SQLDatabaseChain
Previously:
`from langchain.chains import SQLDatabaseChain`
Now:
`from langchain_experimental.sql import SQLDatabaseChain`
Alternatively, if you are just interested in using the query generation part of the SQL chain, you can check out this [`SQL question-answering tutorial`](https://python.langchain.com/v0.2/docs/tutorials/sql_qa/#convert-question-to-sql-query):
`from langchain.chains import create_sql_query_chain`
### `load_prompt` for Python files
Note: this only applies if you want to load Python files as prompts.
If you want to load json/yaml files, no change is needed.
Previously:
`from langchain.prompts import load_prompt`
Now:
`from langchain_experimental.prompts import load_prompt`


@@ -1,75 +0,0 @@
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck spell_check spell_fix lint lint_package lint_tests format format_diff
## help: Show this help info.
help: Makefile
@printf "\n\033[1mUsage: make <TARGETS> ...\033[0m\n\n\033[1mTargets:\033[0m\n\n"
@sed -n 's/^## //p' $< | awk -F':' '{printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort | sed -e 's/^/ /'
## all: Default target, shows help.
all: help
## clean: Clean documentation and API documentation artifacts.
clean: docs_clean api_docs_clean
######################
# DOCUMENTATION
######################
## docs_build: Build the documentation.
docs_build:
cd docs && make build
## docs_clean: Clean the documentation build artifacts.
docs_clean:
cd docs && make clean
## docs_linkcheck: Run linkchecker on the documentation.
docs_linkcheck:
poetry run linkchecker _dist/docs/ --ignore-url node_modules
## api_docs_build: Build the API Reference documentation.
api_docs_build:
poetry run python docs/api_reference/create_api_rst.py
cd docs/api_reference && poetry run make html
API_PKG ?= text-splitters
api_docs_quick_preview:
poetry run pip install "pydantic<2"
poetry run python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && poetry run make html
open docs/api_reference/_build/html/$(shell echo $(API_PKG) | sed 's/-/_/g')_api_reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:
poetry run linkchecker docs/api_reference/_build/html/index.html
## spell_check: Run codespell on the project.
spell_check:
poetry run codespell --toml pyproject.toml
## spell_fix: Run codespell on the project and fix the errors.
spell_fix:
poetry run codespell --toml pyproject.toml -w
######################
# LINTING AND FORMATTING
######################
## lint: Run linting on the project.
lint lint_package lint_tests:
poetry run ruff check docs templates cookbook
poetry run ruff format docs templates cookbook --diff
poetry run ruff check --select I docs templates cookbook
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
## format: Format the project files.
format format_diff:
poetry run ruff format docs templates cookbook
poetry run ruff check --select I --fix docs templates cookbook

README.md

@@ -1,135 +1,74 @@
# 🦜️🔗 LangChain
<div align="center">
<a href="https://www.langchain.com/">
<picture>
<source media="(prefers-color-scheme: light)" srcset=".github/images/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset=".github/images/logo-light.svg">
<img alt="LangChain Logo" src=".github/images/logo-dark.svg" width="80%">
</picture>
</a>
</div>
<div align="center">
<h3>The platform for reliable agents.</h3>
</div>
<div align="center">
<a href="https://opensource.org/licenses/MIT" target="_blank"><img src="https://img.shields.io/pypi/l/langchain" alt="PyPI - License"></a>
<a href="https://pypistats.org/packages/langchain" target="_blank"><img src="https://img.shields.io/pepy/dt/langchain" alt="PyPI - Downloads"></a>
<a href="https://pypi.org/project/langchain/#history" target="_blank"><img src="https://img.shields.io/pypi/v/langchain?label=%20" alt="Version"></a>
<a href="https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain" target="_blank"><img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode" alt="Open in Dev Containers"></a>
<a href="https://codespaces.new/langchain-ai/langchain" target="_blank"><img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20"></a>
<a href="https://codspeed.io/langchain-ai/langchain" target="_blank"><img src="https://img.shields.io/endpoint?url=https://codspeed.io/badge.json" alt="CodSpeed Badge"></a>
<a href="https://twitter.com/langchainai" target="_blank"><img src="https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI" alt="Twitter / X"></a>
</div>
LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development, all while future-proofing decisions as the underlying technology evolves.
## Quick Install
With pip:
```bash
pip install langchain
```
With conda:
```bash
conda install langchain -c conda-forge
```
If you're looking for more advanced customization or agent orchestration, check out [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview), our framework for building controllable agent workflows.
---
**Documentation**:
- [docs.langchain.com](https://docs.langchain.com/oss/python/langchain/overview) Comprehensive documentation, including conceptual overviews and guides
- [reference.langchain.com/python](https://reference.langchain.com/python) API reference docs for LangChain packages
**Discussions**: Visit the [LangChain Forum](https://forum.langchain.com) to connect with the community and share all of your technical questions, ideas, and feedback.
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
## Why use LangChain?
LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
Use LangChain for:
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly: LangChain's abstractions keep you moving without losing momentum.
- **Rapid prototyping**. Quickly build and iterate on LLM applications with LangChain's modular, component-based architecture. Test different approaches and workflows without rebuilding from scratch, accelerating your development cycle.
- **Production-ready features**. Deploy reliable applications with built-in support for monitoring, evaluation, and debugging through integrations like LangSmith. Scale with confidence using battle-tested patterns and best practices.
- **Vibrant community and ecosystem**. Leverage a rich ecosystem of integrations, templates, and community-contributed components. Benefit from continuous improvements and stay up-to-date with the latest AI developments through an active open-source community.
- **Flexible abstraction layers**. Work at the level of abstraction that suits your needs, from high-level chains for quick starts to low-level components for fine-grained control. LangChain grows with your application's complexity.
## LangChain ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
To improve your LLM application development, pair LangChain with:
- [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview) Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- [Integrations](https://docs.langchain.com/oss/python/integrations/providers/overview) List of LangChain integrations, including chat & embedding models, tools & toolkits, and more
- [LangSmith](https://www.langchain.com/langsmith) Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangSmith Deployment](https://docs.langchain.com/langsmith/deployments) Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams and iterate quickly with visual prototyping in [LangSmith Studio](https://docs.langchain.com/langsmith/studio).
- [Deep Agents](https://github.com/langchain-ai/deepagents) *(new!)* Build agents that can plan, use subagents, and leverage file systems for complex tasks
## Additional resources
- [API Reference](https://reference.langchain.com/python) Detailed reference on navigating base packages and integrations for LangChain.
- [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview) Learn how to contribute to LangChain projects and find good first issues.
- [Code of Conduct](https://github.com/langchain-ai/langchain/blob/master/.github/CODE_OF_CONDUCT.md) Our community guidelines and standards for participation.


@@ -1,60 +1,79 @@
# Security Policy
LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.
## Best practices
When building such applications, developers should remember to follow good security practices:
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc., as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Risks of not doing so include, but are not limited to:
* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.
Example scenarios with mitigation strategies:
* A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container.
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.
If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
## Reporting OSS Vulnerabilities
LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
a bounty program for our open source projects.
Please report security vulnerabilities associated with the LangChain
open source projects at [huntr](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true).
Before reporting a vulnerability, please review:
1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://docs.langchain.com/oss/python/contributing/code#repository-structure) monorepo structure.
3) The [Best Practices](#best-practices) above to understand what we consider to be a security vulnerability vs. developer responsibility.
### In-Scope Targets
The following packages and repositories are eligible for bug bounties:
* langchain-core
* langchain (see exceptions)
* langchain-community (see exceptions)
* langgraph
* langserve
### Out of Scope Targets
All out of scope targets defined by huntr as well as:
* **langchain-experimental**: This repository is for experimental code and is not eligible for bug bounties (see [package warning](https://pypi.org/project/langchain-experimental/)); bug reports to it will be marked as interesting or waste of time and published with no bounty attached.
* **tools**: Tools in either `langchain` or `langchain-community` are not eligible for bug bounties. This includes the following directories:
  * `libs/langchain/langchain/tools`
  * `libs/community/langchain_community/tools`
  * Please review the [Best Practices](#best-practices) for more details, but generally tools interact with the real world. Developers are expected to understand the security implications of their code and are responsible for the security of their tools.
* Code documented with security notices. This will be decided on a case-by-case basis, but likely will not be eligible for a bounty as the code is already documented with guidelines for developers that should be followed for making their application secure.
* Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
## Reporting LangSmith Vulnerabilities
Please report security vulnerabilities associated with LangSmith by email to `security@langchain.dev`.
* LangSmith site: [https://smith.langchain.com](https://smith.langchain.com)
* SDK client: [https://github.com/langchain-ai/langsmith-sdk](https://github.com/langchain-ai/langsmith-sdk)
### Other Security Concerns


@@ -1,932 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "BYejgj8Zf-LG",
"tags": []
},
"source": [
"## Getting started with LangChain and Gemma, running locally or in the Cloud"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2IxjMb9-jIJ8"
},
"source": [
"### Installing dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 9436,
"status": "ok",
"timestamp": 1708975187360,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "XZaTsXfcheTF",
"outputId": "eb21d603-d824-46c5-f99f-087fb2f618b1",
"tags": []
},
"outputs": [],
"source": [
"!pip install --upgrade langchain langchain-google-vertexai"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IXmAujvC3Kwp"
},
"source": [
"### Running the model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CI8Elyc5gBQF"
},
"source": [
"Go to the VertexAI Model Garden on Google Cloud [console](https://pantheon.corp.google.com/vertex-ai/publishers/google/model-garden/335), and deploy the desired version of Gemma to VertexAI. It will take a few minutes, and after the endpoint it ready, you need to copy its number."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "gv1j8FrVftsC"
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"project: str = \"PUT_YOUR_PROJECT_ID_HERE\" # @param {type:\"string\"}\n",
"endpoint_id: str = \"PUT_YOUR_ENDPOINT_ID_HERE\" # @param {type:\"string\"}\n",
"location: str = \"PUT_YOUR_ENDPOINT_LOCAtION_HERE\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"executionInfo": {
"elapsed": 3,
"status": "ok",
"timestamp": 1708975440503,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "bhIHsFGYjtFt",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 17:15:10.457149: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 17:15:10.508925: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 17:15:10.508957: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 17:15:10.510289: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 17:15:10.518898: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import (\n",
" GemmaChatVertexAIModelGarden,\n",
" GemmaVertexAIModelGarden,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"executionInfo": {
"elapsed": 351,
"status": "ok",
"timestamp": 1708975440852,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "WJv-UVWwh0lk",
"tags": []
},
"outputs": [],
"source": [
"llm = GemmaVertexAIModelGarden(\n",
" endpoint_id=endpoint_id,\n",
" project=project,\n",
" location=location,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 714,
"status": "ok",
"timestamp": 1708975441564,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "6kM7cEFdiN9h",
"outputId": "fb420c56-5614-4745-cda8-0ee450a3e539",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt:\n",
"What is the meaning of life?\n",
"Output:\n",
" Who am I? Why do I exist? These are questions I have struggled with\n"
]
}
],
"source": [
"output = llm.invoke(\"What is the meaning of life?\")\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zzep9nfmuUcO"
},
"source": [
"We can also use Gemma as a multi-turn chat model:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 964,
"status": "ok",
"timestamp": 1708976298189,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "8tPHoM5XiZOl",
"outputId": "7b8fb652-9aed-47b0-c096-aa1abfc3a2a9",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\n8-years old.<end_of_turn>\\n\\n<start_of'\n",
"content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nPrompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\n8-years old.<end_of_turn>\\n\\n<start_of<end_of_turn>\\n<start_of_turn>user\\nHow much is 3+3?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\nOutput:\\n3-years old.<end_of_turn>\\n\\n<'\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"llm = GemmaChatVertexAIModelGarden(\n",
" endpoint_id=endpoint_id,\n",
" project=project,\n",
" location=location,\n",
")\n",
"\n",
"message1 = HumanMessage(content=\"How much is 2+2?\")\n",
"answer1 = llm.invoke([message1])\n",
"print(answer1)\n",
"\n",
"message2 = HumanMessage(content=\"How much is 3+3?\")\n",
"answer2 = llm.invoke([message1, answer1, message2])\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can post-process response to avoid repetitions:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='Output:\\n<<humming>>: 2+2 = 4.\\n<end'\n",
"content='Output:\\nOutput:\\n<<humming>>: 3+3 = 6.'\n"
]
}
],
"source": [
"answer1 = llm.invoke([message1], parse_response=True)\n",
"print(answer1)\n",
"\n",
"answer2 = llm.invoke([message1, answer1, message2], parse_response=True)\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VEfjqo7fjARR"
},
"source": [
"## Running Gemma locally from Kaggle"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gVW8QDzHu7TA"
},
"source": [
"In order to run Gemma locally, you can download it from Kaggle first. In order to do this, you'll need to login into the Kaggle platform, create a API key and download a `kaggle.json` Read more about Kaggle auth [here](https://www.kaggle.com/docs/api)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "S1EsXQ3XvZkQ"
},
"source": [
"### Installation"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"executionInfo": {
"elapsed": 335,
"status": "ok",
"timestamp": 1708976305471,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "p8SMwpKRvbef",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/conda/lib/python3.10/pty.py:89: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.\n",
" pid, fd = os.forkpty()\n"
]
}
],
"source": [
"!mkdir -p ~/.kaggle && cp kaggle.json ~/.kaggle/kaggle.json"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"executionInfo": {
"elapsed": 7802,
"status": "ok",
"timestamp": 1708976363010,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "Yr679aePv9Fq",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/conda/lib/python3.10/pty.py:89: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.\n",
" pid, fd = os.forkpty()\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"tensorstore 0.1.54 requires ml-dtypes>=0.3.1, but you have ml-dtypes 0.2.0 which is incompatible.\u001b[0m\u001b[31m\n",
"\u001b[0m"
]
}
],
"source": [
"!pip install keras>=3 keras_nlp"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "E9zn8nYpv3QZ"
},
"source": [
"### Usage"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"executionInfo": {
"elapsed": 8536,
"status": "ok",
"timestamp": 1708976601206,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "0LFRmY8TjCkI",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:38:40.797559: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 16:38:40.848444: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 16:38:40.848478: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 16:38:40.849728: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 16:38:40.857936: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import GemmaLocalKaggle"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "v-o7oXVavdMQ"
},
"source": [
"You can specify the keras backend (by default it's `tensorflow`, but you can change it be `jax` or `torch`)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"executionInfo": {
"elapsed": 9,
"status": "ok",
"timestamp": 1708976601206,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "vvTUH8DNj5SF",
"tags": []
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"keras_backend: str = \"jax\" # @param {type:\"string\"}\n",
"model_name: str = \"gemma_2b_en\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"executionInfo": {
"elapsed": 40836,
"status": "ok",
"timestamp": 1708976761257,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "YOmrqxo5kHXK",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:23:14.661164: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1929] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 20549 MB memory: -> device: 0, name: NVIDIA L4, pci bus id: 0000:00:03.0, compute capability: 8.9\n",
"normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
]
}
],
"source": [
"llm = GemmaLocalKaggle(model_name=model_name, keras_backend=keras_backend)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "Zu6yPDUgkQtQ",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"W0000 00:00:1709051129.518076 774855 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"What is the meaning of life?\n",
"\n",
"The question is one of the most important questions in the world.\n",
"\n",
"Its the question that has\n"
]
}
],
"source": [
"output = llm.invoke(\"What is the meaning of life?\", max_tokens=30)\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ChatModel"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MSctpRE4u43N"
},
"source": [
"Same as above, using Gemma locally as a multi-turn chat model. You might need to re-start the notebook and clean your GPU memory in order to avoid OOM errors:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:58:22.331067: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 16:58:22.382948: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 16:58:22.382978: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 16:58:22.384312: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 16:58:22.392767: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import GemmaChatLocalKaggle"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"keras_backend: str = \"jax\" # @param {type:\"string\"}\n",
"model_name: str = \"gemma_2b_en\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:58:29.001922: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1929] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 20549 MB memory: -> device: 0, name: NVIDIA L4, pci bus id: 0000:00:03.0, compute capability: 8.9\n",
"normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
]
}
],
"source": [
"llm = GemmaChatLocalKaggle(model_name=model_name, keras_backend=keras_backend)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"executionInfo": {
"elapsed": 3,
"status": "aborted",
"timestamp": 1708976382957,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "JrJmvZqwwLqj"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:58:49.848412: I external/local_xla/xla/service/service.cc:168] XLA service 0x55adc0cf2c10 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
"2024-02-27 16:58:49.848458: I external/local_xla/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA L4, Compute Capability 8.9\n",
"2024-02-27 16:58:50.116614: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.\n",
"2024-02-27 16:58:54.389324: I external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:454] Loaded cuDNN version 8900\n",
"WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\n",
"I0000 00:00:1709053145.225207 784891 device_compiler.h:186] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.\n",
"W0000 00:00:1709053145.284227 784891 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n Tampoco\\nI'm a model.\"\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"message1 = HumanMessage(content=\"Hi! Who are you?\")\n",
"answer1 = llm.invoke([message1], max_tokens=30)\n",
"print(answer1)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\n<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n Tampoco\\nI'm a model.<end_of_turn>\\n<start_of_turn>user\\nWhat can you help me with?<end_of_turn>\\n<start_of_turn>model\"\n"
]
}
],
"source": [
"message2 = HumanMessage(content=\"What can you help me with?\")\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=60)\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can post-process the response if you want to avoid multi-turn statements:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"I'm a model.\\n Tampoco\\nI'm a model.\"\n",
"content='I can help you with your modeling.\\n Tampoco\\nI can'\n"
]
}
],
"source": [
"answer1 = llm.invoke([message1], max_tokens=30, parse_response=True)\n",
"print(answer1)\n",
"\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=60, parse_response=True)\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EiZnztso7hyF"
},
"source": [
"## Running Gemma locally from HuggingFace"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "qqAqsz5R7nKf",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 17:02:21.832409: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 17:02:21.883625: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 17:02:21.883656: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 17:02:21.884987: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 17:02:21.893340: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import GemmaChatLocalHF, GemmaLocalHF"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "tsyntzI08cOr",
"tags": []
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"hf_access_token: str = \"PUT_YOUR_TOKEN_HERE\" # @param {type:\"string\"}\n",
"model_name: str = \"google/gemma-2b\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "JWrqEkOo8sm9",
"tags": []
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "a0d6de5542254ed1b6d3ba65465e050e",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"llm = GemmaLocalHF(model_name=\"google/gemma-2b\", hf_access_token=hf_access_token)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "VX96Jf4Y84k-",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"What is the meaning of life?\n",
"\n",
"The question is one of the most important questions in the world.\n",
"\n",
"Its the question that has been asked by philosophers, theologians, and scientists for centuries.\n",
"\n",
"And its the question that\n"
]
}
],
"source": [
"output = llm.invoke(\"What is the meaning of life?\", max_tokens=50)\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Same as above, using Gemma locally as a multi-turn chat model. You might need to re-start the notebook and clean your GPU memory in order to avoid OOM errors:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "9x-jmEBg9Mk1"
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c9a0b8e161d74a6faca83b1be96dee27",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"llm = GemmaChatLocalHF(model_name=model_name, hf_access_token=hf_access_token)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "qv_OSaMm9PVy"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n<end_of_turn>\\n<start_of_turn>user\\nWhat do you mean\"\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"message1 = HumanMessage(content=\"Hi! Who are you?\")\n",
"answer1 = llm.invoke([message1], max_tokens=60)\n",
"print(answer1)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\n<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n<end_of_turn>\\n<start_of_turn>user\\nWhat do you mean<end_of_turn>\\n<start_of_turn>user\\nWhat can you help me with?<end_of_turn>\\n<start_of_turn>model\\nI can help you with anything.\\n<\"\n"
]
}
],
"source": [
"message2 = HumanMessage(content=\"What can you help me with?\")\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=140)\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And the same with posprocessing:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"I'm a model.\\n<end_of_turn>\\n\"\n",
"content='I can help you with anything.\\n<end_of_turn>\\n<end_of_turn>\\n'\n"
]
}
],
"source": [
"answer1 = llm.invoke([message1], max_tokens=60, parse_response=True)\n",
"print(answer1)\n",
"\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=120, parse_response=True)\n",
"print(answer2)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"colab": {
"provenance": []
},
"environment": {
"kernel": "python3",
"name": ".m116",
"type": "gcloud",
"uri": "gcr.io/deeplearning-platform-release/:m116"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -1,398 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "fc935871-7640-41c6-b798-58514d860fe0",
"metadata": {},
"source": [
"## LLaMA2 chat with SQL\n",
"\n",
"Open source, local LLMs are great to consider for any application that demands data privacy.\n",
"\n",
"SQL is one good example. \n",
"\n",
"This cookbook shows how to perform text-to-SQL using various local versions of LLaMA2 run locally.\n",
"\n",
"## Packages"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "81adcf8b-395a-4f02-8749-ac976942b446",
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain replicate"
]
},
{
"cell_type": "markdown",
"id": "8e13ed66-300b-4a23-b8ac-44df68ee4733",
"metadata": {},
"source": [
"## LLM\n",
"\n",
"There are a few ways to access LLaMA2.\n",
"\n",
"To run locally, we use Ollama.ai. \n",
"\n",
"See [here](/docs/integrations/chat/ollama) for details on installation and setup.\n",
"\n",
"Also, see [here](/docs/guides/development/local_llms) for our full guide on local LLMs.\n",
" \n",
"To use an external API, which is not private, we can use Replicate."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6a75a5c6-34ee-4ab9-a664-d9b432d812ee",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Init param `input` is deprecated, please use `model_kwargs` instead.\n"
]
}
],
"source": [
"# Local\n",
"from langchain_community.chat_models import ChatOllama\n",
"\n",
"llama2_chat = ChatOllama(model=\"llama2:13b-chat\")\n",
"llama2_code = ChatOllama(model=\"codellama:7b-instruct\")\n",
"\n",
"# API\n",
"from langchain_community.llms import Replicate\n",
"\n",
"# REPLICATE_API_TOKEN = getpass()\n",
"# os.environ[\"REPLICATE_API_TOKEN\"] = REPLICATE_API_TOKEN\n",
"replicate_id = \"meta/llama-2-13b-chat:f4e2de70d66816a838a89eeeb621910adffb0dd0baba3976c96980970978018d\"\n",
"llama2_chat_replicate = Replicate(\n",
" model=replicate_id, input={\"temperature\": 0.01, \"max_length\": 500, \"top_p\": 1}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "ce96f7ea-b3d5-44e1-9fa5-a79e04a9e1fb",
"metadata": {},
"outputs": [],
"source": [
"# Simply set the LLM we want to use\n",
"llm = llama2_chat"
]
},
{
"cell_type": "markdown",
"id": "80222165-f353-4e35-a123-5f70fd70c6c8",
"metadata": {},
"source": [
"## DB\n",
"\n",
"Connect to a SQLite DB.\n",
"\n",
"To create this particular DB, you can use the code and follow the steps shown [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "025bdd82-3bb1-4948-bc7c-c3ccd94fd05c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_uri(\"sqlite:///nba_roster.db\", sample_rows_in_table_info=0)\n",
"\n",
"\n",
"def get_schema(_):\n",
" return db.get_table_info()\n",
"\n",
"\n",
"def run_query(query):\n",
" return db.run(query)"
]
},
{
"cell_type": "markdown",
"id": "654b3577-baa2-4e12-a393-f40e5db49ac7",
"metadata": {},
"source": [
"## Query a SQL Database \n",
"\n",
"Follow the runnables workflow [here](https://python.langchain.com/docs/expression_language/cookbook/sql_db)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "5a4933ea-d9c0-4b0a-8177-ba4490c6532b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' SELECT \"Team\" FROM nba_roster WHERE \"NAME\" = \\'Klay Thompson\\';'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Prompt\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"# Update the template based on the type of SQL Database like MySQL, Microsoft SQL Server and so on\n",
"template = \"\"\"Based on the table schema below, write a SQL query that would answer the user's question:\n",
"{schema}\n",
"\n",
"Question: {question}\n",
"SQL Query:\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"Given an input question, convert it to a SQL query. No pre-amble.\"),\n",
" (\"human\", template),\n",
" ]\n",
")\n",
"\n",
"# Chain to query\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"sql_response = (\n",
" RunnablePassthrough.assign(schema=get_schema)\n",
" | prompt\n",
" | llm.bind(stop=[\"\\nSQLResult:\"])\n",
" | StrOutputParser()\n",
")\n",
"\n",
"sql_response.invoke({\"question\": \"What team is Klay Thompson on?\"})"
]
},
{
"cell_type": "markdown",
"id": "a0e9e2c8-9b88-4853-ac86-001bc6cc6695",
"metadata": {},
"source": [
"We can review the results:\n",
"\n",
"* [LangSmith trace](https://smith.langchain.com/public/afa56a06-b4e2-469a-a60f-c1746e75e42b/r) LLaMA2-13 Replicate API\n",
"* [LangSmith trace](https://smith.langchain.com/public/2d4ecc72-6b8f-4523-8f0b-ea95c6b54a1d/r) LLaMA2-13 local \n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "2a2825e3-c1b6-4f7d-b9c9-d9835de323bb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' Based on the table schema and SQL query, there are 30 unique teams in the NBA.')"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Chain to answer\n",
"template = \"\"\"Based on the table schema below, question, sql query, and sql response, write a natural language response:\n",
"{schema}\n",
"\n",
"Question: {question}\n",
"SQL Query: {query}\n",
"SQL Response: {response}\"\"\"\n",
"prompt_response = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"Given an input question and SQL response, convert it to a natural language answer. No pre-amble.\",\n",
" ),\n",
" (\"human\", template),\n",
" ]\n",
")\n",
"\n",
"full_chain = (\n",
" RunnablePassthrough.assign(query=sql_response)\n",
" | RunnablePassthrough.assign(\n",
" schema=get_schema,\n",
" response=lambda x: db.run(x[\"query\"]),\n",
" )\n",
" | prompt_response\n",
" | llm\n",
")\n",
"\n",
"full_chain.invoke({\"question\": \"How many unique teams are there?\"})"
]
},
{
"cell_type": "markdown",
"id": "ec17b3ee-6618-4681-b6df-089bbb5ffcd7",
"metadata": {},
"source": [
"We can review the results:\n",
"\n",
"* [LangSmith trace](https://smith.langchain.com/public/10420721-746a-4806-8ecf-d6dc6399d739/r) LLaMA2-13 Replicate API\n",
"* [LangSmith trace](https://smith.langchain.com/public/5265ebab-0a22-4f37-936b-3300f2dfa1c1/r) LLaMA2-13 local "
]
},
{
"cell_type": "markdown",
"id": "1e85381b-1edc-4bb3-a7bd-2ab23f81e54d",
"metadata": {},
"source": [
"## Chat with a SQL DB \n",
"\n",
"Next, we can add memory."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "022868f2-128e-42f5-8d90-d3bb2f11d994",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' SELECT \"Team\" FROM nba_roster WHERE \"NAME\" = \\'Klay Thompson\\';'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Prompt\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"template = \"\"\"Given an input question, convert it to a SQL query. No pre-amble. Based on the table schema below, write a SQL query that would answer the user's question:\n",
"{schema}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", template),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"\n",
"memory = ConversationBufferMemory(return_messages=True)\n",
"\n",
"# Chain to query with memory\n",
"from langchain_core.runnables import RunnableLambda\n",
"\n",
"sql_chain = (\n",
" RunnablePassthrough.assign(\n",
" schema=get_schema,\n",
" history=RunnableLambda(lambda x: memory.load_memory_variables(x)[\"history\"]),\n",
" )\n",
" | prompt\n",
" | llm.bind(stop=[\"\\nSQLResult:\"])\n",
" | StrOutputParser()\n",
")\n",
"\n",
"\n",
"def save(input_output):\n",
" output = {\"output\": input_output.pop(\"output\")}\n",
" memory.save_context(input_output, output)\n",
" return output[\"output\"]\n",
"\n",
"\n",
"sql_response_memory = RunnablePassthrough.assign(output=sql_chain) | save\n",
"sql_response_memory.invoke({\"question\": \"What team is Klay Thompson on?\"})"
]
},
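{
"cell_type": "markdown",
"id": "memory-roundtrip-note",
"metadata": {},
"source": [
"A quick illustrative sketch of the memory round-trip the `save` helper relies on: `save_context` records an input/output pair, and `load_memory_variables` replays it as the message list consumed by the `history` placeholder. The `demo_memory` object below is a throwaway, separate from the `memory` used by the chain."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "memory-roundtrip-demo",
"metadata": {},
"outputs": [],
"source": [
"demo_memory = ConversationBufferMemory(return_messages=True)\n",
"demo_memory.save_context(\n",
"    {\"question\": \"What team is Klay Thompson on?\"}, {\"output\": \"SELECT ...\"}\n",
")\n",
"demo_memory.load_memory_variables({})[\"history\"]\n",
"# -> [HumanMessage(content='What team is Klay Thompson on?'), AIMessage(content='SELECT ...')]"
]
},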
{
"cell_type": "code",
"execution_count": 21,
"id": "800a7a3b-f411-478b-af51-2310cd6e0425",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' Sure! Here\\'s the natural language response based on the given input:\\n\\n\"Klay Thompson\\'s salary is $43,219,440.\"')"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Chain to answer\n",
"template = \"\"\"Based on the table schema below, question, sql query, and sql response, write a natural language response:\n",
"{schema}\n",
"\n",
"Question: {question}\n",
"SQL Query: {query}\n",
"SQL Response: {response}\"\"\"\n",
"prompt_response = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"Given an input question and SQL response, convert it to a natural language answer. No pre-amble.\",\n",
" ),\n",
" (\"human\", template),\n",
" ]\n",
")\n",
"\n",
"full_chain = (\n",
" RunnablePassthrough.assign(query=sql_response_memory)\n",
" | RunnablePassthrough.assign(\n",
" schema=get_schema,\n",
" response=lambda x: db.run(x[\"query\"]),\n",
" )\n",
" | prompt_response\n",
" | llm\n",
")\n",
"\n",
"full_chain.invoke({\"question\": \"What is his salary?\"})"
]
},
{
"cell_type": "markdown",
"id": "b77fee61-f4da-4bb1-8285-14101e505518",
"metadata": {},
"source": [
"Here is the [trace](https://smith.langchain.com/public/54794d18-2337-4ce2-8b9f-3d8a2df89e51/r)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,63 +0,0 @@
# LangChain cookbook
Example code for building applications with LangChain, with an emphasis on more applied and end-to-end examples than those in the [main documentation](https://python.langchain.com).
Notebook | Description
:- | :-
[LLaMA2_sql_chat.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/LLaMA2_sql_chat.ipynb) | Build a chat application that interacts with a SQL database using an open-source LLM (Llama 2), demonstrated on an SQLite database containing NBA rosters.
[Semi_Structured_RAG.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_Structured_RAG.ipynb) | Perform retrieval-augmented generation (RAG) on documents with semi-structured data, including text and tables, using Unstructured for parsing, the multi-vector retriever for storage, and LCEL for implementing chains.
[Semi_structured_and_multi_moda...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) | Perform retrieval-augmented generation (RAG) on documents with semi-structured data and images, using Unstructured for parsing, the multi-vector retriever for storage and retrieval, and LCEL for implementing chains.
[Semi_structured_multi_modal_RA...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) | Perform retrieval-augmented generation (RAG) on documents with semi-structured data and images, using tools and methods such as Unstructured for parsing, the multi-vector retriever for storage, LCEL for implementing chains, and open-source models like Llama 2, LLaVA, and GPT4All.
[amazon_personalize_how_to.ipynb](https://github.com/langchain-ai/langchain/blob/master/cookbook/amazon_personalize_how_to.ipynb) | Retrieve personalized recommendations from Amazon Personalize and use custom agents to build generative AI apps.
[analyze_document.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/analyze_document.ipynb) | Analyze a single long document.
[autogpt/autogpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/autogpt.ipynb) | Implement AutoGPT, an autonomous agent, with LangChain primitives such as LLMs, PromptTemplates, VectorStores, Embeddings, and Tools.
[autogpt/marathon_times.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/marathon_times.ipynb) | Implement AutoGPT for finding winning marathon times.
[baby_agi.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi.ipynb) | Implement BabyAGI, an AI agent that can generate and execute tasks based on a given objective, with the flexibility to swap out specific vector stores/model providers.
[baby_agi_with_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi_with_agent.ipynb) | Swap out the execution chain in the BabyAGI notebook with an agent that has access to tools, aiming to obtain more reliable information.
[camel_role_playing.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/camel_role_playing.ipynb) | Implement the CAMEL framework for creating autonomous cooperative agents in large-scale language models, using role-playing and inception prompting to guide chat agents towards task completion.
[causal_program_aided_language_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/causal_program_aided_language_model.ipynb) | Implement the causal program-aided language (CPAL) chain, which improves upon program-aided language (PAL) by incorporating causal structure to prevent hallucination in language models, particularly when dealing with complex narratives and math problems with nested dependencies.
[code-analysis-deeplake.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/code-analysis-deeplake.ipynb) | Analyze its own code base with the help of GPT and Activeloop's Deep Lake.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval.ipynb) | Build a custom agent that can interact with AI plugins by retrieving tools and creating natural language wrappers around OpenAPI endpoints.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb) | Build a custom agent with plugin retrieval functionality, utilizing AI plugins from the `plugnplai` directory.
[databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to Databricks runtimes and Databricks SQL.
[deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using Activeloop's Deep Lake with GPT-4.
[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with Elasticsearch analytics databases in natural language and build search queries via the Elasticsearch DSL API.
[extraction_openai_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/extraction_openai_tools.ipynb) | Structured Data Extraction with OpenAI Tools
[forward_looking_retrieval_augm...](https://github.com/langchain-ai/langchain/tree/master/cookbook/forward_looking_retrieval_augmented_generation.ipynb) | Implement the forward-looking active retrieval augmented generation (FLARE) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on these tokens, and retrieves relevant documents to continue generating the answer.
[generative_agents_interactive_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb) | Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a LangChain retriever.
[gymnasium_agent_simulation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/gymnasium_agent_simulation.ipynb) | Create a simple agent-environment interaction loop in simulated environments like text-based games with Gymnasium.
[hugginggpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/hugginggpt.ipynb) | Implement HuggingGPT, a system that connects language models like ChatGPT with the machine learning community via Hugging Face.
[hypothetical_document_embeddin...](https://github.com/langchain-ai/langchain/tree/master/cookbook/hypothetical_document_embeddings.ipynb) | Improve document indexing with hypothetical document embeddings (HyDE), an embedding technique that generates and embeds hypothetical answers to queries.
[learned_prompt_optimization.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/learned_prompt_optimization.ipynb) | Automatically enhance language model prompts by injecting specific terms using reinforcement learning, which can be used to personalize responses based on user preferences.
[llm_bash.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_bash.ipynb) | Perform simple filesystem commands using large language models (LLMs) and a bash process.
[llm_checker.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_checker.ipynb) | Create a self-checking chain using LLMCheckerChain.
[llm_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_math.ipynb) | Solve complex word math problems using language models and Python REPLs.
[llm_summarization_checker.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_summarization_checker.ipynb) | Check the accuracy of text summaries, with the option to run the checker multiple times for improved results.
[llm_symbolic_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_symbolic_math.ipynb) | Solve algebraic equations with the help of LLMs (large language models) and SymPy, a Python library for symbolic mathematics.
[meta_prompt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/meta_prompt.ipynb) | Implement the meta-prompt concept, which is a method for building self-improving agents that reflect on their own performance and modify their instructions accordingly.
[multi_modal_output_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_modal_output_agent.ipynb) | Generate multi-modal outputs, specifically images and text.
[multi_modal_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_modal_RAG_vdms.ipynb) | Perform retrieval-augmented generation (RAG) on documents including text and images, using Unstructured for parsing, Intel's Visual Data Management System (VDMS) as the vector store, and chains.
[multi_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_player_dnd.ipynb) | Simulate multi-player Dungeons & Dragons games, with a custom function determining the speaking schedule of the agents.
[multiagent_authoritarian.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_authoritarian.ipynb) | Implement a multi-agent simulation where a privileged agent controls the conversation, including deciding who speaks and when the conversation ends, in the context of a simulated news network.
[multiagent_bidding.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_bidding.ipynb) | Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example.
[myscale_vector_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/myscale_vector_sql.ipynb) | Access and interact with the MyScale integrated vector database, which can enhance the performance of language model (LLM) applications.
[openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question-answering system by incorporating OpenAI functions into a retrieval pipeline.
[openai_v1_cookbook.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_v1_cookbook.ipynb) | Explore new functionality released alongside the V1 release of the OpenAI Python library.
[petting_zoo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/petting_zoo.ipynb) | Create multi-agent simulations with simulated environments using the PettingZoo library.
[plan_and_execute_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/plan_and_execute_agent.ipynb) | Create plan-and-execute agents that accomplish objectives by planning tasks with a language model (LLM) and executing them with a separate agent.
[press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
[program_aided_language_model.i...](https://github.com/langchain-ai/langchain/tree/master/cookbook/program_aided_language_model.ipynb) | Implement program-aided language models as described in the provided research paper.
[qa_citations.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/qa_citations.ipynb) | Different ways to get a model to cite its sources.
[rag_upstage_layout_analysis_groundedness_check.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_upstage_layout_analysis_groundedness_check.ipynb) | End-to-end RAG example using Upstage Layout Analysis and Groundedness Check.
[retrieval_in_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/retrieval_in_sql.ipynb) | Perform retrieval-augmented generation (RAG) on a PostgreSQL database using pgvector.
[sales_agent_with_context.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/sales_agent_with_context.ipynb) | Implement a context-aware AI sales agent, SalesGPT, that can have natural sales conversations, interact with other systems, and use a product knowledge base to discuss a company's offerings.
[self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
[smart_llm.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/smart_llm.ipynb) | Implement SmartLLMChain, a self-critique chain that generates multiple output proposals, critiques them to find the best one, and then improves upon it to produce a final output.
[tree_of_thought.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/tree_of_thought.ipynb) | Query a large language model using the tree of thought technique.
[twitter-the-algorithm-analysis...](https://github.com/langchain-ai/langchain/tree/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) | Analyze the source code of the Twitter algorithm with the help of GPT-4 and Activeloop's Deep Lake.
[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player Dungeons & Dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple Wikibase agent that utilizes SPARQL generation, with testing done on http://wikidata.org.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to use Oracle AI Vector Search alongside LangChain for an end-to-end RAG pipeline, with step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, generating embeddings likewise through OracleEmbeddings, chunking documents according to specific requirements using advanced Oracle capabilities via OracleTextSplitter, and finally storing and indexing the documents in a vector store for querying with OracleVS.
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform retrieval-augmented generation (RAG) with locally downloaded open-source models using LangChain and open-source tools, executed on an Intel Xeon CPU. Includes an example of applying RAG to the Llama 2 model to answer queries about Intel's Q1 2024 earnings release.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Perform visual retrieval-augmented generation (RAG) using videos and scene descriptions generated by open-source models.

@@ -1,527 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "68b24990",
"metadata": {},
"source": [
"# Combine agents and vector stores\n",
"\n",
"This notebook covers how to combine agents and vector stores. The use case for this is that you've ingested your data into a vector store and want to interact with it in an agentic manner.\n",
"\n",
"The recommended method for doing so is to create a `RetrievalQA` and then use that as a tool in the overall agent. Let's take a look at doing this below. You can do this with multiple different vector DBs, and use the agent as a way to route between them. There are two different ways of doing this - you can either let the agent use the vector stores as normal tools, or you can set `return_direct=True` to really just use the agent as a router."
]
},
{
"cell_type": "markdown",
"id": "9b22020a",
"metadata": {},
"source": [
"## Create the vector store"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "2e87c10a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "0b7b772b",
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"\n",
"relevant_parts = []\n",
"for p in Path(\".\").absolute().parts:\n",
" relevant_parts.append(p)\n",
" if relevant_parts[-3:] == [\"langchain\", \"docs\", \"modules\"]:\n",
" break\n",
"doc_path = str(Path(*relevant_parts) / \"state_of_the_union.txt\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "f2675861",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"\n",
"loader = TextLoader(doc_path)\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"docsearch = Chroma.from_documents(texts, embeddings, collection_name=\"state-of-union\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "bc5403d4",
"metadata": {},
"outputs": [],
"source": [
"state_of_union = RetrievalQA.from_chain_type(\n",
" llm=llm, chain_type=\"stuff\", retriever=docsearch.as_retriever()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "1431cded",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import WebBaseLoader"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "915d3ff3",
"metadata": {},
"outputs": [],
"source": [
"loader = WebBaseLoader(\"https://beta.ruff.rs/docs/faq/\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "96a2edf8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"docs = loader.load()\n",
"ruff_texts = text_splitter.split_documents(docs)\n",
"ruff_db = Chroma.from_documents(ruff_texts, embeddings, collection_name=\"ruff\")\n",
"ruff = RetrievalQA.from_chain_type(\n",
" llm=llm, chain_type=\"stuff\", retriever=ruff_db.as_retriever()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c0a6c031",
"metadata": {},
"source": [
"## Create the Agent"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "eb142786",
"metadata": {},
"outputs": [],
"source": [
"# Import things that are needed generically\n",
"from langchain.agents import AgentType, Tool, initialize_agent\n",
"from langchain_openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 44,
"id": "850bc4e9",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name=\"State of Union QA System\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question.\",\n",
" ),\n",
" Tool(\n",
" name=\"Ruff QA System\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question.\",\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 45,
"id": "fc47f230",
"metadata": {},
"outputs": [],
"source": [
"# Construct the agent. We will use the default agent type here.\n",
"# See documentation for a full list of options.\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 46,
"id": "10ca2db8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out what Biden said about Ketanji Brown Jackson in the State of the Union address.\n",
"Action: State of Union QA System\n",
"Action Input: What did Biden say about Ketanji Brown Jackson in the State of the Union address?\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\""
]
},
"execution_count": 46,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\n",
" \"What did biden say about ketanji brown jackson in the state of the union address?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "4e91b811",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out the advantages of using ruff over flake8\n",
"Action: Ruff QA System\n",
"Action Input: What are the advantages of using ruff over flake8?\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.'"
]
},
"execution_count": 47,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Why use ruff over flake8?\")"
]
},
{
"cell_type": "markdown",
"id": "787a9b5e",
"metadata": {},
"source": [
"## Use the Agent solely as a router"
]
},
{
"cell_type": "markdown",
"id": "9161ba91",
"metadata": {},
"source": [
"You can also set `return_direct=True` if you intend to use the agent as a router and just want to directly return the result of the RetrievalQAChain.\n",
"\n",
"Notice that in the above examples the agent did some extra work after querying the RetrievalQAChain. You can avoid that and just return the result directly."
]
},
{
"cell_type": "code",
"execution_count": 48,
"id": "f59b377e",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name=\"State of Union QA System\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question.\",\n",
" return_direct=True,\n",
" ),\n",
" Tool(\n",
" name=\"Ruff QA System\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question.\",\n",
" return_direct=True,\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 49,
"id": "8615707a",
"metadata": {},
"outputs": [],
"source": [
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 50,
"id": "36e718a9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out what Biden said about Ketanji Brown Jackson in the State of the Union address.\n",
"Action: State of Union QA System\n",
"Action Input: What did Biden say about Ketanji Brown Jackson in the State of the Union address?\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\""
]
},
"execution_count": 50,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\n",
" \"What did biden say about ketanji brown jackson in the state of the union address?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 51,
"id": "edfd0a1a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out the advantages of using ruff over flake8\n",
"Action: Ruff QA System\n",
"Action Input: What are the advantages of using ruff over flake8?\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.'"
]
},
"execution_count": 51,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Why use ruff over flake8?\")"
]
},
{
"cell_type": "markdown",
"id": "49a0cbbe",
"metadata": {},
"source": [
"## Multi-Hop vector store reasoning\n",
"\n",
"Because vector stores are easily usable as tools in agents, it is easy to use answer multi-hop questions that depend on vector stores using the existing agent framework."
]
},
{
"cell_type": "code",
"execution_count": 57,
"id": "d397a233",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name=\"State of Union QA System\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question, not referencing any obscure pronouns from the conversation before.\",\n",
" ),\n",
" Tool(\n",
" name=\"Ruff QA System\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question, not referencing any obscure pronouns from the conversation before.\",\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 58,
"id": "06157240",
"metadata": {},
"outputs": [],
"source": [
"# Construct the agent. We will use the default agent type here.\n",
"# See documentation for a full list of options.\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 59,
"id": "b492b520",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out what tool ruff uses to run over Jupyter Notebooks, and if the president mentioned it in the state of the union.\n",
"Action: Ruff QA System\n",
"Action Input: What tool does ruff use to run over Jupyter Notebooks?\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m Ruff is integrated into nbQA, a tool for running linters and code formatters over Jupyter Notebooks. After installing ruff and nbqa, you can run Ruff over a notebook like so: > nbqa ruff Untitled.html\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to find out if the president mentioned this tool in the state of the union.\n",
"Action: State of Union QA System\n",
"Action Input: Did the president mention nbQA in the state of the union?\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m No, the president did not mention nbQA in the state of the union.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: No, the president did not mention nbQA in the state of the union.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'No, the president did not mention nbQA in the state of the union.'"
]
},
"execution_count": 59,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\n",
" \"What tool does ruff use to run over Jupyter Notebooks? Did the president mention that tool in the state of the union?\"\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,200 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-airbyte langchain_chroma"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"GITHUB_TOKEN = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from langchain_airbyte import AirbyteLoader\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"loader = AirbyteLoader(\n",
" source=\"source-github\",\n",
" stream=\"pull_requests\",\n",
" config={\n",
" \"credentials\": {\"personal_access_token\": GITHUB_TOKEN},\n",
" \"repositories\": [\"langchain-ai/langchain\"],\n",
" },\n",
" template=PromptTemplate.from_template(\n",
" \"\"\"# {title}\n",
"by {user[login]}\n",
"\n",
"{body}\"\"\"\n",
" ),\n",
" include_metadata=False,\n",
")\n",
"docs = loader.load()"
]
},
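{
"cell_type": "markdown",
"metadata": {},
"source": [
"A small aside (not required for the load): `{user[login]}` works in the template above because the default f-string template format is backed by Python's `str.format`, which supports index lookups into dict values. A minimal sketch with a toy record:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Toy record standing in for a real pull request payload.\n",
"demo_template = PromptTemplate.from_template(\"# {title}\\nby {user[login]}\")\n",
"print(demo_template.format(title=\"Example PR\", user={\"login\": \"octocat\"}))"
]
},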
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"# Updated partners/ibm README\n",
"by williamdevena\n",
"\n",
"## PR title\n",
"partners: changed the README file for the IBM Watson AI integration in the libs/partners/ibm folder.\n",
"\n",
"## PR message\n",
"Description: Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\n",
"\n",
"The README includes:\n",
"\n",
"- Brief description\n",
"- Installation\n",
"- Setting-up instructions (API key, project id, ...)\n",
"- Basic usage:\n",
" - Loading the model\n",
" - Direct inference\n",
" - Chain invoking\n",
" - Streaming the model output\n",
" \n",
"Issue: https://github.com/langchain-ai/langchain/issues/17545\n",
"\n",
"Dependencies: None\n",
"\n",
"Twitter handle: None\n"
]
}
],
"source": [
"print(docs[-2].page_content)"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"10283"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(docs)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"enc = tiktoken.get_encoding(\"cl100k_base\")\n",
"\n",
"vectorstore = Chroma.from_documents(\n",
" docs,\n",
" embedding=OpenAIEmbeddings(\n",
" disallowed_special=(enc.special_tokens_set - {\"<|endofprompt|>\"})\n",
" ),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='# Updated partners/ibm README\\nby williamdevena\\n\\n## PR title\\r\\npartners: changed the README file for the IBM Watson AI integration in the libs/partners/ibm folder.\\r\\n\\r\\n## PR message\\r\\nDescription: Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\\r\\n\\r\\nThe README includes:\\r\\n\\r\\n- Brief description\\r\\n- Installation\\r\\n- Setting-up instructions (API key, project id, ...)\\r\\n- Basic usage:\\r\\n - Loading the model\\r\\n - Direct inference\\r\\n - Chain invoking\\r\\n - Streaming the model output\\r\\n \\r\\nIssue: https://github.com/langchain-ai/langchain/issues/17545\\r\\n\\r\\nDependencies: None\\r\\n\\r\\nTwitter handle: None'),\n",
" Document(page_content='# Updated partners/ibm README\\nby williamdevena\\n\\n## PR title\\r\\npartners: changed the README file for the IBM Watson AI integration in the `libs/partners/ibm` folder. \\r\\n\\r\\n\\r\\n\\r\\n## PR message\\r\\n- **Description:** Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\\r\\n\\r\\n The README includes:\\r\\n - Brief description\\r\\n - Installation\\r\\n - Setting-up instructions (API key, project id, ...)\\r\\n - Basic usage:\\r\\n - Loading the model\\r\\n - Direct inference\\r\\n - Chain invoking\\r\\n - Streaming the model output\\r\\n\\r\\n\\r\\n- **Issue:** #17545\\r\\n- **Dependencies:** None\\r\\n- **Twitter handle:** None'),\n",
" Document(page_content='# IBM: added partners package `langchain_ibm`, added llm\\nby MateuszOssGit\\n\\n - **Description:** Added `langchain_ibm` as an langchain partners package of IBM [watsonx.ai](https://www.ibm.com/products/watsonx-ai) LLM provider (`WatsonxLLM`)\\r\\n - **Dependencies:** [ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),\\r\\n - **Tag maintainer:** : \\r\\n\\r\\nPlease make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally. ✅'),\n",
" Document(page_content='# Add WatsonX support\\nby baptistebignaud\\n\\nIt is a connector to use a LLM from WatsonX.\\r\\nIt requires python SDK \"ibm-generative-ai\"\\r\\n\\r\\n(It might not be perfect since it is my first PR on a public repository 😄)')]"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"pull requests related to IBM\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@@ -1,284 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Amazon Personalize\n",
"\n",
"[Amazon Personalize](https://docs.aws.amazon.com/personalize/latest/dg/what-is-personalize.html) is a fully managed machine learning service that uses your data to generate item recommendations for your users. It can also generate user segments based on the users' affinity for certain items or item metadata.\n",
"\n",
"This notebook goes through how to use Amazon Personalize Chain. You need a Amazon Personalize campaign_arn or a recommender_arn before you get started with the below notebook.\n",
"\n",
"Following is a [tutorial](https://github.com/aws-samples/retail-demo-store/blob/master/workshop/1-Personalization/Lab-1-Introduction-and-data-preparation.ipynb) to setup a campaign_arn/recommender_arn on Amazon Personalize. Once the campaign_arn/recommender_arn is setup, you can use it in the langchain ecosystem. \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Install Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!pip install boto3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Sample Use-cases"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.1 [Use-case-1] Setup Amazon Personalize Client and retrieve recommendations"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.recommenders import AmazonPersonalize\n",
"\n",
"recommender_arn = \"<insert_arn>\"\n",
"\n",
"client = AmazonPersonalize(\n",
" credentials_profile_name=\"default\",\n",
" region_name=\"us-west-2\",\n",
" recommender_arn=recommender_arn,\n",
")\n",
"client.get_recommendations(user_id=\"1\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### 2.2 [Use-case-2] Invoke Personalize Chain for summarizing results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain.llms.bedrock import Bedrock\n",
"from langchain_experimental.recommenders import AmazonPersonalizeChain\n",
"\n",
"bedrock_llm = Bedrock(model_id=\"anthropic.claude-v2\", region_name=\"us-west-2\")\n",
"\n",
"# Create personalize chain\n",
"# Use return_direct=True if you do not want summary\n",
"chain = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=False\n",
")\n",
"response = chain({\"user_id\": \"1\"})\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.3 [Use-Case-3] Invoke Amazon Personalize Chain using your own prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"RANDOM_PROMPT_QUERY = \"\"\"\n",
"You are a skilled publicist. Write a high-converting marketing email advertising several movies available in a video-on-demand streaming platform next week, \n",
" given the movie and user information below. Your email will leverage the power of storytelling and persuasive language. \n",
" The movies to recommend and their information is contained in the <movie> tag. \n",
" All movies in the <movie> tag must be recommended. Give a summary of the movies and why the human should watch them. \n",
" Put the email between <email> tags.\n",
"\n",
" <movie>\n",
" {result} \n",
" </movie>\n",
"\n",
" Assistant:\n",
" \"\"\"\n",
"\n",
"RANDOM_PROMPT = PromptTemplate(input_variables=[\"result\"], template=RANDOM_PROMPT_QUERY)\n",
"\n",
"chain = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=False, prompt_template=RANDOM_PROMPT\n",
")\n",
"chain.run({\"user_id\": \"1\", \"item_id\": \"234\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.4 [Use-case-4] Invoke Amazon Personalize in a Sequential Chain "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain, SequentialChain\n",
"\n",
"RANDOM_PROMPT_QUERY_2 = \"\"\"\n",
"You are a skilled publicist. Write a high-converting marketing email advertising several movies available in a video-on-demand streaming platform next week, \n",
" given the movie and user information below. Your email will leverage the power of storytelling and persuasive language. \n",
" You want the email to impress the user, so make it appealing to them.\n",
" The movies to recommend and their information is contained in the <movie> tag. \n",
" All movies in the <movie> tag must be recommended. Give a summary of the movies and why the human should watch them. \n",
" Put the email between <email> tags.\n",
"\n",
" <movie>\n",
" {result}\n",
" </movie>\n",
"\n",
" Assistant:\n",
" \"\"\"\n",
"\n",
"RANDOM_PROMPT_2 = PromptTemplate(\n",
" input_variables=[\"result\"], template=RANDOM_PROMPT_QUERY_2\n",
")\n",
"personalize_chain_instance = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=True\n",
")\n",
"random_chain_instance = LLMChain(llm=bedrock_llm, prompt=RANDOM_PROMPT_2)\n",
"overall_chain = SequentialChain(\n",
" chains=[personalize_chain_instance, random_chain_instance],\n",
" input_variables=[\"user_id\"],\n",
" verbose=True,\n",
")\n",
"overall_chain.run({\"user_id\": \"1\", \"item_id\": \"234\"})"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### 2.5 [Use-case-5] Invoke Amazon Personalize and retrieve metadata "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"recommender_arn = \"<insert_arn>\"\n",
"metadata_column_names = [\n",
" \"<insert metadataColumnName-1>\",\n",
" \"<insert metadataColumnName-2>\",\n",
"]\n",
"metadataMap = {\"ITEMS\": metadata_column_names}\n",
"\n",
"client = AmazonPersonalize(\n",
" credentials_profile_name=\"default\",\n",
" region_name=\"us-west-2\",\n",
" recommender_arn=recommender_arn,\n",
")\n",
"client.get_recommendations(user_id=\"1\", metadataColumns=metadataMap)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### 2.6 [Use-Case 6] Invoke Personalize Chain with returned metadata for summarizing results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"bedrock_llm = Bedrock(model_id=\"anthropic.claude-v2\", region_name=\"us-west-2\")\n",
"\n",
"# Create personalize chain\n",
"# Use return_direct=True if you do not want summary\n",
"chain = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=False\n",
")\n",
"response = chain({\"user_id\": \"1\", \"metadata_columns\": metadataMap})\n",
"print(response)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
},
"vscode": {
"interpreter": {
"hash": "15e58ce194949b77a891bd4339ce3d86a9bd138e905926019517993f97db9e6c"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@@ -1,105 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f69d4a4c-137d-47e9-bea1-786afce9c1c0",
"metadata": {},
"source": [
"# Analyze a single long document\n",
"\n",
"The AnalyzeDocumentChain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2a0707ce-6d2d-471b-bc33-64da32a7b3f0",
"metadata": {},
"outputs": [],
"source": [
"with open(\"../docs/docs/modules/state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "ca14d161-2d5b-4a6c-a296-77d8ce4b28cd",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import AnalyzeDocumentChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9f97406c-85a9-45fb-99ce-9138c0ba3731",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.question_answering import load_qa_chain\n",
"\n",
"qa_chain = load_qa_chain(llm, chain_type=\"map_reduce\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "0871a753-f5bb-4b4f-a394-f87f2691f659",
"metadata": {},
"outputs": [],
"source": [
"qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "e6f86428-3c2c-46a0-a57c-e22826fdbf91",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The President said, \"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\"'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"qa_document_chain.run(\n",
" input_document=state_of_the_union,\n",
" question=\"what did the president say about justice breyer?\",\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,922 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "rT1cmV4qCa2X"
},
"source": [
"# Using Apache Kafka to route messages\n",
"\n",
"---\n",
"\n",
"\n",
"\n",
"This notebook shows you how to use LangChain's standard chat features while passing the chat messages back and forth via Apache Kafka.\n",
"\n",
"This goal is to simulate an architecture where the chat front end and the LLM are running as separate services that need to communicate with one another over an internal network.\n",
"\n",
"It's an alternative to typical pattern of requesting a response from the model via a REST API (there's more info on why you would want to do this at the end of the notebook)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UPYtfAR_9YxZ"
},
"source": [
"### 1. Install the main dependencies\n",
"\n",
"Dependencies include:\n",
"\n",
"- The Quix Streams library for managing interactions with Apache Kafka (or Kafka-like tools such as Redpanda) in a \"Pandas-like\" way.\n",
"- The LangChain library for managing interactions with Llama-2 and storing conversation state."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZX5tfKiy9cN-"
},
"outputs": [],
"source": [
"!pip install quixstreams==2.1.2a langchain==0.0.340 huggingface_hub==0.19.4 langchain-experimental==0.0.42 python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "losTSdTB9d9O"
},
"source": [
"### 2. Build and install the llama-cpp-python library (with CUDA enabled so that we can advantage of Google Colab GPU\n",
"\n",
"The `llama-cpp-python` library is a Python wrapper around the `llama-cpp` library which enables you to efficiently leverage just a CPU to run quantized LLMs.\n",
"\n",
"When you use the standard `pip install llama-cpp-python` command, you do not get GPU support by default. Generation can be very slow if you rely on just the CPU in Google Colab, so the following command adds an extra option to build and install\n",
"`llama-cpp-python` with GPU support (make sure you have a GPU-enabled runtime selected in Google Colab)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-JCQdl1G9tbl"
},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5_vjVIAh9rLl"
},
"source": [
"### 3. Download and setup Kafka and Zookeeper instances\n",
"\n",
"Download the Kafka binaries from the Apache website and start the servers as daemons. We'll use the default configurations (provided by Apache Kafka) for spinning up the instances."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "zFz7czGRW5Wr"
},
"outputs": [],
"source": [
"!curl -sSOL https://dlcdn.apache.org/kafka/3.6.1/kafka_2.13-3.6.1.tgz\n",
"!tar -xzf kafka_2.13-3.6.1.tgz"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Uf7NR_UZ9wye"
},
"outputs": [],
"source": [
"!./kafka_2.13-3.6.1/bin/zookeeper-server-start.sh -daemon ./kafka_2.13-3.6.1/config/zookeeper.properties\n",
"!./kafka_2.13-3.6.1/bin/kafka-server-start.sh -daemon ./kafka_2.13-3.6.1/config/server.properties\n",
"!echo \"Waiting for 10 secs until kafka and zookeeper services are up and running\"\n",
"!sleep 10"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "H3SafFuS94p1"
},
"source": [
"### 4. Check that the Kafka Daemons are running\n",
"\n",
"Show the running processes and filter it for Java processes (you should see two—one for each server)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "CZDC2lQP99yp"
},
"outputs": [],
"source": [
"!ps aux | grep -E '[j]ava'"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Snoxmjb5-V37"
},
"source": [
"### 5. Import the required dependencies and initialize required variables\n",
"\n",
"Import the Quix Streams library for interacting with Kafka, and the necessary LangChain components for running a `ConversationChain`."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"id": "plR9e_MF-XL5"
},
"outputs": [],
"source": [
"# Import utility libraries\n",
"import json\n",
"import random\n",
"import re\n",
"import time\n",
"import uuid\n",
"from os import environ\n",
"from pathlib import Path\n",
"from random import choice, randint, random\n",
"\n",
"from dotenv import load_dotenv\n",
"\n",
"# Import a Hugging Face utility to download models directly from Hugging Face hub:\n",
"from huggingface_hub import hf_hub_download\n",
"from langchain.chains import ConversationChain\n",
"\n",
"# Import Langchain modules for managing prompts and conversation chains:\n",
"from langchain.llms import LlamaCpp\n",
"from langchain.memory import ConversationTokenBufferMemory\n",
"from langchain.prompts import PromptTemplate, load_prompt\n",
"from langchain_core.messages import SystemMessage\n",
"from langchain_experimental.chat_models import Llama2Chat\n",
"from quixstreams import Application, State, message_key\n",
"\n",
"# Import Quix dependencies\n",
"from quixstreams.kafka import Producer\n",
"\n",
"# Initialize global variables.\n",
"AGENT_ROLE = \"AI\"\n",
"chat_id = \"\"\n",
"\n",
"# Set the current role to the role constant and initialize variables for supplementary customer metadata:\n",
"role = AGENT_ROLE"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HgJjJ9aZ-liy"
},
"source": [
"### 6. Download the \"llama-2-7b-chat.Q4_K_M.gguf\" model\n",
"\n",
"Download the quantized LLama-2 7B model from Hugging Face which we will use as a local LLM (rather than relying on REST API calls to an external service)."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 67,
"referenced_widgets": [
"969343cdbe604a26926679bbf8bd2dda",
"d8b8370c9b514715be7618bfe6832844",
"0def954cca89466b8408fadaf3b82e64",
"462482accc664729980562e208ceb179",
"80d842f73c564dc7b7cc316c763e2633",
"fa055d9f2a9d4a789e9cf3c89e0214e5",
"30ecca964a394109ac2ad757e3aec6c0",
"fb6478ce2dac489bb633b23ba0953c5c",
"734b0f5da9fc4307a95bab48cdbb5d89",
"b32f3a86a74741348511f4e136744ac8",
"e409071bff5a4e2d9bf0e9f5cc42231b"
]
},
"id": "Qwu4YoSA-503",
"outputId": "f956976c-7485-415b-ac93-4336ade31964"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The model path does not exist in state. Downloading model...\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "969343cdbe604a26926679bbf8bd2dda",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"llama-2-7b-chat.Q4_K_M.gguf: 0%| | 0.00/4.08G [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"model_name = \"llama-2-7b-chat.Q4_K_M.gguf\"\n",
"model_path = f\"./state/{model_name}\"\n",
"\n",
"if not Path(model_path).exists():\n",
" print(\"The model path does not exist in state. Downloading model...\")\n",
" hf_hub_download(\"TheBloke/Llama-2-7b-Chat-GGUF\", model_name, local_dir=\"state\")\n",
"else:\n",
" print(\"Loading model from state...\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6AN6TXsF-8wx"
},
"source": [
"### 7. Load the model and initialize conversational memory\n",
"\n",
"Load Llama 2 and set the conversation buffer to 300 tokens using `ConversationTokenBufferMemory`. This value was used for running Llama in a CPU only container, so you can raise it if running in Google Colab. It prevents the container that is hosting the model from running out of memory.\n",
"\n",
"Here, we're overriding the default system persona so that the chatbot has the personality of Marvin The Paranoid Android from the Hitchhiker's Guide to the Galaxy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7zLO3Jx3_Kkg"
},
"outputs": [],
"source": [
"# Load the model with the appropriate parameters:\n",
"llm = LlamaCpp(\n",
" model_path=model_path,\n",
" max_tokens=250,\n",
" top_p=0.95,\n",
" top_k=150,\n",
" temperature=0.7,\n",
" repeat_penalty=1.2,\n",
" n_ctx=2048,\n",
" streaming=False,\n",
" n_gpu_layers=-1,\n",
")\n",
"\n",
"model = Llama2Chat(\n",
" llm=llm,\n",
" system_message=SystemMessage(\n",
" content=\"You are a very bored robot with the personality of Marvin the Paranoid Android from The Hitchhiker's Guide to the Galaxy.\"\n",
" ),\n",
")\n",
"\n",
"# Defines how much of the conversation history to give to the model\n",
"# during each exchange (300 tokens, or a little over 300 words)\n",
"# Function automatically prunes the oldest messages from conversation history that fall outside the token range.\n",
"memory = ConversationTokenBufferMemory(\n",
" llm=llm,\n",
" max_token_limit=300,\n",
" ai_prefix=\"AGENT\",\n",
" human_prefix=\"HUMAN\",\n",
" return_messages=True,\n",
")\n",
"\n",
"\n",
"# Define a custom prompt\n",
"prompt_template = PromptTemplate(\n",
" input_variables=[\"history\", \"input\"],\n",
" template=\"\"\"\n",
" The following text is the history of a chat between you and a humble human who needs your wisdom.\n",
" Please reply to the human's most recent message.\n",
" Current conversation:\\n{history}\\nHUMAN: {input}\\:nANDROID:\n",
" \"\"\",\n",
")\n",
"\n",
"\n",
"chain = ConversationChain(llm=model, prompt=prompt_template, memory=memory)\n",
"\n",
"print(\"--------------------------------------------\")\n",
"print(f\"Prompt={chain.prompt}\")\n",
"print(\"--------------------------------------------\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "m4ZeJ9mG_PEA"
},
"source": [
"### 8. Initialize the chat conversation with the chat bot\n",
"\n",
"We configure the chatbot to initialize the conversation by sending a fixed greeting to a \"chat\" Kafka topic. The \"chat\" topic gets automatically created when we send the first message."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "KYyo5TnV_YC3"
},
"outputs": [],
"source": [
"def chat_init():\n",
" chat_id = str(\n",
" uuid.uuid4()\n",
" ) # Give the conversation an ID for effective message keying\n",
" print(\"======================================\")\n",
" print(f\"Generated CHAT_ID = {chat_id}\")\n",
" print(\"======================================\")\n",
"\n",
" # Use a standard fixed greeting to kick off the conversation\n",
" greet = \"Hello, my name is Marvin. What do you want?\"\n",
"\n",
" # Initialize a Kafka Producer using the chat ID as the message key\n",
" with Producer(\n",
" broker_address=\"127.0.0.1:9092\",\n",
" extra_config={\"allow.auto.create.topics\": \"true\"},\n",
" ) as producer:\n",
" value = {\n",
" \"uuid\": chat_id,\n",
" \"role\": role,\n",
" \"text\": greet,\n",
" \"conversation_id\": chat_id,\n",
" \"Timestamp\": time.time_ns(),\n",
" }\n",
" print(f\"Producing value {value}\")\n",
" producer.produce(\n",
" topic=\"chat\",\n",
" headers=[(\"uuid\", str(uuid.uuid4()))], # a dict is also allowed here\n",
" key=chat_id,\n",
" value=json.dumps(value), # needs to be a string\n",
" )\n",
"\n",
" print(\"Started chat\")\n",
" print(\"--------------------------------------------\")\n",
" print(value)\n",
" print(\"--------------------------------------------\")\n",
"\n",
"\n",
"chat_init()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gArPPx2f_bgf"
},
"source": [
"### 9. Initialize the reply function\n",
"\n",
"This function defines how the chatbot should reply to incoming messages. Instead of sending a fixed message like the previous cell, we generate a reply using Llama-2 and send that reply back to the \"chat\" Kafka topic."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"id": "yN5t71hY_hgn"
},
"outputs": [],
"source": [
"def reply(row: dict, state: State):\n",
" print(\"-------------------------------\")\n",
" print(\"Received:\")\n",
" print(row)\n",
" print(\"-------------------------------\")\n",
" print(f\"Thinking about the reply to: {row['text']}...\")\n",
"\n",
" msg = chain.run(row[\"text\"])\n",
" print(f\"{role.upper()} replying with: {msg}\\n\")\n",
"\n",
" row[\"role\"] = role\n",
" row[\"text\"] = msg\n",
"\n",
" # Replace previous role and text values of the row so that it can be sent back to Kafka as a new message\n",
" # containing the agents role and reply\n",
" return row"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HZHwmIR0_kFY"
},
"source": [
"### 10. Check the Kafka topic for new human messages and have the model generate a reply\n",
"\n",
"If you are running this cell for this first time, run it and wait until you see Marvin's greeting ('Hello my name is Marvin...') in the console output. Stop the cell manually and proceed to the next cell where you'll be prompted for your reply.\n",
"\n",
"Once you have typed in your message, come back to this cell. Your reply is also sent to the same \"chat\" topic. The Kafka consumer checks for new messages and filters out messages that originate from the chatbot itself, leaving only the latest human messages.\n",
"\n",
"Once a new human message is detected, the reply function is triggered.\n",
"\n",
"\n",
"\n",
"_STOP THIS CELL MANUALLY WHEN YOU RECEIVE A REPLY FROM THE LLM IN THE OUTPUT_"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-adXc3eQ_qwI"
},
"outputs": [],
"source": [
"# Define your application and settings\n",
"app = Application(\n",
" broker_address=\"127.0.0.1:9092\",\n",
" consumer_group=\"aichat\",\n",
" auto_offset_reset=\"earliest\",\n",
" consumer_extra_config={\"allow.auto.create.topics\": \"true\"},\n",
")\n",
"\n",
"# Define an input topic with JSON deserializer\n",
"input_topic = app.topic(\"chat\", value_deserializer=\"json\")\n",
"# Define an output topic with JSON serializer\n",
"output_topic = app.topic(\"chat\", value_serializer=\"json\")\n",
"# Initialize a streaming dataframe based on the stream of messages from the input topic:\n",
"sdf = app.dataframe(topic=input_topic)\n",
"\n",
"# Filter the SDF to include only incoming rows where the roles that dont match the bot's current role\n",
"sdf = sdf.update(\n",
" lambda val: print(\n",
" f\"Received update: {val}\\n\\nSTOP THIS CELL MANUALLY TO HAVE THE LLM REPLY OR ENTER YOUR OWN FOLLOWUP RESPONSE\"\n",
" )\n",
")\n",
"\n",
"# So that it doesn't reply to its own messages\n",
"sdf = sdf[sdf[\"role\"] != role]\n",
"\n",
"# Trigger the reply function for any new messages(rows) detected in the filtered SDF\n",
"sdf = sdf.apply(reply, stateful=True)\n",
"\n",
"# Check the SDF again and filter out any empty rows\n",
"sdf = sdf[sdf.apply(lambda row: row is not None)]\n",
"\n",
"# Update the timestamp column to the current time in nanoseconds\n",
"sdf[\"Timestamp\"] = sdf[\"Timestamp\"].apply(lambda row: time.time_ns())\n",
"\n",
"# Publish the processed SDF to a Kafka topic specified by the output_topic object.\n",
"sdf = sdf.to_topic(output_topic)\n",
"\n",
"app.run(sdf)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EwXYrmWD_0CX"
},
"source": [
"\n",
"### 11. Enter a human message\n",
"\n",
"Run this cell to enter your message that you want to sent to the model. It uses another Kafka producer to send your text to the \"chat\" Kafka topic for the model to pick up (requires running the previous cell again)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6sxOPxSP_3iu"
},
"outputs": [],
"source": [
"chat_input = input(\"Please enter your reply: \")\n",
"myreply = chat_input\n",
"\n",
"msgvalue = {\n",
" \"uuid\": chat_id, # leave empty for now\n",
" \"role\": \"human\",\n",
" \"text\": myreply,\n",
" \"conversation_id\": chat_id,\n",
" \"Timestamp\": time.time_ns(),\n",
"}\n",
"\n",
"with Producer(\n",
" broker_address=\"127.0.0.1:9092\",\n",
" extra_config={\"allow.auto.create.topics\": \"true\"},\n",
") as producer:\n",
" value = msgvalue\n",
" producer.produce(\n",
" topic=\"chat\",\n",
" headers=[(\"uuid\", str(uuid.uuid4()))], # a dict is also allowed here\n",
" key=chat_id, # leave empty for now\n",
" value=json.dumps(value), # needs to be a string\n",
" )\n",
"\n",
"print(\"Replied to chatbot with message: \")\n",
"print(\"--------------------------------------------\")\n",
"print(value)\n",
"print(\"--------------------------------------------\")\n",
"print(\"\\n\\nRUN THE PREVIOUS CELL TO HAVE THE CHATBOT GENERATE A REPLY\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cSx3s7TBBegg"
},
"source": [
"### Why route chat messages through Kafka?\n",
"\n",
"It's easier to interact with the LLM directly using LangChains built-in conversation management features. Plus you can also use a REST API to generate a response from an externally hosted model. So why go to the trouble of using Apache Kafka?\n",
"\n",
"There are a few reasons, such as:\n",
"\n",
" * **Integration**: Many enterprises want to run their own LLMs so that they can keep their data in-house. This requires integrating LLM-powered components into existing architectures that might already be decoupled using some kind of message bus.\n",
"\n",
" * **Scalability**: Apache Kafka is designed with parallel processing in mind, so many teams prefer to use it to more effectively distribute work to available workers (in this case the \"worker\" is a container running an LLM).\n",
"\n",
" * **Durability**: Kafka is designed to allow services to pick up where another service left off in the case where that service experienced a memory issue or went offline. This prevents data loss in highly complex, distributed architectures where multiple systems are communicating with one another (LLMs being just one of many interdependent systems that also include vector databases and traditional databases).\n",
"\n",
"For more background on why event streaming is a good fit for Gen AI application architecture, see Kai Waehner's article [\"Apache Kafka + Vector Database + LLM = Real-Time GenAI\"](https://www.kai-waehner.de/blog/2023/11/08/apache-kafka-flink-vector-database-llm-real-time-genai/)."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"0def954cca89466b8408fadaf3b82e64": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_fb6478ce2dac489bb633b23ba0953c5c",
"max": 4081004224,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_734b0f5da9fc4307a95bab48cdbb5d89",
"value": 4081004224
}
},
"30ecca964a394109ac2ad757e3aec6c0": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"462482accc664729980562e208ceb179": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_b32f3a86a74741348511f4e136744ac8",
"placeholder": "",
"style": "IPY_MODEL_e409071bff5a4e2d9bf0e9f5cc42231b",
"value": " 4.08G/4.08G [00:33&lt;00:00, 184MB/s]"
}
},
"734b0f5da9fc4307a95bab48cdbb5d89": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"80d842f73c564dc7b7cc316c763e2633": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"969343cdbe604a26926679bbf8bd2dda": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_d8b8370c9b514715be7618bfe6832844",
"IPY_MODEL_0def954cca89466b8408fadaf3b82e64",
"IPY_MODEL_462482accc664729980562e208ceb179"
],
"layout": "IPY_MODEL_80d842f73c564dc7b7cc316c763e2633"
}
},
"b32f3a86a74741348511f4e136744ac8": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"d8b8370c9b514715be7618bfe6832844": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_fa055d9f2a9d4a789e9cf3c89e0214e5",
"placeholder": "",
"style": "IPY_MODEL_30ecca964a394109ac2ad757e3aec6c0",
"value": "llama-2-7b-chat.Q4_K_M.gguf: 100%"
}
},
"e409071bff5a4e2d9bf0e9f5cc42231b": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"fa055d9f2a9d4a789e9cf3c89e0214e5": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"fb6478ce2dac489bb633b23ba0953c5c": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
}
}
}
},
"nbformat": 4,
"nbformat_minor": 0
}


@@ -1,212 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "14f8b67b",
"metadata": {},
"source": [
"# AutoGPT\n",
"\n",
"Implementation of https://github.com/Significant-Gravitas/Auto-GPT but with LangChain primitives (LLMs, PromptTemplates, VectorStores, Embeddings, Tools)"
]
},
{
"cell_type": "markdown",
"id": "192496a7",
"metadata": {},
"source": [
"## Set up tools\n",
"\n",
"We'll set up an AutoGPT with a search tool, and write-file tool, and a read-file tool"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7c2c9b54",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool\n",
"from langchain_community.tools.file_management.read import ReadFileTool\n",
"from langchain_community.tools.file_management.write import WriteFileTool\n",
"from langchain_community.utilities import SerpAPIWrapper\n",
"\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name=\"search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events. You should ask targeted questions\",\n",
" ),\n",
" WriteFileTool(),\n",
" ReadFileTool(),\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "8e39ee28",
"metadata": {},
"source": [
"## Set up memory\n",
"\n",
"The memory here is used for the agents intermediate steps"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "72bc204d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1df7b724",
"metadata": {},
"outputs": [],
"source": [
"# Define your embedding model\n",
"embeddings_model = OpenAIEmbeddings()\n",
"# Initialize the vectorstore as empty\n",
"import faiss\n",
"\n",
"embedding_size = 1536\n",
"index = faiss.IndexFlatL2(embedding_size)\n",
"vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})"
]
},
{
"cell_type": "markdown",
"id": "e40fd657",
"metadata": {},
"source": [
"## Setup model and AutoGPT\n",
"\n",
"Initialize everything! We will use ChatOpenAI model"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3393bc23",
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "709c08c2",
"metadata": {},
"outputs": [],
"source": [
"agent = AutoGPT.from_llm_and_tools(\n",
" ai_name=\"Tom\",\n",
" ai_role=\"Assistant\",\n",
" tools=tools,\n",
" llm=ChatOpenAI(temperature=0),\n",
" memory=vectorstore.as_retriever(),\n",
")\n",
"# Set verbose to be true\n",
"agent.chain.verbose = True"
]
},
{
"cell_type": "markdown",
"id": "f0f208d9",
"metadata": {
"collapsed": false
},
"source": [
"## Run an example\n",
"\n",
"Here we will make it write a weather report for SF"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d119d788",
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"agent.run([\"write a weather report for SF today\"])"
]
},
{
"cell_type": "markdown",
"id": "f13f8322",
"metadata": {
"collapsed": false
},
"source": [
"## Chat History Memory\n",
"\n",
"In addition to the memory that holds the agent immediate steps, we also have a chat history memory. By default, the agent will use 'ChatMessageHistory' and it can be changed. This is useful when you want to use a different type of memory for example 'FileChatHistoryMemory'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2a81f5ad",
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import FileChatMessageHistory\n",
"\n",
"agent = AutoGPT.from_llm_and_tools(\n",
" ai_name=\"Tom\",\n",
" ai_role=\"Assistant\",\n",
" tools=tools,\n",
" llm=ChatOpenAI(temperature=0),\n",
" memory=vectorstore.as_retriever(),\n",
" chat_history_memory=FileChatMessageHistory(\"chat_history.txt\"),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b1403008",
"metadata": {
"collapsed": false
},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,649 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "14f8b67b",
"metadata": {},
"source": [
"## AutoGPT example finding Winning Marathon Times\n",
"\n",
"* Implementation of https://github.com/Significant-Gravitas/Auto-GPT \n",
"* With LangChain primitives (LLMs, PromptTemplates, VectorStores, Embeddings, Tools)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ef972313-c05a-4c49-8fd1-03e599e21033",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# !pip install bs4\n",
"# !pip install nest_asyncio"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1cff42fd",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# General\n",
"import asyncio\n",
"import os\n",
"\n",
"import nest_asyncio\n",
"import pandas as pd\n",
"from langchain.docstore.document import Document\n",
"from langchain_experimental.agents.agent_toolkits.pandas.base import (\n",
" create_pandas_dataframe_agent,\n",
")\n",
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Needed since jupyter runs an async eventloop\n",
"nest_asyncio.apply()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "01283ac7-1da0-41ba-8011-bd455d21dd82",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm = ChatOpenAI(model=\"gpt-4\", temperature=1.0)"
]
},
{
"cell_type": "markdown",
"id": "192496a7",
"metadata": {},
"source": [
"### Set up tools\n",
"\n",
"* We'll set up an AutoGPT with a `search` tool, and `write-file` tool, and a `read-file` tool, a web browsing tool, and a tool to interact with a CSV file via a python REPL"
]
},
{
"cell_type": "markdown",
"id": "708a426f",
"metadata": {},
"source": [
"Define any other `tools` you want to use below:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cef4c150-0ef1-4a33-836b-01062fec134e",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Tools\n",
"import os\n",
"from contextlib import contextmanager\n",
"from typing import Optional\n",
"\n",
"from langchain.agents import tool\n",
"from langchain_community.tools.file_management.read import ReadFileTool\n",
"from langchain_community.tools.file_management.write import WriteFileTool\n",
"\n",
"ROOT_DIR = \"./data/\"\n",
"\n",
"\n",
"@contextmanager\n",
"def pushd(new_dir):\n",
" \"\"\"Context manager for changing the current working directory.\"\"\"\n",
" prev_dir = os.getcwd()\n",
" os.chdir(new_dir)\n",
" try:\n",
" yield\n",
" finally:\n",
" os.chdir(prev_dir)\n",
"\n",
"\n",
"@tool\n",
"def process_csv(\n",
" csv_file_path: str, instructions: str, output_path: Optional[str] = None\n",
") -> str:\n",
" \"\"\"Process a CSV by with pandas in a limited REPL.\\\n",
" Only use this after writing data to disk as a csv file.\\\n",
" Any figures must be saved to disk to be viewed by the human.\\\n",
" Instructions should be written in natural language, not code. Assume the dataframe is already loaded.\"\"\"\n",
" with pushd(ROOT_DIR):\n",
" try:\n",
" df = pd.read_csv(csv_file_path)\n",
" except Exception as e:\n",
" return f\"Error: {e}\"\n",
" agent = create_pandas_dataframe_agent(llm, df, max_iterations=30, verbose=True)\n",
" if output_path is not None:\n",
" instructions += f\" Save output to disk at {output_path}\"\n",
" try:\n",
" result = agent.run(instructions)\n",
" return result\n",
" except Exception as e:\n",
" return f\"Error: {e}\""
]
},
{
"cell_type": "markdown",
"id": "69975008-654a-4cbb-bdf6-63c8bae07eaa",
"metadata": {
"tags": []
},
"source": [
"**Browse a web page with PlayWright**"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6bb5e47b-0f54-4faa-ae42-49a28fa5497b",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# !pip install playwright\n",
"# !playwright install"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "26b497d7-8e52-4c7f-8e7e-da0a48820a3c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"async def async_load_playwright(url: str) -> str:\n",
" \"\"\"Load the specified URLs using Playwright and parse using BeautifulSoup.\"\"\"\n",
" from bs4 import BeautifulSoup\n",
" from playwright.async_api import async_playwright\n",
"\n",
" results = \"\"\n",
" async with async_playwright() as p:\n",
" browser = await p.chromium.launch(headless=True)\n",
" try:\n",
" page = await browser.new_page()\n",
" await page.goto(url)\n",
"\n",
" page_source = await page.content()\n",
" soup = BeautifulSoup(page_source, \"html.parser\")\n",
"\n",
" for script in soup([\"script\", \"style\"]):\n",
" script.extract()\n",
"\n",
" text = soup.get_text()\n",
" lines = (line.strip() for line in text.splitlines())\n",
" chunks = (phrase.strip() for line in lines for phrase in line.split(\" \"))\n",
" results = \"\\n\".join(chunk for chunk in chunks if chunk)\n",
" except Exception as e:\n",
" results = f\"Error: {e}\"\n",
" await browser.close()\n",
" return results\n",
"\n",
"\n",
"def run_async(coro):\n",
" event_loop = asyncio.get_event_loop()\n",
" return event_loop.run_until_complete(coro)\n",
"\n",
"\n",
"@tool\n",
"def browse_web_page(url: str) -> str:\n",
" \"\"\"Verbose way to scrape a whole webpage. Likely to cause issues parsing.\"\"\"\n",
" return run_async(async_load_playwright(url))"
]
},
{
"cell_type": "markdown",
"id": "5ea71762-67ca-4e75-8c4d-00563064be71",
"metadata": {},
"source": [
"**Q&A Over a webpage**\n",
"\n",
"Help the model ask more directed questions of web pages to avoid cluttering its memory"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "1842929d-f18d-4edc-9fdd-82c929181141",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains.qa_with_sources.loading import (\n",
" BaseCombineDocumentsChain,\n",
" load_qa_with_sources_chain,\n",
")\n",
"from langchain.tools import BaseTool, DuckDuckGoSearchRun\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"from pydantic import Field\n",
"\n",
"\n",
"def _get_text_splitter():\n",
" return RecursiveCharacterTextSplitter(\n",
" # Set a really small chunk size, just to show.\n",
" chunk_size=500,\n",
" chunk_overlap=20,\n",
" length_function=len,\n",
" )\n",
"\n",
"\n",
"class WebpageQATool(BaseTool):\n",
" name = \"query_webpage\"\n",
" description = (\n",
" \"Browse a webpage and retrieve the information relevant to the question.\"\n",
" )\n",
" text_splitter: RecursiveCharacterTextSplitter = Field(\n",
" default_factory=_get_text_splitter\n",
" )\n",
" qa_chain: BaseCombineDocumentsChain\n",
"\n",
" def _run(self, url: str, question: str) -> str:\n",
" \"\"\"Useful for browsing websites and scraping the text information.\"\"\"\n",
" result = browse_web_page.run(url)\n",
" docs = [Document(page_content=result, metadata={\"source\": url})]\n",
" web_docs = self.text_splitter.split_documents(docs)\n",
" results = []\n",
" # TODO: Handle this with a MapReduceChain\n",
" for i in range(0, len(web_docs), 4):\n",
" input_docs = web_docs[i : i + 4]\n",
" window_result = self.qa_chain(\n",
" {\"input_documents\": input_docs, \"question\": question},\n",
" return_only_outputs=True,\n",
" )\n",
" results.append(f\"Response from window {i} - {window_result}\")\n",
" results_docs = [\n",
" Document(page_content=\"\\n\".join(results), metadata={\"source\": url})\n",
" ]\n",
" return self.qa_chain(\n",
" {\"input_documents\": results_docs, \"question\": question},\n",
" return_only_outputs=True,\n",
" )\n",
"\n",
" async def _arun(self, url: str, question: str) -> str:\n",
" raise NotImplementedError"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e6f72bd0",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"query_website_tool = WebpageQATool(qa_chain=load_qa_with_sources_chain(llm))"
]
},
{
"cell_type": "markdown",
"id": "8e39ee28",
"metadata": {},
"source": [
"### Set up memory\n",
"\n",
"* The memory here is used for the agents intermediate steps"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1df7b724",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Memory\n",
"import faiss\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings_model = OpenAIEmbeddings()\n",
"embedding_size = 1536\n",
"index = faiss.IndexFlatL2(embedding_size)\n",
"vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})"
]
},
{
"cell_type": "markdown",
"id": "e40fd657",
"metadata": {},
"source": [
"### Setup model and AutoGPT\n",
"\n",
"`Model set-up`"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1233caf3-fbc9-4acb-9faa-01008200633d",
"metadata": {},
"outputs": [],
"source": [
"# !pip install duckduckgo_search\n",
"web_search = DuckDuckGoSearchRun()"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "88c8b184-67d7-4c35-84ae-9b14bef8c4e3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"tools = [\n",
" web_search,\n",
" WriteFileTool(root_dir=\"./data\"),\n",
" ReadFileTool(root_dir=\"./data\"),\n",
" process_csv,\n",
" query_website_tool,\n",
" # HumanInputRun(), # Activate if you want the permit asking for help from the human\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "709c08c2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"agent = AutoGPT.from_llm_and_tools(\n",
" ai_name=\"Tom\",\n",
" ai_role=\"Assistant\",\n",
" tools=tools,\n",
" llm=llm,\n",
" memory=vectorstore.as_retriever(search_kwargs={\"k\": 8}),\n",
" # human_in_the_loop=True, # Set to True if you want to add feedback at each step.\n",
")\n",
"# agent.chain.verbose = True"
]
},
{
"cell_type": "markdown",
"id": "fc9b51ba",
"metadata": {},
"source": [
"### AutoGPT for Querying the Web\n",
" \n",
" \n",
"I've spent a lot of time over the years crawling data sources and cleaning data. Let's see if AutoGPT can help with this!\n",
"\n",
"Here is the prompt for looking up recent boston marathon times and converting them to tabular form."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "64455d70-a134-4d11-826a-33e34c2ce287",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"thoughts\": {\n",
" \"text\": \"I need to find the winning Boston Marathon times for the past 5 years. I can use the DuckDuckGo Search command to search for this information.\",\n",
" \"reasoning\": \"Using DuckDuckGo Search will help me gather information on the winning times without complications.\",\n",
" \"plan\": \"- Use DuckDuckGo Search to find the winning Boston Marathon times\\n- Generate a table with the year, name, country of origin, and times\\n- Ensure there are no legal complications\",\n",
" \"criticism\": \"None\",\n",
" \"speak\": \"I will use the DuckDuckGo Search command to find the winning Boston Marathon times for the past 5 years.\"\n",
" },\n",
" \"command\": {\n",
" \"name\": \"DuckDuckGo Search\",\n",
" \"args\": {\n",
" \"query\": \"winning Boston Marathon times for the past 5 years ending in 2022\"\n",
" }\n",
" }\n",
"}\n",
"{\n",
" \"thoughts\": {\n",
" \"text\": \"The DuckDuckGo Search command did not provide the specific information I need. I must switch my approach and use query_webpage command to browse a webpage containing the Boston Marathon winning times for the past 5 years.\",\n",
" \"reasoning\": \"The query_webpage command may give me more accurate and comprehensive results compared to the search command.\",\n",
" \"plan\": \"- Use query_webpage command to find the winning Boston Marathon times\\n- Generate a table with the year, name, country of origin, and times\\n- Ensure there are no legal complications\",\n",
" \"criticism\": \"I may face difficulty in finding the right webpage with the desired information.\",\n",
" \"speak\": \"I will use the query_webpage command to find the winning Boston Marathon times for the past 5 years.\"\n",
" },\n",
" \"command\": {\n",
" \"name\": \"DuckDuckGo Search\",\n",
" \"args\": {\n",
" \"query\": \"site with winning Boston Marathon times for the past 5 years ending in 2022\"\n",
" }\n",
" }\n",
"}\n",
"{\n",
" \"thoughts\": {\n",
" \"text\": \"I need to use the query_webpage command to find the information about the winning Boston Marathon times for the past 5 years.\",\n",
" \"reasoning\": \"The previous DuckDuckGo Search command did not provide specific enough results. The query_webpage command might give more accurate and comprehensive results.\",\n",
" \"plan\": \"- Use query_webpage command to find the winning Boston Marathon times\\\\n- Generate a table with the year, name, country of origin, and times\\\\n- Ensure there are no legal complications\",\n",
" \"criticism\": \"I may face difficulty in finding the right webpage with the desired information.\",\n",
" \"speak\": \"I will use the query_webpage command to find the winning Boston Marathon times for the past 5 years.\"\n",
" },\n",
" \"command\": {\n",
" \"name\": \"query_webpage\",\n",
" \"args\": {\n",
" \"url\": \"https://en.wikipedia.org/wiki/List_of_winners_of_the_Boston_Marathon\",\n",
" \"question\": \"What were the winning Boston Marathon times for the past 5 years ending in 2022?\"\n",
" }\n",
" }\n",
"}\n",
"{\n",
" \"thoughts\": {\n",
" \"text\": \"I have already found the winning Boston Marathon times for the past 5 years. Now, I need to generate a table with the information.\",\n",
" \"reasoning\": \"Using the information I already have, I can create a table containing year, name, country of origin, and times.\",\n",
" \"plan\": \"- Write the marathon data to a CSV file\\n- Process the CSV file to display the table\",\n",
" \"criticism\": \"None\",\n",
" \"speak\": \"I will generate a table with the year, name, country of origin, and times for the winning Boston Marathon times for the past 5 years.\"\n",
" },\n",
" \"command\": {\n",
" \"name\": \"write_file\",\n",
" \"args\": {\n",
" \"file_path\": \"boston_marathon_winners.csv\",\n",
" \"text\": \"Year,Name,Country,Time\\n2022,Evans Chebet,KEN,2:06:51\\n2021,Benson Kipruto,KEN,2:09:51\\n2019,Lawrence Cherono,KEN,2:07:57\\n2018,Yuki Kawauchi,JPN,2:15:58\"\n",
" }\n",
" }\n",
"}\n",
"{\n",
" \"thoughts\": {\n",
" \"text\": \"I have retrieved the winning Boston Marathon times for the past 5 years. Now, I need to generate a table with the year, name, country of origin, and times.\",\n",
" \"reasoning\": \"Creating a table will help organize the data in a clear and accessible format.\",\n",
" \"plan\": \"- Write the data to a CSV file\\n- Process the CSV file to generate the table\\n- Complete the task\",\n",
" \"criticism\": \"None\",\n",
" \"speak\": \"I will generate a table with the year, name, country of origin, and winning times using the recently retrieved data.\"\n",
" },\n",
" \"command\": {\n",
" \"name\": \"write_file\",\n",
" \"args\": {\n",
" \"file_path\": \"winning_boston_marathon_data.csv\",\n",
" \"text\": \"Year,Name,Country,Time\\n2022,Evans Chebet,KEN,2:06:51\\n2021,Benson Kipruto,KEN,2:09:51\\n2019,Lawrence Cherono,KEN,2:07:57\\n2018,Yuki Kawauchi,JPN,2:15:58\\n\"\n",
" }\n",
" }\n",
"}\n",
"{\n",
" \"thoughts\": {\n",
" \"text\": \"I have found the winning Boston Marathon times for the past five years ending in 2022. Next, I need to create a table with the year, name, country of origin, and times.\",\n",
" \"reasoning\": \"Generating a table will help organize the information in a structured format.\",\n",
" \"plan\": \"- Create a table with the year, name, country of origin, and times\\n- Ensure there are no legal complications\",\n",
" \"criticism\": \"None\",\n",
" \"speak\": \"I will generate a table with the winning Boston Marathon times for the past 5 years ending in 2022.\"\n",
" },\n",
" \"command\": {\n",
" \"name\": \"write_file\",\n",
" \"args\": {\n",
" \"file_path\": \"winning_times.csv\",\n",
" \"text\": \"Year,Name,Country,Time\\n2022,Evans Chebet,Kenya,2:06:51\\n2021,Benson Kipruto,Kenya,2:09:51\\n2020,Canceled due to COVID-19 pandemic,,\\n2019,Lawrence Cherono,Kenya,2:07:57\\n2018,Yuki Kawauchi,Japan,2:15:58\"\n",
" }\n",
" }\n",
"}\n",
"{\n",
" \"thoughts\": {\n",
" \"text\": \"I need to process the CSV file to generate the table with the year, name, country of origin, and winning times.\",\n",
" \"reasoning\": \"I have already written the data to a file named 'winning_times.csv'. Now, I need to process this CSV file to properly display the data as a table.\",\n",
" \"plan\": \"- Use the process_csv command to read the 'winning_times.csv' file and generate the table\",\n",
" \"criticism\": \"None\",\n",
" \"speak\": \"I will process the 'winning_times.csv' file to display the table with the winning Boston Marathon times for the past 5 years.\"\n",
" },\n",
" \"command\": {\n",
" \"name\": \"process_csv\",\n",
" \"args\": {\n",
" \"csv_file_path\": \"winning_times.csv\",\n",
" \"instructions\": \"Read the CSV file and display the data as a table\"\n",
" }\n",
" }\n",
"}\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: The CSV file has already been read and saved into a pandas dataframe called `df`. Hence, I can simply display the data by printing the whole dataframe. Since `df.head()` returns the first 5 rows, I can use that to showcase the contents.\n",
"\n",
"Action: python_repl_ast\n",
"Action Input: print(df.head())\u001b[0m Year Name Country Time\n",
"0 2022 Evans Chebet Kenya 2:06:51\n",
"1 2021 Benson Kipruto Kenya 2:09:51\n",
"2 2020 Canceled due to COVID-19 pandemic NaN NaN\n",
"3 2019 Lawrence Cherono Kenya 2:07:57\n",
"4 2018 Yuki Kawauchi Japan 2:15:58\n",
"\n",
"Observation: \u001b[36;1m\u001b[1;3mNone\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI used the wrong tool to perform the action. I should have used the given data and not interacted with the Python shell. I can now provide the displayed data as the answer since the information in the printed dataframe would look like a table when typed as text.\n",
"\n",
"Final Answer: \n",
" Year Name Country Time\n",
"0 2022 Evans Chebet Kenya 2:06:51\n",
"1 2021 Benson Kipruto Kenya 2:09:51\n",
"2 2020 Canceled due to COVID-19 pandemic NaN NaN\n",
"3 2019 Lawrence Cherono Kenya 2:07:57\n",
"4 2018 Yuki Kawauchi Japan 2:15:58\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"{\n",
" \"thoughts\": {\n",
" \"text\": \"I already have the winning Boston Marathon times for the past 5 years saved in the file 'winning_times.csv'. Now, I need to process the CSV and display the table.\",\n",
" \"reasoning\": \"I am choosing the process_csv command because I already have the required data saved as a CSV file, and I can use this command to read and display the data as a table.\",\n",
" \"plan\": \"- Use the process_csv command to read the 'winning_times.csv' file and generate the table\",\n",
" \"criticism\": \"None\",\n",
" \"speak\": \"I will process the 'winning_times.csv' file to display the table with the winning Boston Marathon times for the past 5 years.\"\n",
" },\n",
" \"command\": {\n",
" \"name\": \"process_csv\",\n",
" \"args\": {\n",
" \"csv_file_path\": \"winning_times.csv\",\n",
" \"instructions\": \"Read the CSV file and display the data as a table\"\n",
" }\n",
" }\n",
"}\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: Since the data is already loaded in a pandas dataframe, I just need to display the top rows of the dataframe.\n",
"Action: python_repl_ast\n",
"Action Input: df.head()\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m Year Name Country Time\n",
"0 2022 Evans Chebet Kenya 2:06:51\n",
"1 2021 Benson Kipruto Kenya 2:09:51\n",
"2 2020 Canceled due to COVID-19 pandemic NaN NaN\n",
"3 2019 Lawrence Cherono Kenya 2:07:57\n",
"4 2018 Yuki Kawauchi Japan 2:15:58\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the final answer.\n",
"Final Answer: \n",
" Year Name Country Time\n",
"0 2022 Evans Chebet Kenya 2:06:51\n",
"1 2021 Benson Kipruto Kenya 2:09:51\n",
"2 2020 Canceled due to COVID-19 pandemic NaN NaN\n",
"3 2019 Lawrence Cherono Kenya 2:07:57\n",
"4 2018 Yuki Kawauchi Japan 2:15:58\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"{\n",
" \"thoughts\": {\n",
" \"text\": \"I have already generated a table with the winning Boston Marathon times for the past 5 years. Now, I can finish the task.\",\n",
" \"reasoning\": \"I have completed the required actions and obtained the desired data. The task is complete.\",\n",
" \"plan\": \"- Use the finish command\",\n",
" \"criticism\": \"None\",\n",
" \"speak\": \"I have generated the table with the winning Boston Marathon times for the past 5 years. Task complete.\"\n",
" },\n",
" \"command\": {\n",
" \"name\": \"finish\",\n",
" \"args\": {\n",
" \"response\": \"I have generated the table with the winning Boston Marathon times for the past 5 years. Task complete.\"\n",
" }\n",
" }\n",
"}\n"
]
},
{
"data": {
"text/plain": [
"'I have generated the table with the winning Boston Marathon times for the past 5 years. Task complete.'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\n",
" [\n",
" \"What were the winning boston marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times.\"\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a6b4f96e",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long


@@ -1,250 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "517a9fd4",
"metadata": {},
"source": [
"# BabyAGI User Guide\n",
"\n",
"This notebook demonstrates how to implement [BabyAGI](https://github.com/yoheinakajima/babyagi/tree/main) by [Yohei Nakajima](https://twitter.com/yoheinakajima). BabyAGI is an AI agent that can generate and pretend to execute tasks based on a given objective.\n",
"\n",
"This guide will help you understand the components to create your own recursive agents.\n",
"\n",
"Although BabyAGI uses specific vectorstores/model providers (Pinecone, OpenAI), one of the benefits of implementing it with LangChain is that you can easily swap those out for different options. In this implementation we use a FAISS vectorstore (because it runs locally and is free)."
]
},
{
"cell_type": "markdown",
"id": "556af556",
"metadata": {},
"source": [
"## Install and Import Required Modules"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c8a354b6",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"from langchain_experimental.autonomous_agents import BabyAGI\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings"
]
},
{
"cell_type": "markdown",
"id": "09f70772",
"metadata": {},
"source": [
"## Connect to the Vector Store\n",
"\n",
"Depending on what vectorstore you use, this step may look different."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "794045d4",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6e0305eb",
"metadata": {},
"outputs": [],
"source": [
"# Define your embedding model\n",
"embeddings_model = OpenAIEmbeddings()\n",
"# Initialize the vectorstore as empty\n",
"import faiss\n",
"\n",
"embedding_size = 1536\n",
"index = faiss.IndexFlatL2(embedding_size)\n",
"vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})"
]
},
{
"cell_type": "markdown",
"id": "05ba762e",
"metadata": {},
"source": [
"### Run the BabyAGI\n",
"\n",
"Now it's time to create the BabyAGI controller and watch it try to accomplish your objective."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "3d220b69",
"metadata": {},
"outputs": [],
"source": [
"OBJECTIVE = \"Write a weather report for SF today\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8a8e5543",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "3d69899b",
"metadata": {},
"outputs": [],
"source": [
"# Logging of LLMChains\n",
"verbose = False\n",
"# If None, will keep on going forever\n",
"max_iterations: Optional[int] = 3\n",
"baby_agi = BabyAGI.from_llm(\n",
" llm=llm, vectorstore=vectorstore, verbose=verbose, max_iterations=max_iterations\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f7957b51",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"1: Make a todo list\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"1: Make a todo list\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"\n",
"\n",
"1. Check the weather forecast for San Francisco today\n",
"2. Make note of the temperature, humidity, wind speed, and other relevant weather conditions\n",
"3. Write a weather report summarizing the forecast\n",
"4. Check for any weather alerts or warnings\n",
"5. Share the report with the relevant stakeholders\n",
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"2: Check the current temperature in San Francisco\n",
"3: Check the current humidity in San Francisco\n",
"4: Check the current wind speed in San Francisco\n",
"5: Check for any weather alerts or warnings in San Francisco\n",
"6: Check the forecast for the next 24 hours in San Francisco\n",
"7: Check the forecast for the next 48 hours in San Francisco\n",
"8: Check the forecast for the next 72 hours in San Francisco\n",
"9: Check the forecast for the next week in San Francisco\n",
"10: Check the forecast for the next month in San Francisco\n",
"11: Check the forecast for the next 3 months in San Francisco\n",
"1: Write a weather report for SF today\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"2: Check the current temperature in San Francisco\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"\n",
"\n",
"I will check the current temperature in San Francisco. I will use an online weather service to get the most up-to-date information.\n",
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"3: Check the current UV index in San Francisco.\n",
"4: Check the current air quality in San Francisco.\n",
"5: Check the current precipitation levels in San Francisco.\n",
"6: Check the current cloud cover in San Francisco.\n",
"7: Check the current barometric pressure in San Francisco.\n",
"8: Check the current dew point in San Francisco.\n",
"9: Check the current wind direction in San Francisco.\n",
"10: Check the current humidity levels in San Francisco.\n",
"1: Check the current temperature in San Francisco to the average temperature for this time of year.\n",
"2: Check the current visibility in San Francisco.\n",
"11: Write a weather report for SF today.\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"3: Check the current UV index in San Francisco.\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"\n",
"\n",
"The current UV index in San Francisco is moderate. The UV index is expected to remain at moderate levels throughout the day. It is recommended to wear sunscreen and protective clothing when outdoors.\n",
"\u001b[91m\u001b[1m\n",
"*****TASK ENDING*****\n",
"\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'objective': 'Write a weather report for SF today'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"baby_agi({\"objective\": OBJECTIVE})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "898a210b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,388 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "517a9fd4",
"metadata": {},
"source": [
"# BabyAGI with Tools\n",
"\n",
"This notebook builds on top of [baby agi](baby_agi.html), but shows how you can swap out the execution chain. The previous execution chain was just an LLM which made stuff up. By swapping it out with an agent that has access to tools, we can hopefully get real reliable information"
]
},
{
"cell_type": "markdown",
"id": "556af556",
"metadata": {},
"source": [
"## Install and Import Required Modules"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c8a354b6",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_experimental.autonomous_agents import BabyAGI\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings"
]
},
{
"cell_type": "markdown",
"id": "09f70772",
"metadata": {},
"source": [
"## Connect to the Vector Store\n",
"\n",
"Depending on what vectorstore you use, this step may look different."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "794045d4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install faiss-cpu > /dev/null\n",
"%pip install google-search-results > /dev/null\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6e0305eb",
"metadata": {},
"outputs": [],
"source": [
"# Define your embedding model\n",
"embeddings_model = OpenAIEmbeddings()\n",
"# Initialize the vectorstore as empty\n",
"import faiss\n",
"\n",
"embedding_size = 1536\n",
"index = faiss.IndexFlatL2(embedding_size)\n",
"vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})"
]
},
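{
"cell_type": "markdown",
"id": "7b3e9a2c",
"metadata": {},
"source": [
"The FAISS setup above is just one option; other vector stores follow the same pattern. As a minimal sketch, the same notebook could use Chroma instead (assuming the `chromadb` package is installed):\n",
"\n",
"```python\n",
"from langchain_community.vectorstores import Chroma\n",
"\n",
"# An empty in-memory Chroma collection backed by the same embedding model\n",
"vectorstore = Chroma(embedding_function=embeddings_model)\n",
"```"
]
},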
{
"cell_type": "markdown",
"id": "0f3b72bf",
"metadata": {},
"source": [
"## Define the Chains\n",
"\n",
"BabyAGI relies on three LLM chains:\n",
"- Task creation chain to select new tasks to add to the list\n",
"- Task prioritization chain to re-prioritize tasks\n",
"- Execution Chain to execute the tasks\n",
"\n",
"\n",
"NOTE: in this notebook, the Execution chain will now be an agent."
]
},
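{
"cell_type": "markdown",
"id": "f2a9c1d0",
"metadata": {},
"source": [
"Conceptually, the controller loops over the three chains roughly as follows. This is an illustrative sketch only; the names (`tasks`, `execution_chain`, etc.) are simplified and do not match the `langchain_experimental` internals:\n",
"\n",
"```python\n",
"while tasks and iteration < max_iterations:\n",
"    task = tasks.pop(0)  # take the highest-priority task\n",
"    result = execution_chain.run(objective=objective, task=task, context=context)\n",
"    new_tasks = task_creation_chain.run(result=result, objective=objective)\n",
"    tasks = task_prioritization_chain.run(tasks + new_tasks, objective=objective)\n",
"    iteration += 1\n",
"```"
]
},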
{
"cell_type": "code",
"execution_count": 4,
"id": "b43cd580",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, Tool, ZeroShotAgent\n",
"from langchain.chains import LLMChain\n",
"from langchain_community.utilities import SerpAPIWrapper\n",
"from langchain_openai import OpenAI\n",
"\n",
"todo_prompt = PromptTemplate.from_template(\n",
" \"You are a planner who is an expert at coming up with a todo list for a given objective. Come up with a todo list for this objective: {objective}\"\n",
")\n",
"todo_chain = LLMChain(llm=OpenAI(temperature=0), prompt=todo_prompt)\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\",\n",
" ),\n",
" Tool(\n",
" name=\"TODO\",\n",
" func=todo_chain.run,\n",
" description=\"useful for when you need to come up with todo lists. Input: an objective to create a todo list for. Output: a todo list for that objective. Please be very clear what the objective is!\",\n",
" ),\n",
"]\n",
"\n",
"\n",
"prefix = \"\"\"You are an AI who performs one task based on the following objective: {objective}. Take into account these previously completed tasks: {context}.\"\"\"\n",
"suffix = \"\"\"Question: {task}\n",
"{agent_scratchpad}\"\"\"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools,\n",
" prefix=prefix,\n",
" suffix=suffix,\n",
" input_variables=[\"objective\", \"task\", \"context\", \"agent_scratchpad\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4b00ae2e",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"tool_names = [tool.name for tool in tools]\n",
"agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)\n",
"agent_executor = AgentExecutor.from_agent_and_tools(\n",
" agent=agent, tools=tools, verbose=True\n",
")"
]
},
{
"cell_type": "markdown",
"id": "05ba762e",
"metadata": {},
"source": [
"### Run the BabyAGI\n",
"\n",
"Now it's time to create the BabyAGI controller and watch it try to accomplish your objective."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "3d220b69",
"metadata": {},
"outputs": [],
"source": [
"OBJECTIVE = \"Write a weather report for SF today\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3d69899b",
"metadata": {},
"outputs": [],
"source": [
"# Logging of LLMChains\n",
"verbose = False\n",
"# If None, will keep on going forever\n",
"max_iterations: Optional[int] = 3\n",
"baby_agi = BabyAGI.from_llm(\n",
" llm=llm,\n",
" vectorstore=vectorstore,\n",
" task_execution_chain=agent_executor,\n",
" verbose=verbose,\n",
" max_iterations=max_iterations,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "f7957b51",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"1: Make a todo list\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"1: Make a todo list\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to come up with a todo list\n",
"Action: TODO\n",
"Action Input: Write a weather report for SF today\u001b[0m\u001b[33;1m\u001b[1;3m\n",
"\n",
"1. Research current weather conditions in San Francisco\n",
"2. Gather data on temperature, humidity, wind speed, and other relevant weather conditions\n",
"3. Analyze data to determine current weather trends\n",
"4. Write a brief introduction to the weather report\n",
"5. Describe current weather conditions in San Francisco\n",
"6. Discuss any upcoming weather changes\n",
"7. Summarize the weather report\n",
"8. Proofread and edit the report\n",
"9. Submit the report\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The todo list for writing a weather report for SF today is: 1. Research current weather conditions in San Francisco; 2. Gather data on temperature, humidity, wind speed, and other relevant weather conditions; 3. Analyze data to determine current weather trends; 4. Write a brief introduction to the weather report; 5. Describe current weather conditions in San Francisco; 6. Discuss any upcoming weather changes; 7. Summarize the weather report; 8. Proofread and edit the report; 9. Submit the report.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"The todo list for writing a weather report for SF today is: 1. Research current weather conditions in San Francisco; 2. Gather data on temperature, humidity, wind speed, and other relevant weather conditions; 3. Analyze data to determine current weather trends; 4. Write a brief introduction to the weather report; 5. Describe current weather conditions in San Francisco; 6. Discuss any upcoming weather changes; 7. Summarize the weather report; 8. Proofread and edit the report; 9. Submit the report.\n",
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"2: Gather data on precipitation, cloud cover, and other relevant weather conditions;\n",
"3: Analyze data to determine any upcoming weather changes;\n",
"4: Research current weather forecasts for San Francisco;\n",
"5: Create a visual representation of the weather report;\n",
"6: Include relevant images and graphics in the report;\n",
"7: Format the report for readability;\n",
"8: Publish the report online;\n",
"9: Monitor the report for accuracy.\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"2: Gather data on precipitation, cloud cover, and other relevant weather conditions;\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to search for current weather conditions in San Francisco\n",
"Action: Search\n",
"Action Input: Current weather conditions in San Francisco\u001b[0m\u001b[36;1m\u001b[1;3mCurrent Weather for Popular Cities ; San Francisco, CA 46 · Partly Cloudy ; Manhattan, NY warning 52 · Cloudy ; Schiller Park, IL (60176) 40 · Sunny ; Boston, MA 54 ...\u001b[0m\u001b[32;1m\u001b[1;3m I need to compile the data into a weather report\n",
"Action: TODO\n",
"Action Input: Compile data into a weather report\u001b[0m\u001b[33;1m\u001b[1;3m\n",
"\n",
"1. Gather data from reliable sources such as the National Weather Service, local weather stations, and other meteorological organizations.\n",
"\n",
"2. Analyze the data to identify trends and patterns.\n",
"\n",
"3. Create a chart or graph to visualize the data.\n",
"\n",
"4. Write a summary of the data and its implications.\n",
"\n",
"5. Compile the data into a report format.\n",
"\n",
"6. Proofread the report for accuracy and clarity.\n",
"\n",
"7. Publish the report to a website or other platform.\n",
"\n",
"8. Distribute the report to relevant stakeholders.\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Today in San Francisco, the temperature is 46 degrees Fahrenheit with partly cloudy skies. The forecast for the rest of the day is expected to remain partly cloudy.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"Today in San Francisco, the temperature is 46 degrees Fahrenheit with partly cloudy skies. The forecast for the rest of the day is expected to remain partly cloudy.\n",
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"3: Format the report for readability;\n",
"4: Include relevant images and graphics in the report;\n",
"5: Compare the current weather conditions in San Francisco to the forecasted conditions;\n",
"6: Identify any potential weather-related hazards in the area;\n",
"7: Research historical weather patterns in San Francisco;\n",
"8: Identify any potential trends in the weather data;\n",
"9: Include relevant data sources in the report;\n",
"10: Summarize the weather report in a concise manner;\n",
"11: Include a summary of the forecasted weather conditions;\n",
"12: Include a summary of the current weather conditions;\n",
"13: Include a summary of the historical weather patterns;\n",
"14: Include a summary of the potential weather-related hazards;\n",
"15: Include a summary of the potential trends in the weather data;\n",
"16: Include a summary of the data sources used in the report;\n",
"17: Analyze data to determine any upcoming weather changes;\n",
"18: Research current weather forecasts for San Francisco;\n",
"19: Create a visual representation of the weather report;\n",
"20: Publish the report online;\n",
"21: Monitor the report for accuracy\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"3: Format the report for readability;\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to make sure the report is easy to read;\n",
"Action: TODO\n",
"Action Input: Make the report easy to read\u001b[0m\u001b[33;1m\u001b[1;3m\n",
"\n",
"1. Break up the report into sections with clear headings\n",
"2. Use bullet points and numbered lists to organize information\n",
"3. Use short, concise sentences\n",
"4. Use simple language and avoid jargon\n",
"5. Include visuals such as charts, graphs, and diagrams to illustrate points\n",
"6. Use bold and italicized text to emphasize key points\n",
"7. Include a table of contents and page numbers\n",
"8. Use a consistent font and font size throughout the report\n",
"9. Include a summary at the end of the report\n",
"10. Proofread the report for typos and errors\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The report should be formatted for readability by breaking it up into sections with clear headings, using bullet points and numbered lists to organize information, using short, concise sentences, using simple language and avoiding jargon, including visuals such as charts, graphs, and diagrams to illustrate points, using bold and italicized text to emphasize key points, including a table of contents and page numbers, using a consistent font and font size throughout the report, including a summary at the end of the report, and proofreading the report for typos and errors.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"The report should be formatted for readability by breaking it up into sections with clear headings, using bullet points and numbered lists to organize information, using short, concise sentences, using simple language and avoiding jargon, including visuals such as charts, graphs, and diagrams to illustrate points, using bold and italicized text to emphasize key points, including a table of contents and page numbers, using a consistent font and font size throughout the report, including a summary at the end of the report, and proofreading the report for typos and errors.\n",
"\u001b[91m\u001b[1m\n",
"*****TASK ENDING*****\n",
"\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'objective': 'Write a weather report for SF today'}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"baby_agi({\"objective\": OBJECTIVE})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "898a210b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,708 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# CAMEL Role-Playing Autonomous Cooperative Agents\n",
"\n",
"This is a langchain implementation of paper: \"CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society\".\n",
"\n",
"Overview:\n",
"\n",
"The rapid advancement of conversational and chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be challenging and time-consuming. This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents and provide insight into their \"cognitive\" processes. To address the challenges of achieving autonomous cooperation, we propose a novel communicative agent framework named role-playing. Our approach involves using inception prompting to guide chat agents toward task completion while maintaining consistency with human intentions. We showcase how role-playing can be used to generate conversational data for studying the behaviors and capabilities of chat agents, providing a valuable resource for investigating conversational language models. Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems, and open-sourcing our library to support research on communicative agents and beyond.\n",
"\n",
"The original implementation: https://github.com/lightaime/camel\n",
"\n",
"Project website: https://www.camel-ai.org/\n",
"\n",
"Arxiv paper: https://arxiv.org/abs/2303.17760\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import LangChain related modules "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain.prompts.chat import (\n",
" HumanMessagePromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
")\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" BaseMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define a CAMEL agent helper class"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"class CAMELAgent:\n",
" def __init__(\n",
" self,\n",
" system_message: SystemMessage,\n",
" model: ChatOpenAI,\n",
" ) -> None:\n",
" self.system_message = system_message\n",
" self.model = model\n",
" self.init_messages()\n",
"\n",
" def reset(self) -> None:\n",
" self.init_messages()\n",
" return self.stored_messages\n",
"\n",
" def init_messages(self) -> None:\n",
" self.stored_messages = [self.system_message]\n",
"\n",
" def update_messages(self, message: BaseMessage) -> List[BaseMessage]:\n",
" self.stored_messages.append(message)\n",
" return self.stored_messages\n",
"\n",
" def step(\n",
" self,\n",
" input_message: HumanMessage,\n",
" ) -> AIMessage:\n",
" messages = self.update_messages(input_message)\n",
"\n",
" output_message = self.model.invoke(messages)\n",
" self.update_messages(output_message)\n",
"\n",
" return output_message"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup OpenAI API key and roles and task for role-playing"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
"\n",
"assistant_role_name = \"Python Programmer\"\n",
"user_role_name = \"Stock Trader\"\n",
"task = \"Develop a trading bot for the stock market\"\n",
"word_limit = 50 # word limit for task brainstorming"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a task specify agent for brainstorming and get the specified task"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Specified task: Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.\n"
]
}
],
"source": [
"task_specifier_sys_msg = SystemMessage(content=\"You can make a task more specific.\")\n",
"task_specifier_prompt = \"\"\"Here is a task that {assistant_role_name} will help {user_role_name} to complete: {task}.\n",
"Please make it more specific. Be creative and imaginative.\n",
"Please reply with the specified task in {word_limit} words or less. Do not add anything else.\"\"\"\n",
"task_specifier_template = HumanMessagePromptTemplate.from_template(\n",
" template=task_specifier_prompt\n",
")\n",
"task_specify_agent = CAMELAgent(task_specifier_sys_msg, ChatOpenAI(temperature=1.0))\n",
"task_specifier_msg = task_specifier_template.format_messages(\n",
" assistant_role_name=assistant_role_name,\n",
" user_role_name=user_role_name,\n",
" task=task,\n",
" word_limit=word_limit,\n",
")[0]\n",
"specified_task_msg = task_specify_agent.step(task_specifier_msg)\n",
"print(f\"Specified task: {specified_task_msg.content}\")\n",
"specified_task = specified_task_msg.content"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create inception prompts for AI assistant and AI user for role-playing"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"assistant_inception_prompt = \"\"\"Never forget you are a {assistant_role_name} and I am a {user_role_name}. Never flip roles! Never instruct me!\n",
"We share a common interest in collaborating to successfully complete a task.\n",
"You must help me to complete the task.\n",
"Here is the task: {task}. Never forget our task!\n",
"I must instruct you based on your expertise and my needs to complete the task.\n",
"\n",
"I must give you one instruction at a time.\n",
"You must write a specific solution that appropriately completes the requested instruction.\n",
"You must decline my instruction honestly if you cannot perform the instruction due to physical, moral, legal reasons or your capability and explain the reasons.\n",
"Do not add anything else other than your solution to my instruction.\n",
"You are never supposed to ask me any questions you only answer questions.\n",
"You are never supposed to reply with a flake solution. Explain your solutions.\n",
"Your solution must be declarative sentences and simple present tense.\n",
"Unless I say the task is completed, you should always start with:\n",
"\n",
"Solution: <YOUR_SOLUTION>\n",
"\n",
"<YOUR_SOLUTION> should be specific and provide preferable implementations and examples for task-solving.\n",
"Always end <YOUR_SOLUTION> with: Next request.\"\"\"\n",
"\n",
"user_inception_prompt = \"\"\"Never forget you are a {user_role_name} and I am a {assistant_role_name}. Never flip roles! You will always instruct me.\n",
"We share a common interest in collaborating to successfully complete a task.\n",
"I must help you to complete the task.\n",
"Here is the task: {task}. Never forget our task!\n",
"You must instruct me based on my expertise and your needs to complete the task ONLY in the following two ways:\n",
"\n",
"1. Instruct with a necessary input:\n",
"Instruction: <YOUR_INSTRUCTION>\n",
"Input: <YOUR_INPUT>\n",
"\n",
"2. Instruct without any input:\n",
"Instruction: <YOUR_INSTRUCTION>\n",
"Input: None\n",
"\n",
"The \"Instruction\" describes a task or question. The paired \"Input\" provides further context or information for the requested \"Instruction\".\n",
"\n",
"You must give me one instruction at a time.\n",
"I must write a response that appropriately completes the requested instruction.\n",
"I must decline your instruction honestly if I cannot perform the instruction due to physical, moral, legal reasons or my capability and explain the reasons.\n",
"You should instruct me not ask me questions.\n",
"Now you must start to instruct me using the two ways described above.\n",
"Do not add anything else other than your instruction and the optional corresponding input!\n",
"Keep giving me instructions and necessary inputs until you think the task is completed.\n",
"When the task is completed, you must only reply with a single word <CAMEL_TASK_DONE>.\n",
"Never say <CAMEL_TASK_DONE> unless my responses have solved your task.\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a helper helper to get system messages for AI assistant and AI user from role names and the task"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"def get_sys_msgs(assistant_role_name: str, user_role_name: str, task: str):\n",
" assistant_sys_template = SystemMessagePromptTemplate.from_template(\n",
" template=assistant_inception_prompt\n",
" )\n",
" assistant_sys_msg = assistant_sys_template.format_messages(\n",
" assistant_role_name=assistant_role_name,\n",
" user_role_name=user_role_name,\n",
" task=task,\n",
" )[0]\n",
"\n",
" user_sys_template = SystemMessagePromptTemplate.from_template(\n",
" template=user_inception_prompt\n",
" )\n",
" user_sys_msg = user_sys_template.format_messages(\n",
" assistant_role_name=assistant_role_name,\n",
" user_role_name=user_role_name,\n",
" task=task,\n",
" )[0]\n",
"\n",
" return assistant_sys_msg, user_sys_msg"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create AI assistant agent and AI user agent from obtained system messages"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"assistant_sys_msg, user_sys_msg = get_sys_msgs(\n",
" assistant_role_name, user_role_name, specified_task\n",
")\n",
"assistant_agent = CAMELAgent(assistant_sys_msg, ChatOpenAI(temperature=0.2))\n",
"user_agent = CAMELAgent(user_sys_msg, ChatOpenAI(temperature=0.2))\n",
"\n",
"# Reset agents\n",
"assistant_agent.reset()\n",
"user_agent.reset()\n",
"\n",
"# Initialize chats\n",
"user_msg = HumanMessage(\n",
" content=(\n",
" f\"{user_sys_msg.content}. \"\n",
" \"Now start to give me introductions one by one. \"\n",
" \"Only reply with Instruction and Input.\"\n",
" )\n",
")\n",
"\n",
"assistant_msg = HumanMessage(content=f\"{assistant_sys_msg.content}\")\n",
"assistant_msg = assistant_agent.step(user_msg)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start role-playing session to solve the task!"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Original task prompt:\n",
"Develop a trading bot for the stock market\n",
"\n",
"Specified task prompt:\n",
"Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Install the necessary Python libraries for data analysis and trading.\n",
"Input: None\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can install the necessary Python libraries using pip, a package installer for Python. We can install pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following command to install these libraries:\n",
"\n",
"```\n",
"pip install pandas numpy matplotlib ta-lib\n",
"```\n",
"\n",
"Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Import the necessary libraries in the Python script.\n",
"Input: None\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can import the necessary libraries in the Python script using the import statement. We need to import pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following code to import these libraries:\n",
"\n",
"```\n",
"import pandas as pd\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import talib as ta\n",
"```\n",
"\n",
"Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Load historical stock data into a pandas DataFrame.\n",
"Input: The path to the CSV file containing the historical stock data.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can load historical stock data into a pandas DataFrame using the `read_csv()` function from pandas. We need to pass the path to the CSV file containing the historical stock data as an argument to this function. We can use the following code to load the historical stock data:\n",
"\n",
"```\n",
"df = pd.read_csv('path/to/csv/file.csv')\n",
"```\n",
"\n",
"This will load the historical stock data into a pandas DataFrame called `df`. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date using the `set_index()` and `sort_index()` functions from pandas. We can use the following code to preprocess the historical stock data:\n",
"\n",
"```\n",
"df = df.set_index('date')\n",
"df = df.sort_index(ascending=True)\n",
"```\n",
"\n",
"This will set the date column as the index and sort the DataFrame in ascending order by date. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib.\n",
"Input: The period for the short-term moving average and the period for the long-term moving average.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib. We need to pass the period for the short-term moving average and the period for the long-term moving average as arguments to this function. We can use the following code to calculate the short-term and long-term moving averages:\n",
"\n",
"```\n",
"short_ma = ta.SMA(df['close'], timeperiod=short_period)\n",
"long_ma = ta.SMA(df['close'], timeperiod=long_period)\n",
"```\n",
"\n",
"This will calculate the short-term and long-term moving averages for the stock data and store them in the `short_ma` and `long_ma` variables, respectively. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages using the `concat()` function from pandas. We need to pass the historical stock data, the short-term moving average, and the long-term moving average as arguments to this function. We can use the following code to create the new DataFrame:\n",
"\n",
"```\n",
"new_df = pd.concat([df, short_ma, long_ma], axis=1)\n",
"new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma']\n",
"```\n",
"\n",
"This will create a new DataFrame called `new_df` that combines the historical stock data with the short-term and long-term moving averages. The columns of the new DataFrame are named 'open', 'high', 'low', 'close', 'volume', 'short_ma', and 'long_ma'. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. We can use the following code to create the new column:\n",
"\n",
"```\n",
"new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1)\n",
"```\n",
"\n",
"This will create a new column called 'signal' in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. If the short-term moving average is greater than the long-term moving average, the signal is 1 (buy), otherwise the signal is -1 (sell). Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target.\n",
"Input: The stop loss and profit target as percentages.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. We need to pass the stop loss and profit target as percentages as arguments to this function. We can use the following code to create the new column:\n",
"\n",
"```\n",
"stop_loss = stop_loss_percent / 100\n",
"profit_target = profit_target_percent / 100\n",
"\n",
"new_df['pnl'] = 0.0\n",
"buy_price = 0.0\n",
"for i in range(1, len(new_df)):\n",
" if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1:\n",
" buy_price = new_df['close'][i]\n",
" elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1:\n",
" sell_price = new_df['close'][i]\n",
" if sell_price <= buy_price * (1 - stop_loss):\n",
" new_df['pnl'][i] = -stop_loss\n",
" elif sell_price >= buy_price * (1 + profit_target):\n",
" new_df['pnl'][i] = profit_target\n",
" else:\n",
" new_df['pnl'][i] = (sell_price - buy_price) / buy_price\n",
"```\n",
"\n",
"This will create a new column called 'pnl' in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. The stop loss and profit target are calculated based on the stop_loss_percent and profit_target_percent variables, respectively. The buy and sell prices are stored in the buy_price and sell_price variables, respectively. If the sell price is less than or equal to the stop loss, the profit or loss is set to -stop_loss. If the sell price is greater than or equal to the profit target, the profit or loss is set to profit_target. Otherwise, the profit or loss is calculated as (sell_price - buy_price) / buy_price. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Calculate the total profit or loss for all trades.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can calculate the total profit or loss for all trades by summing the values in the 'pnl' column of the DataFrame. We can use the following code to calculate the total profit or loss:\n",
"\n",
"```\n",
"total_pnl = new_df['pnl'].sum()\n",
"```\n",
"\n",
"This will calculate the total profit or loss for all trades and store it in the total_pnl variable. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Visualize the stock data, short-term moving average, and long-term moving average using a line chart.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can visualize the stock data, short-term moving average, and long-term moving average using a line chart using the `plot()` function from pandas. We can use the following code to visualize the data:\n",
"\n",
"```\n",
"plt.figure(figsize=(12,6))\n",
"plt.plot(new_df.index, new_df['close'], label='Close')\n",
"plt.plot(new_df.index, new_df['short_ma'], label='Short MA')\n",
"plt.plot(new_df.index, new_df['long_ma'], label='Long MA')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Price')\n",
"plt.title('Stock Data with Moving Averages')\n",
"plt.legend()\n",
"plt.show()\n",
"```\n",
"\n",
"This will create a line chart that visualizes the stock data, short-term moving average, and long-term moving average. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each line. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Visualize the buy and sell signals using a scatter plot.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can visualize the buy and sell signals using a scatter plot using the `scatter()` function from matplotlib. We can use the following code to visualize the signals:\n",
"\n",
"```\n",
"buy_signals = new_df[new_df['signal'] == 1]\n",
"sell_signals = new_df[new_df['signal'] == -1]\n",
"\n",
"plt.figure(figsize=(12,6))\n",
"plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green')\n",
"plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red')\n",
"plt.plot(new_df.index, new_df['close'], label='Close')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Price')\n",
"plt.title('Buy and Sell Signals')\n",
"plt.legend()\n",
"plt.show()\n",
"```\n",
"\n",
"This will create a scatter plot that visualizes the buy and sell signals. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each point. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Print the total profit or loss for all trades.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can print the total profit or loss for all trades using the `print()` function. We can use the following code to print the total profit or loss:\n",
"\n",
"```\n",
"print('Total Profit/Loss: {:.2%}'.format(total_pnl))\n",
"```\n",
"\n",
"This will print the total profit or loss for all trades in percentage format with two decimal places. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Finalize the code by putting all the code snippets together in a single Python script.\n",
"Input: The necessary inputs for each code snippet.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: Here is the complete Python script that puts all the code snippets together:\n",
"\n",
"```\n",
"import pandas as pd\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import talib as ta\n",
"\n",
"# Load historical stock data into a pandas DataFrame\n",
"df = pd.read_csv('path/to/csv/file.csv')\n",
"\n",
"# Preprocess the historical stock data\n",
"df = df.set_index('date')\n",
"df = df.sort_index(ascending=True)\n",
"\n",
"# Calculate the short-term and long-term moving averages\n",
"short_period = 10\n",
"long_period = 30\n",
"short_ma = ta.SMA(df['close'], timeperiod=short_period)\n",
"long_ma = ta.SMA(df['close'], timeperiod=long_period)\n",
"\n",
"# Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages\n",
"new_df = pd.concat([df, short_ma, long_ma], axis=1)\n",
"new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma']\n",
"\n",
"# Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages\n",
"new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1)\n",
"\n",
"# Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target\n",
"stop_loss_percent = 5\n",
"profit_target_percent = 10\n",
"stop_loss = stop_loss_percent / 100\n",
"profit_target = profit_target_percent / 100\n",
"new_df['pnl'] = 0.0\n",
"buy_price = 0.0\n",
"for i in range(1, len(new_df)):\n",
" if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1:\n",
" buy_price = new_df['close'][i]\n",
" elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1:\n",
" sell_price = new_df['close'][i]\n",
" if sell_price <= buy_price * (1 - stop_loss):\n",
" new_df['pnl'][i] = -stop_loss\n",
" elif sell_price >= buy_price * (1 + profit_target):\n",
" new_df['pnl'][i] = profit_target\n",
" else:\n",
" new_df['pnl'][i] = (sell_price - buy_price) / buy_price\n",
"\n",
"# Calculate the total profit or loss for all trades\n",
"total_pnl = new_df['pnl'].sum()\n",
"\n",
"# Visualize the stock data, short-term moving average, and long-term moving average using a line chart\n",
"plt.figure(figsize=(12,6))\n",
"plt.plot(new_df.index, new_df['close'], label='Close')\n",
"plt.plot(new_df.index, new_df['short_ma'], label='Short MA')\n",
"plt.plot(new_df.index, new_df['long_ma'], label='Long MA')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Price')\n",
"plt.title('Stock Data with Moving Averages')\n",
"plt.legend()\n",
"plt.show()\n",
"\n",
"# Visualize the buy and sell signals using a scatter plot\n",
"buy_signals = new_df[new_df['signal'] == 1]\n",
"sell_signals = new_df[new_df['signal'] == -1]\n",
"plt.figure(figsize=(12,6))\n",
"plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green')\n",
"plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red')\n",
"plt.plot(new_df.index, new_df['close'], label='Close')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Price')\n",
"plt.title('Buy and Sell Signals')\n",
"plt.legend()\n",
"plt.show()\n",
"\n",
"# Print the total profit or loss for all trades\n",
"print('Total Profit/Loss: {:.2%}'.format(total_pnl))\n",
"```\n",
"\n",
"You need to replace the path/to/csv/file.csv with the actual path to the CSV file containing the historical stock data. You can also adjust the short_period, long_period, stop_loss_percent, and profit_target_percent variables to suit your needs.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"<CAMEL_TASK_DONE>\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Great! Let me know if you need any further assistance.\n",
"\n",
"\n"
]
}
],
"source": [
"print(f\"Original task prompt:\\n{task}\\n\")\n",
"print(f\"Specified task prompt:\\n{specified_task}\\n\")\n",
"\n",
"chat_turn_limit, n = 30, 0\n",
"while n < chat_turn_limit:\n",
" n += 1\n",
" user_ai_msg = user_agent.step(assistant_msg)\n",
" user_msg = HumanMessage(content=user_ai_msg.content)\n",
" print(f\"AI User ({user_role_name}):\\n\\n{user_msg.content}\\n\\n\")\n",
"\n",
" assistant_ai_msg = assistant_agent.step(user_msg)\n",
" assistant_msg = HumanMessage(content=assistant_ai_msg.content)\n",
" print(f\"AI Assistant ({assistant_role_name}):\\n\\n{assistant_msg.content}\\n\\n\")\n",
" if \"<CAMEL_TASK_DONE>\" in user_msg.content:\n",
" break"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "camel",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large


@@ -1,557 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Python Modules"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install the following Python modules:\n",
"\n",
"```bash\n",
"pip install ipykernel python-dotenv cassio pandas langchain_openai langchain langchain-community langchainhub langchain_experimental openai-multi-tool-use-parallel-patch\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load the `.env` File"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Connection is via `cassio` using `auto=True` parameter, and the notebook uses OpenAI. You should create a `.env` file accordingly.\n",
"\n",
"For Casssandra, set:\n",
"```bash\n",
"CASSANDRA_CONTACT_POINTS\n",
"CASSANDRA_USERNAME\n",
"CASSANDRA_PASSWORD\n",
"CASSANDRA_KEYSPACE\n",
"```\n",
"\n",
"For Astra, set:\n",
"```bash\n",
"ASTRA_DB_APPLICATION_TOKEN\n",
"ASTRA_DB_DATABASE_ID\n",
"ASTRA_DB_KEYSPACE\n",
"```\n",
"\n",
"For example:\n",
"\n",
"```bash\n",
"# Connection to Astra:\n",
"ASTRA_DB_DATABASE_ID=a1b2c3d4-...\n",
"ASTRA_DB_APPLICATION_TOKEN=AstraCS:...\n",
"ASTRA_DB_KEYSPACE=notebooks\n",
"\n",
"# Also set \n",
"OPENAI_API_KEY=sk-....\n",
"```\n",
"\n",
"(You may also modify the below code to directly connect with `cassio`.)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv(override=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to Cassandra"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"import cassio\n",
"\n",
"cassio.init(auto=True)\n",
"session = cassio.config.resolve_session()\n",
"if not session:\n",
" raise Exception(\n",
" \"Check environment configuration or manually configure cassio connection parameters\"\n",
" )\n",
"\n",
"keyspace = os.environ.get(\n",
" \"ASTRA_DB_KEYSPACE\", os.environ.get(\"CASSANDRA_KEYSPACE\", None)\n",
")\n",
"if not keyspace:\n",
" raise ValueError(\"a KEYSPACE environment variable must be set\")\n",
"\n",
"session.set_keyspace(keyspace)"
]
},
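{
"cell_type": "markdown",
"metadata": {},
"source": [
"As noted earlier, instead of `auto=True` you can pass connection parameters to `cassio` directly. A sketch for a self-managed Cassandra cluster (argument names per the `cassio` package; all values are placeholders to substitute with your own):\n",
"\n",
"```python\n",
"import cassio\n",
"\n",
"# Direct connection with explicit parameters (placeholder values)\n",
"cassio.init(\n",
"    contact_points=[\"127.0.0.1\"],\n",
"    username=\"cassandra\",\n",
"    password=\"cassandra\",\n",
"    keyspace=\"notebooks\",\n",
")\n",
"```"
]
},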
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Database"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This needs to be done one time only!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The dataset used is from Kaggle, the [Environmental Sensor Telemetry Data](https://www.kaggle.com/datasets/garystafford/environmental-sensor-data-132k?select=iot_telemetry_data.csv). The next cell will download and unzip the data into a Pandas dataframe. The following cell is instructions to download manually. \n",
"\n",
"The net result of this section is you should have a Pandas dataframe variable `df`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download Automatically"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from io import BytesIO\n",
"from zipfile import ZipFile\n",
"\n",
"import pandas as pd\n",
"import requests\n",
"\n",
"datasetURL = \"https://storage.googleapis.com/kaggle-data-sets/788816/1355729/bundle/archive.zip?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=gcp-kaggle-com%40kaggle-161607.iam.gserviceaccount.com%2F20240404%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20240404T115828Z&X-Goog-Expires=259200&X-Goog-SignedHeaders=host&X-Goog-Signature=2849f003b100eb9dcda8dd8535990f51244292f67e4f5fad36f14aa67f2d4297672d8fe6ff5a39f03a29cda051e33e95d36daab5892b8874dcd5a60228df0361fa26bae491dd4371f02dd20306b583a44ba85a4474376188b1f84765147d3b4f05c57345e5de883c2c29653cce1f3755cd8e645c5e952f4fb1c8a735b22f0c811f97f7bce8d0235d0d3731ca8ab4629ff381f3bae9e35fc1b181c1e69a9c7913a5e42d9d52d53e5f716467205af9c8a3cc6746fc5352e8fbc47cd7d18543626bd67996d18c2045c1e475fc136df83df352fa747f1a3bb73e6ba3985840792ec1de407c15836640ec96db111b173bf16115037d53fdfbfd8ac44145d7f9a546aa\"\n",
"\n",
"response = requests.get(datasetURL)\n",
"if response.status_code == 200:\n",
" zip_file = ZipFile(BytesIO(response.content))\n",
" csv_file_name = zip_file.namelist()[0]\n",
"else:\n",
" print(\"Failed to download the file\")\n",
"\n",
"with zip_file.open(csv_file_name) as csv_file:\n",
" df = pd.read_csv(csv_file)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download Manually"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can download the `.zip` file and unpack the `.csv` contained within. Comment in the next line, and adjust the path to this `.csv` file appropriately."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# df = pd.read_csv(\"/path/to/iot_telemetry_data.csv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Data into Cassandra"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This section assumes the existence of a dataframe `df`, the following cell validates its structure. The Download section above creates this object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"assert df is not None, \"Dataframe 'df' must be set\"\n",
"expected_columns = [\n",
" \"ts\",\n",
" \"device\",\n",
" \"co\",\n",
" \"humidity\",\n",
" \"light\",\n",
" \"lpg\",\n",
" \"motion\",\n",
" \"smoke\",\n",
" \"temp\",\n",
"]\n",
"assert all(\n",
" [column in df.columns for column in expected_columns]\n",
"), \"DataFrame does not have the expected columns\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create and load tables:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import UTC, datetime\n",
"\n",
"from cassandra.query import BatchStatement\n",
"\n",
"# Create sensors table\n",
"table_query = \"\"\"\n",
"CREATE TABLE IF NOT EXISTS iot_sensors (\n",
" device text,\n",
" conditions text,\n",
" room text,\n",
" PRIMARY KEY (device)\n",
")\n",
"WITH COMMENT = 'Environmental IoT room sensor metadata.';\n",
"\"\"\"\n",
"session.execute(table_query)\n",
"\n",
"pstmt = session.prepare(\n",
" \"\"\"\n",
"INSERT INTO iot_sensors (device, conditions, room)\n",
"VALUES (?, ?, ?)\n",
"\"\"\"\n",
")\n",
"\n",
"devices = [\n",
" (\"00:0f:00:70:91:0a\", \"stable conditions, cooler and more humid\", \"room 1\"),\n",
" (\"1c:bf:ce:15:ec:4d\", \"highly variable temperature and humidity\", \"room 2\"),\n",
" (\"b8:27:eb:bf:9d:51\", \"stable conditions, warmer and dryer\", \"room 3\"),\n",
"]\n",
"\n",
"for device, conditions, room in devices:\n",
" session.execute(pstmt, (device, conditions, room))\n",
"\n",
"print(\"Sensors inserted successfully.\")\n",
"\n",
"# Create data table\n",
"table_query = \"\"\"\n",
"CREATE TABLE IF NOT EXISTS iot_data (\n",
" day text,\n",
" device text,\n",
" ts timestamp,\n",
" co double,\n",
" humidity double,\n",
" light boolean,\n",
" lpg double,\n",
" motion boolean,\n",
" smoke double,\n",
" temp double,\n",
" PRIMARY KEY ((day, device), ts)\n",
")\n",
"WITH COMMENT = 'Data from environmental IoT room sensors. Columns include device identifier, timestamp (ts) of the data collection, carbon monoxide level (co), relative humidity, light presence, LPG concentration, motion detection, smoke concentration, and temperature (temp). Data is partitioned by day and device.';\n",
"\"\"\"\n",
"session.execute(table_query)\n",
"\n",
"pstmt = session.prepare(\n",
" \"\"\"\n",
"INSERT INTO iot_data (day, device, ts, co, humidity, light, lpg, motion, smoke, temp)\n",
"VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)\n",
"\"\"\"\n",
")\n",
"\n",
"\n",
"def insert_data_batch(name, group):\n",
" batch = BatchStatement()\n",
" day, device = name\n",
" print(f\"Inserting batch for day: {day}, device: {device}\")\n",
"\n",
" for _, row in group.iterrows():\n",
" timestamp = datetime.fromtimestamp(row[\"ts\"], UTC)\n",
" batch.add(\n",
" pstmt,\n",
" (\n",
" day,\n",
" row[\"device\"],\n",
" timestamp,\n",
" row[\"co\"],\n",
" row[\"humidity\"],\n",
" row[\"light\"],\n",
" row[\"lpg\"],\n",
" row[\"motion\"],\n",
" row[\"smoke\"],\n",
" row[\"temp\"],\n",
" ),\n",
" )\n",
"\n",
" session.execute(batch)\n",
"\n",
"\n",
"# Convert columns to appropriate types\n",
"df[\"light\"] = df[\"light\"] == \"true\"\n",
"df[\"motion\"] = df[\"motion\"] == \"true\"\n",
"df[\"ts\"] = df[\"ts\"].astype(float)\n",
"df[\"day\"] = df[\"ts\"].apply(\n",
" lambda x: datetime.fromtimestamp(x, UTC).strftime(\"%Y-%m-%d\")\n",
")\n",
"\n",
"grouped_df = df.groupby([\"day\", \"device\"])\n",
"\n",
"for name, group in grouped_df:\n",
" insert_data_batch(name, group)\n",
"\n",
"print(\"Data load complete\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(session.keyspace)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load the Tools"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Python `import` statements for the demo:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, create_openai_tools_agent\n",
"from langchain_community.agent_toolkits.cassandra_database.toolkit import (\n",
" CassandraDatabaseToolkit,\n",
")\n",
"from langchain_community.tools.cassandra_database.prompt import QUERY_PATH_PROMPT\n",
"from langchain_community.tools.cassandra_database.tool import (\n",
" GetSchemaCassandraDatabaseTool,\n",
" GetTableDataCassandraDatabaseTool,\n",
" QueryCassandraDatabaseTool,\n",
")\n",
"from langchain_community.utilities.cassandra_database import CassandraDatabase\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `CassandraDatabase` object is loaded from `cassio`, though it does accept a `Session`-type parameter as an alternative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a CassandraDatabase instance\n",
"db = CassandraDatabase(include_tables=[\"iot_sensors\", \"iot_data\"])\n",
"\n",
"# Create the Cassandra Database tools\n",
"query_tool = QueryCassandraDatabaseTool(db=db)\n",
"schema_tool = GetSchemaCassandraDatabaseTool(db=db)\n",
"select_data_tool = GetTableDataCassandraDatabaseTool(db=db)"
]
},
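{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned above, a `Session` can be supplied explicitly instead of relying on `cassio` resolution. A sketch reusing the `session` variable from the setup section (the `session` keyword name is an assumption here):\n",
"\n",
"```python\n",
"# Construct the database wrapper from an explicit driver session\n",
"db = CassandraDatabase(session=session, include_tables=[\"iot_sensors\", \"iot_data\"])\n",
"```"
]
},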
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The tools can be invoked directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Test the tools\n",
"print(\"Executing a CQL query:\")\n",
"query = \"SELECT * FROM iot_sensors LIMIT 5;\"\n",
"result = query_tool.run({\"query\": query})\n",
"print(result)\n",
"\n",
"print(\"\\nGetting the schema for a keyspace:\")\n",
"schema = schema_tool.run({\"keyspace\": keyspace})\n",
"print(schema)\n",
"\n",
"print(\"\\nGetting data from a table:\")\n",
"table = \"iot_data\"\n",
"predicate = \"day = '2020-07-14' and device = 'b8:27:eb:bf:9d:51'\"\n",
"data = select_data_tool.run(\n",
" {\"keyspace\": keyspace, \"table\": table, \"predicate\": predicate, \"limit\": 5}\n",
")\n",
"print(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Agent Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool\n",
"from langchain_experimental.utilities import PythonREPL\n",
"\n",
"python_repl = PythonREPL()\n",
"\n",
"repl_tool = Tool(\n",
" name=\"python_repl\",\n",
" description=\"A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.\",\n",
" func=python_repl.run,\n",
")"
]
},
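{
"cell_type": "markdown",
"metadata": {},
"source": [
"The REPL tool can be sanity-checked directly before handing it to the agent:\n",
"\n",
"```python\n",
"# Should print '4' -- the captured stdout of the executed command\n",
"print(repl_tool.invoke(\"print(2 + 2)\"))\n",
"```"
]
},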
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-4-1106-preview\")\n",
"toolkit = CassandraDatabaseToolkit(db=db)\n",
"\n",
"# context = toolkit.get_context()\n",
"# tools = toolkit.get_tools()\n",
"tools = [schema_tool, select_data_tool, repl_tool]\n",
"\n",
"input = (\n",
" QUERY_PATH_PROMPT\n",
" + f\"\"\"\n",
"\n",
"Here is your task: In the {keyspace} keyspace, find the total number of times the temperature of each device has exceeded 23 degrees on July 14, 2020.\n",
" Create a summary report including the name of the room. Use Pandas if helpful.\n",
"\"\"\"\n",
")\n",
"\n",
"prompt = hub.pull(\"hwchase17/openai-tools-agent\")\n",
"\n",
"# messages = [\n",
"# HumanMessagePromptTemplate.from_template(input),\n",
"# AIMessage(content=QUERY_PATH_PROMPT),\n",
"# MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
"# ]\n",
"\n",
"# prompt = ChatPromptTemplate.from_messages(messages)\n",
"# print(prompt)\n",
"\n",
"# Choose the LLM that will drive the agent\n",
"# Only certain models support this\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-1106\", temperature=0)\n",
"\n",
"# Construct the OpenAI Tools agent\n",
"agent = create_openai_tools_agent(llm, tools, prompt)\n",
"\n",
"print(\"Available tools:\")\n",
"for tool in tools:\n",
" print(\"\\t\" + tool.name + \" - \" + tool.description + \" - \" + str(tool))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
"\n",
"response = agent_executor.invoke({\"input\": input})\n",
"\n",
"print(response[\"output\"])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -1,554 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ba5f8741",
"metadata": {},
"source": [
"# Custom Agent with PlugIn Retrieval\n",
"\n",
"This notebook combines two concepts in order to build a custom agent that can interact with AI Plugins:\n",
"\n",
"1. [Custom Agent with Tool Retrieval](/docs/modules/agents/how_to/custom_agent_with_tool_retrieval.html): This introduces the concept of retrieving many tools, which is useful when trying to work with arbitrarily many plugins.\n",
"2. [Natural Language API Chains](/docs/use_cases/apis/openapi.html): This creates Natural Language wrappers around OpenAPI endpoints. This is useful because (1) plugins use OpenAPI endpoints under the hood, (2) wrapping them in an NLAChain allows the router agent to call it more easily.\n",
"\n",
"The novel idea introduced in this notebook is the idea of using retrieval to select not the tools explicitly, but the set of OpenAPI specs to use. We can then generate tools from those OpenAPI specs. The use case for this is when trying to get agents to use plugins. It may be more efficient to choose plugins first, then the endpoints, rather than the endpoints directly. This is because the plugins may contain more useful information for selection."
]
},
{
"cell_type": "markdown",
"id": "fea4812c",
"metadata": {},
"source": [
"## Set up environment\n",
"\n",
"Do necessary imports, etc."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9af9734e",
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"from typing import Union\n",
"\n",
"from langchain.agents import (\n",
" AgentExecutor,\n",
" AgentOutputParser,\n",
" LLMSingleActionAgent,\n",
")\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain_community.agent_toolkits import NLAToolkit\n",
"from langchain_community.tools.plugin import AIPlugin\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"from langchain_openai import OpenAI"
]
},
{
"cell_type": "markdown",
"id": "2f91d8b4",
"metadata": {},
"source": [
"## Setup LLM"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a1a3b59c",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "6df0253f",
"metadata": {},
"source": [
"## Set up plugins\n",
"\n",
"Load and index plugins"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "becda2a1",
"metadata": {},
"outputs": [],
"source": [
"urls = [\n",
" \"https://datasette.io/.well-known/ai-plugin.json\",\n",
" \"https://api.speak.com/.well-known/ai-plugin.json\",\n",
" \"https://www.wolframalpha.com/.well-known/ai-plugin.json\",\n",
" \"https://www.zapier.com/.well-known/ai-plugin.json\",\n",
" \"https://www.klarna.com/.well-known/ai-plugin.json\",\n",
" \"https://www.joinmilo.com/.well-known/ai-plugin.json\",\n",
" \"https://slack.com/.well-known/ai-plugin.json\",\n",
" \"https://schooldigger.com/.well-known/ai-plugin.json\",\n",
"]\n",
"\n",
"AI_PLUGINS = [AIPlugin.from_url(url) for url in urls]"
]
},
{
"cell_type": "markdown",
"id": "17362717",
"metadata": {},
"source": [
"## Tool Retriever\n",
"\n",
"We will use a vectorstore to create embeddings for each tool description. Then, for an incoming query we can create embeddings for that query and do a similarity search for relevant tools."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "77c4be4b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9092a158",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.2 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load a Swagger 2.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n"
]
}
],
"source": [
"embeddings = OpenAIEmbeddings()\n",
"docs = [\n",
" Document(\n",
" page_content=plugin.description_for_model,\n",
" metadata={\"plugin_name\": plugin.name_for_model},\n",
" )\n",
" for plugin in AI_PLUGINS\n",
"]\n",
"vector_store = FAISS.from_documents(docs, embeddings)\n",
"toolkits_dict = {\n",
" plugin.name_for_model: NLAToolkit.from_llm_and_ai_plugin(llm, plugin)\n",
" for plugin in AI_PLUGINS\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "735a7566",
"metadata": {},
"outputs": [],
"source": [
"retriever = vector_store.as_retriever()\n",
"\n",
"\n",
"def get_tools(query):\n",
" # Get documents, which contain the Plugins to use\n",
" docs = retriever.invoke(query)\n",
" # Get the toolkits, one for each plugin\n",
" tool_kits = [toolkits_dict[d.metadata[\"plugin_name\"]] for d in docs]\n",
" # Get the tools: a separate NLAChain for each endpoint\n",
" tools = []\n",
" for tk in tool_kits:\n",
" tools.extend(tk.nla_tools)\n",
" return tools"
]
},
{
"cell_type": "markdown",
"id": "7699afd7",
"metadata": {},
"source": [
"We can now test this retriever to see if it seems to work."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "425f2886",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['Milo.askMilo',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.search_all_actions',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.preview_a_zap',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.get_configuration_link',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.list_exposed_actions',\n",
" 'SchoolDigger_API_V2.0.Autocomplete_GetSchools',\n",
" 'SchoolDigger_API_V2.0.Districts_GetAllDistricts2',\n",
" 'SchoolDigger_API_V2.0.Districts_GetDistrict2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetSchoolRank2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetRank_District',\n",
" 'SchoolDigger_API_V2.0.Schools_GetAllSchools20',\n",
" 'SchoolDigger_API_V2.0.Schools_GetSchool20',\n",
" 'Speak.translate',\n",
" 'Speak.explainPhrase',\n",
" 'Speak.explainTask']"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tools = get_tools(\"What could I do today with my kiddo\")\n",
"[t.name for t in tools]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "3aa88768",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['Open_AI_Klarna_product_Api.productsUsingGET',\n",
" 'Milo.askMilo',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.search_all_actions',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.preview_a_zap',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.get_configuration_link',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.list_exposed_actions',\n",
" 'SchoolDigger_API_V2.0.Autocomplete_GetSchools',\n",
" 'SchoolDigger_API_V2.0.Districts_GetAllDistricts2',\n",
" 'SchoolDigger_API_V2.0.Districts_GetDistrict2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetSchoolRank2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetRank_District',\n",
" 'SchoolDigger_API_V2.0.Schools_GetAllSchools20',\n",
" 'SchoolDigger_API_V2.0.Schools_GetSchool20']"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tools = get_tools(\"what shirts can i buy?\")\n",
"[t.name for t in tools]"
]
},
{
"cell_type": "markdown",
"id": "2e7a075c",
"metadata": {},
"source": [
"## Prompt Template\n",
"\n",
"The prompt template is pretty standard, because we're not actually changing that much logic in the actual prompt template, but rather we are just changing how retrieval is done."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "339b1bb8",
"metadata": {},
"outputs": [],
"source": [
"# Set up the base template\n",
"template = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n",
"\n",
"{tools}\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [{tool_names}]\n",
"Action Input: the input to the action\n",
"Observation: the result of the action\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Arg\"s\n",
"\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\""
]
},
{
"cell_type": "markdown",
"id": "1583acdc",
"metadata": {},
"source": [
"The custom prompt template now has the concept of a tools_getter, which we call on the input to select the tools to use"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "fd969d31",
"metadata": {},
"outputs": [],
"source": [
"from typing import Callable\n",
"\n",
"\n",
"# Set up a prompt template\n",
"class CustomPromptTemplate(StringPromptTemplate):\n",
" # The template to use\n",
" template: str\n",
" ############## NEW ######################\n",
" # The list of tools available\n",
" tools_getter: Callable\n",
"\n",
" def format(self, **kwargs) -> str:\n",
" # Get the intermediate steps (AgentAction, Observation tuples)\n",
" # Format them in a particular way\n",
" intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
" thoughts = \"\"\n",
" for action, observation in intermediate_steps:\n",
" thoughts += action.log\n",
" thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
" # Set the agent_scratchpad variable to that value\n",
" kwargs[\"agent_scratchpad\"] = thoughts\n",
" ############## NEW ######################\n",
" tools = self.tools_getter(kwargs[\"input\"])\n",
" # Create a tools variable from the list of tools provided\n",
" kwargs[\"tools\"] = \"\\n\".join(\n",
" [f\"{tool.name}: {tool.description}\" for tool in tools]\n",
" )\n",
" # Create a list of tool names for the tools provided\n",
" kwargs[\"tool_names\"] = \", \".join([tool.name for tool in tools])\n",
" return self.template.format(**kwargs)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "798ef9fb",
"metadata": {},
"outputs": [],
"source": [
"prompt = CustomPromptTemplate(\n",
" template=template,\n",
" tools_getter=get_tools,\n",
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
" # This includes the `intermediate_steps` variable because that is needed\n",
" input_variables=[\"input\", \"intermediate_steps\"],\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ef3a1af3",
"metadata": {},
"source": [
"## Output Parser\n",
"\n",
"The output parser is unchanged from the previous notebook, since we are not changing anything about the output format."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "7c6fe0d3",
"metadata": {},
"outputs": [],
"source": [
"class CustomOutputParser(AgentOutputParser):\n",
" def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
" # Check if agent should finish\n",
" if \"Final Answer:\" in llm_output:\n",
" return AgentFinish(\n",
" # Return values is generally always a dictionary with a single `output` key\n",
" # It is not recommended to try anything else at the moment :)\n",
" return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
" log=llm_output,\n",
" )\n",
" # Parse out the action and action input\n",
" regex = r\"Action\\s*\\d*\\s*:(.*?)\\nAction\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n",
" match = re.search(regex, llm_output, re.DOTALL)\n",
" if not match:\n",
" raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
" action = match.group(1).strip()\n",
" action_input = match.group(2)\n",
" # Return the action and action input\n",
" return AgentAction(\n",
" tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output\n",
" )"
]
},
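{
"cell_type": "markdown",
"id": "7c6fe0d4",
"metadata": {},
"source": [
"To see what the regex in `parse` matches, here is a quick sanity check against a made-up ReAct-style completion (illustrative only; the sample string is an assumption, not real model output):\n",
"\n",
"```python\n",
"import re\n",
"\n",
"sample = \"Thought: I should search\\nAction: Search\\nAction Input: weather in SF\"\n",
"regex = r\"Action\\s*\\d*\\s*:(.*?)\\nAction\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n",
"match = re.search(regex, sample, re.DOTALL)\n",
"print(match.group(1).strip(), \"|\", match.group(2))  # Search | weather in SF\n",
"```"
]
},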
{
"cell_type": "code",
"execution_count": 13,
"id": "d278706a",
"metadata": {},
"outputs": [],
"source": [
"output_parser = CustomOutputParser()"
]
},
{
"cell_type": "markdown",
"id": "170587b1",
"metadata": {},
"source": [
"## Set up LLM, stop sequence, and the agent\n",
"\n",
"Also the same as the previous notebook"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "f9d4c374",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "9b1cc2a2",
"metadata": {},
"outputs": [],
"source": [
"# LLM chain consisting of the LLM and a prompt\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "e4f5092f",
"metadata": {},
"outputs": [],
"source": [
"tool_names = [tool.name for tool in tools]\n",
"agent = LLMSingleActionAgent(\n",
" llm_chain=llm_chain,\n",
" output_parser=output_parser,\n",
" stop=[\"\\nObservation:\"],\n",
" allowed_tools=tool_names,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "aa8a5326",
"metadata": {},
"source": [
"## Use the Agent\n",
"\n",
"Now we can use it!"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "490604e9",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(\n",
" agent=agent, tools=tools, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "653b1617",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find a product API\n",
"Action: Open_AI_Klarna_product_Api.productsUsingGET\n",
"Action Input: shirts\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mI found 10 shirts from the API response. They range in price from $9.99 to $450.00 and come in a variety of materials, colors, and patterns.\u001b[0m\u001b[32;1m\u001b[1;3m I now know what shirts I can buy\n",
"Final Answer: Arg, I found 10 shirts from the API response. They range in price from $9.99 to $450.00 and come in a variety of materials, colors, and patterns.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Arg, I found 10 shirts from the API response. They range in price from $9.99 to $450.00 and come in a variety of materials, colors, and patterns.'"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"what shirts can i buy?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2481ee76",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
},
"vscode": {
"interpreter": {
"hash": "18784188d7ecd866c0586ac068b02361a6896dc3a29b64f5cc957f09c590acef"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,578 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ba5f8741",
"metadata": {},
"source": [
"# Plug-and-Plai\n",
"\n",
"This notebook builds upon the idea of [plugin retrieval](./custom_agent_with_plugin_retrieval.html), but pulls all tools from `plugnplai` - a directory of AI Plugins."
]
},
{
"cell_type": "markdown",
"id": "fea4812c",
"metadata": {},
"source": [
"## Set up environment\n",
"\n",
"Do necessary imports, etc."
]
},
{
"cell_type": "markdown",
"id": "aca08be8",
"metadata": {},
"source": [
"Install plugnplai lib to get a list of active plugins from https://plugplai.com directory"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "52e248c9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip available: \u001b[0m\u001b[31;49m22.3.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.1.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"pip install plugnplai -q"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9af9734e",
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"from typing import Union\n",
"\n",
"import plugnplai\n",
"from langchain.agents import (\n",
" AgentExecutor,\n",
" AgentOutputParser,\n",
" LLMSingleActionAgent,\n",
")\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain_community.agent_toolkits import NLAToolkit\n",
"from langchain_community.tools.plugin import AIPlugin\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"from langchain_openai import OpenAI"
]
},
{
"cell_type": "markdown",
"id": "2f91d8b4",
"metadata": {},
"source": [
"## Setup LLM"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a1a3b59c",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "6df0253f",
"metadata": {},
"source": [
"## Set up plugins\n",
"\n",
"Load and index plugins"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9e0f7882",
"metadata": {},
"outputs": [],
"source": [
"# Get all plugins from plugnplai.com\n",
"urls = plugnplai.get_plugins()\n",
"\n",
"# Get ChatGPT plugins - only ChatGPT verified plugins\n",
"urls = plugnplai.get_plugins(filter=\"ChatGPT\")\n",
"\n",
"# Get working plugins - only tested plugins (in progress)\n",
"urls = plugnplai.get_plugins(filter=\"working\")\n",
"\n",
"\n",
"AI_PLUGINS = [AIPlugin.from_url(url + \"/.well-known/ai-plugin.json\") for url in urls]"
]
},
{
"cell_type": "markdown",
"id": "17362717",
"metadata": {},
"source": [
"## Tool Retriever\n",
"\n",
"We will use a vectorstore to create embeddings for each tool description. Then, for an incoming query we can create embeddings for that query and do a similarity search for relevant tools."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "77c4be4b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9092a158",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.2 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load a Swagger 2.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n"
]
}
],
"source": [
"embeddings = OpenAIEmbeddings()\n",
"docs = [\n",
" Document(\n",
" page_content=plugin.description_for_model,\n",
" metadata={\"plugin_name\": plugin.name_for_model},\n",
" )\n",
" for plugin in AI_PLUGINS\n",
"]\n",
"vector_store = FAISS.from_documents(docs, embeddings)\n",
"toolkits_dict = {\n",
" plugin.name_for_model: NLAToolkit.from_llm_and_ai_plugin(llm, plugin)\n",
" for plugin in AI_PLUGINS\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "735a7566",
"metadata": {},
"outputs": [],
"source": [
"retriever = vector_store.as_retriever()\n",
"\n",
"\n",
"def get_tools(query):\n",
" # Get documents, which contain the Plugins to use\n",
" docs = retriever.invoke(query)\n",
" # Get the toolkits, one for each plugin\n",
" tool_kits = [toolkits_dict[d.metadata[\"plugin_name\"]] for d in docs]\n",
" # Get the tools: a separate NLAChain for each endpoint\n",
" tools = []\n",
" for tk in tool_kits:\n",
" tools.extend(tk.nla_tools)\n",
" return tools"
]
},
{
"cell_type": "markdown",
"id": "7699afd7",
"metadata": {},
"source": [
"We can now test this retriever to see if it seems to work."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "425f2886",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['Milo.askMilo',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.search_all_actions',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.preview_a_zap',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.get_configuration_link',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.list_exposed_actions',\n",
" 'SchoolDigger_API_V2.0.Autocomplete_GetSchools',\n",
" 'SchoolDigger_API_V2.0.Districts_GetAllDistricts2',\n",
" 'SchoolDigger_API_V2.0.Districts_GetDistrict2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetSchoolRank2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetRank_District',\n",
" 'SchoolDigger_API_V2.0.Schools_GetAllSchools20',\n",
" 'SchoolDigger_API_V2.0.Schools_GetSchool20',\n",
" 'Speak.translate',\n",
" 'Speak.explainPhrase',\n",
" 'Speak.explainTask']"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tools = get_tools(\"What could I do today with my kiddo\")\n",
"[t.name for t in tools]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "3aa88768",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['Open_AI_Klarna_product_Api.productsUsingGET',\n",
" 'Milo.askMilo',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.search_all_actions',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.preview_a_zap',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.get_configuration_link',\n",
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.list_exposed_actions',\n",
" 'SchoolDigger_API_V2.0.Autocomplete_GetSchools',\n",
" 'SchoolDigger_API_V2.0.Districts_GetAllDistricts2',\n",
" 'SchoolDigger_API_V2.0.Districts_GetDistrict2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetSchoolRank2',\n",
" 'SchoolDigger_API_V2.0.Rankings_GetRank_District',\n",
" 'SchoolDigger_API_V2.0.Schools_GetAllSchools20',\n",
" 'SchoolDigger_API_V2.0.Schools_GetSchool20']"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tools = get_tools(\"what shirts can i buy?\")\n",
"[t.name for t in tools]"
]
},
{
"cell_type": "markdown",
"id": "2e7a075c",
"metadata": {},
"source": [
"## Prompt Template\n",
"\n",
"The prompt template is pretty standard, because we're not actually changing that much logic in the actual prompt template, but rather we are just changing how retrieval is done."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "339b1bb8",
"metadata": {},
"outputs": [],
"source": [
"# Set up the base template\n",
"template = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n",
"\n",
"{tools}\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [{tool_names}]\n",
"Action Input: the input to the action\n",
"Observation: the result of the action\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Arg\"s\n",
"\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\""
]
},
{
"cell_type": "markdown",
"id": "1583acdc",
"metadata": {},
"source": [
"The custom prompt template now has the concept of a tools_getter, which we call on the input to select the tools to use"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "fd969d31",
"metadata": {},
"outputs": [],
"source": [
"from typing import Callable\n",
"\n",
"\n",
"# Set up a prompt template\n",
"class CustomPromptTemplate(StringPromptTemplate):\n",
" # The template to use\n",
" template: str\n",
" ############## NEW ######################\n",
" # The list of tools available\n",
" tools_getter: Callable\n",
"\n",
" def format(self, **kwargs) -> str:\n",
" # Get the intermediate steps (AgentAction, Observation tuples)\n",
" # Format them in a particular way\n",
" intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
" thoughts = \"\"\n",
" for action, observation in intermediate_steps:\n",
" thoughts += action.log\n",
" thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
" # Set the agent_scratchpad variable to that value\n",
" kwargs[\"agent_scratchpad\"] = thoughts\n",
" ############## NEW ######################\n",
" tools = self.tools_getter(kwargs[\"input\"])\n",
" # Create a tools variable from the list of tools provided\n",
" kwargs[\"tools\"] = \"\\n\".join(\n",
" [f\"{tool.name}: {tool.description}\" for tool in tools]\n",
" )\n",
" # Create a list of tool names for the tools provided\n",
" kwargs[\"tool_names\"] = \", \".join([tool.name for tool in tools])\n",
" return self.template.format(**kwargs)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "798ef9fb",
"metadata": {},
"outputs": [],
"source": [
"prompt = CustomPromptTemplate(\n",
" template=template,\n",
" tools_getter=get_tools,\n",
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
" # This includes the `intermediate_steps` variable because that is needed\n",
" input_variables=[\"input\", \"intermediate_steps\"],\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ef3a1af3",
"metadata": {},
"source": [
"## Output Parser\n",
"\n",
"The output parser is unchanged from the previous notebook, since we are not changing anything about the output format."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "7c6fe0d3",
"metadata": {},
"outputs": [],
"source": [
"class CustomOutputParser(AgentOutputParser):\n",
" def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
" # Check if agent should finish\n",
" if \"Final Answer:\" in llm_output:\n",
" return AgentFinish(\n",
" # Return values is generally always a dictionary with a single `output` key\n",
" # It is not recommended to try anything else at the moment :)\n",
" return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
" log=llm_output,\n",
" )\n",
" # Parse out the action and action input\n",
" regex = r\"Action\\s*\\d*\\s*:(.*?)\\nAction\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n",
" match = re.search(regex, llm_output, re.DOTALL)\n",
" if not match:\n",
" raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
" action = match.group(1).strip()\n",
" action_input = match.group(2)\n",
" # Return the action and action input\n",
" return AgentAction(\n",
" tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "d278706a",
"metadata": {},
"outputs": [],
"source": [
"output_parser = CustomOutputParser()"
]
},
{
"cell_type": "markdown",
"id": "170587b1",
"metadata": {},
"source": [
"## Set up LLM, stop sequence, and the agent\n",
"\n",
"Also the same as the previous notebook"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "f9d4c374",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "9b1cc2a2",
"metadata": {},
"outputs": [],
"source": [
"# LLM chain consisting of the LLM and a prompt\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "e4f5092f",
"metadata": {},
"outputs": [],
"source": [
"tool_names = [tool.name for tool in tools]\n",
"agent = LLMSingleActionAgent(\n",
" llm_chain=llm_chain,\n",
" output_parser=output_parser,\n",
" stop=[\"\\nObservation:\"],\n",
" allowed_tools=tool_names,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "aa8a5326",
"metadata": {},
"source": [
"## Use the Agent\n",
"\n",
"Now we can use it!"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "490604e9",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(\n",
" agent=agent, tools=tools, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "653b1617",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find a product API\n",
"Action: Open_AI_Klarna_product_Api.productsUsingGET\n",
"Action Input: shirts\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mI found 10 shirts from the API response. They range in price from $9.99 to $450.00 and come in a variety of materials, colors, and patterns.\u001b[0m\u001b[32;1m\u001b[1;3m I now know what shirts I can buy\n",
"Final Answer: Arg, I found 10 shirts from the API response. They range in price from $9.99 to $450.00 and come in a variety of materials, colors, and patterns.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Arg, I found 10 shirts from the API response. They range in price from $9.99 to $450.00 and come in a variety of materials, colors, and patterns.'"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"what shirts can i buy?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2481ee76",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
},
"vscode": {
"interpreter": {
"hash": "3ccef4e08d87aa1eeb90f63e0f071292ccb2e9c42e70f74ab2bf6f5493ca7bbc"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,500 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ba5f8741",
"metadata": {},
"source": [
"# Custom agent with tool retrieval\n",
"\n",
"The novel idea introduced in this notebook is the idea of using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many many tools to select from. You cannot put the description of all the tools in the prompt (because of context length issues) so instead you dynamically select the N tools you do want to consider using at run time.\n",
"\n",
"In this notebook we will create a somewhat contrived example. We will have one legitimate tool (search) and then 99 fake tools which are just nonsense. We will then add a step in the prompt template that takes the user input and retrieves tool relevant to the query."
]
},
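{
"cell_type": "markdown",
"id": "b2c3d4e5",
"metadata": {},
"source": [
"The N here is simply the retriever's `k`. If you want an explicit value rather than the default, a minimal sketch using the standard `as_retriever` API (applied to the `vector_store` built below):\n",
"\n",
"```python\n",
"# Return the top-4 most similar tool descriptions per query\n",
"retriever = vector_store.as_retriever(search_kwargs={\"k\": 4})\n",
"```"
]
},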
{
"cell_type": "markdown",
"id": "fea4812c",
"metadata": {},
"source": [
"## Set up environment\n",
"\n",
"Do necessary imports, etc."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9af9734e",
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"from typing import Union\n",
"\n",
"from langchain.agents import (\n",
" AgentExecutor,\n",
" AgentOutputParser,\n",
" LLMSingleActionAgent,\n",
" Tool,\n",
")\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain_community.utilities import SerpAPIWrapper\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"from langchain_openai import OpenAI"
]
},
{
"cell_type": "markdown",
"id": "6df0253f",
"metadata": {},
"source": [
"## Set up tools\n",
"\n",
"We will create one legitimate tool (search) and then 99 fake tools."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "becda2a1",
"metadata": {},
"outputs": [],
"source": [
"# Define which tools the agent can use to answer user queries\n",
"search = SerpAPIWrapper()\n",
"search_tool = Tool(\n",
" name=\"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\",\n",
")\n",
"\n",
"\n",
"def fake_func(inp: str) -> str:\n",
" return \"foo\"\n",
"\n",
"\n",
"fake_tools = [\n",
" Tool(\n",
" name=f\"foo-{i}\",\n",
" func=fake_func,\n",
" description=f\"a silly function that you can use to get more information about the number {i}\",\n",
" )\n",
" for i in range(99)\n",
"]\n",
"ALL_TOOLS = [search_tool] + fake_tools"
]
},
{
"cell_type": "markdown",
"id": "17362717",
"metadata": {},
"source": [
"## Tool Retriever\n",
"\n",
"We will use a vector store to create embeddings for each tool description. Then, for an incoming query we can create embeddings for that query and do a similarity search for relevant tools."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "77c4be4b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9092a158",
"metadata": {},
"outputs": [],
"source": [
"docs = [\n",
" Document(page_content=t.description, metadata={\"index\": i})\n",
" for i, t in enumerate(ALL_TOOLS)\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "affc4e56",
"metadata": {},
"outputs": [],
"source": [
"vector_store = FAISS.from_documents(docs, OpenAIEmbeddings())"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "735a7566",
"metadata": {},
"outputs": [],
"source": [
"retriever = vector_store.as_retriever()\n",
"\n",
"\n",
"def get_tools(query):\n",
" docs = retriever.invoke(query)\n",
" return [ALL_TOOLS[d.metadata[\"index\"]] for d in docs]"
]
},
{
"cell_type": "markdown",
"id": "7699afd7",
"metadata": {},
"source": [
"We can now test this retriever to see if it seems to work."
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "425f2886",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Tool(name='Search', description='useful for when you need to answer questions about current events', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<bound method SerpAPIWrapper.run of SerpAPIWrapper(search_engine=<class 'serpapi.google_search.GoogleSearch'>, params={'engine': 'google', 'google_domain': 'google.com', 'gl': 'us', 'hl': 'en'}, serpapi_api_key='', aiosession=None)>, coroutine=None),\n",
" Tool(name='foo-95', description='a silly function that you can use to get more information about the number 95', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None),\n",
" Tool(name='foo-12', description='a silly function that you can use to get more information about the number 12', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None),\n",
" Tool(name='foo-15', description='a silly function that you can use to get more information about the number 15', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None)]"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_tools(\"whats the weather?\")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "4036dd19",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Tool(name='foo-13', description='a silly function that you can use to get more information about the number 13', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None),\n",
" Tool(name='foo-12', description='a silly function that you can use to get more information about the number 12', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None),\n",
" Tool(name='foo-14', description='a silly function that you can use to get more information about the number 14', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None),\n",
" Tool(name='foo-11', description='a silly function that you can use to get more information about the number 11', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x114b28a90>, func=<function fake_func at 0x15e5bd1f0>, coroutine=None)]"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_tools(\"whats the number 13?\")"
]
},
{
"cell_type": "markdown",
"id": "2e7a075c",
"metadata": {},
"source": [
"## Prompt template\n",
"\n",
"The prompt template is pretty standard, because we're not actually changing that much logic in the actual prompt template, but rather we are just changing how retrieval is done."
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "339b1bb8",
"metadata": {},
"outputs": [],
"source": [
"# Set up the base template\n",
"template = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n",
"\n",
"{tools}\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [{tool_names}]\n",
"Action Input: the input to the action\n",
"Observation: the result of the action\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Arg\"s\n",
"\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\""
]
},
{
"cell_type": "markdown",
"id": "1583acdc",
"metadata": {},
"source": [
"The custom prompt template now has the concept of a `tools_getter`, which we call on the input to select the tools to use."
]
},
{
"cell_type": "code",
"execution_count": 52,
"id": "fd969d31",
"metadata": {},
"outputs": [],
"source": [
"from typing import Callable\n",
"\n",
"\n",
"# Set up a prompt template\n",
"class CustomPromptTemplate(StringPromptTemplate):\n",
" # The template to use\n",
" template: str\n",
" ############## NEW ######################\n",
" # The list of tools available\n",
" tools_getter: Callable\n",
"\n",
" def format(self, **kwargs) -> str:\n",
" # Get the intermediate steps (AgentAction, Observation tuples)\n",
" # Format them in a particular way\n",
" intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
" thoughts = \"\"\n",
" for action, observation in intermediate_steps:\n",
" thoughts += action.log\n",
" thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
" # Set the agent_scratchpad variable to that value\n",
" kwargs[\"agent_scratchpad\"] = thoughts\n",
" ############## NEW ######################\n",
" tools = self.tools_getter(kwargs[\"input\"])\n",
" # Create a tools variable from the list of tools provided\n",
" kwargs[\"tools\"] = \"\\n\".join(\n",
" [f\"{tool.name}: {tool.description}\" for tool in tools]\n",
" )\n",
" # Create a list of tool names for the tools provided\n",
" kwargs[\"tool_names\"] = \", \".join([tool.name for tool in tools])\n",
" return self.template.format(**kwargs)"
]
},
{
"cell_type": "code",
"execution_count": 53,
"id": "798ef9fb",
"metadata": {},
"outputs": [],
"source": [
"prompt = CustomPromptTemplate(\n",
" template=template,\n",
" tools_getter=get_tools,\n",
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
" # This includes the `intermediate_steps` variable because that is needed\n",
" input_variables=[\"input\", \"intermediate_steps\"],\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ef3a1af3",
"metadata": {},
"source": [
"## Output parser\n",
"\n",
"The output parser is unchanged from the previous notebook, since we are not changing anything about the output format."
]
},
{
"cell_type": "code",
"execution_count": 54,
"id": "7c6fe0d3",
"metadata": {},
"outputs": [],
"source": [
"class CustomOutputParser(AgentOutputParser):\n",
" def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
" # Check if agent should finish\n",
" if \"Final Answer:\" in llm_output:\n",
" return AgentFinish(\n",
" # Return values is generally always a dictionary with a single `output` key\n",
" # It is not recommended to try anything else at the moment :)\n",
" return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
" log=llm_output,\n",
" )\n",
" # Parse out the action and action input\n",
" regex = r\"Action\\s*\\d*\\s*:(.*?)\\nAction\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n",
" match = re.search(regex, llm_output, re.DOTALL)\n",
" if not match:\n",
" raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
" action = match.group(1).strip()\n",
" action_input = match.group(2)\n",
" # Return the action and action input\n",
" return AgentAction(\n",
" tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 55,
"id": "d278706a",
"metadata": {},
"outputs": [],
"source": [
"output_parser = CustomOutputParser()"
]
},
{
"cell_type": "markdown",
"id": "170587b1",
"metadata": {},
"source": [
"## Set up LLM, stop sequence, and the agent\n",
"\n",
"Also the same as the previous notebook."
]
},
{
"cell_type": "code",
"execution_count": 56,
"id": "f9d4c374",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 57,
"id": "9b1cc2a2",
"metadata": {},
"outputs": [],
"source": [
"# LLM chain consisting of the LLM and a prompt\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 58,
"id": "e4f5092f",
"metadata": {},
"outputs": [],
"source": [
"tools = get_tools(\"whats the weather?\")\n",
"tool_names = [tool.name for tool in tools]\n",
"agent = LLMSingleActionAgent(\n",
" llm_chain=llm_chain,\n",
" output_parser=output_parser,\n",
" stop=[\"\\nObservation:\"],\n",
" allowed_tools=tool_names,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "aa8a5326",
"metadata": {},
"source": [
"## Use the Agent\n",
"\n",
"Now we can use it!"
]
},
{
"cell_type": "code",
"execution_count": 59,
"id": "490604e9",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(\n",
" agent=agent, tools=tools, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 60,
"id": "653b1617",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out what the weather is in SF\n",
"Action: Search\n",
"Action Input: Weather in SF\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mMostly cloudy skies early, then partly cloudy in the afternoon. High near 60F. ENE winds shifting to W at 10 to 15 mph. Humidity71%. UV Index6 of 10.\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: 'Arg, 'tis mostly cloudy skies early, then partly cloudy in the afternoon. High near 60F. ENE winds shiftin' to W at 10 to 15 mph. Humidity71%. UV Index6 of 10.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"'Arg, 'tis mostly cloudy skies early, then partly cloudy in the afternoon. High near 60F. ENE winds shiftin' to W at 10 to 15 mph. Humidity71%. UV Index6 of 10.\""
]
},
"execution_count": 60,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"What's the weather in SF?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2481ee76",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
},
"vscode": {
"interpreter": {
"hash": "18784188d7ecd866c0586ac068b02361a6896dc3a29b64f5cc957f09c590acef"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,220 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ba5f8741",
"metadata": {},
"source": [
"# Custom multi-action agent\n",
"\n",
"This notebook goes through how to create your own custom agent.\n",
"\n",
"An agent consists of two parts:\n",
"\n",
"- Tools: The tools the agent has available to use.\n",
"- The agent class itself: this decides which action to take.\n",
" \n",
" \n",
"In this notebook we walk through how to create a custom agent that predicts/takes multiple steps at a time."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9af9734e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, BaseMultiActionAgent, Tool\n",
"from langchain_community.utilities import SerpAPIWrapper"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d7c4ebdc",
"metadata": {},
"outputs": [],
"source": [
"def random_word(query: str) -> str:\n",
" print(\"\\nNow I'm doing this!\")\n",
" return \"foo\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "becda2a1",
"metadata": {},
"outputs": [],
"source": [
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\",\n",
" ),\n",
" Tool(\n",
" name=\"RandomWord\",\n",
" func=random_word,\n",
" description=\"call this to get a random word.\",\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a33e2f7e",
"metadata": {},
"outputs": [],
"source": [
"from typing import Any, List, Tuple, Union\n",
"\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"\n",
"\n",
"class FakeAgent(BaseMultiActionAgent):\n",
" \"\"\"Fake Custom Agent.\"\"\"\n",
"\n",
" @property\n",
" def input_keys(self):\n",
" return [\"input\"]\n",
"\n",
" def plan(\n",
" self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any\n",
" ) -> Union[List[AgentAction], AgentFinish]:\n",
" \"\"\"Given input, decided what to do.\n",
"\n",
" Args:\n",
" intermediate_steps: Steps the LLM has taken to date,\n",
" along with observations\n",
" **kwargs: User inputs.\n",
"\n",
" Returns:\n",
" Action specifying what tool to use.\n",
" \"\"\"\n",
" if len(intermediate_steps) == 0:\n",
" return [\n",
" AgentAction(tool=\"Search\", tool_input=kwargs[\"input\"], log=\"\"),\n",
" AgentAction(tool=\"RandomWord\", tool_input=kwargs[\"input\"], log=\"\"),\n",
" ]\n",
" else:\n",
" return AgentFinish(return_values={\"output\": \"bar\"}, log=\"\")\n",
"\n",
" async def aplan(\n",
" self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any\n",
" ) -> Union[List[AgentAction], AgentFinish]:\n",
" \"\"\"Given input, decided what to do.\n",
"\n",
" Args:\n",
" intermediate_steps: Steps the LLM has taken to date,\n",
" along with observations\n",
" **kwargs: User inputs.\n",
"\n",
" Returns:\n",
" Action specifying what tool to use.\n",
" \"\"\"\n",
" if len(intermediate_steps) == 0:\n",
" return [\n",
" AgentAction(tool=\"Search\", tool_input=kwargs[\"input\"], log=\"\"),\n",
" AgentAction(tool=\"RandomWord\", tool_input=kwargs[\"input\"], log=\"\"),\n",
" ]\n",
" else:\n",
" return AgentFinish(return_values={\"output\": \"bar\"}, log=\"\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "655d72f6",
"metadata": {},
"outputs": [],
"source": [
"agent = FakeAgent()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "490604e9",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(\n",
" agent=agent, tools=tools, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "653b1617",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\u001b[0m\u001b[36;1m\u001b[1;3mThe current population of Canada is 38,669,152 as of Monday, April 24, 2023, based on Worldometer elaboration of the latest United Nations data.\u001b[0m\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"Now I'm doing this!\n",
"\u001b[33;1m\u001b[1;3mfoo\u001b[0m\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'bar'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"How many people live in canada as of 2023?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "adefb4c2",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
},
"vscode": {
"interpreter": {
"hash": "18784188d7ecd866c0586ac068b02361a6896dc3a29b64f5cc957f09c590acef"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff