Compare commits


52 Commits

Author SHA1 Message Date
ccurme
672339f3c6 core: release 0.3.60 (#31249) 2025-05-15 11:14:04 -04:00
renchao
6f2acbcf2e Update sql_large_db.ipynb (#31241)
"Alanis Morissette" spelling error

2025-05-15 11:07:51 -04:00
ccurme
8b145d5dc3 openai: release 0.3.17 (#31246) 2025-05-15 09:18:22 -04:00
dependabot[bot]
d4f77a8c8f build(deps): bump actions/setup-python from 3 to 5 (#31234)
Bumps [actions/setup-python](https://github.com/actions/setup-python)
from 3 to 5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/setup-python/releases">actions/setup-python's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<p>In scope of this release, we update node version runtime from node16
to node20 (<a
href="https://redirect.github.com/actions/setup-python/pull/772">actions/setup-python#772</a>).
Besides, we update dependencies to the latest versions.</p>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-python/compare/v4.8.0...v5.0.0">https://github.com/actions/setup-python/compare/v4.8.0...v5.0.0</a></p>
<h2>v4.9.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Add workflow file for publishing releases to immutable action
package by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1084">actions/setup-python#1084</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-python/compare/v4...v4.9.1">https://github.com/actions/setup-python/compare/v4...v4.9.1</a></p>
<h2>v4.9.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Upgrade <code>actions/cache</code> to 4.0.3 by <a
href="https://github.com/priya-kinthali"><code>@​priya-kinthali</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1073">actions/setup-python#1073</a>
In scope of this release we updated actions/cache package to ensure
continued support and compatibility, as older versions of the package
are now deprecated. For more information please refer to the <a
href="https://github.com/actions/toolkit/discussions/1890">toolkit/cache</a>.</li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-python/compare/v4.8.0...v4.9.0">https://github.com/actions/setup-python/compare/v4.8.0...v4.9.0</a></p>
<h2>v4.8.0</h2>
<h2>What's Changed</h2>
<p>In scope of this release we added support for GraalPy (<a
href="https://redirect.github.com/actions/setup-python/pull/694">actions/setup-python#694</a>).
You can use this snippet to set up GraalPy:</p>
<pre lang="yaml"><code>steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v4 
  with:
    python-version: 'graalpy-22.3' 
- run: python my_script.py
</code></pre>
<p>Besides, the release contains such changes as:</p>
<ul>
<li>Trim python version when reading from file by <a
href="https://github.com/FerranPares"><code>@​FerranPares</code></a> in
<a
href="https://redirect.github.com/actions/setup-python/pull/628">actions/setup-python#628</a></li>
<li>Use non-deprecated versions in examples by <a
href="https://github.com/jeffwidman"><code>@​jeffwidman</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/724">actions/setup-python#724</a></li>
<li>Change deprecation comment to past tense by <a
href="https://github.com/jeffwidman"><code>@​jeffwidman</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/723">actions/setup-python#723</a></li>
<li>Bump <code>@​babel/traverse</code> from 7.9.0 to 7.23.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/743">actions/setup-python#743</a></li>
<li>advanced-usage.md: Encourage the use actions/checkout@v4 by <a
href="https://github.com/cclauss"><code>@​cclauss</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/729">actions/setup-python#729</a></li>
<li>Examples now use checkout@v4 by <a
href="https://github.com/simonw"><code>@​simonw</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/738">actions/setup-python#738</a></li>
<li>Update actions/checkout to v4 by <a
href="https://github.com/dmitry-shibanov"><code>@​dmitry-shibanov</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/761">actions/setup-python#761</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/FerranPares"><code>@​FerranPares</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/setup-python/pull/628">actions/setup-python#628</a></li>
<li><a href="https://github.com/timfel"><code>@​timfel</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/setup-python/pull/694">actions/setup-python#694</a></li>
<li><a
href="https://github.com/jeffwidman"><code>@​jeffwidman</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/setup-python/pull/724">actions/setup-python#724</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-python/compare/v4...v4.8.0">https://github.com/actions/setup-python/compare/v4...v4.8.0</a></p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a26af69be9"><code>a26af69</code></a>
Bump ts-jest from 29.1.2 to 29.3.2 (<a
href="https://redirect.github.com/actions/setup-python/issues/1081">#1081</a>)</li>
<li><a
href="30eafe9548"><code>30eafe9</code></a>
Bump prettier from 2.8.8 to 3.5.3 (<a
href="https://redirect.github.com/actions/setup-python/issues/1046">#1046</a>)</li>
<li><a
href="5d95bc16d4"><code>5d95bc1</code></a>
Bump semver and <code>@​types/semver</code> (<a
href="https://redirect.github.com/actions/setup-python/issues/1091">#1091</a>)</li>
<li><a
href="6ed2c67c8a"><code>6ed2c67</code></a>
Fix for Candidate Not Iterable Error (<a
href="https://redirect.github.com/actions/setup-python/issues/1082">#1082</a>)</li>
<li><a
href="e348410e00"><code>e348410</code></a>
Remove Ubuntu 20.04 from workflows due to deprecation from 2025-04-15
(<a
href="https://redirect.github.com/actions/setup-python/issues/1065">#1065</a>)</li>
<li><a
href="8d9ed9ac5c"><code>8d9ed9a</code></a>
Add e2e Testing for free threaded and Bump <code>@​action/cache</code>
from 4.0.0 to 4.0.3 ...</li>
<li><a
href="19e4675e06"><code>19e4675</code></a>
Add support for .tool-versions file in setup-python (<a
href="https://redirect.github.com/actions/setup-python/issues/1043">#1043</a>)</li>
<li><a
href="6fd11e170a"><code>6fd11e1</code></a>
Bump <code>@​actions/glob</code> from 0.4.0 to 0.5.0 (<a
href="https://redirect.github.com/actions/setup-python/issues/1015">#1015</a>)</li>
<li><a
href="9e62be81b2"><code>9e62be8</code></a>
Support free threaded Python versions like '3.13t' (<a
href="https://redirect.github.com/actions/setup-python/issues/973">#973</a>)</li>
<li><a
href="6ca8e8598f"><code>6ca8e85</code></a>
Bump <code>@​vercel/ncc</code> from 0.38.1 to 0.38.3 (<a
href="https://redirect.github.com/actions/setup-python/issues/1016">#1016</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/setup-python/compare/v3...v5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/setup-python&package-manager=github_actions&previous-version=3&new-version=5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-14 15:48:33 -04:00
dependabot[bot]
71b71768bf build(deps): bump actions/setup-node from 3 to 4 (#31237)
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 3
to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/setup-node/releases">actions/setup-node's
releases</a>.</em></p>
<blockquote>
<h2>v4.0.0</h2>
<h2>What's Changed</h2>
<p>In scope of this release we changed version of node runtime for
action from node16 to node20 and updated dependencies in <a
href="https://redirect.github.com/actions/setup-node/pull/866">actions/setup-node#866</a></p>
<p>Besides, release contains such changes as:</p>
<ul>
<li>Upgrade actions/checkout to v4 by <a
href="https://github.com/gmembre-zenika"><code>@​gmembre-zenika</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/868">actions/setup-node#868</a></li>
<li>Update actions/checkout for documentation and yaml by <a
href="https://github.com/dmitry-shibanov"><code>@​dmitry-shibanov</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/876">actions/setup-node#876</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/gmembre-zenika"><code>@​gmembre-zenika</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/setup-node/pull/868">actions/setup-node#868</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-node/compare/v3...v4.0.0">https://github.com/actions/setup-node/compare/v3...v4.0.0</a></p>
<h2>v3.9.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Add workflow file for publishing releases to immutable action
package by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/1281">actions/setup-node#1281</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-node/compare/v3...v3.9.1">https://github.com/actions/setup-node/compare/v3...v3.9.1</a></p>
<h2>v3.9.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Upgrade <code>@​actions/cache</code> to 4.0.3 by <a
href="https://github.com/gowridurgad"><code>@​gowridurgad</code></a> in
<a
href="https://redirect.github.com/actions/setup-node/pull/1270">actions/setup-node#1270</a>
In scope of this release we updated actions/cache package to ensure
continued support and compatibility, as older versions of the package
are now deprecated. For more information please refer to the <a
href="https://github.com/actions/toolkit/discussions/1890">toolkit/cache</a>.</li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-node/compare/v3...v3.9.0">https://github.com/actions/setup-node/compare/v3...v3.9.0</a></p>
<h2>v3.8.2</h2>
<h2>What's Changed</h2>
<ul>
<li>Update semver by <a
href="https://github.com/dmitry-shibanov"><code>@​dmitry-shibanov</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/861">actions/setup-node#861</a></li>
<li>Update temp directory creation by <a
href="https://github.com/nikolai-laevskii"><code>@​nikolai-laevskii</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/859">actions/setup-node#859</a></li>
<li>Bump <code>@​babel/traverse</code> from 7.15.4 to 7.23.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-node/pull/870">actions/setup-node#870</a></li>
<li>Add notice about binaries not being updated yet by <a
href="https://github.com/nikolai-laevskii"><code>@​nikolai-laevskii</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/872">actions/setup-node#872</a></li>
<li>Update toolkit cache and core by <a
href="https://github.com/dmitry-shibanov"><code>@​dmitry-shibanov</code></a>
and <a
href="https://github.com/seongwon-privatenote"><code>@​seongwon-privatenote</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/875">actions/setup-node#875</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-node/compare/v3...v3.8.2">https://github.com/actions/setup-node/compare/v3...v3.8.2</a></p>
<h2>v3.8.1</h2>
<h2>What's Changed</h2>
<p>In scope of this release, the filter was removed within the
cache-save step by <a
href="https://github.com/dmitry-shibanov"><code>@​dmitry-shibanov</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/831">actions/setup-node#831</a>.
It is filtered and checked in the toolkit/cache library.</p>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-node/compare/v3...v3.8.1">https://github.com/actions/setup-node/compare/v3...v3.8.1</a></p>
<h2>v3.8.0</h2>
<h2>What's Changed</h2>
<h3>Bug fixes:</h3>
<ul>
<li>Add check for existing paths by <a
href="https://github.com/dmitry-shibanov"><code>@​dmitry-shibanov</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/803">actions/setup-node#803</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="49933ea528"><code>49933ea</code></a>
Bump <code>@​action/cache</code> from 4.0.2 to 4.0.3 (<a
href="https://redirect.github.com/actions/setup-node/issues/1262">#1262</a>)</li>
<li><a
href="e3ce749e20"><code>e3ce749</code></a>
feat: support private mirrors (<a
href="https://redirect.github.com/actions/setup-node/issues/1240">#1240</a>)</li>
<li><a
href="40337cb8f7"><code>40337cb</code></a>
Add support for indented eslint output (<a
href="https://redirect.github.com/actions/setup-node/issues/1245">#1245</a>)</li>
<li><a
href="1ccdddc9b8"><code>1ccdddc</code></a>
Make eslint-compact matcher compatible with Stylelint (<a
href="https://redirect.github.com/actions/setup-node/issues/98">#98</a>)</li>
<li><a
href="cdca7365b2"><code>cdca736</code></a>
Bump <code>@​actions/tool-cache</code> from 2.0.1 to 2.0.2 (<a
href="https://redirect.github.com/actions/setup-node/issues/1220">#1220</a>)</li>
<li><a
href="22c0e7494f"><code>22c0e74</code></a>
Bump <code>@​vercel/ncc</code> from 0.38.1 to 0.38.3 (<a
href="https://redirect.github.com/actions/setup-node/issues/1203">#1203</a>)</li>
<li><a
href="a7c2d9473e"><code>a7c2d94</code></a>
actions/cache upgrade (<a
href="https://redirect.github.com/actions/setup-node/issues/1251">#1251</a>)</li>
<li><a
href="802632921f"><code>8026329</code></a>
Bump <code>@​actions/glob</code> from 0.4.0 to 0.5.0 (<a
href="https://redirect.github.com/actions/setup-node/issues/1200">#1200</a>)</li>
<li><a
href="1d0ff469b7"><code>1d0ff46</code></a>
Bump undici from 5.28.4 to 5.28.5 (<a
href="https://redirect.github.com/actions/setup-node/issues/1205">#1205</a>)</li>
<li><a
href="574f09a9fa"><code>574f09a</code></a>
Bump <code>@​types/jest</code> from 29.5.12 to 29.5.14 (<a
href="https://redirect.github.com/actions/setup-node/issues/1201">#1201</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/setup-node/compare/v3...v4">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/setup-node&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-14 14:43:18 -04:00
Christophe Bornet
921573e2b7 core: Add ruff rules SLF (#30666)
Add ruff rules SLF: https://docs.astral.sh/ruff/rules/#flake8-self-slf
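For context, the SLF (flake8-self) rules flag code that reaches into another object's private members from outside its class. A minimal, hypothetical illustration of what `SLF001` reports and the usual remedy:

```python
class Counter:
    """Toy class with a private attribute and a public accessor."""

    def __init__(self) -> None:
        self._count = 0  # private by convention

    def increment(self) -> None:
        self._count += 1

    @property
    def count(self) -> int:
        # Public read access, so callers never touch the underscore name.
        return self._count


c = Counter()
c.increment()
# print(c._count)  # SLF001: private member accessed outside the class
print(c.count)     # preferred: go through the public property  → 1
```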

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-05-14 18:42:39 +00:00
dependabot[bot]
d8a7eda12e build(deps): bump astral-sh/setup-uv from 5 to 6 (#31235)
Bumps [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) from 5
to 6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/setup-uv/releases">astral-sh/setup-uv's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.0 🌈 activate-environment and working-directory</h2>
<h2>Changes</h2>
<p>This version contains some breaking changes which have been building
up for a while. Let's dive into them:</p>
<ul>
<li><a
href="https://github.com/astral-sh/setup-uv/blob/HEAD/#activate-environment">Activate
environment</a></li>
<li><a
href="https://github.com/astral-sh/setup-uv/blob/HEAD/#working-directory">Working
Directory</a></li>
<li><a
href="https://github.com/astral-sh/setup-uv/blob/HEAD/#default-cache-dependency-glob">Default
<code>cache-dependency-glob</code></a></li>
<li><a
href="https://github.com/astral-sh/setup-uv/blob/HEAD/#use-default-cache-dir-on-self-hosted-runners">Use
default cache dir on self hosted runners</a></li>
</ul>
<h3>Activate environment</h3>
<p>In previous versions using the input <code>python-version</code>
automatically activated a venv at the repository root.
This led to some unwanted side-effects, was sometimes unexpected and not
flexible enough.</p>
<p>The venv activation is now explicitly controlled with the new input
<code>activate-environment</code> (false by default):</p>
<pre lang="yaml"><code>- name: Install the latest version of uv and
activate the environment
  uses: astral-sh/setup-uv@v6
  with:
    activate-environment: true
- run: uv pip install pip
</code></pre>
<p>The venv gets created by the <a
href="https://docs.astral.sh/uv/pip/environments/"><code>uv
venv</code></a> command so the python version is controlled by the
<code>python-version</code> input or the files
<code>pyproject.toml</code>, <code>uv.toml</code>,
<code>.python-version</code> in the <code>working-directory</code>.</p>
<h3>Working Directory</h3>
<p>The new input <code>working-directory</code> controls where we look
for <code>pyproject.toml</code>, <code>uv.toml</code> and
<code>.python-version</code> files
which are used to determine the version of uv and python to install.</p>
<p>It can also be used to control where the venv gets created.</p>
<pre lang="yaml"><code>- name: Install uv based on the config files in
the working-directory
  uses: astral-sh/setup-uv@v6
  with:
    working-directory: my/subproject/dir
</code></pre>
<blockquote>
<p>[!CAUTION]</p>
<p>The inputs <code>pyproject-file</code> and <code>uv-file</code> have
been removed.</p>
</blockquote>
<h3>Default <code>cache-dependency-glob</code></h3>
<p><a href="https://github.com/ssbarnea"><code>@​ssbarnea</code></a>
found out that the default <code>cache-dependency-glob</code> was not
suitable for a lot of users.</p>
<p>The old default</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="6b9c6063ab"><code>6b9c606</code></a>
Bump dependencies (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/389">#389</a>)</li>
<li><a
href="ef6bcdff59"><code>ef6bcdf</code></a>
Fix default cache dependency glob (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/388">#388</a>)</li>
<li><a
href="9a311713f4"><code>9a31171</code></a>
chore: update known checksums for 0.6.17 (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/384">#384</a>)</li>
<li><a
href="c7f87aa956"><code>c7f87aa</code></a>
bump to v6 in README (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/382">#382</a>)</li>
<li><a
href="aadfaf08d6"><code>aadfaf0</code></a>
Change default cache-dependency-glob (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/352">#352</a>)</li>
<li><a
href="a0f9da6273"><code>a0f9da6</code></a>
No default UV_CACHE_DIR on selfhosted runners (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/380">#380</a>)</li>
<li><a
href="ec4c691628"><code>ec4c691</code></a>
new inputs activate-environment and working-directory (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/381">#381</a>)</li>
<li><a
href="aa1290542e"><code>aa12905</code></a>
chore: update known checksums for 0.6.16 (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/378">#378</a>)</li>
<li><a
href="fcaddda076"><code>fcaddda</code></a>
chore: update known checksums for 0.6.15 (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/377">#377</a>)</li>
<li><a
href="fb3a0a97fa"><code>fb3a0a9</code></a>
log info on venv activation (<a
href="https://redirect.github.com/astral-sh/setup-uv/issues/375">#375</a>)</li>
<li>See full diff in <a
href="https://github.com/astral-sh/setup-uv/compare/v5...v6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=astral-sh/setup-uv&package-manager=github_actions&previous-version=5&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-14 14:41:13 -04:00
Minh Nguyen
8af0dc5fd6 docs: Update langchain-anthropic version for tutorial with web search tool (#31240)
**Description:** This is a docs change for the `langchain-anthropic`
integration, covering the newly released web search tool ([Claude
doc](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/web-search-tool)).

Issue 1: The sample in [Web Search
section](https://python.langchain.com/docs/integrations/chat/anthropic/#web-search)
did not run. You would get an error as below:
```
File "my_file.py", line 170, in call
    model_with_tools = model.bind_tools([websearch_tool])
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/langchain_anthropic/chat_models.py", line 1363, in bind_tools
    tool if _is_builtin_tool(tool) else convert_to_anthropic_tool(tool)
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/langchain_anthropic/chat_models.py", line 1645, in convert_to_anthropic_tool
    input_schema=oai_formatted["parameters"],
                 ~~~~~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'parameters'
```
This is because the web search tool is only supported as of
`langchain-anthropic==0.3.13`; the [0.3.13
release](https://github.com/langchain-ai/langchain/releases?q=tag%3A%22langchain-anthropic%3D%3D0%22&expanded=true)
notes mention:
> anthropic[patch]: support web search
(https://github.com/langchain-ai/langchain/pull/31157)



Issue 2: The current doc lists an outdated package requirement for the
web search tool: "This guide requires langchain-anthropic>=0.3.10".

Changes:
- Updated the required `langchain-anthropic` package version (0.3.10 ->
0.3.13).
- Added notes for users running the web search sample.

I believe this will help avoid future confusion from readers.

**Issue:** N/A
**Dependencies:** N/A
**Twitter handle:** N/A
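For readers following along, the web search tool passed to `bind_tools` is a plain dict; the shape below follows the Claude docs linked above and should be treated as illustrative rather than authoritative (verify the `"type"` version string against current documentation):

```python
# Illustrative server-tool spec per Anthropic's web search docs.
websearch_tool = {
    "type": "web_search_20250305",
    "name": "web_search",
    "max_uses": 3,  # optional cap on searches per request
}

# With langchain-anthropic >= 0.3.13 installed, binding no longer raises
# KeyError: 'parameters' (actually invoking requires an Anthropic API key):
# from langchain_anthropic import ChatAnthropic
# model = ChatAnthropic(model="claude-3-5-sonnet-latest")
# model_with_tools = model.bind_tools([websearch_tool])
print(sorted(websearch_tool))  # → ['max_uses', 'name', 'type']
```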
2025-05-14 14:19:32 -04:00
Sydney Runkle
7263011b24 perf[core]: remove unnecessary model validators (#31238)
* Remove unnecessary cast of id -> str (can be done with a field setting)
* Remove unnecessary `set_text` model validator (can be done with a
computed field, though we had to make some changes to the `Generation`
class to make this possible)

Before: ~2.4s

Blue circles represent time spent in custom validators :(

<img width="1337" alt="Screenshot 2025-05-14 at 10 10 12 AM"
src="https://github.com/user-attachments/assets/bb4f477f-4ee3-4870-ae93-14ca7f197d55"
/>


After: ~2.2s

<img width="1344" alt="Screenshot 2025-05-14 at 10 11 03 AM"
src="https://github.com/user-attachments/assets/99f97d80-49de-462f-856f-9e7e8662adbc"
/>

We still want to optimize the backwards compatible tool calls model
validator, though I think this might involve breaking changes, so wanted
to separate that into a different PR. This is circled in green.
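The `set_text` swap can be sketched in pydantic v2 terms. This is a hypothetical minimal stand-in, not the real `Generation` definition: instead of a `model_validator` that assigns `text` after init, the value is exposed as a computed property, so no custom validator runs during construction:

```python
from pydantic import BaseModel, computed_field


class Generation(BaseModel):
    # Hypothetical minimal stand-in for the real class.
    message: str

    @computed_field  # included in serialization like a normal field
    @property
    def text(self) -> str:
        # Derived on access instead of being set by a model validator.
        return self.message


gen = Generation(message="hello")
print(gen.text)  # → hello
```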
2025-05-14 10:20:22 -07:00
Sydney Runkle
1523602196 packaging[core]: bump min pydantic version (#31239)
Bumping to a version that's a year old, so seems like a reasonable bump.
2025-05-14 10:01:24 -07:00
ccurme
367566b02f docs: fix notebook (#31233)
This is no longer runnable in CI.
2025-05-14 11:53:38 -04:00
Scott Brenner
29bfbc0ea6 infra: Dependabot configuration to update actions in workflow (#31026)
Noticed a few Actions used in the workflows here are outdated; proposing
a Dependabot configuration to update them. Reference:
https://docs.github.com/en/actions/security-guides/using-githubs-security-features-to-secure-your-use-of-github-actions#keeping-the-actions-in-your-workflows-secure-and-up-to-date

Suggest enabling
https://docs.github.com/en/code-security/dependabot/working-with-dependabot/about-dependabot-on-github-actions-runners#enabling-or-disabling-for-your-repository
as well
2025-05-14 11:40:54 -04:00
Lope Ramos
b8ae2de169 langchain-core[patch]: Incremental record manager deletion should be batched (#31206)
**Description:** Before this commit, if a delete batch contained more
than ~32k keys (sqlite3 >= 3.32) or more than 999 keys (sqlite3 < 3.32),
`record_manager.delete_keys()` would fail, because the generated query
held too many bound variables.

This commit batches the delete operation using `cleanup_batch_size`, as
is already done for `full` cleanup.

Added unit tests for incremental mode covering different delete batch
sizes.
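The batching idea can be sketched with plain `sqlite3` (table name and batch size here are illustrative, not the real record manager schema): delete keys in chunks of `cleanup_batch_size` so no single statement exceeds SQLite's host-parameter limit.

```python
import sqlite3


def delete_keys_batched(conn, keys, cleanup_batch_size=500):
    """Delete rows in batches so each statement stays under SQLite's
    host-parameter limit (999 before sqlite3 3.32, 32766 afterwards)."""
    for i in range(0, len(keys), cleanup_batch_size):
        batch = keys[i : i + cleanup_batch_size]
        placeholders = ", ".join("?" for _ in batch)
        conn.execute(f"DELETE FROM records WHERE key IN ({placeholders})", batch)
    conn.commit()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (key TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO records VALUES (?)", [(f"k{i}",) for i in range(2000)])
delete_keys_batched(conn, [f"k{i}" for i in range(1500)])
print(conn.execute("SELECT COUNT(*) FROM records").fetchone()[0])  # → 500
```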
2025-05-14 11:38:21 -04:00
Sydney Runkle
263c215112 perf[core]: remove generations summation from hot loop (#31231)
1. Removes summation of `ChatGenerationChunk` from hot loops in `stream`
and `astream`
2. Removes run id gen from loop as well (minor impact)

Again, benchmarking on processing ~200k chunks (a poem about broccoli).

Before: ~4.2s

Blue circle is all the time spent adding up gen chunks

<img width="1345" alt="Screenshot 2025-05-14 at 7 48 33 AM"
src="https://github.com/user-attachments/assets/08a59d78-134d-4cd3-9d54-214de689df51"
/>

After: ~2.3s

Blue circle is remaining time spent on adding chunks, which can be
minimized in a future PR by optimizing the `merge_content`,
`merge_dicts`, and `merge_lists` utilities.

<img width="1353" alt="Screenshot 2025-05-14 at 7 50 08 AM"
src="https://github.com/user-attachments/assets/df6b3506-929e-4b6d-b198-7c4e992c6d34"
/>
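The shape of the change can be illustrated with plain strings standing
in for `ChatGenerationChunk` content (a toy sketch, not the actual
`stream` code):

```python
# Summing the accumulator on every iteration is quadratic: each `+`
# re-merges everything accumulated so far.
def stream_with_summation(chunks):
    final = None
    for chunk in chunks:
        final = chunk if final is None else final + chunk  # O(n) copy per step
        yield chunk
    # `final` was only needed at the end, but we paid for it on every step.

# Yielding chunks untouched and merging once afterwards is linear.
def stream_without_summation(chunks):
    collected = []
    for chunk in chunks:
        collected.append(chunk)
        yield chunk
    # A single O(n) merge at the end, if a final value is needed at all.

chunks = [f"tok{i} " for i in range(1000)]
slow = list(stream_with_summation(chunks))
fast = list(stream_without_summation(chunks))
```

Both variants yield identical chunks to the consumer; only the cost of
producing the aggregate differs.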
2025-05-14 08:13:05 -07:00
Sydney Runkle
17b799860f perf[core]: remove costly async helpers for non-end event handlers (#31230)
1. Remove `shielded` decorator from non-end event handlers
2. Exit early with a `self.handlers` check instead of doing unnecessary
asyncio work

Using a benchmark that processes ~200k chunks (a poem about broccoli).

Before: ~15s

Circled in blue is unnecessary event handling time. This is addressed by
point 2 above

<img width="1347" alt="Screenshot 2025-05-14 at 7 37 53 AM"
src="https://github.com/user-attachments/assets/675e0fed-8f37-46c0-90b3-bef3cb9a1e86"
/>

After: ~4.2s

The total time is largely reduced by the removal of the `shielded`
decorator, which holds little significance for non-end handlers.

<img width="1348" alt="Screenshot 2025-05-14 at 7 37 22 AM"
src="https://github.com/user-attachments/assets/54be8a3e-5827-4136-a87b-54b0d40fe331"
/>
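The early-exit idea (point 2) can be sketched like this; the class and
method names are illustrative stand-ins, not the exact langchain-core
internals:

```python
import asyncio

class Handler:
    def __init__(self):
        self.tokens = []

    async def on_new_token(self, token):
        self.tokens.append(token)

class CallbackManager:
    def __init__(self, handlers=None):
        self.handlers = handlers or []

    async def on_new_token(self, token):
        # Early exit: when nobody is listening, skip creating coroutines
        # and gathering them entirely -- this runs once per chunk.
        if not self.handlers:
            return
        await asyncio.gather(
            *(h.on_new_token(token) for h in self.handlers)
        )

async def main():
    handler = Handler()
    manager = CallbackManager([handler])
    empty = CallbackManager()
    for i in range(5):
        await manager.on_new_token(f"t{i}")
        await empty.on_new_token(f"t{i}")  # no asyncio overhead at all
    return handler.tokens

tokens = asyncio.run(main())
```

For a stream of ~200k chunks the empty-handlers path is the common
case, so skipping the asyncio machinery there dominates the savings.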
2025-05-14 07:42:56 -07:00
ccurme
0b8837a0cc openai: support runtime kwargs in embeddings (#31195) 2025-05-14 09:14:40 -04:00
Rares Vernica
4f41b54bcb docs: Fix Google GenAI Embedding params (#31188)
Extend Google parameters in the embeddings tab to include Google GenAI
(Gemini)

**Description:** Update embeddings tab to include example for Google
GenAI (Gemini)

**Issue:** N/A

**Dependencies:** N/A

**Twitter handle:** N/A



---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-14 08:50:11 -04:00
MedlockM
ce0b1a9428 docs: update how_to docs to reflect new loaders interface and functionalities (#31219)
- **Description:** Updates two notebooks in the how_to documentation to
reflect new loader interfaces and functionalities.
- **Issue:** Some how_to notebooks were still using loader interfaces
from previous versions of LangChain and did not demonstrate the latest
loader functionalities (e.g., extracting images with `ImageBlobParser`,
extracting tables in specific output formats, parsing documents using
Vision-Language Models with `ZeroxPDFLoader`, and using
`CloudBlobLoader` in the `GenericLoader`, etc.).
- **Dependencies:** `py-zerox`
- **Twitter handle:** @MarcMedlock2

---------

Co-authored-by: Marc Medlock <marc.medlock@octo.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-13 17:59:01 -04:00
Michael Li
275e3b6710 docs: replace initialize_agent with create_react_agent in searchapi - replace deprecated load_tools (#31203)
2025-05-13 16:19:18 -04:00
Collier King
e53c10e546 docs: update Cloudflare examples for env references (#31205)
- [ ] **Docs Update**: "langchain-cloudflare: add env var references in
example notebooks"
- We've updated our Cloudflare integration example notebooks with
examples showing environmental variables to initialize the class
instances.
2025-05-13 16:18:27 -04:00
Michael Li
395f057243 docs: replace deprecated load_tools in google_finance.ipynb (#31220)
2025-05-13 14:56:11 -04:00
Michael Li
a9ee625f32 docs: replace deprecated load_tools in searx.mdx (#31218)
2025-05-13 14:53:44 -04:00
Michael Li
544648eb71 docs: replace deprecated load_tools in wolfram_alpha.mdx (#31217)
2025-05-13 14:53:29 -04:00
Michael Li
40be8d1d90 docs: replace deprecated load_tools in stackexchange.mdx (#31216)
2025-05-13 14:53:15 -04:00
Michael Li
f034bd7933 docs: replace deprecated load_tools in serpapi.mdx (#31215)
2025-05-13 14:53:01 -04:00
Michael Li
17a04dd598 docs: replace deprecated load_tools in google.mdx (#31214)
2025-05-13 14:52:46 -04:00
Michael Li
a44e707811 docs: replace deprecated load_tools in google_serper.mdx (#31213)
2025-05-13 14:52:30 -04:00
Michael Li
3520520a48 docs: replace deprecated load_tools in golden.mdx (#31212)
2025-05-13 14:52:15 -04:00
Michael Li
09d74504e3 docs: replace deprecated load_tools in dataforseo.mdx (#31211)
2025-05-13 14:51:40 -04:00
Shorthills AI
b2f0fbfea5 Update tools.mdx (#124) (#31207)
Changed `toolkit=ExampleTookit` to `toolkit = ExampleToolkit(...)` in
the tools.mdx file.

Co-authored-by: SiddharthAnandShorthillsAI <siddharth.anand@shorthills.ai>
2025-05-13 11:01:54 -04:00
Michael Li
636a35fc2d docs: replace initialize_agent with create_react_agent in llmonitor.md (#31200)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-12 22:05:11 +00:00
Michael Li
7b9feb60cc docs: replace initialize_agent with create_react_agent in openweathermap - replace deprecated load_tools (#31202)
…map.ipynb

Update openweathermap markdown file for tools

2025-05-12 15:12:02 -04:00
Michael Li
87add0809f docs: replace initialize_agent with create_react_agent in graphql.ipynb - replace deprecated load_tools (#31201)
Replace the deprecated load_tools

2025-05-12 15:11:39 -04:00
ccurme
868cfc4a8f openai: ignore function_calls if tool_calls are present (#31198)
Some providers include (legacy) function calls in `additional_kwargs` in
addition to tool calls. We currently unpack both function calls and tool
calls if present, but OpenAI will raise 400 in this case.

This can come up if providers are mixed in a tool-calling loop. Example:
```python
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool


@tool
def get_weather(location: str) -> str:
    """Get weather at a location."""
    return "It's sunny."



gemini = init_chat_model("google_genai:gemini-2.0-flash-001").bind_tools([get_weather])
openai = init_chat_model("openai:gpt-4.1-mini").bind_tools([get_weather])

input_message = HumanMessage("What's the weather in Boston?")
tool_call_message = gemini.invoke([input_message])

assert len(tool_call_message.tool_calls) == 1
tool_call = tool_call_message.tool_calls[0]
tool_message = get_weather.invoke(tool_call)

response = openai.invoke(  # currently raises 400 / BadRequestError
    [input_message, tool_call_message, tool_message]
)
```

Here we ignore function calls if tool calls are present.
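The fix amounts to filtering out the legacy key when structured tool
calls exist. Roughly, as a simplified sketch of the payload-construction
step (not the exact langchain-openai code; the function name is
illustrative):

```python
def clean_additional_kwargs(additional_kwargs, tool_calls):
    # If structured tool_calls are present, drop any legacy
    # "function_call" so OpenAI does not receive both and return a 400.
    if tool_calls and "function_call" in additional_kwargs:
        return {
            k: v for k, v in additional_kwargs.items() if k != "function_call"
        }
    return additional_kwargs

msg_kwargs = {"function_call": {"name": "get_weather"}, "foo": "bar"}
tool_calls = [{"name": "get_weather", "args": {"location": "Boston"}, "id": "1"}]
cleaned = clean_additional_kwargs(msg_kwargs, tool_calls)
```

When no tool calls are present, `additional_kwargs` passes through
unchanged, so legacy function-call-only providers keep working.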
2025-05-12 13:50:56 -04:00
Christophe Bornet
83d006190d core: Fix some private member accesses (#30912)
See https://github.com/langchain-ai/langchain/pull/30666

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-05-12 17:42:26 +00:00
CtrlMj
1e56c66f86 core: Fix issue 31035 alias fields in base tool langchain core (#31112)
**Description:** The `inspect` package in Python skips over the aliases
set in the schema of a Pydantic model. This is a workaround to include
the aliases from the original input.
**Issue:** #31035


Cc: @ccurme @eyurtsev
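The problem and the workaround can be illustrated with stdlib `inspect`
alone (the alias map and helper here are hypothetical; in the real fix
the aliases come from the Pydantic schema):

```python
import inspect

def bind_with_aliases(func, alias_map, kwargs):
    # inspect.signature only reports the Python parameter names, so
    # aliased keys must be translated back before binding.
    params = set(inspect.signature(func).parameters)
    bound = {}
    for key, value in kwargs.items():
        target = alias_map.get(key, key)
        if target in params:
            bound[target] = value
    return bound

def run_tool(query_: str) -> str:
    # "query_" avoids a name clash; the schema exposes it as "query".
    return query_.upper()

args = bind_with_aliases(run_tool, {"query": "query_"}, {"query": "hi"})
```

Without the alias translation, the `"query"` key would be silently
dropped because `inspect` only knows about `query_`.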

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-12 11:04:13 -04:00
mateencog
92af7b0933 infra: Suppress error in make api_docs_clean if index.md is missing (#31129)
- **Description:** Added the `-f` flag to `rm` when calling `make
api_docs_clean`, to suppress the error from make if `index.md` doesn't
exist.
- **Dependencies:** none

On calling `make api_docs_clean`:

Behavior without this PR:
```
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
rm docs/api_reference/index.md
rm: cannot remove 'docs/api_reference/index.md': No such file or directory
make: *** [Makefile:51: api_docs_clean] Error 1
```
After this PR:

```
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
rm -f docs/api_reference/index.md
```
2025-05-11 17:26:49 -04:00
meirk-brd
e6147ce5d2 docs: Add Brightdata integration documentation (#31114)
- **Description:** Integrated the Bright Data package to enable
LangChain users to seamlessly incorporate Bright Data into their agents.
- **Dependencies:** None
- **LinkedIn handle:** [Bright
Data](https://www.linkedin.com/company/bright-data)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-11 16:07:21 +00:00
Michael Li
0d59fe9789 docs: replace initialize_agent with create_react_agent in agent_vectorstore.ipynb (#31183)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-11 11:19:18 -04:00
ccurme
ff9183fd3c docs: add Gel integration (#31186)
Continued from https://github.com/langchain-ai/langchain/pull/31050

---------

Co-authored-by: deepbuzin <contactbuzin@gmail.com>
2025-05-11 10:17:18 -04:00
Michael Li
65fbbb0249 docs: replace initialize_agent with create_react_agent in searchapi.ipynb (#31184)
2025-05-11 09:42:25 -04:00
ccurme
77d3f04e0a docs: add Aerospike to package registry (#31185)
Missed as part of https://github.com/langchain-ai/langchain/pull/31156
2025-05-11 09:33:58 -04:00
dwelch-spike
0dee089ba7 docs: document the move of the aerospike vector store integration to langchain-aerospike vec-595 (#31156)
**Description:** The Aerospike Vector Search vector store integration
has moved out of langchain-community into its own repository,
https://github.com/aerospike/langchain-aerospike. This PR updates the
langchain documentation to reference it.


If no one reviews your PR within a few days, please @-mention one of
baskaryan, eyurtsev, ccurme, vbarda, hwchase17.
2025-05-11 09:30:29 -04:00
Asif Mehmood
2ec74fea44 docs: update DoctranPropertyExtractor import path and fix typo (#31177)
**Description:**
Updated the import path for `DoctranPropertyExtractor` from
`langchain_community.document_loaders` to
`langchain_community.document_transformers` in multiple locations to
reflect recent package structure changes. Also corrected a minor typo in
the word "variable".

**Issue:**
N/A

**Dependencies:**
N/A

**LinkedIn handle:** For shout out if announced [Asif
Mehmood](https://www.linkedin.com/in/asifmehmood1997/).
2025-05-10 15:43:40 -04:00
Sumin Shin
683da2c9e9 text-splitters: Fix regex separator merge bug in CharacterTextSplitter (#31137)
**Description:**
Fix the merge logic in `CharacterTextSplitter.split_text` so that when
using a regex lookahead separator (`is_separator_regex=True`) with
`keep_separator=False`, the raw pattern is not re-inserted between
chunks.

**Issue:**
Fixes #31136 

**Dependencies:**
None

**Twitter handle:**
None

Since this is my first open-source PR, please feel free to point out any
mistakes, and I'll be eager to make corrections.
2025-05-10 15:42:03 -04:00
Michael Li
0ef4ac75b7 docs: remove duplicated and inaccurate milvus doc (part of langchain-ai#31104) (#31154) 2025-05-10 19:38:11 +00:00
ccurme
23ec06b481 docs: showcase gemini-2.0-flash-preview-image-generation (#31176) 2025-05-09 11:17:15 -04:00
ccurme
e9e597be8e docs: update sort order in integrations table (#31171) 2025-05-08 20:44:21 +00:00
ccurme
0ba8697286 infra: add to vercel overrides (#31170)
Works around an incompatibility between the `tenacity` dependency pins
of langchain-redis and langchain-ai21
2025-05-08 20:36:43 +00:00
ccurme
9aac8923a3 docs: add web search to anthropic docs (#31169) 2025-05-08 16:20:11 -04:00
Victor Hiairrassary
efc52e18e9 docs: fix typing in how_to/custom_tools.ipynb (#31164)
Fix typing in how_to/custom_tools.ipynb
2025-05-08 13:51:42 -04:00
ccurme
2d202f9762 anthropic[patch]: split test into two (#31167) 2025-05-08 09:23:36 -04:00
76 changed files with 4865 additions and 3489 deletions

.github/dependabot.yml vendored Normal file

@@ -0,0 +1,11 @@
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
# and
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"


@@ -12,7 +12,7 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Use Node.js 18.x
uses: actions/setup-node@v3
uses: actions/setup-node@v4
with:
node-version: 18.x
cache: "yarn"


@@ -21,12 +21,12 @@ jobs:
# We have to use 3.12, 3.13 is not yet supported
- name: Install uv
uses: astral-sh/setup-uv@v5
uses: astral-sh/setup-uv@v6
with:
python-version: "3.12"
# Using this action is still necessary for CodSpeed to work
- uses: actions/setup-python@v3
- uses: actions/setup-python@v5
with:
python-version: "3.12"


@@ -48,7 +48,7 @@ api_docs_quick_preview:
api_docs_clean:
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
rm docs/api_reference/index.md
rm -f docs/api_reference/index.md
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.


@@ -22,7 +22,19 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 1,
"id": "e8d63d14-138d-4aa5-a741-7fd3537d00aa",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2e87c10a",
"metadata": {},
"outputs": [],
@@ -37,7 +49,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 3,
"id": "0b7b772b",
"metadata": {},
"outputs": [],
@@ -54,19 +66,10 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 4,
"id": "f2675861",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"outputs": [],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"\n",
@@ -81,7 +84,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "bc5403d4",
"metadata": {},
"outputs": [],
@@ -93,17 +96,25 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "1431cded",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"USER_AGENT environment variable not set, consider setting it to identify your requests.\n"
]
}
],
"source": [
"from langchain_community.document_loaders import WebBaseLoader"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "915d3ff3",
"metadata": {},
"outputs": [],
@@ -113,16 +124,20 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "96a2edf8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"name": "stderr",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
"Created a chunk of size 2122, which is longer than the specified 1000\n",
"Created a chunk of size 3187, which is longer than the specified 1000\n",
"Created a chunk of size 1017, which is longer than the specified 1000\n",
"Created a chunk of size 1049, which is longer than the specified 1000\n",
"Created a chunk of size 1256, which is longer than the specified 1000\n",
"Created a chunk of size 2321, which is longer than the specified 1000\n"
]
}
],
@@ -135,14 +150,6 @@
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "71ecef90",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "c0a6c031",
@@ -153,31 +160,30 @@
},
{
"cell_type": "code",
"execution_count": 43,
"execution_count": 9,
"id": "eb142786",
"metadata": {},
"outputs": [],
"source": [
"# Import things that are needed generically\n",
"from langchain.agents import AgentType, Tool, initialize_agent\n",
"from langchain_openai import OpenAI"
"from langchain.agents import Tool"
]
},
{
"cell_type": "code",
"execution_count": 44,
"execution_count": 10,
"id": "850bc4e9",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name=\"State of Union QA System\",\n",
" name=\"state_of_union_qa_system\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question.\",\n",
" ),\n",
" Tool(\n",
" name=\"Ruff QA System\",\n",
" name=\"ruff_qa_system\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question.\",\n",
" ),\n",
@@ -186,94 +192,116 @@
},
{
"cell_type": "code",
"execution_count": 45,
"id": "fc47f230",
"execution_count": 11,
"id": "70c461d8-aaca-4f2a-9a93-bf35841cc615",
"metadata": {},
"outputs": [],
"source": [
"# Construct the agent. We will use the default agent type here.\n",
"# See documentation for a full list of options.\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
]
},
{
"cell_type": "code",
"execution_count": 46,
"id": "10ca2db8",
"execution_count": 12,
"id": "a6d2b911-3044-4430-a35b-75832bb45334",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What did biden say about ketanji brown jackson in the state of the union address?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" state_of_union_qa_system (call_26QlRdsptjEJJZjFsAUjEbaH)\n",
" Call ID: call_26QlRdsptjEJJZjFsAUjEbaH\n",
" Args:\n",
" __arg1: What did Biden say about Ketanji Brown Jackson in the state of the union address?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: state_of_union_qa_system\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out what Biden said about Ketanji Brown Jackson in the State of the Union address.\n",
"Action: State of Union QA System\n",
"Action Input: What did Biden say about Ketanji Brown Jackson in the State of the Union address?\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\u001b[0m\n",
" Biden said that he nominated Ketanji Brown Jackson for the United States Supreme Court and praised her as one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"In the State of the Union address, Biden said that he nominated Ketanji Brown Jackson for the United States Supreme Court and praised her as one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence.\n"
]
},
{
"data": {
"text/plain": [
"\"Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\""
]
},
"execution_count": 46,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\n",
" \"What did biden say about ketanji brown jackson in the state of the union address?\"\n",
")"
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"What did biden say about ketanji brown jackson in the state of the union address?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "4e91b811",
"execution_count": 13,
"id": "e836b4cd-abf7-49eb-be0e-b9ad501213f3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Why use ruff over flake8?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" ruff_qa_system (call_KqDoWeO9bo9OAXdxOsCb6msC)\n",
" Call ID: call_KqDoWeO9bo9OAXdxOsCb6msC\n",
" Args:\n",
" __arg1: Why use ruff over flake8?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: ruff_qa_system\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out the advantages of using ruff over flake8\n",
"Action: Ruff QA System\n",
"Action Input: What are the advantages of using ruff over flake8?\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.\u001b[0m\n",
"There are a few reasons why someone might choose to use Ruff over Flake8:\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"1. Larger rule set: Ruff implements over 800 rules, while Flake8 only implements around 200. This means that Ruff can catch more potential issues in your code.\n",
"\n",
"2. Better compatibility with other tools: Ruff is designed to work well with other tools like Black, isort, and type checkers like Mypy. This means that you can use Ruff alongside these tools to get more comprehensive feedback on your code.\n",
"\n",
"3. Automatic fixing of lint violations: Unlike Flake8, Ruff is capable of automatically fixing its own lint violations. This can save you time and effort when fixing issues in your code.\n",
"\n",
"4. Native implementation of popular Flake8 plugins: Ruff re-implements some of the most popular Flake8 plugins natively, which means you don't have to install and configure multiple plugins to get the same functionality.\n",
"\n",
"Overall, Ruff offers a more comprehensive and user-friendly experience compared to Flake8, making it a popular choice for many developers.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"You might choose to use Ruff over Flake8 for several reasons:\n",
"\n",
"1. Ruff has a much larger rule set, implementing over 800 rules compared to Flake8's roughly 200, so it can catch more potential issues.\n",
"2. Ruff is designed to work better with other tools like Black, isort, and type checkers like Mypy, providing more comprehensive code feedback.\n",
"3. Ruff can automatically fix its own lint violations, which Flake8 cannot, saving time and effort.\n",
"4. Ruff natively implements some popular Flake8 plugins, so you don't need to install and configure multiple plugins separately.\n",
"\n",
"Overall, Ruff offers a more comprehensive and user-friendly experience compared to Flake8.\n"
]
},
{
"data": {
"text/plain": [
"'Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.'"
]
},
"execution_count": 47,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Why use ruff over flake8?\")"
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"Why use ruff over flake8?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
@@ -296,20 +324,20 @@
},
{
"cell_type": "code",
"execution_count": 48,
"execution_count": 14,
"id": "f59b377e",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name=\"State of Union QA System\",\n",
" name=\"state_of_union_qa_system\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question.\",\n",
" return_direct=True,\n",
" ),\n",
" Tool(\n",
" name=\"Ruff QA System\",\n",
" name=\"ruff_qa_system\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question.\",\n",
" return_direct=True,\n",
@@ -319,90 +347,92 @@
},
{
"cell_type": "code",
"execution_count": 49,
"id": "8615707a",
"execution_count": 15,
"id": "06f69c0f-c83d-4b7f-a1c8-7614aced3bae",
"metadata": {},
"outputs": [],
"source": [
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
]
},
{
"cell_type": "code",
"execution_count": 50,
"id": "36e718a9",
"execution_count": 16,
"id": "a6b38c12-ac25-43c0-b9c2-2b1985ab4825",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What did biden say about ketanji brown jackson in the state of the union address?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" state_of_union_qa_system (call_yjxh11OnZiauoyTAn9npWdxj)\n",
" Call ID: call_yjxh11OnZiauoyTAn9npWdxj\n",
" Args:\n",
" __arg1: What did Biden say about Ketanji Brown Jackson in the state of the union address?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: state_of_union_qa_system\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out what Biden said about Ketanji Brown Jackson in the State of the Union address.\n",
"Action: State of Union QA System\n",
"Action Input: What did Biden say about Ketanji Brown Jackson in the State of the Union address?\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
" Biden said that he nominated Ketanji Brown Jackson for the United States Supreme Court and praised her as one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence.\n"
]
},
{
"data": {
"text/plain": [
"\" Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\""
]
},
"execution_count": 50,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\n",
" \"What did biden say about ketanji brown jackson in the state of the union address?\"\n",
")"
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"What did biden say about ketanji brown jackson in the state of the union address?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 51,
"id": "edfd0a1a",
"execution_count": 17,
"id": "88f08d86-7972-4148-8128-3ac8898ad68a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Why use ruff over flake8?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" ruff_qa_system (call_GiWWfwF6wbbRFQrHlHbhRtGW)\n",
" Call ID: call_GiWWfwF6wbbRFQrHlHbhRtGW\n",
" Args:\n",
" __arg1: What are the advantages of using ruff over flake8 for Python linting?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: ruff_qa_system\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out the advantages of using ruff over flake8\n",
"Action: Ruff QA System\n",
"Action Input: What are the advantages of using ruff over flake8?\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
" Ruff has a larger rule set, supports automatic fixing of lint violations, and does not require the installation of additional plugins. It also has better compatibility with Black and can be used alongside a type checker for more comprehensive code analysis.\n"
]
},
{
"data": {
"text/plain": [
"' Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.'"
]
},
"execution_count": 51,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Why use ruff over flake8?\")"
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"Why use ruff over flake8?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
@@ -417,19 +447,19 @@
},
{
"cell_type": "code",
"execution_count": 57,
"execution_count": 18,
"id": "d397a233",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name=\"State of Union QA System\",\n",
" name=\"state_of_union_qa_system\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question, not referencing any obscure pronouns from the conversation before.\",\n",
" ),\n",
" Tool(\n",
" name=\"Ruff QA System\",\n",
" name=\"ruff_qa_system\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question, not referencing any obscure pronouns from the conversation before.\",\n",
" ),\n",
@@ -438,60 +468,60 @@
},
{
"cell_type": "code",
"execution_count": 58,
"id": "06157240",
"execution_count": 19,
"id": "41743f29-150d-40ba-aa8e-3a63c32216aa",
"metadata": {},
"outputs": [],
"source": [
"# Construct the agent. We will use the default agent type here.\n",
"# See documentation for a full list of options.\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
]
},
{
"cell_type": "code",
"execution_count": 59,
"id": "b492b520",
"execution_count": 20,
"id": "e20e81dd-284a-4d07-9160-63a84b65cba8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What tool does ruff use to run over Jupyter Notebooks? Did the president mention that tool in the state of the union?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" ruff_qa_system (call_VOnxiOEehauQyVOTjDJkR5L2)\n",
" Call ID: call_VOnxiOEehauQyVOTjDJkR5L2\n",
" Args:\n",
" __arg1: What tool does ruff use to run over Jupyter Notebooks?\n",
" state_of_union_qa_system (call_AbSsXAxwe4JtCRhga926SxOZ)\n",
" Call ID: call_AbSsXAxwe4JtCRhga926SxOZ\n",
" Args:\n",
" __arg1: Did the president mention the tool that ruff uses to run over Jupyter Notebooks in the state of the union?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: state_of_union_qa_system\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out what tool ruff uses to run over Jupyter Notebooks, and if the president mentioned it in the state of the union.\n",
"Action: Ruff QA System\n",
"Action Input: What tool does ruff use to run over Jupyter Notebooks?\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m Ruff is integrated into nbQA, a tool for running linters and code formatters over Jupyter Notebooks. After installing ruff and nbqa, you can run Ruff over a notebook like so: > nbqa ruff Untitled.html\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to find out if the president mentioned this tool in the state of the union.\n",
"Action: State of Union QA System\n",
"Action Input: Did the president mention nbQA in the state of the union?\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m No, the president did not mention nbQA in the state of the union.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: No, the president did not mention nbQA in the state of the union.\u001b[0m\n",
" No, the president did not mention the tool that ruff uses to run over Jupyter Notebooks in the state of the union.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"Ruff does not support source.organizeImports and source.fixAll code actions in Jupyter Notebooks. Additionally, the president did not mention the tool that ruff uses to run over Jupyter Notebooks in the state of the union.\n"
]
},
{
"data": {
"text/plain": [
"'No, the president did not mention nbQA in the state of the union.'"
]
},
"execution_count": 59,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\n",
" \"What tool does ruff use to run over Jupyter Notebooks? Did the president mention that tool in the state of the union?\"\n",
")"
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"What tool does ruff use to run over Jupyter Notebooks? Did the president mention that tool in the state of the union?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
@@ -519,7 +549,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.12.4"
}
},
"nbformat": 4,


@@ -192,7 +192,7 @@ All Toolkits expose a `get_tools` method which returns a list of tools. You can
```python
# Initialize a toolkit
toolkit = ExampleTookit(...)
toolkit = ExampleToolkit(...)
# Get list of tools
tools = toolkit.get_tools()


@@ -530,7 +530,7 @@
"\n",
" def _run(\n",
" self, a: int, b: int, run_manager: Optional[CallbackManagerForToolRun] = None\n",
" ) -> str:\n",
" ) -> int:\n",
" \"\"\"Use the tool.\"\"\"\n",
" return a * b\n",
"\n",
@@ -539,7 +539,7 @@
" a: int,\n",
" b: int,\n",
" run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n",
" ) -> str:\n",
" ) -> int:\n",
" \"\"\"Use the tool asynchronously.\"\"\"\n",
" # If the calculation is cheap, you can just delegate to the sync implementation\n",
" # as shown below.\n",


@@ -67,9 +67,34 @@
"When implementing a document loader do **NOT** provide parameters via the `lazy_load` or `alazy_load` methods.\n",
"\n",
"All configuration is expected to be passed through the initializer (__init__). This was a design choice made by LangChain to make sure that once a document loader has been instantiated it has all the information needed to load documents.\n",
":::\n",
"\n",
":::"
]
},
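The design rule above (all configuration through `__init__`; `lazy_load` takes no arguments) can be sketched without any LangChain imports. `LineLoader` and the plain-dict records below are illustrative stand-ins for the real `BaseLoader` and `Document` types:

```python
import tempfile

class LineLoader:
    """Illustrative loader: configured once in __init__, loads lazily."""

    def __init__(self, path: str) -> None:
        self.path = path  # all configuration lives on the instance

    def lazy_load(self):  # <-- takes no arguments, by design
        with open(self.path, encoding="utf-8") as f:
            for line_number, line in enumerate(f):
                yield {"page_content": line, "line_number": line_number}

# demo on a throwaway two-line file
with tempfile.NamedTemporaryFile(
    "w", suffix=".txt", delete=False, encoding="utf-8"
) as tmp:
    tmp.write("meow meow\n meow meow\n")
    path = tmp.name

docs = list(LineLoader(path).lazy_load())
```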
{
"cell_type": "markdown",
"id": "520edbbabde7df6e",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Install **langchain-core** and **langchain_community**."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "936bd5fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_core langchain_community"
]
},
{
"cell_type": "markdown",
"id": "a93f17a87d323bdd",
"metadata": {},
"source": [
"### Implementation\n",
"\n",
"Let's create an example of a standard document loader that loads a file and creates a document from each line in the file."
@@ -77,9 +102,13 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "20f128c1-1a2c-43b9-9e7b-cf9b3a86d1db",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:56.764714Z",
"start_time": "2025-04-21T08:49:56.623508Z"
},
"tags": []
},
"outputs": [],
@@ -122,7 +151,7 @@
" self,\n",
" ) -> AsyncIterator[Document]: # <-- Does not take any arguments\n",
" \"\"\"An async lazy loader that reads a file line by line.\"\"\"\n",
" # Requires aiofiles (install with pip)\n",
" # Requires aiofiles\n",
" # https://github.com/Tinche/aiofiles\n",
" import aiofiles\n",
"\n",
@@ -151,9 +180,13 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "b1751198-c6dd-4149-95bd-6370ce8fa06f",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:56.776521Z",
"start_time": "2025-04-21T08:49:56.773511Z"
},
"tags": []
},
"outputs": [],
@@ -167,9 +200,23 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "c5210428",
"metadata": {},
"outputs": [],
"source": [
"%pip install -q aiofiles"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "71ef1482-f9de-4852-b5a4-0938f350612e",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:57.972675Z",
"start_time": "2025-04-21T08:49:57.969411Z"
},
"tags": []
},
"outputs": [
@@ -179,10 +226,12 @@
"text": [
"\n",
"<class 'langchain_core.documents.base.Document'>\n",
"page_content='meow meow🐱 \\n' metadata={'line_number': 0, 'source': './meow.txt'}\n",
"page_content='meow meow🐱 \n",
"' metadata={'line_number': 0, 'source': './meow.txt'}\n",
"\n",
"<class 'langchain_core.documents.base.Document'>\n",
"page_content=' meow meow🐱 \\n' metadata={'line_number': 1, 'source': './meow.txt'}\n",
"page_content=' meow meow🐱 \n",
"' metadata={'line_number': 1, 'source': './meow.txt'}\n",
"\n",
"<class 'langchain_core.documents.base.Document'>\n",
"page_content=' meow😻😻' metadata={'line_number': 2, 'source': './meow.txt'}\n"
@@ -199,9 +248,13 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 6,
"id": "1588e78c-e81a-4d40-b36c-634242c84a6a",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.028989Z",
"start_time": "2025-04-21T08:49:58.021972Z"
},
"tags": []
},
"outputs": [
@@ -211,10 +264,12 @@
"text": [
"\n",
"<class 'langchain_core.documents.base.Document'>\n",
"page_content='meow meow🐱 \\n' metadata={'line_number': 0, 'source': './meow.txt'}\n",
"page_content='meow meow🐱 \n",
"' metadata={'line_number': 0, 'source': './meow.txt'}\n",
"\n",
"<class 'langchain_core.documents.base.Document'>\n",
"page_content=' meow meow🐱 \\n' metadata={'line_number': 1, 'source': './meow.txt'}\n",
"page_content=' meow meow🐱 \n",
"' metadata={'line_number': 1, 'source': './meow.txt'}\n",
"\n",
"<class 'langchain_core.documents.base.Document'>\n",
"page_content=' meow😻😻' metadata={'line_number': 2, 'source': './meow.txt'}\n"
@@ -245,21 +300,25 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "df5ad46a-9e00-4073-8505-489fc4f3799e",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.078111Z",
"start_time": "2025-04-21T08:49:58.071421Z"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='meow meow🐱 \\n', metadata={'line_number': 0, 'source': './meow.txt'}),\n",
" Document(page_content=' meow meow🐱 \\n', metadata={'line_number': 1, 'source': './meow.txt'}),\n",
" Document(page_content=' meow😻😻', metadata={'line_number': 2, 'source': './meow.txt'})]"
"[Document(metadata={'line_number': 0, 'source': './meow.txt'}, page_content='meow meow🐱 \\n'),\n",
" Document(metadata={'line_number': 1, 'source': './meow.txt'}, page_content=' meow meow🐱 \\n'),\n",
" Document(metadata={'line_number': 2, 'source': './meow.txt'}, page_content=' meow😻😻')]"
]
},
"execution_count": 6,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -286,9 +345,13 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "209f6a91-2f15-4cb2-9237-f79fc9493b82",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.124363Z",
"start_time": "2025-04-21T08:49:58.120782Z"
},
"tags": []
},
"outputs": [],
@@ -313,9 +376,13 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"id": "b1275c59-06d4-458f-abd2-fcbad0bde442",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.172506Z",
"start_time": "2025-04-21T08:49:58.167416Z"
},
"tags": []
},
"outputs": [],
@@ -326,21 +393,25 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 10,
"id": "56a3d707-2086-413b-ae82-50e92ddb27f6",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.218426Z",
"start_time": "2025-04-21T08:49:58.214684Z"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='meow meow🐱 \\n', metadata={'line_number': 1, 'source': './meow.txt'}),\n",
" Document(page_content=' meow meow🐱 \\n', metadata={'line_number': 2, 'source': './meow.txt'}),\n",
" Document(page_content=' meow😻😻', metadata={'line_number': 3, 'source': './meow.txt'})]"
"[Document(metadata={'line_number': 1, 'source': './meow.txt'}, page_content='meow meow🐱 \\n'),\n",
" Document(metadata={'line_number': 2, 'source': './meow.txt'}, page_content=' meow meow🐱 \\n'),\n",
" Document(metadata={'line_number': 3, 'source': './meow.txt'}, page_content=' meow😻😻')]"
]
},
"execution_count": 8,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
@@ -359,20 +430,24 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 11,
"id": "20d03092-ba35-47d7-b612-9d1631c261cd",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.267755Z",
"start_time": "2025-04-21T08:49:58.264369Z"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='some data from memory\\n', metadata={'line_number': 1, 'source': None}),\n",
" Document(page_content='meow', metadata={'line_number': 2, 'source': None})]"
"[Document(metadata={'line_number': 1, 'source': None}, page_content='some data from memory\\n'),\n",
" Document(metadata={'line_number': 2, 'source': None}, page_content='meow')]"
]
},
"execution_count": 9,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -394,9 +469,13 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 12,
"id": "a9e92e0e-c8da-401c-b8c6-f0676004cf58",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.330432Z",
"start_time": "2025-04-21T08:49:58.327223Z"
},
"tags": []
},
"outputs": [],
@@ -406,9 +485,13 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 13,
"id": "6b559d30-8b0c-4e45-86b1-e4602d9aaa7e",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.383905Z",
"start_time": "2025-04-21T08:49:58.380658Z"
},
"tags": []
},
"outputs": [
@@ -418,7 +501,7 @@
"'utf-8'"
]
},
"execution_count": 11,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
@@ -429,9 +512,13 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 14,
"id": "2f7b145a-9c6f-47f9-9487-1f4b25aff46f",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.443829Z",
"start_time": "2025-04-21T08:49:58.440222Z"
},
"tags": []
},
"outputs": [
@@ -441,7 +528,7 @@
"b'meow meow\\xf0\\x9f\\x90\\xb1 \\n meow meow\\xf0\\x9f\\x90\\xb1 \\n meow\\xf0\\x9f\\x98\\xbb\\xf0\\x9f\\x98\\xbb'"
]
},
"execution_count": 12,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@@ -452,9 +539,13 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 15,
"id": "9b9482fa-c49c-42cd-a2ef-80bc93214631",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.498609Z",
"start_time": "2025-04-21T08:49:58.494903Z"
},
"tags": []
},
"outputs": [
@@ -464,7 +555,7 @@
"'meow meow🐱 \\n meow meow🐱 \\n meow😻😻'"
]
},
"execution_count": 13,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@@ -475,19 +566,23 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 16,
"id": "04cc7a81-290e-4ef8-b7e1-d885fcc59ece",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.551353Z",
"start_time": "2025-04-21T08:49:58.547518Z"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"<contextlib._GeneratorContextManager at 0x743f34324450>"
"<contextlib._GeneratorContextManager at 0x74b8d42e9940>"
]
},
"execution_count": 14,
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
@@ -498,9 +593,13 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 17,
"id": "ec8de0ab-51d7-4e41-82c9-3ce0a6fdc2cd",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.599576Z",
"start_time": "2025-04-21T08:49:58.596567Z"
},
"tags": []
},
"outputs": [
@@ -510,7 +609,7 @@
"{'foo': 'bar'}"
]
},
"execution_count": 15,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -521,9 +620,13 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 18,
"id": "19eae991-ae48-43c2-8952-7347cdb76a34",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.649634Z",
"start_time": "2025-04-21T08:49:58.646313Z"
},
"tags": []
},
"outputs": [
@@ -533,7 +636,7 @@
"'./meow.txt'"
]
},
"execution_count": 16,
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
@@ -551,65 +654,50 @@
"\n",
"While a parser encapsulates the logic needed to parse binary data into documents, *blob loaders* encapsulate the logic that's necessary to load blobs from a given storage location.\n",
"\n",
"At the moment, `LangChain` only supports `FileSystemBlobLoader`.\n",
"At the moment, `LangChain` supports `FileSystemBlobLoader` and `CloudBlobLoader`.\n",
"\n",
"You can use the `FileSystemBlobLoader` to load blobs and then use the parser to parse them."
]
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 19,
"id": "c093becb-2e84-4329-89e3-956a3bd765e5",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:49:58.718259Z",
"start_time": "2025-04-21T08:49:58.705367Z"
},
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.document_loaders.blob_loaders import FileSystemBlobLoader\n",
"\n",
"blob_loader = FileSystemBlobLoader(path=\".\", glob=\"*.mdx\", show_progress=True)"
"filesystem_blob_loader = FileSystemBlobLoader(\n",
" path=\".\", glob=\"*.mdx\", show_progress=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "77739dab-2a1e-4b64-8daa-fee8aa029972",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "45e85d3f63224bb59db02a40ae2e3268",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/8 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='# Microsoft Office\\n' metadata={'line_number': 1, 'source': 'office_file.mdx'}\n",
"page_content='# Markdown\\n' metadata={'line_number': 1, 'source': 'markdown.mdx'}\n",
"page_content='# JSON\\n' metadata={'line_number': 1, 'source': 'json.mdx'}\n",
"page_content='---\\n' metadata={'line_number': 1, 'source': 'pdf.mdx'}\n",
"page_content='---\\n' metadata={'line_number': 1, 'source': 'index.mdx'}\n",
"page_content='# File Directory\\n' metadata={'line_number': 1, 'source': 'file_directory.mdx'}\n",
"page_content='# CSV\\n' metadata={'line_number': 1, 'source': 'csv.mdx'}\n",
"page_content='# HTML\\n' metadata={'line_number': 1, 'source': 'html.mdx'}\n"
]
}
],
"execution_count": null,
"id": "21b91bad",
"metadata": {},
"outputs": [],
"source": [
"%pip install -q tqdm"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "40be670b",
"metadata": {},
"outputs": [],
"source": [
"parser = MyParser()\n",
"for blob in blob_loader.yield_blobs():\n",
"for blob in filesystem_blob_loader.yield_blobs():\n",
" for doc in parser.lazy_parse(blob):\n",
" print(doc)\n",
" break"
@@ -620,56 +708,104 @@
"id": "f016390c-d38b-4261-946d-34eefe546df7",
"metadata": {},
"source": [
"### Generic Loader\n",
"\n",
"LangChain has a `GenericLoader` abstraction which composes a `BlobLoader` with a `BaseBlobParser`.\n",
"\n",
"`GenericLoader` is meant to provide standardized classmethods that make it easy to use existing `BlobLoader` implementations. At the moment, only the `FileSystemBlobLoader` is supported."
"Or, you can use `CloudBlobLoader` to load blobs from a cloud storage location (Supports s3://, az://, gs://, file:// schemes)."
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "1de74daf-70ee-4616-9089-d28e26b16851",
"execution_count": null,
"id": "8210714e",
"metadata": {},
"outputs": [],
"source": [
"%pip install -q 'cloudpathlib[s3]'"
]
},
{
"cell_type": "markdown",
"id": "d3f84501-b0aa-4a60-aad2-5109cbd37d4f",
"metadata": {},
"source": [
"```python\n",
"from cloudpathlib import S3Client, S3Path\n",
"from langchain_community.document_loaders.blob_loaders import CloudBlobLoader\n",
"\n",
"client = S3Client(no_sign_request=True)\n",
"client.set_as_default_client()\n",
"\n",
"path = S3Path(\n",
" \"s3://bucket-01\", client=client\n",
") # Supports s3://, az://, gs://, file:// schemes.\n",
"\n",
"cloud_loader = CloudBlobLoader(path, glob=\"**/*.pdf\", show_progress=True)\n",
"\n",
"for blob in cloud_loader.yield_blobs():\n",
" print(blob)\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "40c361ba4cd30164",
"metadata": {},
"source": [
"### Generic Loader\n",
"\n",
"LangChain has a `GenericLoader` abstraction which composes a `BlobLoader` with a `BaseBlobParser`.\n",
"\n",
"`GenericLoader` is meant to provide standardized classmethods that make it easy to use existing `BlobLoader` implementations. At the moment, the `FileSystemBlobLoader` and `CloudBlobLoader` are supported. See example below:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "5dfb2be02fe662c5",
"metadata": {
"tags": []
"ExecuteTime": {
"end_time": "2025-04-21T08:50:16.244917Z",
"start_time": "2025-04-21T08:50:15.527562Z"
}
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "5f1f6810a71a4909ac9fe1e8f8cb9e0a",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/8 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 7/7 [00:00<00:00, 1224.82it/s]"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='# Microsoft Office\\n' metadata={'line_number': 1, 'source': 'office_file.mdx'}\n",
"page_content='\\n' metadata={'line_number': 2, 'source': 'office_file.mdx'}\n",
"page_content='>[The Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS.\\n' metadata={'line_number': 3, 'source': 'office_file.mdx'}\n",
"page_content='\\n' metadata={'line_number': 4, 'source': 'office_file.mdx'}\n",
"page_content='This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a document format that we can use downstream.\\n' metadata={'line_number': 5, 'source': 'office_file.mdx'}\n",
"page_content='# Text embedding models\n",
"' metadata={'line_number': 1, 'source': 'embed_text.mdx'}\n",
"page_content='\n",
"' metadata={'line_number': 2, 'source': 'embed_text.mdx'}\n",
"page_content=':::info\n",
"' metadata={'line_number': 3, 'source': 'embed_text.mdx'}\n",
"page_content='Head to [Integrations](/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers.\n",
"' metadata={'line_number': 4, 'source': 'embed_text.mdx'}\n",
"page_content=':::\n",
"' metadata={'line_number': 5, 'source': 'embed_text.mdx'}\n",
"... output truncated for demo purposes\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"from langchain_community.document_loaders.generic import GenericLoader\n",
"\n",
"loader = GenericLoader.from_filesystem(\n",
" path=\".\", glob=\"*.mdx\", show_progress=True, parser=MyParser()\n",
"generic_loader_filesystem = GenericLoader(\n",
" blob_loader=filesystem_blob_loader, blob_parser=parser\n",
")\n",
"\n",
"for idx, doc in enumerate(loader.lazy_load()):\n",
"for idx, doc in enumerate(generic_loader_filesystem.lazy_load()):\n",
" if idx < 5:\n",
" print(doc)\n",
"\n",
@@ -690,9 +826,13 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 28,
"id": "23633102-dc44-4fed-a4e1-8159489101c8",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:50:34.841862Z",
"start_time": "2025-04-21T08:50:34.838375Z"
},
"tags": []
},
"outputs": [],
@@ -709,37 +849,46 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 29,
"id": "dc95be85-4a29-4c6f-a260-08afa3c95538",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T08:50:34.901734Z",
"start_time": "2025-04-21T08:50:34.888098Z"
},
"tags": []
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "4320598ea3b44a52b1873e1c801db312",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/8 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 7/7 [00:00<00:00, 814.86it/s]"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='# Microsoft Office\\n' metadata={'line_number': 1, 'source': 'office_file.mdx'}\n",
"page_content='\\n' metadata={'line_number': 2, 'source': 'office_file.mdx'}\n",
"page_content='>[The Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS.\\n' metadata={'line_number': 3, 'source': 'office_file.mdx'}\n",
"page_content='\\n' metadata={'line_number': 4, 'source': 'office_file.mdx'}\n",
"page_content='This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a document format that we can use downstream.\\n' metadata={'line_number': 5, 'source': 'office_file.mdx'}\n",
"page_content='# Text embedding models\n",
"' metadata={'line_number': 1, 'source': 'embed_text.mdx'}\n",
"page_content='\n",
"' metadata={'line_number': 2, 'source': 'embed_text.mdx'}\n",
"page_content=':::info\n",
"' metadata={'line_number': 3, 'source': 'embed_text.mdx'}\n",
"page_content='Head to [Integrations](/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers.\n",
"' metadata={'line_number': 4, 'source': 'embed_text.mdx'}\n",
"page_content=':::\n",
"' metadata={'line_number': 5, 'source': 'embed_text.mdx'}\n",
"... output truncated for demo purposes\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
@@ -769,7 +918,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -162,7 +162,7 @@
"\n",
"table_chain = prompt | llm_with_tools | output_parser\n",
"\n",
"table_chain.invoke({\"input\": \"What are all the genres of Alanis Morisette songs\"})"
"table_chain.invoke({\"input\": \"What are all the genres of Alanis Morissette songs\"})"
]
},
{
@@ -206,7 +206,7 @@
")\n",
"\n",
"category_chain = prompt | llm_with_tools | output_parser\n",
"category_chain.invoke({\"input\": \"What are all the genres of Alanis Morisette songs\"})"
"category_chain.invoke({\"input\": \"What are all the genres of Alanis Morissette songs\"})"
]
},
{
@@ -261,7 +261,7 @@
"\n",
"\n",
"table_chain = category_chain | get_tables\n",
"table_chain.invoke({\"input\": \"What are all the genres of Alanis Morisette songs\"})"
"table_chain.invoke({\"input\": \"What are all the genres of Alanis Morissette songs\"})"
]
},
{
@@ -313,7 +313,7 @@
],
"source": [
"query = full_chain.invoke(\n",
" {\"question\": \"What are all the genres of Alanis Morisette songs\"}\n",
" {\"question\": \"What are all the genres of Alanis Morissette songs\"}\n",
")\n",
"print(query)"
]

View File

@@ -83,21 +83,28 @@ agent_executor.run("how many letters in the word educa?", callbacks=[handler])
Another example:
```python
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain_openai import OpenAI
from langchain_community.callbacks.llmonitor_callback import LLMonitorCallbackHandler
import os
from langchain_community.agent_toolkits.load_tools import load_tools
from langchain_community.callbacks.llmonitor_callback import LLMonitorCallbackHandler
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
os.environ["LLMONITOR_APP_ID"] = ""
os.environ["OPENAI_API_KEY"] = ""
os.environ["SERPAPI_API_KEY"] = ""
handler = LLMonitorCallbackHandler()
llm = OpenAI(temperature=0)
llm = ChatOpenAI(temperature=0, callbacks=[handler])
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, metadata={ "agent_name": "GirlfriendAgeFinder" }) # <- recommended, assign a custom name
agent = create_react_agent("openai:gpt-4.1-mini", tools)
agent.run(
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
callbacks=[handler],
)
input_message = {
"role": "user",
"content": "What's the weather in SF?",
}
agent.invoke({"messages": [input_message]})
```
## User Tracking
@@ -110,7 +117,7 @@ with identify("user-123"):
llm.invoke("Tell me a joke")
with identify("user-456", user_props={"email": "user456@test.com"}):
agent.run("Who is Leo DiCaprio's girlfriend?")
agent.invoke(...)
```
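The `identify` helper above is a context manager that tags every call made inside its block with a user. As a rough stdlib sketch of that pattern (hypothetical, not LLMonitor's actual implementation), a "current user" can be scoped with `contextvars`:

```python
import contextvars
from contextlib import contextmanager

# Hypothetical sketch in the spirit of LLMonitor's `identify`;
# the real handler attaches the user to outgoing telemetry events.
_current_user = contextvars.ContextVar("current_user", default=None)

@contextmanager
def identify(user_id, user_props=None):
    token = _current_user.set({"user_id": user_id, "props": user_props or {}})
    try:
        yield
    finally:
        _current_user.reset(token)

def tracked_call(prompt):
    # Stand-in for an instrumented LLM call: records who made it.
    return {"prompt": prompt, "user": _current_user.get()}

with identify("user-123"):
    event = tracked_call("Tell me a joke")
print(event["user"]["user_id"])   # user-123
print(tracked_call("hi")["user"])  # None outside the block
```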
## Support

File diff suppressed because it is too large

View File

@@ -41,7 +41,7 @@
"### Credentials\n",
"\n",
"\n",
"Head to https://www.cloudflare.com/developer-platform/products/workers-ai/ to sign up to CloudflareWorkersAI and generate an API key. Once you've done this set the CF_API_KEY environment variable and the CF_ACCOUNT_ID environment variable:"
"Head to https://www.cloudflare.com/developer-platform/products/workers-ai/ to sign up to CloudflareWorkersAI and generate an API key. Once you've done this set the CF_AI_API_KEY environment variable and the CF_ACCOUNT_ID environment variable:"
]
},
{
@@ -56,8 +56,8 @@
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"CF_API_KEY\"):\n",
" os.environ[\"CF_API_KEY\"] = getpass.getpass(\n",
"if not os.getenv(\"CF_AI_API_KEY\"):\n",
" os.environ[\"CF_AI_API_KEY\"] = getpass.getpass(\n",
" \"Enter your CloudflareWorkersAI API key: \"\n",
" )\n",
"\n",

File diff suppressed because one or more lines are too long

View File

@@ -8,17 +8,19 @@
Install the AVS Python SDK and AVS langchain vector store:
```bash
pip install aerospike-vector-search langchain-community
pip install aerospike-vector-search langchain-aerospike
```
See the documentation for the Ptyhon SDK [here](https://aerospike-vector-search-python-client.readthedocs.io/en/latest/index.html).
The documentation for the AVS langchain vector store is [here](https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.aerospike.Aerospike.html).
See the documentation for the Python SDK [here](https://aerospike-vector-search-python-client.readthedocs.io/en/latest/index.html).
The documentation for the AVS langchain vector store is [here](https://langchain-aerospike.readthedocs.io/en/latest/).
## Vector Store
To import this vectorstore:
```python
from langchain_community.vectorstores import Aerospike
from langchain_aerospike.vectorstores import Aerospike
```
See a usage example [here](https://python.langchain.com/docs/integrations/vectorstores/aerospike/).

View File

@@ -0,0 +1,34 @@
# Bright Data
[Bright Data](https://brightdata.com) is a web data platform that provides tools for web scraping, SERP collection, and accessing geo-restricted content.
Bright Data allows developers to extract structured data from websites, perform search engine queries, and access content that might be otherwise blocked or geo-restricted. The platform is designed to help overcome common web scraping challenges including anti-bot systems, CAPTCHAs, and IP blocks.
## Installation and Setup
```bash
pip install langchain-brightdata
```
You'll need to set up your Bright Data API key:
```python
import os
os.environ["BRIGHT_DATA_API_KEY"] = "your-api-key"
```
Or you can pass it directly when initializing tools:
```python
from langchain_brightdata import BrightDataSERP
tool = BrightDataSERP(bright_data_api_key="your-api-key")
```
## Tools
The Bright Data integration provides several tools:
- [BrightDataSERP](/docs/integrations/tools/brightdata_serp) - Search engine results collection with geo-targeting
- [BrightDataUnlocker](/docs/integrations/tools/brightdata_unlocker) - Access ANY public website that might be geo-restricted or bot-protected
- [BrightDataWebScraperAPI](/docs/integrations/tools/brightdata-webscraperapi) - Extract structured data from 100+ popular domains, e.g. Amazon product details and LinkedIn profiles
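Both setup styles above follow the same precedence rule: a key passed directly to the tool wins over the environment variable. A tiny, hypothetical stdlib sketch of that resolution (the real tools handle this internally):

```python
import os

def resolve_api_key(explicit_key=None, env_var="BRIGHT_DATA_API_KEY"):
    """Hypothetical helper mirroring the two setup options above:
    prefer an explicitly passed key, else fall back to the environment."""
    key = explicit_key or os.environ.get(env_var)
    if not key:
        raise ValueError(f"Set {env_var} or pass the key explicitly")
    return key

os.environ["BRIGHT_DATA_API_KEY"] = "your-api-key"
print(resolve_api_key())          # falls back to the environment variable
print(resolve_api_key("direct"))  # an explicit key wins
```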

View File

@@ -32,7 +32,7 @@ For a detailed walkthrough of this wrapper, see [this notebook](/docs/integratio
You can also load this wrapper as a Tool to use with an Agent:
```python
from langchain.agents import load_tools
from langchain_community.agent_toolkits.load_tools import load_tools
tools = load_tools(["dataforseo-api-search"])
```

View File

@@ -1,8 +1,8 @@
# Doctran
>[Doctran](https://github.com/psychic-api/doctran) is a python package. It uses LLMs and open-source
> NLP libraries to transform raw text into clean, structured, information-dense documents
> that are optimized for vector space retrieval. You can think of `Doctran` as a black box where
>[Doctran](https://github.com/psychic-api/doctran) is a python package. It uses LLMs and open-source
> NLP libraries to transform raw text into clean, structured, information-dense documents
> that are optimized for vector space retrieval. You can think of `Doctran` as a black box where
> messy strings go in and nice, clean, labelled strings come out.
@@ -19,19 +19,19 @@ pip install doctran
See a [usage example for DoctranQATransformer](/docs/integrations/document_transformers/doctran_interrogate_document).
```python
from langchain_community.document_loaders import DoctranQATransformer
from langchain_community.document_transformers import DoctranQATransformer
```
### Property Extractor
See a [usage example for DoctranPropertyExtractor](/docs/integrations/document_transformers/doctran_extract_properties).
```python
from langchain_community.document_loaders import DoctranPropertyExtractor
from langchain_community.document_transformers import DoctranPropertyExtractor
```
### Document Translator
See a [usage example for DoctranTextTranslator](/docs/integrations/document_transformers/doctran_translate_document).
```python
from langchain_community.document_loaders import DoctranTextTranslator
from langchain_community.document_transformers import DoctranTextTranslator
```

View File

@@ -1,6 +1,6 @@
# Friendli AI
> [FriendliAI](https://friendli.ai/) enhances AI application performance and optimizes
> [FriendliAI](https://friendli.ai/) enhances AI application performance and optimizes
> cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads.
## Installation and setup
@@ -11,8 +11,8 @@ Install the `friendli-client` python package.
pip install -U langchain_community friendli-client
```
Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token,
and set it as the `FRIENDLI_TOKEN` environment variabzle.
Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token,
and set it as the `FRIENDLI_TOKEN` environment variable.
## Chat models

View File

@@ -0,0 +1,56 @@
# Gel
[Gel](https://www.geldata.com/) is a powerful data platform built on top of PostgreSQL.
- Think in objects and graphs instead of tables and JOINs.
- Use the advanced Python SDK, integrated GUI, migrations engine, Auth and AI layers, and much more.
- Run locally, remotely, or in a [fully managed cloud](https://www.geldata.com/cloud).
## Installation
```bash
pip install langchain-gel
```
## Setup
1. Run `gel project init`
2. Edit the schema. You need the following types to use the LangChain vectorstore:
```gel
using extension pgvector;
module default {
scalar type EmbeddingVector extending ext::pgvector::vector<1536>;
type Record {
required collection: str;
text: str;
embedding: EmbeddingVector;
external_id: str {
constraint exclusive;
};
metadata: json;
index ext::pgvector::hnsw_cosine(m := 16, ef_construction := 128)
on (.embedding)
}
}
```
> Note: this is the minimal setup. Feel free to add as many types, properties and links as you want!
> Learn more about taking advantage of Gel's schema by reading the [docs](https://docs.geldata.com/learn/schema).
3. Run the migration: `gel migration create && gel migrate`.
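The `hnsw_cosine` index in the schema above exists to speed up cosine-similarity ranking over `embedding`. To illustrate the metric itself (just the math, no Gel or HNSW involved), a minimal stdlib sketch:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dim "embeddings" standing in for the 1536-dim vectors in the schema.
records = {
    "cats": [1.0, 0.0, 0.2],
    "dogs": [0.9, 0.1, 0.3],
    "stocks": [0.0, 1.0, 0.0],
}
query = [1.0, 0.05, 0.25]
ranked = sorted(records, key=lambda k: cosine_similarity(query, records[k]), reverse=True)
print(ranked[0])  # cats
```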
## Usage
```python
from langchain_gel import GelVectorStore
vector_store = GelVectorStore(
embeddings=embeddings,
)
```
See the full usage example [here](/docs/integrations/vectorstores/gel).

View File

@@ -27,7 +27,7 @@ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integ
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
from langchain_community.agent_toolkits.load_tools import load_tools
tools = load_tools(["golden-query"])
```

View File

@@ -880,7 +880,7 @@ from langchain_community.tools import GoogleSearchRun, GoogleSearchResults
Agent Loading:
```python
from langchain.agents import load_tools
from langchain_community.agent_toolkits.load_tools import load_tools
tools = load_tools(["google-search"])
```
@@ -1313,7 +1313,7 @@ from langchain_community.tools import GoogleSearchRun, GoogleSearchResults
Agent Loading:
```python
from langchain.agents import load_tools
from langchain_community.agent_toolkits.load_tools import load_tools
tools = load_tools(["google-search"])
```

View File

@@ -67,7 +67,7 @@ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integ
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
from langchain_community.agent_toolkits.load_tools import load_tools
tools = load_tools(["google-serper"])
```

View File

@@ -21,13 +21,3 @@ To import this vectorstore:
from langchain_milvus import Milvus
```
## Retrievers
See a [usage example](/docs/integrations/retrievers/milvus_hybrid_search).
To import this vectorstore:
```python
from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever
from langchain_milvus.utils.sparse import BM25SparseEmbedding
```

View File

@@ -37,8 +37,12 @@ You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["openweathermap-api"])
import os
from langchain_community.utilities import OpenWeatherMapAPIWrapper
os.environ["OPENWEATHERMAP_API_KEY"] = ""
weather = OpenWeatherMapAPIWrapper()
tools = [weather.run]
```
For more information on tools, see [this page](/docs/how_to/tools_builtin).

View File

@@ -73,7 +73,7 @@ You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
from langchain_community.agent_toolkits.load_tools import load_tools
tools = load_tools(["searchapi"])
```

View File

@@ -52,7 +52,7 @@ You can also load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
from langchain_community.agent_toolkits.load_tools import load_tools
tools = load_tools(["searx-search"],
searx_host="http://localhost:8888",
engines=["github"])

View File

@@ -24,7 +24,7 @@ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integ
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
from langchain_community.agent_toolkits.load_tools import load_tools
tools = load_tools(["serpapi"])
```

View File

@@ -29,7 +29,7 @@ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integ
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
from langchain_community.agent_toolkits.load_tools import load_tools
tools = load_tools(["stackexchange"])
```

View File

@@ -32,7 +32,7 @@ For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integ
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
from langchain_community.agent_toolkits.load_tools import load_tools
tools = load_tools(["wolfram-alpha"])
```
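The hunks above all migrate `load_tools` from `langchain.agents` to `langchain_community.agent_toolkits.load_tools`. Code that must run against both old and new releases can resolve an attribute from a list of candidate module paths; the sketch below is a generic stdlib-only helper (not part of LangChain) illustrating that fallback pattern.

```python
import importlib

def resolve_attr(candidates):
    """Return the first attribute found among (module_path, attr_name) pairs.

    Useful when an object moves between packages across releases, e.g.
    load_tools moving from langchain.agents to
    langchain_community.agent_toolkits.load_tools.
    """
    for module_path, attr_name in candidates:
        try:
            module = importlib.import_module(module_path)
            return getattr(module, attr_name)
        except (ImportError, AttributeError):
            continue  # try the next candidate location
    raise ImportError(f"no candidate location provided the attribute: {candidates}")

# Example with stdlib modules: the first candidate does not exist,
# so resolution falls through to math.sqrt.
sqrt = resolve_attr([("nonexistent_module", "sqrt"), ("math", "sqrt")])
```

With LangChain installed, the same helper could be called with the new and legacy `load_tools` locations as candidates.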

View File

@@ -1,639 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Milvus Hybrid Search\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Milvus Hybrid Search Retriever\n",
"\n",
"> [Milvus](https://milvus.io/docs) is an open-source vector database built to power embedding similarity search and AI applications. Milvus makes unstructured data search more accessible, and provides a consistent user experience regardless of the deployment environment.\n",
"\n",
"This will help you get started with the Milvus Hybrid Search [retriever](/docs/concepts/retrievers), which combines the strengths of both dense and sparse vector search. For detailed documentation of all `MilvusCollectionHybridSearchRetriever` features and configurations, head to the [API reference](https://python.langchain.com/api_reference/milvus/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html).\n",
"\n",
"See also the Milvus Multi-Vector Search [docs](https://milvus.io/docs/multi-vector-search.md).\n",
"\n",
"### Integration details\n",
"\n",
"import {ItemTable} from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"document_retrievers\" item=\"MilvusCollectionHybridSearchRetriever\" />\n",
"\n",
"## Setup\n",
"\n",
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting the cell below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### Installation\n",
"\n",
"This retriever lives in the `langchain-milvus` package. This guide requires the following dependencies:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pymilvus[model] langchain-milvus langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever\n",
"from langchain_milvus.utils.sparse import BM25SparseEmbedding\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from pymilvus import (\n",
" Collection,\n",
" CollectionSchema,\n",
" DataType,\n",
" FieldSchema,\n",
" WeightedRanker,\n",
" connections,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Start the Milvus service\n",
"\n",
"Please refer to the [Milvus documentation](https://milvus.io/docs/install_standalone-docker.md) to start the Milvus service.\n",
"\n",
"After starting Milvus, specify your Milvus connection URI."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"CONNECTION_URI = \"http://localhost:19530\""
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### Prepare OpenAI API Key\n",
"\n",
"Please refer to the [OpenAI documentation](https://platform.openai.com/account/api-keys) to obtain your OpenAI API key, and set it as an environment variable.\n",
"\n",
"```shell\n",
"export OPENAI_API_KEY=<your_api_key>\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare dense and sparse embedding functions\n",
"\n",
"Let's make up 10 short fictional novel descriptions. In production, this would typically be a much larger body of text."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"texts = [\n",
" \"In 'The Whispering Walls' by Ava Moreno, a young journalist named Sophia uncovers a decades-old conspiracy hidden within the crumbling walls of an ancient mansion, where the whispers of the past threaten to destroy her own sanity.\",\n",
" \"In 'The Last Refuge' by Ethan Blackwood, a group of survivors must band together to escape a post-apocalyptic wasteland, where the last remnants of humanity cling to life in a desperate bid for survival.\",\n",
" \"In 'The Memory Thief' by Lila Rose, a charismatic thief with the ability to steal and manipulate memories is hired by a mysterious client to pull off a daring heist, but soon finds themselves trapped in a web of deceit and betrayal.\",\n",
" \"In 'The City of Echoes' by Julian Saint Clair, a brilliant detective must navigate a labyrinthine metropolis where time is currency, and the rich can live forever, but at a terrible cost to the poor.\",\n",
" \"In 'The Starlight Serenade' by Ruby Flynn, a shy astronomer discovers a mysterious melody emanating from a distant star, which leads her on a journey to uncover the secrets of the universe and her own heart.\",\n",
" \"In 'The Shadow Weaver' by Piper Redding, a young orphan discovers she has the ability to weave powerful illusions, but soon finds herself at the center of a deadly game of cat and mouse between rival factions vying for control of the mystical arts.\",\n",
" \"In 'The Lost Expedition' by Caspian Grey, a team of explorers ventures into the heart of the Amazon rainforest in search of a lost city, but soon finds themselves hunted by a ruthless treasure hunter and the treacherous jungle itself.\",\n",
" \"In 'The Clockwork Kingdom' by Augusta Wynter, a brilliant inventor discovers a hidden world of clockwork machines and ancient magic, where a rebellion is brewing against the tyrannical ruler of the land.\",\n",
" \"In 'The Phantom Pilgrim' by Rowan Welles, a charismatic smuggler is hired by a mysterious organization to transport a valuable artifact across a war-torn continent, but soon finds themselves pursued by deadly assassins and rival factions.\",\n",
" \"In 'The Dreamwalker's Journey' by Lyra Snow, a young dreamwalker discovers she has the ability to enter people's dreams, but soon finds herself trapped in a surreal world of nightmares and illusions, where the boundaries between reality and fantasy blur.\",\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use the [OpenAI Embedding](https://platform.openai.com/docs/guides/embeddings) to generate dense vectors, and the [BM25 algorithm](https://en.wikipedia.org/wiki/Okapi_BM25) to generate sparse vectors.\n",
"\n",
"Initialize dense embedding function and get dimension"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1536"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dense_embedding_func = OpenAIEmbeddings()\n",
"dense_dim = len(dense_embedding_func.embed_query(texts[1]))\n",
"dense_dim"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Initialize sparse embedding function.\n",
"\n",
"Note that the output of the sparse embedding is a sparse vector, which maps the keyword indices of the input text to their weights."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{0: 0.4270424944042204,\n",
" 21: 1.845826690498331,\n",
" 22: 1.845826690498331,\n",
" 23: 1.845826690498331,\n",
" 24: 1.845826690498331,\n",
" 25: 1.845826690498331,\n",
" 26: 1.845826690498331,\n",
" 27: 1.2237754316221157,\n",
" 28: 1.845826690498331,\n",
" 29: 1.845826690498331,\n",
" 30: 1.845826690498331,\n",
" 31: 1.845826690498331,\n",
" 32: 1.845826690498331,\n",
" 33: 1.845826690498331,\n",
" 34: 1.845826690498331,\n",
" 35: 1.845826690498331,\n",
" 36: 1.845826690498331,\n",
" 37: 1.845826690498331,\n",
" 38: 1.845826690498331,\n",
" 39: 1.845826690498331}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sparse_embedding_func = BM25SparseEmbedding(corpus=texts)\n",
"sparse_embedding_func.embed_query(texts[1])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Milvus Collection and load data\n",
"\n",
"Initialize connection URI and establish connection"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"connections.connect(uri=CONNECTION_URI)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Define field names and their data types"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"pk_field = \"doc_id\"\n",
"dense_field = \"dense_vector\"\n",
"sparse_field = \"sparse_vector\"\n",
"text_field = \"text\"\n",
"fields = [\n",
" FieldSchema(\n",
" name=pk_field,\n",
" dtype=DataType.VARCHAR,\n",
" is_primary=True,\n",
" auto_id=True,\n",
" max_length=100,\n",
" ),\n",
" FieldSchema(name=dense_field, dtype=DataType.FLOAT_VECTOR, dim=dense_dim),\n",
" FieldSchema(name=sparse_field, dtype=DataType.SPARSE_FLOAT_VECTOR),\n",
" FieldSchema(name=text_field, dtype=DataType.VARCHAR, max_length=65_535),\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a collection with the defined schema"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"schema = CollectionSchema(fields=fields, enable_dynamic_field=False)\n",
"collection = Collection(\n",
" name=\"IntroductionToTheNovels\", schema=schema, consistency_level=\"Strong\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Define index for dense and sparse vectors"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"dense_index = {\"index_type\": \"FLAT\", \"metric_type\": \"IP\"}\n",
"collection.create_index(\"dense_vector\", dense_index)\n",
"sparse_index = {\"index_type\": \"SPARSE_INVERTED_INDEX\", \"metric_type\": \"IP\"}\n",
"collection.create_index(\"sparse_vector\", sparse_index)\n",
"collection.flush()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Insert entities into the collection and load the collection"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"entities = []\n",
"for text in texts:\n",
" entity = {\n",
" dense_field: dense_embedding_func.embed_documents([text])[0],\n",
" sparse_field: sparse_embedding_func.embed_documents([text])[0],\n",
" text_field: text,\n",
" }\n",
" entities.append(entity)\n",
"collection.insert(entities)\n",
"collection.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our retriever, defining search parameters for sparse and dense fields:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sparse_search_params = {\"metric_type\": \"IP\"}\n",
"dense_search_params = {\"metric_type\": \"IP\", \"params\": {}}\n",
"retriever = MilvusCollectionHybridSearchRetriever(\n",
" collection=collection,\n",
" rerank=WeightedRanker(0.5, 0.5),\n",
" anns_fields=[dense_field, sparse_field],\n",
" field_embeddings=[dense_embedding_func, sparse_embedding_func],\n",
" field_search_params=[dense_search_params, sparse_search_params],\n",
" top_k=3,\n",
" text_field=text_field,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"In this retriever's parameters, we use a dense embedding and a sparse embedding to perform hybrid search over the two vector fields of the collection, and use WeightedRanker for reranking. Finally, the top 3 documents are returned."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content=\"In 'The Lost Expedition' by Caspian Grey, a team of explorers ventures into the heart of the Amazon rainforest in search of a lost city, but soon finds themselves hunted by a ruthless treasure hunter and the treacherous jungle itself.\", metadata={'doc_id': '449281835035545843'}),\n",
" Document(page_content=\"In 'The Phantom Pilgrim' by Rowan Welles, a charismatic smuggler is hired by a mysterious organization to transport a valuable artifact across a war-torn continent, but soon finds themselves pursued by deadly assassins and rival factions.\", metadata={'doc_id': '449281835035545845'}),\n",
" Document(page_content=\"In 'The Dreamwalker's Journey' by Lyra Snow, a young dreamwalker discovers she has the ability to enter people's dreams, but soon finds herself trapped in a surreal world of nightmares and illusions, where the boundaries between reality and fantasy blur.\", metadata={'doc_id': '449281835035545846'})]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"What are the stories about ventures?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within a chain\n",
"\n",
"Initialize ChatOpenAI and define a prompt template"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"llm = ChatOpenAI()\n",
"\n",
"PROMPT_TEMPLATE = \"\"\"\n",
"Human: You are an AI assistant that provides answers to questions using fact-based and statistical information when possible.\n",
"Use the following pieces of information to provide a concise answer to the question enclosed in <question> tags.\n",
"\n",
"<context>\n",
"{context}\n",
"</context>\n",
"\n",
"<question>\n",
"{question}\n",
"</question>\n",
"\n",
"Assistant:\"\"\"\n",
"\n",
"prompt = PromptTemplate(\n",
" template=PROMPT_TEMPLATE, input_variables=[\"context\", \"question\"]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Define a function for formatting documents"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Define a chain using the retriever and other components"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"rag_chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Perform a query using the defined chain"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"data": {
"text/plain": [
"\"Lila Rose has written 'The Memory Thief,' which follows a charismatic thief with the ability to steal and manipulate memories as they navigate a daring heist and a web of deceit and betrayal.\""
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rag_chain.invoke(\"What novels has Lila written and what are their contents?\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Drop the collection"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"collection.drop()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `MilvusCollectionHybridSearchRetriever` features and configurations head to the [API reference](https://python.langchain.com/api_reference/milvus/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
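The retriever in the notebook above reranks with `WeightedRanker(0.5, 0.5)`. Milvus performs this fusion server-side; the following stdlib-only sketch (assuming each search returns `(doc_id, score)` pairs, higher is better) shows the weighted-sum idea:

```python
def weighted_rerank(dense_hits, sparse_hits, w_dense=0.5, w_sparse=0.5, top_k=3):
    """Fuse two ranked result lists by weighted score sum and keep top_k."""
    fused = {}
    for weight, hits in ((w_dense, dense_hits), (w_sparse, sparse_hits)):
        for doc_id, score in hits:
            fused[doc_id] = fused.get(doc_id, 0.0) + weight * score
    # Sort by fused score, best first
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

reranked = weighted_rerank(
    dense_hits=[("doc_a", 0.9), ("doc_b", 0.5)],
    sparse_hits=[("doc_b", 0.8), ("doc_c", 0.7)],
)
```

A document that scores moderately on both fields can outrank one that scores well on only one, which is the point of hybrid search.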

View File

@@ -12,24 +12,36 @@
"\n",
">[Cloudflare Workers AI](https://developers.cloudflare.com/workers-ai/) allows you to run machine learning models, on the `Cloudflare` network, from your code via REST API.\n",
"\n",
">[Cloudflare AI document](https://developers.cloudflare.com/workers-ai/models/text-embeddings/) listed all text embeddings models available.\n",
">[Workers AI Developer Docs](https://developers.cloudflare.com/workers-ai/models/text-embeddings/) lists all text embeddings models available.\n",
"\n",
"## Setting up\n",
"\n",
"Both Cloudflare account ID and API token are required. Find how to obtain them from [this document](https://developers.cloudflare.com/workers-ai/get-started/rest-api/).\n"
"Both a Cloudflare Account ID and Workers AI API token are required. Find how to obtain them from [this document](https://developers.cloudflare.com/workers-ai/get-started/rest-api/).\n",
"\n",
"You can pass these parameters explicitly or define as environmental variables.\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 11,
"id": "f60023b8",
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2025-05-13T06:00:30.121204Z",
"start_time": "2025-05-13T06:00:30.117936Z"
}
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"my_account_id = getpass.getpass(\"Enter your Cloudflare account ID:\\n\\n\")\n",
"my_api_token = getpass.getpass(\"Enter your Cloudflare API token:\\n\\n\")"
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv(\".env\")\n",
"\n",
"cf_acct_id = os.getenv(\"CF_ACCOUNT_ID\")\n",
"\n",
"cf_ai_token = os.getenv(\"CF_AI_API_TOKEN\")"
]
},
{
@@ -42,9 +54,14 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 12,
"id": "92c5b61e",
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2025-05-13T06:00:31.224996Z",
"start_time": "2025-05-13T06:00:31.222981Z"
}
},
"outputs": [],
"source": [
"from langchain_cloudflare.embeddings import (\n",
@@ -54,25 +71,28 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 13,
"id": "062547b9",
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2025-05-13T06:00:32.515031Z",
"start_time": "2025-05-13T06:00:31.798590Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"(384, [-0.033627357333898544, 0.03982774540781975, 0.03559349477291107])"
]
"text/plain": "(384, [-0.033660888671875, 0.039764404296875, 0.03558349609375])"
},
"execution_count": 3,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"embeddings = CloudflareWorkersAIEmbeddings(\n",
" account_id=my_account_id,\n",
" api_token=my_api_token,\n",
" account_id=cf_acct_id,\n",
" api_token=cf_ai_token,\n",
" model_name=\"@cf/baai/bge-small-en-v1.5\",\n",
")\n",
"# single string embeddings\n",
@@ -82,17 +102,20 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 14,
"id": "e1dcc4bd",
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2025-05-13T06:00:33.106160Z",
"start_time": "2025-05-13T06:00:32.847232Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"(3, 384)"
]
"text/plain": "(3, 384)"
},
"execution_count": 4,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@@ -102,14 +125,6 @@
"batch_query_result = embeddings.embed_documents([\"test1\", \"test2\", \"test3\"])\n",
"len(batch_query_result), len(batch_query_result[0])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "52de8b88",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -0,0 +1,292 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# BrightDataWebScraperAPI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[Bright Data](https://brightdata.com/) provides a powerful Web Scraper API that allows you to extract structured data from 100+ popular domains, including Amazon product details, LinkedIn profiles, and more, making it particularly useful for AI agents requiring reliable structured web data feeds."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Integration details\n",
"\n",
"|Class|Package|Serializable|JS support|Package latest|\n",
"|:--|:--|:-:|:-:|:-:|\n",
"|[BrightDataWebScraperAPI](https://pypi.org/project/langchain-brightdata/)|[langchain-brightdata](https://pypi.org/project/langchain-brightdata/)|✅|❌|![PyPI - Version](https://img.shields.io/pypi/v/langchain-brightdata?style=flat-square&label=%20)|\n",
"\n",
"### Tool features\n",
"\n",
"|Native async|Returns artifact|Return data|Pricing|\n",
"|:-:|:-:|:--|:-:|\n",
"|❌|❌|Structured data from websites (Amazon products, LinkedIn profiles, etc.)|Requires Bright Data account|\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"The integration lives in the `langchain-brightdata` package.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"pip install langchain-brightdata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You'll need a Bright Data API key to use this tool. You can set it as an environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"BRIGHT_DATA_API_KEY\"] = \"your-api-key\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Or pass it directly when initializing the tool:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataWebScraperAPI\n",
"\n",
"scraper_tool = BrightDataWebScraperAPI(bright_data_api_key=\"your-api-key\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Here we show how to instantiate the BrightDataWebScraperAPI tool. This tool lets you extract structured data from various websites, including Amazon product details and LinkedIn profiles, using Bright Data's Dataset API.\n",
"\n",
"The tool accepts various parameters during instantiation:\n",
"\n",
"- `bright_data_api_key` (required, str): Your Bright Data API key for authentication.\n",
"- `dataset_mapping` (optional, Dict[str, str]): A dictionary mapping dataset types to their corresponding Bright Data dataset IDs. The default mapping includes:\n",
" - \"amazon_product\": \"gd_l7q7dkf244hwjntr0\"\n",
" - \"amazon_product_reviews\": \"gd_le8e811kzy4ggddlq\"\n",
" - \"linkedin_person_profile\": \"gd_l1viktl72bvl7bjuj0\"\n",
" - \"linkedin_company_profile\": \"gd_l1vikfnt1wgvvqz95w\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation\n",
"\n",
"### Basic Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataWebScraperAPI\n",
"\n",
"# Initialize the tool\n",
"scraper_tool = BrightDataWebScraperAPI(\n",
" bright_data_api_key=\"your-api-key\" # Optional if set in environment variables\n",
")\n",
"\n",
"# Extract Amazon product data\n",
"results = scraper_tool.invoke(\n",
" {\"url\": \"https://www.amazon.com/dp/B08L5TNJHG\", \"dataset_type\": \"amazon_product\"}\n",
")\n",
"\n",
"print(results)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Advanced Usage with Parameters"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataWebScraperAPI\n",
"\n",
"# Initialize with default parameters\n",
"scraper_tool = BrightDataWebScraperAPI(bright_data_api_key=\"your-api-key\")\n",
"\n",
"# Extract Amazon product data with location-specific pricing\n",
"results = scraper_tool.invoke(\n",
" {\n",
" \"url\": \"https://www.amazon.com/dp/B08L5TNJHG\",\n",
" \"dataset_type\": \"amazon_product\",\n",
" \"zipcode\": \"10001\", # Get pricing for New York City\n",
" }\n",
")\n",
"\n",
"print(results)\n",
"\n",
"# Extract LinkedIn profile data\n",
"linkedin_results = scraper_tool.invoke(\n",
" {\n",
" \"url\": \"https://www.linkedin.com/in/satyanadella/\",\n",
" \"dataset_type\": \"linkedin_person_profile\",\n",
" }\n",
")\n",
"\n",
"print(linkedin_results)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Customization Options\n",
"\n",
"The BrightDataWebScraperAPI tool accepts several parameters for customization:\n",
"\n",
"|Parameter|Type|Description|\n",
"|:--|:--|:--|\n",
"|`url`|str|The URL to extract data from|\n",
"|`dataset_type`|str|Type of dataset to use (e.g., \"amazon_product\")|\n",
"|`zipcode`|str|Optional zipcode for location-specific data|\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Available Dataset Types\n",
"\n",
"The tool supports the following dataset types for structured data extraction:\n",
"\n",
"|Dataset Type|Description|\n",
"|:--|:--|\n",
"|`amazon_product`|Extract detailed Amazon product data|\n",
"|`amazon_product_reviews`|Extract Amazon product reviews|\n",
"|`linkedin_person_profile`|Extract LinkedIn person profile data|\n",
"|`linkedin_company_profile`|Extract LinkedIn company profile data|\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within an agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataWebScraperAPI\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"# Initialize the LLM\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\", google_api_key=\"your-api-key\")\n",
"\n",
"# Initialize the Bright Data Web Scraper API tool\n",
"scraper_tool = BrightDataWebScraperAPI(bright_data_api_key=\"your-api-key\")\n",
"\n",
"# Create the agent with the tool\n",
"agent = create_react_agent(llm, [scraper_tool])\n",
"\n",
"# Provide a user query\n",
"user_input = \"Scrape Amazon product data for https://www.amazon.com/dp/B0D2Q9397Y?th=1 in New York (zipcode 10001).\"\n",
"\n",
"# Stream the agent's step-by-step output\n",
"for step in agent.stream(\n",
" {\"messages\": user_input},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"- [Bright Data API Documentation](https://docs.brightdata.com/scraping-automation/web-scraper-api/overview)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
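The default `dataset_mapping` documented above pairs each `dataset_type` with a Bright Data dataset ID. A hypothetical sketch of how such a lookup might be resolved — illustration only; `resolve_dataset_id` is not part of `langchain-brightdata`:

```python
# Default mapping as documented in the notebook above
DEFAULT_DATASET_MAPPING = {
    "amazon_product": "gd_l7q7dkf244hwjntr0",
    "amazon_product_reviews": "gd_le8e811kzy4ggddlq",
    "linkedin_person_profile": "gd_l1viktl72bvl7bjuj0",
    "linkedin_company_profile": "gd_l1vikfnt1wgvvqz95w",
}

def resolve_dataset_id(dataset_type, mapping=None):
    """Resolve a dataset_type to a dataset ID, allowing user overrides."""
    mapping = {**DEFAULT_DATASET_MAPPING, **(mapping or {})}
    try:
        return mapping[dataset_type]
    except KeyError:
        raise ValueError(
            f"Unknown dataset_type {dataset_type!r}; "
            f"expected one of {sorted(mapping)}"
        ) from None
```

Merging a user-supplied mapping over the defaults mirrors how an optional `dataset_mapping` parameter can extend the built-in dataset types without replacing them.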

View File

@@ -0,0 +1,294 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a6f91f20",
"metadata": {},
"source": [
"# BrightDataSERP\n",
"\n",
"[Bright Data](https://brightdata.com/) provides a powerful SERP API that allows you to query search engines (Google, Bing, DuckDuckGo, Yandex) with geo-targeting and advanced customization options, particularly useful for AI agents requiring real-time web information.\n",
"\n",
"\n",
"## Overview\n",
"\n",
"### Integration details\n",
"\n",
"\n",
"|Class|Package|Serializable|JS support|Package latest|\n",
"|:--|:--|:-:|:-:|:-:|\n",
"|[BrightDataSERP](https://pypi.org/project/langchain-brightdata/)|[langchain-brightdata](https://pypi.org/project/langchain-brightdata/)|✅|❌|![PyPI - Version](https://img.shields.io/pypi/v/langchain-brightdata?style=flat-square&label=%20)|\n",
"\n",
"\n",
"### Tool features\n",
"\n",
"\n",
"|Native async|Returns artifact|Return data|Pricing|\n",
"|:-:|:-:|:--|:-:|\n",
"|❌|❌|Title, URL, snippet, position, and other search result data|Requires Bright Data account|\n",
"\n",
"\n",
"\n",
"## Setup\n",
"\n",
"The integration lives in the `langchain-brightdata` package."
]
},
{
"cell_type": "raw",
"id": "f85b4089",
"metadata": {},
"source": [
"pip install langchain-brightdata"
]
},
{
"cell_type": "markdown",
"id": "b15e9266",
"metadata": {},
"source": [
"### Credentials\n",
"\n",
"You'll need a Bright Data API key to use this tool. You can set it as an environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e0b178a2-8816-40ca-b57c-ccdd86dde9c9",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"BRIGHT_DATA_API_KEY\"] = \"your-api-key\""
]
},
{
"cell_type": "markdown",
"id": "bc5ab717-fd27-4c59-b912-bdd099541478",
"metadata": {},
"source": [
"Or pass it directly when initializing the tool:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a6c2f136-6367-4f1f-825d-ae741e1bf281",
"metadata": {},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataSERP\n",
"\n",
"serp_tool = BrightDataSERP(bright_data_api_key=\"your-api-key\")"
]
},
{
"cell_type": "markdown",
"id": "eed8cfcc",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Here we show how to instantiate the BrightDataSERP tool. This tool lets you perform search engine queries with various customization options, including geo-targeting, language preferences, device type simulation, and specific search types, using Bright Data's SERP API.\n",
"\n",
"The tool accepts various parameters during instantiation:\n",
"\n",
"- `bright_data_api_key` (required, str): Your Bright Data API key for authentication.\n",
"- `search_engine` (optional, str): Search engine to use for queries. Default is \"google\". Other options include \"bing\", \"yahoo\", \"yandex\", \"duckduckgo\", etc.\n",
"- `country` (optional, str): Two-letter country code for localized search results (e.g., \"us\", \"gb\", \"de\", \"jp\"). Default is \"us\".\n",
"- `language` (optional, str): Two-letter language code for the search results (e.g., \"en\", \"es\", \"fr\", \"de\"). Default is \"en\".\n",
"- `results_count` (optional, int): Number of search results to return. Default is 10. Maximum value is typically 100.\n",
"- `search_type` (optional, str): Type of search to perform. Options include:\n",
" - None (default): Regular web search\n",
" - \"isch\": Images search\n",
" - \"shop\": Shopping search\n",
" - \"nws\": News search\n",
" - \"jobs\": Jobs search\n",
"- `device_type` (optional, str): Device type to simulate for the search. Options include:\n",
" - None (default): Desktop device\n",
" - \"mobile\": Generic mobile device\n",
" - \"ios\": iOS device (iPhone)\n",
" - \"android\": Android device\n",
"- `parse_results` (optional, bool): Whether to return parsed JSON results. Default is False, which returns raw HTML response."
]
},
{
"cell_type": "markdown",
"id": "1c97218f-f366-479d-8bf7-fe9f2f6df73f",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "markdown",
"id": "902dc1fd",
"metadata": {},
"source": [
"### Basic Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b3ddfe9-ca79-494c-a7ab-1f56d9407a64",
"metadata": {},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataSERP\n",
"\n",
"# Initialize the tool\n",
"serp_tool = BrightDataSERP(\n",
" bright_data_api_key=\"your-api-key\" # Optional if set in environment variables\n",
")\n",
"\n",
"# Run a basic search\n",
"results = serp_tool.invoke(\"latest AI research papers\")\n",
"\n",
"print(results)"
]
},
{
"cell_type": "markdown",
"id": "74147a1a",
"metadata": {},
"source": [
"### Advanced Usage with Parameters"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "65310a8b-eb0c-4d9e-a618-4f4abe2414fc",
"metadata": {},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataSERP\n",
"\n",
"# Initialize with default parameters\n",
"serp_tool = BrightDataSERP(\n",
" bright_data_api_key=\"your-api-key\",\n",
" search_engine=\"google\", # Default\n",
" country=\"us\", # Default\n",
" language=\"en\", # Default\n",
" results_count=10, # Default\n",
" parse_results=True, # Get structured JSON results\n",
")\n",
"\n",
"# Use with specific parameters for this search\n",
"results = serp_tool.invoke(\n",
" {\n",
" \"query\": \"best electric vehicles\",\n",
" \"country\": \"de\", # Get results as if searching from Germany\n",
" \"language\": \"de\", # Get results in German\n",
" \"search_type\": \"shop\", # Get shopping results\n",
" \"device_type\": \"mobile\", # Simulate a mobile device\n",
" \"results_count\": 15,\n",
" }\n",
")\n",
"\n",
"print(results)"
]
},
{
"cell_type": "markdown",
"id": "d6e73897",
"metadata": {},
"source": [
"## Customization Options\n",
"\n",
"The BrightDataSERP tool accepts several parameters for customization:\n",
"\n",
"|Parameter|Type|Description|\n",
"|:--|:--|:--|\n",
"|`query`|str|The search query to perform|\n",
"|`search_engine`|str|Search engine to use (default: \"google\")|\n",
"|`country`|str|Two-letter country code for localized results (default: \"us\")|\n",
"|`language`|str|Two-letter language code (default: \"en\")|\n",
"|`results_count`|int|Number of results to return (default: 10)|\n",
"|`search_type`|str|Type of search: None (web), \"isch\" (images), \"shop\", \"nws\" (news), \"jobs\"|\n",
"|`device_type`|str|Device type: None (desktop), \"mobile\", \"ios\", \"android\"|\n",
"|`parse_results`|bool|Whether to return structured JSON (default: False)|\n"
]
},
{
"cell_type": "markdown",
"id": "e3353ce6",
"metadata": {},
"source": [
"## Use within an agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c91c32f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataSERP\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"# Initialize the LLM\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\", google_api_key=\"your-api-key\")\n",
"\n",
"# Initialize the Bright Data SERP tool\n",
"serp_tool = BrightDataSERP(\n",
" bright_data_api_key=\"your-api-key\",\n",
" search_engine=\"google\",\n",
" country=\"us\",\n",
" language=\"en\",\n",
" results_count=10,\n",
" parse_results=True,\n",
")\n",
"\n",
"# Create the agent\n",
"agent = create_react_agent(llm, [serp_tool])\n",
"\n",
"# Provide a user query\n",
"user_input = \"Search for 'best electric vehicles' shopping results in Germany in German using mobile.\"\n",
"\n",
"# Stream the agent's output step-by-step\n",
"for step in agent.stream(\n",
" {\"messages\": user_input},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"id": "e8dec55a",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"- [Bright Data API Documentation](https://docs.brightdata.com/scraping-automation/serp-api/introduction)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,314 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# BrightDataUnlocker"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[Bright Data](https://brightdata.com/) provides a powerful Web Unlocker API that allows you to access websites that might be protected by anti-bot measures, geo-restrictions, or other access limitations, making it particularly useful for AI agents requiring reliable web content extraction."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Integration details"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"|Class|Package|Serializable|JS support|Package latest|\n",
"|:--|:--|:-:|:-:|:-:|\n",
"|[BrightDataUnlocker](https://pypi.org/project/langchain-brightdata/)|[langchain-brightdata](https://pypi.org/project/langchain-brightdata/)|✅|❌|![PyPI - Version](https://img.shields.io/pypi/v/langchain-brightdata?style=flat-square&label=%20)|\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Tool features"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"|Native async|Returns artifact|Return data|Pricing|\n",
"|:-:|:-:|:--|:-:|\n",
"|❌|❌|HTML, Markdown, or screenshot of web pages|Requires Bright Data account|\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The integration lives in the `langchain-brightdata` package."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"%pip install langchain-brightdata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You'll need a Bright Data API key to use this tool. You can set it as an environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"BRIGHT_DATA_API_KEY\"] = \"your-api-key\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Or pass it directly when initializing the tool:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataUnlocker\n",
"\n",
"unlocker_tool = BrightDataUnlocker(bright_data_api_key=\"your-api-key\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Here we show how to instantiate the BrightDataUnlocker tool. It lets you access websites that may be protected by anti-bot measures, geo-restrictions, or other access limitations, using Bright Data's Web Unlocker service.\n",
"\n",
"The tool accepts various parameters during instantiation:\n",
"\n",
"- `bright_data_api_key` (required, str): Your Bright Data API key for authentication.\n",
"- `format` (optional, Literal[\"raw\"]): Format of the response content. Default is \"raw\".\n",
"- `country` (optional, str): Two-letter country code for geo-specific access (e.g., \"us\", \"gb\", \"de\", \"jp\"). Set this when you need to view the website as if accessing from a specific country. Default is None.\n",
"- `zone` (optional, str): Bright Data zone to use for the request. The \"unlocker\" zone is optimized for accessing websites that might block regular requests. Default is \"unlocker\".\n",
"- `data_format` (optional, Literal[\"html\", \"markdown\", \"screenshot\"]): Output format for the retrieved content. Options include:\n",
" - \"html\" - Returns the standard HTML content (default)\n",
" - \"markdown\" - Returns content converted to markdown format\n",
" - \"screenshot\" - Returns a PNG screenshot of the rendered page"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Basic Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataUnlocker\n",
"\n",
"# Initialize the tool\n",
"unlocker_tool = BrightDataUnlocker(\n",
" bright_data_api_key=\"your-api-key\" # Optional if set in environment variables\n",
")\n",
"\n",
"# Access a webpage\n",
"result = unlocker_tool.invoke(\"https://example.com\")\n",
"\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Advanced Usage with Parameters"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataUnlocker\n",
"\n",
"unlocker_tool = BrightDataUnlocker(\n",
" bright_data_api_key=\"your-api-key\",\n",
")\n",
"\n",
"# Access a webpage with specific parameters\n",
"result = unlocker_tool.invoke(\n",
" {\n",
" \"url\": \"https://example.com/region-restricted-content\",\n",
" \"country\": \"gb\", # Access as if from Great Britain\n",
" \"data_format\": \"markdown\", # Get content in markdown format\n",
" \"zone\": \"unlocker\", # Use the unlocker zone\n",
" }\n",
")\n",
"\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Customization Options"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The BrightDataUnlocker tool accepts several parameters for customization:\n",
"\n",
"|Parameter|Type|Description|\n",
"|:--|:--|:--|\n",
"|`url`|str|The URL to access|\n",
"|`format`|str|Format of the response content (default: \"raw\")|\n",
"|`country`|str|Two-letter country code for geo-specific access (e.g., \"us\", \"gb\")|\n",
"|`zone`|str|Bright Data zone to use (default: \"unlocker\")|\n",
"|`data_format`|str|Output format: None (HTML), \"markdown\", or \"screenshot\"|\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data Format Options\n",
"\n",
"The `data_format` parameter allows you to specify how the content should be returned:\n",
"\n",
"- `None` or `\"html\"` (default): Returns the standard HTML content of the page\n",
"- `\"markdown\"`: Returns the content converted to markdown format, which is useful for feeding directly to LLMs\n",
"- `\"screenshot\"`: Returns a PNG screenshot of the rendered page, useful for visual analysis"
]
},
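{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (assuming the tool returns raw PNG image bytes when `data_format=\"screenshot\"`), a screenshot could be saved to disk like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataUnlocker\n",
"\n",
"unlocker_tool = BrightDataUnlocker(bright_data_api_key=\"your-api-key\")\n",
"\n",
"# Assumption: \"screenshot\" returns raw PNG bytes rather than text\n",
"screenshot = unlocker_tool.invoke(\n",
"    {\"url\": \"https://example.com\", \"data_format\": \"screenshot\"}\n",
")\n",
"\n",
"with open(\"page.png\", \"wb\") as f:\n",
"    f.write(screenshot)"
]
},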
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within an agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_brightdata import BrightDataUnlocker\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"# Initialize the LLM\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\", google_api_key=\"your-api-key\")\n",
"\n",
"# Initialize the tool\n",
"bright_data_tool = BrightDataUnlocker(bright_data_api_key=\"your-api-key\")\n",
"\n",
"# Create the agent\n",
"agent = create_react_agent(llm, [bright_data_tool])\n",
"\n",
"# Input URLs or prompt\n",
"user_input = \"Get the content from https://example.com/region-restricted-page - access it from GB\"\n",
"\n",
"# Stream the agent's output step by step\n",
"for step in agent.stream(\n",
" {\"messages\": user_input},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"- [Bright Data API Documentation](https://docs.brightdata.com/scraping-automation/web-unlocker/introduction)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -15,11 +15,19 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet google-search-results langchain-community"
"%pip install --upgrade --quiet google-search-results langchain-community"
]
},
{
@@ -31,31 +39,39 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain_community.tools.google_finance import GoogleFinanceQueryRun\n",
"from langchain_community.utilities.google_finance import GoogleFinanceAPIWrapper\n",
"\n",
"os.environ[\"SERPAPI_API_KEY\"] = \"[your serpapi key]\"\n",
"tool = GoogleFinanceQueryRun(api_wrapper=GoogleFinanceAPIWrapper())"
"os.environ[\"SERPAPI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.tools.google_finance import GoogleFinanceQueryRun\n",
"from langchain_community.utilities.google_finance import GoogleFinanceAPIWrapper\n",
"\n",
"tool = GoogleFinanceQueryRun(api_wrapper=GoogleFinanceAPIWrapper())"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\nQuery: Google\\nstock: GOOGL:NASDAQ\\nprice: $161.96\\npercentage: 1.68\\nmovement: Up\\n'"
"'\\nQuery: Google\\nstock: GOOGL:NASDAQ\\nprice: $159.96\\npercentage: 0.94\\nmovement: Up\\nus: price = 42210.57, movement = Down\\neurope: price = 23638.56, movement = Up\\nasia: price = 38183.26, movement = Up\\n'"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -73,9 +89,17 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet langgraph langchain-openai"
]
@@ -89,7 +113,41 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
"os.environ[\"SERP_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import init_chat_model\n",
"\n",
"llm = init_chat_model(\"gpt-4o-mini\", model_provider=\"openai\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits.load_tools import load_tools\n",
"\n",
"tools = load_tools([\"google-scholar\", \"google-finance\"], llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
@@ -101,8 +159,8 @@
"What is Google's stock?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" google_finance (call_u676mJAkdojgkW806ZGSE8mF)\n",
" Call ID: call_u676mJAkdojgkW806ZGSE8mF\n",
" google_finance (call_8m0txCtxNuQaAv9UlomPhSA1)\n",
" Call ID: call_8m0txCtxNuQaAv9UlomPhSA1\n",
" Args:\n",
" query: Google\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
@@ -111,28 +169,22 @@
"\n",
"Query: Google\n",
"stock: GOOGL:NASDAQ\n",
"price: $161.96\n",
"percentage: 1.68\n",
"price: $159.96\n",
"percentage: 0.94\n",
"movement: Up\n",
"us: price = 42210.57, movement = Down\n",
"europe: price = 23638.56, movement = Up\n",
"asia: price = 38183.26, movement = Up\n",
"\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"Google's stock (Ticker: GOOGL) is currently priced at $161.96, showing an increase of 1.68%.\n"
"Google's stock, listed as GOOGL on NASDAQ, is currently priced at $159.96, with a movement up by 0.94%.\n"
]
}
],
"source": [
"import os\n",
"\n",
"from langchain.agents import load_tools\n",
"from langchain.chat_models import init_chat_model\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"[your openai key]\"\n",
"os.environ[\"SERP_API_KEY\"] = \"[your serpapi key]\"\n",
"\n",
"llm = init_chat_model(\"gpt-4o-mini\", model_provider=\"openai\")\n",
"tools = load_tools([\"google-scholar\", \"google-finance\"], llm=llm)\n",
"agent = create_react_agent(llm, tools)\n",
"\n",
"events = agent.stream(\n",
@@ -142,11 +194,18 @@
"for event in events:\n",
" event[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -160,9 +219,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
"version": "3.12.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -83,7 +83,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
"from langchain_community.agent_toolkits.load_tools import load_tools\n",
"\n",
"tools = load_tools(\n",
" [\"graphql\"],\n",
@@ -223,7 +223,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.2"
"version": "3.12.4"
}
},
"nbformat": 4,

View File

@@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 1,
"id": "70871a99-ffee-47d7-8e02-82eb99971f28",
"metadata": {},
"outputs": [],
@@ -51,7 +51,7 @@
{
"data": {
"text/plain": [
"'Barack Hussein Obama II'"
"'Barack Obama Full name: Barack Hussein Obama II'"
]
},
"execution_count": 4,
@@ -73,7 +73,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 5,
"id": "17a9b1ad-6e84-4949-8ebd-8c52f6b296e3",
"metadata": {},
"outputs": [],
@@ -83,48 +83,11 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 6,
"id": "cf8970a5-00e1-46bd-ba53-6a974eebbc10",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m Yes.\n",
"Follow up: How old was Plato when he died?\u001b[0m\n",
"Intermediate answer: \u001b[36;1m\u001b[1;3meighty\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mFollow up: How old was Socrates when he died?\u001b[0m\n",
"Intermediate answer: \u001b[36;1m\u001b[1;3m| Socrates | \n",
"| -------- | \n",
"| Born | c. 470 BC Deme Alopece, Athens | \n",
"| Died | 399 BC (aged approximately 71) Athens | \n",
"| Cause of death | Execution by forced suicide by poisoning | \n",
"| Spouse(s) | Xanthippe, Myrto | \n",
"\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mFollow up: How old was Aristotle when he died?\u001b[0m\n",
"Intermediate answer: \u001b[36;1m\u001b[1;3m62 years\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mSo the final answer is: Plato\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Plato'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"from langchain.agents import AgentType, initialize_agent\n",
"from langchain_community.utilities import SearchApiAPIWrapper\n",
"from langchain_core.tools import Tool\n",
"from langchain_openai import OpenAI\n",
@@ -133,16 +96,88 @@
"search = SearchApiAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name=\"Intermediate Answer\",\n",
" name=\"intermediate_answer\",\n",
" func=search.run,\n",
" description=\"useful for when you need to ask with search\",\n",
" )\n",
"]\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "4198dda8-b7a9-4ae9-bcb6-b95e2c7681b9",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"self_ask_with_search = initialize_agent(\n",
" tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True\n",
")\n",
"self_ask_with_search.run(\"Who lived longer: Plato, Socrates, or Aristotle?\")"
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c24ad140-d41f-4e99-a42f-11371c3897b5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Who lived longer: Plato, Socrates, or Aristotle?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" intermediate_answer (call_Q0JquDV3SWfnn3rkwJkJaffG)\n",
" Call ID: call_Q0JquDV3SWfnn3rkwJkJaffG\n",
" Args:\n",
" __arg1: Lifespan of Plato\n",
" intermediate_answer (call_j9rXzVlrCcGc8HOFnKUH6j5E)\n",
" Call ID: call_j9rXzVlrCcGc8HOFnKUH6j5E\n",
" Args:\n",
" __arg1: Lifespan of Socrates\n",
" intermediate_answer (call_IBQT2qn5PzDE6q0ZyfPdhRaX)\n",
" Call ID: call_IBQT2qn5PzDE6q0ZyfPdhRaX\n",
" Args:\n",
" __arg1: Lifespan of Aristotle\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: intermediate_answer\n",
"\n",
"384322 BC was an Ancient Greek philosopher and polymath. His writings cover a broad range of subjects spanning the natural sciences, philosophy, linguistics, ...\n",
"The Greek philosopher Aristotle (384-322 B.C.) made significant and lasting contributions to nearly every aspect of human knowledge, ...\n",
"Aristotle's lifespan (384 - 322) (jan 1, 384 BC jan 1, 322 BC). Added to timeline: Political Philosophy timeline. ByEdoardo. 25 Aug 2020.\n",
"Aristotle was one of the greatest philosophers and scientists the world has ever seen. He was born in 384 bc at Stagirus, a Greek seaport on the coast of Thrace ...\n",
"393c. 370 bce), king of Macedonia and grandfather of Alexander the Great (reigned 336323 bce). After his father's death in 367, Aristotle ...\n",
"It is difficult to rule out that possibility decisively, since little is known about the period of Aristotle's life from 341335. He evidently ...\n",
"Lifespan: c. 384 B.C. to 322 B.C.; Contributions: Considered one of the greatest thinkers in various fields including politics, psychology, and ...\n",
"Aristotle (Greek: Ἀριστοτέλης Aristotélēs, pronounced [aristotélɛːs]) lived 384322 BC.\n",
"Aristotle (384 B.C.E.—322 B.C.E.). Aristotle is a towering figure in ancient Greek philosophy, who made important contributions to logic, criticism, ...\n",
"Aristotle. Born: 384 BC in Stagirus, Macedonia, Greece Died: 322 BC in Chalcis, Euboea, Greece. Aristotle was not primarily a mathematician but made ...\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"Based on the information:\n",
"\n",
"- Plato reportedly lived to be around eighty or eighty-one years old.\n",
"- Socrates' exact lifespan is not directly stated here, but he is known historically to have lived approximately from 470 BC to 399 BC, making him around 71 years old.\n",
"- Aristotle lived from 384 BC to 322 BC, which means he was about 62 years old.\n",
"\n",
"Therefore, Plato lived longer than both Socrates and Aristotle.\n"
]
}
],
"source": [
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"Who lived longer: Plato, Socrates, or Aristotle?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
@@ -157,7 +192,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"id": "6d0b4411-780a-4dcf-91b6-f3544e31e532",
"metadata": {},
"outputs": [],
@@ -167,17 +202,17 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 10,
"id": "34e79449-6b33-4b45-9306-7e3dab1b8599",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Azure AI Engineer Be an XpanderCandidatar-meCandidatar-meCandidatar-me\\n\\nShare:\\n\\nAzure AI Engineer\\n\\nA área Digital Xperience da Xpand IT é uma equipa tecnológica de rápido crescimento que se concentra em tecnologias Microsoft e Mobile. A sua principal missão é fornecer soluções de software de alta qualidade que atendam às necessidades do utilizador final, num mundo tecnológico continuamente exigente e em ritmo acelerado, proporcionando a melhor experiência em termos de personalização, performance'"
"'No good search result found'"
]
},
"execution_count": 9,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
@@ -196,7 +231,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 11,
"id": "b16b7cd9-f0fe-4030-a36b-bbb52b19da18",
"metadata": {},
"outputs": [],
@@ -206,7 +241,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 12,
"id": "e8adb325-2ad0-4a39-9bc2-d220ec3a29be",
"metadata": {},
"outputs": [
@@ -214,22 +249,22 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'search_metadata': {'id': 'search_qVdXG2jzvrlqTzayeYoaOb8A',\n",
"{'search_metadata': {'id': 'search_6Lpb2Z8vDqdsPRbrGkVgQzRy',\n",
" 'status': 'Success',\n",
" 'created_at': '2023-09-25T15:22:30Z',\n",
" 'request_time_taken': 3.21,\n",
" 'parsing_time_taken': 0.03,\n",
" 'total_time_taken': 3.24,\n",
" 'created_at': '2025-05-11T03:39:28Z',\n",
" 'request_time_taken': 0.86,\n",
" 'parsing_time_taken': 0.01,\n",
" 'total_time_taken': 0.87,\n",
" 'request_url': 'https://scholar.google.com/scholar?q=Large+Language+Models&hl=en',\n",
" 'html_url': 'https://www.searchapi.io/api/v1/searches/search_qVdXG2jzvrlqTzayeYoaOb8A.html',\n",
" 'json_url': 'https://www.searchapi.io/api/v1/searches/search_qVdXG2jzvrlqTzayeYoaOb8A'},\n",
" 'html_url': 'https://www.searchapi.io/api/v1/searches/search_6Lpb2Z8vDqdsPRbrGkVgQzRy.html',\n",
" 'json_url': 'https://www.searchapi.io/api/v1/searches/search_6Lpb2Z8vDqdsPRbrGkVgQzRy'},\n",
" 'search_parameters': {'engine': 'google_scholar',\n",
" 'q': 'Large Language Models',\n",
" 'hl': 'en'},\n",
" 'search_information': {'query_displayed': 'Large Language Models',\n",
" 'total_results': 6420000,\n",
" 'total_results': 6390000,\n",
" 'page': 1,\n",
" 'time_taken_displayed': 0.06},\n",
" 'time_taken_displayed': 0.08},\n",
" 'organic_results': [{'position': 1,\n",
" 'title': 'ChatGPT for good? On opportunities and '\n",
" 'challenges of large language models for '\n",
@@ -245,15 +280,15 @@
" 'we argue that large language models in '\n",
" 'education require …',\n",
" 'inline_links': {'cited_by': {'cites_id': '8166055256995715258',\n",
" 'total': 410,\n",
" 'link': 'https://scholar.google.com/scholar?cites=8166055256995715258&as_sdt=5,33&sciodt=0,33&hl=en'},\n",
" 'total': 4675,\n",
" 'link': 'https://scholar.google.com/scholar?cites=8166055256995715258&as_sdt=2005&sciodt=0,5&hl=en'},\n",
" 'versions': {'cluster_id': '8166055256995715258',\n",
" 'total': 10,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=8166055256995715258&hl=en&as_sdt=0,33'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:uthwmf2nU3EJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,33'},\n",
" 'resource': {'name': 'edarxiv.org',\n",
" 'total': 16,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=8166055256995715258&hl=en&as_sdt=0,5'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:uthwmf2nU3EJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,5'},\n",
" 'resource': {'name': 'osf.io',\n",
" 'format': 'PDF',\n",
" 'link': 'https://edarxiv.org/5er8f/download?format=pdf'},\n",
" 'link': 'https://osf.io/preprints/edarxiv/5er8f/download'},\n",
" 'authors': [{'name': 'E Kasneci',\n",
" 'id': 'bZVkVvoAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=bZVkVvoAAAAJ&hl=en&oi=sra'},\n",
@@ -267,6 +302,82 @@
" 'id': 'TjfQ8QkAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=TjfQ8QkAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 2,\n",
" 'title': 'A survey on evaluation of large language '\n",
" 'models',\n",
" 'data_cid': 'o93zfHYlUTIJ',\n",
" 'link': 'https://dl.acm.org/doi/abs/10.1145/3641289',\n",
" 'publication': 'Y Chang, X Wang, J Wang, Y Wu, L Yang… - '\n",
" 'ACM transactions on …, 2024 - dl.acm.org',\n",
" 'snippet': '… 3.1 Natural Language Processing Tasks … '\n",
" 'the development of language models, '\n",
" 'particularly large language models, was to '\n",
" 'enhance performance on natural language '\n",
" 'processing tasks, …',\n",
" 'inline_links': {'cited_by': {'cites_id': '3625720365842685347',\n",
" 'total': 2864,\n",
" 'link': 'https://scholar.google.com/scholar?cites=3625720365842685347&as_sdt=2005&sciodt=0,5&hl=en'},\n",
" 'versions': {'cluster_id': '3625720365842685347',\n",
" 'total': 8,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=3625720365842685347&hl=en&as_sdt=0,5'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:o93zfHYlUTIJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,5'},\n",
" 'resource': {'name': 'acm.org',\n",
" 'format': 'PDF',\n",
" 'link': 'https://dl.acm.org/doi/pdf/10.1145/3641289'},\n",
" 'authors': [{'name': 'Y Chang',\n",
" 'id': 'Hw-lrpAAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=Hw-lrpAAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'X Wang',\n",
" 'id': 'Q7Ieos8AAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=Q7Ieos8AAAAJ&hl=en&oi=sra'},\n",
" {'name': 'J Wang',\n",
" 'id': 'hBZ_tKsAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=hBZ_tKsAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'Y Wu',\n",
" 'id': 'KVeRu2QAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=KVeRu2QAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'L Yang',\n",
" 'id': 'go3sFxcAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=go3sFxcAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 3,\n",
" 'title': 'A comprehensive overview of large language '\n",
" 'models',\n",
" 'data_cid': 'UDLkJGuOVl4J',\n",
" 'link': 'https://arxiv.org/abs/2307.06435',\n",
" 'publication': 'H Naveed, AU Khan, S Qiu, M Saqib, S '\n",
" 'Anwar… - arXiv preprint arXiv …, 2023 - '\n",
" 'arxiv.org',\n",
" 'snippet': '… Large Language Models (LLMs) have recently '\n",
" 'demonstrated remarkable capabilities in '\n",
" 'natural language processing tasks and '\n",
" 'beyond. This success of LLMs has led to a '\n",
" 'large influx of …',\n",
" 'inline_links': {'cited_by': {'cites_id': '6797777278393922128',\n",
" 'total': 990,\n",
" 'link': 'https://scholar.google.com/scholar?cites=6797777278393922128&as_sdt=2005&sciodt=0,5&hl=en'},\n",
" 'versions': {'cluster_id': '6797777278393922128',\n",
" 'total': 4,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=6797777278393922128&hl=en&as_sdt=0,5'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:UDLkJGuOVl4J:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,5',\n",
" 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:UDLkJGuOVl4J:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,5'},\n",
" 'resource': {'name': 'arxiv.org',\n",
" 'format': 'PDF',\n",
" 'link': 'https://arxiv.org/pdf/2307.06435'},\n",
" 'authors': [{'name': 'H Naveed',\n",
" 'id': 'k5dpooQAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=k5dpooQAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'AU Khan',\n",
" 'id': 'sbOhz2UAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=sbOhz2UAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'S Qiu',\n",
" 'id': 'OPNVthUAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=OPNVthUAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'M Saqib',\n",
" 'id': 'KvbLR3gAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=KvbLR3gAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'S Anwar',\n",
" 'id': 'vPJIHywAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=vPJIHywAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 4,\n",
" 'title': 'Large language models in medicine',\n",
" 'data_cid': 'Ph9AwHTmhzAJ',\n",
" 'link': 'https://www.nature.com/articles/s41591-023-02448-8',\n",
@@ -279,11 +390,15 @@
" '(LLaMA) as its backend model 30 . Finally, '\n",
" 'cheap imitations of …',\n",
" 'inline_links': {'cited_by': {'cites_id': '3497017024792502078',\n",
" 'total': 25,\n",
" 'link': 'https://scholar.google.com/scholar?cites=3497017024792502078&as_sdt=5,33&sciodt=0,33&hl=en'},\n",
" 'total': 2474,\n",
" 'link': 'https://scholar.google.com/scholar?cites=3497017024792502078&as_sdt=2005&sciodt=0,5&hl=en'},\n",
" 'versions': {'cluster_id': '3497017024792502078',\n",
" 'total': 3,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=3497017024792502078&hl=en&as_sdt=0,33'}},\n",
" 'total': 7,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=3497017024792502078&hl=en&as_sdt=0,5'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:Ph9AwHTmhzAJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,5'},\n",
" 'resource': {'name': 'google.com',\n",
" 'format': 'PDF',\n",
" 'link': 'https://drive.google.com/file/d/1FKEGsSZ9GYOeToeKpxB4m3atGRbC-TSm/view'},\n",
" 'authors': [{'name': 'AJ Thirunavukarasu',\n",
" 'id': '3qb1AYwAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=3qb1AYwAAAAJ&hl=en&oi=sra'},\n",
@@ -293,43 +408,132 @@
" {'name': 'K Elangovan',\n",
" 'id': 'BE_lVTQAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=BE_lVTQAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 3,\n",
" 'title': 'Extracting training data from large language '\n",
" 'models',\n",
" 'data_cid': 'mEYsWK6bWKoJ',\n",
" 'link': 'https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting',\n",
" 'publication': 'N Carlini, F Tramer, E Wallace, M '\n",
" 'Jagielski… - 30th USENIX Security …, '\n",
" '2021 - usenix.org',\n",
" 'snippet': '… language model trained on scrapes of the '\n",
" 'public Internet, and are able to extract '\n",
" 'hundreds of verbatim text sequences from the '\n",
" 'model… models are more vulnerable than '\n",
" 'smaller models. …',\n",
" 'inline_links': {'cited_by': {'cites_id': '12274731957504198296',\n",
" 'total': 742,\n",
" 'link': 'https://scholar.google.com/scholar?cites=12274731957504198296&as_sdt=5,33&sciodt=0,33&hl=en'},\n",
" 'versions': {'cluster_id': '12274731957504198296',\n",
" 'total': 8,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=12274731957504198296&hl=en&as_sdt=0,33'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:mEYsWK6bWKoJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:mEYsWK6bWKoJ:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,33'},\n",
" 'resource': {'name': 'usenix.org',\n",
" {'position': 5,\n",
" 'title': 'A watermark for large language models',\n",
" 'data_cid': 'BlSyLHT4iiEJ',\n",
" 'link': 'https://proceedings.mlr.press/v202/kirchenbauer23a.html',\n",
" 'publication': 'J Kirchenbauer, J Geiping, Y Wen… - '\n",
" 'International …, 2023 - '\n",
" 'proceedings.mlr.press',\n",
" 'snippet': '… We propose a watermarking framework for '\n",
" 'proprietary language models. The … in the '\n",
" 'language model just before it produces a '\n",
" 'probability vector. The last layer of the '\n",
" 'language model …',\n",
" 'inline_links': {'cited_by': {'cites_id': '2417017327887471622',\n",
" 'total': 774,\n",
" 'link': 'https://scholar.google.com/scholar?cites=2417017327887471622&as_sdt=2005&sciodt=0,5&hl=en'},\n",
" 'versions': {'cluster_id': '2417017327887471622',\n",
" 'total': 13,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=2417017327887471622&hl=en&as_sdt=0,5'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:BlSyLHT4iiEJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,5',\n",
" 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:BlSyLHT4iiEJ:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,5'},\n",
" 'resource': {'name': 'mlr.press',\n",
" 'format': 'PDF',\n",
" 'link': 'https://www.usenix.org/system/files/sec21-carlini-extracting.pdf'},\n",
" 'authors': [{'name': 'N Carlini',\n",
" 'id': 'q4qDvAoAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=q4qDvAoAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'F Tramer',\n",
" 'id': 'ijH0-a8AAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=ijH0-a8AAAAJ&hl=en&oi=sra'},\n",
" {'name': 'E Wallace',\n",
" 'id': 'SgST3LkAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=SgST3LkAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'M Jagielski',\n",
" 'id': '_8rw_GMAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=_8rw_GMAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 4,\n",
" 'link': 'https://proceedings.mlr.press/v202/kirchenbauer23a/kirchenbauer23a.pdf'},\n",
" 'authors': [{'name': 'J Kirchenbauer',\n",
" 'id': '48GJrbsAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=48GJrbsAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'J Geiping',\n",
" 'id': '206vNCEAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=206vNCEAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'Y Wen',\n",
" 'id': 'oUYfjg0AAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=oUYfjg0AAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 6,\n",
" 'title': 'Welcome to the era of chatgpt et al. the '\n",
" 'prospects of large language models',\n",
" 'data_cid': '3UrgC1BmpV8J',\n",
" 'link': 'https://link.springer.com/article/10.1007/s12599-023-00795-x',\n",
" 'publication': 'T Teubner, CM Flath, C Weinhardt… - '\n",
" 'Business & Information …, 2023 - '\n",
" 'Springer',\n",
" 'snippet': 'The emergence of Large Language Models '\n",
" '(LLMs) in combination with easy-to-use '\n",
" 'interfaces such as ChatGPT, Bing Chat, and '\n",
" 'Googles Bard represent both a Herculean '\n",
" 'task and a …',\n",
" 'inline_links': {'cited_by': {'cites_id': '6892027298743077597',\n",
" 'total': 409,\n",
" 'link': 'https://scholar.google.com/scholar?cites=6892027298743077597&as_sdt=2005&sciodt=0,5&hl=en'},\n",
" 'versions': {'cluster_id': '6892027298743077597',\n",
" 'total': 16,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=6892027298743077597&hl=en&as_sdt=0,5'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:3UrgC1BmpV8J:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,5'},\n",
" 'resource': {'name': 'springer.com',\n",
" 'format': 'PDF',\n",
" 'link': 'https://link.springer.com/content/pdf/10.1007/s12599-023-00795-x.pdf'},\n",
" 'authors': [{'name': 'T Teubner',\n",
" 'id': 'ZeCM1k8AAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=ZeCM1k8AAAAJ&hl=en&oi=sra'},\n",
" {'name': 'CM Flath',\n",
" 'id': '5Iy85HsAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=5Iy85HsAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'C Weinhardt',\n",
" 'id': 'lhfZxjAAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=lhfZxjAAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 7,\n",
" 'title': 'Talking about large language models',\n",
" 'data_cid': '3eYYI745r_0J',\n",
" 'link': 'https://dl.acm.org/doi/abs/10.1145/3624724',\n",
" 'publication': 'M Shanahan - Communications of the ACM, '\n",
" '2024 - dl.acm.org',\n",
" 'snippet': '… Recently, it has become commonplace to use '\n",
" 'the term “large language model” both for the '\n",
" 'generative models themselves and for the '\n",
" 'systems in which they are embedded, '\n",
" 'especially in …',\n",
" 'inline_links': {'cited_by': {'cites_id': '18279892901315536605',\n",
" 'total': 477,\n",
" 'link': 'https://scholar.google.com/scholar?cites=18279892901315536605&as_sdt=2005&sciodt=0,5&hl=en'},\n",
" 'versions': {'cluster_id': '18279892901315536605',\n",
" 'total': 4,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=18279892901315536605&hl=en&as_sdt=0,5'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:3eYYI745r_0J:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,5'},\n",
" 'resource': {'name': 'acm.org',\n",
" 'format': 'PDF',\n",
" 'link': 'https://dl.acm.org/doi/pdf/10.1145/3624724'},\n",
" 'authors': [{'name': 'M Shanahan',\n",
" 'id': '00bnGpAAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=00bnGpAAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 8,\n",
" 'title': 'Explainability for large language models: A '\n",
" 'survey',\n",
" 'data_cid': '0AqRKEINMw4J',\n",
" 'link': 'https://dl.acm.org/doi/abs/10.1145/3639372',\n",
" 'publication': 'H Zhao, H Chen, F Yang, N Liu, H Deng, H '\n",
" 'Cai… - ACM Transactions on …, 2024 - '\n",
" 'dl.acm.org',\n",
" 'snippet': '… Let us consider a scenario where we have a '\n",
" 'language model and we input a specific text '\n",
" 'into the model. The model then produces a '\n",
" 'classification output, such as sentiment …',\n",
" 'inline_links': {'cited_by': {'cites_id': '1023176118142831312',\n",
" 'total': 576,\n",
" 'link': 'https://scholar.google.com/scholar?cites=1023176118142831312&as_sdt=2005&sciodt=0,5&hl=en'},\n",
" 'versions': {'cluster_id': '1023176118142831312',\n",
" 'total': 7,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=1023176118142831312&hl=en&as_sdt=0,5'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:0AqRKEINMw4J:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,5'},\n",
" 'resource': {'name': 'acm.org',\n",
" 'format': 'PDF',\n",
" 'link': 'https://dl.acm.org/doi/pdf/10.1145/3639372'},\n",
" 'authors': [{'name': 'H Zhao',\n",
" 'id': '9FobigIAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=9FobigIAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'H Chen',\n",
" 'id': 'DyYOgLwAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=DyYOgLwAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'F Yang',\n",
" 'id': 'RXFeW-8AAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=RXFeW-8AAAAJ&hl=en&oi=sra'},\n",
" {'name': 'N Liu',\n",
" 'id': 'Nir-EDYAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=Nir-EDYAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'H Cai',\n",
" 'id': 'Kz-r34UAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=Kz-r34UAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 9,\n",
" 'title': 'Emergent abilities of large language models',\n",
" 'data_cid': 'hG0iVOrOguoJ',\n",
" 'link': 'https://arxiv.org/abs/2206.07682',\n",
@@ -341,16 +545,16 @@
" 'efficiency on a wide range of downstream '\n",
" 'tasks. This paper instead discusses an …',\n",
" 'inline_links': {'cited_by': {'cites_id': '16898296257676733828',\n",
" 'total': 621,\n",
" 'link': 'https://scholar.google.com/scholar?cites=16898296257676733828&as_sdt=5,33&sciodt=0,33&hl=en'},\n",
" 'total': 3436,\n",
" 'link': 'https://scholar.google.com/scholar?cites=16898296257676733828&as_sdt=2005&sciodt=0,5&hl=en'},\n",
" 'versions': {'cluster_id': '16898296257676733828',\n",
" 'total': 12,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=16898296257676733828&hl=en&as_sdt=0,33'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:hG0iVOrOguoJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:hG0iVOrOguoJ:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,33'},\n",
" 'total': 11,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=16898296257676733828&hl=en&as_sdt=0,5'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:hG0iVOrOguoJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,5',\n",
" 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:hG0iVOrOguoJ:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,5'},\n",
" 'resource': {'name': 'arxiv.org',\n",
" 'format': 'PDF',\n",
" 'link': 'https://arxiv.org/pdf/2206.07682.pdf?trk=cndc-detail'},\n",
" 'link': 'https://arxiv.org/pdf/2206.07682'},\n",
" 'authors': [{'name': 'J Wei',\n",
" 'id': 'wA5TK_0AAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=wA5TK_0AAAAJ&hl=en&oi=sra'},\n",
@@ -362,232 +566,78 @@
" 'link': 'https://scholar.google.com/citations?user=WMBXw1EAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'C Raffel',\n",
" 'id': 'I66ZBYwAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=I66ZBYwAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'B Zoph',\n",
" 'id': 'NL_7iTwAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=NL_7iTwAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 5,\n",
" 'title': 'A survey on evaluation of large language '\n",
" 'models',\n",
" 'data_cid': 'ZYohnzOz-XgJ',\n",
" 'link': 'https://arxiv.org/abs/2307.03109',\n",
" 'publication': 'Y Chang, X Wang, J Wang, Y Wu, K Zhu… - '\n",
" 'arXiv preprint arXiv …, 2023 - arxiv.org',\n",
" 'snippet': '… 3.1 Natural Language Processing Tasks … '\n",
" 'the development of language models, '\n",
" 'particularly large language models, was to '\n",
" 'enhance performance on natural language '\n",
" 'processing tasks, …',\n",
" 'inline_links': {'cited_by': {'cites_id': '8717195588046785125',\n",
" 'total': 31,\n",
" 'link': 'https://scholar.google.com/scholar?cites=8717195588046785125&as_sdt=5,33&sciodt=0,33&hl=en'},\n",
" 'versions': {'cluster_id': '8717195588046785125',\n",
" 'total': 3,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=8717195588046785125&hl=en&as_sdt=0,33'},\n",
" 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:ZYohnzOz-XgJ:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,33'},\n",
" 'resource': {'name': 'arxiv.org',\n",
" 'format': 'PDF',\n",
" 'link': 'https://arxiv.org/pdf/2307.03109'},\n",
" 'authors': [{'name': 'X Wang',\n",
" 'id': 'Q7Ieos8AAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=Q7Ieos8AAAAJ&hl=en&oi=sra'},\n",
" {'name': 'J Wang',\n",
" 'id': 'YomxTXQAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=YomxTXQAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'Y Wu',\n",
" 'id': 'KVeRu2QAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=KVeRu2QAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'K Zhu',\n",
" 'id': 'g75dFLYAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=g75dFLYAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 6,\n",
" 'title': 'Evaluating large language models trained on '\n",
" 'code',\n",
" 'data_cid': '3tNvW3l5nU4J',\n",
" 'link': 'https://arxiv.org/abs/2107.03374',\n",
" 'publication': 'M Chen, J Tworek, H Jun, Q Yuan, HPO '\n",
" 'Pinto… - arXiv preprint arXiv …, 2021 - '\n",
" 'arxiv.org',\n",
" 'snippet': '… We introduce Codex, a GPT language model '\n",
" 'finetuned on publicly available code from '\n",
" 'GitHub, and study its Python code-writing '\n",
" 'capabilities. A distinct production version '\n",
" 'of Codex …',\n",
" 'inline_links': {'cited_by': {'cites_id': '5664817468434011102',\n",
" 'total': 941,\n",
" 'link': 'https://scholar.google.com/scholar?cites=5664817468434011102&as_sdt=5,33&sciodt=0,33&hl=en'},\n",
" 'versions': {'cluster_id': '5664817468434011102',\n",
" 'total': 2,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=5664817468434011102&hl=en&as_sdt=0,33'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:3tNvW3l5nU4J:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:3tNvW3l5nU4J:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,33'},\n",
" 'resource': {'name': 'arxiv.org',\n",
" 'format': 'PDF',\n",
" 'link': 'https://arxiv.org/pdf/2107.03374.pdf?trk=public_post_comment-text'},\n",
" 'authors': [{'name': 'M Chen',\n",
" 'id': '5fU-QMwAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=5fU-QMwAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'J Tworek',\n",
" 'id': 'ZPuESCQAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=ZPuESCQAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'Q Yuan',\n",
" 'id': 'B059m2EAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=B059m2EAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 7,\n",
" 'title': 'Large language models in machine translation',\n",
" 'data_cid': 'sY5m_Y3-0Y4J',\n",
" 'link': 'http://research.google/pubs/pub33278.pdf',\n",
" 'publication': 'T Brants, AC Popat, P Xu, FJ Och, J Dean '\n",
" '- 2007 - research.google',\n",
" 'snippet': '… the benefits of largescale statistical '\n",
" 'language modeling in ma… trillion tokens, '\n",
" 'resulting in language models having up to '\n",
" '300 … is inexpensive to train on large data '\n",
" 'sets and approaches the …',\n",
" 'type': 'PDF',\n",
" 'inline_links': {'cited_by': {'cites_id': '10291286509313494705',\n",
" 'total': 737,\n",
" 'link': 'https://scholar.google.com/scholar?cites=10291286509313494705&as_sdt=5,33&sciodt=0,33&hl=en'},\n",
" 'versions': {'cluster_id': '10291286509313494705',\n",
" 'total': 31,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=10291286509313494705&hl=en&as_sdt=0,33'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:sY5m_Y3-0Y4J:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:sY5m_Y3-0Y4J:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,33'},\n",
" 'resource': {'name': 'research.google',\n",
" 'format': 'PDF',\n",
" 'link': 'http://research.google/pubs/pub33278.pdf'},\n",
" 'authors': [{'name': 'FJ Och',\n",
" 'id': 'ITGdg6oAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=ITGdg6oAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'J Dean',\n",
" 'id': 'NMS69lQAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=NMS69lQAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 8,\n",
" 'title': 'A watermark for large language models',\n",
" 'data_cid': 'BlSyLHT4iiEJ',\n",
" 'link': 'https://arxiv.org/abs/2301.10226',\n",
" 'publication': 'J Kirchenbauer, J Geiping, Y Wen, J '\n",
" 'Katz… - arXiv preprint arXiv …, 2023 - '\n",
" 'arxiv.org',\n",
" 'snippet': '… To derive this watermark, we examine what '\n",
" 'happens in the language model just before it '\n",
" 'produces a probability vector. The last '\n",
" 'layer of the language model outputs a vector '\n",
" 'of logits l(t). …',\n",
" 'inline_links': {'cited_by': {'cites_id': '2417017327887471622',\n",
" 'total': 104,\n",
" 'link': 'https://scholar.google.com/scholar?cites=2417017327887471622&as_sdt=5,33&sciodt=0,33&hl=en'},\n",
" 'versions': {'cluster_id': '2417017327887471622',\n",
" 'total': 4,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=2417017327887471622&hl=en&as_sdt=0,33'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:BlSyLHT4iiEJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:BlSyLHT4iiEJ:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,33'},\n",
" 'resource': {'name': 'arxiv.org',\n",
" 'format': 'PDF',\n",
" 'link': 'https://arxiv.org/pdf/2301.10226.pdf?curius=1419'},\n",
" 'authors': [{'name': 'J Kirchenbauer',\n",
" 'id': '48GJrbsAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=48GJrbsAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'J Geiping',\n",
" 'id': '206vNCEAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=206vNCEAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'Y Wen',\n",
" 'id': 'oUYfjg0AAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=oUYfjg0AAAAJ&hl=en&oi=sra'},\n",
" {'name': 'J Katz',\n",
" 'id': 'yPw4WjoAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=yPw4WjoAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 9,\n",
" 'title': 'ChatGPT and other large language models are '\n",
" 'double-edged swords',\n",
" 'data_cid': 'So0q8TRvxhYJ',\n",
" 'link': 'https://pubs.rsna.org/doi/full/10.1148/radiol.230163',\n",
" 'publication': 'Y Shen, L Heacock, J Elias, KD Hentel, B '\n",
" 'Reig, G Shih… - Radiology, 2023 - '\n",
" 'pubs.rsna.org',\n",
" 'snippet': '… Large Language Models (LLMs) are deep '\n",
" 'learning models trained to understand and '\n",
" 'generate natural language. Recent studies '\n",
" 'demonstrated that LLMs achieve great success '\n",
" 'in a …',\n",
" 'inline_links': {'cited_by': {'cites_id': '1641121387398204746',\n",
" 'total': 231,\n",
" 'link': 'https://scholar.google.com/scholar?cites=1641121387398204746&as_sdt=5,33&sciodt=0,33&hl=en'},\n",
" 'versions': {'cluster_id': '1641121387398204746',\n",
" 'total': 3,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=1641121387398204746&hl=en&as_sdt=0,33'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:So0q8TRvxhYJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,33'},\n",
" 'authors': [{'name': 'Y Shen',\n",
" 'id': 'XaeN2zgAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=XaeN2zgAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'L Heacock',\n",
" 'id': 'tYYM5IkAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=tYYM5IkAAAAJ&hl=en&oi=sra'}]},\n",
" 'link': 'https://scholar.google.com/citations?user=I66ZBYwAAAAJ&hl=en&oi=sra'}]},\n",
" {'position': 10,\n",
" 'title': 'Pythia: A suite for analyzing large language '\n",
" 'models across training and scaling',\n",
" 'data_cid': 'aaIDvsMAD8QJ',\n",
" 'link': 'https://proceedings.mlr.press/v202/biderman23a.html',\n",
" 'publication': 'S Biderman, H Schoelkopf… - '\n",
" 'International …, 2023 - '\n",
" 'proceedings.mlr.press',\n",
" 'snippet': '… large language models, we prioritize '\n",
" 'consistency in model … out the most '\n",
" 'performance from each model. For example, we '\n",
" '… models, as it is becoming widely used for '\n",
" 'the largest models, …',\n",
" 'inline_links': {'cited_by': {'cites_id': '14127511396791067241',\n",
" 'total': 89,\n",
" 'link': 'https://scholar.google.com/scholar?cites=14127511396791067241&as_sdt=5,33&sciodt=0,33&hl=en'},\n",
" 'versions': {'cluster_id': '14127511396791067241',\n",
" 'total': 3,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=14127511396791067241&hl=en&as_sdt=0,33'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:aaIDvsMAD8QJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:aaIDvsMAD8QJ:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,33'},\n",
" 'resource': {'name': 'mlr.press',\n",
" 'title': 'A systematic evaluation of large language '\n",
" 'models of code',\n",
" 'data_cid': '-iQSW0h72hYJ',\n",
" 'link': 'https://dl.acm.org/doi/abs/10.1145/3520312.3534862',\n",
" 'publication': 'FF Xu, U Alon, G Neubig, VJ Hellendoorn '\n",
" '- Proceedings of the 6th ACM …, 2022 - '\n",
" 'dl.acm.org',\n",
" 'snippet': '… largest language models for code. We also '\n",
" 'release PolyCoder, a large open-source '\n",
" 'language model for code, trained exclusively '\n",
" 'on code in 12 different programming '\n",
" 'languages. In the …',\n",
" 'inline_links': {'cited_by': {'cites_id': '1646764164453115130',\n",
" 'total': 764,\n",
" 'link': 'https://scholar.google.com/scholar?cites=1646764164453115130&as_sdt=2005&sciodt=0,5&hl=en'},\n",
" 'versions': {'cluster_id': '1646764164453115130',\n",
" 'total': 6,\n",
" 'link': 'https://scholar.google.com/scholar?cluster=1646764164453115130&hl=en&as_sdt=0,5'},\n",
" 'related_articles_link': 'https://scholar.google.com/scholar?q=related:-iQSW0h72hYJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,5'},\n",
" 'resource': {'name': 'acm.org',\n",
" 'format': 'PDF',\n",
" 'link': 'https://proceedings.mlr.press/v202/biderman23a/biderman23a.pdf'},\n",
" 'authors': [{'name': 'S Biderman',\n",
" 'id': 'bO7H0DAAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=bO7H0DAAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'H Schoelkopf',\n",
" 'id': 'XLahYIYAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=XLahYIYAAAAJ&hl=en&oi=sra'}]}],\n",
" 'related_searches': [{'query': 'large language models machine',\n",
" 'highlighted': ['machine'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=1&q=large+language+models+machine&qst=ib'},\n",
" {'query': 'large language models pruning',\n",
" 'highlighted': ['pruning'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=2&q=large+language+models+pruning&qst=ib'},\n",
" {'query': 'large language models multitask learners',\n",
" 'highlighted': ['multitask learners'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=3&q=large+language+models+multitask+learners&qst=ib'},\n",
" {'query': 'large language models speech recognition',\n",
" 'highlighted': ['speech recognition'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=4&q=large+language+models+speech+recognition&qst=ib'},\n",
" 'link': 'https://dl.acm.org/doi/pdf/10.1145/3520312.3534862'},\n",
" 'authors': [{'name': 'FF Xu',\n",
" 'id': '1hXyfIkAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=1hXyfIkAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'U Alon',\n",
" 'id': 'QBn7vq8AAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=QBn7vq8AAAAJ&hl=en&oi=sra'},\n",
" {'name': 'G Neubig',\n",
" 'id': 'wlosgkoAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=wlosgkoAAAAJ&hl=en&oi=sra'},\n",
" {'name': 'VJ Hellendoorn',\n",
" 'id': 'PfYrc5kAAAAJ',\n",
" 'link': 'https://scholar.google.com/citations?user=PfYrc5kAAAAJ&hl=en&oi=sra'}]}],\n",
" 'related_searches': [{'query': 'emergent large language models',\n",
" 'highlighted': ['emergent'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,5&qsp=1&q=emergent+large+language+models&qst=ib'},\n",
" {'query': 'large language models abilities',\n",
" 'highlighted': ['abilities'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,5&qsp=2&q=large+language+models+abilities&qst=ib'},\n",
" {'query': 'prompt large language models',\n",
" 'highlighted': ['prompt'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,5&qsp=3&q=prompt+large+language+models&qst=ib'},\n",
" {'query': 'large language models training '\n",
" 'compute-optimal',\n",
" 'highlighted': ['training compute-optimal'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,5&qsp=4&q=large+language+models+training+compute-optimal&qst=ib'},\n",
" {'query': 'large language models machine translation',\n",
" 'highlighted': ['machine translation'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=5&q=large+language+models+machine+translation&qst=ib'},\n",
" {'query': 'emergent abilities of large language models',\n",
" 'highlighted': ['emergent abilities of'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=6&q=emergent+abilities+of+large+language+models&qst=ir'},\n",
" {'query': 'language models privacy risks',\n",
" 'highlighted': ['privacy risks'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=7&q=language+models+privacy+risks&qst=ir'},\n",
" {'query': 'language model fine tuning',\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,5&qsp=5&q=large+language+models+machine+translation&qst=ib'},\n",
" {'query': 'large language models zero shot',\n",
" 'highlighted': ['zero shot'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,5&qsp=6&q=large+language+models+zero+shot&qst=ib'},\n",
" {'query': 'large language models chatgpt',\n",
" 'highlighted': ['chatgpt'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,5&qsp=7&q=large+language+models+chatgpt&qst=ib'},\n",
" {'query': 'fine tuning large language models',\n",
" 'highlighted': ['fine tuning'],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=8&q=language+model+fine+tuning&qst=ir'}],\n",
" 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,5&qsp=8&q=fine+tuning+large+language+models&qst=ib'}],\n",
" 'pagination': {'current': 1,\n",
" 'next': 'https://scholar.google.com/scholar?start=10&q=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" 'other_pages': {'2': 'https://scholar.google.com/scholar?start=10&q=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" '3': 'https://scholar.google.com/scholar?start=20&q=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" '4': 'https://scholar.google.com/scholar?start=30&q=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" '5': 'https://scholar.google.com/scholar?start=40&q=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" '6': 'https://scholar.google.com/scholar?start=50&q=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" '7': 'https://scholar.google.com/scholar?start=60&q=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" '8': 'https://scholar.google.com/scholar?start=70&q=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" '9': 'https://scholar.google.com/scholar?start=80&q=Large+Language+Models&hl=en&as_sdt=0,33',\n",
" '10': 'https://scholar.google.com/scholar?start=90&q=Large+Language+Models&hl=en&as_sdt=0,33'}}}\n"
" 'next': 'https://scholar.google.com/scholar?start=10&q=Large+Language+Models&hl=en&as_sdt=0,5',\n",
" 'other_pages': {'2': 'https://scholar.google.com/scholar?start=10&q=Large+Language+Models&hl=en&as_sdt=0,5',\n",
" '3': 'https://scholar.google.com/scholar?start=20&q=Large+Language+Models&hl=en&as_sdt=0,5',\n",
" '4': 'https://scholar.google.com/scholar?start=30&q=Large+Language+Models&hl=en&as_sdt=0,5',\n",
" '5': 'https://scholar.google.com/scholar?start=40&q=Large+Language+Models&hl=en&as_sdt=0,5',\n",
" '6': 'https://scholar.google.com/scholar?start=50&q=Large+Language+Models&hl=en&as_sdt=0,5',\n",
" '7': 'https://scholar.google.com/scholar?start=60&q=Large+Language+Models&hl=en&as_sdt=0,5',\n",
" '8': 'https://scholar.google.com/scholar?start=70&q=Large+Language+Models&hl=en&as_sdt=0,5',\n",
" '9': 'https://scholar.google.com/scholar?start=80&q=Large+Language+Models&hl=en&as_sdt=0,5',\n",
" '10': 'https://scholar.google.com/scholar?start=90&q=Large+Language+Models&hl=en&as_sdt=0,5'}}}\n"
]
}
],
@@ -596,6 +646,14 @@
"results = search.results(\"Large Language Models\")\n",
"pprint.pp(results)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "11ab5938-e298-471d-96fc-50405ffad35c",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -614,7 +672,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.12.4"
}
},
"nbformat": 4,


@@ -11,8 +11,8 @@
"datasets stored in Aerospike. This new service lives outside of Aerospike and\n",
"builds an index to perform those searches.\n",
"\n",
"This notebook showcases the functionality of the LangChain Aerospike VectorStore\n",
"integration.\n",
"This notebook showcases the functionality of the [LangChain Aerospike VectorStore\n",
"integration](https://github.com/aerospike/langchain-aerospike).\n",
"\n",
"## Install AVS\n",
"\n",
@@ -25,11 +25,11 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"AVS_HOST = \"<avs-ip>\"\n",
"AVS_HOST = \"<avs_ip>\"\n",
"AVS_PORT = 5000"
]
},
@@ -43,15 +43,25 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 5,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m25.0.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m25.1.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n"
]
}
],
"source": [
"!pip install --upgrade --quiet aerospike-vector-search==3.0.1 langchain-community sentence-transformers langchain"
"!pip install --upgrade --quiet aerospike-vector-search==4.2.0 langchain-aerospike langchain-community sentence-transformers langchain"
]
},
{
@@ -65,28 +75,32 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2024-05-10 17:28:17-- https://github.com/aerospike/aerospike-vector-search-examples/raw/7dfab0fccca0852a511c6803aba46578729694b5/quote-semantic-search/container-volumes/quote-search/data/quotes.csv.tgz\n",
"Resolving github.com (github.com)... 140.82.116.4\n",
"Connecting to github.com (github.com)|140.82.116.4|:443... connected.\n",
"--2025-05-07 21:06:30-- https://github.com/aerospike/aerospike-vector-search-examples/raw/7dfab0fccca0852a511c6803aba46578729694b5/quote-semantic-search/container-volumes/quote-search/data/quotes.csv.tgz\n",
"Resolving github.com (github.com)... 140.82.116.3\n",
"Connecting to github.com (github.com)|140.82.116.3|:443... connected.\n",
"HTTP request sent, awaiting response... 301 Moved Permanently\n",
"Location: https://github.com/aerospike/aerospike-vector/raw/7dfab0fccca0852a511c6803aba46578729694b5/quote-semantic-search/container-volumes/quote-search/data/quotes.csv.tgz [following]\n",
"--2025-05-07 21:06:30-- https://github.com/aerospike/aerospike-vector/raw/7dfab0fccca0852a511c6803aba46578729694b5/quote-semantic-search/container-volumes/quote-search/data/quotes.csv.tgz\n",
"Reusing existing connection to github.com:443.\n",
"HTTP request sent, awaiting response... 302 Found\n",
"Location: https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/7dfab0fccca0852a511c6803aba46578729694b5/quote-semantic-search/container-volumes/quote-search/data/quotes.csv.tgz [following]\n",
"--2024-05-10 17:28:17-- https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/7dfab0fccca0852a511c6803aba46578729694b5/quote-semantic-search/container-volumes/quote-search/data/quotes.csv.tgz\n",
"Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.111.133, ...\n",
"Location: https://raw.githubusercontent.com/aerospike/aerospike-vector/7dfab0fccca0852a511c6803aba46578729694b5/quote-semantic-search/container-volumes/quote-search/data/quotes.csv.tgz [following]\n",
"--2025-05-07 21:06:30-- https://raw.githubusercontent.com/aerospike/aerospike-vector/7dfab0fccca0852a511c6803aba46578729694b5/quote-semantic-search/container-volumes/quote-search/data/quotes.csv.tgz\n",
"Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...\n",
"Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 11597643 (11M) [application/octet-stream]\n",
"Saving to: quotes.csv.tgz\n",
"\n",
"quotes.csv.tgz 100%[===================>] 11.06M 1.94MB/s in 6.1s \n",
"quotes.csv.tgz 100%[===================>] 11.06M 12.7MB/s in 0.9s \n",
"\n",
"2024-05-10 17:28:23 (1.81 MB/s) - quotes.csv.tgz saved [11597643/11597643]\n",
"2025-05-07 21:06:32 (12.7 MB/s) - quotes.csv.tgz saved [11597643/11597643]\n",
"\n"
]
}
@@ -106,7 +120,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
@@ -132,14 +146,14 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content=\"quote: I'm selfish, impatient and a little insecure. I make mistakes, I am out of control and at times hard to handle. But if you can't handle me at my worst, then you sure as hell don't deserve me at my best.\" metadata={'source': './quotes.csv', 'row': 0, 'author': 'Marilyn Monroe', 'category': 'attributed-no-source, best, life, love, mistakes, out-of-control, truth, worst'}\n"
"page_content='quote: I'm selfish, impatient and a little insecure. I make mistakes, I am out of control and at times hard to handle. But if you can't handle me at my worst, then you sure as hell don't deserve me at my best.' metadata={'source': './quotes.csv', 'row': 0, 'author': 'Marilyn Monroe', 'category': 'attributed-no-source, best, life, love, mistakes, out-of-control, truth, worst'}\n"
]
}
],
@@ -158,178 +172,18 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "60662fc2676a46a2ac48fbf30d9c85fe",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"modules.json: 0%| | 0.00/349 [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "319412217d3944488f135c8bf8bca73b",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"config_sentence_transformers.json: 0%| | 0.00/116 [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "eb020ec2e2f4486294f85c490ef4a387",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"README.md: 0%| | 0.00/10.7k [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "65d248263e4049bea4f6b554640a6aae",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"sentence_bert_config.json: 0%| | 0.00/53.0 [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/conda/lib/python3.11/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.\n",
" warnings.warn(\n"
"/var/folders/h5/lm2_c1xs3s32kwp11prnpftw0000gp/T/ipykernel_84638/3255399720.py:6: LangChainDeprecationWarning: The class `HuggingFaceEmbeddings` was deprecated in LangChain 0.2.2 and will be removed in 1.0. An updated version of the class exists in the :class:`~langchain-huggingface package and should be used instead. To use it run `pip install -U :class:`~langchain-huggingface` and import as `from :class:`~langchain_huggingface import HuggingFaceEmbeddings``.\n",
" embedder = HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
"/Users/dwelch/Desktop/everything/projects/langchain/myfork/langchain/.venv/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c6b09a49fbd84c799ea28ace296406e3",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"config.json: 0%| | 0.00/612 [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/conda/lib/python3.11/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.\n",
" warnings.warn(\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "7e649688c67544d5af6bdd883c47d315",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"model.safetensors: 0%| | 0.00/90.9M [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "de447c7e4df1485ead14efae1faf96d6",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"tokenizer_config.json: 0%| | 0.00/350 [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "83ad1f289cd04f73aafca01a8e68e63b",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"vocab.txt: 0%| | 0.00/232k [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "2b612221e29e433cb50a54a6b838f5af",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"tokenizer.json: 0%| | 0.00/466k [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "1f5f0c29c58642478cd665731728dad0",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"special_tokens_map.json: 0%| | 0.00/112 [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "dff1d16a5a6d4d20ac39adb5c9425cf6",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"1_Pooling/config.json: 0%| | 0.00/190 [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
@@ -352,7 +206,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 12,
"metadata": {},
"outputs": [
{
@@ -364,9 +218,9 @@
}
],
"source": [
"from aerospike_vector_search import AdminClient, Client, HostPort\n",
"from aerospike_vector_search import Client, HostPort\n",
"from aerospike_vector_search.types import VectorDistanceMetric\n",
"from langchain_community.vectorstores import Aerospike\n",
"from langchain_aerospike.vectorstores import Aerospike\n",
"\n",
"# Here we are using the AVS host and port you configured earlier\n",
"seed = HostPort(host=AVS_HOST, port=AVS_PORT)\n",
@@ -381,13 +235,10 @@
"VECTOR_KEY = \"vector\"\n",
"\n",
"client = Client(seeds=seed)\n",
"admin_client = AdminClient(\n",
" seeds=seed,\n",
")\n",
"index_exists = False\n",
"\n",
"# Check if the index already exists. If not, create it\n",
"for index in admin_client.index_list():\n",
"for index in client.index_list():\n",
" if index[\"id\"][\"namespace\"] == NAMESPACE and index[\"id\"][\"name\"] == INDEX_NAME:\n",
" index_exists = True\n",
" print(f\"{INDEX_NAME} already exists. Skipping creation\")\n",
@@ -395,7 +246,7 @@
"\n",
"if not index_exists:\n",
" print(f\"{INDEX_NAME} does not exist. Creating index\")\n",
" admin_client.index_create(\n",
" client.index_create(\n",
" namespace=NAMESPACE,\n",
" name=INDEX_NAME,\n",
" vector_field=VECTOR_KEY,\n",
@@ -409,8 +260,6 @@
" },\n",
" )\n",
"\n",
"admin_client.close()\n",
"\n",
"docstore = Aerospike.from_documents(\n",
" documents,\n",
" embedder,\n",
@@ -432,7 +281,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 13,
"metadata": {},
"outputs": [
{
@@ -440,31 +289,31 @@
"output_type": "stream",
"text": [
"~~~~ Document 0 ~~~~\n",
"auto-generated id: f53589dd-e3e0-4f55-8214-766ca8dc082f\n",
"auto-generated id: 4984b472-8a32-4552-b3eb-f03b31b68031\n",
"author: Carl Sagan, Cosmos\n",
"quote: The Cosmos is all that is or was or ever will be. Our feeblest contemplations of the Cosmos stir us -- there is a tingling in the spine, a catch in the voice, a faint sensation, as if a distant memory, of falling from a height. We know we are approaching the greatest of mysteries.\n",
"~~~~~~~~~~~~~~~~~~~~\n",
"\n",
"~~~~ Document 1 ~~~~\n",
"auto-generated id: dde3e5d1-30b7-47b4-aab7-e319d14e1810\n",
"author: Elizabeth Gilbert\n",
"quote: The love that moves the sun and the other stars.\n",
"~~~~~~~~~~~~~~~~~~~~\n",
"\n",
"~~~~ Document 2 ~~~~\n",
"auto-generated id: fd56575b-2091-45e7-91c1-9efff2fe5359\n",
"auto-generated id: 486c8d87-8dd7-450d-9008-d7549e680ffb\n",
"author: Renee Ahdieh, The Rose & the Dagger\n",
"quote: From the stars, to the stars.\n",
"~~~~~~~~~~~~~~~~~~~~\n",
"\n",
"~~~~ Document 2 ~~~~\n",
"auto-generated id: 4b43b309-ce51-498c-b225-5254383b5b4a\n",
"author: Elizabeth Gilbert\n",
"quote: The love that moves the sun and the other stars.\n",
"~~~~~~~~~~~~~~~~~~~~\n",
"\n",
"~~~~ Document 3 ~~~~\n",
"auto-generated id: 8567ed4e-885b-44a7-b993-e0caf422b3c9\n",
"auto-generated id: af784a10-f498-4570-bf81-2ffdca35440e\n",
"author: Dante Alighieri, Paradiso\n",
"quote: Love, that moves the sun and the other stars\n",
"~~~~~~~~~~~~~~~~~~~~\n",
"\n",
"~~~~ Document 4 ~~~~\n",
"auto-generated id: f868c25e-c54d-48cd-a5a8-14bf402f9ea8\n",
"auto-generated id: b45d5d5e-d818-4206-ae6b-b1d166ea3d43\n",
"author: Thich Nhat Hanh, Teachings on Love\n",
"quote: Through my love for you, I want to express my love for the whole cosmos, the whole of humanity, and all beings. By living with you, I want to learn to love everyone and all species. If I succeed in loving you, I will be able to love everyone and all species on Earth... This is the real message of love.\n",
"~~~~~~~~~~~~~~~~~~~~\n",
@@ -502,7 +351,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 14,
"metadata": {},
"outputs": [
{
@@ -510,7 +359,7 @@
"output_type": "stream",
"text": [
"New IDs\n",
"['972846bd-87ae-493b-8ba3-a3d023c03948', '8171122e-cbda-4eb7-a711-6625b120893b', '53b54409-ac19-4d90-b518-d7c40bf5ee5d']\n"
"['adf8064e-9c0e-46e2-b193-169c36432f4c', 'cf65b5ed-a0f4-491a-86ad-dcacc23c2815', '2ef52efd-d9b7-4077-bc14-defdf0b7dd2f']\n"
]
}
],
@@ -552,7 +401,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 15,
"metadata": {},
"outputs": [
{
@@ -560,25 +409,25 @@
"output_type": "stream",
"text": [
"~~~~ Document 0 ~~~~\n",
"auto-generated id: 67d5b23f-b2d2-4872-80ad-5834ea08aa64\n",
"auto-generated id: 91e77b39-a528-40c6-a58a-486ae85f991a\n",
"author: John Grogan, Marley and Me: Life and Love With the World's Worst Dog\n",
"quote: Such short little lives our pets have to spend with us, and they spend most of it waiting for us to come home each day. It is amazing how much love and laughter they bring into our lives and even how much closer we become with each other because of them.\n",
"~~~~~~~~~~~~~~~~~~~~\n",
"\n",
"~~~~ Document 1 ~~~~\n",
"auto-generated id: a9b28eb0-a21c-45bf-9e60-ab2b80e988d8\n",
"auto-generated id: c585b4ec-92b5-4579-948c-0529373abc2a\n",
"author: John Grogan, Marley and Me: Life and Love With the World's Worst Dog\n",
"quote: Dogs are great. Bad dogs, if you can really call them that, are perhaps the greatest of them all.\n",
"~~~~~~~~~~~~~~~~~~~~\n",
"\n",
"~~~~ Document 2 ~~~~\n",
"auto-generated id: ee7434c8-2551-4651-8a22-58514980fb4a\n",
"auto-generated id: 5768b31c-fac4-4af7-84b4-fb11bbfcb590\n",
"author: Colleen Houck, Tiger's Curse\n",
"quote: He then put both hands on the door on either side of my head and leaned in close, pinning me against it. I trembled like a downy rabbit caught in the clutches of a wolf. The wolf came closer. He bent his head and began nuzzling my cheek. The problem was…I wanted the wolf to devour me.\n",
"~~~~~~~~~~~~~~~~~~~~\n",
"\n",
"~~~~ Document 3 ~~~~\n",
"auto-generated id: 9170804c-a155-473b-ab93-8a561dd48f91\n",
"auto-generated id: 94f1b9fb-ad57-4f65-b470-7f49dd6c274c\n",
"author: Ray Bradbury\n",
"quote: Stuff your eyes with wonder,\" he said, \"live as if you'd drop dead in ten seconds. See the world. It's more fantastic than any dream made or paid for in factories. Ask no guarantees, ask for no security, there never was such an animal. And if there were, it would be related to the great sloth which hangs upside down in a tree all day every day, sleeping its life away. To hell with that,\" he said, \"shake the tree and knock the great sloth down on his ass.\n",
"~~~~~~~~~~~~~~~~~~~~\n",
@@ -607,7 +456,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 16,
"metadata": {},
"outputs": [
{
@@ -615,25 +464,25 @@
"output_type": "stream",
"text": [
"~~~~ Document 0 ~~~~\n",
"auto-generated id: 2c1d6ee1-b742-45ea-bed6-24a1f655c849\n",
"auto-generated id: 6d9e67a6-0427-41e6-9e24-050518120d74\n",
"author: Roy T. Bennett, The Light in the Heart\n",
"quote: Never lose hope. Storms make people stronger and never last forever.\n",
"~~~~~~~~~~~~~~~~~~~~\n",
"\n",
"~~~~ Document 1 ~~~~\n",
"auto-generated id: 5962c2cf-ffb5-4e03-9257-bdd630b5c7e9\n",
"auto-generated id: 7d426e59-7935-4bcf-a676-cbe8dd4860e7\n",
"author: Roy T. Bennett, The Light in the Heart\n",
"quote: Difficulties and adversities viciously force all their might on us and cause us to fall apart, but they are necessary elements of individual growth and reveal our true potential. We have got to endure and overcome them, and move forward. Never lose hope. Storms make people stronger and never last forever.\n",
"~~~~~~~~~~~~~~~~~~~~\n",
"\n",
"~~~~ Document 2 ~~~~\n",
"auto-generated id: 3bbcc4ca-de89-4196-9a46-190a50bf6c47\n",
"auto-generated id: 6ec05e48-d162-440d-8819-001d2f3712f9\n",
"author: Vincent van Gogh, The Letters of Vincent van Gogh\n",
"quote: There is peace even in the storm\n",
"~~~~~~~~~~~~~~~~~~~~\n",
"\n",
"~~~~ Document 3 ~~~~\n",
"auto-generated id: 37d8cf02-fc2f-429d-b2b6-260a05286108\n",
"auto-generated id: d3c3de59-4da4-4ae6-8f6d-83ed905dd320\n",
"author: Edwin Morgan, A Book of Lives\n",
"quote: Valentine WeatherKiss me with rain on your eyelashes,come on, let us sway together,under the trees, and to hell with thunder.\n",
"~~~~~~~~~~~~~~~~~~~~\n",
@@ -665,7 +514,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
@@ -684,7 +533,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@@ -698,7 +547,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
"version": "3.11.12"
}
},
"nbformat": 4,


@@ -0,0 +1,450 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "7679dd7b-7ed4-4755-a499-824deadba708",
"metadata": {},
"source": [
"# Gel \n",
"\n",
"> An implementation of LangChain vectorstore abstraction using `gel` as the backend.\n",
"\n",
"[Gel](https://www.geldata.com/) is an open-source PostgreSQL data layer optimized for fast development to production cycle. It comes with a high-level strictly typed graph-like data model, composable hierarchical query language, full SQL support, migrations, Auth and AI modules.\n",
"\n",
"The code lives in an integration package called [langchain-gel](https://github.com/geldata/langchain-gel).\n",
"\n",
"## Setup\n",
"\n",
"First install relevant packages:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "92df32f0",
"metadata": {},
"outputs": [],
"source": [
"! pip install -qU gel langchain-gel "
]
},
{
"cell_type": "markdown",
"id": "68ef6ebb",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"In order to use Gel as a backend for your `VectorStore`, you're going to need a working Gel instance.\n",
"Fortunately, it doesn't have to involve Docker containers or anything complicated, unless you want to!\n",
"\n",
"To set up a local instance, run:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b79938d3",
"metadata": {},
"outputs": [],
"source": [
"! gel project init --non-interactive"
]
},
{
"cell_type": "markdown",
"id": "08e79230",
"metadata": {},
"source": [
"If you are using [Gel Cloud](https://cloud.geldata.com/) (and you should!), add one more argument to that command:\n",
"\n",
"```bash\n",
"gel project init --server-instance <org-name>/<instance-name>\n",
"```\n",
"\n",
    "For a comprehensive list of ways to run Gel, take a look at the [Running Gel](https://docs.geldata.com/reference/running) section of the reference docs.\n",
"\n",
"### Set up the schema\n",
"\n",
"[Gel schema](https://docs.geldata.com/reference/datamodel) is an explicit high-level description of your application's data model. \n",
"Aside from enabling you to define exactly how your data is going to be laid out, it drives Gel's many powerful features such as links, access policies, functions, triggers, constraints, indexes, and more.\n",
"\n",
    "LangChain's `VectorStore` expects the following layout for the schema:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "9a7edd58",
"metadata": {},
"outputs": [],
"source": [
"schema_content = \"\"\"\n",
"using extension pgvector;\n",
" \n",
"module default {\n",
" scalar type EmbeddingVector extending ext::pgvector::vector<1536>;\n",
"\n",
" type Record {\n",
" required collection: str;\n",
" text: str;\n",
" embedding: EmbeddingVector; \n",
" external_id: str {\n",
" constraint exclusive;\n",
" };\n",
" metadata: json;\n",
"\n",
" index ext::pgvector::hnsw_cosine(m := 16, ef_construction := 128)\n",
" on (.embedding)\n",
" } \n",
"}\n",
"\"\"\".strip()\n",
"\n",
"with open(\"dbschema/default.gel\", \"w\") as f:\n",
" f.write(schema_content)"
]
},
{
"cell_type": "markdown",
"id": "90320ef1",
"metadata": {},
"source": [
"In order to apply schema changes to the database, run a migration using Gel's [migration mechanism](https://docs.geldata.com/reference/datamodel/migrations):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cdff483e",
"metadata": {},
"outputs": [],
"source": [
"! gel migration create --non-interactive\n",
"! gel migrate"
]
},
{
"cell_type": "markdown",
"id": "b2290ef2",
"metadata": {},
"source": [
"From this point onward, `GelVectorStore` can be used as a drop-in replacement for any other vectorstore available in LangChain."
]
},
{
"cell_type": "markdown",
"id": "ec44dfcc",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n",
"<EmbeddingTabs/>\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "94f5c129",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "979a65bd-742f-4b0d-be1e-c0baae245ec6",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_gel import GelVectorStore\n",
"\n",
"vector_store = GelVectorStore(\n",
" embeddings=embeddings,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "61a224a1-d70b-4daf-86ba-ab6e43c08b50",
"metadata": {},
"source": [
"## Manage vector store\n",
"\n",
"### Add items to vector store\n",
"\n",
    "Note that adding documents by ID will overwrite any existing documents that match that ID."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88a288cc-ffd4-4800-b011-750c72b9fd10",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
"\n",
"docs = [\n",
" Document(\n",
" page_content=\"there are cats in the pond\",\n",
" metadata={\"id\": \"1\", \"location\": \"pond\", \"topic\": \"animals\"},\n",
" ),\n",
" Document(\n",
" page_content=\"ducks are also found in the pond\",\n",
" metadata={\"id\": \"2\", \"location\": \"pond\", \"topic\": \"animals\"},\n",
" ),\n",
" Document(\n",
" page_content=\"fresh apples are available at the market\",\n",
" metadata={\"id\": \"3\", \"location\": \"market\", \"topic\": \"food\"},\n",
" ),\n",
" Document(\n",
" page_content=\"the market also sells fresh oranges\",\n",
" metadata={\"id\": \"4\", \"location\": \"market\", \"topic\": \"food\"},\n",
" ),\n",
" Document(\n",
" page_content=\"the new art exhibit is fascinating\",\n",
" metadata={\"id\": \"5\", \"location\": \"museum\", \"topic\": \"art\"},\n",
" ),\n",
" Document(\n",
" page_content=\"a sculpture exhibit is also at the museum\",\n",
" metadata={\"id\": \"6\", \"location\": \"museum\", \"topic\": \"art\"},\n",
" ),\n",
" Document(\n",
" page_content=\"a new coffee shop opened on Main Street\",\n",
" metadata={\"id\": \"7\", \"location\": \"Main Street\", \"topic\": \"food\"},\n",
" ),\n",
" Document(\n",
" page_content=\"the book club meets at the library\",\n",
" metadata={\"id\": \"8\", \"location\": \"library\", \"topic\": \"reading\"},\n",
" ),\n",
" Document(\n",
" page_content=\"the library hosts a weekly story time for kids\",\n",
" metadata={\"id\": \"9\", \"location\": \"library\", \"topic\": \"reading\"},\n",
" ),\n",
" Document(\n",
" page_content=\"a cooking class for beginners is offered at the community center\",\n",
" metadata={\"id\": \"10\", \"location\": \"community center\", \"topic\": \"classes\"},\n",
" ),\n",
"]\n",
"\n",
"vector_store.add_documents(docs, ids=[doc.metadata[\"id\"] for doc in docs])"
]
},
{
"cell_type": "markdown",
"id": "0c712fa3",
"metadata": {},
"source": [
"### Delete items from vector store"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a5b2b71f-49eb-407d-b03a-dea4c0a517d6",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"vector_store.delete(ids=[\"3\"])"
]
},
{
"cell_type": "markdown",
"id": "59f82250-7903-4279-8300-062542c83416",
"metadata": {},
"source": [
"## Query vector store\n",
"\n",
    "Once your vector store has been created and the relevant documents have been added, you will most likely want to query it from your chain or agent.\n",
"\n",
"### Filtering Support\n",
"\n",
"The vectorstore supports a set of filters that can be applied against the metadata fields of the documents.\n",
"\n",
"| Operator | Meaning/Category |\n",
"|----------|-------------------------|\n",
"| \\$eq | Equality (==) |\n",
"| \\$ne | Inequality (!=) |\n",
"| \\$lt | Less than (&lt;) |\n",
"| \\$lte | Less than or equal (&lt;=) |\n",
"| \\$gt | Greater than (>) |\n",
"| \\$gte | Greater than or equal (>=) |\n",
"| \\$in | Special Cased (in) |\n",
"| \\$nin | Special Cased (not in) |\n",
"| \\$between | Special Cased (between) |\n",
"| \\$like | Text (like) |\n",
"| \\$ilike | Text (case-insensitive like) |\n",
"| \\$and | Logical (and) |\n",
"| \\$or | Logical (or) |\n",
"\n",
"### Query directly\n",
"\n",
"Performing a simple similarity search can be done as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f15a2359-6dc3-4099-8214-785f167a9ca4",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"results = vector_store.similarity_search(\n",
" \"kitty\", k=10, filter={\"id\": {\"$in\": [\"1\", \"5\", \"2\", \"9\"]}}\n",
")\n",
"for doc in results:\n",
" print(f\"* {doc.page_content} [{doc.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "d92ea049-1b1f-4ae9-9525-35750fe2e52e",
"metadata": {},
"source": [
    "If you provide a dict with multiple fields but no operators, the top level is interpreted as a logical **AND** filter."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88f919e4-e4b0-4b5f-99b3-24c675c26d33",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"vector_store.similarity_search(\n",
" \"ducks\",\n",
" k=10,\n",
" filter={\n",
" \"id\": {\"$in\": [\"1\", \"5\", \"2\", \"9\"]},\n",
" \"location\": {\"$in\": [\"pond\", \"market\"]},\n",
" },\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88f423a4-6575-4fb8-9be2-a3da01106591",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"vector_store.similarity_search(\n",
" \"ducks\",\n",
" k=10,\n",
" filter={\n",
" \"$and\": [\n",
" {\"id\": {\"$in\": [\"1\", \"5\", \"2\", \"9\"]}},\n",
" {\"location\": {\"$in\": [\"pond\", \"market\"]}},\n",
" ]\n",
" },\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2e65adc1",
"metadata": {},
"source": [
"If you want to execute a similarity search and receive the corresponding scores you can run:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7d92e7b3",
"metadata": {},
"outputs": [],
"source": [
"results = vector_store.similarity_search_with_score(query=\"cats\", k=1)\n",
"for doc, score in results:\n",
" print(f\"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "8d40db8c",
"metadata": {},
"source": [
"### Query by turning into retriever\n",
"\n",
"You can also transform the vector store into a retriever for easier usage in your chains. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7cd1fb75",
"metadata": {},
"outputs": [],
"source": [
"retriever = vector_store.as_retriever(search_kwargs={\"k\": 1})\n",
"retriever.invoke(\"kitty\")"
]
},
{
"cell_type": "markdown",
"id": "7ecd77a0",
"metadata": {},
"source": [
"## Usage for retrieval-augmented generation\n",
"\n",
"For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:\n",
"\n",
"- [Tutorials](/docs/tutorials/)\n",
"- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n",
"- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)"
]
},
{
"cell_type": "markdown",
"id": "33a5f0e6",
"metadata": {},
"source": [
"## API reference\n",
"\n",
    "For detailed documentation of all GelVectorStore features and configurations, head to the API reference: https://python.langchain.com/api_reference/"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -553,7 +553,10 @@
"cell_type": "markdown",
"id": "8edb47106e1a46a883d545849b8ab81b",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"\n",
@@ -576,6 +579,9 @@
"id": "10185d26023b46108eb7d9f57d49d2b3",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
@@ -603,7 +609,10 @@
"cell_type": "markdown",
"id": "8763a12b2bbd4a93a75aff182afb95dc",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"> - When you use `BM25BuiltInFunction`, please note that the full-text search is available in Milvus Standalone and Milvus Distributed, but not in Milvus Lite, although it is on the roadmap for future inclusion. It will also be available in Zilliz Cloud (fully-managed Milvus) soon. Please reach out to support@zilliz.com for more information.\n",
@@ -617,7 +626,10 @@
"cell_type": "markdown",
"id": "7623eae2785240b9bd12b16a66d81610",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### Rerank the candidates\n",
@@ -632,6 +644,9 @@
"id": "7cdc8c89c7104fffa095e18ddfef8986",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"pycharm": {
"name": "#%%\n"
}
@@ -645,14 +660,6 @@
")"
]
},
{
"cell_type": "markdown",
"id": "b3965036",
"metadata": {},
"source": [
"For more information about Full-text search and Hybrid search, please refer to the [Using Full-Text Search with LangChain and Milvus](https://milvus.io/docs/full_text_search_with_langchain.md) and [Hybrid Retrieval with LangChain and Milvus](https://milvus.io/docs/milvus_hybrid_search_retriever.md)."
]
},
{
"cell_type": "markdown",
"id": "8ac953f1",
@@ -813,7 +820,7 @@
"provenance": []
},
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -827,7 +834,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.0"
"version": "3.13.2"
}
},
"nbformat": 4,


@@ -37,6 +37,7 @@ def _reorder_keys(p):
"downloads",
"downloads_updated_at",
"disabled",
"include_in_api_ref",
]
if set(keys) - set(key_order):
raise ValueError(f"Unexpected keys: {set(keys) - set(key_order)}")


@@ -9,8 +9,10 @@ export default function EmbeddingTabs(props) {
hideOpenai,
azureOpenaiParams,
hideAzureOpenai,
googleParams,
hideGoogle,
googleGenAIParams,
hideGoogleGenAI,
googleVertexAIParams,
hideGoogleVertexAI,
awsParams,
hideAws,
huggingFaceParams,
@@ -38,7 +40,8 @@ export default function EmbeddingTabs(props) {
const azureParamsOrDefault =
azureOpenaiParams ??
`\n azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],\n azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],\n openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],\n`;
const googleParamsOrDefault = googleParams ?? `model="text-embedding-004"`;
const googleGenAIParamsOrDefault = googleGenAIParams ?? `model="models/embedding-001"`;
const googleVertexAIParamsOrDefault = googleVertexAIParams ?? `model="text-embedding-004"`;
const awsParamsOrDefault = awsParams ?? `model_id="amazon.titan-embed-text-v2:0"`;
const huggingFaceParamsOrDefault = huggingFaceParams ?? `model_name="sentence-transformers/all-mpnet-base-v2"`;
const ollamaParamsOrDefault = ollamaParams ?? `model="llama3"`;
@@ -73,13 +76,22 @@ export default function EmbeddingTabs(props) {
shouldHide: hideAzureOpenai,
},
{
value: "Google",
label: "Google",
text: `from langchain_google_vertexai import VertexAIEmbeddings\n\n${embeddingVarName} = VertexAIEmbeddings(${googleParamsOrDefault})`,
value: "GoogleGenAI",
label: "Google Gemini",
text: `from langchain_google_genai import GoogleGenerativeAIEmbeddings\n\n${embeddingVarName} = GoogleGenerativeAIEmbeddings(${googleGenAIParamsOrDefault})`,
apiKeyName: "GOOGLE_API_KEY",
packageName: "langchain-google-genai",
default: false,
shouldHide: hideGoogleGenAI,
},
{
value: "GoogleVertexAI",
label: "Google Vertex",
text: `from langchain_google_vertexai import VertexAIEmbeddings\n\n${embeddingVarName} = VertexAIEmbeddings(${googleVertexAIParamsOrDefault})`,
apiKeyName: undefined,
packageName: "langchain-google-vertexai",
default: false,
shouldHide: hideGoogle,
shouldHide: hideGoogleVertexAI,
},
{
value: "AWS",


@@ -461,14 +461,6 @@ const FEATURE_TABLES = {
apiLink: "https://python.langchain.com/api_reference/elasticsearch/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html",
package: "langchain_elasticsearch"
},
{
name: "MilvusCollectionHybridSearchRetriever",
link: "milvus_hybrid_search",
selfHost: true,
cloudOffering: false,
apiLink: "https://python.langchain.com/api_reference/milvus/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html",
package: "langchain_milvus"
},
{
name: "VertexAISearchRetriever",
link: "google_vertex_ai_search",


@@ -153,6 +153,10 @@
{
"source": "/api_reference/tests/:path(.*/?)*",
"destination": "/api_reference/standard_tests/:path"
},
{
"source": "/docs/integrations/retrievers/milvus_hybrid_search(/?)",
"destination": "https://python.langchain.com/v0.2/docs/integrations/retrievers/milvus_hybrid_search/"
}
]
}


@@ -2,3 +2,5 @@ httpx
grpcio
aiohttp<3.11
protobuf<3.21
tenacity
urllib3


@@ -520,6 +520,8 @@ class RunManager(BaseRunManager):
Returns:
Any: The result of the callback.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_text",
@@ -542,6 +544,8 @@ class RunManager(BaseRunManager):
retry_state (RetryCallState): The retry state.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_retry",
@@ -601,6 +605,8 @@ class AsyncRunManager(BaseRunManager, ABC):
Returns:
Any: The result of the callback.
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_text",
@@ -623,6 +629,8 @@ class AsyncRunManager(BaseRunManager, ABC):
retry_state (RetryCallState): The retry state.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_retry",
@@ -675,6 +683,8 @@ class CallbackManagerForLLMRun(RunManager, LLMManagerMixin):
The chunk. Defaults to None.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_llm_new_token",
@@ -694,6 +704,8 @@ class CallbackManagerForLLMRun(RunManager, LLMManagerMixin):
response (LLMResult): The LLM result.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_llm_end",
@@ -718,6 +730,8 @@ class CallbackManagerForLLMRun(RunManager, LLMManagerMixin):
- response (LLMResult): The response which was generated before
the error occurred.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_llm_error",
@@ -750,7 +764,6 @@ class AsyncCallbackManagerForLLMRun(AsyncRunManager, LLMManagerMixin):
inheritable_metadata=self.inheritable_metadata,
)
@shielded
async def on_llm_new_token(
self,
token: str,
@@ -766,6 +779,8 @@ class AsyncCallbackManagerForLLMRun(AsyncRunManager, LLMManagerMixin):
The chunk. Defaults to None.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_llm_new_token",
@@ -786,6 +801,8 @@ class AsyncCallbackManagerForLLMRun(AsyncRunManager, LLMManagerMixin):
response (LLMResult): The LLM result.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_llm_end",
@@ -814,6 +831,8 @@ class AsyncCallbackManagerForLLMRun(AsyncRunManager, LLMManagerMixin):
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_llm_error",
@@ -836,6 +855,8 @@ class CallbackManagerForChainRun(ParentRunManager, ChainManagerMixin):
outputs (Union[dict[str, Any], Any]): The outputs of the chain.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_chain_end",
@@ -858,6 +879,8 @@ class CallbackManagerForChainRun(ParentRunManager, ChainManagerMixin):
error (Exception or KeyboardInterrupt): The error.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_chain_error",
@@ -879,6 +902,8 @@ class CallbackManagerForChainRun(ParentRunManager, ChainManagerMixin):
Returns:
Any: The result of the callback.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_agent_action",
@@ -900,6 +925,8 @@ class CallbackManagerForChainRun(ParentRunManager, ChainManagerMixin):
Returns:
Any: The result of the callback.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_agent_finish",
@@ -942,6 +969,8 @@ class AsyncCallbackManagerForChainRun(AsyncParentRunManager, ChainManagerMixin):
outputs (Union[dict[str, Any], Any]): The outputs of the chain.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_chain_end",
@@ -965,6 +994,8 @@ class AsyncCallbackManagerForChainRun(AsyncParentRunManager, ChainManagerMixin):
error (Exception or KeyboardInterrupt): The error.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_chain_error",
@@ -976,7 +1007,6 @@ class AsyncCallbackManagerForChainRun(AsyncParentRunManager, ChainManagerMixin):
**kwargs,
)
@shielded
async def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
"""Run when agent action is received.
@@ -987,6 +1017,8 @@ class AsyncCallbackManagerForChainRun(AsyncParentRunManager, ChainManagerMixin):
Returns:
Any: The result of the callback.
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_agent_action",
@@ -998,7 +1030,6 @@ class AsyncCallbackManagerForChainRun(AsyncParentRunManager, ChainManagerMixin):
**kwargs,
)
@shielded
async def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
"""Run when agent finish is received.
@@ -1009,6 +1040,8 @@ class AsyncCallbackManagerForChainRun(AsyncParentRunManager, ChainManagerMixin):
Returns:
Any: The result of the callback.
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_agent_finish",
@@ -1035,6 +1068,8 @@ class CallbackManagerForToolRun(ParentRunManager, ToolManagerMixin):
output (Any): The output of the tool.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_tool_end",
@@ -1057,6 +1092,8 @@ class CallbackManagerForToolRun(ParentRunManager, ToolManagerMixin):
error (Exception or KeyboardInterrupt): The error.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_tool_error",
@@ -1089,7 +1126,6 @@ class AsyncCallbackManagerForToolRun(AsyncParentRunManager, ToolManagerMixin):
inheritable_metadata=self.inheritable_metadata,
)
@shielded
async def on_tool_end(self, output: Any, **kwargs: Any) -> None:
"""Async run when the tool ends running.
@@ -1097,6 +1133,8 @@ class AsyncCallbackManagerForToolRun(AsyncParentRunManager, ToolManagerMixin):
output (Any): The output of the tool.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_tool_end",
@@ -1108,7 +1146,6 @@ class AsyncCallbackManagerForToolRun(AsyncParentRunManager, ToolManagerMixin):
**kwargs,
)
@shielded
async def on_tool_error(
self,
error: BaseException,
@@ -1120,6 +1157,8 @@ class AsyncCallbackManagerForToolRun(AsyncParentRunManager, ToolManagerMixin):
error (Exception or KeyboardInterrupt): The error.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_tool_error",
@@ -1146,6 +1185,8 @@ class CallbackManagerForRetrieverRun(ParentRunManager, RetrieverManagerMixin):
documents (Sequence[Document]): The retrieved documents.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_retriever_end",
@@ -1168,6 +1209,8 @@ class CallbackManagerForRetrieverRun(ParentRunManager, RetrieverManagerMixin):
error (BaseException): The error.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
handle_event(
self.handlers,
"on_retriever_error",
@@ -1213,6 +1256,8 @@ class AsyncCallbackManagerForRetrieverRun(
documents (Sequence[Document]): The retrieved documents.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_retriever_end",
@@ -1236,6 +1281,8 @@ class AsyncCallbackManagerForRetrieverRun(
error (BaseException): The error.
**kwargs (Any): Additional keyword arguments.
"""
if not self.handlers:
return
await ahandle_event(
self.handlers,
"on_retriever_error",
@@ -1521,6 +1568,8 @@ class CallbackManager(BaseCallbackManager):
.. versionadded:: 0.2.14
"""
if not self.handlers:
return
if kwargs:
msg = (
"The dispatcher API does not accept additional keyword arguments."
@@ -1998,6 +2047,8 @@ class AsyncCallbackManager(BaseCallbackManager):
.. versionadded:: 0.2.14
"""
if not self.handlers:
return
if run_id is None:
run_id = uuid.uuid4()
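The repeated change throughout this file is a single pattern: every dispatch method now returns early, before packing arguments, when no callback handlers are registered. A minimal sketch of the pattern with illustrative names:

```python
class RunManagerSketch:
    """Toy stand-in for the callback managers patched above."""

    def __init__(self, handlers=None):
        self.handlers = handlers or []

    def on_text(self, text: str) -> list:
        if not self.handlers:
            # Early return: skip event dispatch entirely when nobody listens.
            return []
        return [handler(text) for handler in self.handlers]
```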


@@ -8,7 +8,7 @@ from io import BufferedReader, BytesIO
from pathlib import Path, PurePath
from typing import TYPE_CHECKING, Any, Literal, Optional, Union, cast
from pydantic import ConfigDict, Field, field_validator, model_validator
from pydantic import ConfigDict, Field, model_validator
from langchain_core.load.serializable import Serializable
@@ -33,7 +33,7 @@ class BaseMedia(Serializable):
# The ID field is optional at the moment.
# It will likely become required in a future major release after
# it has been adopted by enough vectorstore implementations.
id: Optional[str] = None
id: Optional[str] = Field(default=None, coerce_numbers_to_str=True)
"""An optional identifier for the document.
Ideally this should be unique across the document collection and formatted
@@ -45,17 +45,6 @@ class BaseMedia(Serializable):
metadata: dict = Field(default_factory=dict)
"""Arbitrary metadata associated with the content."""
@field_validator("id", mode="before")
def cast_id_to_str(cls, id_value: Any) -> Optional[str]:
"""Coerce the id field to a string.
Args:
id_value: The id value to coerce.
"""
if id_value is not None:
return str(id_value)
return id_value
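The change above swaps a `field_validator` for pydantic's field-level `coerce_numbers_to_str=True`, preserving the same behavior. A plain-Python sketch of that behavior (illustrative function, not the real model):

```python
from typing import Optional, Union

def coerce_id(id_value: Union[int, float, str, None]) -> Optional[str]:
    # Same contract as the removed validator: numbers become strings,
    # None is preserved, strings pass through unchanged.
    if id_value is None:
        return None
    return str(id_value)
```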
class Blob(BaseMedia):
"""Blob represents raw data by either reference or value.


@@ -466,10 +466,9 @@ def index(
_source_ids = cast("Sequence[str]", source_ids)
uids_to_delete = record_manager.list_keys(
group_ids=_source_ids, before=index_start_dt
)
if uids_to_delete:
while uids_to_delete := record_manager.list_keys(
group_ids=_source_ids, before=index_start_dt, limit=cleanup_batch_size
):
# Then delete from vector store.
_delete(destination, uids_to_delete)
# First delete from record store.
@@ -780,10 +779,9 @@ async def aindex(
_source_ids = cast("Sequence[str]", source_ids)
uids_to_delete = await record_manager.alist_keys(
group_ids=_source_ids, before=index_start_dt
)
if uids_to_delete:
while uids_to_delete := await record_manager.alist_keys(
group_ids=_source_ids, before=index_start_dt, limit=cleanup_batch_size
):
# Then delete from vector store.
await _adelete(destination, uids_to_delete)
# First delete from record store.
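The walrus-expression loop above fetches and deletes stale keys in `cleanup_batch_size` batches instead of loading every stale key at once. A toy in-memory sketch of the same loop shape (all names illustrative):

```python
def cleanup(record_store: list, is_stale, batch_size: int) -> int:
    """Delete stale keys from record_store in batches; return count deleted."""
    deleted = 0
    # Re-query each iteration, capped at batch_size, until nothing is left.
    while batch := [k for k in record_store if is_stale(k)][:batch_size]:
        for key in batch:
            record_store.remove(key)  # delete from the backing store
        deleted += len(batch)
    return deleted
```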


@@ -66,6 +66,7 @@ from langchain_core.outputs import (
LLMResult,
RunInfo,
)
from langchain_core.outputs.chat_generation import merge_chat_generation_chunks
from langchain_core.prompt_values import ChatPromptValue, PromptValue, StringPromptValue
from langchain_core.rate_limiters import BaseRateLimiter
from langchain_core.runnables import RunnableMap, RunnablePassthrough
@@ -411,8 +412,8 @@ class BaseChatModel(BaseLanguageModel[BaseMessage], ABC):
**kwargs: Any,
) -> bool:
"""Determine if a given model call should hit the streaming API."""
sync_not_implemented = type(self)._stream == BaseChatModel._stream
async_not_implemented = type(self)._astream == BaseChatModel._astream
sync_not_implemented = type(self)._stream == BaseChatModel._stream # noqa: SLF001
async_not_implemented = type(self)._astream == BaseChatModel._astream # noqa: SLF001
# Check if streaming is implemented.
if (not async_api) and sync_not_implemented:
@@ -485,34 +486,41 @@ class BaseChatModel(BaseLanguageModel[BaseMessage], ABC):
run_id=config.pop("run_id", None),
batch_size=1,
)
generation: Optional[ChatGenerationChunk] = None
chunks: list[ChatGenerationChunk] = []
if self.rate_limiter:
self.rate_limiter.acquire(blocking=True)
try:
input_messages = _normalize_messages(messages)
run_id = "-".join((_LC_ID_PREFIX, str(run_manager.run_id)))
for chunk in self._stream(input_messages, stop=stop, **kwargs):
if chunk.message.id is None:
chunk.message.id = f"{_LC_ID_PREFIX}-{run_manager.run_id}"
chunk.message.id = run_id
chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
run_manager.on_llm_new_token(
cast("str", chunk.message.content), chunk=chunk
)
chunks.append(chunk)
yield chunk.message
if generation is None:
generation = chunk
else:
generation += chunk
except BaseException as e:
generations_with_error_metadata = _generate_response_from_error(e)
if generation:
generations = [[generation], generations_with_error_metadata]
chat_generation_chunk = merge_chat_generation_chunks(chunks)
if chat_generation_chunk:
generations = [
[chat_generation_chunk],
generations_with_error_metadata,
]
else:
generations = [generations_with_error_metadata]
run_manager.on_llm_error(e, response=LLMResult(generations=generations)) # type: ignore[arg-type]
run_manager.on_llm_error(
e,
response=LLMResult(generations=generations), # type: ignore[arg-type]
)
raise
generation = merge_chat_generation_chunks(chunks)
if generation is None:
err = ValueError("No generation chunks were returned")
run_manager.on_llm_error(err, response=LLMResult(generations=[]))
@@ -575,29 +583,29 @@ class BaseChatModel(BaseLanguageModel[BaseMessage], ABC):
if self.rate_limiter:
await self.rate_limiter.aacquire(blocking=True)
generation: Optional[ChatGenerationChunk] = None
chunks: list[ChatGenerationChunk] = []
try:
input_messages = _normalize_messages(messages)
run_id = "-".join((_LC_ID_PREFIX, str(run_manager.run_id)))
async for chunk in self._astream(
input_messages,
stop=stop,
**kwargs,
):
if chunk.message.id is None:
chunk.message.id = f"{_LC_ID_PREFIX}-{run_manager.run_id}"
chunk.message.id = run_id
chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
await run_manager.on_llm_new_token(
cast("str", chunk.message.content), chunk=chunk
)
chunks.append(chunk)
yield chunk.message
if generation is None:
generation = chunk
else:
generation += chunk
except BaseException as e:
generations_with_error_metadata = _generate_response_from_error(e)
if generation:
generations = [[generation], generations_with_error_metadata]
chat_generation_chunk = merge_chat_generation_chunks(chunks)
if chat_generation_chunk:
generations = [[chat_generation_chunk], generations_with_error_metadata]
else:
generations = [generations_with_error_metadata]
await run_manager.on_llm_error(
@@ -606,7 +614,8 @@ class BaseChatModel(BaseLanguageModel[BaseMessage], ABC):
)
raise
if generation is None:
generation = merge_chat_generation_chunks(chunks)
if not generation:
err = ValueError("No generation chunks were returned")
await run_manager.on_llm_error(err, response=LLMResult(generations=[]))
raise err
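The streaming paths above now collect chunks in a list and call the new `merge_chat_generation_chunks` helper once at the end, instead of repeatedly doing `generation += chunk`. A toy analogue of the helper's contract over plain strings:

```python
from typing import Optional

def merge_chunks(chunks: list[str]) -> Optional[str]:
    # Mirrors the helper's contract: None for an empty stream, the lone
    # chunk for a single-element list, one merged value otherwise.
    if not chunks:
        return None
    if len(chunks) == 1:
        return chunks[0]
    merged = chunks[0]
    for chunk in chunks[1:]:
        merged += chunk
    return merged
```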


@@ -522,7 +522,7 @@ class BaseLLM(BaseLanguageModel[str], ABC):
stop: Optional[list[str]] = None,
**kwargs: Any,
) -> Iterator[str]:
if type(self)._stream == BaseLLM._stream:
if type(self)._stream == BaseLLM._stream: # noqa: SLF001
# model doesn't implement streaming, so use default implementation
yield self.invoke(input, config=config, stop=stop, **kwargs)
else:
@@ -590,8 +590,8 @@ class BaseLLM(BaseLanguageModel[str], ABC):
**kwargs: Any,
) -> AsyncIterator[str]:
if (
type(self)._astream is BaseLLM._astream
and type(self)._stream is BaseLLM._stream
type(self)._astream is BaseLLM._astream # noqa: SLF001
and type(self)._stream is BaseLLM._stream # noqa: SLF001
):
yield await self.ainvoke(input, config=config, stop=stop, **kwargs)
return


@@ -194,6 +194,7 @@ class AIMessage(BaseMessage):
"invalid_tool_calls": self.invalid_tool_calls,
}
# TODO: remove this logic if possible, reducing breaking nature of changes
@model_validator(mode="before")
@classmethod
def _backwards_compat_tool_calls(cls, values: dict) -> Any:


@@ -4,7 +4,7 @@ from __future__ import annotations
from typing import TYPE_CHECKING, Any, Optional, Union, cast
from pydantic import ConfigDict, Field, field_validator
from pydantic import ConfigDict, Field
from langchain_core.load.serializable import Serializable
from langchain_core.utils import get_bolded_text
@@ -52,7 +52,7 @@ class BaseMessage(Serializable):
model implementation.
"""
id: Optional[str] = None
id: Optional[str] = Field(default=None, coerce_numbers_to_str=True)
"""An optional unique identifier for the message. This should ideally be
provided by the provider/model which created the message."""
@@ -60,13 +60,6 @@ class BaseMessage(Serializable):
extra="allow",
)
@field_validator("id", mode="before")
def cast_id_to_str(cls, id_value: Any) -> Optional[str]:
"""Coerce the id field to a string."""
if id_value is not None:
return str(id_value)
return id_value
def __init__(
self, content: Union[str, list[Union[str, dict]]], **kwargs: Any
) -> None:


@@ -2,17 +2,14 @@
from __future__ import annotations
from typing import TYPE_CHECKING, Literal, Union
from typing import Literal, Union
from pydantic import model_validator
from pydantic import computed_field
from langchain_core.messages import BaseMessage, BaseMessageChunk
from langchain_core.outputs.generation import Generation
from langchain_core.utils._merge import merge_dicts
if TYPE_CHECKING:
from typing_extensions import Self
class ChatGeneration(Generation):
"""A single chat generation output.
@@ -28,48 +25,30 @@ class ChatGeneration(Generation):
via callbacks).
"""
text: str = ""
"""*SHOULD NOT BE SET DIRECTLY* The text contents of the output message."""
message: BaseMessage
"""The message output by the chat model."""
# Override type to be ChatGeneration, ignore mypy error as this is intentional
type: Literal["ChatGeneration"] = "ChatGeneration" # type: ignore[assignment]
"""Type is used exclusively for serialization purposes."""
@model_validator(mode="after")
def set_text(self) -> Self:
"""Set the text attribute to be the contents of the message.
Args:
values: The values of the object.
Returns:
The values of the object with the text attribute set.
Raises:
ValueError: If the message is not a string or a list.
"""
try:
text = ""
if isinstance(self.message.content, str):
text = self.message.content
# Assumes text in content blocks in OpenAI format.
# Uses first text block.
elif isinstance(self.message.content, list):
for block in self.message.content:
if isinstance(block, str):
text = block
break
if isinstance(block, dict) and "text" in block:
text = block["text"]
break
else:
pass
self.text = text
except (KeyError, AttributeError) as e:
msg = "Error while initializing ChatGeneration"
raise ValueError(msg) from e
return self
@computed_field # type: ignore[prop-decorator]
@property
def text(self) -> str:
"""Set the text attribute to be the contents of the message."""
text_ = ""
if isinstance(self.message.content, str):
text_ = self.message.content
# Assumes text in content blocks in OpenAI format.
# Uses first text block.
elif isinstance(self.message.content, list):
for block in self.message.content:
if isinstance(block, str):
text_ = block
break
if isinstance(block, dict) and "text" in block:
text_ = block["text"]
break
return text_
class ChatGenerationChunk(ChatGeneration):
@@ -80,7 +59,7 @@ class ChatGenerationChunk(ChatGeneration):
message: BaseMessageChunk
"""The message chunk output by the chat model."""
# Override type to be ChatGeneration, ignore mypy error as this is intentional
type: Literal["ChatGenerationChunk"] = "ChatGenerationChunk" # type: ignore[assignment]
"""Type is used exclusively for serialization purposes."""
@@ -115,3 +94,16 @@ class ChatGenerationChunk(ChatGeneration):
)
msg = f"unsupported operand type(s) for +: '{type(self)}' and '{type(other)}'"
raise TypeError(msg)
def merge_chat_generation_chunks(
chunks: list[ChatGenerationChunk],
) -> Union[ChatGenerationChunk, None]:
"""Merge a list of ChatGenerationChunks into a single ChatGenerationChunk."""
if not chunks:
return None
if len(chunks) == 1:
return chunks[0]
return chunks[0] + chunks[1:]
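The `@computed_field` property above derives `text` from the message content on every access instead of storing it, so the two can no longer drift apart. A plain-Python sketch of the extraction rule (illustrative class; the real one is a pydantic model):

```python
class ChatGenerationSketch:
    def __init__(self, content):
        self.content = content  # str, or list of str / {"text": ...} blocks

    @property
    def text(self) -> str:
        if isinstance(self.content, str):
            return self.content
        if isinstance(self.content, list):
            # First string or first {"text": ...} block wins.
            for block in self.content:
                if isinstance(block, str):
                    return block
                if isinstance(block, dict) and "text" in block:
                    return block["text"]
        return ""
```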


@@ -4,6 +4,8 @@ from __future__ import annotations
from typing import Any, Literal, Optional
from pydantic import computed_field
from langchain_core.load import Serializable
from langchain_core.utils._merge import merge_dicts
@@ -24,14 +26,30 @@ class Generation(Serializable):
for more information.
"""
text: str
"""Generated text output."""
def __init__(
self,
text: str = "",
generation_info: Optional[dict[str, Any]] = None,
**kwargs: Any,
):
"""Initialize a Generation."""
super().__init__(generation_info=generation_info, **kwargs)
self._text = text
# workaround for ChatGeneration so that we can use a computed field to populate
# the text field from the message content (parent class needs to have a property)
@computed_field # type: ignore[prop-decorator]
@property
def text(self) -> str:
"""The text contents of the output."""
return self._text
generation_info: Optional[dict[str, Any]] = None
"""Raw response from the provider.
May include things like the reason for finishing or token log probabilities.
"""
type: Literal["Generation"] = "Generation"
"""Type is used exclusively for serialization purposes.
Set to "Generation" for this class."""
@@ -53,6 +71,16 @@ class Generation(Serializable):
class GenerationChunk(Generation):
"""Generation chunk, which can be concatenated with other Generation chunks."""
def __init__(
self,
text: str = "",
generation_info: Optional[dict[str, Any]] = None,
**kwargs: Any,
):
"""Initialize a GenerationChunk."""
super().__init__(text=text, generation_info=generation_info, **kwargs)
self._text = text
def __add__(self, other: GenerationChunk) -> GenerationChunk:
"""Concatenate two GenerationChunks."""
if isinstance(other, GenerationChunk):


@@ -131,7 +131,7 @@ class DynamicRunnable(RunnableSerializable[Input, Output]):
"""
runnable: Runnable[Input, Output] = self
while isinstance(runnable, DynamicRunnable):
runnable, config = runnable._prepare(merge_configs(runnable.config, config))
runnable, config = runnable._prepare(merge_configs(runnable.config, config)) # noqa: SLF001
return runnable, cast("RunnableConfig", config)
@abstractmethod


@@ -163,16 +163,21 @@ class AsciiCanvas:
self.point(x0 + width, y0 + height, "+")
class _EdgeViewer:
def __init__(self) -> None:
self.pts: list[tuple[float]] = []
def setpath(self, pts: list[tuple[float]]) -> None:
self.pts = pts
def _build_sugiyama_layout(
vertices: Mapping[str, str], edges: Sequence[LangEdge]
) -> Any:
try:
from grandalf.graphs import Edge, Graph, Vertex # type: ignore[import-untyped]
from grandalf.layouts import SugiyamaLayout # type: ignore[import-untyped]
from grandalf.routing import ( # type: ignore[import-untyped]
EdgeViewer,
route_with_lines,
)
from grandalf.routing import route_with_lines # type: ignore[import-untyped]
except ImportError as exc:
msg = "Install grandalf to draw graphs: `pip install grandalf`."
raise ImportError(msg) from exc
@@ -199,7 +204,7 @@ def _build_sugiyama_layout(
minw = min(v.view.w for v in vertices_list)
for edge in edges_:
edge.view = EdgeViewer()
edge.view = _EdgeViewer()
sug = SugiyamaLayout(graph.C[0])
graph = graph.C[0]
@@ -277,7 +282,7 @@ def draw_ascii(vertices: Mapping[str, str], edges: Sequence[LangEdge]) -> str:
ylist.extend((vertex.view.xy[1], vertex.view.xy[1] + vertex.view.h))
for edge in sug.g.sE:
for x, y in edge.view._pts:
for x, y in edge.view.pts:
xlist.append(x)
ylist.append(y)
@@ -293,12 +298,12 @@ def draw_ascii(vertices: Mapping[str, str], edges: Sequence[LangEdge]) -> str:
# NOTE: first draw edges so that node boxes could overwrite them
for edge in sug.g.sE:
if len(edge.view._pts) <= 1:
if len(edge.view.pts) <= 1:
msg = "Not enough points to draw an edge"
raise ValueError(msg)
for index in range(1, len(edge.view._pts)):
start = edge.view._pts[index - 1]
end = edge.view._pts[index]
for index in range(1, len(edge.view.pts)):
start = edge.view.pts[index - 1]
end = edge.view.pts[index]
start_x = int(round(start[0] - minx))
start_y = int(round(start[1] - miny))
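The diff above stops reaching into grandalf's private `edge.view._pts` and instead installs a tiny viewer of its own with a public `pts` attribute. A minimal sketch of that adapter (illustrative class name):

```python
class EdgeViewerSketch:
    """Collects routed edge points under a public attribute."""

    def __init__(self) -> None:
        self.pts: list[tuple[float, float]] = []

    def setpath(self, pts: list[tuple[float, float]]) -> None:
        # The layout router calls setpath(); the points stay public.
        self.pts = pts
```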


@@ -845,7 +845,7 @@ class ChildTool(BaseTool):
child_config = patch_config(config, callbacks=run_manager.get_child())
with set_config_context(child_config) as context:
func_to_check = (
self._run if self.__class__._arun is BaseTool._arun else self._arun
self._run if self.__class__._arun is BaseTool._arun else self._arun # noqa: SLF001
)
if signature(func_to_check).parameters.get("run_manager"):
tool_kwargs["run_manager"] = run_manager
@@ -1077,16 +1077,18 @@ def get_all_basemodel_annotations(
"""
# cls has no subscript: cls = FooBar
if isinstance(cls, type):
# Gather pydantic field objects (v2: model_fields / v1: __fields__)
fields = getattr(cls, "model_fields", {}) or getattr(cls, "__fields__", {})
alias_map = {field.alias: name for name, field in fields.items() if field.alias}
annotations: dict[str, type] = {}
for name, param in inspect.signature(cls).parameters.items():
# Exclude hidden init args added by pydantic Config. For example if
# BaseModel(extra="allow") then "extra_data" will part of init sig.
if (
fields := getattr(cls, "model_fields", {}) # pydantic v2+
or getattr(cls, "__fields__", {}) # pydantic v1
) and name not in fields:
if fields and name not in fields and name not in alias_map:
continue
annotations[name] = param.annotation
field_name = alias_map.get(name, name)
annotations[field_name] = param.annotation
orig_bases: tuple = getattr(cls, "__orig_bases__", ())
# cls has subscript: cls = FooBar[int]
else:
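The alias handling added above can be sketched with plain dicts: init parameters may arrive under a field's alias (e.g. `"A"` for field `"a"`), so an alias-to-name map routes each annotation back to the canonical field name. An illustrative stand-in, not the real pydantic-aware code:

```python
def annotations_with_aliases(params: dict, fields: dict) -> dict:
    """params maps init-parameter name -> annotation;
    fields maps field name -> alias (or None)."""
    alias_map = {alias: name for name, alias in fields.items() if alias}
    annotations = {}
    for name, annotation in params.items():
        # Skip hidden init args that are neither fields nor aliases.
        if fields and name not in fields and name not in alias_map:
            continue
        # Record under the canonical field name, not the alias.
        annotations[alias_map.get(name, name)] = annotation
    return annotations
```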


@@ -50,8 +50,8 @@ def log_error_once(method: str, exception: Exception) -> None:
def wait_for_all_tracers() -> None:
"""Wait for all tracers to finish."""
if rt._CLIENT is not None:
rt._CLIENT.flush()
if rt._CLIENT is not None: # noqa: SLF001
rt._CLIENT.flush() # noqa: SLF001
def get_client() -> Client:
@@ -123,8 +123,8 @@ class LangChainTracer(BaseTracer):
run.tags = self.tags.copy()
super()._start_trace(run)
if run._client is None:
run._client = self.client # type: ignore[misc]
if run.ls_client is None:
run.ls_client = self.client
def on_chat_model_start(
self,


@@ -379,7 +379,7 @@ def _get_key(
try:
# This allows for custom falsy data types
# https://github.com/noahmorrison/chevron/issues/35
if resolved_scope._CHEVRON_return_scope_when_falsy: # type: ignore[union-attr]
if resolved_scope._CHEVRON_return_scope_when_falsy: # type: ignore[union-attr] # noqa: SLF001
return resolved_scope
except AttributeError:
if resolved_scope in (0, False):


@@ -1,3 +1,3 @@
"""langchain-core version information and utilities."""
VERSION = "0.3.59"
VERSION = "0.3.60"


@@ -7,17 +7,16 @@ authors = []
license = {text = "MIT"}
requires-python = ">=3.9"
dependencies = [
"langsmith<0.4,>=0.1.125",
"langsmith<0.4,>=0.1.126",
"tenacity!=8.4.0,<10.0.0,>=8.1.0",
"jsonpatch<2.0,>=1.33",
"PyYAML>=5.3",
"packaging<25,>=23.2",
"typing-extensions>=4.7",
"pydantic<3.0.0,>=2.5.2; python_full_version < \"3.12.4\"",
"pydantic<3.0.0,>=2.7.4; python_full_version >= \"3.12.4\"",
"pydantic>=2.7.4",
]
name = "langchain-core"
version = "0.3.59"
version = "0.3.60"
description = "Building applications with LLMs through composability"
readme = "README.md"
@@ -106,7 +105,6 @@ ignore = [
"ERA",
"PLR2004",
"RUF",
"SLF",
]
flake8-type-checking.runtime-evaluated-base-classes = ["pydantic.BaseModel","langchain_core.load.serializable.Serializable","langchain_core.runnables.base.RunnableSerializable"]
flake8-annotations.allow-star-arg-any = true
@@ -133,5 +131,5 @@ classmethod-decorators = [ "classmethod", "langchain_core.utils.pydantic.pre_ini
"tests/unit_tests/runnables/test_runnable.py" = [ "E501",]
"tests/unit_tests/runnables/test_graph.py" = [ "E501",]
"tests/unit_tests/test_tools.py" = [ "ARG",]
"tests/**" = [ "D", "S",]
"tests/**" = [ "D", "S", "SLF",]
"scripts/**" = [ "INP", "S",]


@@ -1882,7 +1882,7 @@ async def test_adeduplication(
}
def test_cleanup_with_different_batchsize(
def test_full_cleanup_with_different_batchsize(
record_manager: InMemoryRecordManager, vector_store: VectorStore
) -> None:
"""Check that we can clean up with different batch size."""
@@ -1919,7 +1919,56 @@ def test_cleanup_with_different_batchsize(
}
async def test_async_cleanup_with_different_batchsize(
def test_incremental_cleanup_with_different_batchsize(
record_manager: InMemoryRecordManager, vector_store: VectorStore
) -> None:
"""Check that we can clean up with different batch size."""
docs = [
Document(
page_content="This is a test document.",
metadata={"source": str(d)},
)
for d in range(1000)
]
assert index(
docs,
record_manager,
vector_store,
source_id_key="source",
cleanup="incremental",
) == {
"num_added": 1000,
"num_deleted": 0,
"num_skipped": 0,
"num_updated": 0,
}
docs = [
Document(
page_content="Different doc",
metadata={"source": str(d)},
)
for d in range(1001)
]
assert index(
docs,
record_manager,
vector_store,
source_id_key="source",
cleanup="incremental",
cleanup_batch_size=17,
) == {
"num_added": 1001,
"num_deleted": 1000,
"num_skipped": 0,
"num_updated": 0,
}
async def test_afull_cleanup_with_different_batchsize(
arecord_manager: InMemoryRecordManager, vector_store: InMemoryVectorStore
) -> None:
"""Check that we can clean up with different batch size."""
@@ -1956,6 +2005,54 @@ async def test_async_cleanup_with_different_batchsize(
}
async def test_aincremental_cleanup_with_different_batchsize(
arecord_manager: InMemoryRecordManager, vector_store: InMemoryVectorStore
) -> None:
"""Check that we can clean up with different batch size."""
docs = [
Document(
page_content="This is a test document.",
metadata={"source": str(d)},
)
for d in range(1000)
]
assert await aindex(
docs,
arecord_manager,
vector_store,
source_id_key="source",
cleanup="incremental",
) == {
"num_added": 1000,
"num_deleted": 0,
"num_skipped": 0,
"num_updated": 0,
}
docs = [
Document(
page_content="Different doc",
metadata={"source": str(d)},
)
for d in range(1001)
]
assert await aindex(
docs,
arecord_manager,
vector_store,
cleanup="incremental",
source_id_key="source",
cleanup_batch_size=17,
) == {
"num_added": 1001,
"num_deleted": 1000,
"num_skipped": 0,
"num_updated": 0,
}
def test_deduplication_v2(
record_manager: InMemoryRecordManager, vector_store: VectorStore
) -> None:


@@ -2146,6 +2146,15 @@ def test__get_all_basemodel_annotations_v1() -> None:
    assert actual == expected


def test_get_all_basemodel_annotations_aliases() -> None:
    class CalculatorInput(BaseModel):
        a: int = Field(description="first number", alias="A")
        b: int = Field(description="second number")

    actual = get_all_basemodel_annotations(CalculatorInput)
    assert actual == {"a": int, "b": int}
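The test above asserts that `get_all_basemodel_annotations` keys annotations by field name (`a`, `b`), not by alias. A minimal sketch of how pydantic v2 itself handles the alias for the same model — assuming pydantic v2's `model_fields` API; the model mirrors the test:

```python
from pydantic import BaseModel, Field


class CalculatorInput(BaseModel):
    a: int = Field(description="first number", alias="A")
    b: int = Field(description="second number")


# Input keys use the alias "A", but the Python attribute is still "a".
inp = CalculatorInput(A=1, b=2)

# The alias is recorded on the field info, not in the annotations.
alias_of_a = CalculatorInput.model_fields["a"].alias  # "A"
```

So an annotation-collecting helper that walks `model_fields` (or `__annotations__`) naturally sees `a` and `b`, which is the behavior the test pins down.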


def test_tool_annotations_preserved() -> None:
    """Test that annotations are preserved when creating a tool."""

libs/core/uv.lock generated

@@ -935,7 +935,7 @@ wheels = [
[[package]]
name = "langchain-core"
version = "0.3.59"
version = "0.3.60"
source = { editable = "." }
dependencies = [
{ name = "jsonpatch" },
@@ -984,10 +984,9 @@ typing = [
[package.metadata]
requires-dist = [
{ name = "jsonpatch", specifier = ">=1.33,<2.0" },
{ name = "langsmith", specifier = ">=0.1.125,<0.4" },
{ name = "langsmith", specifier = ">=0.1.126,<0.4" },
{ name = "packaging", specifier = ">=23.2,<25" },
{ name = "pydantic", marker = "python_full_version < '3.12.4'", specifier = ">=2.5.2,<3.0.0" },
{ name = "pydantic", marker = "python_full_version >= '3.12.4'", specifier = ">=2.7.4,<3.0.0" },
{ name = "pydantic", specifier = ">=2.7.4" },
{ name = "pyyaml", specifier = ">=5.3" },
{ name = "tenacity", specifier = ">=8.1.0,!=8.4.0,<10.0.0" },
{ name = "typing-extensions", specifier = ">=4.7" },
@@ -1718,7 +1717,7 @@ wheels = [
[[package]]
name = "pydantic"
version = "2.11.1"
version = "2.11.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "annotated-types" },
@@ -1726,118 +1725,118 @@ dependencies = [
{ name = "typing-extensions" },
{ name = "typing-inspection" },
]
sdist = { url = "https://files.pythonhosted.org/packages/93/a3/698b87a4d4d303d7c5f62ea5fbf7a79cab236ccfbd0a17847b7f77f8163e/pydantic-2.11.1.tar.gz", hash = "sha256:442557d2910e75c991c39f4b4ab18963d57b9b55122c8b2a9cd176d8c29ce968", size = 782817 }
sdist = { url = "https://files.pythonhosted.org/packages/77/ab/5250d56ad03884ab5efd07f734203943c8a8ab40d551e208af81d0257bf2/pydantic-2.11.4.tar.gz", hash = "sha256:32738d19d63a226a52eed76645a98ee07c1f410ee41d93b4afbfa85ed8111c2d", size = 786540 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cc/12/f9221a949f2419e2e23847303c002476c26fbcfd62dc7f3d25d0bec5ca99/pydantic-2.11.1-py3-none-any.whl", hash = "sha256:5b6c415eee9f8123a14d859be0c84363fec6b1feb6b688d6435801230b56e0b8", size = 442648 },
{ url = "https://files.pythonhosted.org/packages/e7/12/46b65f3534d099349e38ef6ec98b1a5a81f42536d17e0ba382c28c67ba67/pydantic-2.11.4-py3-none-any.whl", hash = "sha256:d9615eaa9ac5a063471da949c8fc16376a84afb5024688b3ff885693506764eb", size = 443900 },
]
[[package]]
name = "pydantic-core"
version = "2.33.0"
version = "2.33.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b9/05/91ce14dfd5a3a99555fce436318cc0fd1f08c4daa32b3248ad63669ea8b4/pydantic_core-2.33.0.tar.gz", hash = "sha256:40eb8af662ba409c3cbf4a8150ad32ae73514cd7cb1f1a2113af39763dd616b3", size = 434080 }
sdist = { url = "https://files.pythonhosted.org/packages/ad/88/5f2260bdfae97aabf98f1778d43f69574390ad787afb646292a638c923d4/pydantic_core-2.33.2.tar.gz", hash = "sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc", size = 435195 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/29/43/0649ad07e66b36a3fb21442b425bd0348ac162c5e686b36471f363201535/pydantic_core-2.33.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:71dffba8fe9ddff628c68f3abd845e91b028361d43c5f8e7b3f8b91d7d85413e", size = 2042968 },
{ url = "https://files.pythonhosted.org/packages/a0/a6/975fea4774a459e495cb4be288efd8b041ac756a0a763f0b976d0861334b/pydantic_core-2.33.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:abaeec1be6ed535a5d7ffc2e6c390083c425832b20efd621562fbb5bff6dc518", size = 1860347 },
{ url = "https://files.pythonhosted.org/packages/aa/49/7858dadad305101a077ec4d0c606b6425a2b134ea8d858458a6d287fd871/pydantic_core-2.33.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:759871f00e26ad3709efc773ac37b4d571de065f9dfb1778012908bcc36b3a73", size = 1910060 },
{ url = "https://files.pythonhosted.org/packages/8d/4f/6522527911d9c5fe6d76b084d8b388d5c84b09d113247b39f91937500b34/pydantic_core-2.33.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:dcfebee69cd5e1c0b76a17e17e347c84b00acebb8dd8edb22d4a03e88e82a207", size = 1997129 },
{ url = "https://files.pythonhosted.org/packages/75/d0/06f396da053e3d73001ea4787e56b4d7132a87c0b5e2e15a041e808c35cd/pydantic_core-2.33.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1b1262b912435a501fa04cd213720609e2cefa723a07c92017d18693e69bf00b", size = 2140389 },
{ url = "https://files.pythonhosted.org/packages/f5/6b/b9ff5b69cd4ef007cf665463f3be2e481dc7eb26c4a55b2f57a94308c31a/pydantic_core-2.33.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4726f1f3f42d6a25678c67da3f0b10f148f5655813c5aca54b0d1742ba821b8f", size = 2754237 },
{ url = "https://files.pythonhosted.org/packages/53/80/b4879de375cdf3718d05fcb60c9aa1f119d28e261dafa51b6a69c78f7178/pydantic_core-2.33.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e790954b5093dff1e3a9a2523fddc4e79722d6f07993b4cd5547825c3cbf97b5", size = 2007433 },
{ url = "https://files.pythonhosted.org/packages/46/24/54054713dc0af98a94eab37e0f4294dfd5cd8f70b2ca9dcdccd15709fd7e/pydantic_core-2.33.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:34e7fb3abe375b5c4e64fab75733d605dda0f59827752debc99c17cb2d5f3276", size = 2123980 },
{ url = "https://files.pythonhosted.org/packages/3a/4c/257c1cb89e14cfa6e95ebcb91b308eb1dd2b348340ff76a6e6fcfa9969e1/pydantic_core-2.33.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:ecb158fb9b9091b515213bed3061eb7deb1d3b4e02327c27a0ea714ff46b0760", size = 2087433 },
{ url = "https://files.pythonhosted.org/packages/0c/62/927df8a39ad78ef7b82c5446e01dec9bb0043e1ad71d8f426062f5f014db/pydantic_core-2.33.0-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:4d9149e7528af8bbd76cc055967e6e04617dcb2a2afdaa3dea899406c5521faa", size = 2260242 },
{ url = "https://files.pythonhosted.org/packages/74/f2/389414f7c77a100954e84d6f52a82bd1788ae69db72364376d8a73b38765/pydantic_core-2.33.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:e81a295adccf73477220e15ff79235ca9dcbcee4be459eb9d4ce9a2763b8386c", size = 2258227 },
{ url = "https://files.pythonhosted.org/packages/53/99/94516313e15d906a1264bb40faf24a01a4af4e2ca8a7c10dd173b6513c5a/pydantic_core-2.33.0-cp310-cp310-win32.whl", hash = "sha256:f22dab23cdbce2005f26a8f0c71698457861f97fc6318c75814a50c75e87d025", size = 1925523 },
{ url = "https://files.pythonhosted.org/packages/7d/67/cc789611c6035a0b71305a1ec6ba196256ced76eba8375f316f840a70456/pydantic_core-2.33.0-cp310-cp310-win_amd64.whl", hash = "sha256:9cb2390355ba084c1ad49485d18449b4242da344dea3e0fe10babd1f0db7dcfc", size = 1951872 },
{ url = "https://files.pythonhosted.org/packages/f0/93/9e97af2619b4026596487a79133e425c7d3c374f0a7f100f3d76bcdf9c83/pydantic_core-2.33.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:a608a75846804271cf9c83e40bbb4dab2ac614d33c6fd5b0c6187f53f5c593ef", size = 2042784 },
{ url = "https://files.pythonhosted.org/packages/42/b4/0bba8412fd242729feeb80e7152e24f0e1a1c19f4121ca3d4a307f4e6222/pydantic_core-2.33.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e1c69aa459f5609dec2fa0652d495353accf3eda5bdb18782bc5a2ae45c9273a", size = 1858179 },
{ url = "https://files.pythonhosted.org/packages/69/1f/c1c40305d929bd08af863df64b0a26203b70b352a1962d86f3bcd52950fe/pydantic_core-2.33.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b9ec80eb5a5f45a2211793f1c4aeddff0c3761d1c70d684965c1807e923a588b", size = 1909396 },
{ url = "https://files.pythonhosted.org/packages/0f/99/d2e727375c329c1e652b5d450fbb9d56e8c3933a397e4bd46e67c68c2cd5/pydantic_core-2.33.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e925819a98318d17251776bd3d6aa9f3ff77b965762155bdad15d1a9265c4cfd", size = 1998264 },
{ url = "https://files.pythonhosted.org/packages/9c/2e/3119a33931278d96ecc2e9e1b9d50c240636cfeb0c49951746ae34e4de74/pydantic_core-2.33.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5bf68bb859799e9cec3d9dd8323c40c00a254aabb56fe08f907e437005932f2b", size = 2140588 },
{ url = "https://files.pythonhosted.org/packages/35/bd/9267bd1ba55f17c80ef6cb7e07b3890b4acbe8eb6014f3102092d53d9300/pydantic_core-2.33.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1b2ea72dea0825949a045fa4071f6d5b3d7620d2a208335207793cf29c5a182d", size = 2746296 },
{ url = "https://files.pythonhosted.org/packages/6f/ed/ef37de6478a412ee627cbebd73e7b72a680f45bfacce9ff1199de6e17e88/pydantic_core-2.33.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1583539533160186ac546b49f5cde9ffc928062c96920f58bd95de32ffd7bffd", size = 2005555 },
{ url = "https://files.pythonhosted.org/packages/dd/84/72c8d1439585d8ee7bc35eb8f88a04a4d302ee4018871f1f85ae1b0c6625/pydantic_core-2.33.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:23c3e77bf8a7317612e5c26a3b084c7edeb9552d645742a54a5867635b4f2453", size = 2124452 },
{ url = "https://files.pythonhosted.org/packages/a7/8f/cb13de30c6a3e303423751a529a3d1271c2effee4b98cf3e397a66ae8498/pydantic_core-2.33.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a7a7f2a3f628d2f7ef11cb6188bcf0b9e1558151d511b974dfea10a49afe192b", size = 2087001 },
{ url = "https://files.pythonhosted.org/packages/83/d0/e93dc8884bf288a63fedeb8040ac8f29cb71ca52e755f48e5170bb63e55b/pydantic_core-2.33.0-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:f1fb026c575e16f673c61c7b86144517705865173f3d0907040ac30c4f9f5915", size = 2261663 },
{ url = "https://files.pythonhosted.org/packages/4c/ba/4b7739c95efa0b542ee45fd872c8f6b1884ab808cf04ce7ac6621b6df76e/pydantic_core-2.33.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:635702b2fed997e0ac256b2cfbdb4dd0bf7c56b5d8fba8ef03489c03b3eb40e2", size = 2257786 },
{ url = "https://files.pythonhosted.org/packages/cc/98/73cbca1d2360c27752cfa2fcdcf14d96230e92d7d48ecd50499865c56bf7/pydantic_core-2.33.0-cp311-cp311-win32.whl", hash = "sha256:07b4ced28fccae3f00626eaa0c4001aa9ec140a29501770a88dbbb0966019a86", size = 1925697 },
{ url = "https://files.pythonhosted.org/packages/9a/26/d85a40edeca5d8830ffc33667d6fef329fd0f4bc0c5181b8b0e206cfe488/pydantic_core-2.33.0-cp311-cp311-win_amd64.whl", hash = "sha256:4927564be53239a87770a5f86bdc272b8d1fbb87ab7783ad70255b4ab01aa25b", size = 1949859 },
{ url = "https://files.pythonhosted.org/packages/7e/0b/5a381605f0b9870465b805f2c86c06b0a7c191668ebe4117777306c2c1e5/pydantic_core-2.33.0-cp311-cp311-win_arm64.whl", hash = "sha256:69297418ad644d521ea3e1aa2e14a2a422726167e9ad22b89e8f1130d68e1e9a", size = 1907978 },
{ url = "https://files.pythonhosted.org/packages/a9/c4/c9381323cbdc1bb26d352bc184422ce77c4bc2f2312b782761093a59fafc/pydantic_core-2.33.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:6c32a40712e3662bebe524abe8abb757f2fa2000028d64cc5a1006016c06af43", size = 2025127 },
{ url = "https://files.pythonhosted.org/packages/6f/bd/af35278080716ecab8f57e84515c7dc535ed95d1c7f52c1c6f7b313a9dab/pydantic_core-2.33.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8ec86b5baa36f0a0bfb37db86c7d52652f8e8aa076ab745ef7725784183c3fdd", size = 1851687 },
{ url = "https://files.pythonhosted.org/packages/12/e4/a01461225809c3533c23bd1916b1e8c2e21727f0fea60ab1acbffc4e2fca/pydantic_core-2.33.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4deac83a8cc1d09e40683be0bc6d1fa4cde8df0a9bf0cda5693f9b0569ac01b6", size = 1892232 },
{ url = "https://files.pythonhosted.org/packages/51/17/3d53d62a328fb0a49911c2962036b9e7a4f781b7d15e9093c26299e5f76d/pydantic_core-2.33.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:175ab598fb457a9aee63206a1993874badf3ed9a456e0654273e56f00747bbd6", size = 1977896 },
{ url = "https://files.pythonhosted.org/packages/30/98/01f9d86e02ec4a38f4b02086acf067f2c776b845d43f901bd1ee1c21bc4b/pydantic_core-2.33.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5f36afd0d56a6c42cf4e8465b6441cf546ed69d3a4ec92724cc9c8c61bd6ecf4", size = 2127717 },
{ url = "https://files.pythonhosted.org/packages/3c/43/6f381575c61b7c58b0fd0b92134c5a1897deea4cdfc3d47567b3ff460a4e/pydantic_core-2.33.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0a98257451164666afafc7cbf5fb00d613e33f7e7ebb322fbcd99345695a9a61", size = 2680287 },
{ url = "https://files.pythonhosted.org/packages/01/42/c0d10d1451d161a9a0da9bbef023b8005aa26e9993a8cc24dc9e3aa96c93/pydantic_core-2.33.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ecc6d02d69b54a2eb83ebcc6f29df04957f734bcf309d346b4f83354d8376862", size = 2008276 },
{ url = "https://files.pythonhosted.org/packages/20/ca/e08df9dba546905c70bae44ced9f3bea25432e34448d95618d41968f40b7/pydantic_core-2.33.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1a69b7596c6603afd049ce7f3835bcf57dd3892fc7279f0ddf987bebed8caa5a", size = 2115305 },
{ url = "https://files.pythonhosted.org/packages/03/1f/9b01d990730a98833113581a78e595fd40ed4c20f9693f5a658fb5f91eff/pydantic_core-2.33.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:ea30239c148b6ef41364c6f51d103c2988965b643d62e10b233b5efdca8c0099", size = 2068999 },
{ url = "https://files.pythonhosted.org/packages/20/18/fe752476a709191148e8b1e1139147841ea5d2b22adcde6ee6abb6c8e7cf/pydantic_core-2.33.0-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:abfa44cf2f7f7d7a199be6c6ec141c9024063205545aa09304349781b9a125e6", size = 2241488 },
{ url = "https://files.pythonhosted.org/packages/81/22/14738ad0a0bf484b928c9e52004f5e0b81dd8dabbdf23b843717b37a71d1/pydantic_core-2.33.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:20d4275f3c4659d92048c70797e5fdc396c6e4446caf517ba5cad2db60cd39d3", size = 2248430 },
{ url = "https://files.pythonhosted.org/packages/e8/27/be7571e215ac8d321712f2433c445b03dbcd645366a18f67b334df8912bc/pydantic_core-2.33.0-cp312-cp312-win32.whl", hash = "sha256:918f2013d7eadea1d88d1a35fd4a1e16aaf90343eb446f91cb091ce7f9b431a2", size = 1908353 },
{ url = "https://files.pythonhosted.org/packages/be/3a/be78f28732f93128bd0e3944bdd4b3970b389a1fbd44907c97291c8dcdec/pydantic_core-2.33.0-cp312-cp312-win_amd64.whl", hash = "sha256:aec79acc183865bad120b0190afac467c20b15289050648b876b07777e67ea48", size = 1955956 },
{ url = "https://files.pythonhosted.org/packages/21/26/b8911ac74faa994694b76ee6a22875cc7a4abea3c381fdba4edc6c6bef84/pydantic_core-2.33.0-cp312-cp312-win_arm64.whl", hash = "sha256:5461934e895968655225dfa8b3be79e7e927e95d4bd6c2d40edd2fa7052e71b6", size = 1903259 },
{ url = "https://files.pythonhosted.org/packages/79/20/de2ad03ce8f5b3accf2196ea9b44f31b0cd16ac6e8cfc6b21976ed45ec35/pydantic_core-2.33.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:f00e8b59e1fc8f09d05594aa7d2b726f1b277ca6155fc84c0396db1b373c4555", size = 2032214 },
{ url = "https://files.pythonhosted.org/packages/f9/af/6817dfda9aac4958d8b516cbb94af507eb171c997ea66453d4d162ae8948/pydantic_core-2.33.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:1a73be93ecef45786d7d95b0c5e9b294faf35629d03d5b145b09b81258c7cd6d", size = 1852338 },
{ url = "https://files.pythonhosted.org/packages/44/f3/49193a312d9c49314f2b953fb55740b7c530710977cabe7183b8ef111b7f/pydantic_core-2.33.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ff48a55be9da6930254565ff5238d71d5e9cd8c5487a191cb85df3bdb8c77365", size = 1896913 },
{ url = "https://files.pythonhosted.org/packages/06/e0/c746677825b2e29a2fa02122a8991c83cdd5b4c5f638f0664d4e35edd4b2/pydantic_core-2.33.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:26a4ea04195638dcd8c53dadb545d70badba51735b1594810e9768c2c0b4a5da", size = 1986046 },
{ url = "https://files.pythonhosted.org/packages/11/ec/44914e7ff78cef16afb5e5273d480c136725acd73d894affdbe2a1bbaad5/pydantic_core-2.33.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:41d698dcbe12b60661f0632b543dbb119e6ba088103b364ff65e951610cb7ce0", size = 2128097 },
{ url = "https://files.pythonhosted.org/packages/fe/f5/c6247d424d01f605ed2e3802f338691cae17137cee6484dce9f1ac0b872b/pydantic_core-2.33.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ae62032ef513fe6281ef0009e30838a01057b832dc265da32c10469622613885", size = 2681062 },
{ url = "https://files.pythonhosted.org/packages/f0/85/114a2113b126fdd7cf9a9443b1b1fe1b572e5bd259d50ba9d5d3e1927fa9/pydantic_core-2.33.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f225f3a3995dbbc26affc191d0443c6c4aa71b83358fd4c2b7d63e2f6f0336f9", size = 2007487 },
{ url = "https://files.pythonhosted.org/packages/e6/40/3c05ed28d225c7a9acd2b34c5c8010c279683a870219b97e9f164a5a8af0/pydantic_core-2.33.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5bdd36b362f419c78d09630cbaebc64913f66f62bda6d42d5fbb08da8cc4f181", size = 2121382 },
{ url = "https://files.pythonhosted.org/packages/8a/22/e70c086f41eebd323e6baa92cc906c3f38ddce7486007eb2bdb3b11c8f64/pydantic_core-2.33.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:2a0147c0bef783fd9abc9f016d66edb6cac466dc54a17ec5f5ada08ff65caf5d", size = 2072473 },
{ url = "https://files.pythonhosted.org/packages/3e/84/d1614dedd8fe5114f6a0e348bcd1535f97d76c038d6102f271433cd1361d/pydantic_core-2.33.0-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:c860773a0f205926172c6644c394e02c25421dc9a456deff16f64c0e299487d3", size = 2249468 },
{ url = "https://files.pythonhosted.org/packages/b0/c0/787061eef44135e00fddb4b56b387a06c303bfd3884a6df9bea5cb730230/pydantic_core-2.33.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:138d31e3f90087f42aa6286fb640f3c7a8eb7bdae829418265e7e7474bd2574b", size = 2254716 },
{ url = "https://files.pythonhosted.org/packages/ae/e2/27262eb04963201e89f9c280f1e10c493a7a37bc877e023f31aa72d2f911/pydantic_core-2.33.0-cp313-cp313-win32.whl", hash = "sha256:d20cbb9d3e95114325780f3cfe990f3ecae24de7a2d75f978783878cce2ad585", size = 1916450 },
{ url = "https://files.pythonhosted.org/packages/13/8d/25ff96f1e89b19e0b70b3cd607c9ea7ca27e1dcb810a9cd4255ed6abf869/pydantic_core-2.33.0-cp313-cp313-win_amd64.whl", hash = "sha256:ca1103d70306489e3d006b0f79db8ca5dd3c977f6f13b2c59ff745249431a606", size = 1956092 },
{ url = "https://files.pythonhosted.org/packages/1b/64/66a2efeff657b04323ffcd7b898cb0354d36dae3a561049e092134a83e9c/pydantic_core-2.33.0-cp313-cp313-win_arm64.whl", hash = "sha256:6291797cad239285275558e0a27872da735b05c75d5237bbade8736f80e4c225", size = 1908367 },
{ url = "https://files.pythonhosted.org/packages/52/54/295e38769133363d7ec4a5863a4d579f331728c71a6644ff1024ee529315/pydantic_core-2.33.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:7b79af799630af263eca9ec87db519426d8c9b3be35016eddad1832bac812d87", size = 1813331 },
{ url = "https://files.pythonhosted.org/packages/4c/9c/0c8ea02db8d682aa1ef48938abae833c1d69bdfa6e5ec13b21734b01ae70/pydantic_core-2.33.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eabf946a4739b5237f4f56d77fa6668263bc466d06a8036c055587c130a46f7b", size = 1986653 },
{ url = "https://files.pythonhosted.org/packages/8e/4f/3fb47d6cbc08c7e00f92300e64ba655428c05c56b8ab6723bd290bae6458/pydantic_core-2.33.0-cp313-cp313t-win_amd64.whl", hash = "sha256:8a1d581e8cdbb857b0e0e81df98603376c1a5c34dc5e54039dcc00f043df81e7", size = 1931234 },
{ url = "https://files.pythonhosted.org/packages/32/b1/933e907c395a17c2ffa551112da2e6e725a200f951a91f61ae0b595a437d/pydantic_core-2.33.0-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:7c9c84749f5787781c1c45bb99f433402e484e515b40675a5d121ea14711cf61", size = 2043225 },
{ url = "https://files.pythonhosted.org/packages/05/92/86daeceaa2cf5e054fcc73e0fa17fe210aa004baf3d0530e4e0b4a0f08ce/pydantic_core-2.33.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:64672fa888595a959cfeff957a654e947e65bbe1d7d82f550417cbd6898a1d6b", size = 1877319 },
{ url = "https://files.pythonhosted.org/packages/20/c0/fab069cff6986c596a28af96f720ff84ec3ee5de6487f274e2b2f2d79c55/pydantic_core-2.33.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:26bc7367c0961dec292244ef2549afa396e72e28cc24706210bd44d947582c59", size = 1910568 },
{ url = "https://files.pythonhosted.org/packages/6d/b5/c02cba6e0c661eb62eb1588a5775ba3e14d80f04071d684a8bd8ae1ca75b/pydantic_core-2.33.0-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ce72d46eb201ca43994303025bd54d8a35a3fc2a3495fac653d6eb7205ce04f4", size = 1997899 },
{ url = "https://files.pythonhosted.org/packages/cc/dc/96a4bb1ea6777e0329d609ade93cc3dca9bc71fd9cbe3f044c8ac39e7c24/pydantic_core-2.33.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:14229c1504287533dbf6b1fc56f752ce2b4e9694022ae7509631ce346158de11", size = 2140646 },
{ url = "https://files.pythonhosted.org/packages/88/3d/9c8ce0dc418fa9b10bc994449ca6d251493525a6debc5f73b07a367b3ced/pydantic_core-2.33.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:085d8985b1c1e48ef271e98a658f562f29d89bda98bf120502283efbc87313eb", size = 2753924 },
{ url = "https://files.pythonhosted.org/packages/17/d6/a9cee7d4689d51bfd01107c2ec8de394f56e974ea4ae7e2d624712bed67a/pydantic_core-2.33.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:31860fbda80d8f6828e84b4a4d129fd9c4535996b8249cfb8c720dc2a1a00bb8", size = 2008316 },
{ url = "https://files.pythonhosted.org/packages/d5/ea/c2578b67b28f3e51323841632e217a5fdd0a8f3fce852bb16782e637cda7/pydantic_core-2.33.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f200b2f20856b5a6c3a35f0d4e344019f805e363416e609e9b47c552d35fd5ea", size = 2124634 },
{ url = "https://files.pythonhosted.org/packages/1f/ae/236dbc8085a88aec1fd8369c6062fff3b40463918af90d20a2058b967f0e/pydantic_core-2.33.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:5f72914cfd1d0176e58ddc05c7a47674ef4222c8253bf70322923e73e14a4ac3", size = 2087826 },
{ url = "https://files.pythonhosted.org/packages/12/ad/8292aebcd787b03167a62df5221e613b76b263b5a05c2310217e88772b75/pydantic_core-2.33.0-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:91301a0980a1d4530d4ba7e6a739ca1a6b31341252cb709948e0aca0860ce0ae", size = 2260866 },
{ url = "https://files.pythonhosted.org/packages/83/f9/d89c9e306f69395fb5b0d6e83e99980046c2b3a7cc2839a43b869838bf60/pydantic_core-2.33.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:7419241e17c7fbe5074ba79143d5523270e04f86f1b3a0dff8df490f84c8273a", size = 2259118 },
{ url = "https://files.pythonhosted.org/packages/30/f1/4da918dcd75898006a6b4da848f231306a2d8b2fda35c7679df76a4ae3d7/pydantic_core-2.33.0-cp39-cp39-win32.whl", hash = "sha256:7a25493320203005d2a4dac76d1b7d953cb49bce6d459d9ae38e30dd9f29bc9c", size = 1925241 },
{ url = "https://files.pythonhosted.org/packages/4f/53/a31aaa220ac133f05e4e3622f65ad9b02e6cbd89723d8d035f5effac8701/pydantic_core-2.33.0-cp39-cp39-win_amd64.whl", hash = "sha256:82a4eba92b7ca8af1b7d5ef5f3d9647eee94d1f74d21ca7c21e3a2b92e008358", size = 1953427 },
{ url = "https://files.pythonhosted.org/packages/44/77/85e173b715e1a277ce934f28d877d82492df13e564fa68a01c96f36a47ad/pydantic_core-2.33.0-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:e2762c568596332fdab56b07060c8ab8362c56cf2a339ee54e491cd503612c50", size = 2040129 },
{ url = "https://files.pythonhosted.org/packages/33/e7/33da5f8a94bbe2191cfcd15bd6d16ecd113e67da1b8c78d3cc3478112dab/pydantic_core-2.33.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:5bf637300ff35d4f59c006fff201c510b2b5e745b07125458a5389af3c0dff8c", size = 1872656 },
{ url = "https://files.pythonhosted.org/packages/b4/7a/9600f222bea840e5b9ba1f17c0acc79b669b24542a78c42c6a10712c0aae/pydantic_core-2.33.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:62c151ce3d59ed56ebd7ce9ce5986a409a85db697d25fc232f8e81f195aa39a1", size = 1903731 },
{ url = "https://files.pythonhosted.org/packages/81/d2/94c7ca4e24c5dcfb74df92e0836c189e9eb6814cf62d2f26a75ea0a906db/pydantic_core-2.33.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9ee65f0cc652261744fd07f2c6e6901c914aa6c5ff4dcfaf1136bc394d0dd26b", size = 2083966 },
{ url = "https://files.pythonhosted.org/packages/b8/74/a0259989d220e8865ed6866a6d40539e40fa8f507e587e35d2414cc081f8/pydantic_core-2.33.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:024d136ae44d233e6322027bbf356712b3940bee816e6c948ce4b90f18471b3d", size = 2118951 },
{ url = "https://files.pythonhosted.org/packages/13/4c/87405ed04d6d07597920b657f082a8e8e58bf3034178bb9044b4d57a91e2/pydantic_core-2.33.0-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:e37f10f6d4bc67c58fbd727108ae1d8b92b397355e68519f1e4a7babb1473442", size = 2079632 },
{ url = "https://files.pythonhosted.org/packages/5a/4c/bcb02970ef91d4cd6de7c6893101302637da456bc8b52c18ea0d047b55ce/pydantic_core-2.33.0-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:502ed542e0d958bd12e7c3e9a015bce57deaf50eaa8c2e1c439b512cb9db1e3a", size = 2250541 },
{ url = "https://files.pythonhosted.org/packages/a3/2b/dbe5450c4cd904be5da736dcc7f2357b828199e29e38de19fc81f988b288/pydantic_core-2.33.0-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:715c62af74c236bf386825c0fdfa08d092ab0f191eb5b4580d11c3189af9d330", size = 2255685 },
{ url = "https://files.pythonhosted.org/packages/ca/a6/ca1d35f695d81f639c5617fc9efb44caad21a9463383fa45364b3044175a/pydantic_core-2.33.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:bccc06fa0372151f37f6b69834181aa9eb57cf8665ed36405fb45fbf6cac3bae", size = 2082395 },
{ url = "https://files.pythonhosted.org/packages/2b/b2/553e42762e7b08771fca41c0230c1ac276f9e79e78f57628e1b7d328551d/pydantic_core-2.33.0-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5d8dc9f63a26f7259b57f46a7aab5af86b2ad6fbe48487500bb1f4b27e051e4c", size = 2041207 },
{ url = "https://files.pythonhosted.org/packages/85/81/a91a57bbf3efe53525ab75f65944b8950e6ef84fe3b9a26c1ec173363263/pydantic_core-2.33.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:30369e54d6d0113d2aa5aee7a90d17f225c13d87902ace8fcd7bbf99b19124db", size = 1873736 },
{ url = "https://files.pythonhosted.org/packages/9c/d2/5ab52e9f551cdcbc1ee99a0b3ef595f56d031f66f88e5ca6726c49f9ce65/pydantic_core-2.33.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3eb479354c62067afa62f53bb387827bee2f75c9c79ef25eef6ab84d4b1ae3b", size = 1903794 },
{ url = "https://files.pythonhosted.org/packages/2f/5f/a81742d3f3821b16f1265f057d6e0b68a3ab13a814fe4bffac536a1f26fd/pydantic_core-2.33.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0310524c833d91403c960b8a3cf9f46c282eadd6afd276c8c5edc617bd705dc9", size = 2083457 },
{ url = "https://files.pythonhosted.org/packages/b5/2f/e872005bc0fc47f9c036b67b12349a8522d32e3bda928e82d676e2a594d1/pydantic_core-2.33.0-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:eddb18a00bbb855325db27b4c2a89a4ba491cd6a0bd6d852b225172a1f54b36c", size = 2119537 },
{ url = "https://files.pythonhosted.org/packages/d3/13/183f13ce647202eaf3dada9e42cdfc59cbb95faedd44d25f22b931115c7f/pydantic_core-2.33.0-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:ade5dbcf8d9ef8f4b28e682d0b29f3008df9842bb5ac48ac2c17bc55771cc976", size = 2080069 },
{ url = "https://files.pythonhosted.org/packages/23/8b/b6be91243da44a26558d9c3a9007043b3750334136c6550551e8092d6d96/pydantic_core-2.33.0-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:2c0afd34f928383e3fd25740f2050dbac9d077e7ba5adbaa2227f4d4f3c8da5c", size = 2251618 },
{ url = "https://files.pythonhosted.org/packages/aa/c5/fbcf1977035b834f63eb542e74cd6c807177f383386175b468f0865bcac4/pydantic_core-2.33.0-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:7da333f21cd9df51d5731513a6d39319892947604924ddf2e24a4612975fb936", size = 2255374 },
{ url = "https://files.pythonhosted.org/packages/2f/f8/66f328e411f1c9574b13c2c28ab01f308b53688bbbe6ca8fb981e6cabc42/pydantic_core-2.33.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:4b6d77c75a57f041c5ee915ff0b0bb58eabb78728b69ed967bc5b780e8f701b8", size = 2082099 },
{ url = "https://files.pythonhosted.org/packages/a7/b2/7d0182cb46cfa1e003a5a52b6a15d50ad3c191a34ca5e6f5726a56ac016f/pydantic_core-2.33.0-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:ba95691cf25f63df53c1d342413b41bd7762d9acb425df8858d7efa616c0870e", size = 2040349 },
{ url = "https://files.pythonhosted.org/packages/58/9f/dc18700d82cd4e053ff02155d40cff89b08d8583668a0b54ca1b223d3132/pydantic_core-2.33.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:4f1ab031feb8676f6bd7c85abec86e2935850bf19b84432c64e3e239bffeb1ec", size = 1873052 },
{ url = "https://files.pythonhosted.org/packages/06/a9/a30a2603121b5841dc2b8dea4e18db74fa83c8c9d4804401dec23bcd3bb0/pydantic_core-2.33.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:58c1151827eef98b83d49b6ca6065575876a02d2211f259fb1a6b7757bd24dd8", size = 1904205 },
{ url = "https://files.pythonhosted.org/packages/53/b7/cc7638fd83ad8bb19cab297e3f0a669bd9633830833865c064a74ff5a1c1/pydantic_core-2.33.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a66d931ea2c1464b738ace44b7334ab32a2fd50be023d863935eb00f42be1778", size = 2084567 },
{ url = "https://files.pythonhosted.org/packages/c4/f0/37ba8bdc15d2c233b2a3675160cc1b205e30dd9ef4cd6d3dfe069799e160/pydantic_core-2.33.0-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bcf0bab28995d483f6c8d7db25e0d05c3efa5cebfd7f56474359e7137f39856", size = 2119072 },
{ url = "https://files.pythonhosted.org/packages/eb/29/e553e2e9c16e5ad9370e947f15585db4f7438ab4b52c53f93695c99831cd/pydantic_core-2.33.0-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:89670d7a0045acb52be0566df5bc8b114ac967c662c06cf5e0c606e4aadc964b", size = 2080432 },
{ url = "https://files.pythonhosted.org/packages/65/ca/268cae039ea91366ba88b9a848977b7189cb7675cb2cd9ee273464a20d91/pydantic_core-2.33.0-pp39-pypy39_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:b716294e721d8060908dbebe32639b01bfe61b15f9f57bcc18ca9a0e00d9520b", size = 2251007 },
{ url = "https://files.pythonhosted.org/packages/3c/a4/5ca3a14b5d992e63a766b8883d4ba8b4d353ef6a2d9f59ee5d60e728998a/pydantic_core-2.33.0-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:fc53e05c16697ff0c1c7c2b98e45e131d4bfb78068fffff92a82d169cbb4c7b7", size = 2256435 },
{ url = "https://files.pythonhosted.org/packages/da/a2/2670964d7046025b96f8c6d35c38e5310ec6aa1681e4158ef31ab21a4727/pydantic_core-2.33.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:68504959253303d3ae9406b634997a2123a0b0c1da86459abbd0ffc921695eac", size = 2082790 },
{ url = "https://files.pythonhosted.org/packages/e5/92/b31726561b5dae176c2d2c2dc43a9c5bfba5d32f96f8b4c0a600dd492447/pydantic_core-2.33.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2b3d326aaef0c0399d9afffeb6367d5e26ddc24d351dbc9c636840ac355dc5d8", size = 2028817 },
{ url = "https://files.pythonhosted.org/packages/a3/44/3f0b95fafdaca04a483c4e685fe437c6891001bf3ce8b2fded82b9ea3aa1/pydantic_core-2.33.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0e5b2671f05ba48b94cb90ce55d8bdcaaedb8ba00cc5359f6810fc918713983d", size = 1861357 },
{ url = "https://files.pythonhosted.org/packages/30/97/e8f13b55766234caae05372826e8e4b3b96e7b248be3157f53237682e43c/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0069c9acc3f3981b9ff4cdfaf088e98d83440a4c7ea1bc07460af3d4dc22e72d", size = 1898011 },
{ url = "https://files.pythonhosted.org/packages/9b/a3/99c48cf7bafc991cc3ee66fd544c0aae8dc907b752f1dad2d79b1b5a471f/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d53b22f2032c42eaaf025f7c40c2e3b94568ae077a606f006d206a463bc69572", size = 1982730 },
{ url = "https://files.pythonhosted.org/packages/de/8e/a5b882ec4307010a840fb8b58bd9bf65d1840c92eae7534c7441709bf54b/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0405262705a123b7ce9f0b92f123334d67b70fd1f20a9372b907ce1080c7ba02", size = 2136178 },
{ url = "https://files.pythonhosted.org/packages/e4/bb/71e35fc3ed05af6834e890edb75968e2802fe98778971ab5cba20a162315/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4b25d91e288e2c4e0662b8038a28c6a07eaac3e196cfc4ff69de4ea3db992a1b", size = 2736462 },
{ url = "https://files.pythonhosted.org/packages/31/0d/c8f7593e6bc7066289bbc366f2235701dcbebcd1ff0ef8e64f6f239fb47d/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bdfe4b3789761f3bcb4b1ddf33355a71079858958e3a552f16d5af19768fef2", size = 2005652 },
{ url = "https://files.pythonhosted.org/packages/d2/7a/996d8bd75f3eda405e3dd219ff5ff0a283cd8e34add39d8ef9157e722867/pydantic_core-2.33.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:efec8db3266b76ef9607c2c4c419bdb06bf335ae433b80816089ea7585816f6a", size = 2113306 },
{ url = "https://files.pythonhosted.org/packages/ff/84/daf2a6fb2db40ffda6578a7e8c5a6e9c8affb251a05c233ae37098118788/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:031c57d67ca86902726e0fae2214ce6770bbe2f710dc33063187a68744a5ecac", size = 2073720 },
{ url = "https://files.pythonhosted.org/packages/77/fb/2258da019f4825128445ae79456a5499c032b55849dbd5bed78c95ccf163/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:f8de619080e944347f5f20de29a975c2d815d9ddd8be9b9b7268e2e3ef68605a", size = 2244915 },
{ url = "https://files.pythonhosted.org/packages/d8/7a/925ff73756031289468326e355b6fa8316960d0d65f8b5d6b3a3e7866de7/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:73662edf539e72a9440129f231ed3757faab89630d291b784ca99237fb94db2b", size = 2241884 },
{ url = "https://files.pythonhosted.org/packages/0b/b0/249ee6d2646f1cdadcb813805fe76265745c4010cf20a8eba7b0e639d9b2/pydantic_core-2.33.2-cp310-cp310-win32.whl", hash = "sha256:0a39979dcbb70998b0e505fb1556a1d550a0781463ce84ebf915ba293ccb7e22", size = 1910496 },
{ url = "https://files.pythonhosted.org/packages/66/ff/172ba8f12a42d4b552917aa65d1f2328990d3ccfc01d5b7c943ec084299f/pydantic_core-2.33.2-cp310-cp310-win_amd64.whl", hash = "sha256:b0379a2b24882fef529ec3b4987cb5d003b9cda32256024e6fe1586ac45fc640", size = 1955019 },
{ url = "https://files.pythonhosted.org/packages/3f/8d/71db63483d518cbbf290261a1fc2839d17ff89fce7089e08cad07ccfce67/pydantic_core-2.33.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7", size = 2028584 },
{ url = "https://files.pythonhosted.org/packages/24/2f/3cfa7244ae292dd850989f328722d2aef313f74ffc471184dc509e1e4e5a/pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246", size = 1855071 },
{ url = "https://files.pythonhosted.org/packages/b3/d3/4ae42d33f5e3f50dd467761304be2fa0a9417fbf09735bc2cce003480f2a/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f", size = 1897823 },
{ url = "https://files.pythonhosted.org/packages/f4/f3/aa5976e8352b7695ff808599794b1fba2a9ae2ee954a3426855935799488/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc", size = 1983792 },
{ url = "https://files.pythonhosted.org/packages/d5/7a/cda9b5a23c552037717f2b2a5257e9b2bfe45e687386df9591eff7b46d28/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de", size = 2136338 },
{ url = "https://files.pythonhosted.org/packages/2b/9f/b8f9ec8dd1417eb9da784e91e1667d58a2a4a7b7b34cf4af765ef663a7e5/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a", size = 2730998 },
{ url = "https://files.pythonhosted.org/packages/47/bc/cd720e078576bdb8255d5032c5d63ee5c0bf4b7173dd955185a1d658c456/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef", size = 2003200 },
{ url = "https://files.pythonhosted.org/packages/ca/22/3602b895ee2cd29d11a2b349372446ae9727c32e78a94b3d588a40fdf187/pydantic_core-2.33.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e", size = 2113890 },
{ url = "https://files.pythonhosted.org/packages/ff/e6/e3c5908c03cf00d629eb38393a98fccc38ee0ce8ecce32f69fc7d7b558a7/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d", size = 2073359 },
{ url = "https://files.pythonhosted.org/packages/12/e7/6a36a07c59ebefc8777d1ffdaf5ae71b06b21952582e4b07eba88a421c79/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30", size = 2245883 },
{ url = "https://files.pythonhosted.org/packages/16/3f/59b3187aaa6cc0c1e6616e8045b284de2b6a87b027cce2ffcea073adf1d2/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf", size = 2241074 },
{ url = "https://files.pythonhosted.org/packages/e0/ed/55532bb88f674d5d8f67ab121a2a13c385df382de2a1677f30ad385f7438/pydantic_core-2.33.2-cp311-cp311-win32.whl", hash = "sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51", size = 1910538 },
{ url = "https://files.pythonhosted.org/packages/fe/1b/25b7cccd4519c0b23c2dd636ad39d381abf113085ce4f7bec2b0dc755eb1/pydantic_core-2.33.2-cp311-cp311-win_amd64.whl", hash = "sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab", size = 1952909 },
{ url = "https://files.pythonhosted.org/packages/49/a9/d809358e49126438055884c4366a1f6227f0f84f635a9014e2deb9b9de54/pydantic_core-2.33.2-cp311-cp311-win_arm64.whl", hash = "sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65", size = 1897786 },
{ url = "https://files.pythonhosted.org/packages/18/8a/2b41c97f554ec8c71f2a8a5f85cb56a8b0956addfe8b0efb5b3d77e8bdc3/pydantic_core-2.33.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc", size = 2009000 },
{ url = "https://files.pythonhosted.org/packages/a1/02/6224312aacb3c8ecbaa959897af57181fb6cf3a3d7917fd44d0f2917e6f2/pydantic_core-2.33.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7", size = 1847996 },
{ url = "https://files.pythonhosted.org/packages/d6/46/6dcdf084a523dbe0a0be59d054734b86a981726f221f4562aed313dbcb49/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025", size = 1880957 },
{ url = "https://files.pythonhosted.org/packages/ec/6b/1ec2c03837ac00886ba8160ce041ce4e325b41d06a034adbef11339ae422/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011", size = 1964199 },
{ url = "https://files.pythonhosted.org/packages/2d/1d/6bf34d6adb9debd9136bd197ca72642203ce9aaaa85cfcbfcf20f9696e83/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f", size = 2120296 },
{ url = "https://files.pythonhosted.org/packages/e0/94/2bd0aaf5a591e974b32a9f7123f16637776c304471a0ab33cf263cf5591a/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88", size = 2676109 },
{ url = "https://files.pythonhosted.org/packages/f9/41/4b043778cf9c4285d59742281a769eac371b9e47e35f98ad321349cc5d61/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1", size = 2002028 },
{ url = "https://files.pythonhosted.org/packages/cb/d5/7bb781bf2748ce3d03af04d5c969fa1308880e1dca35a9bd94e1a96a922e/pydantic_core-2.33.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b", size = 2100044 },
{ url = "https://files.pythonhosted.org/packages/fe/36/def5e53e1eb0ad896785702a5bbfd25eed546cdcf4087ad285021a90ed53/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1", size = 2058881 },
{ url = "https://files.pythonhosted.org/packages/01/6c/57f8d70b2ee57fc3dc8b9610315949837fa8c11d86927b9bb044f8705419/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6", size = 2227034 },
{ url = "https://files.pythonhosted.org/packages/27/b9/9c17f0396a82b3d5cbea4c24d742083422639e7bb1d5bf600e12cb176a13/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea", size = 2234187 },
{ url = "https://files.pythonhosted.org/packages/b0/6a/adf5734ffd52bf86d865093ad70b2ce543415e0e356f6cacabbc0d9ad910/pydantic_core-2.33.2-cp312-cp312-win32.whl", hash = "sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290", size = 1892628 },
{ url = "https://files.pythonhosted.org/packages/43/e4/5479fecb3606c1368d496a825d8411e126133c41224c1e7238be58b87d7e/pydantic_core-2.33.2-cp312-cp312-win_amd64.whl", hash = "sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2", size = 1955866 },
{ url = "https://files.pythonhosted.org/packages/0d/24/8b11e8b3e2be9dd82df4b11408a67c61bb4dc4f8e11b5b0fc888b38118b5/pydantic_core-2.33.2-cp312-cp312-win_arm64.whl", hash = "sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab", size = 1888894 },
{ url = "https://files.pythonhosted.org/packages/46/8c/99040727b41f56616573a28771b1bfa08a3d3fe74d3d513f01251f79f172/pydantic_core-2.33.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f", size = 2015688 },
{ url = "https://files.pythonhosted.org/packages/3a/cc/5999d1eb705a6cefc31f0b4a90e9f7fc400539b1a1030529700cc1b51838/pydantic_core-2.33.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6", size = 1844808 },
{ url = "https://files.pythonhosted.org/packages/6f/5e/a0a7b8885c98889a18b6e376f344da1ef323d270b44edf8174d6bce4d622/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef", size = 1885580 },
{ url = "https://files.pythonhosted.org/packages/3b/2a/953581f343c7d11a304581156618c3f592435523dd9d79865903272c256a/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a", size = 1973859 },
{ url = "https://files.pythonhosted.org/packages/e6/55/f1a813904771c03a3f97f676c62cca0c0a4138654107c1b61f19c644868b/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916", size = 2120810 },
{ url = "https://files.pythonhosted.org/packages/aa/c3/053389835a996e18853ba107a63caae0b9deb4a276c6b472931ea9ae6e48/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a", size = 2676498 },
{ url = "https://files.pythonhosted.org/packages/eb/3c/f4abd740877a35abade05e437245b192f9d0ffb48bbbbd708df33d3cda37/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d", size = 2000611 },
{ url = "https://files.pythonhosted.org/packages/59/a7/63ef2fed1837d1121a894d0ce88439fe3e3b3e48c7543b2a4479eb99c2bd/pydantic_core-2.33.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56", size = 2107924 },
{ url = "https://files.pythonhosted.org/packages/04/8f/2551964ef045669801675f1cfc3b0d74147f4901c3ffa42be2ddb1f0efc4/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5", size = 2063196 },
{ url = "https://files.pythonhosted.org/packages/26/bd/d9602777e77fc6dbb0c7db9ad356e9a985825547dce5ad1d30ee04903918/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e", size = 2236389 },
{ url = "https://files.pythonhosted.org/packages/42/db/0e950daa7e2230423ab342ae918a794964b053bec24ba8af013fc7c94846/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162", size = 2239223 },
{ url = "https://files.pythonhosted.org/packages/58/4d/4f937099c545a8a17eb52cb67fe0447fd9a373b348ccfa9a87f141eeb00f/pydantic_core-2.33.2-cp313-cp313-win32.whl", hash = "sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849", size = 1900473 },
{ url = "https://files.pythonhosted.org/packages/a0/75/4a0a9bac998d78d889def5e4ef2b065acba8cae8c93696906c3a91f310ca/pydantic_core-2.33.2-cp313-cp313-win_amd64.whl", hash = "sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9", size = 1955269 },
{ url = "https://files.pythonhosted.org/packages/f9/86/1beda0576969592f1497b4ce8e7bc8cbdf614c352426271b1b10d5f0aa64/pydantic_core-2.33.2-cp313-cp313-win_arm64.whl", hash = "sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9", size = 1893921 },
{ url = "https://files.pythonhosted.org/packages/a4/7d/e09391c2eebeab681df2b74bfe6c43422fffede8dc74187b2b0bf6fd7571/pydantic_core-2.33.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac", size = 1806162 },
{ url = "https://files.pythonhosted.org/packages/f1/3d/847b6b1fed9f8ed3bb95a9ad04fbd0b212e832d4f0f50ff4d9ee5a9f15cf/pydantic_core-2.33.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5", size = 1981560 },
{ url = "https://files.pythonhosted.org/packages/6f/9a/e73262f6c6656262b5fdd723ad90f518f579b7bc8622e43a942eec53c938/pydantic_core-2.33.2-cp313-cp313t-win_amd64.whl", hash = "sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9", size = 1935777 },
{ url = "https://files.pythonhosted.org/packages/53/ea/bbe9095cdd771987d13c82d104a9c8559ae9aec1e29f139e286fd2e9256e/pydantic_core-2.33.2-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:a2b911a5b90e0374d03813674bf0a5fbbb7741570dcd4b4e85a2e48d17def29d", size = 2028677 },
{ url = "https://files.pythonhosted.org/packages/49/1d/4ac5ed228078737d457a609013e8f7edc64adc37b91d619ea965758369e5/pydantic_core-2.33.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:6fa6dfc3e4d1f734a34710f391ae822e0a8eb8559a85c6979e14e65ee6ba2954", size = 1864735 },
{ url = "https://files.pythonhosted.org/packages/23/9a/2e70d6388d7cda488ae38f57bc2f7b03ee442fbcf0d75d848304ac7e405b/pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c54c939ee22dc8e2d545da79fc5381f1c020d6d3141d3bd747eab59164dc89fb", size = 1898467 },
{ url = "https://files.pythonhosted.org/packages/ff/2e/1568934feb43370c1ffb78a77f0baaa5a8b6897513e7a91051af707ffdc4/pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:53a57d2ed685940a504248187d5685e49eb5eef0f696853647bf37c418c538f7", size = 1983041 },
{ url = "https://files.pythonhosted.org/packages/01/1a/1a1118f38ab64eac2f6269eb8c120ab915be30e387bb561e3af904b12499/pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:09fb9dd6571aacd023fe6aaca316bd01cf60ab27240d7eb39ebd66a3a15293b4", size = 2136503 },
{ url = "https://files.pythonhosted.org/packages/5c/da/44754d1d7ae0f22d6d3ce6c6b1486fc07ac2c524ed8f6eca636e2e1ee49b/pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0e6116757f7959a712db11f3e9c0a99ade00a5bbedae83cb801985aa154f071b", size = 2736079 },
{ url = "https://files.pythonhosted.org/packages/4d/98/f43cd89172220ec5aa86654967b22d862146bc4d736b1350b4c41e7c9c03/pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d55ab81c57b8ff8548c3e4947f119551253f4e3787a7bbc0b6b3ca47498a9d3", size = 2006508 },
{ url = "https://files.pythonhosted.org/packages/2b/cc/f77e8e242171d2158309f830f7d5d07e0531b756106f36bc18712dc439df/pydantic_core-2.33.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c20c462aa4434b33a2661701b861604913f912254e441ab8d78d30485736115a", size = 2113693 },
{ url = "https://files.pythonhosted.org/packages/54/7a/7be6a7bd43e0a47c147ba7fbf124fe8aaf1200bc587da925509641113b2d/pydantic_core-2.33.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:44857c3227d3fb5e753d5fe4a3420d6376fa594b07b621e220cd93703fe21782", size = 2074224 },
{ url = "https://files.pythonhosted.org/packages/2a/07/31cf8fadffbb03be1cb520850e00a8490c0927ec456e8293cafda0726184/pydantic_core-2.33.2-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:eb9b459ca4df0e5c87deb59d37377461a538852765293f9e6ee834f0435a93b9", size = 2245403 },
{ url = "https://files.pythonhosted.org/packages/b6/8d/bbaf4c6721b668d44f01861f297eb01c9b35f612f6b8e14173cb204e6240/pydantic_core-2.33.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9fcd347d2cc5c23b06de6d3b7b8275be558a0c90549495c699e379a80bf8379e", size = 2242331 },
{ url = "https://files.pythonhosted.org/packages/bb/93/3cc157026bca8f5006250e74515119fcaa6d6858aceee8f67ab6dc548c16/pydantic_core-2.33.2-cp39-cp39-win32.whl", hash = "sha256:83aa99b1285bc8f038941ddf598501a86f1536789740991d7d8756e34f1e74d9", size = 1910571 },
{ url = "https://files.pythonhosted.org/packages/5b/90/7edc3b2a0d9f0dda8806c04e511a67b0b7a41d2187e2003673a996fb4310/pydantic_core-2.33.2-cp39-cp39-win_amd64.whl", hash = "sha256:f481959862f57f29601ccced557cc2e817bce7533ab8e01a797a48b49c9692b3", size = 1956504 },
{ url = "https://files.pythonhosted.org/packages/30/68/373d55e58b7e83ce371691f6eaa7175e3a24b956c44628eb25d7da007917/pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5c4aa4e82353f65e548c476b37e64189783aa5384903bfea4f41580f255fddfa", size = 2023982 },
{ url = "https://files.pythonhosted.org/packages/a4/16/145f54ac08c96a63d8ed6442f9dec17b2773d19920b627b18d4f10a061ea/pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d946c8bf0d5c24bf4fe333af284c59a19358aa3ec18cb3dc4370080da1e8ad29", size = 1858412 },
{ url = "https://files.pythonhosted.org/packages/41/b1/c6dc6c3e2de4516c0bb2c46f6a373b91b5660312342a0cf5826e38ad82fa/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:87b31b6846e361ef83fedb187bb5b4372d0da3f7e28d85415efa92d6125d6e6d", size = 1892749 },
{ url = "https://files.pythonhosted.org/packages/12/73/8cd57e20afba760b21b742106f9dbdfa6697f1570b189c7457a1af4cd8a0/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aa9d91b338f2df0508606f7009fde642391425189bba6d8c653afd80fd6bb64e", size = 2067527 },
{ url = "https://files.pythonhosted.org/packages/e3/d5/0bb5d988cc019b3cba4a78f2d4b3854427fc47ee8ec8e9eaabf787da239c/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2058a32994f1fde4ca0480ab9d1e75a0e8c87c22b53a3ae66554f9af78f2fe8c", size = 2108225 },
{ url = "https://files.pythonhosted.org/packages/f1/c5/00c02d1571913d496aabf146106ad8239dc132485ee22efe08085084ff7c/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:0e03262ab796d986f978f79c943fc5f620381be7287148b8010b4097f79a39ec", size = 2069490 },
{ url = "https://files.pythonhosted.org/packages/22/a8/dccc38768274d3ed3a59b5d06f59ccb845778687652daa71df0cab4040d7/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:1a8695a8d00c73e50bff9dfda4d540b7dee29ff9b8053e38380426a85ef10052", size = 2237525 },
{ url = "https://files.pythonhosted.org/packages/d4/e7/4f98c0b125dda7cf7ccd14ba936218397b44f50a56dd8c16a3091df116c3/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:fa754d1850735a0b0e03bcffd9d4b4343eb417e47196e4485d9cca326073a42c", size = 2238446 },
{ url = "https://files.pythonhosted.org/packages/ce/91/2ec36480fdb0b783cd9ef6795753c1dea13882f2e68e73bce76ae8c21e6a/pydantic_core-2.33.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:a11c8d26a50bfab49002947d3d237abe4d9e4b5bdc8846a63537b6488e197808", size = 2066678 },
{ url = "https://files.pythonhosted.org/packages/7b/27/d4ae6487d73948d6f20dddcd94be4ea43e74349b56eba82e9bdee2d7494c/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8", size = 2025200 },
{ url = "https://files.pythonhosted.org/packages/f1/b8/b3cb95375f05d33801024079b9392a5ab45267a63400bf1866e7ce0f0de4/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593", size = 1859123 },
{ url = "https://files.pythonhosted.org/packages/05/bc/0d0b5adeda59a261cd30a1235a445bf55c7e46ae44aea28f7bd6ed46e091/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612", size = 1892852 },
{ url = "https://files.pythonhosted.org/packages/3e/11/d37bdebbda2e449cb3f519f6ce950927b56d62f0b84fd9cb9e372a26a3d5/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7", size = 2067484 },
{ url = "https://files.pythonhosted.org/packages/8c/55/1f95f0a05ce72ecb02a8a8a1c3be0579bbc29b1d5ab68f1378b7bebc5057/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e", size = 2108896 },
{ url = "https://files.pythonhosted.org/packages/53/89/2b2de6c81fa131f423246a9109d7b2a375e83968ad0800d6e57d0574629b/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8", size = 2069475 },
{ url = "https://files.pythonhosted.org/packages/b8/e9/1f7efbe20d0b2b10f6718944b5d8ece9152390904f29a78e68d4e7961159/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf", size = 2239013 },
{ url = "https://files.pythonhosted.org/packages/3c/b2/5309c905a93811524a49b4e031e9851a6b00ff0fb668794472ea7746b448/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb", size = 2238715 },
{ url = "https://files.pythonhosted.org/packages/32/56/8a7ca5d2cd2cda1d245d34b1c9a942920a718082ae8e54e5f3e5a58b7add/pydantic_core-2.33.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1", size = 2066757 },
{ url = "https://files.pythonhosted.org/packages/08/98/dbf3fdfabaf81cda5622154fda78ea9965ac467e3239078e0dcd6df159e7/pydantic_core-2.33.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:87acbfcf8e90ca885206e98359d7dca4bcbb35abdc0ff66672a293e1d7a19101", size = 2024034 },
{ url = "https://files.pythonhosted.org/packages/8d/99/7810aa9256e7f2ccd492590f86b79d370df1e9292f1f80b000b6a75bd2fb/pydantic_core-2.33.2-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:7f92c15cd1e97d4b12acd1cc9004fa092578acfa57b67ad5e43a197175d01a64", size = 1858578 },
{ url = "https://files.pythonhosted.org/packages/d8/60/bc06fa9027c7006cc6dd21e48dbf39076dc39d9abbaf718a1604973a9670/pydantic_core-2.33.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3f26877a748dc4251cfcfda9dfb5f13fcb034f5308388066bcfe9031b63ae7d", size = 1892858 },
{ url = "https://files.pythonhosted.org/packages/f2/40/9d03997d9518816c68b4dfccb88969756b9146031b61cd37f781c74c9b6a/pydantic_core-2.33.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dac89aea9af8cd672fa7b510e7b8c33b0bba9a43186680550ccf23020f32d535", size = 2068498 },
{ url = "https://files.pythonhosted.org/packages/d8/62/d490198d05d2d86672dc269f52579cad7261ced64c2df213d5c16e0aecb1/pydantic_core-2.33.2-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:970919794d126ba8645f3837ab6046fb4e72bbc057b3709144066204c19a455d", size = 2108428 },
{ url = "https://files.pythonhosted.org/packages/9a/ec/4cd215534fd10b8549015f12ea650a1a973da20ce46430b68fc3185573e8/pydantic_core-2.33.2-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:3eb3fe62804e8f859c49ed20a8451342de53ed764150cb14ca71357c765dc2a6", size = 2069854 },
{ url = "https://files.pythonhosted.org/packages/1a/1a/abbd63d47e1d9b0d632fee6bb15785d0889c8a6e0a6c3b5a8e28ac1ec5d2/pydantic_core-2.33.2-pp39-pypy39_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:3abcd9392a36025e3bd55f9bd38d908bd17962cc49bc6da8e7e96285336e2bca", size = 2237859 },
{ url = "https://files.pythonhosted.org/packages/80/1c/fa883643429908b1c90598fd2642af8839efd1d835b65af1f75fba4d94fe/pydantic_core-2.33.2-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:3a1c81334778f9e3af2f8aeb7a960736e5cab1dfebfb26aabca09afd2906c039", size = 2239059 },
{ url = "https://files.pythonhosted.org/packages/d4/29/3cade8a924a61f60ccfa10842f75eb12787e1440e2b8660ceffeb26685e7/pydantic_core-2.33.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:2807668ba86cb38c6817ad9bc66215ab8584d1d304030ce4f0887336f28a5e27", size = 2066661 },
]
[[package]]


@@ -5,650 +5,660 @@ packages:
- name: langchain-core
path: libs/core
repo: langchain-ai/langchain
downloads: 51178135
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 52153634
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-text-splitters
path: libs/text-splitters
repo: langchain-ai/langchain
downloads: 18371499
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 18674701
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain
path: libs/langchain
repo: langchain-ai/langchain
downloads: 68611637
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 71877707
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-community
path: libs/community
repo: langchain-ai/langchain-community
downloads: 20961009
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 21531980
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-experimental
path: libs/experimental
repo: langchain-ai/langchain-experimental
downloads: 1651817
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 1640028
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-cli
path: libs/cli
repo: langchain-ai/langchain
downloads: 55074
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 52924
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-ai21
path: libs/ai21
repo: langchain-ai/langchain-ai21
downloads: 4684
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 4360
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-anthropic
path: libs/partners/anthropic
repo: langchain-ai/langchain
js: '@langchain/anthropic'
downloads: 2205980
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 2260743
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-chroma
path: libs/partners/chroma
repo: langchain-ai/langchain
downloads: 934777
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 952160
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-exa
path: libs/partners/exa
repo: langchain-ai/langchain
provider_page: exa_search
js: '@langchain/exa'
downloads: 5949
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 6843
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-fireworks
path: libs/partners/fireworks
repo: langchain-ai/langchain
downloads: 253744
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 302517
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-groq
path: libs/partners/groq
repo: langchain-ai/langchain
js: '@langchain/groq'
downloads: 713166
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 711682
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-huggingface
path: libs/partners/huggingface
repo: langchain-ai/langchain
downloads: 565389
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 568294
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-ibm
path: libs/ibm
repo: langchain-ai/langchain-ibm
js: '@langchain/ibm'
downloads: 193195
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 217224
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-localai
path: libs/localai
repo: mkhludnev/langchain-localai
downloads: 811
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 762
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-milvus
path: libs/milvus
repo: langchain-ai/langchain-milvus
downloads: 207750
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 195215
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-mistralai
path: libs/partners/mistralai
repo: langchain-ai/langchain
js: '@langchain/mistralai'
downloads: 333887
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 317214
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-mongodb
path: libs/langchain-mongodb
repo: langchain-ai/langchain-mongodb
provider_page: mongodb_atlas
js: '@langchain/mongodb'
downloads: 229323
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 213285
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-nomic
path: libs/partners/nomic
repo: langchain-ai/langchain
js: '@langchain/nomic'
downloads: 13453
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 11886
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-openai
path: libs/partners/openai
repo: langchain-ai/langchain
js: '@langchain/openai'
downloads: 12632953
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 13665726
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-pinecone
path: libs/pinecone
repo: langchain-ai/langchain-pinecone
js: '@langchain/pinecone'
downloads: 731139
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 729476
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-prompty
path: libs/partners/prompty
repo: langchain-ai/langchain
provider_page: microsoft
downloads: 2215
downloads_updated_at: '2025-04-22T15:24:39.289813+00:00'
downloads: 2138
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-qdrant
path: libs/partners/qdrant
repo: langchain-ai/langchain
js: '@langchain/qdrant'
downloads: 159329
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-scrapegraph
path: .
repo: ScrapeGraphAI/langchain-scrapegraph
downloads: 1177
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-sema4
path: libs/sema4
repo: langchain-ai/langchain-sema4
provider_page: robocorp
downloads: 1609
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-together
path: libs/together
repo: langchain-ai/langchain-together
downloads: 82472
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-upstage
path: libs/upstage
repo: langchain-ai/langchain-upstage
downloads: 20558
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-voyageai
path: libs/partners/voyageai
repo: langchain-ai/langchain
downloads: 27698
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-aws
name_title: AWS
path: libs/aws
repo: langchain-ai/langchain-aws
js: '@langchain/aws'
downloads: 2946295
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-astradb
name_title: DataStax Astra DB
path: libs/astradb
repo: langchain-ai/langchain-datastax
downloads: 100092
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-google-genai
name_title: Google Generative AI
path: libs/genai
repo: langchain-ai/langchain-google
provider_page: google
js: '@langchain/google-genai'
downloads: 1951902
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-google-vertexai
path: libs/vertexai
repo: langchain-ai/langchain-google
provider_page: google
js: '@langchain/google-vertexai'
downloads: 15219755
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-google-community
path: libs/community
repo: langchain-ai/langchain-google
provider_page: google
downloads: 4529167
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-weaviate
path: libs/weaviate
repo: langchain-ai/langchain-weaviate
js: '@langchain/weaviate'
downloads: 41265
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-cohere
path: libs/cohere
repo: langchain-ai/langchain-cohere
js: '@langchain/cohere'
downloads: 859764
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-elasticsearch
path: libs/elasticsearch
repo: langchain-ai/langchain-elastic
downloads: 169912
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-nvidia-ai-endpoints
path: libs/ai-endpoints
repo: langchain-ai/langchain-nvidia
provider_page: nvidia
downloads: 171961
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-postgres
path: .
repo: langchain-ai/langchain-postgres
provider_page: pgvector
downloads: 720037
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-redis
path: libs/redis
repo: langchain-ai/langchain-redis
js: '@langchain/redis'
downloads: 41653
downloads_updated_at: '2025-05-08T20:26:05.985970+00:00'
- name: langchain-unstructured
path: libs/unstructured
repo: langchain-ai/langchain-unstructured
downloads: 139927
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-azure-ai
path: libs/azure-ai
repo: langchain-ai/langchain-azure
provider_page: azure_ai
js: '@langchain/openai'
downloads: 27904
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-azure-dynamic-sessions
path: libs/azure-dynamic-sessions
repo: langchain-ai/langchain-azure
provider_page: microsoft
js: '@langchain/azure-dynamic-sessions'
downloads: 8945
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-sqlserver
path: libs/sqlserver
repo: langchain-ai/langchain-azure
provider_page: microsoft
downloads: 2370
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-cerebras
path: libs/cerebras
repo: langchain-ai/langchain-cerebras
downloads: 41520
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-snowflake
path: libs/snowflake
repo: langchain-ai/langchain-snowflake
downloads: 2037
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: databricks-langchain
name_title: Databricks
path: integrations/langchain
repo: databricks/databricks-ai-bridge
provider_page: databricks
downloads: 126587
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-couchbase
path: .
repo: Couchbase-Ecosystem/langchain-couchbase
downloads: 1350
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-ollama
path: libs/partners/ollama
repo: langchain-ai/langchain
js: '@langchain/ollama'
downloads: 948591
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-box
path: libs/box
repo: box-community/langchain-box
downloads: 654
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-tests
path: libs/standard-tests
repo: langchain-ai/langchain
downloads: 262480
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-neo4j
path: libs/neo4j
repo: langchain-ai/langchain-neo4j
downloads: 52537
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-linkup
path: .
repo: LinkupPlatform/langchain-linkup
downloads: 616
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-yt-dlp
path: .
repo: aqib0770/langchain-yt-dlp
downloads: 2276
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-oceanbase
path: .
repo: oceanbase/langchain-oceanbase
downloads: 68
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-predictionguard
path: .
repo: predictionguard/langchain-predictionguard
downloads: 4351
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-cratedb
path: .
repo: crate/langchain-cratedb
downloads: 198
downloads_updated_at: '2025-05-08T20:26:13.914570+00:00'
- name: langchain-modelscope
path: .
repo: modelscope/langchain-modelscope
downloads: 110
downloads_updated_at: '2025-05-08T20:26:23.196172+00:00'
- name: langchain-falkordb
path: .
repo: kingtroga/langchain-falkordb
downloads: 99
downloads_updated_at: '2025-05-08T20:26:23.196172+00:00'
- name: langchain-dappier
path: .
repo: DappierAI/langchain-dappier
downloads: 300
downloads_updated_at: '2025-05-08T20:26:23.196172+00:00'
- name: langchain-pull-md
path: .
repo: chigwell/langchain-pull-md
downloads: 115
downloads_updated_at: '2025-05-08T20:26:23.196172+00:00'
- name: langchain-kuzu
path: .
repo: kuzudb/langchain-kuzu
downloads: 777
downloads_updated_at: '2025-05-08T20:26:23.196172+00:00'
- name: langchain-docling
path: .
repo: DS4SD/docling-langchain
downloads: 19927
downloads_updated_at: '2025-05-08T20:26:23.196172+00:00'
- name: langchain-lindorm-integration
path: .
repo: AlwaysBluer/langchain-lindorm-integration
provider_page: lindorm
downloads: 53
downloads_updated_at: '2025-05-08T20:26:23.196172+00:00'
- name: langchain-hyperbrowser
path: .
repo: hyperbrowserai/langchain-hyperbrowser
downloads: 448
downloads_updated_at: '2025-05-08T20:26:23.196172+00:00'
- name: langchain-fmp-data
path: .
repo: MehdiZare/langchain-fmp-data
downloads: 122
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: tilores-langchain
name_title: Tilores
path: .
repo: tilotech/tilores-langchain
provider_page: tilores
downloads: 133
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-pipeshift
path: .
repo: pipeshift-org/langchain-pipeshift
downloads: 93
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-payman-tool
path: .
repo: paymanai/langchain-payman-tool
downloads: 172
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-sambanova
path: .
repo: sambanova/langchain-sambanova
downloads: 52905
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-deepseek
path: libs/partners/deepseek
repo: langchain-ai/langchain
provider_page: deepseek
js: '@langchain/deepseek'
downloads: 200311
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-jenkins
path: .
repo: Amitgb14/langchain_jenkins
downloads: 172
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-goodfire
path: .
repo: keenanpepper/langchain-goodfire
downloads: 256
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-nimble
path: .
repo: Nimbleway/langchain-nimble
downloads: 191
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-apify
path: .
repo: apify/langchain-apify
downloads: 1062
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langfair
name_title: LangFair
path: .
repo: cvs-health/langfair
downloads: 1663
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-abso
path: .
repo: lunary-ai/langchain-abso
downloads: 203
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-graph-retriever
name_title: Graph RAG
path: packages/langchain-graph-retriever
repo: datastax/graph-rag
provider_page: graph_rag
downloads: 52322
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-xai
path: libs/partners/xai
repo: langchain-ai/langchain
downloads: 48297
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-salesforce
path: .
repo: colesmcintosh/langchain-salesforce
downloads: 982
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-discord-shikenso
path: .
repo: Shikenso-Analytics/langchain-discord
downloads: 109
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-vdms
name_title: VDMS
path: .
repo: IntelLabs/langchain-vdms
downloads: 11916
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-deeplake
path: .
repo: activeloopai/langchain-deeplake
downloads: 63
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-cognee
path: .
repo: topoteretes/langchain-cognee
downloads: 90
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-prolog
path: .
repo: apisani1/langchain-prolog
downloads: 153
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-permit
path: .
repo: permitio/langchain-permit
downloads: 176
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-pymupdf4llm
path: .
repo: lakinduboteju/langchain-pymupdf4llm
downloads: 6332
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-writer
path: .
repo: writer/langchain-writer
downloads: 745
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-taiga
name_title: Taiga
path: .
repo: Shikenso-Analytics/langchain-taiga
downloads: 444
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-tableau
name_title: Tableau
path: .
repo: Tab-SE/tableau_langchain
downloads: 543
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: ads4gpts-langchain
name_title: ADS4GPTs
path: libs/python-sdk/ads4gpts-langchain
repo: ADS4GPTs/ads4gpts
provider_page: ads4gpts
downloads: 521
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-contextual
name_title: Contextual AI
path: langchain-contextual
repo: ContextualAI/langchain-contextual
downloads: 502
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-valthera
name_title: Valthera
path: .
repo: valthera/langchain-valthera
downloads: 173
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-opengradient
path: .
repo: OpenGradient/og-langchain
downloads: 135
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: goat-sdk-adapter-langchain
name_title: GOAT SDK
path: python/src/adapters/langchain
repo: goat-sdk/goat
provider_page: goat
downloads: 395
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-netmind
path: .
repo: protagolabs/langchain-netmind
downloads: 47
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-agentql
path: langchain
repo: tinyfish-io/agentql-integrations
downloads: 337
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-xinference
path: .
repo: TheSongg/langchain-xinference
downloads: 103
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: powerscale-rag-connector
name_title: PowerScale RAG Connector
path: .
repo: dell/powerscale-rag-connector
provider_page: dell
downloads: 74
downloads_updated_at: '2025-05-08T20:26:32.333380+00:00'
- name: langchain-tavily
path: .
repo: tavily-ai/langchain-tavily
include_in_api_ref: true
js: '@langchain/tavily'
downloads: 28335
downloads_updated_at: '2025-05-08T20:26:54.465130+00:00'
- name: langchain-zotero-retriever
name_title: Zotero
path: .
repo: TimBMK/langchain-zotero-retriever
provider_page: zotero
downloads: 60
downloads_updated_at: '2025-05-08T20:26:54.465130+00:00'
- name: langchain-naver
name_title: Naver
path: .
repo: NaverCloudPlatform/langchain-naver
provider_page: naver
downloads: 682
downloads_updated_at: '2025-05-08T20:26:54.465130+00:00'
- name: langchain-naver-community
name_title: Naver (community-maintained)
path: .
repo: e7217/langchain-naver-community
provider_page: naver
downloads: 147
downloads_updated_at: '2025-05-08T20:26:54.465130+00:00'
- name: langchain-memgraph
path: .
repo: memgraph/langchain-memgraph
downloads: 929
downloads_updated_at: '2025-05-08T20:26:54.465130+00:00'
- name: langchain-vectara
path: libs/vectara
repo: vectara/langchain-vectara
downloads: 265
downloads_updated_at: '2025-05-08T20:26:54.465130+00:00'
- name: langchain-oxylabs
path: .
repo: oxylabs/langchain-oxylabs
downloads: 60
downloads_updated_at: '2025-05-08T20:26:54.465130+00:00'
- name: langchain-perplexity
path: libs/partners/perplexity
repo: langchain-ai/langchain
downloads: 7587
downloads_updated_at: '2025-05-08T20:26:54.465130+00:00'
- name: langchain-runpod
name_title: RunPod
path: .
repo: runpod/langchain-runpod
provider_page: runpod
downloads: 272
downloads_updated_at: '2025-05-08T20:26:54.465130+00:00'
- name: langchain-mariadb
path: .
repo: mariadb-corporation/langchain-mariadb
downloads: 497
downloads_updated_at: '2025-05-08T20:27:05.631733+00:00'
- name: langchain-qwq
path: .
repo: yigit353/langchain-qwq
provider_page: alibaba_cloud
downloads: 794
downloads_updated_at: '2025-05-08T20:27:18.592135+00:00'
- name: langchain-litellm
path: .
repo: akshay-dongare/langchain-litellm
downloads: 4379
downloads_updated_at: '2025-05-08T20:27:18.592135+00:00'
- name: langchain-cloudflare
path: libs/langchain-cloudflare
repo: cloudflare/langchain-cloudflare
downloads: 943
downloads_updated_at: '2025-05-08T20:27:18.592135+00:00'
- name: langchain-ydb
path: .
repo: ydb-platform/langchain-ydb
downloads: 181
downloads_updated_at: '2025-05-08T20:27:18.592135+00:00'
- name: langchain-singlestore
name_title: SingleStore
path: .
repo: singlestore-labs/langchain-singlestore
downloads: 131
downloads_updated_at: '2025-05-08T20:27:18.592135+00:00'
- name: langchain-galaxia-retriever
path: .
repo: rrozanski-smabbler/galaxia-langchain
provider_page: galaxia
downloads: 374
downloads_updated_at: '2025-05-08T20:27:18.592135+00:00'
- name: langchain-valyu
path: .
repo: valyu-network/langchain-valyu
downloads: 141
downloads_updated_at: '2025-05-08T20:27:18.592135+00:00'
- name: langchain-hana
name_title: SAP HANA Cloud
path: .
repo: SAP/langchain-integration-for-sap-hana-cloud
provider_page: sap
downloads: 350
downloads_updated_at: '2025-05-08T20:27:18.592135+00:00'
- name: langchain-gel
path: .
repo: geldata/langchain-gel
provider_page: gel
- name: langchain-aerospike
path: .
repo: aerospike/langchain-aerospike
- name: langchain-brightdata
repo: luminati-io/langchain-brightdata
path: .


@@ -866,29 +866,46 @@ class ChatAnthropic(BaseChatModel):
See LangChain `docs <https://python.langchain.com/docs/integrations/chat/anthropic/>`_
for more detail.
Web search:
.. code-block:: python
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-5-sonnet-latest")
tool = {"type": "web_search_20250305", "name": "web_search", "max_uses": 3}
llm_with_tools = llm.bind_tools([tool])
response = llm_with_tools.invoke(
    "How do I update a web app to TypeScript 5.5?"
)
Text editor:
.. code-block:: python
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")
tool = {"type": "text_editor_20250124", "name": "str_replace_editor"}
llm_with_tools = llm.bind_tools([tool])
response = llm_with_tools.invoke(
"There's a syntax error in my primes.py file. Can you help me fix it?"
)
print(response.text())
response.tool_calls
.. code-block:: none
I'd be happy to help you fix the syntax error in your primes.py file. First, let's look at the current content of the file to identify the error.
[{'name': 'str_replace_editor',
'args': {'command': 'view', 'path': '/repo/primes.py'},
'id': 'toolu_01VdNgt1YV7kGfj9LFLm6HyQ',
'type': 'tool_call'}]
Response metadata
.. code-block:: python
@@ -1744,6 +1761,12 @@ def _make_message_chunk_from_anthropic_event(
# See https://github.com/anthropics/anthropic-sdk-python/blob/main/src/anthropic/lib/streaming/_messages.py # noqa: E501
if event.type == "message_start" and stream_usage:
usage_metadata = _create_usage_metadata(event.message.usage)
# We pick up a cumulative count of output_tokens at the end of the stream,
# so here we zero out to avoid double counting.
usage_metadata["total_tokens"] = (
usage_metadata["total_tokens"] - usage_metadata["output_tokens"]
)
usage_metadata["output_tokens"] = 0
if hasattr(event.message, "model"):
response_metadata = {"model_name": event.message.model}
else:
@@ -1817,7 +1840,11 @@ def _make_message_chunk_from_anthropic_event(
tool_call_chunks=[tool_call_chunk], # type: ignore
)
elif event.type == "message_delta" and stream_usage:
usage_metadata = UsageMetadata(
input_tokens=0,
output_tokens=event.usage.output_tokens,
total_tokens=event.usage.output_tokens,
)
message_chunk = AIMessageChunk(
content="",
usage_metadata=usage_metadata,
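The accounting behind the two hunks above can be sketched in isolation: Anthropic streams report input tokens at `message_start` and a cumulative output-token count at the final `message_delta`, so the start chunk zeroes out its output tokens to avoid counting them twice. This is a hypothetical minimal reimplementation for illustration, not the real SDK types or the actual `langchain-anthropic` functions.

```python
# Sketch of the double-counting fix: output tokens reported at message_start
# are dropped, because the final message_delta reports them cumulatively.

def start_chunk_usage(input_tokens: int, output_tokens: int) -> dict:
    """Usage attached to the message_start chunk, with output zeroed."""
    total = input_tokens + output_tokens
    return {
        "input_tokens": input_tokens,
        "output_tokens": 0,
        "total_tokens": total - output_tokens,
    }

def delta_chunk_usage(output_tokens: int) -> dict:
    """Usage attached to the final message_delta chunk: output only."""
    return {
        "input_tokens": 0,
        "output_tokens": output_tokens,
        "total_tokens": output_tokens,
    }

def add_usage(a: dict, b: dict) -> dict:
    """Sum two usage dicts key-wise, as chunk aggregation would."""
    return {k: a[k] + b[k] for k in a}

# A stream reporting 10 input tokens up front and 25 cumulative output
# tokens at the end sums to exactly 10 + 25 with no double count.
total = add_usage(start_chunk_usage(10, 3), delta_chunk_usage(25))
```

Summing the chunks yields input 10, output 25, total 35, which is what the zeroing in the first hunk guarantees.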


@@ -46,7 +46,7 @@ def test_stream() -> None:
if token.usage_metadata is not None:
if token.usage_metadata.get("input_tokens"):
chunks_with_input_token_counts += 1
if token.usage_metadata.get("output_tokens"):
chunks_with_output_token_counts += 1
chunks_with_model_name += int("model_name" in token.response_metadata)
if chunks_with_input_token_counts != 1 or chunks_with_output_token_counts != 1:
@@ -85,7 +85,7 @@ async def test_astream() -> None:
if token.usage_metadata is not None:
if token.usage_metadata.get("input_tokens"):
chunks_with_input_token_counts += 1
if token.usage_metadata.get("output_tokens"):
chunks_with_output_token_counts += 1
if chunks_with_input_token_counts != 1 or chunks_with_output_token_counts != 1:
raise AssertionError(
@@ -134,6 +134,9 @@ async def test_stream_usage() -> None:
async for token in model.astream("hi"):
assert isinstance(token, AIMessageChunk)
assert token.usage_metadata is None
async def test_stream_usage_override() -> None:
# check we override with kwarg
model = ChatAnthropic(model_name=MODEL_NAME) # type: ignore[call-arg]
assert model.stream_usage


@@ -251,8 +251,6 @@ def _convert_message_to_dict(message: BaseMessage) -> dict:
message_dict["role"] = "user"
elif isinstance(message, AIMessage):
message_dict["role"] = "assistant"
if message.tool_calls or message.invalid_tool_calls:
message_dict["tool_calls"] = [
_lc_tool_call_to_openai_tool_call(tc) for tc in message.tool_calls
@@ -267,6 +265,10 @@ def _convert_message_to_dict(message: BaseMessage) -> dict:
{k: v for k, v in tool_call.items() if k in tool_call_supported_props}
for tool_call in message_dict["tool_calls"]
]
elif "function_call" in message.additional_kwargs:
# OpenAI raises 400 if both function_call and tool_calls are present in the
# same message.
message_dict["function_call"] = message.additional_kwargs["function_call"]
else:
pass
# If tool calls present, content null value should be None not empty string.


@@ -447,7 +447,12 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
# please refer to
# https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb
def _get_len_safe_embeddings(
self,
texts: list[str],
*,
engine: str,
chunk_size: Optional[int] = None,
**kwargs: Any,
) -> list[list[float]]:
"""
Generate length-safe embeddings for a list of texts.
@@ -465,11 +470,12 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
List[List[float]]: A list of embeddings for each input text.
"""
_chunk_size = chunk_size or self.chunk_size
client_kwargs = {**self._invocation_params, **kwargs}
_iter, tokens, indices = self._tokenize(texts, _chunk_size)
batched_embeddings: list[list[float]] = []
for i in _iter:
response = self.client.create(
input=tokens[i : i + _chunk_size], **client_kwargs
)
if not isinstance(response, dict):
response = response.model_dump()
@@ -483,9 +489,7 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
def empty_embedding() -> list[float]:
nonlocal _cached_empty_embedding
if _cached_empty_embedding is None:
average_embedded = self.client.create(input="", **client_kwargs)
if not isinstance(average_embedded, dict):
average_embedded = average_embedded.model_dump()
_cached_empty_embedding = average_embedded["data"][0]["embedding"]
@@ -496,7 +500,12 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
# please refer to
# https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb
async def _aget_len_safe_embeddings(
self,
texts: list[str],
*,
engine: str,
chunk_size: Optional[int] = None,
**kwargs: Any,
) -> list[list[float]]:
"""
Asynchronously generate length-safe embeddings for a list of texts.
@@ -515,11 +524,12 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
"""
_chunk_size = chunk_size or self.chunk_size
client_kwargs = {**self._invocation_params, **kwargs}
_iter, tokens, indices = self._tokenize(texts, _chunk_size)
batched_embeddings: list[list[float]] = []
for i in range(0, len(tokens), _chunk_size):
response = await self.async_client.create(
input=tokens[i : i + _chunk_size], **client_kwargs
)
if not isinstance(response, dict):
@@ -535,7 +545,7 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
nonlocal _cached_empty_embedding
if _cached_empty_embedding is None:
average_embedded = await self.async_client.create(
input="", **client_kwargs
)
if not isinstance(average_embedded, dict):
average_embedded = average_embedded.model_dump()
@@ -545,7 +555,7 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
return [e if e is not None else await empty_embedding() for e in embeddings]
def embed_documents(
self, texts: list[str], chunk_size: Optional[int] = None, **kwargs: Any
) -> list[list[float]]:
"""Call out to OpenAI's embedding endpoint for embedding search docs.
@@ -553,16 +563,18 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
texts: The list of texts to embed.
chunk_size: The chunk size of embeddings. If None, will use the chunk size
specified by the class.
kwargs: Additional keyword arguments to pass to the embedding API.
Returns:
List of embeddings, one for each text.
"""
chunk_size_ = chunk_size or self.chunk_size
client_kwargs = {**self._invocation_params, **kwargs}
if not self.check_embedding_ctx_length:
embeddings: list[list[float]] = []
for i in range(0, len(texts), chunk_size_):
response = self.client.create(
input=texts[i : i + chunk_size_], **client_kwargs
)
if not isinstance(response, dict):
response = response.model_dump()
@@ -573,11 +585,11 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
# than the maximum context and use length-safe embedding function.
engine = cast(str, self.deployment)
return self._get_len_safe_embeddings(
- texts, engine=engine, chunk_size=chunk_size
+ texts, engine=engine, chunk_size=chunk_size, **kwargs
)
async def aembed_documents(
- self, texts: list[str], chunk_size: int | None = None
+ self, texts: list[str], chunk_size: Optional[int] = None, **kwargs: Any
) -> list[list[float]]:
"""Call out to OpenAI's embedding endpoint async for embedding search docs.
@@ -585,16 +597,18 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
texts: The list of texts to embed.
chunk_size: The chunk size of embeddings. If None, will use the chunk size
specified by the class.
+ kwargs: Additional keyword arguments to pass to the embedding API.
Returns:
List of embeddings, one for each text.
"""
chunk_size_ = chunk_size or self.chunk_size
+ client_kwargs = {**self._invocation_params, **kwargs}
if not self.check_embedding_ctx_length:
embeddings: list[list[float]] = []
for i in range(0, len(texts), chunk_size_):
response = await self.async_client.create(
- input=texts[i : i + chunk_size_], **self._invocation_params
+ input=texts[i : i + chunk_size_], **client_kwargs
)
if not isinstance(response, dict):
response = response.model_dump()
@@ -605,28 +619,30 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
# than the maximum context and use length-safe embedding function.
engine = cast(str, self.deployment)
return await self._aget_len_safe_embeddings(
- texts, engine=engine, chunk_size=chunk_size
+ texts, engine=engine, chunk_size=chunk_size, **kwargs
)
- def embed_query(self, text: str) -> list[float]:
+ def embed_query(self, text: str, **kwargs: Any) -> list[float]:
"""Call out to OpenAI's embedding endpoint for embedding query text.
Args:
text: The text to embed.
+ kwargs: Additional keyword arguments to pass to the embedding API.
Returns:
Embedding for the text.
"""
- return self.embed_documents([text])[0]
+ return self.embed_documents([text], **kwargs)[0]
- async def aembed_query(self, text: str) -> list[float]:
+ async def aembed_query(self, text: str, **kwargs: Any) -> list[float]:
"""Call out to OpenAI's embedding endpoint async for embedding query text.
Args:
text: The text to embed.
+ kwargs: Additional keyword arguments to pass to the embedding API.
Returns:
Embedding for the text.
"""
- embeddings = await self.aembed_documents([text])
+ embeddings = await self.aembed_documents([text], **kwargs)
return embeddings[0]

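The embeddings diff above threads runtime `**kwargs` through each embed method by spreading them *after* `self._invocation_params`, so a per-call option such as `dimensions` overrides the constructor default. A minimal sketch of that merge pattern, using a hypothetical helper name rather than the actual `langchain_openai` internals:

```python
# Sketch of the kwarg-merge pattern introduced above (assumed helper name;
# not the real langchain_openai code).
def build_client_kwargs(invocation_params: dict, runtime_kwargs: dict) -> dict:
    # Runtime kwargs are spread last, so a per-call dimensions=3 wins over
    # a constructor-level dimensions=4.
    return {**invocation_params, **runtime_kwargs}


defaults = {"model": "text-embedding-3-small", "dimensions": 4}
merged = build_client_kwargs(defaults, {"dimensions": 3})
print(merged)  # {'model': 'text-embedding-3-small', 'dimensions': 3}
```

This is the same precedence the async test below verifies by constructing with `dimensions=4` and calling with `dimensions=3`.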

@@ -7,12 +7,12 @@ authors = []
license = { text = "MIT" }
requires-python = ">=3.9"
dependencies = [
- "langchain-core<1.0.0,>=0.3.58",
+ "langchain-core<1.0.0,>=0.3.59",
"openai<2.0.0,>=1.68.2",
"tiktoken<1,>=0.7",
]
name = "langchain-openai"
- version = "0.3.16"
+ version = "0.3.17"
description = "An integration package connecting OpenAI and LangChain"
readme = "README.md"


@@ -614,6 +614,50 @@ def test_openai_invoke_name(mock_client: MagicMock) -> None:
assert res.name == "Erick"
def test_function_calls_with_tool_calls(mock_client: MagicMock) -> None:
# Test that we ignore function calls if tool_calls are present
llm = ChatOpenAI(model="gpt-4.1-mini")
tool_call_message = AIMessage(
content="",
additional_kwargs={
"function_call": {
"name": "get_weather",
"arguments": '{"location": "Boston"}',
}
},
tool_calls=[
{
"name": "get_weather",
"args": {"location": "Boston"},
"id": "abc123",
"type": "tool_call",
}
],
)
messages = [
HumanMessage("What's the weather in Boston?"),
tool_call_message,
ToolMessage(content="It's sunny.", name="get_weather", tool_call_id="abc123"),
]
with patch.object(llm, "client", mock_client):
_ = llm.invoke(messages)
_, call_kwargs = mock_client.create.call_args
call_messages = call_kwargs["messages"]
tool_call_message_payload = call_messages[1]
assert "tool_calls" in tool_call_message_payload
assert "function_call" not in tool_call_message_payload
# Test we don't ignore function calls if tool_calls are not present
cast(AIMessage, messages[1]).tool_calls = []
with patch.object(llm, "client", mock_client):
_ = llm.invoke(messages)
_, call_kwargs = mock_client.create.call_args
call_messages = call_kwargs["messages"]
tool_call_message_payload = call_messages[1]
assert "function_call" in tool_call_message_payload
assert "tool_calls" not in tool_call_message_payload
def test_custom_token_counting() -> None:
def token_encoder(text: str) -> list[int]:
return [1, 2, 3]

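The test above asserts a precedence rule: when an assistant message carries both a legacy `function_call` (in `additional_kwargs`) and `tool_calls`, only `tool_calls` survives in the request payload; `function_call` is kept only when `tool_calls` is absent. A hedged standalone restatement of that rule (the real logic lives in `langchain_openai`'s message serialization, not in this helper):

```python
# Hypothetical sketch of the precedence rule the test above exercises.
def prune_call_fields(payload: dict) -> dict:
    out = dict(payload)
    # tool_calls supersede the legacy function_call field when both exist.
    if out.get("tool_calls") and "function_call" in out:
        del out["function_call"]
    return out


msg = {
    "role": "assistant",
    "function_call": {"name": "get_weather", "arguments": '{"location": "Boston"}'},
    "tool_calls": [{"id": "abc123", "type": "function"}],
}
print(prune_call_fields(msg))  # function_call dropped, tool_calls kept
```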

@@ -57,3 +57,42 @@ def test_embed_documents_with_custom_chunk_size_no_check_ctx_length() -> None:
mock_create.assert_any_call(input=texts[3:4], **embeddings._invocation_params)
assert result == [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
def test_embed_with_kwargs() -> None:
embeddings = OpenAIEmbeddings(
model="text-embedding-3-small", check_embedding_ctx_length=False
)
texts = ["text1", "text2"]
with patch.object(embeddings.client, "create") as mock_create:
mock_create.side_effect = [
{"data": [{"embedding": [0.1, 0.2, 0.3]}, {"embedding": [0.4, 0.5, 0.6]}]}
]
result = embeddings.embed_documents(texts, dimensions=3)
mock_create.assert_any_call(
input=texts, dimensions=3, **embeddings._invocation_params
)
assert result == [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
async def test_embed_with_kwargs_async() -> None:
embeddings = OpenAIEmbeddings(
model="text-embedding-3-small",
check_embedding_ctx_length=False,
dimensions=4, # also check that runtime kwargs take precedence
)
texts = ["text1", "text2"]
with patch.object(embeddings.async_client, "create") as mock_create:
mock_create.side_effect = [
{"data": [{"embedding": [0.1, 0.2, 0.3]}, {"embedding": [0.4, 0.5, 0.6]}]}
]
result = await embeddings.aembed_documents(texts, dimensions=3)
client_kwargs = embeddings._invocation_params.copy()
assert client_kwargs["dimensions"] == 4
client_kwargs["dimensions"] = 3
mock_create.assert_any_call(input=texts, **client_kwargs)
assert result == [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]


@@ -462,7 +462,7 @@ wheels = [
[[package]]
name = "langchain-core"
- version = "0.3.58"
+ version = "0.3.59"
source = { editable = "../../core" }
dependencies = [
{ name = "jsonpatch" },
@@ -477,10 +477,9 @@ dependencies = [
[package.metadata]
requires-dist = [
{ name = "jsonpatch", specifier = ">=1.33,<2.0" },
- { name = "langsmith", specifier = ">=0.1.125,<0.4" },
+ { name = "langsmith", specifier = ">=0.1.126,<0.4" },
{ name = "packaging", specifier = ">=23.2,<25" },
- { name = "pydantic", marker = "python_full_version < '3.12.4'", specifier = ">=2.5.2,<3.0.0" },
- { name = "pydantic", marker = "python_full_version >= '3.12.4'", specifier = ">=2.7.4,<3.0.0" },
+ { name = "pydantic", specifier = ">=2.7.4" },
{ name = "pyyaml", specifier = ">=5.3" },
{ name = "tenacity", specifier = ">=8.1.0,!=8.4.0,<10.0.0" },
{ name = "typing-extensions", specifier = ">=4.7" },
@@ -521,7 +520,7 @@ typing = [
[[package]]
name = "langchain-openai"
- version = "0.3.16"
+ version = "0.3.17"
source = { editable = "." }
dependencies = [
{ name = "langchain-core" },


@@ -18,14 +18,30 @@ class CharacterTextSplitter(TextSplitter):
self._is_separator_regex = is_separator_regex
def split_text(self, text: str) -> List[str]:
- """Split incoming text and return chunks."""
- # First we naively split the large input into a bunch of smaller ones.
- separator = (
+ """Split into chunks without re-inserting lookaround separators."""
+ # 1. Determine split pattern: raw regex or escaped literal
+ sep_pattern = (
self._separator if self._is_separator_regex else re.escape(self._separator)
)
- splits = _split_text_with_regex(text, separator, self._keep_separator)
- _separator = "" if self._keep_separator else self._separator
- return self._merge_splits(splits, _separator)
+ # 2. Initial split (keep separator if requested)
+ splits = _split_text_with_regex(text, sep_pattern, self._keep_separator)
+ # 3. Detect zero-width lookaround so we never re-insert it
+ lookaround_prefixes = ("(?=", "(?<!", "(?<=", "(?!")
+ is_lookaround = self._is_separator_regex and any(
+ self._separator.startswith(p) for p in lookaround_prefixes
+ )
+ # 4. Decide merge separator:
+ #    - if keep_separator or lookaround → don't re-insert
+ #    - else → re-insert literal separator
+ merge_sep = ""
+ if not (self._keep_separator or is_lookaround):
+ merge_sep = self._separator
+ # 5. Merge adjacent splits and return
+ return self._merge_splits(splits, merge_sep)
def _split_text_with_regex(

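Steps 3-4 in the splitter diff hinge on lookarounds being zero-width: `re.split` on a lookahead consumes no characters, so each marker stays attached to the split that follows it, and re-inserting the separator string during merge would inject the raw pattern text. A minimal demonstration (sample text and pattern are illustrative):

```python
import re

# A lookahead match consumes no characters, so each split keeps its
# "SCE###" marker and joining with "" reconstructs the original text.
text = "SCE191 First chunk. SCE103 Second chunk."
pattern = r"(?=SCE\d{3})"

splits = [s for s in re.split(pattern, text) if s]  # drop the leading ""
print(splits)                    # ['SCE191 First chunk. ', 'SCE103 Second chunk.']
print("".join(splits) == text)   # True: nothing was removed, so nothing to re-insert
```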

@@ -3373,3 +3373,51 @@ def test_html_splitter_with_media_preservation() -> None:
]
assert documents == expected
def test_character_text_splitter_discard_regex_separator_on_merge() -> None:
"""Test that regex lookahead separator is not re-inserted when merging."""
text = "SCE191 First chunk. SCE103 Second chunk."
splitter = CharacterTextSplitter(
separator=r"(?=SCE\d{3})",
is_separator_regex=True,
chunk_size=200,
chunk_overlap=0,
keep_separator=False,
)
output = splitter.split_text(text)
assert output == ["SCE191 First chunk. SCE103 Second chunk."]
@pytest.mark.parametrize(
"separator,is_regex,text,chunk_size,expected",
[
# 1) regex lookaround & split happens
# "abcmiddef" split by "(?<=mid)" → ["abcmid","def"], chunk_size=5 keeps both
(r"(?<=mid)", True, "abcmiddef", 5, ["abcmid", "def"]),
# 2) regex lookaround & no split
# chunk_size=100 merges back into ["abcmiddef"]
(r"(?<=mid)", True, "abcmiddef", 100, ["abcmiddef"]),
# 3) literal separator & split happens
# split on "mid" → ["abc","def"], chunk_size=3 keeps both
("mid", False, "abcmiddef", 3, ["abc", "def"]),
# 4) literal separator & no split
# chunk_size=100 merges back into ["abcmiddef"]
("mid", False, "abcmiddef", 100, ["abcmiddef"]),
],
)
def test_character_text_splitter_chunk_size_effect(
separator: str,
is_regex: bool,
text: str,
chunk_size: int,
expected: List[str],
) -> None:
splitter = CharacterTextSplitter(
separator=separator,
is_separator_regex=is_regex,
chunk_size=chunk_size,
chunk_overlap=0,
keep_separator=False,
)
assert splitter.split_text(text) == expected
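The merge-separator decision (step 4 of the `split_text` diff) can be read as a pure function of the constructor arguments. A standalone restatement under assumed names, mirroring the diff's logic rather than reproducing the library code:

```python
# Assumed-name restatement of the merge-separator decision: zero-width
# lookarounds and kept separators are never re-inserted between chunks.
LOOKAROUND_PREFIXES = ("(?=", "(?<!", "(?<=", "(?!")

def choose_merge_separator(separator: str, is_regex: bool, keep_separator: bool) -> str:
    is_lookaround = is_regex and separator.startswith(LOOKAROUND_PREFIXES)
    return "" if (keep_separator or is_lookaround) else separator


print(choose_merge_separator(r"(?=SCE\d{3})", True, False))  # "" (zero-width)
print(choose_merge_separator("mid", False, False))           # "mid" (re-insert literal)
```

This matches the parametrized cases above: the lookaround cases merge with `""`, while the literal `"mid"` cases re-insert the separator only because `keep_separator=False` removed it during the split.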