Mirror of https://github.com/hwchase17/langchain.git, synced 2026-02-07 01:30:24 +00:00

Compare commits (27 commits): langchain ... cc/bench
| Author | SHA1 | Date |
|---|---|---|
| | d3b05c1753 | |
| | fb4996ccda | |
| | 45a067509f | |
| | 23c3fa65d4 | |
| | 13d67cf37e | |
| | 7f989d3c3b | |
| | b7968c2b7d | |
| | 2f0c6421a1 | |
| | cfe13f673a | |
| | 5599c59d4a | |
| | 11d68a0b9e | |
| | 566774a893 | |
| | 255a6d668a | |
| | cbf4c0e565 | |
| | dc66737f03 | |
| | 499dc35cfb | |
| | 42c1159991 | |
| | cc6139860c | |
| | ae8f58ac6f | |
| | 346731544b | |
| | c1b86cc929 | |
| | 376f70be96 | |
| | ac2de920b1 | |
| | e02eed5489 | |
| | 5414527236 | |
| | 881c6534a6 | |
| | 5e9eb19a83 | |
6 .github/CONTRIBUTING.md vendored
@@ -3,4 +3,8 @@
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.

To learn how to contribute to LangChain, please follow the [contribution guide here](https://docs.langchain.com/oss/python/contributing).
To learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/).

## New features

For new features, please start a new [discussion](https://forum.langchain.com/), where the maintainers will help with scoping out the necessary changes.
13 .github/ISSUE_TEMPLATE/bug-report.yml vendored
@@ -1,7 +1,6 @@
name: "\U0001F41B Bug Report"
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the LangChain forum.
labels: ["bug"]
type: bug
body:
  - type: markdown
    attributes:
@@ -14,7 +13,9 @@ body:
        if there's another way to solve your problem:

        * [LangChain Forum](https://forum.langchain.com/),
        * [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
        * [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
        * [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
        * [LangChain how-to guides](https://python.langchain.com/docs/how_to/),
        * [API Reference](https://python.langchain.com/api_reference/),
        * [LangChain ChatBot](https://chat.langchain.com/)
        * [GitHub search](https://github.com/langchain-ai/langchain),
@@ -24,7 +25,7 @@ body:
      label: Checked other resources
      description: Please confirm and check all the following options.
      options:
        - label: This is a bug, not a usage question.
        - label: This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
          required: true
        - label: I added a clear and descriptive title that summarizes this issue.
          required: true
@@ -34,8 +35,6 @@ body:
          required: true
        - label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
          required: true
        - label: This is not related to the langchain-community package.
          required: true
        - label: I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
          required: true
        - label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
@@ -119,7 +118,3 @@ body:
          python -m langchain_core.sys_info
    validations:
      required: true
9 .github/ISSUE_TEMPLATE/config.yml vendored
@@ -1,9 +1,6 @@
blank_issues_enabled: false
version: 2.1
contact_links:
  - name: 📚 Documentation
    url: https://github.com/langchain-ai/docs/issues/new?template=langchain.yml
    about: Report an issue related to the LangChain documentation
  - name: 💬 LangChain Forum
    url: https://forum.langchain.com/
    about: General community discussions and support
  - name: LangChain Forum
    url: https://forum.langchain.com/
    about: General community discussions, support, and feature requests
59 .github/ISSUE_TEMPLATE/documentation.yml vendored (normal file)
@@ -0,0 +1,59 @@
name: Documentation
description: Report an issue related to the LangChain documentation.
title: "docs: <Please write a comprehensive title after the 'docs: ' prefix>"
labels: [documentation]

body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to report an issue in the documentation.

        Only report issues with the documentation here: explain if there are
        any missing topics or if you found a mistake in the documentation.

        Do **NOT** use this to ask usage questions or to report issues with your code.

        If you have usage questions or need help solving some problem,
        please use the [LangChain Forum](https://forum.langchain.com/).

        If you're in the wrong place, here are some helpful links to find a better
        place to ask your question:

        * [LangChain Forum](https://forum.langchain.com/),
        * [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
        * [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
        * [LangChain how-to guides](https://python.langchain.com/docs/how_to/),
        * [API Reference](https://python.langchain.com/api_reference/),
        * [LangChain ChatBot](https://chat.langchain.com/)
        * [GitHub search](https://github.com/langchain-ai/langchain),
  - type: input
    id: url
    attributes:
      label: URL
      description: URL to documentation
    validations:
      required: false
  - type: checkboxes
    id: checks
    attributes:
      label: Checklist
      description: Please confirm and check all the following options.
      options:
        - label: I added a very descriptive title to this issue.
          required: true
        - label: I included a link to the documentation page I am referring to (if applicable).
          required: true
  - type: textarea
    attributes:
      label: "Issue with current documentation:"
      description: >
        Please make sure to leave a reference to the document/code you're
        referring to. Feel free to include names of classes, functions, methods
        or concepts you'd like to see documented more.
  - type: textarea
    attributes:
      label: "Idea or request for content:"
      description: >
        Please describe as clearly as possible what topics you think are missing
        from the current documentation.
118 .github/ISSUE_TEMPLATE/feature-request.yml vendored
@@ -1,118 +0,0 @@
name: "✨ Feature Request"
description: Request a new feature or enhancement for LangChain. For questions, please use the LangChain forum.
labels: ["feature request"]
type: feature
body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to request a new feature.

        Use this to request NEW FEATURES or ENHANCEMENTS in LangChain. For bug reports, please use the bug report template. For usage questions and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).

        Relevant links to check before filing a feature request to see if your request has already been made or
        if there's another way to achieve what you want:

        * [LangChain Forum](https://forum.langchain.com/),
        * [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
        * [API Reference](https://python.langchain.com/api_reference/),
        * [LangChain ChatBot](https://chat.langchain.com/)
        * [GitHub search](https://github.com/langchain-ai/langchain),
  - type: checkboxes
    id: checks
    attributes:
      label: Checked other resources
      description: Please confirm and check all the following options.
      options:
        - label: This is a feature request, not a bug report or usage question.
          required: true
        - label: I added a clear and descriptive title that summarizes the feature request.
          required: true
        - label: I used the GitHub search to find a similar feature request and didn't find it.
          required: true
        - label: I checked the LangChain documentation and API reference to see if this feature already exists.
          required: true
        - label: This is not related to the langchain-community package.
          required: true
  - type: textarea
    id: feature-description
    validations:
      required: true
    attributes:
      label: Feature Description
      description: |
        Please provide a clear and concise description of the feature you would like to see added to LangChain.

        What specific functionality are you requesting? Be as detailed as possible.
      placeholder: |
        I would like LangChain to support...

        This feature would allow users to...
  - type: textarea
    id: use-case
    validations:
      required: true
    attributes:
      label: Use Case
      description: |
        Describe the specific use case or problem this feature would solve.

        Why do you need this feature? What problem does it solve for you or other users?
      placeholder: |
        I'm trying to build an application that...

        Currently, I have to work around this by...

        This feature would help me/users to...
  - type: textarea
    id: proposed-solution
    validations:
      required: false
    attributes:
      label: Proposed Solution
      description: |
        If you have ideas about how this feature could be implemented, please describe them here.

        This is optional but can be helpful for maintainers to understand your vision.
      placeholder: |
        I think this could be implemented by...

        The API could look like...

        ```python
        # Example of how the feature might work
        ```
  - type: textarea
    id: alternatives
    validations:
      required: false
    attributes:
      label: Alternatives Considered
      description: |
        Have you considered any alternative solutions or workarounds?

        What other approaches have you tried or considered?
      placeholder: |
        I've tried using...

        Alternative approaches I considered:
        1. ...
        2. ...

        But these don't work because...
  - type: textarea
    id: additional-context
    validations:
      required: false
    attributes:
      label: Additional Context
      description: |
        Add any other context, screenshots, examples, or references that would help explain your feature request.
      placeholder: |
        Related issues: #...

        Similar features in other libraries:
        - ...

        Additional context or examples:
        - ...
7 .github/ISSUE_TEMPLATE/privileged.yml vendored
@@ -4,7 +4,12 @@ body:
  - type: markdown
    attributes:
      value: |
        If you are not a LangChain maintainer, employee, or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.
        Thanks for your interest in LangChain! 🚀

        If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.

        You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
        or are a regular contributor to LangChain with previous merged pull requests.
  - type: checkboxes
    id: privileged
    attributes:
91 .github/ISSUE_TEMPLATE/task.yml vendored
@@ -1,91 +0,0 @@
name: "📋 Task"
description: Create a task for project management and tracking by LangChain maintainers. If you are not a maintainer, please use other templates or the forum.
labels: ["task"]
type: task
body:
  - type: markdown
    attributes:
      value: |
        Thanks for creating a task to help organize LangChain development.

        This template is for **maintainer tasks** such as project management, development planning, refactoring, documentation updates, and other organizational work.

        If you are not a LangChain maintainer or were not asked directly by a maintainer to create a task, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead or use the appropriate bug report or feature request templates on the previous page.
  - type: checkboxes
    id: maintainer
    attributes:
      label: Maintainer task
      description: Confirm that you are allowed to create a task here.
      options:
        - label: I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create a task here.
          required: true
  - type: textarea
    id: task-description
    attributes:
      label: Task Description
      description: |
        Provide a clear and detailed description of the task.

        What needs to be done? Be specific about the scope and requirements.
      placeholder: |
        This task involves...

        The goal is to...

        Specific requirements:
        - ...
        - ...
    validations:
      required: true
  - type: textarea
    id: acceptance-criteria
    attributes:
      label: Acceptance Criteria
      description: |
        Define the criteria that must be met for this task to be considered complete.

        What are the specific deliverables or outcomes expected?
      placeholder: |
        This task will be complete when:
        - [ ] ...
        - [ ] ...
        - [ ] ...
    validations:
      required: true
  - type: textarea
    id: context
    attributes:
      label: Context and Background
      description: |
        Provide any relevant context, background information, or links to related issues/PRs.

        Why is this task needed? What problem does it solve?
      placeholder: |
        Background:
        - ...

        Related issues/PRs:
        - #...

        Additional context:
        - ...
    validations:
      required: false
  - type: textarea
    id: dependencies
    attributes:
      label: Dependencies
      description: |
        List any dependencies or blockers for this task.

        Are there other tasks, issues, or external factors that need to be completed first?
      placeholder: |
        This task depends on:
        - [ ] Issue #...
        - [ ] PR #...
        - [ ] External dependency: ...

        Blocked by:
        - ...
    validations:
      required: false
11 .github/PULL_REQUEST_TEMPLATE.md vendored
@@ -1,5 +1,3 @@
(Replace this entire block of text)

Thank you for contributing to LangChain! Follow these steps to mark your pull request as ready for review. **If any of these steps are not completed, your PR will not be considered for review.**

- [ ] **PR title**: Follows the format: {TYPE}({SCOPE}): {DESCRIPTION}
@@ -11,13 +9,14 @@ Thank you for contributing to LangChain! Follow these steps to mark your pull re
    - feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert, release
  - Allowed `{SCOPE}` values (optional):
    - core, cli, langchain, standard-tests, docs, anthropic, chroma, deepseek, exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant, xai
  - *Note:* the `{DESCRIPTION}` must not start with an uppercase letter.
  - Note: the `{DESCRIPTION}` must not start with an uppercase letter.
  - Once you've written the title, please delete this checklist item; do not include it in the PR.
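For instance, a title following this convention might look like `fix(core): handle empty model responses` (a hypothetical example: a lowercase description, with the type and scope drawn from the allowed lists above).
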
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
  - **Description:** a description of the change. Include a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) if applicable to a relevant issue.
  - **Issue:** the issue # it fixes, if applicable (e.g. Fixes #123)
  - **Dependencies:** any dependencies required for this change
  - **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!

- [ ] **Add tests and docs**: If you're adding a new integration, you must include:
  1. A test for the integration, preferably unit tests that do not rely on network access,
@@ -27,7 +26,7 @@ Thank you for contributing to LangChain! Follow these steps to mark your pull re
Additional guidelines:

- Most PRs should not touch more than one package.
- Please do not add dependencies to `pyproject.toml` files (even optional ones) unless they are **required** for unit tests.
- Changes should be backwards compatible.
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to `pyproject.toml` files (even optional ones) unless they are **required** for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
24 .github/actions/uv_setup/action.yml vendored
@@ -1,22 +1,12 @@
# TODO: https://docs.astral.sh/uv/guides/integration/github/#caching

name: uv-install
description: Set up Python and uv with caching
description: Set up Python and uv

inputs:
  python-version:
    description: Python version, supporting MAJOR.MINOR only
    required: true
  enable-cache:
    description: Enable caching for uv dependencies
    required: false
    default: 'true'
  cache-suffix:
    description: Custom cache key suffix for cache invalidation
    required: false
    default: ''
  working-directory:
    description: Working directory for cache glob scoping
    required: false
    default: '**'

env:
  UV_VERSION: "0.5.25"
@@ -25,13 +15,7 @@ runs:
  using: composite
  steps:
    - name: Install uv and set the python version
      uses: astral-sh/setup-uv@v6
      uses: astral-sh/setup-uv@v5
      with:
        version: ${{ env.UV_VERSION }}
        python-version: ${{ inputs.python-version }}
        enable-cache: ${{ inputs.enable-cache }}
        cache-dependency-glob: |
          ${{ inputs.working-directory }}/pyproject.toml
          ${{ inputs.working-directory }}/uv.lock
          ${{ inputs.working-directory }}/requirements*.txt
        cache-suffix: ${{ inputs.cache-suffix }}
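The `cache-dependency-glob` list above tells `setup-uv` which files to hash into the cache key, so a change to a package's `pyproject.toml`, `uv.lock`, or any `requirements*.txt` under the given working directory invalidates that package's cache; `cache-suffix` adds a further per-workflow invalidation knob on top of that.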
4 .github/scripts/check_diff.py vendored
@@ -132,10 +132,6 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
    elif dir_ == "libs/langchain" and job == "extended-tests":
        py_versions = ["3.9", "3.13"]
    elif dir_ == "libs/langchain_v1":
        py_versions = ["3.10", "3.13"]
    elif dir_ in {"libs/cli"}:
        py_versions = ["3.10", "3.13"]

    elif dir_ == ".":
        # unable to install with 3.13 because tokenizers doesn't support 3.13 yet
@@ -27,14 +27,12 @@ jobs:
    timeout-minutes: 20
    name: 'Python ${{ inputs.python-version }}'
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4

      - name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ inputs.python-version }}
          cache-suffix: compile-integration-tests-${{ inputs.working-directory }}
          working-directory: ${{ inputs.working-directory }}

      - name: '📦 Install Integration Dependencies'
        shell: bash
4 .github/workflows/_integration_test.yml vendored
@@ -28,14 +28,12 @@ jobs:
    runs-on: ubuntu-latest
    name: 'Python ${{ inputs.python-version }}'
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4

      - name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ inputs.python-version }}
          cache-suffix: integration-tests-${{ inputs.working-directory }}
          working-directory: ${{ inputs.working-directory }}

      - name: '📦 Install Integration Dependencies'
        shell: bash
4 .github/workflows/_lint.yml vendored
@@ -33,14 +33,12 @@ jobs:
    timeout-minutes: 20
    steps:
      - name: '📋 Checkout Code'
        uses: actions/checkout@v5
        uses: actions/checkout@v4

      - name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ inputs.python-version }}
          cache-suffix: lint-${{ inputs.working-directory }}
          working-directory: ${{ inputs.working-directory }}

      - name: '📦 Install Lint & Typing Dependencies'
        # Also installs dev/lint/test/typing dependencies, to ensure we have
62 .github/workflows/_release.yml vendored
@@ -43,7 +43,7 @@ jobs:
      version: ${{ steps.check-version.outputs.version }}

    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4

      - name: Set up Python + uv
        uses: "./.github/actions/uv_setup"
@@ -92,7 +92,7 @@ jobs:
    outputs:
      release-body: ${{ steps.generate-release-body.outputs.release-body }}
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4
        with:
          repository: langchain-ai/langchain
          path: langchain
@@ -183,36 +183,13 @@ jobs:
    needs:
      - build
      - release-notes
    runs-on: ubuntu-latest
    permissions:
      # This permission is used for trusted publishing:
      # https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
      #
      # Trusted publishing has to also be configured on PyPI for each package:
      # https://docs.pypi.org/trusted-publishers/adding-a-publisher/
      id-token: write

    steps:
      - uses: actions/checkout@v5

      - uses: actions/download-artifact@v5
        with:
          name: dist
          path: ${{ inputs.working-directory }}/dist/

      - name: Publish to test PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          packages-dir: ${{ inputs.working-directory }}/dist/
          verbose: true
          print-hash: true
          repository-url: https://test.pypi.org/legacy/
          # We overwrite any existing distributions with the same name and version.
          # This is *only for CI use* and is *extremely dangerous* otherwise!
          # https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
          skip-existing: true
          # Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
          attestations: false
    uses:
      ./.github/workflows/_test_release.yml
    permissions: write-all
    with:
      working-directory: ${{ inputs.working-directory }}
      dangerous-nonmaster-release: ${{ inputs.dangerous-nonmaster-release }}
    secrets: inherit

  pre-release-checks:
    needs:
@@ -222,7 +199,7 @@ jobs:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4

      # We explicitly *don't* set up caching here. This ensures our tests are
      # maximally sensitive to catching breakage.
@@ -243,7 +220,7 @@ jobs:
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - uses: actions/download-artifact@v5
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: ${{ inputs.working-directory }}/dist/
@@ -312,8 +289,7 @@ jobs:
        env:
          MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
        run: |
          VIRTUAL_ENV=.venv uv pip install --force-reinstall --editable .
          VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS
          VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS --editable .
          make tests
        working-directory: ${{ inputs.working-directory }}
@@ -386,7 +362,7 @@ jobs:
          AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
          AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4

      # We implement this conditional as Github Actions does not have good support
      # for conditionally needing steps. https://github.com/actions/runner/issues/491
@@ -403,7 +379,7 @@ jobs:
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - uses: actions/download-artifact@v5
      - uses: actions/download-artifact@v4
        if: startsWith(inputs.working-directory, 'libs/core')
        with:
          name: dist
@@ -417,7 +393,7 @@ jobs:
            git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
              | awk '{print $2}' \
              | sed 's|refs/tags/||' \
              | grep -E '[0-9]+\.[0-9]+\.[0-9]+$' \
              | grep -Ev '==[^=]*(\.?dev[0-9]*|\.?rc[0-9]*)$' \
              | sort -Vr \
              | head -n 1
          )"
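Reading the pipeline above: `git ls-remote --tags` lists the partner package's tags, the `awk`/`sed` pair strips them down to bare version strings, the two `grep`s keep only final `X.Y.Z` releases (dropping `dev` and `rc` pre-releases), and `sort -Vr | head -n 1` sorts by semantic version descending and takes the newest one.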
@@ -464,14 +440,14 @@ jobs:
      working-directory: ${{ inputs.working-directory }}

    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4

      - name: Set up Python + uv
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - uses: actions/download-artifact@v5
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: ${{ inputs.working-directory }}/dist/
@@ -503,14 +479,14 @@ jobs:
      working-directory: ${{ inputs.working-directory }}

    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4

      - name: Set up Python + uv
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - uses: actions/download-artifact@v5
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: ${{ inputs.working-directory }}/dist/
5 .github/workflows/_test.yml vendored
@@ -32,16 +32,13 @@ jobs:
    name: 'Python ${{ inputs.python-version }}'
    steps:
      - name: '📋 Checkout Code'
        uses: actions/checkout@v5
        uses: actions/checkout@v4

      - name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
        uses: "./.github/actions/uv_setup"
        id: setup-python
        with:
          python-version: ${{ inputs.python-version }}
          cache-suffix: test-${{ inputs.working-directory }}
          working-directory: ${{ inputs.working-directory }}

      - name: '📦 Install Test Dependencies'
        shell: bash
        run: uv sync --group test --dev
4 .github/workflows/_test_doc_imports.yml vendored
@@ -21,14 +21,12 @@ jobs:
    name: '🔍 Check Doc Imports (Python ${{ inputs.python-version }})'
    steps:
      - name: '📋 Checkout Code'
        uses: actions/checkout@v5
        uses: actions/checkout@v4

      - name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ inputs.python-version }}
          cache-suffix: test-doc-imports-${{ inputs.working-directory }}
          working-directory: ${{ inputs.working-directory }}

      - name: '📦 Install Test Dependencies'
        shell: bash
4 .github/workflows/_test_pydantic.yml vendored
@@ -34,14 +34,12 @@ jobs:
    name: 'Pydantic ~=${{ inputs.pydantic-version }}'
    steps:
      - name: '📋 Checkout Code'
        uses: actions/checkout@v5
        uses: actions/checkout@v4

      - name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ inputs.python-version }}
          cache-suffix: test-pydantic-${{ inputs.working-directory }}
          working-directory: ${{ inputs.working-directory }}

      - name: '📦 Install Test Dependencies'
        shell: bash
106 .github/workflows/_test_release.yml vendored (normal file)
@@ -0,0 +1,106 @@
name: '🧪 Test Release Package'

on:
  workflow_call:
    inputs:
      working-directory:
        required: true
        type: string
        description: "From which folder this pipeline executes"
      dangerous-nonmaster-release:
        required: false
        type: boolean
        default: false
        description: "Release from a non-master branch (danger!)"

env:
  PYTHON_VERSION: "3.11"
  UV_FROZEN: "true"

jobs:
  build:
    if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
    runs-on: ubuntu-latest

    outputs:
      pkg-name: ${{ steps.check-version.outputs.pkg-name }}
      version: ${{ steps.check-version.outputs.version }}

    steps:
      - uses: actions/checkout@v4

      - name: '🐍 Set up Python + UV'
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      # We want to keep this build stage *separate* from the release stage,
      # so that there's no sharing of permissions between them.
      # The release stage has trusted publishing and GitHub repo contents write access,
      # and we want to keep the scope of that access limited just to the release job.
      # Otherwise, a malicious `build` step (e.g. via a compromised dependency)
      # could get access to our GitHub or PyPI credentials.
      #
      # Per the trusted publishing GitHub Action:
      # > It is strongly advised to separate jobs for building [...]
      # > from the publish job.
      # https://github.com/pypa/gh-action-pypi-publish#non-goals
      - name: '📦 Build Project for Distribution'
        run: uv build
        working-directory: ${{ inputs.working-directory }}

      - name: '⬆️ Upload Build Artifacts'
        uses: actions/upload-artifact@v4
        with:
          name: test-dist
          path: ${{ inputs.working-directory }}/dist/

      - name: '🔍 Extract Version Information'
        id: check-version
        shell: python
        working-directory: ${{ inputs.working-directory }}
        run: |
          import os
          import tomllib
          with open("pyproject.toml", "rb") as f:
              data = tomllib.load(f)
          pkg_name = data["project"]["name"]
          version = data["project"]["version"]
          with open(os.environ["GITHUB_OUTPUT"], "a") as f:
              f.write(f"pkg-name={pkg_name}\n")
              f.write(f"version={version}\n")

  publish:
    needs:
      - build
    runs-on: ubuntu-latest
    permissions:
      # This permission is used for trusted publishing:
      # https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
      #
      # Trusted publishing has to also be configured on PyPI for each package:
      # https://docs.pypi.org/trusted-publishers/adding-a-publisher/
      id-token: write

    steps:
      - uses: actions/checkout@v4

      - uses: actions/download-artifact@v4
        with:
          name: test-dist
          path: ${{ inputs.working-directory }}/dist/

      - name: Publish to test PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          packages-dir: ${{ inputs.working-directory }}/dist/
          verbose: true
          print-hash: true
          repository-url: https://test.pypi.org/legacy/

          # We overwrite any existing distributions with the same name and version.
          # This is *only for CI use* and is *extremely dangerous* otherwise!
          # https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
          skip-existing: true
          # Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
          attestations: false
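The `shell: python` step above communicates with later jobs through the standard GitHub Actions output mechanism: appending `key=value` lines to the file named by the `GITHUB_OUTPUT` environment variable is what makes `steps.check-version.outputs.pkg-name` and `.version` available to the job's `outputs` block.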
6 .github/workflows/api_doc_build.yml vendored
@@ -17,10 +17,10 @@ jobs:
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4
        with:
          path: langchain
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4
        with:
          repository: langchain-ai/langchain-api-docs-html
          path: langchain-api-docs-html
@@ -72,7 +72,7 @@ jobs:
        done

      - name: '🐍 Setup Python ${{ env.PYTHON_VERSION }}'
        uses: actions/setup-python@v6
        uses: actions/setup-python@v5
        id: setup-python
        with:
          python-version: ${{ env.PYTHON_VERSION }}
2 .github/workflows/check-broken-links.yml vendored
@@ -13,7 +13,7 @@ jobs:
    if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4
      - name: '🟢 Setup Node.js 18.x'
        uses: actions/setup-node@v4
        with:
31 .github/workflows/check_core_versions.yml vendored
@@ -16,34 +16,19 @@ jobs:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4

      - name: '✅ Verify pyproject.toml & version.py Match'
        run: |
          # Check core versions
          CORE_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
          CORE_VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
          PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
          VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)

          # Compare core versions
          if [ "$CORE_PYPROJECT_VERSION" != "$CORE_VERSION_PY_VERSION" ]; then
          # Compare the two versions
          if [ "$PYPROJECT_VERSION" != "$VERSION_PY_VERSION" ]; then
            echo "langchain-core versions in pyproject.toml and version.py do not match!"
            echo "pyproject.toml version: $CORE_PYPROJECT_VERSION"
            echo "version.py version: $CORE_VERSION_PY_VERSION"
            echo "pyproject.toml version: $PYPROJECT_VERSION"
            echo "version.py version: $VERSION_PY_VERSION"
            exit 1
          else
            echo "Core versions match: $CORE_PYPROJECT_VERSION"
          fi

          # Check langchain_v1 versions
          LANGCHAIN_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/langchain_v1/pyproject.toml)
          LANGCHAIN_INIT_PY_VERSION=$(grep -Po '(?<=^__version__ = ")[^"]*' libs/langchain_v1/langchain/__init__.py)

          # Compare langchain_v1 versions
          if [ "$LANGCHAIN_PYPROJECT_VERSION" != "$LANGCHAIN_INIT_PY_VERSION" ]; then
            echo "langchain_v1 versions in pyproject.toml and __init__.py do not match!"
            echo "pyproject.toml version: $LANGCHAIN_PYPROJECT_VERSION"
            echo "version.py version: $LANGCHAIN_INIT_PY_VERSION"
            exit 1
          else
            echo "Langchain v1 versions match: $LANGCHAIN_PYPROJECT_VERSION"
            echo "Versions match: $PYPROJECT_VERSION"
          fi
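The `grep -Po` calls above use a Perl lookbehind such as `(?<=^version = ")` to print just the quoted version string. As a rough cross-check, the same comparison can be written in Python with `tomllib` (a minimal sketch, assuming the repository layout used in the script; the regex for `version.py` mirrors the grep pattern):

```python
import re
import tomllib

# Read the version declared in pyproject.toml ([project].version).
with open("libs/core/pyproject.toml", "rb") as f:
    pyproject_version = tomllib.load(f)["project"]["version"]

# Read the version from a line like: VERSION = "0.3.29" in version.py.
with open("libs/core/langchain_core/version.py") as f:
    match = re.search(r'^VERSION = "([^"]*)"', f.read(), re.MULTILINE)

version_py_version = match.group(1) if match else None
if pyproject_version != version_py_version:
    raise SystemExit(
        f"Version mismatch: pyproject.toml={pyproject_version!r}, "
        f"version.py={version_py_version!r}"
    )
print(f"Core versions match: {pyproject_version}")
```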
73 .github/workflows/check_diffs.yml vendored
@@ -33,9 +33,9 @@ jobs:
    if: ${{ !contains(github.event.pull_request.labels.*.name, 'ci-ignore') }}
    steps:
      - name: '📋 Checkout Code'
        uses: actions/checkout@v5
        uses: actions/checkout@v4
      - name: '🐍 Setup Python 3.11'
        uses: actions/setup-python@v6
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: '📂 Get Changed Files'
@@ -54,7 +54,6 @@ jobs:
      dependencies: ${{ steps.set-matrix.outputs.dependencies }}
      test-doc-imports: ${{ steps.set-matrix.outputs.test-doc-imports }}
      test-pydantic: ${{ steps.set-matrix.outputs.test-pydantic }}
      codspeed: ${{ steps.set-matrix.outputs.codspeed }}
  # Run linting only on packages that have changed files
  lint:
    needs: [ build ]
@@ -139,14 +138,12 @@ jobs:
      run:
        working-directory: ${{ matrix.job-configs.working-directory }}
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4

      - name: '🐍 Set up Python ${{ matrix.job-configs.python-version }} + UV'
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ matrix.job-configs.python-version }}
          cache-suffix: extended-tests-${{ matrix.job-configs.working-directory }}
          working-directory: ${{ matrix.job-configs.working-directory }}

      - name: '📦 Install Dependencies & Run Extended Tests'
        shell: bash
@@ -169,72 +166,10 @@ jobs:
            # and `set -e` above will cause the step to fail.
            echo "$STATUS" | grep 'nothing to commit, working tree clean'

  # Run codspeed benchmarks only on packages that have changed files
  codspeed:
    name: '⚡ CodSpeed Benchmarks'
    needs: [ build ]
    if: ${{ needs.build.outputs.codspeed != '[]' && !contains(github.event.pull_request.labels.*.name, 'codspeed-ignore') }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        job-configs: ${{ fromJson(needs.build.outputs.codspeed) }}
      fail-fast: false
    steps:
      - uses: actions/checkout@v5

      # We have to use 3.12 as 3.13 is not yet supported
      - name: '📦 Install UV Package Manager'
        uses: astral-sh/setup-uv@v6
        with:
          python-version: "3.12"

      - uses: actions/setup-python@v6
        with:
          python-version: "3.12"

      - name: '📦 Install Test Dependencies'
        run: uv sync --group test
        working-directory: ${{ matrix.job-configs.working-directory }}

      - name: '⚡ Run Benchmarks: ${{ matrix.job-configs.working-directory }}'
        uses: CodSpeedHQ/action@v4
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
          ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
          AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
          AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
          AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
          AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
          AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
          AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
          COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
          DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
          EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
          FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
          GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
          HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
          MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
          NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
          XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: |
            cd ${{ matrix.job-configs.working-directory }}
            if [ "${{ matrix.job-configs.working-directory }}" = "libs/core" ]; then
              uv run --no-sync pytest ./tests/benchmarks --codspeed
            else
              uv run --no-sync pytest ./tests/ --codspeed
            fi
          mode: ${{ matrix.job-configs.working-directory == 'libs/core' && 'walltime' || 'instrumentation' }}

  # Final status check - ensures all required jobs passed before allowing merge
  ci_success:
    name: '✅ CI Success'
    needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic, codspeed]
    needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic]
    if: |
      always()
    runs-on: ubuntu-latest
4 .github/workflows/check_new_docs.yml vendored
@@ -22,8 +22,8 @@ jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: actions/setup-python@v6
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - id: files
66 .github/workflows/codspeed.yml vendored (normal file)
@@ -0,0 +1,66 @@
name: '⚡ CodSpeed'

on:
  push:
    branches:
      - master
  pull_request:
  workflow_dispatch:

permissions:
  contents: read

env:
  AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: foo
  AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: foo
  DEEPSEEK_API_KEY: foo
  FIREWORKS_API_KEY: foo

jobs:
  codspeed:
    name: 'Benchmark'
    runs-on: ubuntu-latest
    if: ${{ !contains(github.event.pull_request.labels.*.name, 'codspeed-ignore') }}
    strategy:
      matrix:
        include:
          - working-directory: libs/core
            mode: walltime
          - working-directory: libs/partners/openai
          - working-directory: libs/partners/anthropic
          - working-directory: libs/partners/deepseek
          - working-directory: libs/partners/fireworks
          - working-directory: libs/partners/xai
          - working-directory: libs/partners/mistralai
          - working-directory: libs/partners/groq
      fail-fast: false

    steps:
      - uses: actions/checkout@v4

      # We have to use 3.12 as 3.13 is not yet supported
      - name: '📦 Install UV Package Manager'
        uses: astral-sh/setup-uv@v6
        with:
          python-version: "3.12"

      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: '📦 Install Test Dependencies'
        run: uv sync --group test
        working-directory: ${{ matrix.working-directory }}

      - name: '⚡ Run Benchmarks: ${{ matrix.working-directory }}'
        uses: CodSpeedHQ/action@v3
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: |
            cd ${{ matrix.working-directory }}
            if [ "${{ matrix.working-directory }}" = "libs/core" ]; then
              uv run --no-sync pytest ./tests/benchmarks --codspeed
            else
              uv run --no-sync pytest ./tests/ --codspeed
            fi
          mode: ${{ matrix.mode || 'instrumentation' }}
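Note the two measurement modes in this matrix: `libs/core` runs with `mode: walltime` while the partner packages fall back to `instrumentation`. As CodSpeed describes them, instrumentation mode measures simulated CPU behavior for stable run-to-run comparisons, whereas walltime mode measures real elapsed time; the split here presumably reflects that the core benchmarks depend on effects instrumentation does not capture.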
2 .github/workflows/people.yml vendored
@@ -19,7 +19,7 @@ jobs:
        env:
          GITHUB_CONTEXT: ${{ toJson(github) }}
        run: echo "$GITHUB_CONTEXT"
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4
      # Ref: https://github.com/actions/runner/issues/2033
      - name: '🔧 Fix Git Safe Directory in Container'
        run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
10 .github/workflows/pr_lint.yml vendored
@@ -26,10 +26,9 @@
# • release — prepare a new release
#
# Allowed Scopes (optional):
#   core, cli, langchain, langchain_v1, langchain_legacy, standard-tests,
#   text-splitters, docs, anthropic, chroma, deepseek, exa, fireworks, groq,
#   huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant,
#   xai, infra
#   core, cli, langchain, standard-tests, docs, anthropic, chroma, deepseek,
#   exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai,
#   perplexity, prompty, qdrant, xai
#
# Rules & Tips for New Committers:
#   1. Subject (type) must start with a lowercase letter and, if possible, be
@@ -63,7 +62,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: '✅ Validate Conventional Commits Format'
        uses: amannn/action-semantic-pull-request@v6
        uses: amannn/action-semantic-pull-request@v5
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
@@ -85,7 +84,6 @@ jobs:
            cli
            langchain
            langchain_v1
            langchain_legacy
            standard-tests
            text-splitters
            docs
8 .github/workflows/run_notebooks.yml vendored
@@ -26,23 +26,21 @@ jobs:
    if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
    name: '📑 Test Documentation Notebooks'
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4

      - name: '🐍 Set up Python + UV'
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ github.event.inputs.python_version || '3.11' }}
          cache-suffix: run-notebooks-${{ github.event.inputs.working-directory || 'all' }}
          working-directory: ${{ github.event.inputs.working-directory || '**' }}

      - name: '🔐 Authenticate to Google Cloud'
        id: 'auth'
        uses: google-github-actions/auth@v3
        uses: google-github-actions/auth@v2
        with:
          credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'

      - name: '🔐 Configure AWS Credentials'
        uses: aws-actions/configure-aws-credentials@v5
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
12 .github/workflows/scheduled_test.yml vendored
@@ -20,7 +20,7 @@ env:
  POETRY_VERSION: "1.8.4"
  UV_FROZEN: "true"
  DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/xai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
  POETRY_LIBS: ("libs/partners/aws")
  POETRY_LIBS: ("libs/partners/google-vertexai" "libs/partners/google-genai" "libs/partners/aws")

jobs:
  # Generate dynamic test matrix based on input parameters or defaults
@@ -68,14 +68,14 @@ jobs:
        working-directory: ${{ fromJSON(needs.compute-matrix.outputs.matrix).working-directory }}

    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4
        with:
          path: langchain
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4
        with:
          repository: langchain-ai/langchain-google
          path: langchain-google
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4
        with:
          repository: langchain-ai/langchain-aws
          path: langchain-aws
@@ -106,12 +106,12 @@ jobs:
      - name: '🔐 Authenticate to Google Cloud'
        id: 'auth'
        uses: google-github-actions/auth@v3
        uses: google-github-actions/auth@v2
        with:
          credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'

      - name: '🔐 Configure AWS Credentials'
        uses: aws-actions/configure-aws-credentials@v5
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
18 README.md
@@ -8,12 +8,16 @@
<br>
</div>

[](https://github.com/langchain-ai/langchain/releases)
[](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[](https://opensource.org/licenses/MIT)
[](https://pypistats.org/packages/langchain-core)
[](https://pypistats.org/packages/langchain-core)
[](https://star-history.com/#langchain-ai/langchain)
[](https://github.com/langchain-ai/langchain/issues)
[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[](https://codspeed.io/langchain-ai/langchain)
[](https://twitter.com/langchainai)
[](https://codspeed.io/langchain-ai/langchain)

> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
@@ -41,7 +45,7 @@ interface for models, embeddings, vector stores, and more.
Use LangChain for:

- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and
external/internal systems, drawing from LangChain’s vast library of integrations with
external / internal systems, drawing from LangChain’s vast library of integrations with
model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team
experiments to find the best choice for your application’s needs. As the industry
@@ -56,7 +60,7 @@ applications.
To improve your LLM application development, pair LangChain with:

- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and
- [LangSmith](http://www.langchain.com/langsmith) - Helpful for agent evals and
observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain
visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can
@@ -64,8 +68,9 @@ reliably handle complex tasks with LangGraph, our low-level agent orchestration
framework. LangGraph offers customizable architecture, long-term memory, and
human-in-the-loop workflows — and is trusted in production by companies like LinkedIn,
Uber, Klarna, and GitLab.
- [LangGraph Platform](https://docs.langchain.com/langgraph-platform) - Deploy
and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across
- [LangGraph Platform](https://langchain-ai.github.io/langgraph/concepts/langgraph_platform/) - Deploy
and scale agents effortlessly with a purpose-built deployment platform for long
running, stateful workflows. Discover, reuse, configure, and share agents across
teams — and iterate quickly with visual prototyping in
[LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
@@ -80,4 +85,3 @@ concepts behind the LangChain framework.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on
navigating base packages and integrations for LangChain.
- [Chat LangChain](https://chat.langchain.com/): Ask questions & chat with our documentation.
15 SECURITY.md
@@ -4,9 +4,9 @@ LangChain has a large ecosystem of integrations with various external resources
## Best practices

When building such applications, developers should remember to follow good security practices:
When building such applications developers should remember to follow good security practices:

* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc., as appropriate for your application.
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc. as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
@@ -22,7 +22,9 @@ Example scenarios with mitigation strategies:
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.
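For the database scenario, that credential scoping can be expressed directly at connection time. A minimal sketch, assuming the `SQLDatabase` utility from `langchain_community` (the URI, user, and table names here are hypothetical):

```python
from langchain_community.utilities import SQLDatabase

# Connect with a read-only database user and expose only the tables the
# agent actually needs; both choices limit the blast radius of a
# prompt-injected or mistaken query. (Hypothetical URI and table names.)
db = SQLDatabase.from_uri(
    "postgresql://readonly_user:secret@localhost:5432/appdb",
    include_tables=["products", "orders"],
)
```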
|
||||
If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
|
||||
If you're building applications that access external resources like file systems, APIs
|
||||
or databases, consider speaking with your company's security team to determine how to best
|
||||
design and secure your applications.
|
||||
|
||||
## Reporting OSS Vulnerabilities
|
||||
|
||||
@@ -36,7 +38,9 @@ Before reporting a vulnerability, please review:
|
||||
|
||||
1) In-Scope Targets and Out-of-Scope Targets below.
|
||||
2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
|
||||
3) The [Best Practices](#best-practices) above to understand what we consider to be a security vulnerability vs. developer responsibility.
|
||||
3) The [Best Practices](#best-practices) above to
|
||||
understand what we consider to be a security vulnerability vs. developer
|
||||
responsibility.
|
||||
|
||||
### In-Scope Targets
|
||||
|
||||
@@ -63,7 +67,8 @@ All out of scope targets defined by huntr as well as:
|
||||
for more details, but generally tools interact with the real world. Developers are
|
||||
expected to understand the security implications of their code and are responsible
|
||||
for the security of their tools.
|
||||
* Code documented with security notices. This will be decided on a case-by-case basis, but likely will not be eligible for a bounty as the code is already
|
||||
* Code documented with security notices. This will be decided on a case by
|
||||
case basis, but likely will not be eligible for a bounty as the code is already
|
||||
documented with guidelines for developers that should be followed for making their
|
||||
application secure.
|
||||
* Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
|
||||
|
||||
@@ -64,4 +64,3 @@ Notebook | Description
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
[contextual_rag.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/contextual_rag.ipynb) | Performs contextual retrieval-augmented generation (RAG) prepending chunk-specific explanatory context to each chunk before embedding.
[rag-agents-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/local_rag_agents_intel_cpu.ipynb) | Build a RAG agent locally with open source models that routes questions through one of two paths to find answers. The agent generates answers based on documents retrieved from either the vector database or retrieved from web search. If the vector database lacks relevant information, the agent opts for web search. Open-source models for LLM and embeddings are used locally on an Intel Xeon CPU to execute this pipeline.
[rag_mlflow_tracking_evaluation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_mlflow_tracking_evaluation.ipynb) | Guide on how to create a RAG pipeline and track + evaluate it with MLflow.
@@ -79,17 +79,6 @@
"tool_executor = ToolExecutor(tools)"
]
},
{
"cell_type": "markdown",
"id": "168152fc",
"metadata": {},
"source": [
"📘 **Note on `SystemMessage` usage with LangGraph-based agents**\n",
"\n",
"When constructing the `messages` list for an agent, you *must* manually include any `SystemMessage`s.\n",
"Unlike some agent executors in LangChain that set a default, LangGraph requires explicit inclusion."
]
},
{
"cell_type": "markdown",
"id": "fe6e8f78-1ef7-42ad-b2bf-835ed5850553",

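A minimal sketch of the explicit `SystemMessage` handling described in the note above; the `agent` object is a placeholder for whatever compiled LangGraph graph you are invoking:

```python
from langchain_core.messages import HumanMessage, SystemMessage

# LangGraph-based agents do not inject a default system prompt, so any
# SystemMessage has to be part of the input messages you construct.
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?"),
]
# agent.invoke({"messages": messages})  # `agent` is your compiled graph
```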
@@ -47,12 +47,10 @@
"source": [
"### Prerequisites\n",
"\n",
"You'll need to install `langchain-oracledb` with `python -m pip install -U langchain-oracledb` to use this integration.\n",
"\n",
"The `python-oracledb` driver is installed automatically as a dependency of langchain-oracledb.\n",
"Please install the Oracle Database [python-oracledb driver](https://pypi.org/project/oracledb/) to use LangChain with Oracle AI Vector Search:\n",
"\n",
"```\n",
"$ python -m pip install -U langchain-oracledb\n",
"$ python -m pip install --upgrade oracledb\n",
"```"
]
},
@@ -219,7 +217,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
|
||||
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
|
||||
"\n",
|
||||
"# please update with your related information\n",
|
||||
"# make sure that you have onnx file in the system\n",
|
||||
@@ -298,7 +296,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oracledb.document_loaders.oracleai import OracleDocLoader\n",
|
||||
"from langchain_community.document_loaders.oracleai import OracleDocLoader\n",
|
||||
"from langchain_core.documents import Document\n",
|
||||
"\n",
|
||||
"# loading from Oracle Database table\n",
|
||||
@@ -356,7 +354,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oracledb.utilities.oracleai import OracleSummary\n",
|
||||
"from langchain_community.utilities.oracleai import OracleSummary\n",
|
||||
"from langchain_core.documents import Document\n",
|
||||
"\n",
|
||||
"# using 'database' provider\n",
|
||||
@@ -397,7 +395,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oracledb.document_loaders.oracleai import OracleTextSplitter\n",
|
||||
"from langchain_community.document_loaders.oracleai import OracleTextSplitter\n",
|
||||
"from langchain_core.documents import Document\n",
|
||||
"\n",
|
||||
"# split by default parameters\n",
|
||||
@@ -454,7 +452,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
|
||||
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
|
||||
"from langchain_core.documents import Document\n",
|
||||
"\n",
|
||||
"# using ONNX model loaded to Oracle Database\n",
|
||||
@@ -500,14 +498,14 @@
|
||||
"import sys\n",
|
||||
"\n",
|
||||
"import oracledb\n",
|
||||
"from langchain_oracledb.document_loaders.oracleai import (\n",
|
||||
"from langchain_community.document_loaders.oracleai import (\n",
|
||||
" OracleDocLoader,\n",
|
||||
" OracleTextSplitter,\n",
|
||||
")\n",
|
||||
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
|
||||
"from langchain_oracledb.utilities.oracleai import OracleSummary\n",
|
||||
"from langchain_oracledb.vectorstores import oraclevs\n",
|
||||
"from langchain_oracledb.vectorstores.oraclevs import OracleVS\n",
|
||||
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
|
||||
"from langchain_community.utilities.oracleai import OracleSummary\n",
|
||||
"from langchain_community.vectorstores import oraclevs\n",
|
||||
"from langchain_community.vectorstores.oraclevs import OracleVS\n",
|
||||
"from langchain_community.vectorstores.utils import DistanceStrategy\n",
|
||||
"from langchain_core.documents import Document"
|
||||
]
|
||||
@@ -679,19 +677,19 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"query = \"What is Oracle AI Vector Store?\"\n",
|
||||
"db_filter = {\"document_id\": \"1\"}\n",
|
||||
"filter = {\"document_id\": [\"1\"]}\n",
|
||||
"\n",
|
||||
"# Similarity search without a filter\n",
|
||||
"print(vectorstore.similarity_search(query, 1))\n",
|
||||
"\n",
|
||||
"# Similarity search with a filter\n",
|
||||
"print(vectorstore.similarity_search(query, 1, filter=db_filter))\n",
|
||||
"print(vectorstore.similarity_search(query, 1, filter=filter))\n",
|
||||
"\n",
|
||||
"# Similarity search with relevance score\n",
|
||||
"print(vectorstore.similarity_search_with_score(query, 1))\n",
|
||||
"\n",
|
||||
"# Similarity search with relevance score with filter\n",
|
||||
"print(vectorstore.similarity_search_with_score(query, 1, filter=db_filter))\n",
|
||||
"print(vectorstore.similarity_search_with_score(query, 1, filter=filter))\n",
|
||||
"\n",
|
||||
"# Max marginal relevance search\n",
|
||||
"print(vectorstore.max_marginal_relevance_search(query, 1, fetch_k=20, lambda_mult=0.5))\n",
|
||||
@@ -699,7 +697,7 @@
|
||||
"# Max marginal relevance search with filter\n",
|
||||
"print(\n",
|
||||
" vectorstore.max_marginal_relevance_search(\n",
|
||||
" query, 1, fetch_k=20, lambda_mult=0.5, filter=db_filter\n",
|
||||
" query, 1, fetch_k=20, lambda_mult=0.5, filter=filter\n",
|
||||
" )\n",
|
||||
")"
|
||||
]
|
||||
|
||||
@@ -1,455 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3716230e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# RAG Pipeline with MLflow Tracking, Tracing & Evaluation\n",
|
||||
"\n",
|
||||
"This notebook demonstrates how to build a complete Retrieval-Augmented Generation (RAG) pipeline using LangChain and integrate it with MLflow for experiment tracking, tracing, and evaluation.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"- **RAG Pipeline Construction**: Build a complete RAG system using LangChain components\n",
|
||||
"- **MLflow Integration**: Track experiments, parameters, and artifacts\n",
|
||||
"- **Tracing**: Monitor inputs, outputs, retrieved documents, scores, prompts, and timings\n",
|
||||
"- **Evaluation**: Use MLflow's built-in scorers to assess RAG performance\n",
|
||||
"- **Best Practices**: Implement proper configuration management and reproducible experiments\n",
|
||||
"\n",
|
||||
"We'll build a RAG system that can answer questions about academic papers by:\n",
|
||||
"1. Loading and chunking documents from ArXiv\n",
|
||||
"2. Creating embeddings and a vector store\n",
|
||||
"3. Setting up a retrieval-augmented generation chain\n",
|
||||
"4. Tracking all experiments with MLflow\n",
|
||||
"5. Evaluating the system's performance\n",
|
||||
"\n",
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2f7561c4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Setup"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "0814ebe9",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -U langchain mlflow langchain-community arxiv pymupdf langchain-text-splitters langchain-openai"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "747399b6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import mlflow\n",
|
||||
"from mlflow.genai.scorers import RelevanceToQuery, Correctness, ExpectationsGuidelines\n",
|
||||
"from langchain_community.document_loaders import ArxivLoader\n",
|
||||
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
|
||||
"from langchain_core.vectorstores import InMemoryVectorStore\n",
|
||||
"from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n",
|
||||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||||
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
|
||||
"from langchain_core.output_parsers import StrOutputParser"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "4141ee05",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"os.environ[\"OPENAI_API_KEY\"] = \"<YOUR OPENAI API KEY>\"\n",
|
||||
"\n",
|
||||
"mlflow.set_experiment(\"LangChain-RAG-MLflow\")\n",
|
||||
"mlflow.langchain.autolog()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "dd5eb41b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Define all hyperparameters and configuration in a centralized dictionary. This makes it easy to:\n",
|
||||
"- Track different experiment configurations\n",
|
||||
"- Reproduce results\n",
|
||||
"- Perform hyperparameter tuning\n",
|
||||
"\n",
|
||||
"**Key Parameters**:\n",
|
||||
"- `chunk_size`: Size of text chunks for document splitting\n",
|
||||
"- `chunk_overlap`: Overlap between consecutive chunks\n",
|
||||
"- `retriever_k`: Number of documents to retrieve\n",
|
||||
"- `embeddings_model`: OpenAI embedding model\n",
|
||||
"- `llm`: Language model for generation\n",
|
||||
"- `temperature`: Sampling temperature for the LLM"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "6dcdc5d8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"CONFIG = {\n",
|
||||
" \"chunk_size\": 400,\n",
|
||||
" \"chunk_overlap\": 80,\n",
|
||||
" \"retriever_k\": 3,\n",
|
||||
" \"embeddings_model\": \"text-embedding-3-small\",\n",
|
||||
" \"system_prompt\": \"You are a helpful assistant. Use the following context to answer the question. Use three sentences maximum and keep the answer concise.\",\n",
|
||||
" \"llm\": \"gpt-5-nano\",\n",
|
||||
" \"temperature\": 0,\n",
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "8a2985f1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### ArXiv Dcoument Loading and Processing"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "1f32aa36",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'Published': '2023-08-02', 'Title': 'Attention Is All You Need', 'Authors': 'Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin', 'Summary': 'The dominant sequence transduction models are based on complex recurrent or\\nconvolutional neural networks in an encoder-decoder configuration. The best\\nperforming models also connect the encoder and decoder through an attention\\nmechanism. We propose a new simple network architecture, the Transformer, based\\nsolely on attention mechanisms, dispensing with recurrence and convolutions\\nentirely. Experiments on two machine translation tasks show these models to be\\nsuperior in quality while being more parallelizable and requiring significantly\\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014\\nEnglish-to-German translation task, improving over the existing best results,\\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\\ntranslation task, our model establishes a new single-model state-of-the-art\\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction\\nof the training costs of the best models from the literature. We show that the\\nTransformer generalizes well to other tasks by applying it successfully to\\nEnglish constituency parsing both with large and limited training data.'}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Load documents from ArXiv\n",
|
||||
"loader = ArxivLoader(\n",
|
||||
" query=\"1706.03762\",\n",
|
||||
" load_max_docs=1,\n",
|
||||
")\n",
|
||||
"docs = loader.load()\n",
|
||||
"print(docs[0].metadata)\n",
|
||||
"\n",
|
||||
"# Split documents into chunks\n",
|
||||
"splitter = RecursiveCharacterTextSplitter(\n",
|
||||
" chunk_size=CONFIG[\"chunk_size\"],\n",
|
||||
" chunk_overlap=CONFIG[\"chunk_overlap\"],\n",
|
||||
")\n",
|
||||
"chunks = splitter.split_documents(docs)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Join chunks into a single string\n",
|
||||
"def join_chunks(chunks):\n",
|
||||
" return \"\\n\\n\".join([chunk.page_content for chunk in chunks])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6e194ab4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Vector Store and Retriever Setup"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "26dfbeaa",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Create embeddings\n",
|
||||
"embeddings = OpenAIEmbeddings(model=CONFIG[\"embeddings_model\"])\n",
|
||||
"\n",
|
||||
"# Create vector store from documents\n",
|
||||
"vectorstore = InMemoryVectorStore.from_documents(\n",
|
||||
" chunks,\n",
|
||||
" embedding=embeddings,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Create retriever\n",
|
||||
"retriever = vectorstore.as_retriever(search_kwargs={\"k\": CONFIG[\"retriever_k\"]})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "bc1f181b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### RAG Chain Construction using [LCEL](https://python.langchain.com/docs/concepts/lcel/)\n",
|
||||
"\n",
|
||||
"Flow:\n",
|
||||
"1. Query → Retriever (finds relevant chunks)\n",
|
||||
"2. Chunks → join_chunks (creates context)\n",
|
||||
"3. Context + Query → Prompt Template\n",
|
||||
"4. Prompt → Language Model → Response\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "6a810dc3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Initialize the language model\n",
|
||||
"llm = ChatOpenAI(model=CONFIG[\"llm\"], temperature=CONFIG[\"temperature\"])\n",
|
||||
"\n",
|
||||
"# Create the prompt template\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [\n",
|
||||
" (\"system\", CONFIG[\"system_prompt\"] + \"\\n\\nContext:\\n{context}\\n\\n\"),\n",
|
||||
" (\"human\", \"\\n{question}\\n\"),\n",
|
||||
" ]\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Construct the RAG chain\n",
|
||||
"rag_chain = (\n",
|
||||
" {\n",
|
||||
" \"context\": retriever | RunnableLambda(join_chunks),\n",
|
||||
" \"question\": RunnablePassthrough(),\n",
|
||||
" }\n",
|
||||
" | prompt\n",
|
||||
" | llm\n",
|
||||
" | StrOutputParser()\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c04bd019",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Prediction Function with MLflow Tracing\n",
|
||||
"\n",
|
||||
"Create a prediction function decorated with `@mlflow.trace` to automatically log:\n",
|
||||
"- Input queries\n",
|
||||
"- Retrieved documents\n",
|
||||
"- Generated responses\n",
|
||||
"- Execution time\n",
|
||||
"- Chain intermediate steps"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "7b45fc04",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Question: What is the main idea of the paper?\n",
|
||||
"Response: The main idea is to replace recurrent/convolutional sequence models with a pure attention-based architecture called the Transformer. It uses self-attention to model dependencies between all positions in the input and output, enabling full parallelization and better handling of long-range relations. This approach achieves strong results on translation and can extend to other modalities.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"@mlflow.trace\n",
|
||||
"def predict_fn(question: str) -> str:\n",
|
||||
" return rag_chain.invoke(question)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Test the prediction function\n",
|
||||
"sample_question = \"What is the main idea of the paper?\"\n",
|
||||
"response = predict_fn(sample_question)\n",
|
||||
"print(f\"Question: {sample_question}\")\n",
|
||||
"print(f\"Response: {response}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "421469de",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Evaluation Dataset and Scoring\n",
|
||||
"\n",
|
||||
"Define an evaluation dataset and run systematic evaluation using [MLflow's built-in scorers](https://mlflow.org/docs/latest/genai/eval-monitor/scorers/llm-judge/predefined/#available-scorers):\n",
|
||||
"\n",
|
||||
"<u>Evaluation Components:</u>\n",
|
||||
"- **Dataset**: Questions with expected concepts and facts\n",
|
||||
"- **Scorers**: \n",
|
||||
" - `RelevanceToQuery`: Measures how relevant the response is to the question\n",
|
||||
" - `Correctness`: Evaluates factual accuracy of the response\n",
|
||||
" - `ExpectationsGuidelines`: Checks that output matches expectation guidelines\n",
|
||||
"\n",
|
||||
"<u>Best Practices:</u>\n",
|
||||
"- Create diverse test cases covering different query types\n",
|
||||
"- Include expected concepts to guide evaluation\n",
|
||||
"- Use multiple scoring metrics for comprehensive assessment"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "5c1dc4f2",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"2025/08/23 20:14:39 INFO mlflow.models.evaluation.utils.trace: Auto tracing is temporarily enabled during the model evaluation for computing some metrics and debugging. To disable tracing, call `mlflow.autolog(disable=True)`.\n",
|
||||
"2025/08/23 20:14:39 INFO mlflow.genai.utils.data_validation: Testing model prediction with the first sample in the dataset.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "2b6c6687efa24796b39c7951d589d481",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
"text/plain": [
|
||||
"Evaluating: 0%| | 0/3 [Elapsed: 00:00, Remaining: ?] "
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"✨ Evaluation completed.\n",
|
||||
"\n",
|
||||
"Metrics and evaluation results are logged to the MLflow run:\n",
|
||||
" Run name: \u001b[94mbaseline_eval\u001b[0m\n",
|
||||
" Run ID: \u001b[94ma2218d9f24c9415f8040d3b77af103a9\u001b[0m\n",
|
||||
"\n",
|
||||
"To view the detailed evaluation results with sample-wise scores,\n",
|
||||
"open the \u001b[93m\u001b[1mTraces\u001b[0m tab in the Run page in the MLflow UI.\n",
|
||||
"\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Define evaluation dataset\n",
|
||||
"eval_dataset = [\n",
|
||||
" {\n",
|
||||
" \"inputs\": {\"question\": \"What is the main idea of the paper?\"},\n",
|
||||
" \"expectations\": {\n",
|
||||
" \"key_concepts\": [\"attention mechanism\", \"transformer\", \"neural network\"],\n",
|
||||
" \"expected_facts\": [\n",
|
||||
" \"attention mechanism is a key component of the transformer model\"\n",
|
||||
" ],\n",
|
||||
" \"guidelines\": [\"The response must be factual and concise\"],\n",
|
||||
" },\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"inputs\": {\n",
|
||||
" \"question\": \"What's the difference between a transformer and a recurrent neural network?\"\n",
|
||||
" },\n",
|
||||
" \"expectations\": {\n",
|
||||
" \"key_concepts\": [\"sequential\", \"attention mechanism\", \"hidden state\"],\n",
|
||||
" \"expected_facts\": [\n",
|
||||
" \"transformer processes data in parallel while RNN processes data sequentially\"\n",
|
||||
" ],\n",
|
||||
" \"guidelines\": [\n",
|
||||
" \"The response must be factual and focus on the difference between the two models\"\n",
|
||||
" ],\n",
|
||||
" },\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"inputs\": {\"question\": \"What does the attention mechanism do?\"},\n",
|
||||
" \"expectations\": {\n",
|
||||
" \"key_concepts\": [\"query\", \"key\", \"value\", \"relationship\", \"similarity\"],\n",
|
||||
" \"expected_facts\": [\n",
|
||||
" \"attention allows the model to weigh the importance of different parts of the input sequence when processing it\"\n",
|
||||
" ],\n",
|
||||
" \"guidelines\": [\n",
|
||||
" \"The response must be factual and explain the concept of attention\"\n",
|
||||
" ],\n",
|
||||
" },\n",
|
||||
" },\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"# Run evaluation with MLflow\n",
|
||||
"with mlflow.start_run(run_name=\"baseline_eval\") as run:\n",
|
||||
" # Log configuration parameters\n",
|
||||
" mlflow.log_params(CONFIG)\n",
|
||||
"\n",
|
||||
" # Run evaluation\n",
|
||||
" results = mlflow.genai.evaluate(\n",
|
||||
" data=eval_dataset,\n",
|
||||
" predict_fn=predict_fn,\n",
|
||||
" scorers=[RelevanceToQuery(), Correctness(), ExpectationsGuidelines()],\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "52b137c7",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Launch MLflow UI to check out the results\n",
|
||||
"\n",
|
||||
"<u>What you'll see in the UI:</u>\n",
|
||||
"- **Experiments**: Compare different RAG configurations\n",
|
||||
"- **Runs**: Individual experiment runs with metrics and parameters\n",
|
||||
"- **Traces**: Detailed execution traces showing retrieval and generation steps\n",
|
||||
"- **Evaluation Results**: Scoring metrics and detailed comparisons\n",
|
||||
"- **Artifacts**: Saved models, datasets, and other files\n",
|
||||
"\n",
|
||||
"Navigate to `http://localhost:5000` after running the command below."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "817c3799",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"!mlflow ui"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c75861e3",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You should see something like this\n",
|
||||
"\n",
|
||||
""
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "venv",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.13.5"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -97,7 +97,7 @@ def skip_private_members(app, what, name, obj, skip, options):
|
||||
if hasattr(obj, "__doc__") and obj.__doc__ and ":private:" in obj.__doc__:
|
||||
return True
|
||||
if name == "__init__" and obj.__objclass__ is object:
|
||||
# don't document default init
|
||||
# dont document default init
|
||||
return True
|
||||
return None
|
||||
|
||||
|
||||
@@ -217,7 +217,11 @@ def _load_package_modules(
|
||||
# Get the full namespace of the module
|
||||
namespace = str(relative_module_name).replace(".py", "").replace("/", ".")
|
||||
# Keep only the top level namespace
|
||||
top_namespace = namespace.split(".")[0]
|
||||
# (but make special exception for content_blocks and v1.messages)
|
||||
if namespace == "messages.content_blocks" or namespace == "v1.messages":
|
||||
top_namespace = namespace # Keep full namespace for content_blocks
|
||||
else:
|
||||
top_namespace = namespace.split(".")[0]
|
||||
|
||||
try:
|
||||
# If submodule is present, we need to construct the paths in a slightly
|
||||
@@ -468,7 +472,7 @@ def _build_rst_file(package_name: str = "langchain") -> None:
|
||||
"""Create a rst file for building of documentation.
|
||||
|
||||
Args:
|
||||
package_name: Can be either "langchain" or "core"
|
||||
package_name: Can be either "langchain" or "core" or "experimental".
|
||||
"""
|
||||
package_dir = _package_dir(package_name)
|
||||
package_members = _load_package_modules(package_dir)
|
||||
@@ -487,7 +491,7 @@ def _package_namespace(package_name: str) -> str:
|
||||
"""Returns the package name used.
|
||||
|
||||
Args:
|
||||
package_name: Can be either "langchain" or "core"
|
||||
package_name: Can be either "langchain" or "core" or "experimental".
|
||||
|
||||
Returns:
|
||||
modified package_name: Can be either "langchain" or "langchain_{package_name}"
|
||||
@@ -545,13 +549,7 @@ def _build_index(dirs: List[str]) -> None:
|
||||
"ai21": "AI21",
|
||||
"ibm": "IBM",
|
||||
}
|
||||
ordered = [
|
||||
"core",
|
||||
"langchain",
|
||||
"text-splitters",
|
||||
"community",
|
||||
"standard-tests",
|
||||
]
|
||||
ordered = ["core", "langchain", "text-splitters", "community", "experimental"]
|
||||
main_ = [dir_ for dir_ in ordered if dir_ in dirs]
|
||||
integrations = sorted(dir_ for dir_ in dirs if dir_ not in main_)
|
||||
doc = """# LangChain Python API Reference
|
||||
|
||||
File diff suppressed because one or more lines are too long
@@ -1,4 +1,4 @@
# Async programming with LangChain
# Async programming with langchain

:::info Prerequisites
* [Runnable interface](/docs/concepts/runnables)
@@ -12,7 +12,7 @@ You are expected to be familiar with asynchronous programming in Python before r
This guide specifically focuses on what you need to know to work with LangChain in an asynchronous context, assuming that you are already familiar with asynchronous programming.
:::

## LangChain asynchronous APIs
## Langchain asynchronous APIs

Many LangChain APIs are designed to be asynchronous, allowing you to build efficient and responsive applications.
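For example, the async counterparts of `invoke`, `stream`, and `batch` are `ainvoke`, `astream`, and `abatch`. A minimal sketch, assuming `langchain-openai` is installed and an OpenAI API key is configured:

```python
import asyncio

from langchain_openai import ChatOpenAI


async def main() -> None:
    model = ChatOpenAI(model="gpt-4o-mini")
    # ainvoke is the asynchronous counterpart of invoke.
    response = await model.ainvoke("Hello!")
    print(response.content)


asyncio.run(main())
```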
@@ -31,7 +31,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[Vector stores](/docs/concepts/vectorstores)**: Storage of and efficient search over vectors and associated metadata.
- **[Retriever](/docs/concepts/retrievers)**: A component that returns relevant documents from a knowledge base in response to a query.
- **[Retrieval Augmented Generation (RAG)](/docs/concepts/rag)**: A technique that enhances language models by combining them with external knowledge bases.
- **[Agents](/docs/concepts/agents)**: Use a [language model](/docs/concepts/chat_models) to choose a sequence of actions to take. Agents can interact with external resources via [tools](/docs/concepts/tools).
- **[Agents](/docs/concepts/agents)**: Use a [language model](/docs/concepts/chat_models) to choose a sequence of actions to take. Agents can interact with external resources via [tool](/docs/concepts/tools).
- **[Prompt templates](/docs/concepts/prompt_templates)**: Component for factoring out the static parts of a model "prompt" (usually a sequence of messages). Useful for serializing, versioning, and reusing these static parts.
- **[Output parsers](/docs/concepts/output_parsers)**: Responsible for taking the output of a model and transforming it into a more suitable format for downstream tasks. Output parsers were primarily useful prior to the general availability of [tool calling](/docs/concepts/tool_calling) and [structured outputs](/docs/concepts/structured_outputs).
- **[Few-shot prompting](/docs/concepts/few_shot_prompting)**: A technique for improving model performance by providing a few examples of the task to perform in the prompt.
@@ -48,7 +48,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[AIMessage](/docs/concepts/messages#aimessage)**: Represents a complete response from an AI model.
- **[astream_events](/docs/concepts/chat_models#key-methods)**: Stream granular information from [LCEL](/docs/concepts/lcel) chains.
- **[BaseTool](/docs/concepts/tools/#tool-interface)**: The base class for all tools in LangChain.
- **[batch](/docs/concepts/runnables)**: Used to execute a runnable with batch inputs.
- **[batch](/docs/concepts/runnables)**: Use to execute a runnable with batch inputs.
- **[bind_tools](/docs/concepts/tool_calling/#tool-binding)**: Allows models to interact with tools.
- **[Caching](/docs/concepts/chat_models#caching)**: Storing results to avoid redundant calls to a chat model.
- **[Chat models](/docs/concepts/multimodality/#multimodality-in-chat-models)**: Chat models that handle multiple data modalities.
@@ -147,7 +147,7 @@ An `AIMessage` has the following attributes. The attributes which are **standard
| `tool_calls` | Standardized | Tool calls associated with the message. See [tool calling](/docs/concepts/tool_calling) for details. |
| `invalid_tool_calls` | Standardized | Tool calls with parsing errors associated with the message. See [tool calling](/docs/concepts/tool_calling) for details. |
| `usage_metadata` | Standardized | Usage metadata for a message, such as [token counts](/docs/concepts/tokens). See [Usage Metadata API Reference](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.UsageMetadata.html). |
| `id` | Standardized | An optional unique identifier for the message, ideally provided by the provider/model that created the message. See [Message IDs](#message-ids) for details. |
| `id` | Standardized | An optional unique identifier for the message, ideally provided by the provider/model that created the message. |
| `response_metadata` | Raw | Response metadata, e.g., response headers, logprobs, token counts. |
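As a rough illustration of these attributes (assuming `model` is any instantiated chat model), you might inspect a response like this:

```python
response = model.invoke("What is 2 + 2?")  # `model` is any chat model

print(response.content)            # text of the reply
print(response.tool_calls)         # [] unless the model requested a tool call
print(response.usage_metadata)     # standardized token counts, e.g. {'input_tokens': ...}
print(response.response_metadata)  # raw, provider-specific details
```
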
#### content
@@ -243,37 +243,3 @@ At the moment, the output of the model will be in terms of LangChain messages, s
need OpenAI format for the output as well.

The [convert_to_openai_messages](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.convert_to_openai_messages.html) utility function can be used to convert from LangChain messages to OpenAI format.
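A minimal sketch of that conversion:

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.messages.utils import convert_to_openai_messages

messages = [HumanMessage(content="hi"), AIMessage(content="hello!")]
# Produces OpenAI-style dicts, roughly:
# [{'role': 'user', 'content': 'hi'}, {'role': 'assistant', 'content': 'hello!'}]
print(convert_to_openai_messages(messages))
```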

## Message IDs

LangChain messages include an optional `id` field that serves as a unique identifier. Understanding when and how these IDs are assigned can be helpful for debugging, tracing, and working with message history.

### When Messages Get IDs

Messages receive IDs in the following scenarios:

**Automatically assigned by LangChain:**
- When generated through chat model invocation (`.invoke()`, `.stream()`, `.astream()`) with an active run manager/tracing context
- IDs follow the format:
  - `run-$RUN_ID` (e.g., `run-ba48f958-6402-41a5-b461-5e250a4ebd36-0`)
  - `run-$RUN_ID-$IDX` (e.g., `run-ba48f958-6402-41a5-b461-5e250a4ebd36-1`) when there are multiple generations from a single chat model invocation.

**Provider-assigned IDs (highest priority):**
- When the model provider assigns its own ID to the message
- These take precedence over LangChain-generated run IDs
- Format varies by provider

### When Messages Don't Get IDs

Messages will **not** receive IDs in these situations:

- **Manual message creation**: Messages created directly (e.g., `AIMessage(content="hello")`) without going through chat models
- **No run manager context**: When there's no active callback/tracing infrastructure

### ID Priority System

LangChain follows a clear precedence system for message IDs:

1. **Provider-assigned IDs** (highest priority): IDs from the model provider
2. **LangChain run IDs** (medium priority): IDs starting with `run-`
3. **Manual IDs** (lowest priority): IDs explicitly set by users
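
A small sketch of the two ends of this spectrum (the chat model call is commented out since it requires a configured provider):

```python
from langchain_core.messages import AIMessage

manual = AIMessage(content="hello")
print(manual.id)  # None — manually created messages get no automatic ID

# A message returned by a chat model carries either a provider-assigned ID
# or a LangChain `run-...` ID, following the priority order above:
# response = model.invoke("hi")
# print(response.id)
```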
@@ -53,29 +53,17 @@ This is how you use MessagesPlaceholder.

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage
from langchain_core.messages import HumanMessage

prompt_template = ChatPromptTemplate([
    ("system", "You are a helpful assistant"),
    MessagesPlaceholder("msgs")
])

# Simple example with one message
prompt_template.invoke({"msgs": [HumanMessage(content="hi!")]})

# More complex example with conversation history
messages_to_pass = [
    HumanMessage(content="What's the capital of France?"),
    AIMessage(content="The capital of France is Paris."),
    HumanMessage(content="And what about Germany?")
]

formatted_prompt = prompt_template.invoke({"msgs": messages_to_pass})
print(formatted_prompt)
```

This will produce a list of four messages total: the system message plus the three messages we passed in (two HumanMessages and one AIMessage).
This will produce a list of two messages, the first one being a system message, and the second one being the HumanMessage we passed in.
If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in).
This is useful for letting a list of messages be slotted into a particular spot.

@@ -29,22 +29,6 @@ model_with_structure = model.with_structured_output(schema)
structured_output = model_with_structure.invoke(user_input)
```

:::warning[Tool Order Matters]

When combining structured output with additional tools, bind tools **first**, then apply structured output:

```python
# Correct
model_with_tools = model.bind_tools([tool1, tool2])
structured_model = model_with_tools.with_structured_output(schema)

# Incorrect - will cause tool resolution errors
structured_model = model.with_structured_output(schema)
broken_model = structured_model.bind_tools([tool1, tool2])
```

:::

## Schema definition

The central concept is that the output structure of model responses needs to be represented in some way.

@@ -31,7 +31,7 @@ The key attributes that correspond to the tool's **schema**:
The key methods to execute the function associated with the **tool**:

- **invoke**: Invokes the tool with the given arguments.
- **ainvoke**: Invokes the tool with the given arguments, asynchronously. Used for [async programming with LangChain](/docs/concepts/async).
- **ainvoke**: Invokes the tool with the given arguments, asynchronously. Used for [async programming with Langchain](/docs/concepts/async).

## Create tools using the `@tool` decorator

@@ -171,26 +171,6 @@ Please see the [InjectedState](https://langchain-ai.github.io/langgraph/referenc

Please see the [InjectedStore](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.InjectedStore) documentation for more details.

## Tool Artifacts vs. Injected State

Although similar conceptually, tool artifacts in LangChain and [injected state in LangGraph](https://langchain-ai.github.io/langgraph/reference/agents/#langgraph.prebuilt.tool_node.InjectedState) serve different purposes and operate at different levels of abstraction.

**Tool Artifacts**

- **Purpose:** Store and pass data between tool executions within a single chain/workflow
- **Scope:** Limited to tool-to-tool communication
- **Lifecycle:** Tied to individual tool calls and their immediate context
- **Usage:** Temporary storage for intermediate results that tools need to share

**Injected State (LangGraph)**

- **Purpose:** Maintain persistent state across the entire graph execution
- **Scope:** Global to the entire graph workflow
- **Lifecycle:** Persists throughout the entire graph execution and can be saved/restored
- **Usage:** Long-term state management, conversation memory, user context, workflow checkpointing

Tool artifacts are ephemeral data passed between tools, while injected state is persistent workflow-level state that survives across multiple steps, tool calls, and even execution sessions in LangGraph.
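
For reference, a minimal sketch of a tool that returns an artifact alongside its model-facing content, using the `content_and_artifact` response format from `langchain-core`:

```python
from langchain_core.tools import tool


@tool(response_format="content_and_artifact")
def sample_numbers(n: int) -> tuple[str, list[int]]:
    """Return n sample numbers."""
    numbers = list(range(n))
    # The first element goes to the model; the second is kept as the artifact.
    return f"Generated {n} numbers.", numbers


# Invoking with a full tool-call dict yields a ToolMessage carrying the artifact.
msg = sample_numbers.invoke(
    {"name": "sample_numbers", "args": {"n": 3}, "id": "call_1", "type": "tool_call"}
)
print(msg.content, msg.artifact)  # Generated 3 numbers. [0, 1, 2]
```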

## Best practices

When designing tools to be used by models, keep the following in mind:

@@ -7,4 +7,4 @@ Traces contain individual steps called `runs`. These can be individual calls fro
tool, or sub-chains.
Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues.

For a deeper dive, check out [this LangSmith conceptual guide](https://docs.langchain.com/langsmith/observability-quickstart).
For a deeper dive, check out [this LangSmith conceptual guide](https://docs.smith.langchain.com/concepts/tracing).

@@ -3,9 +3,9 @@
Here are some things to keep in mind for all types of contributions:

- Follow the ["fork and pull request"](https://docs.github.com/en/get-started/exploring-projects-on-github/contributing-to-a-project) workflow.
- Fill out the checked-in pull request template when opening pull requests. Note related issues.
- Fill out the checked-in pull request template when opening pull requests. Note related issues and tag relevant maintainers.
- Ensure your PR passes formatting, linting, and testing checks before requesting a review.
- If you would like comments or feedback on your current progress, please open an issue or discussion.
- If you would like comments or feedback on your current progress, please open an issue or discussion and tag a maintainer.
- See the sections on [Testing](setup.mdx#testing) and [Formatting and Linting](setup.mdx#formatting-and-linting) for how to run these checks locally.
- Backwards compatibility is key. Your changes must not be breaking, except in case of critical bug and security fixes.
- Look for duplicate PRs or issues that have already been opened before opening a new one.

@@ -223,49 +223,6 @@ If codespell is incorrectly flagging a word, you can skip spellcheck for that wo
ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogyny,unsecure'
```

### Pre-commit

We use [pre-commit](https://pre-commit.com/) to ensure commits are formatted/linted.

#### Installing Pre-commit

First, install pre-commit:

```bash
# Option 1: Using uv (recommended)
uv tool install pre-commit

# Option 2: Using Homebrew (globally for macOS/Linux)
brew install pre-commit

# Option 3: Using pip
pip install pre-commit
```

Then install the git hook scripts:

```bash
pre-commit install
```

#### How Pre-commit Works

Once installed, pre-commit will automatically run on every `git commit`. Hooks are specified in `.pre-commit-config.yaml` and will:

- Format code using `ruff` for the specific library/package you're modifying
- Only run on files that have changed
- Prevent commits if formatting fails

#### Skipping Pre-commit

In exceptional cases, you can skip pre-commit hooks with:

```bash
git commit --no-verify
```

However, this is discouraged as the CI system will still enforce the same formatting rules.

## Working with optional dependencies

`langchain`, `langchain-community`, and `langchain-experimental` rely on optional dependencies to keep these packages lightweight.

@@ -79,7 +79,7 @@ Here are some high-level tips on writing a good how-to guide:

### Conceptual guide

LangChain's conceptual guides fall under the **Explanation** quadrant of Diataxis. These guides should cover LangChain terms and concepts
LangChain's conceptual guide falls under the **Explanation** quadrant of Diataxis. These guides should cover LangChain terms and concepts
in a more abstract way than how-to guides or tutorials, targeting curious users interested in
gaining a deeper understanding and insights of the framework. Try to avoid excessively large code examples as the primary goal is to
provide perspective to the user rather than to finish a practical project. These guides should cover **why** things work the way they do.
@@ -105,7 +105,7 @@ Here are some high-level tips on writing a good conceptual guide:
### References

References contain detailed, low-level information that describes exactly what functionality exists and how to use it.
In LangChain, these are mainly our API reference pages, which are populated from docstrings within code.
In LangChain, this is mainly our API reference pages, which are populated from docstrings within code.
References pages are generally not read end-to-end, but are consulted as necessary when a user needs to know
how to use something specific.

@@ -119,7 +119,7 @@ but here are some high-level tips on writing a good docstring:
- Be concise
- Discuss special cases and deviations from a user's expectations
- Go into detail on required inputs and outputs
- Light details on when one might use the feature are fine, but in-depth details belong in other sections
- Light details on when one might use the feature are fine, but in-depth details belong in other sections.

Each category serves a distinct purpose and requires a specific approach to writing and structuring the content.

@@ -127,17 +127,17 @@ Each category serves a distinct purpose and requires a specific approach to writ

Here are some other guidelines you should think about when writing and organizing documentation.

We generally do not merge new tutorials from outside contributors without an acute need.
We generally do not merge new tutorials from outside contributors without an actue need.
We welcome updates as well as new integration docs, how-tos, and references.

### Avoid duplication

Multiple pages that cover the same material in depth are difficult to maintain and cause confusion. There should
be only one (very rarely two) canonical pages for a given concept or feature. Instead, you should link to other guides.
be only one (very rarely two), canonical pages for a given concept or feature. Instead, you should link to other guides.

### Link to other sections

Because sections of the docs do not exist in a vacuum, it is important to link to other sections frequently
Because sections of the docs do not exist in a vacuum, it is important to link to other sections frequently,
to allow a developer to learn more about an unfamiliar topic within the flow of reading.

This includes linking to the API references and conceptual sections!

@@ -33,7 +33,7 @@ Sometimes you want to make a small change, like fixing a typo, and the easiest w
- Click the "Commit changes..." button at the top-right corner of the page.
- Give your commit a title like "Fix typo in X section."
- Optionally, write an extended commit description.
- Click "Propose changes".
- Click "Propose changes"

5. **Submit a pull request (PR):**
- GitHub will redirect you to a page where you can create a pull request.

@@ -159,7 +159,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 8,
|
||||
"id": "321e3036-abd2-4e1f-bcc6-606efd036954",
|
||||
"metadata": {
|
||||
"execution": {
|
||||
@@ -183,7 +183,7 @@
|
||||
],
|
||||
"source": [
|
||||
"configurable_model.invoke(\n",
|
||||
" \"what's your name\", config={\"configurable\": {\"model\": \"claude-3-5-sonnet-latest\"}}\n",
|
||||
" \"what's your name\", config={\"configurable\": {\"model\": \"claude-3-5-sonnet-20240620\"}}\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -234,7 +234,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 7,
|
||||
"id": "6c8755ba-c001-4f5a-a497-be3f1db83244",
|
||||
"metadata": {
|
||||
"execution": {
|
||||
@@ -261,7 +261,7 @@
|
||||
" \"what's your name\",\n",
|
||||
" config={\n",
|
||||
" \"configurable\": {\n",
|
||||
" \"first_model\": \"claude-3-5-sonnet-latest\",\n",
|
||||
" \"first_model\": \"claude-3-5-sonnet-20240620\",\n",
|
||||
" \"first_temperature\": 0.5,\n",
|
||||
" \"first_max_tokens\": 100,\n",
|
||||
" }\n",
|
||||
@@ -336,7 +336,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 9,
|
||||
"id": "e57dfe9f-cd24-4e37-9ce9-ccf8daf78f89",
|
||||
"metadata": {
|
||||
"execution": {
|
||||
@@ -368,14 +368,14 @@
|
||||
"source": [
|
||||
"llm_with_tools.invoke(\n",
|
||||
" \"what's bigger in 2024 LA or NYC\",\n",
|
||||
" config={\"configurable\": {\"model\": \"claude-3-5-sonnet-latest\"}},\n",
|
||||
" config={\"configurable\": {\"model\": \"claude-3-5-sonnet-20240620\"}},\n",
|
||||
").tool_calls"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "langchain-monorepo",
|
||||
"display_name": "langchain",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
@@ -389,7 +389,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.12.11"
|
||||
"version": "3.10.16"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -741,13 +741,13 @@
|
||||
"\n",
|
||||
"If you're using tools with agents, you will likely need an error handling strategy, so the agent can recover from the error and continue execution.\n",
|
||||
"\n",
|
||||
"A simple strategy is to throw a `ToolException` from inside the tool and specify an error handler using `handle_tool_errors`. \n",
|
||||
"A simple strategy is to throw a `ToolException` from inside the tool and specify an error handler using `handle_tool_error`. \n",
|
||||
"\n",
|
||||
"When the error handler is specified, the exception will be caught and the error handler will decide which output to return from the tool.\n",
|
||||
"\n",
|
||||
"You can set `handle_tool_errors` to `True`, a string value, or a function. If it's a function, the function should take a `ToolException` as a parameter and return a value.\n",
|
||||
"You can set `handle_tool_error` to `True`, a string value, or a function. If it's a function, the function should take a `ToolException` as a parameter and return a value.\n",
|
||||
"\n",
|
||||
"Please note that only raising a `ToolException` won't be effective. You need to first set the `handle_tool_errors` of the tool because its default value is `False`."
|
||||
"Please note that only raising a `ToolException` won't be effective. You need to first set the `handle_tool_error` of the tool because its default value is `False`."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -777,7 +777,7 @@
|
||||
"id": "9d93b217-1d44-4d31-8956-db9ea680ff4f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Here's an example with the default `handle_tool_errors=True` behavior."
|
||||
"Here's an example with the default `handle_tool_error=True` behavior."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -807,7 +807,7 @@
|
||||
"source": [
|
||||
"get_weather_tool = StructuredTool.from_function(\n",
|
||||
" func=get_weather,\n",
|
||||
" handle_tool_errors=True,\n",
|
||||
" handle_tool_error=True,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"get_weather_tool.invoke({\"city\": \"foobar\"})"
|
||||
@@ -818,7 +818,7 @@
|
||||
"id": "f91d6dc0-3271-4adc-a155-21f2e62ffa56",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can set `handle_tool_errors` to a string that will always be returned."
|
||||
"We can set `handle_tool_error` to a string that will always be returned."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -848,7 +848,7 @@
|
||||
"source": [
|
||||
"get_weather_tool = StructuredTool.from_function(\n",
|
||||
" func=get_weather,\n",
|
||||
" handle_tool_errors=\"There is no such city, but it's probably above 0K there!\",\n",
|
||||
" handle_tool_error=\"There is no such city, but it's probably above 0K there!\",\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"get_weather_tool.invoke({\"city\": \"foobar\"})"
|
||||
@@ -893,7 +893,7 @@
|
||||
"\n",
|
||||
"get_weather_tool = StructuredTool.from_function(\n",
|
||||
" func=get_weather,\n",
|
||||
" handle_tool_errors=_handle_error,\n",
|
||||
" handle_tool_error=_handle_error,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"get_weather_tool.invoke({\"city\": \"foobar\"})"
|
||||
|
||||
@@ -565,7 +565,7 @@
"id": "3ac2c37a-06a1-40d3-a192-9078eb83994b",
"metadata": {},
"source": [
"<table><thead><tr><th colspan=\"3\">Table 1: Current layout detection models in the LayoutParser model zoo</th></tr><tr><th>Dataset</th><th>Base Model1</th><th>Large Model Notes</th></tr></thead><tbody><tr><td>PubLayNet [38]</td><td>F/M</td><td>Layouts of modern scientific documents</td></tr><tr><td>PRImA</td><td>M</td><td>Layouts of scanned modern magazines and scientific reports</td></tr><tr><td>Newspaper</td><td>F</td><td>Layouts of scanned US newspapers from the 20th century</td></tr><tr><td>TableBank [18]</td><td>F</td><td>Table region on modern scientific and business document</td></tr><tr><td>HJDataset</td><td>F/M</td><td>Layouts of history Japanese documents</td></tr></tbody></table>"
"<table><thead><tr><th colspan=\"3\">able 1. LUllclll 1ayoul actCCLloll 1110AdCs 111 L1C LayoOulralsel 1110U4cl 200</th></tr><tr><th>Dataset</th><th>| Base Model\\'|</th><th>Notes</th></tr></thead><tbody><tr><td>PubLayNet [38]</td><td>F/M</td><td>Layouts of modern scientific documents</td></tr><tr><td>PRImA</td><td>M</td><td>Layouts of scanned modern magazines and scientific reports</td></tr><tr><td>Newspaper</td><td>F</td><td>Layouts of scanned US newspapers from the 20th century</td></tr><tr><td>TableBank [18]</td><td>F</td><td>Table region on modern scientific and business document</td></tr><tr><td>HJDataset</td><td>F/M</td><td>Layouts of history Japanese documents</td></tr></tbody></table>"
]
},
{
@@ -122,13 +122,13 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
"# from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-4-turbo\")\n",

@@ -5,7 +5,7 @@ sidebar_class_name: hidden
|
||||
|
||||
# How-to guides
|
||||
|
||||
Here you’ll find answers to "How do I….?" types of questions.
|
||||
Here you’ll find answers to “How do I….?” types of questions.
|
||||
These guides are *goal-oriented* and *concrete*; they're meant to help you complete a specific task.
|
||||
For conceptual explanations see the [Conceptual guide](/docs/concepts/).
|
||||
For end-to-end walkthroughs see [Tutorials](/docs/tutorials).
|
||||
@@ -34,7 +34,7 @@ These are the core building blocks you can use when building applications.
|
||||
[Chat Models](/docs/concepts/chat_models) are newer forms of language models that take messages in and output a message.
|
||||
See [supported integrations](/docs/integrations/chat/) for details on getting started with chat models from a specific provider.
|
||||
|
||||
- [How to: initialize any model in one line](/docs/how_to/chat_models_universal_init/)
|
||||
- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
|
||||
- [How to: work with local models](/docs/how_to/local_llms)
|
||||
- [How to: do function/tool calling](/docs/how_to/tool_calling)
|
||||
- [How to: get models to return structured output](/docs/how_to/structured_output)
|
||||
@@ -47,7 +47,7 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
|
||||
- [How to: use chat model to call tools](/docs/how_to/tool_calling)
|
||||
- [How to: stream tool calls](/docs/how_to/tool_streaming)
|
||||
- [How to: handle rate limits](/docs/how_to/chat_model_rate_limiting)
|
||||
- [How to: few-shot prompt tool behavior](/docs/how_to/tools_few_shot)
|
||||
- [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
|
||||
- [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific)
|
||||
- [How to: force a specific tool call](/docs/how_to/tool_choice)
|
||||
- [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
|
||||
@@ -64,8 +64,8 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st

[Prompt Templates](/docs/concepts/prompt_templates) are responsible for formatting user input into a format that can be passed to a language model.

- [How to: use few-shot examples](/docs/how_to/few_shot_examples)
- [How to: use few-shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
- [How to: use few shot examples](/docs/how_to/few_shot_examples)
- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
- [How to: partially format prompt templates](/docs/how_to/prompts_partial)
- [How to: compose prompts together](/docs/how_to/prompts_composition)
- [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/)
@@ -168,7 +168,7 @@ See [supported integrations](/docs/integrations/vectorstores/) for details on ge

Indexing is the process of keeping your vectorstore in-sync with the underlying data source.

- [How to: reindex data to keep your vectorstore in sync with the underlying data source](/docs/how_to/indexing)
- [How to: reindex data to keep your vectorstore in-sync with the underlying data source](/docs/how_to/indexing)

### Tools

@@ -178,7 +178,7 @@ LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pa
- [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin)
- [How to: use chat models to call tools](/docs/how_to/tool_calling)
- [How to: pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model)
- [How to: pass runtime values to tools](/docs/how_to/tool_runtime)
- [How to: pass run time values to tools](/docs/how_to/tool_runtime)
- [How to: add a human-in-the-loop for tools](/docs/how_to/tools_human)
- [How to: handle tool errors](/docs/how_to/tools_error)
- [How to: force models to call a tool](/docs/how_to/tool_choice)
@@ -297,7 +297,7 @@ For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/).
You can use an LLM to do question answering over graph databases.
For a high-level tutorial, check out [this guide](/docs/tutorials/graph/).

- [How to: add a semantic layer over a database](/docs/how_to/graph_semantic)
- [How to: add a semantic layer over the database](/docs/how_to/graph_semantic)
- [How to: construct knowledge graphs](/docs/how_to/graph_constructing)

### Summarization
@@ -345,7 +345,7 @@ LangGraph is an extension of LangChain aimed at
building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.

LangGraph documentation is currently hosted on a separate site.
You can find the [LangGraph guides here](https://langchain-ai.github.io/langgraph/guides/).
You can peruse [LangGraph how-to guides here](https://langchain-ai.github.io/langgraph/how-tos/).

## [LangSmith](https://docs.smith.langchain.com/)

@@ -61,7 +61,7 @@
" * document addition by id (`add_documents` method with `ids` argument)\n",
" * delete by id (`delete` method with `ids` argument)\n",
"\n",
"Compatible Vectorstores: `AnalyticDB`, `AstraDB`, `AwaDB`, `AzureCosmosDBNoSqlVectorSearch`, `AzureCosmosDBVectorSearch`, `AzureSearch`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MongoDBAtlasVectorSearch`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SingleStoreDB`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `AzureCosmosDBNoSqlVectorSearch`, `AzureCosmosDBVectorSearch`, `AzureSearch`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MongoDBAtlasVectorSearch`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SingleStoreDB`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
" \n",
"## Caution\n",
"\n",

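The indexing API diffed above pairs a vectorstore with a record manager so that re-runs only upsert changed documents. A minimal sketch, assuming a SQLite-backed `SQLRecordManager` and FAISS (one of the compatible stores listed in the hunk); `docs` here is a hypothetical one-document list:

```python
# Sketch only: assumes langchain, faiss-cpu, and an OPENAI_API_KEY are available.
from langchain.indexes import SQLRecordManager, index
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

docs = [Document(page_content="hello", metadata={"source": "a.txt"})]

vectorstore = FAISS.from_documents(docs, OpenAIEmbeddings())
record_manager = SQLRecordManager(
    "faiss/demo", db_url="sqlite:///record_manager_cache.sql"
)
record_manager.create_schema()

# Re-running this call skips unchanged documents and, with "incremental"
# cleanup, deletes stale ones derived from the same source.
index(docs, record_manager, vectorstore, cleanup="incremental", source_id_key="source")
```
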
@@ -46,7 +46,7 @@
"\n",
"1. [`llama.cpp`](https://github.com/ggerganov/llama.cpp): C++ implementation of llama inference code with [weight optimization / quantization](https://finbarr.ca/how-is-llama-cpp-possible/)\n",
"2. [`gpt4all`](https://docs.gpt4all.io/index.html): Optimized C backend for inference\n",
"3. [`ollama`](https://github.com/ollama/ollama): Bundles model weights and environment into an app that runs on device and serves the LLM\n",
"3. [`Ollama`](https://ollama.ai/): Bundles model weights and environment into an app that runs on device and serves the LLM\n",
"4. [`llamafile`](https://github.com/Mozilla-Ocho/llamafile): Bundles model weights and everything needed to run the model in a single file, allowing you to run the LLM locally from this file without any additional installation steps\n",
"\n",
"In general, these frameworks will do a few things:\n",
@@ -74,12 +74,12 @@
"\n",
"## Quickstart\n",
"\n",
"[Ollama](https://ollama.com/) is one way to easily run inference on macOS.\n",
"[`Ollama`](https://ollama.ai/) is one way to easily run inference on macOS.\n",
" \n",
"The instructions [here](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) provide details, which we summarize:\n",
"The instructions [here](https://github.com/jmorganca/ollama?tab=readme-ov-file#ollama) provide details, which we summarize:\n",
" \n",
"* [Download and run](https://ollama.ai/download) the app\n",
"* From command line, fetch a model from this [list of options](https://ollama.com/search): e.g., `ollama pull gpt-oss:20b`\n",
"* From command line, fetch a model from this [list of options](https://github.com/jmorganca/ollama): e.g., `ollama pull llama3.1:8b`\n",
"* When the app is running, all models are automatically served on `localhost:11434`\n"
]
},
@@ -95,7 +95,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "86178adb",
"metadata": {},
"outputs": [
@@ -111,11 +111,11 @@
}
],
"source": [
"from langchain_ollama import ChatOllama\n",
"from langchain_ollama import OllamaLLM\n",
"\n",
"llm = ChatOllama(model=\"gpt-oss:20b\", validate_model_on_init=True)\n",
"llm = OllamaLLM(model=\"llama3.1:8b\")\n",
"\n",
"llm.invoke(\"The first man on the moon was ...\").content"
"llm.invoke(\"The first man on the moon was ...\")"
]
},
{
@@ -200,7 +200,7 @@
"\n",
"### Running Apple silicon GPU\n",
"\n",
"`ollama` and [`llamafile`](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#gpu-support) will automatically utilize the GPU on Apple devices.\n",
"`Ollama` and [`llamafile`](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#gpu-support) will automatically utilize the GPU on Apple devices.\n",
" \n",
"Other frameworks require the user to set up the environment to utilize the Apple GPU.\n",
"\n",
@@ -212,15 +212,15 @@
"\n",
"In particular, ensure that conda is using the correct virtual environment that you created (`miniforge3`).\n",
"\n",
"e.g., for me:\n",
"E.g., for me:\n",
"\n",
"```shell\n",
"```\n",
"conda activate /Users/rlm/miniforge3/envs/llama\n",
"```\n",
"\n",
"With the above confirmed, then:\n",
"\n",
"```shell\n",
"```\n",
"CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir\n",
"```"
]
@@ -236,16 +236,20 @@
"\n",
"1. [`HuggingFace`](https://huggingface.co/TheBloke) - Many quantized model are available for download and can be run with framework such as [`llama.cpp`](https://github.com/ggerganov/llama.cpp). You can also download models in [`llamafile` format](https://huggingface.co/models?other=llamafile) from HuggingFace.\n",
"2. [`gpt4all`](https://gpt4all.io/index.html) - The model explorer offers a leaderboard of metrics and associated quantized models available for download \n",
"3. [`ollama`](https://github.com/jmorganca/ollama) - Several models can be accessed directly via `pull`\n",
"3. [`Ollama`](https://github.com/jmorganca/ollama) - Several models can be accessed directly via `pull`\n",
"\n",
"### Ollama\n",
"\n",
"With [Ollama](https://github.com/ollama/ollama), fetch a model via `ollama pull <model family>:<tag>`."
"With [Ollama](https://github.com/jmorganca/ollama), fetch a model via `ollama pull <model family>:<tag>`:\n",
"\n",
"* E.g., for Llama 2 7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n",
"* We can also specify a particular version from the [model list](https://github.com/jmorganca/ollama?tab=readme-ov-file#model-library), e.g., `ollama pull llama2:13b`\n",
"* See the full set of parameters on the [API reference page](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.ollama.Ollama.html)"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 42,
"id": "8ecd2f78",
"metadata": {},
"outputs": [
@@ -261,7 +265,7 @@
}
],
"source": [
"llm = ChatOllama(model=\"gpt-oss:20b\")\n",
"llm = OllamaLLM(model=\"llama2:13b\")\n",
"llm.invoke(\"The first man on the moon was ... think step by step\")"
]
},
@@ -690,7 +694,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -704,7 +708,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.11"
"version": "3.10.5"
}
},
"nbformat": 4,

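For reference, the pattern the updated quickstart settles on looks like this end to end. A minimal sketch, assuming a local Ollama server is running and the model has been pulled (`ollama pull gpt-oss:20b`):

```python
# Assumes `ollama serve` is running on localhost:11434 with the model pulled.
from langchain_ollama import ChatOllama

llm = ChatOllama(model="gpt-oss:20b", validate_model_on_init=True)
response = llm.invoke("The first man on the moon was ...")
print(response.content)  # ChatOllama returns an AIMessage, hence .content
```
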
@@ -15,7 +15,7 @@
"id": "f2195672-0cab-4967-ba8a-c6544635547d",
"metadata": {},
"source": [
"# How to deal with high-cardinality categoricals when doing query analysis\n",
"# How deal with high cardinality categoricals when doing query analysis\n",
"\n",
"You may want to do query analysis to create a filter on a categorical column. One of the difficulties here is that you usually need to specify the EXACT categorical value. The issue is you need to make sure the LLM generates that categorical value exactly. This can be done relatively easy with prompting when there are only a few values that are valid. When there are a high number of valid values then it becomes more difficult, as those values may not fit in the LLM context, or (if they do) there may be too many for the LLM to properly attend to.\n",
"\n",

@@ -74,12 +74,12 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "a88ff70c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.text_splitter import SemanticChunker\n",
"# from langchain_experimental.text_splitter import SemanticChunker\n",
"from langchain_openai.embeddings import OpenAIEmbeddings\n",
"\n",
"text_splitter = SemanticChunker(OpenAIEmbeddings())"

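The `SemanticChunker` constructed in this hunk splits text at embedding-similarity breakpoints rather than fixed sizes. A minimal usage sketch, assuming an `OPENAI_API_KEY` is set and a local `state_of_the_union.txt` exists:

```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai.embeddings import OpenAIEmbeddings

text_splitter = SemanticChunker(OpenAIEmbeddings())

# create_documents takes raw strings and returns Documents, splitting where
# consecutive sentence embeddings diverge.
with open("state_of_the_union.txt") as f:
    docs = text_splitter.create_documents([f.read()])
print(docs[0].page_content[:100])
```
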
@@ -612,56 +612,11 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": null,
"id": "35ea904e-795f-411b-bef8-6484dbb6e35c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `python_repl_ast` with `{'query': \"df[['Age', 'Fare']].corr().iloc[0,1]\"}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m0.11232863699941621\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `python_repl_ast` with `{'query': \"df[['Fare', 'Survived']].corr().iloc[0,1]\"}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m0.2561785496289603\u001b[0m\u001b[32;1m\u001b[1;3mThe correlation between Age and Fare is approximately 0.112, and the correlation between Fare and Survival is approximately 0.256.\n",
"\n",
"Therefore, the correlation between Fare and Survival (0.256) is greater than the correlation between Age and Fare (0.112).\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': \"What's the correlation between age and fare? is that greater than the correlation between fare and survival?\",\n",
" 'output': 'The correlation between Age and Fare is approximately 0.112, and the correlation between Fare and Survival is approximately 0.256.\\n\\nTherefore, the correlation between Fare and Survival (0.256) is greater than the correlation between Age and Fare (0.112).'}"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_experimental.agents import create_pandas_dataframe_agent\n",
"\n",
"agent = create_pandas_dataframe_agent(\n",
" llm, df, agent_type=\"openai-tools\", verbose=True, allow_dangerous_code=True\n",
")\n",
"agent.invoke(\n",
" {\n",
" \"input\": \"What's the correlation between age and fare? is that greater than the correlation between fare and survival?\"\n",
" }\n",
")"
]
"outputs": [],
"source": "from langchain_experimental.agents import create_pandas_dataframe_agent\n\nagent = create_pandas_dataframe_agent(\n llm, df, agent_type=\"openai-tools\", verbose=True, allow_dangerous_code=True\n)\nagent.invoke(\n {\n \"input\": \"What's the correlation between age and fare? is that greater than the correlation between fare and survival?\"\n }\n)"
},
{
"cell_type": "markdown",
@@ -786,4 +741,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}

@@ -614,7 +614,6 @@
" HumanMessage(\"Now about caterpillars\", name=\"example_user\"),\n",
" AIMessage(\n",
" \"\",\n",
" name=\"example_assistant\",\n",
" tool_calls=[\n",
" {\n",
" \"name\": \"joke\",\n",
@@ -668,7 +667,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 14,
"id": "df0370e3",
"metadata": {},
"outputs": [
@@ -685,7 +684,7 @@
}
],
"source": [
"structured_llm = llm.with_structured_output(None, method=\"json_schema\")\n",
"structured_llm = llm.with_structured_output(None, method=\"json_mode\")\n",
"\n",
"structured_llm.invoke(\n",
" \"Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys\"\n",
@@ -910,7 +909,7 @@
" ),\n",
" (\"human\", \"{query}\"),\n",
" ]\n",
").partial(schema=People.model_json_schema())\n",
").partial(schema=People.schema())\n",
"\n",
"\n",
"# Custom parser\n",
@@ -998,91 +997,6 @@
"\n",
"chain.invoke({\"query\": query})"
]
},
{
"cell_type": "markdown",
"id": "xfejabhtn2",
"metadata": {},
"source": [
"## Combining with Additional Tools\n",
"\n",
"When you need to use both structured output and additional tools (like web search), note the order of operations:\n",
"\n",
"**Correct Order**:\n",
"```python\n",
"# 1. Bind tools first\n",
"llm_with_tools = llm.bind_tools([web_search_tool, calculator_tool])\n",
"\n",
"# 2. Apply structured output\n",
"structured_llm = llm_with_tools.with_structured_output(MySchema)\n",
"```\n",
"\n",
"**Incorrect Order**:\n",
"\n",
"```python\n",
"# This will fail with \"Tool 'MySchema' not found\" error\n",
"structured_llm = llm.with_structured_output(MySchema)\n",
"broken_llm = structured_llm.bind_tools([web_search_tool])\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "653798ca",
"metadata": {},
"source": [
"**Why Order Matters:**\n",
"`with_structured_output()` internally uses tool calling to enforce the schema. When you bind additional tools afterward, it creates a conflict in the tool resolution system."
]
},
{
"cell_type": "markdown",
"id": "1345f4a4",
"metadata": {},
"source": [
"**Complete Example:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0835637b",
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"class SearchResult(BaseModel):\n",
" \"\"\"Structured search result.\"\"\"\n",
"\n",
" query: str = Field(description=\"The search query\")\n",
" findings: str = Field(description=\"Summary of findings\")\n",
"\n",
"\n",
"# Define tools\n",
"search_tool = {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"web_search\",\n",
" \"description\": \"Search the web for information\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\"query\": {\"type\": \"string\", \"description\": \"Search query\"}},\n",
" \"required\": [\"query\"],\n",
" },\n",
" },\n",
"}\n",
"\n",
"# Correct approach\n",
"llm = ChatOpenAI()\n",
"llm_with_search = llm.bind_tools([search_tool])\n",
"structured_search_llm = llm_with_search.with_structured_output(SearchResult)\n",
"\n",
"# Now you can use both search and get structured output\n",
"result = structured_search_llm.invoke(\"Search for latest AI research and summarize\")"
]
}
],
"metadata": {

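The `People.schema()` → `People.model_json_schema()` change in this file tracks the Pydantic v1 → v2 rename; both calls return the same JSON Schema dict. A minimal sketch of the v2 spelling, with a hypothetical `People` model:

```python
from pydantic import BaseModel


class Person(BaseModel):
    name: str
    height_in_meters: float


class People(BaseModel):
    people: list[Person]


# Pydantic v2 name; .schema() is the deprecated v1 spelling of the same thing.
schema = People.model_json_schema()
print(schema["title"])  # -> "People"
```
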
@@ -39,16 +39,6 @@
"/>\n"
]
},
{
"cell_type": "markdown",
"id": "ecc06359",
"metadata": {},
"source": [
"See also: [How to summarize through parallelization](/docs/how_to/summarize_map_reduce/) and\n",
"[How to summarize through iterative refinement](/docs/how_to/summarize_refine/).\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 1,

@@ -147,7 +147,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"id": "74de0286-b003-4b48-9cdd-ecab435515ca",
"metadata": {},
"outputs": [],
@@ -157,7 +157,7 @@
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-5-sonnet-latest\", temperature=0)"
"llm = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\", temperature=0)"
]
},
{

@@ -55,7 +55,7 @@
"source": [
"## Defining tool schemas\n",
"\n",
"For a model to be able to call tools, we need to pass in tool schemas that describe what the tool does and what its arguments are. Chat models that support tool calling features implement a `.bind_tools()` method for passing tool schemas to the model. Tool schemas can be passed in as Python functions (with typehints and docstrings), Pydantic models, TypedDict classes, or LangChain [Tool objects](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#basetool). Subsequent invocations of the model will pass in these tool schemas along with the prompt.\n",
"For a model to be able to call tools, we need to pass in tool schemas that describe what the tool does and what it's arguments are. Chat models that support tool calling features implement a `.bind_tools()` method for passing tool schemas to the model. Tool schemas can be passed in as Python functions (with typehints and docstrings), Pydantic models, TypedDict classes, or LangChain [Tool objects](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#basetool). Subsequent invocations of the model will pass in these tool schemas along with the prompt.\n",
"\n",
"### Python functions\n",
"Our tool schemas can be Python functions:"

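As the cell above notes, a plain Python function with type hints and a docstring is enough to serve as a tool schema. A minimal sketch; the OpenAI model here is just one assumption, since any chat model with `.bind_tools()` works the same way:

```python
from langchain_openai import ChatOpenAI


# A type-hinted, docstringed function doubles as a tool schema.
def add(a: int, b: int) -> int:
    """Add two integers.

    Args:
        a: First integer
        b: Second integer
    """
    return a + b


llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools([add])
msg = llm_with_tools.invoke("What is 2 + 3?")
print(msg.tool_calls)  # e.g. [{'name': 'add', 'args': {'a': 2, 'b': 3}, ...}]
```
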
@@ -38,7 +38,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
"if \"ANTHROPIC_API_KEY\" not in os.environ:\n",
" os.environ[\"ANTHROPIC_API_KEY\"] = getpass()\n",
"\n",
"model = ChatAnthropic(model=\"claude-3-5-sonnet-latest\", temperature=0)"
"model = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\", temperature=0)"
]
},
{

@@ -53,7 +53,7 @@
"\n",
"To keep the most recent messages, we set `strategy=\"last\"`. We'll also set `include_system=True` to include the `SystemMessage`, and `start_on=\"human\"` to make sure the resulting chat history is valid. \n",
"\n",
"This is a good default configuration when using `trim_messages` based on token count. Remember to adjust `token_counter` and `max_tokens` for your use case. Keep in mind that new queries added to the chat history will be included in the token count unless you trim prior to adding the new query.\n",
"This is a good default configuration when using `trim_messages` based on token count. Remember to adjust `token_counter` and `max_tokens` for your use case.\n",
"\n",
"Notice that for our `token_counter` we can pass in a function (more on that below) or a language model (since language models have a message token counting method). It makes sense to pass in a model when you're trimming your messages to fit into the context window of that specific model:"
]
@@ -525,7 +525,7 @@
"id": "4d91d390-e7f7-467b-ad87-d100411d7a21",
"metadata": {},
"source": [
"Looking at [the LangSmith trace](https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r) we can see that before the messages are passed to the model they are first trimmed.\n",
"Looking at the LangSmith trace we can see that before the messages are passed to the model they are first trimmed: https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r\n",
"\n",
"Looking at just the trimmer, we can see that it's a Runnable object that can be invoked like all Runnables:"
]
@@ -620,7 +620,7 @@
"id": "556b7b4c-43cb-41de-94fc-1a41f4ec4d2e",
"metadata": {},
"source": [
"Looking at [the LangSmith trace](https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r) we can see that we retrieve all of our messages but before the messages are passed to the model they are trimmed to be just the system message and last human message."
"Looking at the LangSmith trace we can see that we retrieve all of our messages but before the messages are passed to the model they are trimmed to be just the system message and last human message: https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r"
]
},
{
@@ -630,7 +630,7 @@
"source": [
"## API reference\n",
"\n",
"For a complete description of all arguments head to the [API reference](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.trim_messages.html)."
"For a complete description of all arguments head to the API reference: https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.trim_messages.html"
]
}
],

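The configuration this file describes (`strategy="last"`, `include_system=True`, `start_on="human"`) looks like this in practice. A minimal sketch with a toy word counter; a real setup would pass a model or a proper tokenizer as `token_counter`:

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)

messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("Hi!"),
    AIMessage("Hello!"),
    HumanMessage("What's 2 + 2?"),
]


# Toy counter: one "token" per word. Swap in a model or tokenizer for real use.
def count_words(msgs):
    return sum(len(m.content.split()) for m in msgs)


trimmed = trim_messages(
    messages,
    strategy="last",          # keep the most recent messages
    max_tokens=10,
    token_counter=count_words,
    include_system=True,      # always keep the SystemMessage
    start_on="human",         # resulting history starts with a HumanMessage
)
```
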
@@ -7,7 +7,10 @@
"source": [
"# Confident\n",
"\n",
">[DeepEval](https://confident-ai.com) package for unit testing LLMs."
">[DeepEval](https://confident-ai.com) package for unit testing LLMs.\n",
"> Using Confident, everyone can build robust language models through faster iterations\n",
"> using both unit testing and integration testing. We provide support for each step in the iteration\n",
"> from synthetic data creation to testing.\n"
]
},
{
@@ -39,7 +42,7 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install deepeval langchain langchain-openai"
"%pip install --upgrade --quiet langchain langchain-openai langchain-community deepeval langchain-chroma"
]
},
{
@@ -61,29 +64,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">🎉🥳 Congratulations! You've successfully logged in! 🙌 \n",
"</pre>\n"
],
"text/plain": [
"🎉🥳 Congratulations! You've successfully logged in! 🙌 \n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"outputs": [],
"source": [
"import os\n",
"import deepeval\n",
"\n",
"api_key = os.getenv(\"DEEPEVAL_API_KEY\")\n",
"deepeval.login(api_key)"
"!deepeval login"
]
},
{
@@ -91,9 +76,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setup Confident AI Callback (Modern)\n",
"### Setup DeepEval\n",
"\n",
"The previous DeepEvalCallbackHandler and metric tracking are deprecated. Please use the new integration below."
"You can, by default, use the `DeepEvalCallbackHandler` to set up the metrics you want to track. However, this has limited support for metrics at the moment (more to be added soon). It currently supports:\n",
"- [Answer Relevancy](https://docs.confident-ai.com/docs/measuring_llm_performance/answer_relevancy)\n",
"- [Bias](https://docs.confident-ai.com/docs/measuring_llm_performance/debias)\n",
"- [Toxicness](https://docs.confident-ai.com/docs/measuring_llm_performance/non_toxic)"
]
},
{
@@ -102,15 +90,10 @@
"metadata": {},
"outputs": [],
"source": [
"from deepeval.integrations.langchain import CallbackHandler\n",
"from deepeval.metrics.answer_relevancy import AnswerRelevancy\n",
"\n",
"handler = CallbackHandler(\n",
" name=\"My Trace\",\n",
" tags=[\"production\", \"v1\"],\n",
" metadata={\"experiment\": \"A/B\"},\n",
" thread_id=\"thread-123\",\n",
" user_id=\"user-456\",\n",
")"
"# Here we want to make sure the answer is minimally relevant\n",
"answer_relevancy_metric = AnswerRelevancy(minimum_score=0.5)"
]
},
{
@@ -120,11 +103,186 @@
"source": [
"## Get Started"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"To use the `DeepEvalCallbackHandler`, we need the `implementation_name`. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.callbacks.confident_callback import DeepEvalCallbackHandler\n",
"\n",
"deepeval_callback = DeepEvalCallbackHandler(\n",
" implementation_name=\"langchainQuickstart\", metrics=[answer_relevancy_metric]\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Scenario 1: Feeding into LLM\n",
"\n",
"You can then feed it into your LLM with OpenAI."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[Generation(text='\\n\\nQ: What did the fish say when he hit the wall? \\nA: Dam.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nThe Moon \\n\\nThe moon is high in the midnight sky,\\nSparkling like a star above.\\nThe night so peaceful, so serene,\\nFilling up the air with love.\\n\\nEver changing and renewing,\\nA never-ending light of grace.\\nThe moon remains a constant view,\\nA reminder of life’s gentle pace.\\n\\nThrough time and space it guides us on,\\nA never-fading beacon of hope.\\nThe moon shines down on us all,\\nAs it continues to rise and elope.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nQ. What did one magnet say to the other magnet?\\nA. \"I find you very attractive!\"', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=\"\\n\\nThe world is charged with the grandeur of God.\\nIt will flame out, like shining from shook foil;\\nIt gathers to a greatness, like the ooze of oil\\nCrushed. Why do men then now not reck his rod?\\n\\nGenerations have trod, have trod, have trod;\\nAnd all is seared with trade; bleared, smeared with toil;\\nAnd wears man's smudge and shares man's smell: the soil\\nIs bare now, nor can foot feel, being shod.\\n\\nAnd for all this, nature is never spent;\\nThere lives the dearest freshness deep down things;\\nAnd though the last lights off the black West went\\nOh, morning, at the brown brink eastward, springs —\\n\\nBecause the Holy Ghost over the bent\\nWorld broods with warm breast and with ah! bright wings.\\n\\n~Gerard Manley Hopkins\", generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nQ: What did one ocean say to the other ocean?\\nA: Nothing, they just waved.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=\"\\n\\nA poem for you\\n\\nOn a field of green\\n\\nThe sky so blue\\n\\nA gentle breeze, the sun above\\n\\nA beautiful world, for us to love\\n\\nLife is a journey, full of surprise\\n\\nFull of joy and full of surprise\\n\\nBe brave and take small steps\\n\\nThe future will be revealed with depth\\n\\nIn the morning, when dawn arrives\\n\\nA fresh start, no reason to hide\\n\\nSomewhere down the road, there's a heart that beats\\n\\nBelieve in yourself, you'll always succeed.\", generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'completion_tokens': 504, 'total_tokens': 528, 'prompt_tokens': 24}, 'model_name': 'text-davinci-003'})"
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_openai import OpenAI\n",
|
||||
"\n",
|
||||
"llm = OpenAI(\n",
|
||||
" temperature=0,\n",
|
||||
" callbacks=[deepeval_callback],\n",
|
||||
" verbose=True,\n",
|
||||
" openai_api_key=\"<YOUR_API_KEY>\",\n",
|
||||
")\n",
|
||||
"output = llm.generate(\n",
|
||||
" [\n",
|
||||
" \"What is the best evaluation tool out there? (no bias at all)\",\n",
|
||||
" ]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can then check the metric if it was successful by calling the `is_successful()` method."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"answer_relevancy_metric.is_successful()\n",
|
||||
"# returns True/False"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Once you have ran that, you should be able to see our dashboard below. \n",
|
||||
"\n",
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Scenario 2: Tracking an LLM in a chain without callbacks\n",
|
||||
"\n",
|
||||
"To track an LLM in a chain without callbacks, you can plug into it at the end.\n",
|
||||
"\n",
|
||||
"We can start by defining a simple chain as shown below."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import requests\n",
|
||||
"from langchain.chains import RetrievalQA\n",
|
||||
"from langchain_chroma import Chroma\n",
|
||||
"from langchain_community.document_loaders import TextLoader\n",
|
||||
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
|
||||
"from langchain_text_splitters import CharacterTextSplitter\n",
|
||||
"\n",
|
||||
"text_file_url = \"https://raw.githubusercontent.com/hwchase17/chat-your-data/master/state_of_the_union.txt\"\n",
|
||||
"\n",
|
||||
"openai_api_key = \"sk-XXX\"\n",
|
||||
"\n",
|
||||
"with open(\"state_of_the_union.txt\", \"w\") as f:\n",
|
||||
" response = requests.get(text_file_url)\n",
|
||||
" f.write(response.text)\n",
|
||||
"\n",
|
||||
"loader = TextLoader(\"state_of_the_union.txt\")\n",
|
||||
"documents = loader.load()\n",
|
||||
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
|
||||
"texts = text_splitter.split_documents(documents)\n",
|
||||
"\n",
|
||||
"embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)\n",
|
||||
"docsearch = Chroma.from_documents(texts, embeddings)\n",
|
||||
"\n",
|
||||
"qa = RetrievalQA.from_chain_type(\n",
|
||||
" llm=OpenAI(openai_api_key=openai_api_key),\n",
|
||||
" chain_type=\"stuff\",\n",
|
||||
" retriever=docsearch.as_retriever(),\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Providing a new question-answering pipeline\n",
|
||||
"query = \"Who is the president?\"\n",
|
||||
"result = qa.run(query)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"After defining a chain, you can then manually check for answer similarity."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"answer_relevancy_metric.measure(result, query)\n",
|
||||
"answer_relevancy_metric.is_successful()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### What's next?\n",
|
||||
"\n",
|
||||
"You can create your own custom metrics [here](https://docs.confident-ai.com/docs/quickstart/custom-metrics). \n",
|
||||
"\n",
|
||||
"DeepEval also offers other features such as being able to [automatically create unit tests](https://docs.confident-ai.com/docs/quickstart/synthetic-data-creation), [tests for hallucination](https://docs.confident-ai.com/docs/measuring_llm_performance/factual_consistency).\n",
|
||||
"\n",
|
||||
"If you are interested, check out our Github repository here [https://github.com/confident-ai/deepeval](https://github.com/confident-ai/deepeval). We welcome any PRs and discussions on how to improve LLM performance."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "langchain",
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
@@ -138,7 +296,12 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.11"
"version": "3.10.12"
},
"vscode": {
"interpreter": {
"hash": "a53ebf4a859167383b364e7e7521d0add3c2dbbdecce4edf676e8c4634ff3fbb"
}
}
},
"nbformat": 4,

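The replacement integration in this file attaches the handler per invocation rather than at model construction. A minimal sketch of the wiring, using only the `CallbackHandler` kwargs shown in the diff and standard LangChain callback config; the OpenAI model is an assumption:

```python
from deepeval.integrations.langchain import CallbackHandler
from langchain_openai import ChatOpenAI

# kwargs taken from the diff above; assumes `deepeval login` has been run.
handler = CallbackHandler(name="My Trace", tags=["production", "v1"])

llm = ChatOpenAI(model="gpt-4o-mini")
# Standard LangChain callback wiring: pass the handler in the run config.
result = llm.invoke("Say hello", config={"callbacks": [handler]})
```
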
@@ -1,288 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fbeb3f1eb129d115",
"metadata": {
"collapsed": false
},
"source": [
"---\n",
"sidebar_label: AI/ML API\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "6051ba9cfc65a60a",
"metadata": {
"collapsed": false
},
"source": [
"# ChatAimlapi\n",
"\n",
"This page will help you get started with AI/ML API [chat models](/docs/concepts/chat_models.mdx). For detailed documentation of all ChatAimlapi features and configurations, head to the [API reference](https://docs.aimlapi.com/?utm_source=langchain&utm_medium=github&utm_campaign=integration).\n",
"\n",
"AI/ML API provides access to **300+ models** (Deepseek, Gemini, ChatGPT, etc.) via high-uptime and high-rate API."
]
},
{
"cell_type": "markdown",
"id": "512f94fa4bea2628",
"metadata": {
"collapsed": false
},
"source": [
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| ChatAimlapi | langchain-aimlapi | ✅ | beta | ❌ |  |  |"
]
},
{
"cell_type": "markdown",
"id": "7163684608502d37",
"metadata": {
"collapsed": false
},
"source": [
"### Model features\n",
"| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |\n",
"|:------------:|:-----------------:|:---------:|:-----------:|:-----------:|:-----------:|:---------------------:|:------------:|:-----------:|:--------:|\n",
"| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |\n"
]
},
{
"cell_type": "markdown",
"id": "bb9345d5b24a7741",
"metadata": {
"collapsed": false
},
"source": [
"## Setup\n",
"To access AI/ML API models, sign up at [aimlapi.com](https://aimlapi.com/app/?utm_source=langchain&utm_medium=github&utm_campaign=integration), generate an API key, and set the `AIMLAPI_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b26280519672f194",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:16:58.837623Z",
"start_time": "2025-08-07T07:16:55.346214Z"
}
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"AIMLAPI_API_KEY\" not in os.environ:\n",
" os.environ[\"AIMLAPI_API_KEY\"] = getpass.getpass(\"Enter your AI/ML API key: \")"
]
},
{
"cell_type": "markdown",
"id": "fa131229e62dfd47",
"metadata": {
"collapsed": false
},
"source": [
"### Installation\n",
"Install the `langchain-aimlapi` package:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3777dc00d768299e",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:17:11.195741Z",
"start_time": "2025-08-07T07:17:02.288142Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-aimlapi"
]
},
{
"cell_type": "markdown",
"id": "d168108b0c4f9d7",
"metadata": {
"collapsed": false
},
"source": [
"## Instantiation\n",
"Now we can instantiate the `ChatAimlapi` model and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f29131e65e47bd16",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:17:23.499746Z",
"start_time": "2025-08-07T07:17:11.196747Z"
}
},
"outputs": [],
"source": [
"from langchain_aimlapi import ChatAimlapi\n",
"\n",
"llm = ChatAimlapi(\n",
" model=\"meta-llama/Llama-3-70b-chat-hf\",\n",
" temperature=0.7,\n",
" max_tokens=512,\n",
" timeout=30,\n",
" max_retries=3,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "861b87289f8e146d",
"metadata": {
"collapsed": false
},
"source": [
"## Invocation\n",
"You can invoke the model with a list of messages:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "430b1cff2e6d77b4",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:17:30.586261Z",
"start_time": "2025-08-07T07:17:29.074409Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"messages = [\n",
" (\"system\", \"You are a helpful assistant that translates English to French.\"),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"\n",
"ai_msg = llm.invoke(messages)\n",
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "5463797524a19b2e",
"metadata": {
"collapsed": false
},
"source": [
"## Chaining\n",
"We can chain the model with a prompt template as follows:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "bf6defc12a0c5d78",
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2025-08-07T07:17:36.368436Z",
"start_time": "2025-08-07T07:17:34.770581Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ich liebe das Programmieren.\n"
]
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"response = chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "fcf0bf10a872355c",
"metadata": {
"collapsed": false
},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatAimlapi features and configurations, visit the [API Reference](https://docs.aimlapi.com/?utm_source=langchain&utm_medium=github&utm_campaign=integration)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -19,7 +19,7 @@
"\n",
"This notebook provides a quick overview for getting started with Anthropic [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatAnthropic features and configurations head to the [API reference](https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html).\n",
"\n",
"Anthropic has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Anthropic docs](https://docs.anthropic.com/en/docs/about-claude/models/overview).\n",
"Anthropic has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Anthropic docs](https://docs.anthropic.com/en/docs/models-overview).\n",
"\n",
"\n",
":::info AWS Bedrock and Google VertexAI\n",
@@ -124,7 +124,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
@@ -132,7 +132,7 @@
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(\n",
" model=\"claude-3-5-sonnet-latest\",\n",
" model=\"claude-3-5-sonnet-20240620\",\n",
" temperature=0,\n",
" max_tokens=1024,\n",
" timeout=None,\n",
@@ -840,7 +840,7 @@
"source": [
"## Token-efficient tool use\n",
"\n",
"Anthropic supports a (beta) [token-efficient tool use](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/token-efficient-tool-use) feature. To use it, specify the relevant beta-headers when instantiating the model."
"Anthropic supports a (beta) [token-efficient tool use](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/token-efficient-tool-use) feature. To use it, specify the relevant beta-headers when instantiating the model."
]
},
{
@@ -970,8 +970,8 @@
"source": [
"### In tool results (agentic RAG)\n",
"\n",
":::info\n",
"Requires ``langchain-anthropic>=0.3.17``\n",
":::info Requires ``langchain-anthropic>=0.3.17``\n",
"\n",
":::\n",
"\n",
"Claude supports a [search_result](https://docs.anthropic.com/en/docs/build-with-claude/search-results) content block representing citable results from queries against a knowledge base or other custom source. These content blocks can be passed to claude both top-line (as in the above example) and within a tool result. This allows Claude to cite elements of its response using the result of a tool call.\n",
@@ -998,6 +998,8 @@
" ]\n",
"```\n",
"\n",
"We also need to specify the `search-results-2025-06-09` beta when instantiating ChatAnthropic. You can see an end-to-end example below.\n",
"\n",
"<details>\n",
"<summary>End to end example with LangGraph</summary>\n",
"\n",
|
"source": [
"## Built-in tools\n",
"\n",
"Anthropic supports a variety of [built-in tools](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/text-editor-tool), which can be bound to the model in the [usual way](/docs/how_to/tool_calling/). Claude will generate tool calls adhering to its internal schema for the tool:"
"Anthropic supports a variety of [built-in tools](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/text-editor-tool), which can be bound to the model in the [usual way](/docs/how_to/tool_calling/). Claude will generate tool calls adhering to its internal schema for the tool:"
]
},
{
|
"source": [
"### Web search\n",
"\n",
"Claude can use a [web search tool](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-search-tool) to run searches and ground its responses with citations."
"Claude can use a [web search tool](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/web-search-tool) to run searches and ground its responses with citations."
]
},
{
|
"response = llm_with_tools.invoke(\"How do I update a web app to TypeScript 5.5?\")"
]
},
{
"cell_type": "markdown",
"id": "kloc4rvd1w",
"metadata": {},
"source": [
"#### Web search + structured output\n",
"\n",
"When combining web search tools with structured output, it's important to **bind the tools first and then apply structured output**:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "rjjergy6ef",
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"\n",
"# Define structured output schema\n",
"class ResearchResult(BaseModel):\n",
" \"\"\"Structured research result from web search.\"\"\"\n",
"\n",
" topic: str = Field(description=\"The research topic\")\n",
" summary: str = Field(description=\"Summary of key findings\")\n",
" key_points: list[str] = Field(description=\"List of important points discovered\")\n",
"\n",
"\n",
"# Configure web search tool\n",
"websearch_tools = [\n",
" {\n",
" \"type\": \"web_search_20250305\",\n",
" \"name\": \"web_search\",\n",
" \"max_uses\": 10,\n",
" }\n",
"]\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-5-sonnet-20241022\")\n",
"\n",
"# Correct order: bind tools first, then structured output\n",
"llm_with_search = llm.bind_tools(websearch_tools)\n",
"research_llm = llm_with_search.with_structured_output(ResearchResult)\n",
"\n",
"# Now you can use both web search and get structured output\n",
"result = research_llm.invoke(\"Research the latest developments in quantum computing\")\n",
"print(f\"Topic: {result.topic}\")\n",
"print(f\"Summary: {result.summary}\")\n",
"print(f\"Key Points: {result.key_points}\")"
]
},
{
"cell_type": "markdown",
"id": "c580c20a",
"metadata": {},
"source": [
"### Web fetching\n",
"\n",
"Claude can use a [web fetching tool](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-fetch-tool) to run searches and ground its responses with citations."
]
},
{
"cell_type": "markdown",
"id": "5cf6ad08",
"metadata": {},
"source": [
":::info\n",
"Web search tool is supported since ``langchain-anthropic>=0.3.20``\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4804be1",
"metadata": {},
"outputs": [],
"source": [
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(\n",
" model=\"claude-3-5-haiku-latest\",\n",
" betas=[\"web-fetch-2025-09-10\"], # Enable web fetch beta\n",
")\n",
"\n",
"tool = {\"type\": \"web_fetch_20250910\", \"name\": \"web_fetch\", \"max_uses\": 3}\n",
"llm_with_tools = llm.bind_tools([tool])\n",
"\n",
"response = llm_with_tools.invoke(\n",
" \"Please analyze the content at https://example.com/article\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "088c41d0",
"metadata": {},
"source": [
":::warning\n",
"Note: you must add the `'web-fetch-2025-09-10'` beta header to use this tool.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "1478cdc6-2e52-4870-80f9-b4ddf88f2db2",
|
"\n",
"Claude can use a [code execution tool](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/code-execution-tool) to execute Python code in a sandboxed environment.\n",
"\n",
":::info\n",
"Code execution is supported since ``langchain-anthropic>=0.3.14``\n",
":::info Code execution is supported since ``langchain-anthropic>=0.3.14``\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "2ce13632-a2da-439f-a429-f66481501630",
"metadata": {},
"outputs": [],
@@ -1367,7 +1265,7 @@
"\n",
"llm = ChatAnthropic(\n",
" model=\"claude-sonnet-4-20250514\",\n",
" betas=[\"code-execution-2025-05-22\"], # Enable code execution beta\n",
" betas=[\"code-execution-2025-05-22\"],\n",
")\n",
"\n",
"tool = {\"type\": \"code_execution_20250522\", \"name\": \"code_execution\"}\n",
@@ -1378,16 +1276,6 @@
")"
]
},
{
"cell_type": "markdown",
"id": "a6b5e15a",
"metadata": {},
"source": [
":::warning\n",
"Note: you must add the `'code_execution_20250522'` beta header to use this tool.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "24076f91-3a3d-4e53-9618-429888197061",
@@ -1466,14 +1354,14 @@
"\n",
"Claude can use a [MCP connector tool](https://docs.anthropic.com/en/docs/agents-and-tools/mcp-connector) for model-generated calls to remote MCP servers.\n",
"\n",
":::info\n",
"Remote MCP is supported since ``langchain-anthropic>=0.3.14``\n",
":::info Remote MCP is supported since ``langchain-anthropic>=0.3.14``\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "22fc4a89-e6d8-4615-96cb-2e117349aebf",
"metadata": {},
"outputs": [],
@@ -1485,17 +1373,17 @@
" \"type\": \"url\",\n",
" \"url\": \"https://mcp.deepwiki.com/mcp\",\n",
" \"name\": \"deepwiki\",\n",
" \"tool_configuration\": { # Optional configuration\n",
" \"tool_configuration\": { # optional configuration\n",
" \"enabled\": True,\n",
" \"allowed_tools\": [\"ask_question\"],\n",
" },\n",
" \"authorization_token\": \"PLACEHOLDER\", # Optional authorization\n",
" \"authorization_token\": \"PLACEHOLDER\", # optional authorization\n",
" }\n",
"]\n",
"\n",
"llm = ChatAnthropic(\n",
" model=\"claude-sonnet-4-20250514\",\n",
" betas=[\"mcp-client-2025-04-04\"], # Enable MCP beta\n",
" betas=[\"mcp-client-2025-04-04\"],\n",
" mcp_servers=mcp_servers,\n",
")\n",
"\n",
@@ -1505,16 +1393,6 @@
")"
]
},
{
"cell_type": "markdown",
"id": "0d6d7197",
"metadata": {},
"source": [
":::warning\n",
"Note: you must add the `'mcp-client-2025-04-04'` beta header to use this tool.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "2fd5d545-a40d-42b1-ad0c-0a79e2536c9b",
@@ -1522,7 +1400,7 @@
"source": [
"### Text editor\n",
"\n",
"The text editor tool can be used to view and modify text files. See docs [here](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/text-editor-tool) for details."
"The text editor tool can be used to view and modify text files. See docs [here](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/text-editor-tool) for details."
]
},
{

@@ -129,7 +129,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
@@ -137,7 +137,7 @@
"from langchain_aws import ChatBedrockConverse\n",
"\n",
"llm = ChatBedrockConverse(\n",
" model_id=\"anthropic.claude-3-5-sonnet-latest-v1:0\",\n",
" model_id=\"anthropic.claude-3-5-sonnet-20240620-v1:0\",\n",
" # region_name=...,\n",
" # aws_access_key_id=...,\n",
" # aws_secret_access_key=...,\n",
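Whichever `model_id` is pinned, usage is identical downstream; a quick smoke test, assuming AWS credentials and region are already configured in the environment:

```python
# Minimal sanity check for the ChatBedrockConverse instance built above.
response = llm.invoke("Name one fact about the Amazon rainforest.")
print(response.content)
```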
@@ -31,7 +31,7 @@
"\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |\n",
"| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"### Setup\n",
"\n",
@@ -653,35 +653,15 @@
"\n",
"# Initialize the model\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\", temperature=0)\n",
"\n",
"# Method 1: Default function calling approach\n",
"structured_llm_default = llm.with_structured_output(Person)\n",
"\n",
"# Method 2: Native JSON mode\n",
"structured_llm_json = llm.with_structured_output(Person, method=\"json_mode\")\n",
"structured_llm = llm.with_structured_output(Person)\n",
"\n",
"# Invoke the model with a query asking for structured information\n",
"result = structured_llm_json.invoke(\n",
"result = structured_llm.invoke(\n",
" \"Who was the 16th president of the USA, and how tall was he in meters?\"\n",
")\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"id": "g9w06ld1ggq",
"metadata": {},
"source": [
"### Structured Output Methods\n",
"\n",
"Two methods are supported for structured output:\n",
"\n",
"- **`method=\"function_calling\"` (default)**: Uses tool calling to extract structured data. Compatible with all Gemini models.\n",
"- **`method=\"json_mode\"`**: Uses Gemini's native structured output with `responseSchema`. More reliable but requires Gemini 1.5+ models.\n",
"\n",
"The `json_mode` method is **recommended for better reliability** as it constrains the model's generation process directly rather than relying on post-processing tool calls."
]
},
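A compact sketch contrasting the two methods described above (assumes `langchain-google-genai` is installed and `GOOGLE_API_KEY` is set; the `Person` fields are assumed here, since the hunk does not show the class definition):

```python
from pydantic import BaseModel, Field
from langchain_google_genai import ChatGoogleGenerativeAI

class Person(BaseModel):
    name: str = Field(description="The person's name")
    height_in_meters: float = Field(description="Height in meters")

llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash", temperature=0)

# Default: structured data is extracted via a synthetic tool call.
structured_fc = llm.with_structured_output(Person)
# json_mode: decoding is constrained directly with Gemini's responseSchema.
structured_json = llm.with_structured_output(Person, method="json_mode")

print(structured_json.invoke(
    "Who was the 16th president of the USA, and how tall was he in meters?"
))
```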
{
"cell_type": "markdown",
"id": "90d4725e",
@@ -17,7 +17,7 @@
"source": [
"# ChatOCIGenAI\n",
"\n",
"This notebook provides a quick overview for getting started with OCIGenAI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatOCIGenAI features and configurations head to the [API reference](https://pypi.org/project/langchain-oci/).\n",
"This notebook provides a quick overview for getting started with OCIGenAI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatOCIGenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html).\n",
"\n",
"Oracle Cloud Infrastructure (OCI) Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) that cover a wide range of use cases, and which is available through a single API.\n",
"Using the OCI Generative AI service you can access ready-to-use pretrained models, or create and host your own fine-tuned custom models based on your own data on dedicated AI clusters. Detailed documentation of the service and API is available __[here](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm)__ and __[here](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai/20231130/)__.\n",
@@ -26,9 +26,9 @@
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/oci_generative_ai) |\n",
"| :--- |:---------------------------------------------------------------------------------| :---: | :---: | :---: |\n",
"| [ChatOCIGenAI](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html) | [langchain-oci](https://github.com/oracle/langchain-oracle) | ❌ | ❌ | ❌ |\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/oci_generative_ai) |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [ChatOCIGenAI](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | [JSON mode](/docs/how_to/structured_output/#advanced-specifying-the-method-for-structuring-outputs) | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
@@ -37,7 +37,7 @@
"\n",
"## Setup\n",
"\n",
"To access OCIGenAI models you'll need to install the `oci` and `langchain-oci` packages.\n",
"To access OCIGenAI models you'll need to install the `oci` and `langchain-community` packages.\n",
"\n",
"### Credentials\n",
"\n",
@@ -53,7 +53,7 @@
"source": [
"### Installation\n",
"\n",
"The LangChain OCIGenAI integration lives in the `langchain-oci` package and you will also need to install the `oci` package:"
"The LangChain OCIGenAI integration lives in the `langchain-community` package and you will also need to install the `oci` package:"
]
},
{
@@ -63,7 +63,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-oci"
"%pip install -qU langchain-community oci"
]
},
{
@@ -83,16 +83,14 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_oci.chat_models import ChatOCIGenAI\n",
"from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI\n",
"from langchain_core.messages import AIMessage, HumanMessage, SystemMessage\n",
"\n",
"chat = ChatOCIGenAI(\n",
" model_id=\"cohere.command-r-plus-08-2024\",\n",
" model_id=\"cohere.command-r-16k\",\n",
" service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
" compartment_id=\"compartment_id\",\n",
" model_kwargs={\"temperature\": 0, \"max_tokens\": 500},\n",
" auth_type=\"SECURITY_TOKEN\",\n",
" auth_profile=\"auth_profile_name\",\n",
" auth_file_location=\"auth_file_location\",\n",
" compartment_id=\"MY_OCID\",\n",
" model_kwargs={\"temperature\": 0.7, \"max_tokens\": 500},\n",
")"
]
},
@@ -112,7 +110,14 @@
"tags": []
},
"outputs": [],
"source": "response = chat.invoke(\"Tell me one fact about Earth\")"
"source": [
"messages = [\n",
" SystemMessage(content=\"you are an AI assistant.\"),\n",
" AIMessage(content=\"Hi there human!\"),\n",
" HumanMessage(content=\"tell me a joke.\"),\n",
"]\n",
"response = chat.invoke(messages)"
]
},
{
"cell_type": "code",
@@ -141,22 +146,13 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_oci.chat_models import ChatOCIGenAI\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatOCIGenAI(\n",
" model_id=\"cohere.command-r-plus-08-2024\",\n",
" service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
" compartment_id=\"compartment_id\",\n",
" model_kwargs={\"temperature\": 0, \"max_tokens\": 500},\n",
" auth_type=\"SECURITY_TOKEN\",\n",
" auth_profile=\"auth_profile_name\",\n",
" auth_file_location=\"auth_file_location\",\n",
")\n",
"prompt = PromptTemplate(input_variables=[\"query\"], template=\"{query}\")\n",
"llm_chain = prompt | llm\n",
"response = llm_chain.invoke(\"what is the capital of france?\")\n",
"print(response)"
"prompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = prompt | chat\n",
"\n",
"response = chain.invoke({\"topic\": \"dogs\"})\n",
"print(response.content)"
]
},
{
@@ -166,7 +162,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOCIGenAI features and configurations head to the API reference: https://pypi.org/project/langchain-oci/"
"For detailed documentation of all ChatOCIGenAI features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html"
]
}
],
@@ -17,9 +17,9 @@
"source": [
"# ChatOllama\n",
"\n",
"[Ollama](https://ollama.com/) allows you to run open-source large language models, such as `gpt-oss`, locally.\n",
"[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.\n",
"\n",
"`ollama` bundles model weights, configuration, and data into a single package, defined by a Modelfile.\n",
"Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.\n",
"\n",
"It optimizes setup and configuration details, including GPU usage.\n",
"\n",
@@ -28,14 +28,14 @@
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/ollama) | Package downloads | Package latest |\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/ollama) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOllama](https://python.langchain.com/api_reference/ollama/chat_models/langchain_ollama.chat_models.ChatOllama.html#chatollama) | [langchain-ollama](https://python.langchain.com/api_reference/ollama/index.html) | ✅ | ❌ | ✅ |  |  |\n",
"| [ChatOllama](https://python.langchain.com/v0.2/api_reference/ollama/chat_models/langchain_ollama.chat_models.ChatOllama.html) | [langchain-ollama](https://python.langchain.com/v0.2/api_reference/ollama/index.html) | ✅ | ❌ | ✅ |  |  |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: |:----------------------------------------------------:| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
@@ -45,17 +45,17 @@
" * macOS users can install via Homebrew with `brew install ollama` and start with `brew services start ollama`\n",
"* Fetch available LLM model via `ollama pull <name-of-model>`\n",
" * View a list of available models via the [model library](https://ollama.ai/library)\n",
" * e.g., `ollama pull gpt-oss:20b`\n",
" * e.g., `ollama pull llama3`\n",
"* This will download the default tagged version of the model. Typically, the default points to the latest, smallest sized-parameter model.\n",
"\n",
"> On Mac, the models will be downloaded to `~/.ollama/models`\n",
">\n",
"> On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`\n",
"\n",
"* Specify the exact version of the model of interest as such `ollama pull gpt-oss:20b` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
"* Specify the exact version of the model of interest as such `ollama pull vicuna:13b-v1.5-16k-q4_0` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
"* To view all pulled models, use `ollama list`\n",
"* To chat directly with a model from the command line, use `ollama run <name-of-model>`\n",
"* View the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/README.md) for more commands. You can run `ollama help` in the terminal to see available commands.\n"
"* View the [Ollama documentation](https://github.com/ollama/ollama/tree/main/docs) for more commands. You can run `ollama help` in the terminal to see available commands.\n"
]
},
{
@@ -102,11 +102,7 @@
"id": "b18bd692076f7cf7",
"metadata": {},
"source": [
":::warning\n",
"Make sure you're using the latest Ollama version!\n",
":::\n",
"\n",
"Update by running:"
"Make sure you're using the latest Ollama version for structured outputs. Update by running:"
]
},
{
@@ -261,10 +257,10 @@
"source": [
"## Tool calling\n",
"\n",
"We can use [tool calling](/docs/concepts/tool_calling/) with an LLM [that has been fine-tuned for tool use](https://ollama.com/search?&c=tools) such as `gpt-oss`:\n",
"We can use [tool calling](/docs/concepts/tool_calling/) with an LLM [that has been fine-tuned for tool use](https://ollama.com/search?&c=tools) such as `llama3.1`:\n",
"\n",
"```\n",
"ollama pull gpt-oss:20b\n",
"ollama pull llama3.1\n",
"```\n",
"\n",
"Details on creating custom tools are available in [this guide](/docs/how_to/custom_tools/). Below, we demonstrate how to create a tool using the `@tool` decorator on a normal python function."
@@ -272,7 +268,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 13,
"id": "f767015f",
"metadata": {},
"outputs": [
@@ -304,8 +300,7 @@
"\n",
"\n",
"llm = ChatOllama(\n",
" model=\"gpt-oss:20b\",\n",
" validate_model_on_init=True,\n",
" model=\"llama3.1\",\n",
" temperature=0,\n",
").bind_tools([validate_user])\n",
"\n",
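For reference, a self-contained sketch of the tool-calling flow this hunk edits (assumes the Ollama server is running and the named model has been pulled; `validate_user` is the notebook's toy tool):

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def validate_user(user_id: int, addresses: list[str]) -> bool:
    """Validate user using historical addresses."""
    return True

llm = ChatOllama(model="llama3.1", temperature=0).bind_tools([validate_user])

result = llm.invoke(
    "Could you validate user 123? They previously lived at "
    "123 Fake St in Boston MA and 234 Pretend Boulevard in Houston TX."
)
print(result.tool_calls)
```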
@@ -326,7 +321,9 @@
"source": [
"## Multi-modal\n",
"\n",
"Ollama has limited support for multi-modal LLMs, such as [gemma3](https://ollama.com/library/gemma3).\n",
"Ollama has support for multi-modal LLMs, such as [bakllava](https://ollama.com/library/bakllava) and [llava](https://ollama.com/library/llava).\n",
"\n",
"    ollama pull bakllava\n",
"\n",
"Be sure to update Ollama so that you have the most recent version to support multi-modal."
]
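A hedged sketch of the image-input pattern (the local file path is hypothetical; substitute any multi-modal model your Ollama version supports, per the note above):

```python
import base64

from langchain_core.messages import HumanMessage
from langchain_ollama import ChatOllama

llm = ChatOllama(model="bakllava")

with open("photo.jpg", "rb") as f:  # hypothetical local image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
        },
    ]
)
print(llm.invoke([message]).content)
```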
@@ -521,7 +518,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -535,7 +532,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.11"
"version": "3.10.4"
}
},
"nbformat": 4,
@@ -1,408 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Qwen\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatQwen\n",
"\n",
"This will help you get started with Qwen [chat models](../../concepts/chat_models.mdx). For detailed documentation of all ChatQwen features and configurations head to the [API reference](https://pypi.org/project/langchain-qwq/).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [ChatQwen](https://pypi.org/project/langchain-qwq/) | [langchain-qwq](https://pypi.org/project/langchain-qwq/) | ❌ | beta |  |  |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ |✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access Qwen models you'll need to create an Alibaba Cloud account, get an API key, and install the `langchain-qwq` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [Alibaba's API Key page](https://account.alibabacloud.com/login/login.htm?oauth_callback=https%3A%2F%2Fbailian.console.alibabacloud.com%2F%3FapiKey%3D1&lang=en#/api-key) to sign up to Alibaba Cloud and generate an API key. Once you've done this set the `DASHSCOPE_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"DASHSCOPE_API_KEY\"):\n",
" os.environ[\"DASHSCOPE_API_KEY\"] = getpass.getpass(\"Enter your Dashscope API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain QwQ integration lives in the `langchain-qwq` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-qwq"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello! How can I assist you today? 😊', additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-flash'}, id='run--62798a20-d425-48ab-91fc-8e62e37c6084-0', usage_metadata={'input_tokens': 9, 'output_tokens': 11, 'total_tokens': 20, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_qwq import ChatQwen\n",
"\n",
"llm = ChatQwen(model=\"qwen-flash\")\n",
"response = llm.invoke(\"Hello\")\n",
"\n",
"response"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-flash'}, id='run--33f905e0-880a-4a67-ab83-313fd7a06369-0', usage_metadata={'input_tokens': 32, 'output_tokens': 8, 'total_tokens': 40, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French.\"\n",
" \"Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](../../how_to/sequence.ipynb) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmierung.', additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'qwen-flash'}, id='run--9d8bab6d-d6fe-4b9f-95f2-c30c3ff0a50e-0', usage_metadata={'input_tokens': 28, 'output_tokens': 5, 'total_tokens': 33, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates\"\n",
" \"{input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8d1b3ef3",
"metadata": {},
"source": [
"## Tool Calling\n",
"ChatQwen supports a tool-calling API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool."
]
},
{
"cell_type": "markdown",
"id": "6db1a355",
"metadata": {},
"source": [
"### Use with `bind_tools`"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "15fb6a6d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='' additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_f0c2cc49307f480db78a45', 'function': {'arguments': '{\"first_int\": 5, \"second_int\": 42}', 'name': 'multiply'}, 'type': 'function'}]} response_metadata={'finish_reason': 'tool_calls', 'model_name': 'qwen-flash'} id='run--27c5aafb-9710-42f5-ab78-5a2ad1d9050e-0' tool_calls=[{'name': 'multiply', 'args': {'first_int': 5, 'second_int': 42}, 'id': 'call_f0c2cc49307f480db78a45', 'type': 'tool_call'}] usage_metadata={'input_tokens': 166, 'output_tokens': 27, 'total_tokens': 193, 'input_token_details': {}, 'output_token_details': {}}\n"
]
}
],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"from langchain_qwq import ChatQwen\n",
"\n",
"\n",
"@tool\n",
"def multiply(first_int: int, second_int: int) -> int:\n",
" \"\"\"Multiply two integers together.\"\"\"\n",
" return first_int * second_int\n",
"\n",
"\n",
"llm = ChatQwen(model=\"qwen-flash\")\n",
"\n",
"llm_with_tools = llm.bind_tools([multiply])\n",
"\n",
"msg = llm_with_tools.invoke(\"What's 5 times forty two\")\n",
"\n",
"print(msg)"
]
},
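The message above only requests the tool call; a short follow-up sketch that actually executes it:

```python
# msg.tool_calls holds the parsed calls; run the first one through the tool.
tool_call = msg.tool_calls[0]
print(multiply.invoke(tool_call["args"]))  # expected: 210
```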
{
"cell_type": "markdown",
"id": "cc8ffd89-c474-45a7-a123-e0b1d362f54f",
"metadata": {},
"source": [
"### Vision support"
]
},
{
"cell_type": "markdown",
"id": "3e8a7d46-d1f6-4ae8-835a-266ca47e4daf",
"metadata": {},
"source": [
"#### Image"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "54f69db3-fa51-4b9a-885c-1353968066e3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This image depicts a cozy, rustic Christmas scene set against a wooden backdrop. The arrangement features a variety of festive decorations that evoke a warm, holiday atmosphere:\n",
"\n",
"- **Centerpiece**: A decorative reindeer figurine with large antlers stands prominently in the background.\n",
"- **Miniature Trees**: Two small, snow-dusted artificial Christmas trees flank the reindeer, adding to the wintry feel.\n",
"- **Candles**: Three log-shaped candle holders made from birch bark are lit, casting a soft, warm glow. Two are in the foreground, and one is slightly behind them.\n",
"- **\"Merry Christmas\" Sign**: A wooden cutout sign spelling \"MERRY CHRISTMAS\" is placed on the left, decorated with a tiny golden gift box and a small reindeer silhouette.\n",
"- **Holiday Elements**: Pinecones, red berries, greenery, and fairy lights are scattered throughout, enhancing the natural, festive theme.\n",
"- **Other Details**: A white sack with \"SANTA\" written on it is partially visible on the left, along with a large glass ornament and twinkling string lights.\n",
"\n",
"The overall aesthetic is warm, inviting, and traditional, emphasizing natural materials like wood, pine, and birch bark. It captures the essence of a rustic, homemade Christmas celebration.\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"model = ChatQwen(model=\"qwen-vl-max-latest\")\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=[\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"https://example.com/image/image.png\"},\n",
" },\n",
" {\"type\": \"text\", \"text\": \"What do you see in this image?\"},\n",
" ]\n",
" )\n",
"]\n",
"\n",
"response = model.invoke(messages)\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "b1faea19-932f-4dc8-b0af-60e3507eee08",
"metadata": {},
"source": [
"#### Video"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "59355c38-d3e2-4051-811a-2b99286ea01b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This video features a young woman with a warm and cheerful expression, standing outdoors in a well-lit environment. She has short, neatly styled brown hair with bangs and is wearing a soft pink knitted cardigan over a white top. A delicate necklace adorns her neck, adding a subtle touch of elegance to her outfit.\n",
"\n",
"Throughout the video, she maintains eye contact with the camera, smiling gently and occasionally opening her mouth as if speaking or laughing. Her facial expressions are natural and engaging, suggesting a friendly and approachable demeanor. The background is softly blurred, indicating a shallow depth of field, which keeps the focus on her. It appears to be an urban setting with modern buildings, possibly a residential or commercial area.\n",
"\n",
"The lighting is bright and natural, likely from sunlight, casting a soft glow on her face and highlighting her features. The overall tone of the video is pleasant and inviting, evoking a sense of warmth and positivity.\n",
"\n",
"In the top right corner of the frames, there is a watermark that reads \"通义·AI合成,\" which indicates that this video was generated using AI technology by Tongyi Lab, a company known for its advancements in artificial intelligence and digital content creation. This suggests that the video may be a demonstration of AI-generated human-like avatars or synthetic media.\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"model = ChatQwen(model=\"qwen-vl-max-latest\")\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=[\n",
" {\n",
" \"type\": \"video_url\",\n",
" \"video_url\": {\"url\": \"https://example.com/video/1.mp4\"},\n",
" },\n",
" {\"type\": \"text\", \"text\": \"Can you tell me about this video?\"},\n",
" ]\n",
" )\n",
"]\n",
"\n",
"response = model.invoke(messages)\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatQwen features and configurations head to the [API reference](https://pypi.org/project/langchain-qwq/)"
]
},
{
"cell_type": "markdown",
"id": "ce1026e3",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -34,7 +34,7 @@
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ |✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | \n",
"| ✅ | ✅ | ✅ |❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
@@ -47,7 +47,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
@@ -91,7 +91,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
@@ -117,7 +117,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
@@ -126,10 +126,10 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"J'aime la programmation.\", additional_kwargs={'reasoning_content': 'Okay, the user wants me to translate \"I love programming.\" into French. Let me start by recalling the basic translation. The verb \"love\" in French is \"aimer\", and \"programming\" is \"la programmation\". So the literal translation would be \"J\\'aime la programmation.\" But wait, I should check if there\\'s any context or nuances I need to consider. The user mentioned they\\'re a helpful assistant, so maybe they want a more natural or commonly used phrase. Sometimes in French, people might use \"adorer\" instead of \"aimer\" for stronger emphasis, but \"aimer\" is more standard here. Also, the structure \"J\\'aime\" is correct for \"I love\". No need for any articles if it\\'s a general statement, but \"la programmation\" is a feminine noun, so the article is necessary. Let me confirm the gender of \"programmation\"—yes, it\\'s feminine. So \"la\" is correct. I think that\\'s it. The translation should be \"J\\'aime la programmation.\"'}, response_metadata={'model_name': 'qwq-plus'}, id='run--396edf0f-ab92-4317-99be-cc9f5377c312-0', usage_metadata={'input_tokens': 32, 'output_tokens': 229, 'total_tokens': 261, 'input_token_details': {}, 'output_token_details': {}})"
"AIMessage(content=\"J'aime la programmation.\", additional_kwargs={'reasoning_content': 'Okay, the user wants me to translate \"I love programming.\" into French. Let\\'s start by breaking down the sentence. The subject is \"I\", which in French is \"Je\". The verb is \"love\", which in this context is present tense, so \"aime\". The object is \"programming\". Now, \"programming\" in French can be \"la programmation\". \\n\\nWait, should it be \"programmation\" or \"programmation\"? Let me confirm the spelling. Yes, \"programmation\" is correct. Now, putting it all together: \"Je aime la programmation.\" Hmm, but in French, there\\'s a tendency to contract \"je\" and \"aime\". Wait, actually, \"je\" followed by a vowel sound usually takes \"j\\'\". So it should be \"J\\'aime la programmation.\" \\n\\nLet me double-check. \"J\\'aime\" is the correct contraction for \"I love\". The definite article \"la\" is needed because \"programmation\" is a feminine noun. Yes, \"programmation\" is a feminine noun, so \"la\" is correct. \\n\\nIs there any other way to say it? Maybe \"J\\'adore la programmation\" for \"I love\" in a stronger sense, but the user didn\\'t specify the intensity. Since the original is straightforward, \"J\\'aime la programmation.\" is the direct translation. \\n\\nI think that\\'s it. No mistakes there. So the final translation should be \"J\\'aime la programmation.\"'}, response_metadata={'model_name': 'qwq-plus'}, id='run-5045cd6a-edbd-4b2f-bf24-b7bdf3777fb9-0', usage_metadata={'input_tokens': 32, 'output_tokens': 326, 'total_tokens': 358, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
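As the output above shows, QwQ separates its chain-of-thought from the final answer; a small sketch of reading both fields (assuming `llm` is the `ChatQwQ` instance from the preceding cells):

```python
ai_msg = llm.invoke(messages)

# The reasoning trace lands in additional_kwargs["reasoning_content"];
# the final answer stays in .content.
print(ai_msg.additional_kwargs.get("reasoning_content", "")[:200])
print(ai_msg.content)
```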
@@ -159,17 +159,17 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={'reasoning_content': 'Okay, the user wants to translate \"I love programming.\" into German. Let\\'s start by breaking down the sentence. The subject is \"I,\" which translates to \"Ich\" in German. The verb \"love\" is \"liebe\" in present tense for the first person singular. Then \"programming\" is a noun. Now, in German, the word for programming, especially in the context of computer programming, is \"Programmierung.\" However, sometimes people might use \"Programmieren\" as well. Wait, but \"Programmierung\" is the noun form, so \"die Programmierung.\" The structure in German would be \"Ich liebe die Programmierung.\" Alternatively, could it be \"Programmieren\" as the verb in a nominalized form? Let me think. If you say \"Ich liebe das Programmieren,\" that\\'s also correct because \"das Programmieren\" is the gerundive form, which is commonly used for activities. So both are possible. Which one is more natural? Hmm. \"Das Programmieren\" might be more common in everyday language. Let me check some examples. For instance, \"I love cooking\" would be \"Ich liebe das Kochen.\" So following that pattern, \"Ich liebe das Programmieren\" would be the equivalent. Therefore, maybe \"Programmieren\" with the article \"das\" is better here. But the user might just want a direct translation without the article. Wait, the original sentence is \"I love programming,\" which is a noun, so in German, you need an article. So the correct translation would include \"das\" before the noun. So the correct sentence is \"Ich liebe das Programmieren.\" Alternatively, if they want to use the noun without an article, maybe in a more abstract sense, but I think \"das\" is necessary here. Let me confirm. Yes, in German, when using the noun form of a verb like this, you need the article. So the best translation is \"Ich liebe das Programmieren.\" I think that\\'s the most natural way to say it. Alternatively, \"Programmierung\" is more formal, but \"Programmieren\" is more commonly used in such contexts. So I\\'ll go with \"Ich liebe das Programmieren.\"'}, response_metadata={'model_name': 'qwq-plus'}, id='run--0ceaba8a-7842-48fb-8bec-eb96d2c83ed4-0', usage_metadata={'input_tokens': 28, 'output_tokens': 466, 'total_tokens': 494, 'input_token_details': {}, 'output_token_details': {}})"
"AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={'reasoning_content': 'Okay, the user wants me to translate \"I love programming.\" into German. Let me think. The verb \"love\" is \"lieben\" or \"mögen\" in German, but \"lieben\" is more like love, while \"mögen\" is prefer. Since it\\'s about programming, which is a strong affection, \"lieben\" is better. The subject is \"I\", which is \"ich\". Then \"programming\" is \"Programmierung\" or \"Coding\". But \"Programmierung\" is more formal. Alternatively, sometimes people say \"ich liebe es zu programmieren\" which is \"I love to program\". Hmm, maybe the direct translation would be \"Ich liebe die Programmierung.\" But maybe the more natural way is \"Ich liebe es zu programmieren.\" Let me check. Both are correct, but the second one might sound more natural in everyday speech. The user might prefer the concise version. Alternatively, maybe \"Ich liebe die Programmierung.\" is better. Wait, the original is \"programming\" as a noun. So using the noun form would be appropriate. So \"Ich liebe die Programmierung.\" But sometimes people also use \"Coding\" in German, like \"Ich liebe das Coding.\" But that\\'s more anglicism. Probably better to stick with \"Programmierung\". Alternatively, \"Programmieren\" as a noun. Oh right! \"Programmieren\" can be a noun when used in the accusative case. So \"Ich liebe das Programmieren.\" That\\'s correct and natural. Yes, that\\'s the best translation. So the answer is \"Ich liebe das Programmieren.\"'}, response_metadata={'model_name': 'qwq-plus'}, id='run-2c418451-51d8-4319-8269-2ce129363a1a-0', usage_metadata={'input_tokens': 28, 'output_tokens': 341, 'total_tokens': 369, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -217,7 +217,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "15fb6a6d",
"metadata": {},
"outputs": [
@@ -225,13 +225,12 @@
"name": "stdout",
"output_type": "stream",
"text": [
"content='' additional_kwargs={'reasoning_content': 'Okay, the user is asking \"What\\'s 5 times forty two\". Let me break this down. They want the product of 5 and 42. The function provided is called multiply, which takes two integers. First, I need to parse the numbers from the question. The first integer is 5, straightforward. The second is forty two, which is 42 in numeric form. So I should call the multiply function with first_int=5 and second_int=42. Let me double-check the parameters: both are required and of type integer. Yep, that\\'s correct. No examples given, but the function should handle these numbers. Alright, time to format the tool call.'} response_metadata={'model_name': 'qwq-plus'} id='run--3c5ff46c-3fc8-4caf-a665-2405aeef2948-0' tool_calls=[{'name': 'multiply', 'args': {'first_int': 5, 'second_int': 42}, 'id': 'call_33fb94c6662d44928e56ec', 'type': 'tool_call'}] usage_metadata={'input_tokens': 176, 'output_tokens': 173, 'total_tokens': 349, 'input_token_details': {}, 'output_token_details': {}}\n"
"content='' additional_kwargs={'reasoning_content': 'Okay, the user is asking \"What\\'s 5 times forty two\". Let me break this down. First, I need to identify the numbers involved. The first number is 5, which is straightforward. The second number is forty two, which is 42 in digits. The operation they want is multiplication.\\n\\nLooking at the tools provided, there\\'s a function called multiply that takes two integers. So I should use that. The parameters are first_int and second_int. \\n\\nI need to convert \"forty two\" to 42. Since the function requires integers, both numbers should be in integer form. So 5 and 42. \\n\\nNow, I\\'ll structure the tool call. The function name is multiply, and the arguments should be first_int: 5 and second_int: 42. I\\'ll make sure the JSON is correctly formatted without any syntax errors. Let me double-check the parameters to ensure they\\'re required and of the right type. Yep, both are required and integers. \\n\\nNo examples were provided, but the function\\'s purpose is clear. So the correct tool call should be to multiply those two numbers. I think that\\'s all. No other functions are needed here.'} response_metadata={'model_name': 'qwq-plus'} id='run-638895aa-fdde-4567-bcfa-7d8e5d4f24af-0' tool_calls=[{'name': 'multiply', 'args': {'first_int': 5, 'second_int': 42}, 'id': 'call_d088275851c140529ed2ad', 'type': 'tool_call'}] usage_metadata={'input_tokens': 176, 'output_tokens': 277, 'total_tokens': 453, 'input_token_details': {}, 'output_token_details': {}}\n"
]
}
],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"from langchain_qwq import ChatQwQ\n",
"\n",
"\n",
@@ -250,170 +249,6 @@
"print(msg)"
]
},
{
"cell_type": "markdown",
"id": "88aa9980-1bd6-4cc9-aeac-4c9011e617fc",
"metadata": {},
"source": [
"### Vision support"
]
},
{
"cell_type": "markdown",
"id": "79e372e3-7050-4038-bf88-d1e8f5ddae09",
"metadata": {},
"source": [
"#### Image"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "c2372365-7208-42f9-a147-deffdc390313",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image depicts a charming Christmas-themed arrangement set against a rustic wooden backdrop, creating a warm and festive atmosphere. Here's a detailed breakdown:\n",
"\n",
"### **Background & Setting**\n",
"- **Wooden Wall**: A horizontally paneled wooden wall forms the backdrop, giving a cozy, cabin-like feel.\n",
"- **Foreground Surface**: The decorations rest on a smooth wooden surface (likely a table or desk), enhancing the natural, earthy tone of the scene.\n",
"\n",
"### **Key Elements**\n",
"1. **Snow-Covered Trees**:\n",
" - Two miniature evergreen trees dusted with artificial snow flank the sides of the arrangement, evoking a wintry landscape.\n",
"\n",
"2. **String Lights**:\n",
" - A strand of white bulb lights stretches across the back, weaving through the decor and adding a soft, glowing ambiance.\n",
"\n",
"3. **Ornamental Sphere**:\n",
" - A reflective gold sphere with striped patterns sits near the center-left, catching and dispersing light.\n",
"\n",
"4. **\"Merry Christmas\" Sign**:\n",
" - A wooden cutout spelling \"MERRY CHRISTMAS\" in capital letters serves as the focal point. The letters feature star-shaped cutouts, allowing light to shine through.\n",
"\n",
"5. **Reindeer Figurine**:\n",
" - A brown reindeer with white facial markings and large antlers stands prominently on the right, facing forward and adding a playful touch.\n",
"\n",
"6. **Candle Holders**:\n",
" - Three birch-bark candle holders are arranged in front of the reindeer. Two hold lit tealights, casting a warm glow, while the third remains unlit.\n",
"\n",
"7. **Natural Accents**:\n",
" - **Pinecones**: Scattered throughout, adding texture and a woodland feel.\n",
" - **Berry Branches**: Red-berried greenery (likely holly) weaves behind the sign, introducing vibrant color.\n",
" - **Pine Branches**: Fresh-looking branches enhance the seasonal authenticity.\n",
"\n",
"8. **Gift Box**:\n",
" - A small golden gift box with a bow sits near the left, symbolizing holiday gifting.\n",
"\n",
"9. **Textile Detail**:\n",
" - A fabric piece with \"Christmas\" embroidered on it peeks from the left, partially obscured but contributing to the thematic unity.\n",
"\n",
"### **Color Palette & Mood**\n",
"- **Warm Tones**: Browns (wood, reindeer), golds (ornament, gift box), and whites (snow, lights) dominate, creating a inviting glow.\n",
"- **Cool Accents**: Greens (trees, branches) and reds (berries) provide contrast, balancing the warmth.\n",
"- **Lighting**: The lit candles and string lights cast a soft, flickering illumination, enhancing the intimate, celebratory vibe.\n",
"\n",
"### **Composition**\n",
"- **Balance**: The arrangement is symmetrical, with trees and candles on either side framing the central sign and reindeer.\n",
"- **Depth**: Layered elements (trees, lights, branches) create visual interest, drawing the eye inward.\n",
"\n",
"This image beautifully captures the essence of a cozy, handmade Christmas display, blending traditional symbols with natural textures to evoke nostalgia and joy.\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"model = ChatQwQ(model=\"qvq-max-latest\")\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=[\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"https://example.com/image/image.png\"},\n",
" },\n",
" {\"type\": \"text\", \"text\": \"What do you see in this image?\"},\n",
" ]\n",
" )\n",
"]\n",
"\n",
"response = model.invoke(messages)\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "9242acf7-9a66-40b1-98b5-b113d28fc6ec",
"metadata": {},
"source": [
"#### Video"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f0a9e542-7a85-44d2-8576-14314a50d948",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image provided is a still frame from a video featuring a young woman with short brown hair and bangs, smiling brightly at the camera. Here's a detailed breakdown:\n",
"\n",
"### **Description of the Image:**\n",
"- **Subject:** A youthful female with a cheerful expression, showcasing a wide smile with visible teeth.\n",
"- **Appearance:** \n",
" - Short, neatly styled brown hair with blunt bangs.\n",
" - Natural makeup emphasizing clear skin and subtle eye makeup.\n",
" - Wearing a white round-neck shirt layered under a light pink knitted cardigan.\n",
" - Accessories include a delicate necklace with a small pendant and small earrings.\n",
"- **Background:** An outdoor setting with blurred architectural elements (e.g., buildings with columns), suggesting a campus, park, or residential area.\n",
"- **Lighting:** Soft, natural daylight, enhancing the warm and inviting atmosphere.\n",
"\n",
"### **Key Details About the Video:**\n",
"1. **AI-Generated Content:** The watermark (\"通义·AI合成\" / \"Tongyi AI Synthesis\") indicates this image was created using Alibaba's Tongyi AI model, known for generating hyper-realistic visuals.\n",
"2. **Style & Purpose:** The high-quality, photorealistic style suggests the video may demonstrate AI imaging capabilities, potentially for advertising, entertainment, or educational purposes.\n",
"3. **Context Clues:** The subject's casual yet polished look and the pleasant outdoor setting imply a positive, approachable theme (e.g., lifestyle, technology promotion, or social media content).\n",
"\n",
"### **What We Can Infer About the Video:**\n",
"- Likely showcases dynamic AI-generated scenes featuring the same character in various poses or interactions.\n",
"- May highlight realism in digital avatars or synthetic media.\n",
"- Could be part of a demo reel, tutorial, or creative project emphasizing AI artistry.\n",
"\n",
"### **Limitations:**\n",
"- As only a single frame is provided, specifics about the video's length, narrative, or additional scenes cannot be determined.\n",
"\n",
"If you have more frames or context, feel free to share! 😊\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"model = ChatQwQ(model=\"qvq-max-latest\")\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=[\n",
" {\n",
" \"type\": \"video_url\",\n",
" \"video_url\": {\"url\": \"https://example.com/video/1.mp4\"},\n",
" },\n",
" {\"type\": \"text\", \"text\": \"Can you tell me about this video?\"},\n",
" ]\n",
" )\n",
"]\n",
"\n",
"response = model.invoke(messages)\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
@@ -423,19 +258,11 @@
"\n",
"For detailed documentation of all ChatQwQ features and configurations head to the [API reference](https://pypi.org/project/langchain-qwq/)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "824f0c67-5f3b-4079-bc17-2cf92755bdd5",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -449,7 +276,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.13.1"
}
},
"nbformat": 4,
@@ -7,7 +7,7 @@
"source": [
"# Azure AI Data\n",
"\n",
">[Azure AI Foundry (formerly Azure AI Studio)](https://ai.azure.com/) provides the capability to upload data assets to cloud storage and register existing data assets from the following sources:\n",
">[Azure AI Studio](https://ai.azure.com/) provides the capability to upload data assets to cloud storage and register existing data assets from the following sources:\n",
">\n",
">- `Microsoft OneLake`\n",
">- `Azure Blob Storage`\n",
@@ -2,94 +2,67 @@
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"# Oracle Autonomous Database\n",
"\n",
"Oracle Autonomous Database is a cloud database that uses machine learning to automate database tuning, security, backups, updates, and other routine management tasks traditionally performed by DBAs.\n",
"Oracle autonomous database is a cloud database that uses machine learning to automate database tuning, security, backups, updates, and other routine management tasks traditionally performed by DBAs.\n",
"\n",
"This notebook covers how to load documents from Oracle Autonomous Database.\n",
"This notebook covers how to load documents from oracle autonomous database, the loader supports connection with connection string or tns configuration.\n",
"\n",
"## Prerequisites\n",
"1. A database that python-oracledb's default 'Thin' mode can connect to. This is true of Oracle Autonomous Database, see [python-oracledb Architecture](https://python-oracledb.readthedocs.io/en/latest/user_guide/introduction.html#architecture).\n"
]
"1. Database runs in a 'Thin' mode:\n",
" https://python-oracledb.readthedocs.io/en/latest/user_guide/appendix_b.html\n",
"2. `pip install oracledb`:\n",
" https://python-oracledb.readthedocs.io/en/latest/user_guide/installation.html"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Instructions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You'll need to install `langchain-oracledb` with `python -m pip install -U langchain-oracledb` to use this integration.\n",
"\n",
"The `python-oracledb` driver is installed automatically as a dependency of langchain-oracledb."
]
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"# python -m pip install -U langchain-oracledb"
]
"pip install oracledb"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain_oracledb.document_loaders import OracleAutonomousDatabaseLoader\n",
"from langchain_community.document_loaders import OracleAutonomousDatabaseLoader\n",
"from settings import s"
]
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"With mutual TLS authentication (mTLS), wallet_location and wallet_password parameters are required to create the connection. See python-oracledb documentation [Connecting to Oracle Cloud Autonomous Databases](https://python-oracledb.readthedocs.io/en/latest/user_guide/connection_handling.html#connecting-to-oracle-cloud-autonomous-databases)."
]
"With mutual TLS authentication (mTLS), wallet_location and wallet_password are required to create the connection, user can create connection by providing either connection string or tns configuration details."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"SQL_QUERY = \"select prod_id, time_id from sh.costs fetch first 5 rows only\"\n",
@@ -102,7 +75,7 @@
" config_dir=s.CONFIG_DIR,\n",
" wallet_location=s.WALLET_LOCATION,\n",
" wallet_password=s.PASSWORD,\n",
" dsn=s.DSN,\n",
" tns_name=s.TNS_NAME,\n",
")\n",
"doc_1 = doc_loader_1.load()\n",
"\n",
@@ -111,35 +84,29 @@
" user=s.USERNAME,\n",
" password=s.PASSWORD,\n",
" schema=s.SCHEMA,\n",
" dsn=s.DSN,\n",
" connection_string=s.CONNECTION_STRING,\n",
" wallet_location=s.WALLET_LOCATION,\n",
" wallet_password=s.PASSWORD,\n",
")\n",
"doc_2 = doc_loader_2.load()"
]
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"With 1-way TLS authentication, only the database credentials and connection string are required to establish a connection.\n",
"The example below also shows passing bind variable values with the argument \"parameters\"."
]
"With TLS authentication, wallet_location and wallet_password are not required.\n",
"Bind variable option is provided by argument \"parameters\"."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"SQL_QUERY = \"select channel_id, channel_desc from sh.channels where channel_desc = :1 fetch first 5 rows only\"\n",
@@ -150,7 +117,7 @@
" password=s.PASSWORD,\n",
" schema=s.SCHEMA,\n",
" config_dir=s.CONFIG_DIR,\n",
" dsn=s.DSN,\n",
" tns_name=s.TNS_NAME,\n",
" parameters=[\"Direct Sales\"],\n",
")\n",
"doc_3 = doc_loader_3.load()\n",
@@ -160,32 +127,35 @@
" user=s.USERNAME,\n",
" password=s.PASSWORD,\n",
" schema=s.SCHEMA,\n",
" dsn=s.DSN,\n",
" connection_string=s.CONNECTION_STRING,\n",
" parameters=[\"Direct Sales\"],\n",
")\n",
"doc_4 = doc_loader_4.load()"
|
||||
]
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
"version": 2
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.12.11"
|
||||
"pygments_lexer": "ipython2",
|
||||
"version": "2.7.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
"nbformat_minor": 0
|
||||
}
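Putting the fragments above together, a complete call might look like the sketch below. The keyword names are taken from the cells in this diff; the `query` argument name and all credential values are assumptions to check against the `langchain-oracledb` reference.

```python
# A consolidated sketch of the mTLS + bind-variable flow shown above.
# Credential values are placeholders; the `query` keyword is an assumption.
from langchain_oracledb.document_loaders import OracleAutonomousDatabaseLoader

SQL_QUERY = (
    "select channel_id, channel_desc from sh.channels "
    "where channel_desc = :1 fetch first 5 rows only"
)

loader = OracleAutonomousDatabaseLoader(
    query=SQL_QUERY,
    user="<username>",
    password="<password>",
    schema="<schema>",
    config_dir="<config-dir>",
    wallet_location="<wallet-dir>",
    wallet_password="<wallet-password>",
    tns_name="<tns-name>",
    parameters=["Direct Sales"],  # bound to :1 in the query
)
docs = loader.load()
```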
|
||||
|
||||
@@ -42,9 +42,7 @@
|
||||
"source": [
|
||||
"### Prerequisites\n",
|
||||
"\n",
|
||||
"You'll need to install `langchain-oracledb` with `python -m pip install -U langchain-oracledb` to use this integration.\n",
|
||||
"\n",
|
||||
"The `python-oracledb` driver is installed automatically as a dependency of langchain-oracledb."
|
||||
"Please install Oracle Python Client driver to use Langchain with Oracle AI Vector Search. "
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -53,7 +51,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# python -m pip install -U langchain-oracledb"
|
||||
"# pip install oracledb"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -156,7 +154,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oracledb.document_loaders.oracleai import OracleDocLoader\n",
|
||||
"from langchain_community.document_loaders.oracleai import OracleDocLoader\n",
|
||||
"from langchain_core.documents import Document\n",
|
||||
"\n",
|
||||
"\"\"\"\n",
|
||||
@@ -201,7 +199,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oracledb.document_loaders.oracleai import OracleTextSplitter\n",
|
||||
"from langchain_community.document_loaders.oracleai import OracleTextSplitter\n",
|
||||
"from langchain_core.documents import Document\n",
|
||||
"\n",
|
||||
"\"\"\"\n",
|
||||
|
||||
@@ -1,334 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Oxylabs"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"[Oxylabs](https://oxylabs.io/) is a web intelligence collection platform that enables companies worldwide to unlock data-driven insights.\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"\n",
|
||||
"Oxylabs document loader allows to load data from search engines, e-commerce sites, travel platforms, and any other website. It supports geolocation, browser rendering, data parsing, multiple user agents and many more parameters. Check out [Oxylabs documentation](https://developers.oxylabs.io/scraping-solutions/web-scraper-api) for more information.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"### Integration details\n",
|
||||
"\n",
|
||||
"| Class | Package | Local | Serializable | Pricing |\n",
|
||||
"|:--------------|:------------------------------------------------------------------|:-----:|:------------:|:-----------------------------:|\n",
|
||||
"| OxylabsLoader | [langchain-oxylabs](https://github.com/oxylabs/langchain-oxylabs) | ✅ | ❌ | Free 5,000 results for 1 week |\n",
|
||||
"\n",
|
||||
"### Loader features\n",
|
||||
"| Document Lazy Loading |\n",
|
||||
"|:---------------------:|\n",
|
||||
"| ✅ |\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Install the required dependencies.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"scrolled": true
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -U langchain-oxylabs"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Credentials\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Set up the proper API keys and environment variables.\n",
|
||||
"Create your API user credentials: Sign up for a free trial or purchase the product\n",
|
||||
"in the [Oxylabs dashboard](https://dashboard.oxylabs.io/en/registration)\n",
|
||||
"to create your API user credentials (OXYLABS_USERNAME and OXYLABS_PASSWORD)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"OXYLABS_USERNAME\"] = getpass.getpass(\"Enter your Oxylabs username: \")\n",
|
||||
"os.environ[\"OXYLABS_PASSWORD\"] = getpass.getpass(\"Enter your Oxylabs password: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialization"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-06T10:57:51.630011Z",
|
||||
"start_time": "2025-08-06T10:57:51.623814Z"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oxylabs import OxylabsLoader"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-06T10:57:53.685413Z",
|
||||
"start_time": "2025-08-06T10:57:53.628859Z"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"loader = OxylabsLoader(\n",
|
||||
" urls=[\n",
|
||||
" \"https://sandbox.oxylabs.io/products/1\",\n",
|
||||
" \"https://sandbox.oxylabs.io/products/2\",\n",
|
||||
" ],\n",
|
||||
" params={\"markdown\": True},\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": "## Load"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-06T10:59:51.487327Z",
|
||||
"start_time": "2025-08-06T10:59:48.592743Z"
|
||||
}
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"2751\n",
|
||||
"[](/)\n",
|
||||
"\n",
|
||||
"Game platforms:\n",
|
||||
"\n",
|
||||
"* **All**\n",
|
||||
"\n",
|
||||
"* [Nintendo platform](/products/category/nintendo)\n",
|
||||
"\n",
|
||||
"+ wii\n",
|
||||
"+ wii-u\n",
|
||||
"+ nintendo-64\n",
|
||||
"+ switch\n",
|
||||
"+ gamecube\n",
|
||||
"+ game-boy-advance\n",
|
||||
"+ 3ds\n",
|
||||
"+ ds\n",
|
||||
"\n",
|
||||
"* [Xbox platform](/products/category/xbox-platform)\n",
|
||||
"\n",
|
||||
"* **Dreamcast**\n",
|
||||
"\n",
|
||||
"* [Playstation platform](/products/category/playstation-platform)\n",
|
||||
"\n",
|
||||
"* **Pc**\n",
|
||||
"\n",
|
||||
"* **Stadia**\n",
|
||||
"\n",
|
||||
"Go Back\n",
|
||||
"\n",
|
||||
"Note!This is a sandbox website used for web scraping. Information listed in this website does not have any real meaning and should not be associated with the actual products.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"## The Legend of Zelda: Ocarina of Time\n",
|
||||
"\n",
|
||||
"**Developer:** Nintendo**Platform:****Type:** singleplayer\n",
|
||||
"\n",
|
||||
"As a young boy, Link is tricked by Ganondorf, the King of the Gerudo Thieves. The evil human uses Link to g\n",
|
||||
"5542\n",
|
||||
"[](/)\n",
|
||||
"\n",
|
||||
"Game platforms:\n",
|
||||
"\n",
|
||||
"* **All**\n",
|
||||
"\n",
|
||||
"* [Nintendo platform](/products/category/nintendo)\n",
|
||||
"\n",
|
||||
"+ wii\n",
|
||||
"+ wii-u\n",
|
||||
"+ nintendo-64\n",
|
||||
"+ switch\n",
|
||||
"+ gamecube\n",
|
||||
"+ game-boy-advance\n",
|
||||
"+ 3ds\n",
|
||||
"+ ds\n",
|
||||
"\n",
|
||||
"* [Xbox platform](/products/category/xbox-platform)\n",
|
||||
"\n",
|
||||
"* **Dreamcast**\n",
|
||||
"\n",
|
||||
"* [Playstation platform](/products/category/playstation-platform)\n",
|
||||
"\n",
|
||||
"* **Pc**\n",
|
||||
"\n",
|
||||
"* **Stadia**\n",
|
||||
"\n",
|
||||
"Go Back\n",
|
||||
"\n",
|
||||
"Note!This is a sandbox website used for web scraping. Information listed in this website does not have any real meaning and should not be associated with the actual products.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"## Super Mario Galaxy\n",
|
||||
"\n",
|
||||
"**Developer:** Nintendo**Platform:****Type:** singleplayer\n",
|
||||
"\n",
|
||||
"[Metacritic's 2007 Wii Game of the Year] The ultimate Nintendo hero is taking the ultimate step ... out into space. Join Mario as he ushers in a new era of video games, de\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"for document in loader.load():\n",
|
||||
" print(document.page_content[:1000])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"metadata": {},
|
||||
"cell_type": "markdown",
|
||||
"source": "## Lazy Load"
|
||||
},
|
||||
{
|
||||
"metadata": {},
|
||||
"cell_type": "code",
|
||||
"outputs": [],
|
||||
"execution_count": null,
|
||||
"source": [
|
||||
"for document in loader.lazy_load():\n",
|
||||
" print(document.page_content[:1000])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Advanced examples\n",
|
||||
"\n",
|
||||
"The following examples show the usage of `OxylabsLoader` with geolocation, currency, pagination and user agent parameters for Amazon Search and Google Search sources."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-06T11:04:19.901122Z",
|
||||
"start_time": "2025-08-06T11:04:19.838933Z"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"loader = OxylabsLoader(\n",
|
||||
" queries=[\"gaming headset\", \"gaming chair\", \"computer mouse\"],\n",
|
||||
" params={\n",
|
||||
" \"source\": \"amazon_search\",\n",
|
||||
" \"parse\": True,\n",
|
||||
" \"geo_location\": \"DE\",\n",
|
||||
" \"currency\": \"EUR\",\n",
|
||||
" \"pages\": 3,\n",
|
||||
" },\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 23,
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-06T11:07:17.648142Z",
|
||||
"start_time": "2025-08-06T11:07:17.595629Z"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"loader = OxylabsLoader(\n",
|
||||
" queries=[\"europe gdp per capita\", \"us gdp per capita\"],\n",
|
||||
" params={\n",
|
||||
" \"source\": \"google_search\",\n",
|
||||
" \"parse\": True,\n",
|
||||
" \"geo_location\": \"Paris, France\",\n",
|
||||
" \"user_agent_type\": \"mobile\",\n",
|
||||
" },\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## API reference\n",
|
||||
"\n",
|
||||
"[More information about this package.](https://github.com/oxylabs/langchain-oxylabs)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.9"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -132,12 +132,13 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.documents import Document\n",
|
||||
"from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
|
||||
"\n",
|
||||
"# from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
|
||||
"from langchain_openai import ChatOpenAI\n",
|
||||
"\n",
|
||||
"# Define the LLMGraphTransformer\n",
|
||||
|
||||
@@ -548,12 +548,12 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.documents import Document\n",
|
||||
"from langchain_experimental.graph_transformers import LLMGraphTransformer"
|
||||
"# from langchain_experimental.graph_transformers import LLMGraphTransformer"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -1,357 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: AI/ML API\n",
|
||||
"---"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "c74887ead73c5eb4"
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"# AimlapiLLM\n",
|
||||
"\n",
|
||||
"This page will help you get started with AI/ML API [text completion models](/docs/concepts/text_llms). For detailed documentation of all AimlapiLLM features and configurations, head to the [API reference](https://docs.aimlapi.com/?utm_source=langchain&utm_medium=github&utm_campaign=integration).\n",
|
||||
"\n",
|
||||
"AI/ML API provides access to **300+ models** (Deepseek, Gemini, ChatGPT, etc.) via high-uptime and high-rate API."
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "c1895707cde83d90"
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Overview\n",
|
||||
"### Integration details\n",
|
||||
"\n",
|
||||
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
|
||||
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
|
||||
"| AimlapiLLM | langchain-aimlapi | ✅ | beta | ❌ |  |  |"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "72b0a510b6eac641"
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"### Model features\n",
|
||||
"| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |\n",
|
||||
"|:------------:|:-----------------:|:---------:|:-----------:|:-----------:|:-----------:|:---------------------:|:------------:|:-----------:|:--------:|\n",
|
||||
"| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |\n"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "4b87089494d8877d"
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"To access AI/ML API models, sign up at [aimlapi.com](https://aimlapi.com/app/?utm_source=langchain&utm_medium=github&utm_campaign=integration), generate an API key, and set the `AIMLAPI_API_KEY` environment variable:"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "2c45017efcc36569"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"if \"AIMLAPI_API_KEY\" not in os.environ:\n",
|
||||
" os.environ[\"AIMLAPI_API_KEY\"] = getpass.getpass(\"Enter your AI/ML API key: \")"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-07T07:24:48.681319Z",
|
||||
"start_time": "2025-08-07T07:24:47.490206Z"
|
||||
}
|
||||
},
|
||||
"id": "86b05af725c45941",
|
||||
"execution_count": 1
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"### Installation\n",
|
||||
"Install the `langchain-aimlapi` package:"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "51171ba92cb2b382"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Note: you may need to restart the kernel to use updated packages.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%pip install -qU langchain-aimlapi"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-07T07:18:08.606708Z",
|
||||
"start_time": "2025-08-07T07:17:59.901457Z"
|
||||
}
|
||||
},
|
||||
"id": "2b15cbaf7d5e1560",
|
||||
"execution_count": 2
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Instantiation\n",
|
||||
"Now we can instantiate the `AimlapiLLM` model and generate text completions:"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "e94379f9d37fe6b3"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_aimlapi import AimlapiLLM\n",
|
||||
"\n",
|
||||
"llm = AimlapiLLM(\n",
|
||||
" model=\"gpt-3.5-turbo-instruct\",\n",
|
||||
" temperature=0.5,\n",
|
||||
" max_tokens=256,\n",
|
||||
")"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-07T07:46:52.875867Z",
|
||||
"start_time": "2025-08-07T07:46:52.869961Z"
|
||||
}
|
||||
},
|
||||
"id": "8a3af681997723b0",
|
||||
"execution_count": 23
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Invocation\n",
|
||||
"You can invoke the model with a prompt:"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "c983ab1d95887e8f"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"Bubble sort is a simple sorting algorithm that repeatedly steps through the list to be sorted, compares each pair of adjacent items and swaps them if they are in the wrong order. This process is repeated until the entire list is sorted.\n",
|
||||
"\n",
|
||||
"The algorithm gets its name from the way smaller elements \"bubble\" to the top of the list. It is commonly used for educational purposes due to its simplicity, but it is not a very efficient sorting algorithm for large data sets.\n",
|
||||
"\n",
|
||||
"Here is an implementation of the bubble sort algorithm in Python:\n",
|
||||
"\n",
|
||||
"1. Start by defining a function that takes in a list as its argument.\n",
|
||||
"2. Set a variable \"swapped\" to True, indicating that a swap has occurred.\n",
|
||||
"3. Create a while loop that runs as long as the \"swapped\" variable is True.\n",
|
||||
"4. Inside the loop, set the \"swapped\" variable to False.\n",
|
||||
"5. Create a for loop that iterates through the list, starting from the first element and ending at the second to last element.\n",
|
||||
"6. Inside the for loop, compare the current element with the next element. If the current element is larger than the next element, swap them and set the \"swapped\" variable to True.\n",
|
||||
"7. After the for loop, if the \"swapped\" variable\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"response = llm.invoke(\"Explain the bubble sort algorithm in Python.\")\n",
|
||||
"print(response)"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-07T07:46:57.209950Z",
|
||||
"start_time": "2025-08-07T07:46:53.935975Z"
|
||||
}
|
||||
},
|
||||
"id": "9a193081f431a42a",
|
||||
"execution_count": 24
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Streaming Invocation\n",
|
||||
"You can also stream responses token-by-token:"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "1afedb28f556c7bd"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
" \n",
|
||||
"\n",
|
||||
"1. Python\n",
|
||||
"Python has been consistently growing in popularity and has become one of the most widely used programming languages in recent years. It is used for a wide range of applications such as web development, data analysis, machine learning, and artificial intelligence. Its simple syntax and readability make it an attractive choice for beginners and experienced programmers alike. With the rise of data-driven technology and automation, Python is projected to be the most in-demand language in 2025.\n",
|
||||
"\n",
|
||||
"2. JavaScript\n",
|
||||
"JavaScript continues to dominate the web development scene and is expected to maintain its position as a top programming language in 2025. With the increasing use of front-end frameworks like React and Angular, JavaScript is crucial for building dynamic and interactive user interfaces. Additionally, the rise of serverless architecture and the popularity of Node.js make JavaScript an essential language for both front-end and back-end development.\n",
|
||||
"\n",
|
||||
"3. Go\n",
|
||||
"Go, also known as Golang, is a relatively new programming language developed by Google. It is designed for"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"llm = AimlapiLLM(\n",
|
||||
" model=\"gpt-3.5-turbo-instruct\",\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"for chunk in llm.stream(\"List top 5 programming languages in 2025 with reasons.\"):\n",
|
||||
" print(chunk, end=\"\", flush=True)"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-07T07:49:25.223233Z",
|
||||
"start_time": "2025-08-07T07:49:22.101498Z"
|
||||
}
|
||||
},
|
||||
"id": "a132c9183f648fb4",
|
||||
"execution_count": 26
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## API reference\n",
|
||||
"\n",
|
||||
"For detailed documentation of all AimlapiLLM features and configurations, visit the [API Reference](https://docs.aimlapi.com/?utm_source=langchain&utm_medium=github&utm_campaign=integration).\n"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "7b4ab33058dc0974"
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Chaining\n",
|
||||
"\n",
|
||||
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts/lcel)"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "900f36a35477c8ae"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.prompts import PromptTemplate\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
|
||||
"chain = prompt | llm"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-07T07:49:34.857042Z",
|
||||
"start_time": "2025-08-07T07:49:34.853032Z"
|
||||
}
|
||||
},
|
||||
"id": "d7f10052eb4ff249",
|
||||
"execution_count": 27
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": "\"\\n\\nWhy do bears have fur coats?\\n\\nBecause they'd look silly in sweaters! \""
|
||||
},
|
||||
"execution_count": 28,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"topic\": \"bears\"})"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-07T07:49:48.565804Z",
|
||||
"start_time": "2025-08-07T07:49:35.558426Z"
|
||||
}
|
||||
},
|
||||
"id": "184c333c60f94b05",
|
||||
"execution_count": 28
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## API reference\n",
|
||||
"\n",
|
||||
"For detailed documentation of all `AI/ML API` llm features and configurations head to the API reference: [API Reference](https://docs.aimlapi.com/?utm_source=langchain&utm_medium=github&utm_campaign=integration)"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "804f3a79a8046ec1"
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 2
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython2",
|
||||
"version": "2.7.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -44,7 +44,9 @@
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": "%pip install --upgrade --quiet llama-cpp-python"
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet llama-cpp-python"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
@@ -62,7 +64,9 @@
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": "!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
|
||||
"source": [
|
||||
"!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
@@ -76,7 +80,9 @@
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": "!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
|
||||
"source": [
|
||||
"!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
@@ -94,7 +100,9 @@
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": "!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
|
||||
"source": [
|
||||
"!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
@@ -108,7 +116,9 @@
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": "!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --no-binary :all: --no-cache-dir"
|
||||
"source": [
|
||||
"!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
@@ -164,7 +174,9 @@
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": "!python -m pip install -e . --force-reinstall --no-cache-dir"
|
||||
"source": [
|
||||
"!python -m pip install -e . --force-reinstall --no-cache-dir"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
@@ -706,4 +718,4 @@
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
}
|
||||
|
||||
@@ -22,30 +22,32 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"Ensure that the oci sdk and the langchain-community package are installed\n",
|
||||
"\n",
|
||||
":::caution You are currently on a page documenting the use of Oracle's text generation models. Which are deprecated."
|
||||
"Ensure that the oci sdk and the langchain-community package are installed"
|
||||
]
|
||||
},
|
||||
{
|
||||
"metadata": {},
|
||||
"cell_type": "code",
|
||||
"outputs": [],
|
||||
"execution_count": null,
|
||||
"source": "!pip install -U langchain-oci"
|
||||
},
|
||||
{
|
||||
"metadata": {},
|
||||
"cell_type": "markdown",
|
||||
"source": "## Usage"
|
||||
},
|
||||
{
|
||||
"metadata": {},
|
||||
"cell_type": "code",
|
||||
"outputs": [],
|
||||
"execution_count": null,
|
||||
"source": [
|
||||
"from langchain_oci.llms import OCIGenAI\n",
|
||||
"!pip install -U oci langchain-community"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Usage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.llms.oci_generative_ai import OCIGenAI\n",
|
||||
"\n",
|
||||
"llm = OCIGenAI(\n",
|
||||
" model_id=\"cohere.command\",\n",
|
||||
|
||||
@@ -1,215 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# RecallioMemory + LangChain Integration Demo\n",
|
||||
"A minimal notebook to show drop-in usage of RecallioMemory in LangChain (with scoped writes and recall)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install recallio langchain langchain-recallio openai"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup: API Keys & Imports"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_recallio.memory import RecallioMemory\n",
|
||||
"from langchain_openai import ChatOpenAI\n",
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"# Set your keys here or use environment variables\n",
|
||||
"RECALLIO_API_KEY = os.getenv(\"RECALLIO_API_KEY\", \"YOUR_RECALLIO_API_KEY\")\n",
|
||||
"OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\", \"YOUR_OPENAI_API_KEY\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize RecallioMemory"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"memory = RecallioMemory(\n",
|
||||
" project_id=\"project_abc\",\n",
|
||||
" api_key=RECALLIO_API_KEY,\n",
|
||||
" session_id=\"demo-session-001\",\n",
|
||||
" user_id=\"demo-user-42\",\n",
|
||||
" default_tags=[\"test\", \"langchain\"],\n",
|
||||
" return_messages=True,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Build a LangChain ConversationChain with RecallioMemory"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# You can swap in any supported LLM here\n",
|
||||
"llm = ChatOpenAI(api_key=OPENAI_API_KEY, temperature=0)\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [\n",
|
||||
" (\n",
|
||||
" \"system\",\n",
|
||||
" \"The following is a friendly conversation between a human and an AI. \"\n",
|
||||
" \"The AI is talkative and provides lots of specific details from its context. \"\n",
|
||||
" \"If the AI does not know the answer to a question, it truthfully says it does not know.\",\n",
|
||||
" ),\n",
|
||||
" (\"placeholder\", \"{history}\"), # RecallioMemory will fill this slot\n",
|
||||
" (\"human\", \"{input}\"),\n",
|
||||
" ]\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# LCEL chain that returns an AIMessage\n",
|
||||
"base_chain = prompt | llm\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Create a stateful chain using RecallioMemory\n",
|
||||
"def chat_with_memory(user_input: str):\n",
|
||||
" # Load conversation history from memory\n",
|
||||
" memory_vars = memory.load_memory_variables({\"input\": user_input})\n",
|
||||
"\n",
|
||||
" # Run the chain with history and user input\n",
|
||||
" response = base_chain.invoke(\n",
|
||||
" {\"input\": user_input, \"history\": memory_vars.get(\"history\", \"\")}\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
" # Save the conversation to memory\n",
|
||||
" memory.save_context({\"input\": user_input}, {\"output\": response.content})\n",
|
||||
"\n",
|
||||
" return response"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Example: Chat with Memory"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Bot: Hello Guillaume! It's nice to meet you. How can I assist you today?\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# First user message – note the AI remembers the name\n",
|
||||
"resp1 = chat_with_memory(\"Hi! My name is Guillaume. Remember that.\")\n",
|
||||
"print(\"Bot:\", resp1.content)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Bot: Your name is Guillaume.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Second user message – AI should recall the name from memory\n",
|
||||
"resp2 = chat_with_memory(\"What is my name?\")\n",
|
||||
"print(\"Bot:\", resp2.content)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## See What Is Stored in Recallio\n",
|
||||
"This is for debugging/demo only; in production, you wouldn't do this on every run."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Current memory variables: {'history': [HumanMessage(content='Name is Guillaume', additional_kwargs={}, response_metadata={})]}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(\"Current memory variables:\", memory.load_memory_variables({}))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Clear Memory (Optional Cleanup - Requires Manager level Key)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# memory.clear()\n",
|
||||
"# print(\"Memory cleared.\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"name": "python",
|
||||
"version": "3.10"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
26 docs/docs/integrations/providers/aerospike.mdx Normal file
@@ -0,0 +1,26 @@
|
||||
# Aerospike
|
||||
|
||||
>[Aerospike](https://aerospike.com/docs/vector) is a high-performance, distributed database known for its speed and scalability. It now supports vector storage and search, enabling retrieval of embedding vectors for machine learning and AI applications.
|
||||
> See the documentation for Aerospike Vector Search (AVS) [here](https://aerospike.com/docs/vector).
|
||||
|
||||
## Installation and Setup
|
||||
|
||||
Install the AVS Python SDK and the AVS LangChain vector store:
|
||||
|
||||
```bash
|
||||
pip install aerospike-vector-search langchain-aerospike
|
||||
```
|
||||
|
||||
See the documentation for the Python SDK [here](https://aerospike-vector-search-python-client.readthedocs.io/en/latest/index.html).
|
||||
The documentation for the AVS LangChain vector store is [here](https://langchain-aerospike.readthedocs.io/en/latest/).
|
||||
|
||||
## Vector Store
|
||||
|
||||
To import this vectorstore:
|
||||
|
||||
```python
|
||||
from langchain_aerospike.vectorstores import Aerospike
|
||||
```
|
||||
|
||||
See a usage example [here](https://python.langchain.com/docs/integrations/vectorstores/aerospike/).
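For a feel of the end-to-end flow, here is a minimal sketch. It assumes the standard `from_documents` classmethod and that an AVS client and namespace are passed as keyword arguments; confirm the exact signature in the linked docs.

```python
# A minimal sketch, not a verified example: assumes the standard
# `from_documents` classmethod plus `client` and `namespace` keyword
# arguments; check the langchain-aerospike docs for the exact signature.
from langchain_aerospike.vectorstores import Aerospike
from langchain_core.documents import Document

# client = ...      # an aerospike_vector_search client pointed at your AVS node
# embeddings = ...  # any LangChain Embeddings implementation

store = Aerospike.from_documents(
    [Document(page_content="Aerospike supports vector search.")],
    embeddings,
    client=client,
    namespace="test",
)
results = store.similarity_search("vector search", k=1)
```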
|
||||
|
||||
@@ -1,272 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"# AI/ML API LLM\n",
|
||||
"\n",
|
||||
"[AI/ML API](https://aimlapi.com/app/?utm_source=langchain&utm_medium=github&utm_campaign=integration) provides an API to query **300+ leading AI models** (Deepseek, Gemini, ChatGPT, etc.) with enterprise-grade performance.\n",
|
||||
"\n",
|
||||
"This example demonstrates how to use LangChain to interact with AI/ML API models."
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "bb9dcd1ba7b0f560"
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Installation"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "e4c35f60c565d369"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Requirement already satisfied: langchain-aimlapi in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (0.1.0)\n",
|
||||
"Requirement already satisfied: langchain-core<0.4.0,>=0.3.15 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-aimlapi) (0.3.67)\n",
|
||||
"Requirement already satisfied: langsmith>=0.3.45 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (0.4.4)\n",
|
||||
"Requirement already satisfied: tenacity!=8.4.0,<10.0.0,>=8.1.0 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (9.1.2)\n",
|
||||
"Requirement already satisfied: jsonpatch<2.0,>=1.33 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (1.33)\n",
|
||||
"Requirement already satisfied: PyYAML>=5.3 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (6.0.2)\n",
|
||||
"Requirement already satisfied: packaging<25,>=23.2 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (24.2)\n",
|
||||
"Requirement already satisfied: typing-extensions>=4.7 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (4.14.0)\n",
|
||||
"Requirement already satisfied: pydantic>=2.7.4 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (2.11.7)\n",
|
||||
"Requirement already satisfied: jsonpointer>=1.9 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from jsonpatch<2.0,>=1.33->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (3.0.0)\n",
|
||||
"Requirement already satisfied: httpx<1,>=0.23.0 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (0.28.1)\n",
|
||||
"Requirement already satisfied: orjson<4.0.0,>=3.9.14 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (3.10.18)\n",
|
||||
"Requirement already satisfied: requests<3,>=2 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (2.32.4)\n",
|
||||
"Requirement already satisfied: requests-toolbelt<2.0.0,>=1.0.0 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (1.0.0)\n",
|
||||
"Requirement already satisfied: zstandard<0.24.0,>=0.23.0 in c:\\users\\tuman\\appdata\\roaming\\python\\python312\\site-packages (from langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (0.23.0)\n",
|
||||
"Requirement already satisfied: annotated-types>=0.6.0 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic>=2.7.4->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (0.7.0)\n",
|
||||
"Requirement already satisfied: pydantic-core==2.33.2 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic>=2.7.4->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (2.33.2)\n",
|
||||
"Requirement already satisfied: typing-inspection>=0.4.0 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic>=2.7.4->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (0.4.1)\n",
|
||||
"Requirement already satisfied: anyio in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx<1,>=0.23.0->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (4.9.0)\n",
|
||||
"Requirement already satisfied: certifi in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx<1,>=0.23.0->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (2025.6.15)\n",
|
||||
"Requirement already satisfied: httpcore==1.* in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx<1,>=0.23.0->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (1.0.9)\n",
|
||||
"Requirement already satisfied: idna in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx<1,>=0.23.0->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (3.10)\n",
|
||||
"Requirement already satisfied: h11>=0.16 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpcore==1.*->httpx<1,>=0.23.0->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (0.16.0)\n",
|
||||
"Requirement already satisfied: charset_normalizer<4,>=2 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from requests<3,>=2->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (3.4.2)\n",
|
||||
"Requirement already satisfied: urllib3<3,>=1.21.1 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from requests<3,>=2->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (2.5.0)\n",
|
||||
"Requirement already satisfied: sniffio>=1.1 in c:\\users\\tuman\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from anyio->httpx<1,>=0.23.0->langsmith>=0.3.45->langchain-core<0.4.0,>=0.3.15->langchain-aimlapi) (1.3.1)\n",
|
||||
"Note: you may need to restart the kernel to use updated packages.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"[notice] A new release of pip is available: 25.0.1 -> 25.2\n",
|
||||
"[notice] To update, run: python.exe -m pip install --upgrade pip\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%pip install --upgrade langchain-aimlapi"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-06T15:22:02.570792Z",
|
||||
"start_time": "2025-08-06T15:21:32.377131Z"
|
||||
}
|
||||
},
|
||||
"id": "77d4a44909effc3c",
|
||||
"execution_count": 4
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Environment\n",
|
||||
"\n",
|
||||
"To use AI/ML API, you'll need an API key which you can generate at:\n",
|
||||
"[https://aimlapi.com/app/](https://aimlapi.com/app/?utm_source=langchain&utm_medium=github&utm_campaign=integration)\n",
|
||||
"\n",
|
||||
"You can pass it via `aimlapi_api_key` parameter or set as environment variable `AIMLAPI_API_KEY`."
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "c41eaf364c0b414f"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import getpass\n",
|
||||
"\n",
|
||||
"if \"AIMLAPI_API_KEY\" not in os.environ:\n",
|
||||
" os.environ[\"AIMLAPI_API_KEY\"] = getpass.getpass(\"Enter your AI/ML API key: \")"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-07T07:15:37.147559Z",
|
||||
"start_time": "2025-08-07T07:15:30.919160Z"
|
||||
}
|
||||
},
|
||||
"id": "421cd40d4e54de62",
|
||||
"execution_count": 3
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Example: Chat Model"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "d9cbe98904f4c5e4"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"The city that never sleeps! New York City is a treasure trove of excitement, entertainment, and adventure. Here are some fun things to do in NYC:\n",
|
||||
"\n",
|
||||
"**Iconic Attractions:**\n",
|
||||
"\n",
|
||||
"1. **Statue of Liberty and Ellis Island**: Take a ferry to Liberty Island to see the iconic statue up close and visit the Ellis Island Immigration Museum.\n",
|
||||
"2. **Central Park**: A tranquil oasis in the middle of Manhattan, perfect for a stroll, picnic, or bike ride.\n",
|
||||
"3. **Empire State Building**: For a panoramic view of the city, head to the observation deck of this iconic skyscraper.\n",
|
||||
"4. **The Metropolitan Museum of Art**: One of the world's largest and most famous museums, with a collection that spans over 5,000 years of human history.\n",
|
||||
"\n",
|
||||
"**Neighborhood Explorations:**\n",
|
||||
"\n",
|
||||
"1. **SoHo**: Known for its trendy boutiques, art galleries, and cast-iron buildings.\n",
|
||||
"2. **Greenwich Village**: A charming neighborhood with a rich history, known for its bohemian vibe, jazz clubs, and historic brownstones.\n",
|
||||
"3. **Chinatown and Little Italy**: Experience the vibrant cultures of these two iconic neighborhoods, with delicious food, street festivals, and unique shops.\n",
|
||||
"4. **Williamsburg, Brooklyn**: A hip neighborhood with a thriving arts scene, trendy bars, and some of the best restaurants in the city.\n",
|
||||
"\n",
|
||||
"**Food and Drink:**\n",
|
||||
"\n",
|
||||
"1. **Try a classic NYC slice of pizza**: Visit Lombardi's, Joe's Pizza, or Patsy's Pizzeria for a taste of the city's famous pizza.\n",
|
||||
"2. **Bagels with lox and cream cheese**: A classic NYC breakfast at a Jewish deli like Russ & Daughters Cafe or Ess-a-Bagel.\n",
|
||||
"3. **Food markets**: Visit Smorgasburg in Brooklyn or Chelsea Market for a variety of artisanal foods and drinks.\n",
|
||||
"4. **Rooftop bars**: Enjoy a drink with a view at 230 Fifth, the Top of the Strand, or the Roof at The Viceroy Central Park.\n",
|
||||
"**Performing Arts:**\n",
|
||||
"\n",
|
||||
"1. **Broadway shows**: Catch a musical or play on the Great White Way, like Hamilton, The Lion King, or Wicked.\n",
|
||||
"2. **Jazz clubs**: Visit Blue Note Jazz Club, the Village Vanguard, or the Jazz Standard for live music performances.\n",
|
||||
"3. **Lincoln Center**: Home to the New York City Ballet, the Metropolitan Opera, and the Juilliard School.\n",
|
||||
"4. **"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_aimlapi import ChatAimlapi\n",
|
||||
"\n",
|
||||
"chat = ChatAimlapi(\n",
|
||||
" model=\"meta-llama/Llama-3-70b-chat-hf\",\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Stream response\n",
|
||||
"for chunk in chat.stream(\"Tell me fun things to do in NYC\"):\n",
|
||||
" print(chunk.content, end=\"\", flush=True)\n",
|
||||
"\n",
|
||||
"# Or use invoke()\n",
|
||||
"# response = chat.invoke(\"Tell me fun things to do in NYC\")\n",
|
||||
"# print(response)"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-07T07:15:59.612289Z",
|
||||
"start_time": "2025-08-07T07:15:47.864231Z"
|
||||
}
|
||||
},
|
||||
"id": "3f73a8e113a58e9b",
|
||||
"execution_count": 4
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Example: Text Completion Model"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"id": "7aca59af5cadce80"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
" # Funkcja ponownie zwraca nową listę, bez zmienienia listy przekazanej jako argument w funkcji\n",
|
||||
" my_list = [16, 12, 16, 3, 2, 6]\n",
|
||||
" new_list = my_list[:]\n",
|
||||
" for x in range(len(new_list)):\n",
|
||||
" for y in range(len(new_list) - 1):\n",
|
||||
" if new_list[y] > new_list[y + 1]:\n",
|
||||
" new_list[y], new_list[y + 1] = new_list[y + 1], new_list[y]\n",
|
||||
" return new_list, my_list\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def bubble_sort_lib3(list): # Sortowanie z wykorzystaniem zewnętrznej biblioteki poza pętlą\n",
|
||||
" from itertools import permutations\n",
|
||||
" y = len(list)\n",
|
||||
" perms = []\n",
|
||||
" for a in range(0, y + 1):\n",
|
||||
" for subset in permutations(list, a):\n",
|
||||
" \n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_aimlapi import AimlapiLLM\n",
|
||||
"\n",
|
||||
"llm = AimlapiLLM(\n",
|
||||
" model=\"gpt-3.5-turbo-instruct\",\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"print(llm.invoke(\"def bubble_sort(): \"))"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"ExecuteTime": {
|
||||
"end_time": "2025-08-07T07:16:22.595703Z",
|
||||
"start_time": "2025-08-07T07:16:19.410881Z"
|
||||
}
|
||||
},
|
||||
"id": "2af3be417769efc3",
|
||||
"execution_count": 6
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 2
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython2",
|
||||
"version": "2.7.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -1,38 +0,0 @@
|
||||
# Anchor Browser
|
||||
|
||||
[Anchor](https://anchorbrowser.io?utm=langchain) is a platform for agentic AI browser automation. It solves the challenge of automating workflows for web applications that lack APIs or have limited API coverage, simplifying the creation, deployment, and management of browser-based automations and turning complex web interactions into simple API endpoints.
|
||||
|
||||
`langchain-anchorbrowser` provides three main tools:
|
||||
- `AnchorContentTool` - For web content extraction in Markdown or HTML format.
|
||||
- `AnchorScreenshotTool` - For web page screenshots.
|
||||
- `AnchorWebTaskTools` - To perform web tasks.
|
||||
|
||||
## Quickstart
|
||||
|
||||
### Installation
|
||||
|
||||
Install the package:
|
||||
|
||||
```bash
|
||||
pip install langchain-anchorbrowser
|
||||
```
|
||||
|
||||
### Usage
|
||||
|
||||
Import and use your intended tool. For the full list of available Anchor Browser tools, see the **Tool Features** table on the [Anchor Browser tool page](/docs/integrations/tools/anchor_browser).
|
||||
|
||||
```python
|
||||
from langchain_anchorbrowser import AnchorContentTool
|
||||
|
||||
# Get Markdown Content for https://www.anchorbrowser.io
|
||||
AnchorContentTool().invoke(
|
||||
{"url": "https://www.anchorbrowser.io", "format": "markdown"}
|
||||
)
|
||||
```
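The other tools follow the same `invoke` pattern. As a sketch only (the exact input schema is an assumption, so confirm it in the **Tool Features** table linked above):

```python
from langchain_anchorbrowser import AnchorScreenshotTool

# Sketch only: the "url" key mirrors AnchorContentTool's input; confirm the
# exact schema in the Tool Features table on the tool page linked above.
AnchorScreenshotTool().invoke({"url": "https://www.anchorbrowser.io"})
```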
|
||||
|
||||
## Additional Resources
|
||||
|
||||
- [PyPi](https://pypi.org/project/langchain-anchorbrowser)
|
||||
- [Github](https://github.com/anchorbrowser/langchain-anchorbrowser)
|
||||
- [Anchor Browser Docs](https://docs.anchorbrowser.io/introduction?utm=langchain)
|
||||
- [Anchor Browser API Reference](https://docs.anchorbrowser.io/api-reference/ai-tools/perform-web-task?utm=langchain)
|
||||
@@ -21,38 +21,6 @@ pip install deepeval
|
||||
|
||||
See an [example](/docs/integrations/callbacks/confident).
|
||||
|
||||
|
||||
## Modern Integration Example
|
||||
|
||||
Install the required packages:
|
||||
|
||||
```bash
|
||||
pip install deepeval langchain langchain-openai
|
||||
```
|
||||
|
||||
Authenticate with your API key:
|
||||
|
||||
```python
|
||||
import os
|
||||
import deepeval
|
||||
|
||||
# Load API key from environment variable for security
|
||||
api_key = os.environ.get("DEEPEVAL_API_KEY")
|
||||
deepeval.login(api_key)
|
||||
|
||||
```
|
||||
|
||||
Use the new callback handler:
|
||||
|
||||
```python
|
||||
from deepeval.integrations.langchain import CallbackHandler
|
||||
|
||||
handler = CallbackHandler(
|
||||
name="My Trace",
|
||||
tags=["production", "v1"],
|
||||
metadata={"experiment": "A/B"},
|
||||
thread_id="thread-123",
|
||||
user_id="user-456"
|
||||
)
|
||||
```
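Then attach the handler to any runnable through the standard `config={"callbacks": [...]}` mechanism. A minimal sketch, assuming a `langchain-openai` chat model (any LangChain LLM works the same way):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# Every LLM call made during this invocation is traced by the handler.
response = llm.invoke(
    "Summarize the plot of Hamlet in one sentence.",
    config={"callbacks": [handler]},
)
print(response.content)
```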
|
||||
|
||||
See the [full example](/docs/integrations/callbacks/confident).
|
||||
@@ -36,7 +36,7 @@ For end-to-end usage check out

## Additional Resources

- [LangChain Docling integration GitHub](https://github.com/docling-project/docling-langchain)
- [LangChain Docling integration GitHub](https://github.com/DS4SD/docling-langchain)
- [LangChain Docling integration PyPI package](https://pypi.org/project/langchain-docling/)
- [Docling GitHub](https://github.com/docling-project/docling)
- [Docling docs](https://docling-project.github.io/docling/)
- [Docling GitHub](https://github.com/DS4SD/docling)
- [Docling docs](https://ds4sd.github.io/docling/)
@@ -10,7 +10,7 @@

Install the Python SDK:

```bash
pip install firecrawl-py
pip install firecrawl-py==0.0.20
```

## Document loader
@@ -1,129 +0,0 @@

# Bigtable

Bigtable is a scalable, fully managed key-value and wide-column store ideal for fast access to structured, semi-structured, or unstructured data. This page provides an overview of Bigtable's LangChain integrations.

**Client Library Documentation:** [cloud.google.com/python/docs/reference/langchain-google-bigtable/latest](https://cloud.google.com/python/docs/reference/langchain-google-bigtable/latest)

**Product Documentation:** [cloud.google.com/bigtable](https://cloud.google.com/bigtable)

## Quick Start

To use this library, you first need to:

1. Select or create a Cloud Platform project.
2. Enable billing for your project.
3. Enable the Google Cloud Bigtable API.
4. Set up Authentication.

## Installation

The main package for this integration is `langchain-google-bigtable`.

```bash
pip install -U langchain-google-bigtable
```

## Integrations

The `langchain-google-bigtable` package provides the following integrations:

### Vector Store

With `BigtableVectorStore`, you can store documents and their vector embeddings to find the most similar or relevant information in your database.

* **Full `VectorStore` Implementation:** Supports all methods from the LangChain `VectorStore` abstract class.
* **Async/Sync Support:** All methods are available in both asynchronous and synchronous versions.
* **Metadata Filtering:** Powerful filtering on metadata fields, including logical AND/OR combinations.
* **Multiple Distance Strategies:** Supports both Cosine and Euclidean distance for similarity search.
* **Customizable Storage:** Full control over how content, embeddings, and metadata are stored in Bigtable columns.

```python
from langchain_google_bigtable import BigtableEngine, BigtableVectorStore

# Your embedding service and other configurations
# embedding_service = ...

engine = await BigtableEngine.async_initialize(project_id="your-project-id")
vector_store = await BigtableVectorStore.create(
    engine=engine,
    instance_id="your-instance-id",
    table_id="your-table-id",
    embedding_service=embedding_service,
    collection="your_collection_name",
)
await vector_store.aadd_documents([your_documents])
results = await vector_store.asimilarity_search("your query")
```

Learn more in the [Vector Store how-to guide](https://colab.research.google.com/github/googleapis/langchain-google-bigtable-python/blob/main/docs/vector_store.ipynb).

### Key-value Store

Use `BigtableByteStore` as a persistent, scalable key-value store for caching, session management, or other storage needs. It supports both synchronous and asynchronous operations.

```python
from langchain_google_bigtable import BigtableByteStore

# Initialize the store
store = await BigtableByteStore.create(
    project_id="your-project-id",
    instance_id="your-instance-id",
    table_id="your-table-id",
)

# Set and get values
await store.amset([("key1", b"value1")])
retrieved = await store.amget(["key1"])
```

Learn more in the [Key-value Store how-to guide](https://cloud.google.com/python/docs/reference/langchain-google-bigtable/latest/key-value-store).

### Document Loader

Use the `BigtableLoader` to load data from a Bigtable table and represent it as LangChain `Document` objects.

```python
from langchain_google_bigtable import BigtableLoader

loader = BigtableLoader(
    project_id="your-project-id",
    instance_id="your-instance-id",
    table_id="your-table-name"
)
docs = loader.load()
```

Learn more in the [Document Loader how-to guide](https://cloud.google.com/python/docs/reference/langchain-google-bigtable/latest/document-loader).

### Chat Message History

Use `BigtableChatMessageHistory` to store conversation histories, enabling stateful chains and agents.

```python
from langchain_google_bigtable import BigtableChatMessageHistory

history = BigtableChatMessageHistory(
    project_id="your-project-id",
    instance_id="your-instance-id",
    table_id="your-message-store",
    session_id="user-session-123"
)

history.add_user_message("Hello!")
history.add_ai_message("Hi there!")
```

Learn more in the [Chat Message History how-to guide](https://cloud.google.com/python/docs/reference/langchain-google-bigtable/latest/chat-message-history).

## Contributions

Contributions to this library are welcome. Please see the CONTRIBUTING guide in the [package repo](https://github.com/googleapis/langchain-google-bigtable-python/) for more details.

## License

This project is licensed under the Apache 2.0 License - see the LICENSE file in the [package repo](https://github.com/googleapis/langchain-google-bigtable-python/blob/main/LICENSE) for details.

## Disclaimer

This is not an officially supported Google product.
@@ -929,41 +929,6 @@ from langchain_google_community.gmail.search import GmailSearch
from langchain_google_community.gmail.send_message import GmailSendMessage
```

### MCP Toolbox

[MCP Toolbox](https://github.com/googleapis/genai-toolbox) provides a simple and efficient way to connect to your databases, including those on Google Cloud like [Cloud SQL](https://cloud.google.com/sql/docs) and [AlloyDB](https://cloud.google.com/alloydb/docs/overview). With MCP Toolbox, you can seamlessly integrate your database with LangChain to build powerful, data-driven applications.

#### Installation

To get started, [install the Toolbox server](https://github.com/googleapis/genai-toolbox/releases/).

[Configure](https://googleapis.github.io/genai-toolbox/getting-started/configure/) a `tools.yaml` to define your tools, then run toolbox to start the server:

```bash
toolbox --tools-file "tools.yaml"
```

Then, install the Toolbox client:

```bash
pip install toolbox-langchain
```

#### Getting Started

Here is a quick example of how to use MCP Toolbox to connect to your database:

```python
from toolbox_langchain import ToolboxClient

async with ToolboxClient("http://127.0.0.1:5000") as client:
    tools = client.load_toolset()
```
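
Each loaded tool is a standard LangChain tool, so it can be inspected or handed to an agent. A minimal sketch (the tool names and input schemas depend entirely on your `tools.yaml`):

```python
from toolbox_langchain import ToolboxClient

async with ToolboxClient("http://127.0.0.1:5000") as client:
    tools = client.load_toolset()
    # Inspect what the server exposed; names come from tools.yaml
    for tool in tools:
        print(tool.name, tool.description)
```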

See [usage example and setup instructions](/docs/integrations/tools/toolbox).

### Memory

Store conversation history using Google Cloud databases.

@@ -1,11 +1,18 @@
# DigitalOcean Gradient
# ChatGradient

This will help you get started with DigitalOcean Gradient [chat models](/docs/concepts/chat_models).

## Overview
### Integration details

| Class | Package | Package downloads | Package latest |
| :--- | :--- | :---: | :---: |
| [ChatGradient](https://python.langchain.com/api_reference/langchain-gradient/chat_models/langchain_gradient.chat_models.ChatGradient.html) | [langchain-gradient](https://python.langchain.com/api_reference/langchain-gradient/) |  |  |

## Setup

langchain-gradient uses DigitalOcean's Gradient™ AI Platform.
langchain-gradient uses DigitalOcean Gradient Platform.

Create an account on DigitalOcean, acquire a `DIGITALOCEAN_INFERENCE_KEY` API key from the Gradient Platform, and install the `langchain-gradient` integration package.
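
A rough sketch of what usage looks like once the key is in place (the model name below is illustrative, not a documented default):

```python
import os

from langchain_gradient import ChatGradient

os.environ["DIGITALOCEAN_INFERENCE_KEY"] = "your-api-key"

llm = ChatGradient(model="llama3.3-70b-instruct")  # model name is illustrative
llm.invoke("What is the capital of France?")
```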
@@ -77,7 +77,7 @@ from langchain_ibm import WatsonxRerank

See a [usage example](/docs/integrations/tools/ibm_watsonx).

```python
from langchain_ibm.agent_toolkits.utility import WatsonxToolkit
from langchain_ibm import WatsonxToolkit
```

## DB2
@@ -40,11 +40,11 @@ embeddings.embed_query("What is the meaning of life?")
```

## LLMs

`ModelScopeEndpoint` class exposes LLMs from ModelScope.
`ModelScopeLLM` class exposes LLMs from ModelScope.

```python
from langchain_modelscope import ModelScopeEndpoint
from langchain_modelscope import ModelScopeLLM

llm = ModelScopeEndpoint(model="Qwen/Qwen2.5-Coder-32B-Instruct")
llm = ModelScopeLLM(model="Qwen/Qwen2.5-Coder-32B-Instruct")
llm.invoke("The meaning of life is")
```
@@ -1,4 +1,4 @@
# Oracle Cloud Infrastructure (OCI)
# Oracle Cloud Infrastructure (OCI)

The `LangChain` integrations related to [Oracle Cloud Infrastructure](https://www.oracle.com/artificial-intelligence/).

@@ -11,15 +11,17 @@ The `LangChain` integrations related to [Oracle Cloud Infrastructure](https://ww
To use, you should have the latest `oci` Python SDK and the langchain_community package installed.

```bash
python -m pip install -U langchain-oci
pip install -U oci langchain-community
```

See [chat](/docs/integrations/chat/oci_generative_ai), [complete](/docs/integrations/chat/oci_generative_ai), and [embedding](/docs/integrations/text_embedding/oci_generative_ai) usage examples.
See [chat](/docs/integrations/llms/oci_generative_ai), [complete](/docs/integrations/chat/oci_generative_ai), and [embedding](/docs/integrations/text_embedding/oci_generative_ai) usage examples.

```python
from langchain_oci.chat_models import ChatOCIGenAI
from langchain_community.chat_models import ChatOCIGenAI

from langchain_oci.embeddings import OCIGenAIEmbeddings
from langchain_community.llms import OCIGenAI

from langchain_community.embeddings import OCIGenAIEmbeddings
```
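
A rough usage sketch for the chat model (the model ID, endpoint, and compartment values below are placeholders; consult the linked usage examples for real ones):

```python
from langchain_oci.chat_models import ChatOCIGenAI

llm = ChatOCIGenAI(
    model_id="cohere.command-r-16k",  # placeholder model ID
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    compartment_id="ocid1.compartment.oc1...",  # placeholder OCID
)
llm.invoke("Tell me one fact about Oracle Cloud Infrastructure.")
```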

## OCI Data Science Model Deployment Endpoint
@@ -40,8 +42,8 @@ See [chat](/docs/integrations/chat/oci_data_science) and [complete](/docs/integr

```python
from langchain_oci.chat_models import ChatOCIModelDeployment
from langchain_community.chat_models import ChatOCIModelDeployment

from langchain_oci.llms import OCIModelDeploymentLLM
from langchain_community.llms import OCIModelDeploymentLLM
```
@@ -1,14 +1,14 @@
# Ollama

>[Ollama](https://ollama.com/) allows you to run open-source large language models,
> such as [gpt-oss](https://ollama.com/library/gpt-oss), locally.
> such as [Llama3.1](https://ai.meta.com/blog/meta-llama-3-1/), locally.
>
>`Ollama` bundles model weights, configuration, and data into a single package, defined by a Modelfile.
>It optimizes setup and configuration details, including GPU usage.
>For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).

See [this guide](/docs/how_to/local_llms#ollama) for more details
on how to use `ollama` with LangChain.
See [this guide](/docs/how_to/local_llms) for more details
on how to use `Ollama` with LangChain.

## Installation and Setup
### Ollama installation
@@ -26,7 +26,7 @@ ollama serve

After starting ollama, run `ollama pull <name-of-model>` to download a model from the [Ollama model library](https://ollama.ai/library):

```bash
ollama pull gpt-oss:20b
ollama pull llama3.1
```

- This will download the default tagged version of the model. Typically, the default points to the latest, smallest-sized parameter model.
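
Once a model is pulled, it can be used from LangChain via the `langchain-ollama` package. A minimal sketch, assuming a local Ollama server is running:

```python
from langchain_ollama import ChatOllama

# Talks to the local Ollama server started with `ollama serve`
llm = ChatOllama(model="gpt-oss:20b")
llm.invoke("Say hello in one short sentence.")
```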
@@ -1,31 +0,0 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Recallio\n",
    "\n",
    "[Recallio](https://recallio.ai/) is a powerful API that lets you store, index, and retrieve application “memories” with built-in fact extraction, dynamic summaries, reranked recall, and a full knowledge-graph layer.\n",
    "\n",
    "\n",
    "## Installation\n",
    "\n",
    "```bash\n",
    "pip install langchain-recallio\n",
    "```\n",
    "\n",
    "```python\n",
    "from langchain_recallio.memory import RecallioMemory\n",
    "```"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
@@ -1,26 +0,0 @@
# Scrapeless

[Scrapeless](https://scrapeless.com) offers flexible and feature-rich data acquisition services with extensive parameter customization and multi-format export support.

## Installation and Setup

```bash
pip install langchain-scrapeless
```

You'll need to set up your Scrapeless API key:

```python
import os

os.environ["SCRAPELESS_API_KEY"] = "your-api-key"
```

## Tools

The Scrapeless integration provides several tools; a usage sketch follows the list:

- [ScrapelessDeepSerpGoogleSearchTool](/docs/integrations/tools/scrapeless_scraping_api) - Enables comprehensive extraction of Google SERP data across all result types.
- [ScrapelessDeepSerpGoogleTrendsTool](/docs/integrations/tools/scrapeless_scraping_api) - Retrieves keyword trend data from Google, including popularity over time, regional interest, and related searches.
- [ScrapelessUniversalScrapingTool](/docs/integrations/tools/scrapeless_universal_scraping) - Access and extract data from JS-rendered websites that typically block bots.
- [ScrapelessCrawlerCrawlTool](/docs/integrations/tools/scrapeless_crawl) - Crawl a website and its linked pages to extract comprehensive data.
- [ScrapelessCrawlerScrapeTool](/docs/integrations/tools/scrapeless_crawl) - Extract information from a single webpage.
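
A minimal usage sketch; the import path and input key are assumptions based on the package name and tool purpose, so consult the tool pages above for each tool's exact schema:

```python
import os

from langchain_scrapeless import ScrapelessUniversalScrapingTool

os.environ["SCRAPELESS_API_KEY"] = "your-api-key"

# Scrape a JS-rendered page; the "url" key is assumed from the tool's purpose
tool = ScrapelessUniversalScrapingTool()
result = tool.invoke({"url": "https://example.com"})
print(result)
```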
@@ -1,72 +0,0 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# ScraperAPI\n",
    "\n",
    "[ScraperAPI](https://www.scraperapi.com/) enables data collection from any public website with its web scraping API, without worrying about proxies, browsers, or CAPTCHA handling. [langchain-scraperapi](https://github.com/scraperapi/langchain-scraperapi) wraps this service, making it easy for AI agents to browse the web and scrape data from it.\n",
    "\n",
    "## Installation and Setup\n",
    "\n",
    "- Install the Python package with `pip install langchain-scraperapi`.\n",
    "- Obtain an API key from [ScraperAPI](https://www.scraperapi.com/) and set the environment variable `SCRAPERAPI_API_KEY`.\n",
    "\n",
    "### Tools\n",
    "\n",
    "The package offers 3 tools to scrape any website, get structured Google search results, and get structured Amazon search results, respectively.\n",
    "\n",
    "To import them:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install langchain_scraperapi\n",
    "\n",
    "from langchain_scraperapi.tools import (\n",
    "    ScraperAPIAmazonSearchTool,\n",
    "    ScraperAPIGoogleSearchTool,\n",
    "    ScraperAPITool,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Example use:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tool = ScraperAPITool()\n",
    "\n",
    "result = tool.invoke({\"url\": \"https://example.com\", \"output_format\": \"markdown\"})\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For a more detailed walkthrough of how to use these tools, visit the [official repository](https://github.com/scraperapi/langchain-scraperapi)."
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
Some files were not shown because too many files have changed in this diff.