Mirror of https://github.com/hwchase17/langchain.git
Synced 2026-02-10 19:20:24 +00:00

Compare commits: langchain-... copilot/fi

2 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 0ecdd6a174 | |
| | 940ad63c63 | |

@@ -15,12 +15,12 @@ You may use the button above, or follow these steps to open this repo in a Codes

1. Click **Create codespace on master**.

For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).

## VS Code Dev Containers

[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)

> [!NOTE]
> If you click the link above you will open the main repo (`langchain-ai/langchain`) and *not* your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:

```txt

@@ -4,7 +4,7 @@ services:
    build:
      dockerfile: libs/langchain/dev.Dockerfile
      context: ..
    networks:
      - langchain-network

2  .github/CODE_OF_CONDUCT.md  vendored

@@ -129,4 +129,4 @@ For answers to common questions about this code of conduct, see the FAQ at

[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations

6  .github/CONTRIBUTING.md  vendored

@@ -3,4 +3,8 @@

Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.

To learn how to contribute to LangChain, please follow the [contribution guide here](https://docs.langchain.com/oss/python/contributing).
To learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/).

## New features

For new features, please start a new [discussion](https://forum.langchain.com/), where the maintainers will help with scoping out the necessary changes.

17  .github/ISSUE_TEMPLATE/bug-report.yml  vendored

@@ -1,12 +1,11 @@
name: "\U0001F41B Bug Report"
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the LangChain forum.
labels: ["bug"]
type: bug
body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to file a bug report.

        Use this to report BUGS in LangChain. For usage questions, feature requests and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).

@@ -14,7 +13,9 @@ body:
        if there's another way to solve your problem:

        * [LangChain Forum](https://forum.langchain.com/),
        * [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
        * [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
        * [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
        * [LangChain how-to guides](https://python.langchain.com/docs/how_to/),
        * [API Reference](https://python.langchain.com/api_reference/),
        * [LangChain ChatBot](https://chat.langchain.com/)
        * [GitHub search](https://github.com/langchain-ai/langchain),

@@ -24,7 +25,7 @@ body:
      label: Checked other resources
      description: Please confirm and check all the following options.
      options:
        - label: This is a bug, not a usage question.
        - label: This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
          required: true
        - label: I added a clear and descriptive title that summarizes this issue.
          required: true

@@ -34,8 +35,6 @@ body:
          required: true
        - label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
          required: true
        - label: This is not related to the langchain-community package.
          required: true
        - label: I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
          required: true
        - label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

@@ -51,7 +50,7 @@ body:

        If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.

        **Important!**

        * Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
        * Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.

@@ -59,14 +58,14 @@ body:
        * INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).

      placeholder: |
        The following code:

        ```python
        from langchain_core.runnables import RunnableLambda

        def bad_code(inputs) -> int:
            raise NotImplementedError('For demo purpose')

        chain = RunnableLambda(bad_code)
        chain.invoke('Hello!')
        ```

9  .github/ISSUE_TEMPLATE/config.yml  vendored

@@ -1,9 +1,6 @@
blank_issues_enabled: false
version: 2.1
contact_links:
  - name: 📚 Documentation
    url: https://github.com/langchain-ai/docs/issues/new?template=langchain.yml
    about: Report an issue related to the LangChain documentation
  - name: 💬 LangChain Forum
    url: https://forum.langchain.com/
    about: General community discussions and support
  - name: LangChain Forum
    url: https://forum.langchain.com/
    about: General community discussions, support, and feature requests

59  .github/ISSUE_TEMPLATE/documentation.yml  vendored  Normal file

@@ -0,0 +1,59 @@
name: Documentation
description: Report an issue related to the LangChain documentation.
title: "docs: <Please write a comprehensive title after the 'docs: ' prefix>"
labels: [documentation]

body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to report an issue in the documentation.

        Only report issues with documentation here, explain if there are
        any missing topics or if you found a mistake in the documentation.

        Do **NOT** use this to ask usage questions or to report issues with your code.

        If you have usage questions or need help solving some problem,
        please use the [LangChain Forum](https://forum.langchain.com/).

        If you're in the wrong place, here are some helpful links to find a better
        place to ask your question:

        * [LangChain Forum](https://forum.langchain.com/),
        * [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
        * [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
        * [LangChain how-to guides](https://python.langchain.com/docs/how_to/),
        * [API Reference](https://python.langchain.com/api_reference/),
        * [LangChain ChatBot](https://chat.langchain.com/)
        * [GitHub search](https://github.com/langchain-ai/langchain),
  - type: input
    id: url
    attributes:
      label: URL
      description: URL to documentation
    validations:
      required: false
  - type: checkboxes
    id: checks
    attributes:
      label: Checklist
      description: Please confirm and check all the following options.
      options:
        - label: I added a very descriptive title to this issue.
          required: true
        - label: I included a link to the documentation page I am referring to (if applicable).
          required: true
  - type: textarea
    attributes:
      label: "Issue with current documentation:"
      description: >
        Please make sure to leave a reference to the document/code you're
        referring to. Feel free to include names of classes, functions, methods
        or concepts you'd like to see documented more.
  - type: textarea
    attributes:
      label: "Idea or request for content:"
      description: >
        Please describe as clearly as possible what topics you think are missing
        from the current documentation.

118  .github/ISSUE_TEMPLATE/feature-request.yml  vendored

@@ -1,118 +0,0 @@
name: "✨ Feature Request"
description: Request a new feature or enhancement for LangChain. For questions, please use the LangChain forum.
labels: ["feature request"]
type: feature
body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to request a new feature.

        Use this to request NEW FEATURES or ENHANCEMENTS in LangChain. For bug reports, please use the bug report template. For usage questions and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).

        Relevant links to check before filing a feature request to see if your request has already been made or
        if there's another way to achieve what you want:

        * [LangChain Forum](https://forum.langchain.com/),
        * [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
        * [API Reference](https://python.langchain.com/api_reference/),
        * [LangChain ChatBot](https://chat.langchain.com/)
        * [GitHub search](https://github.com/langchain-ai/langchain),
  - type: checkboxes
    id: checks
    attributes:
      label: Checked other resources
      description: Please confirm and check all the following options.
      options:
        - label: This is a feature request, not a bug report or usage question.
          required: true
        - label: I added a clear and descriptive title that summarizes the feature request.
          required: true
        - label: I used the GitHub search to find a similar feature request and didn't find it.
          required: true
        - label: I checked the LangChain documentation and API reference to see if this feature already exists.
          required: true
        - label: This is not related to the langchain-community package.
          required: true
  - type: textarea
    id: feature-description
    validations:
      required: true
    attributes:
      label: Feature Description
      description: |
        Please provide a clear and concise description of the feature you would like to see added to LangChain.

        What specific functionality are you requesting? Be as detailed as possible.
      placeholder: |
        I would like LangChain to support...

        This feature would allow users to...
  - type: textarea
    id: use-case
    validations:
      required: true
    attributes:
      label: Use Case
      description: |
        Describe the specific use case or problem this feature would solve.

        Why do you need this feature? What problem does it solve for you or other users?
      placeholder: |
        I'm trying to build an application that...

        Currently, I have to work around this by...

        This feature would help me/users to...
  - type: textarea
    id: proposed-solution
    validations:
      required: false
    attributes:
      label: Proposed Solution
      description: |
        If you have ideas about how this feature could be implemented, please describe them here.

        This is optional but can be helpful for maintainers to understand your vision.
      placeholder: |
        I think this could be implemented by...

        The API could look like...

        ```python
        # Example of how the feature might work
        ```
  - type: textarea
    id: alternatives
    validations:
      required: false
    attributes:
      label: Alternatives Considered
      description: |
        Have you considered any alternative solutions or workarounds?

        What other approaches have you tried or considered?
      placeholder: |
        I've tried using...

        Alternative approaches I considered:
        1. ...
        2. ...

        But these don't work because...
  - type: textarea
    id: additional-context
    validations:
      required: false
    attributes:
      label: Additional Context
      description: |
        Add any other context, screenshots, examples, or references that would help explain your feature request.
      placeholder: |
        Related issues: #...

        Similar features in other libraries:
        - ...

        Additional context or examples:
        - ...

7  .github/ISSUE_TEMPLATE/privileged.yml  vendored

@@ -4,7 +4,12 @@ body:
  - type: markdown
    attributes:
      value: |
        If you are not a LangChain maintainer, employee, or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.
        Thanks for your interest in LangChain! 🚀

        If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.

        You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
        or are a regular contributor to LangChain with previous merged pull requests.
  - type: checkboxes
    id: privileged
    attributes:

91  .github/ISSUE_TEMPLATE/task.yml  vendored

@@ -1,91 +0,0 @@
name: "📋 Task"
description: Create a task for project management and tracking by LangChain maintainers. If you are not a maintainer, please use other templates or the forum.
labels: ["task"]
type: task
body:
  - type: markdown
    attributes:
      value: |
        Thanks for creating a task to help organize LangChain development.

        This template is for **maintainer tasks** such as project management, development planning, refactoring, documentation updates, and other organizational work.

        If you are not a LangChain maintainer or were not asked directly by a maintainer to create a task, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead or use the appropriate bug report or feature request templates on the previous page.
  - type: checkboxes
    id: maintainer
    attributes:
      label: Maintainer task
      description: Confirm that you are allowed to create a task here.
      options:
        - label: I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create a task here.
          required: true
  - type: textarea
    id: task-description
    attributes:
      label: Task Description
      description: |
        Provide a clear and detailed description of the task.

        What needs to be done? Be specific about the scope and requirements.
      placeholder: |
        This task involves...

        The goal is to...

        Specific requirements:
        - ...
        - ...
    validations:
      required: true
  - type: textarea
    id: acceptance-criteria
    attributes:
      label: Acceptance Criteria
      description: |
        Define the criteria that must be met for this task to be considered complete.

        What are the specific deliverables or outcomes expected?
      placeholder: |
        This task will be complete when:
        - [ ] ...
        - [ ] ...
        - [ ] ...
    validations:
      required: true
  - type: textarea
    id: context
    attributes:
      label: Context and Background
      description: |
        Provide any relevant context, background information, or links to related issues/PRs.

        Why is this task needed? What problem does it solve?
      placeholder: |
        Background:
        - ...

        Related issues/PRs:
        - #...

        Additional context:
        - ...
    validations:
      required: false
  - type: textarea
    id: dependencies
    attributes:
      label: Dependencies
      description: |
        List any dependencies or blockers for this task.

        Are there other tasks, issues, or external factors that need to be completed first?
      placeholder: |
        This task depends on:
        - [ ] Issue #...
        - [ ] PR #...
        - [ ] External dependency: ...

        Blocked by:
        - ...
    validations:
      required: false

11  .github/PULL_REQUEST_TEMPLATE.md  vendored

@@ -1,5 +1,3 @@
(Replace this entire block of text)

Thank you for contributing to LangChain! Follow these steps to mark your pull request as ready for review. **If any of these steps are not completed, your PR will not be considered for review.**

- [ ] **PR title**: Follows the format: {TYPE}({SCOPE}): {DESCRIPTION}

@@ -11,13 +9,14 @@ Thank you for contributing to LangChain! Follow these steps to mark your pull re
    - feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert, release
  - Allowed `{SCOPE}` values (optional):
    - core, cli, langchain, standard-tests, docs, anthropic, chroma, deepseek, exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant, xai
  - *Note:* the `{DESCRIPTION}` must not start with an uppercase letter.
  - Note: the `{DESCRIPTION}` must not start with an uppercase letter.
  - Once you've written the title, please delete this checklist item; do not include it in the PR.

- [ ] **PR message**: ***Delete this entire checklist*** and replace with
  - **Description:** a description of the change. Include a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) if applicable to a relevant issue.
  - **Issue:** the issue # it fixes, if applicable (e.g. Fixes #123)
  - **Dependencies:** any dependencies required for this change
  - **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!

- [ ] **Add tests and docs**: If you're adding a new integration, you must include:
  1. A test for the integration, preferably unit tests that do not rely on network access,

@@ -27,7 +26,7 @@ Thank you for contributing to LangChain! Follow these steps to mark your pull re

Additional guidelines:

- Most PRs should not touch more than one package.
- Please do not add dependencies to `pyproject.toml` files (even optional ones) unless they are **required** for unit tests.
- Changes should be backwards compatible.
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to `pyproject.toml` files (even optional ones) unless they are **required** for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.

2  .github/actions/people/Dockerfile  vendored

@@ -4,4 +4,4 @@ RUN pip install httpx PyGithub "pydantic==2.0.2" pydantic-settings "pyyaml>=5.3.

COPY ./app /app

CMD ["python", "/app/main.py"]

8  .github/actions/people/action.yml  vendored

@@ -1,13 +1,11 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/action.yml
# TODO: fix this, migrate to new docs repo?

name: "Generate LangChain People"
description: "Generate the data for the LangChain People page"
author: "Jacob Lee <jacob@langchain.dev>"
inputs:
  token:
    description: "User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}"
    description: 'User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}'
    required: true
runs:
  using: "docker"
  image: "Dockerfile"
  using: 'docker'
  image: 'Dockerfile'

24  .github/actions/uv_setup/action.yml  vendored

@@ -1,24 +1,12 @@
# Helper to set up Python and uv with caching
# TODO: https://docs.astral.sh/uv/guides/integration/github/#caching

name: uv-install
description: Set up Python and uv with caching
description: Set up Python and uv

inputs:
  python-version:
    description: Python version, supporting MAJOR.MINOR only
    required: true
  enable-cache:
    description: Enable caching for uv dependencies
    required: false
    default: "true"
  cache-suffix:
    description: Custom cache key suffix for cache invalidation
    required: false
    default: ""
  working-directory:
    description: Working directory for cache glob scoping
    required: false
    default: "**"

env:
  UV_VERSION: "0.5.25"

@@ -27,13 +15,7 @@ runs:
  using: composite
  steps:
    - name: Install uv and set the python version
      uses: astral-sh/setup-uv@v6
      uses: astral-sh/setup-uv@v5
      with:
        version: ${{ env.UV_VERSION }}
        python-version: ${{ inputs.python-version }}
        enable-cache: ${{ inputs.enable-cache }}
        cache-dependency-glob: |
          ${{ inputs.working-directory }}/pyproject.toml
          ${{ inputs.working-directory }}/uv.lock
          ${{ inputs.working-directory }}/requirements*.txt
        cache-suffix: ${{ inputs.cache-suffix }}

80  .github/pr-file-labeler.yml  vendored

@@ -1,80 +0,0 @@
# Label PRs (config)
# Automatically applies labels based on changed files and branch patterns

# Core packages
core:
  - changed-files:
      - any-glob-to-any-file:
          - "libs/core/**/*"

langchain:
  - changed-files:
      - any-glob-to-any-file:
          - "libs/langchain/**/*"
          - "libs/langchain_v1/**/*"

v1:
  - changed-files:
      - any-glob-to-any-file:
          - "libs/langchain_v1/**/*"

cli:
  - changed-files:
      - any-glob-to-any-file:
          - "libs/cli/**/*"

standard-tests:
  - changed-files:
      - any-glob-to-any-file:
          - "libs/standard-tests/**/*"

# Partner integrations
integration:
  - changed-files:
      - any-glob-to-any-file:
          - "libs/partners/**/*"

# Infrastructure and DevOps
infra:
  - changed-files:
      - any-glob-to-any-file:
          - ".github/**/*"
          - "Makefile"
          - ".pre-commit-config.yaml"
          - "scripts/**/*"
          - "docker/**/*"
          - "Dockerfile*"

github_actions:
  - changed-files:
      - any-glob-to-any-file:
          - ".github/workflows/**/*"
          - ".github/actions/**/*"

dependencies:
  - changed-files:
      - any-glob-to-any-file:
          - "**/pyproject.toml"
          - "uv.lock"
          - "**/requirements*.txt"
          - "**/poetry.lock"

# Documentation
documentation:
  - changed-files:
      - any-glob-to-any-file:
          - "docs/**/*"
          - "**/*.md"
          - "**/*.rst"
          - "**/README*"

# Security related changes
security:
  - changed-files:
      - any-glob-to-any-file:
          - "**/*security*"
          - "**/*auth*"
          - "**/*credential*"
          - "**/*secret*"
          - "**/*token*"
          - ".github/workflows/security*"

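The removed config above boils down to one rule: apply a label when any changed file matches one of that label's globs. A minimal Python sketch of that matching logic, as an illustration only (this is not the labeler action's actual code; the `LABEL_GLOBS` dict and `changed_files` list are hypothetical, and `fnmatch` globbing is simpler than the labeler's `**`-aware patterns):

```python
from fnmatch import fnmatch

# Hypothetical subset of the label -> glob mapping shown above
LABEL_GLOBS = {
    "core": ["libs/core/**/*"],
    "infra": [".github/**/*", "Makefile", ".pre-commit-config.yaml"],
    "dependencies": ["**/pyproject.toml", "uv.lock", "**/requirements*.txt"],
}


def labels_for(changed_files: list[str]) -> set[str]:
    """Return every label whose glob list matches at least one changed file."""
    return {
        label
        for label, patterns in LABEL_GLOBS.items()
        for path in changed_files
        if any(fnmatch(path, pattern) for pattern in patterns)
    }


print(labels_for(["libs/core/langchain_core/runnables/base.py", "uv.lock"]))
# e.g. {'core', 'dependencies'} (set order may vary)
```
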
41  .github/pr-title-labeler.yml  vendored

@@ -1,41 +0,0 @@
# PR title labeler config
#
# Labels PRs based on conventional commit patterns in titles
#
# Format: type(scope): description or type!: description (breaking)

add-missing-labels: true
clear-prexisting: false
include-commits: false
include-title: true
label-for-breaking-changes: breaking

label-mapping:
  documentation: ["docs"]
  feature: ["feat"]
  fix: ["fix"]
  infra: ["build", "ci", "chore"]
  integration:
    [
      "anthropic",
      "chroma",
      "deepseek",
      "exa",
      "fireworks",
      "groq",
      "huggingface",
      "mistralai",
      "nomic",
      "ollama",
      "openai",
      "perplexity",
      "prompty",
      "qdrant",
      "xai",
    ]
  linting: ["style"]
  performance: ["perf"]
  refactor: ["refactor"]
  release: ["release"]
  revert: ["revert"]
  tests: ["test"]

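To make the mapping concrete, here is a rough sketch of how a conventional-commit title would resolve to labels under a config like this. It is an illustration only, not the labeler's actual implementation; the regex and the trimmed-down `LABEL_MAPPING` dict are simplifying assumptions:

```python
import re

# Hypothetical, trimmed-down version of the label-mapping above: label -> commit types/scopes
LABEL_MAPPING = {
    "feature": ["feat"],
    "fix": ["fix"],
    "infra": ["build", "ci", "chore"],
    "integration": ["openai", "anthropic", "ollama"],
}
BREAKING_LABEL = "breaking"

# Matches "type(scope)!: description", with scope and "!" optional
TITLE_RE = re.compile(r"^(?P<type>\w+)(?:\((?P<scope>[^)]*)\))?(?P<bang>!)?:\s+\S")


def labels_for_title(title: str) -> set[str]:
    """Map a 'type(scope): description' PR title to labels."""
    match = TITLE_RE.match(title)
    if not match:
        return set()
    labels = {
        label
        for label, keys in LABEL_MAPPING.items()
        if match["type"] in keys or (match["scope"] or "") in keys
    }
    if match["bang"]:  # 'type!: ...' marks a breaking change
        labels.add(BREAKING_LABEL)
    return labels


print(labels_for_title("feat(openai)!: support new responses API"))
# e.g. {'feature', 'integration', 'breaking'} (set order may vary)
```
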
55  .github/scripts/check_diff.py  vendored

@@ -1,30 +1,17 @@
"""Analyze git diffs to determine which directories need to be tested.

Intelligently determines which LangChain packages and directories need to be tested,
linted, or built based on the changes. Handles dependency relationships between
packages, maps file changes to appropriate CI job configurations, and outputs JSON
configurations for GitHub Actions.

- Maps changed files to affected package directories (libs/core, libs/partners/*, etc.)
- Builds dependency graph to include dependent packages when core components change
- Generates test matrix configurations with appropriate Python versions
- Handles special cases for Pydantic version testing and performance benchmarks

Used as part of the check_diffs workflow.
"""

import glob
import json
import os
import sys
from collections import defaultdict
from pathlib import Path
from typing import Dict, List, Set

from pathlib import Path
import tomllib
from get_min_versions import get_min_version_from_toml

from packaging.requirements import Requirement

from get_min_versions import get_min_version_from_toml


LANGCHAIN_DIRS = [
    "libs/core",
    "libs/text-splitters",

@@ -32,7 +19,7 @@ LANGCHAIN_DIRS = [
    "libs/langchain_v1",
]

# When set to True, we are ignoring core dependents
# when set to True, we are ignoring core dependents
# in order to be able to get CI to pass for each individual
# package that depends on core
# e.g. if you touch core, we don't then add textsplitters/etc to CI

@@ -51,7 +38,7 @@ IGNORED_PARTNERS = [
]

PY_312_MAX_PACKAGES = [
    "libs/partners/chroma",  # https://github.com/chroma-core/chroma/issues/4382
]

@@ -64,9 +51,9 @@ def all_package_dirs() -> Set[str]:

def dependents_graph() -> dict:
    """Construct a mapping of package -> dependents

    Done such that we can run tests on all dependents of a package when a change is made.
    """
    """
    Construct a mapping of package -> dependents, such that we can
    run tests on all dependents of a package when a change is made.
    """
    dependents = defaultdict(set)

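The docstring above describes the core idea: invert each package's dependency list so that a change to, say, `langchain-core` also triggers tests for everything that depends on it. A minimal sketch of that inversion, as an illustration rather than the repo's actual `dependents_graph()` (the hardcoded `package_deps` mapping is hypothetical; the real script reads dependencies out of `pyproject.toml` files):

```python
from collections import defaultdict

# Hypothetical package -> direct dependencies mapping
package_deps = {
    "langchain-core": [],
    "langchain-text-splitters": ["langchain-core"],
    "langchain-openai": ["langchain-core"],
    "langchain": ["langchain-core", "langchain-text-splitters"],
}


def invert_to_dependents(deps: dict[str, list[str]]) -> dict[str, set[str]]:
    """Invert package -> dependencies into package -> dependents."""
    dependents: dict[str, set[str]] = defaultdict(set)
    for pkg, requirements in deps.items():
        for requirement in requirements:
            dependents[requirement].add(pkg)
    return dependents


print(sorted(invert_to_dependents(package_deps)["langchain-core"]))
# ['langchain', 'langchain-openai', 'langchain-text-splitters']
```
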
@@ -98,9 +85,9 @@ def dependents_graph() -> dict:
        for depline in extended_deps:
            if depline.startswith("-e "):
                # editable dependency
                assert depline.startswith("-e ../partners/"), (
                    "Extended test deps should only editable install partner packages"
                )
                assert depline.startswith(
                    "-e ../partners/"
                ), "Extended test deps should only editable install partner packages"
                partner = depline.split("partners/")[1]
                dep = f"langchain-{partner}"
            else:

@@ -138,16 +125,15 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
    elif dir_ == "libs/core":
        py_versions = ["3.9", "3.10", "3.11", "3.12", "3.13"]
    # custom logic for specific directories
    elif dir_ == "libs/partners/milvus":
        # milvus doesn't allow 3.12 because they declare deps in funny way
        py_versions = ["3.9", "3.11"]

    elif dir_ in PY_312_MAX_PACKAGES:
        py_versions = ["3.9", "3.12"]

    elif dir_ == "libs/langchain" and job == "extended-tests":
        py_versions = ["3.9", "3.13"]
    elif dir_ == "libs/langchain_v1":
        py_versions = ["3.10", "3.13"]
    elif dir_ in {"libs/cli"}:
        py_versions = ["3.10", "3.13"]

    elif dir_ == ".":
        # unable to install with 3.13 because tokenizers doesn't support 3.13 yet

@@ -285,7 +271,7 @@ if __name__ == "__main__":
            dirs_to_run["extended-test"].add(dir_)
        elif file.startswith("libs/standard-tests"):
            # TODO: update to include all packages that rely on standard-tests (all partner packages)
            # Note: won't run on external repo partners
            # note: won't run on external repo partners
            dirs_to_run["lint"].add("libs/standard-tests")
            dirs_to_run["test"].add("libs/standard-tests")
            dirs_to_run["lint"].add("libs/cli")

@@ -299,7 +285,7 @@ if __name__ == "__main__":
        elif file.startswith("libs/cli"):
            dirs_to_run["lint"].add("libs/cli")
            dirs_to_run["test"].add("libs/cli")

        elif file.startswith("libs/partners"):
            partner_dir = file.split("/")[2]
            if os.path.isdir(f"libs/partners/{partner_dir}") and [

@@ -317,10 +303,7 @@ if __name__ == "__main__":
                f"Unknown lib: {file}. check_diff.py likely needs "
                "an update for this new library!"
            )
        elif file.startswith("docs/") or file in [
            "pyproject.toml",
            "uv.lock",
        ]:  # docs or root uv files
        elif file.startswith("docs/") or file in ["pyproject.toml", "uv.lock"]:  # docs or root uv files
            docs_edited = True
            dirs_to_run["lint"].add(".")

@@ -1,21 +1,19 @@
"""Check that no dependencies allow prereleases unless we're releasing a prerelease."""

import sys

import tomllib

if __name__ == "__main__":
    # Get the TOML file path from the command line argument
    toml_file = sys.argv[1]

    # read toml file
    with open(toml_file, "rb") as file:
        toml_data = tomllib.load(file)

    # See if we're releasing an rc or dev version
    # see if we're releasing an rc
    version = toml_data["project"]["version"]
    releasing_rc = "rc" in version or "dev" in version

    # If not, iterate through dependencies and make sure none allow prereleases
    # if not, iterate through dependencies and make sure none allow prereleases
    if not releasing_rc:
        dependencies = toml_data["project"]["dependencies"]
        for dep_version in dependencies:

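The loop body is cut off by the hunk, but the check it performs is easy to picture: fail if any dependency specifier pins a prerelease. A hedged sketch of one way to express that with `packaging` (an assumption about the shape of the check, not this script's actual code; the sample dependency strings are hypothetical):

```python
# Sketch only: assumes PEP 508 dependency strings like "langchain-core>=0.3.0rc1,<0.4"
import sys

from packaging.requirements import Requirement


def allows_prerelease(dep_string: str) -> bool:
    """Return True if any pinned version in the specifier is itself an rc/dev release."""
    specifier_set = Requirement(dep_string).specifier
    return any("rc" in spec.version or "dev" in spec.version for spec in specifier_set)


sample_deps = ["langchain-core>=0.3.0rc1,<0.4", "pyyaml>=5.3"]
offending = [dep for dep in sample_deps if allows_prerelease(dep)]
if offending:
    sys.exit(f"Prerelease versions pinned in dependencies: {offending}")
```
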
71  .github/scripts/get_min_versions.py  vendored

@@ -1,22 +1,24 @@
"""Get minimum versions of dependencies from a pyproject.toml file."""

import sys
from collections import defaultdict
import sys
from typing import Optional

if sys.version_info >= (3, 11):
    import tomllib
else:
    # For Python 3.10 and below, which doesn't have stdlib tomllib
    # for python 3.10 and below, which doesn't have stdlib tomllib
    import tomli as tomllib

import re
from typing import List

import requests
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.version import Version, parse
from packaging.version import Version


import requests
from packaging.version import parse
from typing import List

import re


MIN_VERSION_LIBS = [
    "langchain-core",

@@ -36,13 +38,14 @@ SKIP_IF_PULL_REQUEST = [


def get_pypi_versions(package_name: str) -> List[str]:
    """Fetch all available versions for a package from PyPI.
    """
    Fetch all available versions for a package from PyPI.

    Args:
        package_name: Name of the package
        package_name (str): Name of the package

    Returns:
        List of all available versions
        List[str]: List of all available versions

    Raises:
        requests.exceptions.RequestException: If PyPI API request fails

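The function body falls outside this hunk; for orientation, fetching the version list typically goes through PyPI's JSON API. A minimal sketch consistent with the docstring (an assumption about the implementation, not necessarily the script's exact code):

```python
# Sketch only: assumes the standard PyPI JSON API at https://pypi.org/pypi/<name>/json
import requests


def get_pypi_versions(package_name: str) -> list[str]:
    """Fetch all available versions for a package from PyPI."""
    response = requests.get(f"https://pypi.org/pypi/{package_name}/json", timeout=30)
    response.raise_for_status()  # raises requests.exceptions.RequestException on failure
    return list(response.json()["releases"].keys())


print(get_pypi_versions("langchain-core")[:3])  # first few published versions
```
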
@@ -55,26 +58,25 @@ def get_pypi_versions(package_name: str) -> List[str]:


def get_minimum_version(package_name: str, spec_string: str) -> Optional[str]:
    """Find the minimum published version that satisfies the given constraints.
    """
    Find the minimum published version that satisfies the given constraints.

    Args:
        package_name: Name of the package
        spec_string: Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")
        package_name (str): Name of the package
        spec_string (str): Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")

    Returns:
        Minimum compatible version or None if no compatible version found
        Optional[str]: Minimum compatible version or None if no compatible version found
    """
    # Rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
    # rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
    spec_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec_string)
    # Rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
    # rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
    for y in range(1, 10):
        spec_string = re.sub(
            rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}", spec_string
        )
    # Rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
        spec_string = re.sub(rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}", spec_string)
    # rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
    for x in range(1, 10):
        spec_string = re.sub(
            rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x + 1}", spec_string
            rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x+1}", spec_string
        )

    spec_set = SpecifierSet(spec_string)

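The caret rewrites above translate Poetry-style `^` constraints into PEP 440 ranges that `SpecifierSet` understands. A small standalone demonstration of the same substitution, reusing the regexes shown in the hunk (packaged into a hypothetical helper `expand_caret` for illustration):

```python
import re

from packaging.specifiers import SpecifierSet


def expand_caret(spec_string: str) -> str:
    """Rewrite ^-style constraints into >=/< ranges, as in the hunk above."""
    spec_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec_string)
    for y in range(1, 10):
        spec_string = re.sub(rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}", spec_string)
    for x in range(1, 10):
        spec_string = re.sub(rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x + 1}", spec_string)
    return spec_string


print(expand_caret("^0.3.15"))  # >=0.3.15,<0.4
print(expand_caret("^1.2.3"))   # >=1.2.3,<2
print("0.3.20" in SpecifierSet(expand_caret("^0.3.15")))  # True
```
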
@@ -154,28 +156,25 @@ def get_min_version_from_toml(


def check_python_version(version_string, constraint_string):
    """Check if the given Python version matches the given constraints.
    """
    Check if the given Python version matches the given constraints.

    Args:
        version_string: A string representing the Python version (e.g. "3.8.5").
        constraint_string: A string representing the package's Python version
            constraints (e.g. ">=3.6, <4.0").

    Returns:
        True if the version matches the constraints
    :param version_string: A string representing the Python version (e.g. "3.8.5").
    :param constraint_string: A string representing the package's Python version constraints (e.g. ">=3.6, <4.0").
    :return: True if the version matches the constraints, False otherwise.
    """

    # Rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
    # rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
    constraint_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", constraint_string)
    # Rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
    # rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
    for y in range(1, 10):
        constraint_string = re.sub(
            rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}.0", constraint_string
            rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}.0", constraint_string
        )
    # Rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
    # rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
    for x in range(1, 10):
        constraint_string = re.sub(
            rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x + 1}.0.0", constraint_string
            rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x+1}.0.0", constraint_string
        )

    try:

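As a quick usage check against the docstring's own example values (a sketch that assumes the function returns a boolean, as documented):

```python
# Example values taken from the docstring above
print(check_python_version("3.8.5", ">=3.6, <4.0"))  # True: 3.8.5 falls inside the range
print(check_python_version("3.8.5", ">=3.9"))        # False: below the minimum
```
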
67  .github/scripts/prep_api_docs_build.py  vendored

@@ -1,19 +1,15 @@
#!/usr/bin/env python
"""Sync libraries from various repositories into this monorepo.

Moves cloned partner packages into libs/partners structure.
"""
"""Script to sync libraries from various repositories into the main langchain repository."""

import os
import shutil
from pathlib import Path
from typing import Any, Dict

import yaml
from pathlib import Path
from typing import Dict, Any


def load_packages_yaml() -> Dict[str, Any]:
    """Load and parse packages.yml."""
    """Load and parse the packages.yml file."""
    with open("langchain/libs/packages.yml", "r") as f:
        return yaml.safe_load(f)

@@ -32,6 +28,7 @@ def get_target_dir(package_name: str) -> Path:

def clean_target_directories(packages: list) -> None:
    """Remove old directories that will be replaced."""
    for package in packages:
        target_dir = get_target_dir(package["name"])
        if target_dir.exists():
            print(f"Removing {target_dir}")

@@ -41,6 +38,7 @@ def clean_target_directories(packages: list) -> None:

def move_libraries(packages: list) -> None:
    """Move libraries from their source locations to the target directories."""
    for package in packages:
        repo_name = package["repo"].split("/")[1]
        source_path = package["path"]
        target_dir = get_target_dir(package["name"])

@@ -64,46 +62,31 @@ def move_libraries(packages: list) -> None:


def main():
    """Orchestrate the library sync process."""
    """Main function to orchestrate the library sync process."""
    try:
        # Load packages configuration
        package_yaml = load_packages_yaml()

        # Clean/empty target directories in preparation for moving new ones
        #
        # Only for packages in the langchain-ai org or explicitly included via
        # include_in_api_ref, excluding 'langchain' itself and 'langchain-ai21'
        clean_target_directories(
            [
                p
                for p in package_yaml["packages"]
                if (
                    p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref")
                )
                and p["repo"] != "langchain-ai/langchain"
                and p["name"]
                != "langchain-ai21"  # Skip AI21 due to dependency conflicts
            ]
        )
        # Clean target directories
        clean_target_directories([
            p
            for p in package_yaml["packages"]
            if (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
            and p["repo"] != "langchain-ai/langchain"
            and p["name"] != "langchain-ai21"  # Skip AI21 due to dependency conflicts
        ])

        # Move cloned libraries to their new locations, only for packages in the
        # langchain-ai org or explicitly included via include_in_api_ref,
        # excluding 'langchain' itself and 'langchain-ai21'
        move_libraries(
            [
                p
                for p in package_yaml["packages"]
                if not p.get("disabled", False)
                and (
                    p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref")
                )
                and p["repo"] != "langchain-ai/langchain"
                and p["name"]
                != "langchain-ai21"  # Skip AI21 due to dependency conflicts
            ]
        )
        # Move libraries to their new locations
        move_libraries([
            p
            for p in package_yaml["packages"]
            if not p.get("disabled", False)
            and (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
            and p["repo"] != "langchain-ai/langchain"
            and p["name"] != "langchain-ai21"  # Skip AI21 due to dependency conflicts
        ])

        # Delete partner packages without a pyproject.toml
        # Delete ones without a pyproject.toml
        for partner in Path("langchain/libs/partners").iterdir():
            if partner.is_dir() and not (partner / "pyproject.toml").exists():
                print(f"Removing {partner} as it does not have a pyproject.toml")

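Both formatting variants of `main()` above apply the same selection rule before cleaning and moving. Factored out as a standalone predicate for readability (an illustrative refactor, not code that exists in the script; `should_sync` is a hypothetical helper name and `example` is made-up data):

```python
from typing import Any, Dict


def should_sync(p: Dict[str, Any], *, for_move: bool = False) -> bool:
    """Mirror the list-comprehension filter passed to clean_target_directories()/move_libraries()."""
    if for_move and p.get("disabled", False):
        return False  # move_libraries() additionally skips disabled packages
    return bool(
        (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
        and p["repo"] != "langchain-ai/langchain"
        and p["name"] != "langchain-ai21"  # Skip AI21 due to dependency conflicts
    )


example = {"name": "langchain-openai", "repo": "langchain-ai/langchain-openai", "path": "libs/openai"}
print(should_sync(example), should_sync(example, for_move=True))  # True True
```
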
416  .github/tools/git-restore-mtime  vendored

@@ -81,93 +81,56 @@ import time
|
||||
__version__ = "2022.12+dev"
|
||||
|
||||
# Update symlinks only if the platform supports not following them
|
||||
UPDATE_SYMLINKS = bool(os.utime in getattr(os, "supports_follow_symlinks", []))
|
||||
UPDATE_SYMLINKS = bool(os.utime in getattr(os, 'supports_follow_symlinks', []))
|
||||
|
||||
# Call os.path.normpath() only if not in a POSIX platform (Windows)
|
||||
NORMALIZE_PATHS = os.path.sep != "/"
|
||||
NORMALIZE_PATHS = (os.path.sep != '/')
|
||||
|
||||
# How many files to process in each batch when re-trying merge commits
|
||||
STEPMISSING = 100
|
||||
|
||||
# (Extra) keywords for the os.utime() call performed by touch()
|
||||
UTIME_KWS = {} if not UPDATE_SYMLINKS else {"follow_symlinks": False}
|
||||
UTIME_KWS = {} if not UPDATE_SYMLINKS else {'follow_symlinks': False}
|
||||
|
||||
|
||||
# Command-line interface ######################################################
|
||||
|
||||
|
||||
def parse_args():
|
||||
parser = argparse.ArgumentParser(description=__doc__.split("\n---")[0])
|
||||
parser = argparse.ArgumentParser(
|
||||
description=__doc__.split('\n---')[0])
|
||||
|
||||
group = parser.add_mutually_exclusive_group()
|
||||
group.add_argument(
|
||||
"--quiet",
|
||||
"-q",
|
||||
dest="loglevel",
|
||||
action="store_const",
|
||||
const=logging.WARNING,
|
||||
default=logging.INFO,
|
||||
help="Suppress informative messages and summary statistics.",
|
||||
)
|
||||
group.add_argument(
|
||||
"--verbose",
|
||||
"-v",
|
||||
action="count",
|
||||
help="""
|
||||
group.add_argument('--quiet', '-q', dest='loglevel',
|
||||
action="store_const", const=logging.WARNING, default=logging.INFO,
|
||||
help="Suppress informative messages and summary statistics.")
|
||||
group.add_argument('--verbose', '-v', action="count", help="""
|
||||
Print additional information for each processed file.
|
||||
Specify twice to further increase verbosity.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--cwd",
|
||||
"-C",
|
||||
metavar="DIRECTORY",
|
||||
help="""
|
||||
parser.add_argument('--cwd', '-C', metavar="DIRECTORY", help="""
|
||||
Run as if %(prog)s was started in directory %(metavar)s.
|
||||
This affects how --work-tree, --git-dir and PATHSPEC arguments are handled.
|
||||
See 'man 1 git' or 'git --help' for more information.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--git-dir",
|
||||
dest="gitdir",
|
||||
metavar="GITDIR",
|
||||
help="""
|
||||
parser.add_argument('--git-dir', dest='gitdir', metavar="GITDIR", help="""
|
||||
Path to the git repository, by default auto-discovered by searching
|
||||
the current directory and its parents for a .git/ subdirectory.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--work-tree",
|
||||
dest="workdir",
|
||||
metavar="WORKTREE",
|
||||
help="""
|
||||
parser.add_argument('--work-tree', dest='workdir', metavar="WORKTREE", help="""
|
||||
Path to the work tree root, by default the parent of GITDIR if it's
|
||||
automatically discovered, or the current directory if GITDIR is set.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--force",
|
||||
"-f",
|
||||
default=False,
|
||||
action="store_true",
|
||||
help="""
|
||||
parser.add_argument('--force', '-f', default=False, action="store_true", help="""
|
||||
Force updating files with uncommitted modifications.
|
||||
Untracked files and uncommitted deletions, renames and additions are
|
||||
always ignored.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--merge",
|
||||
"-m",
|
||||
default=False,
|
||||
action="store_true",
|
||||
help="""
|
||||
parser.add_argument('--merge', '-m', default=False, action="store_true", help="""
|
||||
Include merge commits.
|
||||
Leads to more recent times and more files per commit, thus with the same
|
||||
time, which may or may not be what you want.
|
||||
@@ -175,130 +138,71 @@ def parse_args():
|
||||
are found sooner, which can improve performance, sometimes substantially.
|
||||
But as merge commits are usually huge, processing them may also take longer.
|
||||
By default, merge commits are only used for files missing from regular commits.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--first-parent",
|
||||
default=False,
|
||||
action="store_true",
|
||||
help="""
|
||||
parser.add_argument('--first-parent', default=False, action="store_true", help="""
|
||||
Consider only the first parent, the "main branch", when evaluating merge commits.
|
||||
Only effective when merge commits are processed, either when --merge is
|
||||
used or when finding missing files after the first regular log search.
|
||||
See --skip-missing.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--skip-missing",
|
||||
"-s",
|
||||
dest="missing",
|
||||
default=True,
|
||||
action="store_false",
|
||||
help="""
|
||||
parser.add_argument('--skip-missing', '-s', dest="missing", default=True,
|
||||
action="store_false", help="""
|
||||
Do not try to find missing files.
|
||||
If merge commits were not evaluated with --merge and some files were
|
||||
not found in regular commits, by default %(prog)s searches for these
|
||||
files again in the merge commits.
|
||||
This option disables this retry, so files found only in merge commits
|
||||
will not have their timestamp updated.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--no-directories",
|
||||
"-D",
|
||||
dest="dirs",
|
||||
default=True,
|
||||
action="store_false",
|
||||
help="""
|
||||
parser.add_argument('--no-directories', '-D', dest='dirs', default=True,
|
||||
action="store_false", help="""
|
||||
Do not update directory timestamps.
|
||||
By default, use the time of its most recently created, renamed or deleted file.
|
||||
Note that just modifying a file will NOT update its directory time.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--test",
|
||||
"-t",
|
||||
default=False,
|
||||
action="store_true",
|
||||
help="Test run: do not actually update any file timestamp.",
|
||||
)
|
||||
parser.add_argument('--test', '-t', default=False, action="store_true",
|
||||
help="Test run: do not actually update any file timestamp.")
|
||||
|
||||
parser.add_argument(
|
||||
"--commit-time",
|
||||
"-c",
|
||||
dest="commit_time",
|
||||
default=False,
|
||||
action="store_true",
|
||||
help="Use commit time instead of author time.",
|
||||
)
|
||||
parser.add_argument('--commit-time', '-c', dest='commit_time', default=False,
|
||||
action='store_true', help="Use commit time instead of author time.")
|
||||
|
||||
parser.add_argument(
|
||||
"--oldest-time",
|
||||
"-o",
|
||||
dest="reverse_order",
|
||||
default=False,
|
||||
action="store_true",
|
||||
help="""
|
||||
parser.add_argument('--oldest-time', '-o', dest='reverse_order', default=False,
|
||||
action='store_true', help="""
|
||||
Update times based on the oldest, instead of the most recent commit of a file.
|
||||
This reverses the order in which the git log is processed to emulate a
|
||||
file "creation" date. Note this will be inaccurate for files deleted and
|
||||
re-created at later dates.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--skip-older-than",
|
||||
metavar="SECONDS",
|
||||
type=int,
|
||||
help="""
|
||||
parser.add_argument('--skip-older-than', metavar='SECONDS', type=int, help="""
|
||||
Ignore files that are currently older than %(metavar)s.
|
||||
Useful in workflows that assume such files already have a correct timestamp,
|
||||
as it may improve performance by processing fewer files.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--skip-older-than-commit",
|
||||
"-N",
|
||||
default=False,
|
||||
action="store_true",
|
||||
help="""
|
||||
parser.add_argument('--skip-older-than-commit', '-N', default=False,
|
||||
action='store_true', help="""
|
||||
Ignore files older than the timestamp it would be updated to.
|
||||
Such files may be considered "original", likely in the author's repository.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--unique-times",
|
||||
default=False,
|
||||
action="store_true",
|
||||
help="""
|
||||
parser.add_argument('--unique-times', default=False, action="store_true", help="""
|
||||
Set the microseconds to a unique value per commit.
|
||||
Allows telling apart changes that would otherwise have identical timestamps,
|
||||
as git's time accuracy is in seconds.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"pathspec",
|
||||
nargs="*",
|
||||
metavar="PATHSPEC",
|
||||
help="""
|
||||
parser.add_argument('pathspec', nargs='*', metavar='PATHSPEC', help="""
|
||||
Only modify paths matching %(metavar)s, relative to current directory.
|
||||
By default, update all but untracked files and submodules.
|
||||
""",
|
||||
)
|
||||
""")
|
||||
|
||||
parser.add_argument(
|
||||
"--version",
|
||||
"-V",
|
||||
action="version",
|
||||
version="%(prog)s version {version}".format(version=get_version()),
|
||||
)
|
||||
parser.add_argument('--version', '-V', action='version',
|
||||
version='%(prog)s version {version}'.format(version=get_version()))
|
||||
|
||||
args_ = parser.parse_args()
|
||||
if args_.verbose:
|
||||
@@ -308,18 +212,17 @@ def parse_args():
|
||||
|
||||
|
||||
def get_version(version=__version__):
|
||||
if not version.endswith("+dev"):
|
||||
if not version.endswith('+dev'):
|
||||
return version
|
||||
try:
|
||||
cwd = os.path.dirname(os.path.realpath(__file__))
|
||||
return Git(cwd=cwd, errors=False).describe().lstrip("v")
|
||||
return Git(cwd=cwd, errors=False).describe().lstrip('v')
|
||||
except Git.Error:
|
||||
return "-".join((version, "unknown"))
|
||||
return '-'.join((version, "unknown"))
|
||||
|
||||
|
||||
# Helper functions ############################################################
|
||||
|
||||
|
||||
def setup_logging():
|
||||
"""Add TRACE logging level and corresponding method, return the root logger"""
|
||||
logging.TRACE = TRACE = logging.DEBUG // 2
|
||||
@@ -352,13 +255,11 @@ def normalize(path):
|
||||
if path and path[0] == '"':
|
||||
# Python 2: path = path[1:-1].decode("string-escape")
|
||||
# Python 3: https://stackoverflow.com/a/46650050/624066
|
||||
path = (
|
||||
path[1:-1] # Remove enclosing double quotes
|
||||
.encode("latin1") # Convert to bytes, required by 'unicode-escape'
|
||||
.decode("unicode-escape") # Perform the actual octal-escaping decode
|
||||
.encode("latin1") # 1:1 mapping to bytes, UTF-8 encoded
|
||||
.decode("utf8", "surrogateescape")
|
||||
) # Decode from UTF-8
|
||||
path = (path[1:-1] # Remove enclosing double quotes
|
||||
.encode('latin1') # Convert to bytes, required by 'unicode-escape'
|
||||
.decode('unicode-escape') # Perform the actual octal-escaping decode
|
||||
.encode('latin1') # 1:1 mapping to bytes, UTF-8 encoded
|
||||
.decode('utf8', 'surrogateescape')) # Decode from UTF-8
|
||||
if NORMALIZE_PATHS:
|
||||
# Make sure the slash matches the OS; for Windows we need a backslash
|
||||
path = os.path.normpath(path)
|
||||
@@ -381,12 +282,12 @@ def touch_ns(path, mtime_ns):
|
||||
|
||||
def isodate(secs: int):
|
||||
# time.localtime() accepts floats, but discards fractional part
|
||||
return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(secs))
|
||||
return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(secs))
|
||||
|
||||
|
||||
def isodate_ns(ns: int):
|
||||
# for integers fromtimestamp() is equivalent and ~16% slower than isodate()
|
||||
return datetime.datetime.fromtimestamp(ns / 1000000000).isoformat(sep=" ")
|
||||
return datetime.datetime.fromtimestamp(ns / 1000000000).isoformat(sep=' ')
|
||||
|
||||
|
||||
def get_mtime_ns(secs: int, idx: int):
|
||||
@@ -404,49 +305,35 @@ def get_mtime_path(path):
|
||||
|
||||
# Git class and parse_log(), the heart of the script ##########################
|
||||
|
||||
|
||||
class Git:
|
||||
def __init__(self, workdir=None, gitdir=None, cwd=None, errors=True):
|
||||
self.gitcmd = ["git"]
|
||||
self.gitcmd = ['git']
|
||||
self.errors = errors
|
||||
self._proc = None
|
||||
if workdir:
|
||||
self.gitcmd.extend(("--work-tree", workdir))
|
||||
if gitdir:
|
||||
self.gitcmd.extend(("--git-dir", gitdir))
|
||||
if cwd:
|
||||
self.gitcmd.extend(("-C", cwd))
|
||||
if workdir: self.gitcmd.extend(('--work-tree', workdir))
|
||||
if gitdir: self.gitcmd.extend(('--git-dir', gitdir))
|
||||
if cwd: self.gitcmd.extend(('-C', cwd))
|
||||
self.workdir, self.gitdir = self._get_repo_dirs()
|
||||
|
||||
def ls_files(self, paths: list = None):
|
||||
return (normalize(_) for _ in self._run("ls-files --full-name", paths))
|
||||
return (normalize(_) for _ in self._run('ls-files --full-name', paths))
|
||||
|
||||
def ls_dirty(self, force=False):
|
||||
return (
|
||||
normalize(_[3:].split(" -> ", 1)[-1])
|
||||
for _ in self._run("status --porcelain")
|
||||
if _[:2] != "??" and (not force or (_[0] in ("R", "A") or _[1] == "D"))
|
||||
)
|
||||
return (normalize(_[3:].split(' -> ', 1)[-1])
|
||||
for _ in self._run('status --porcelain')
|
||||
if _[:2] != '??' and (not force or (_[0] in ('R', 'A')
|
||||
or _[1] == 'D')))
|
||||
|
||||
def log(
|
||||
self,
|
||||
merge=False,
|
||||
first_parent=False,
|
||||
commit_time=False,
|
||||
reverse_order=False,
|
||||
paths: list = None,
|
||||
):
|
||||
cmd = "whatchanged --pretty={}".format("%ct" if commit_time else "%at")
|
||||
if merge:
|
||||
cmd += " -m"
|
||||
if first_parent:
|
||||
cmd += " --first-parent"
|
||||
if reverse_order:
|
||||
cmd += " --reverse"
|
||||
def log(self, merge=False, first_parent=False, commit_time=False,
|
||||
reverse_order=False, paths: list = None):
|
||||
cmd = 'whatchanged --pretty={}'.format('%ct' if commit_time else '%at')
|
||||
if merge: cmd += ' -m'
|
||||
if first_parent: cmd += ' --first-parent'
|
||||
if reverse_order: cmd += ' --reverse'
|
||||
return self._run(cmd, paths)
|
||||
|
||||
def describe(self):
|
||||
return self._run("describe --tags", check=True)[0]
|
||||
return self._run('describe --tags', check=True)[0]
|
||||
|
||||
def terminate(self):
|
||||
if self._proc is None:
|
||||
@@ -458,22 +345,18 @@ class Git:
|
||||
pass
|
||||
|
||||
def _get_repo_dirs(self):
|
||||
return (
|
||||
os.path.normpath(_)
|
||||
for _ in self._run(
|
||||
"rev-parse --show-toplevel --absolute-git-dir", check=True
|
||||
)
|
||||
)
|
||||
return (os.path.normpath(_) for _ in
|
||||
self._run('rev-parse --show-toplevel --absolute-git-dir', check=True))
|
||||
|
||||
def _run(self, cmdstr: str, paths: list = None, output=True, check=False):
|
||||
cmdlist = self.gitcmd + shlex.split(cmdstr)
|
||||
if paths:
|
||||
cmdlist.append("--")
|
||||
cmdlist.append('--')
|
||||
cmdlist.extend(paths)
|
||||
popen_args = dict(universal_newlines=True, encoding="utf8")
|
||||
popen_args = dict(universal_newlines=True, encoding='utf8')
|
||||
if not self.errors:
|
||||
popen_args["stderr"] = subprocess.DEVNULL
|
||||
log.trace("Executing: %s", " ".join(cmdlist))
|
||||
popen_args['stderr'] = subprocess.DEVNULL
|
||||
log.trace("Executing: %s", ' '.join(cmdlist))
|
||||
if not output:
|
||||
return subprocess.call(cmdlist, **popen_args)
|
||||
if check:
|
||||
@@ -496,26 +379,30 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
|
||||
mtime = 0
|
||||
datestr = isodate(0)
|
||||
for line in git.log(
|
||||
merge, args.first_parent, args.commit_time, args.reverse_order, filterlist
|
||||
merge,
|
||||
args.first_parent,
|
||||
args.commit_time,
|
||||
args.reverse_order,
|
||||
filterlist
|
||||
):
|
||||
stats["loglines"] += 1
|
||||
stats['loglines'] += 1
|
||||
|
||||
# Blank line between Date and list of files
|
||||
if not line:
|
||||
continue
|
||||
|
||||
# Date line
|
||||
if line[0] != ":": # Faster than `not line.startswith(':')`
|
||||
stats["commits"] += 1
|
||||
if line[0] != ':': # Faster than `not line.startswith(':')`
|
||||
stats['commits'] += 1
|
||||
mtime = int(line)
|
||||
if args.unique_times:
|
||||
mtime = get_mtime_ns(mtime, stats["commits"])
|
||||
mtime = get_mtime_ns(mtime, stats['commits'])
|
||||
if args.debug:
|
||||
datestr = isodate(mtime)
|
||||
continue
|
||||
|
||||
# File line: three tokens if it describes a renaming, otherwise two
|
||||
tokens = line.split("\t")
|
||||
tokens = line.split('\t')
|
||||
|
||||
# Possible statuses:
|
||||
# M: Modified (content changed)
|
||||
@@ -524,7 +411,7 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
|
||||
# T: Type changed: to/from regular file, symlinks, submodules
|
||||
# R099: Renamed (moved), with % of unchanged content. 100 = pure rename
|
||||
# Not possible in log: C=Copied, U=Unmerged, X=Unknown, B=pairing Broken
|
||||
status = tokens[0].split(" ")[-1]
|
||||
status = tokens[0].split(' ')[-1]
|
||||
file = tokens[-1]
|
||||
|
||||
# Handles non-ASCII chars and OS path separator
|
||||
@@ -532,76 +419,56 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
|
||||
|
||||
def do_file():
|
||||
if args.skip_older_than_commit and get_mtime_path(file) <= mtime:
|
||||
stats["skip"] += 1
|
||||
stats['skip'] += 1
|
||||
return
|
||||
if args.debug:
|
||||
log.debug(
|
||||
"%d\t%d\t%d\t%s\t%s",
|
||||
stats["loglines"],
|
||||
stats["commits"],
|
||||
stats["files"],
|
||||
datestr,
|
||||
file,
|
||||
)
|
||||
log.debug("%d\t%d\t%d\t%s\t%s",
|
||||
stats['loglines'], stats['commits'], stats['files'],
|
||||
datestr, file)
|
||||
try:
|
||||
touch(os.path.join(git.workdir, file), mtime)
|
||||
stats["touches"] += 1
|
||||
stats['touches'] += 1
|
||||
except Exception as e:
|
||||
log.error("ERROR: %s: %s", e, file)
|
||||
stats["errors"] += 1
|
||||
stats['errors'] += 1
|
||||
|
||||
def do_dir():
|
||||
if args.debug:
|
||||
log.debug(
|
||||
"%d\t%d\t-\t%s\t%s",
|
||||
stats["loglines"],
|
||||
stats["commits"],
|
||||
datestr,
|
||||
"{}/".format(dirname or "."),
|
||||
)
|
||||
log.debug("%d\t%d\t-\t%s\t%s",
|
||||
stats['loglines'], stats['commits'],
|
||||
datestr, "{}/".format(dirname or '.'))
|
||||
try:
|
||||
touch(os.path.join(git.workdir, dirname), mtime)
|
||||
stats["dirtouches"] += 1
|
||||
stats['dirtouches'] += 1
|
||||
except Exception as e:
|
||||
log.error("ERROR: %s: %s", e, dirname)
|
||||
stats["direrrors"] += 1
|
||||
stats['direrrors'] += 1
|
||||
|
||||
if file in filelist:
|
||||
stats["files"] -= 1
|
||||
stats['files'] -= 1
|
||||
filelist.remove(file)
|
||||
do_file()
|
||||
|
||||
if args.dirs and status in ("A", "D"):
|
||||
if args.dirs and status in ('A', 'D'):
|
||||
dirname = os.path.dirname(file)
|
||||
if dirname in dirlist:
|
||||
dirlist.remove(dirname)
|
||||
do_dir()
|
||||
|
||||
# All files done?
|
||||
if not stats["files"]:
|
||||
if not stats['files']:
|
||||
git.terminate()
|
||||
return
|
||||
|
||||
|
||||
# Main Logic ##################################################################
|
||||
|
||||
|
||||
def main():
|
||||
start = time.time() # yes, Wall time. CPU time is not realistic for users.
|
||||
stats = {
|
||||
_: 0
|
||||
for _ in (
|
||||
"loglines",
|
||||
"commits",
|
||||
"touches",
|
||||
"skip",
|
||||
"errors",
|
||||
"dirtouches",
|
||||
"direrrors",
|
||||
)
|
||||
}
|
||||
stats = {_: 0 for _ in ('loglines', 'commits', 'touches', 'skip', 'errors',
|
||||
'dirtouches', 'direrrors')}
|
||||
|
||||
logging.basicConfig(level=args.loglevel, format="%(message)s")
|
||||
logging.basicConfig(level=args.loglevel, format='%(message)s')
|
||||
log.trace("Arguments: %s", args)
|
||||
|
||||
# First things first: Where and Who are we?
|
||||
@@ -632,16 +499,13 @@ def main():
|
||||
|
||||
# Symlink (to file, to dir or broken - git handles the same way)
|
||||
if not UPDATE_SYMLINKS and os.path.islink(fullpath):
|
||||
log.warning(
|
||||
"WARNING: Skipping symlink, no OS support for updates: %s", path
|
||||
)
|
||||
log.warning("WARNING: Skipping symlink, no OS support for updates: %s",
|
||||
path)
|
||||
continue
|
||||
|
||||
# skip files which are older than given threshold
|
||||
if (
|
||||
args.skip_older_than
|
||||
and start - get_mtime_path(fullpath) > args.skip_older_than
|
||||
):
|
||||
if (args.skip_older_than
|
||||
and start - get_mtime_path(fullpath) > args.skip_older_than):
|
||||
continue
|
||||
|
||||
# Always add files relative to worktree root
|
||||
@@ -655,17 +519,15 @@ def main():
|
||||
else:
|
||||
dirty = set(git.ls_dirty())
|
||||
if dirty:
|
||||
log.warning(
|
||||
"WARNING: Modified files in the working directory were ignored."
|
||||
"\nTo include such files, commit your changes or use --force."
|
||||
)
|
||||
log.warning("WARNING: Modified files in the working directory were ignored."
|
||||
"\nTo include such files, commit your changes or use --force.")
|
||||
filelist -= dirty
|
||||
|
||||
# Build dir list to be processed
|
||||
dirlist = set(os.path.dirname(_) for _ in filelist) if args.dirs else set()
|
||||
|
||||
stats["totalfiles"] = stats["files"] = len(filelist)
|
||||
log.info("{0:,} files to be processed in work dir".format(stats["totalfiles"]))
|
||||
stats['totalfiles'] = stats['files'] = len(filelist)
|
||||
log.info("{0:,} files to be processed in work dir".format(stats['totalfiles']))
|
||||
|
||||
if not filelist:
|
||||
# Nothing to do. Exit silently and without errors, just like git does
|
||||
@@ -682,18 +544,10 @@ def main():
|
||||
if args.missing and not args.merge:
|
||||
filterlist = list(filelist)
|
||||
missing = len(filterlist)
|
||||
log.info(
|
||||
"{0:,} files not found in log, trying merge commits".format(missing)
|
||||
)
|
||||
log.info("{0:,} files not found in log, trying merge commits".format(missing))
|
||||
for i in range(0, missing, STEPMISSING):
|
||||
parse_log(
|
||||
filelist,
|
||||
dirlist,
|
||||
stats,
|
||||
git,
|
||||
merge=True,
|
||||
filterlist=filterlist[i : i + STEPMISSING],
|
||||
)
|
||||
parse_log(filelist, dirlist, stats, git,
|
||||
merge=True, filterlist=filterlist[i:i + STEPMISSING])
|
||||
|
||||
# Still missing some?
|
||||
for file in filelist:
|
||||
@@ -702,33 +556,29 @@ def main():
|
||||
# Final statistics
|
||||
# Suggestion: use git-log --before=mtime to brag about skipped log entries
|
||||
def log_info(msg, *a, width=13):
|
||||
ifmt = "{:%d,}" % (width,) # not using 'n' for consistency with ffmt
|
||||
ffmt = "{:%d,.2f}" % (width,)
|
||||
ifmt = '{:%d,}' % (width,) # not using 'n' for consistency with ffmt
|
||||
ffmt = '{:%d,.2f}' % (width,)
|
||||
# %-formatting lacks a thousand separator, must pre-render with .format()
|
||||
log.info(msg.replace("%d", ifmt).replace("%f", ffmt).format(*a))
|
||||
log.info(msg.replace('%d', ifmt).replace('%f', ffmt).format(*a))
|
||||
|
||||
log_info(
|
||||
"Statistics:\n%f seconds\n%d log lines processed\n%d commits evaluated",
|
||||
time.time() - start,
|
||||
stats["loglines"],
|
||||
stats["commits"],
|
||||
)
|
||||
"Statistics:\n"
|
||||
"%f seconds\n"
|
||||
"%d log lines processed\n"
|
||||
"%d commits evaluated",
|
||||
time.time() - start, stats['loglines'], stats['commits'])
|
||||
|
||||
if args.dirs:
|
||||
if stats["direrrors"]:
|
||||
log_info("%d directory update errors", stats["direrrors"])
|
||||
log_info("%d directories updated", stats["dirtouches"])
|
||||
if stats['direrrors']: log_info("%d directory update errors", stats['direrrors'])
|
||||
log_info("%d directories updated", stats['dirtouches'])
|
||||
|
||||
if stats["touches"] != stats["totalfiles"]:
|
||||
log_info("%d files", stats["totalfiles"])
|
||||
if stats["skip"]:
|
||||
log_info("%d files skipped", stats["skip"])
|
||||
if stats["files"]:
|
||||
log_info("%d files missing", stats["files"])
|
||||
if stats["errors"]:
|
||||
log_info("%d file update errors", stats["errors"])
|
||||
if stats['touches'] != stats['totalfiles']:
|
||||
log_info("%d files", stats['totalfiles'])
|
||||
if stats['skip']: log_info("%d files skipped", stats['skip'])
|
||||
if stats['files']: log_info("%d files missing", stats['files'])
|
||||
if stats['errors']: log_info("%d file update errors", stats['errors'])
|
||||
|
||||
log_info("%d files updated", stats["touches"])
|
||||
log_info("%d files updated", stats['touches'])
|
||||
|
||||
if args.test:
|
||||
log.info("TEST RUN - No files modified!")
|
||||
|
||||
.github/workflows/.codespell-exclude (6 changes, vendored, new file)
@@ -0,0 +1,6 @@
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx
self.assertIn(
from trulens_eval import Tru
tru = Tru()
.github/workflows/_compile_integration_test.yml (12 changes, vendored)
@@ -1,11 +1,3 @@
# Validates that a package's integration tests compile without syntax or import errors.
#
# (If an integration test fails to compile, it won't run.)
#
# Called as part of check_diffs.yml workflow
#
# Runs pytest with compile marker to check syntax/imports.

name: '🔗 Compile Integration Tests'

on:
@@ -35,14 +27,12 @@ jobs:
timeout-minutes: 20
name: 'Python ${{ inputs.python-version }}'
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4

- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: compile-integration-tests-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}

- name: '📦 Install Integration Dependencies'
shell: bash
.github/workflows/_integration_test.yml (17 changes, vendored)
@@ -1,12 +1,4 @@

# Runs `make integration_tests` on the specified package.
#
# Manually triggered via workflow_dispatch for testing with real APIs.
#
# Installs integration test dependencies and executes full test suite.

name: '🚀 Integration Tests'
run-name: 'Test ${{ inputs.working-directory }} on Python ${{ inputs.python-version }}'

on:
workflow_dispatch:
@@ -19,7 +11,6 @@ on:
required: true
type: string
description: "Python version to use"
default: "3.11"

permissions:
contents: read
@@ -33,16 +24,14 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: 'Python ${{ inputs.python-version }}'
name: '🚀 Integration Tests (Python ${{ inputs.python-version }})'
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4

- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: integration-tests-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}

- name: '📦 Install Integration Dependencies'
shell: bash
@@ -90,7 +79,7 @@ jobs:
run: |
make integration_tests

- name: 'Ensure testing did not create/modify files'
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
.github/workflows/_lint.yml (36 changes, vendored)
@@ -1,11 +1,6 @@
|
||||
# Runs linting.
|
||||
#
|
||||
# Uses the package's Makefile to run the checks, specifically the
|
||||
# `lint_package` and `lint_tests` targets.
|
||||
#
|
||||
# Called as part of check_diffs.yml workflow.
|
||||
|
||||
name: '🧹 Linting'
|
||||
name: '🧹 Code Linting'
|
||||
# Runs code quality checks using ruff, mypy, and other linting tools
|
||||
# Checks both package code and test code for consistency
|
||||
|
||||
on:
|
||||
workflow_call:
|
||||
@@ -38,16 +33,22 @@ jobs:
|
||||
timeout-minutes: 20
|
||||
steps:
|
||||
- name: '📋 Checkout Code'
|
||||
uses: actions/checkout@v5
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
|
||||
uses: "./.github/actions/uv_setup"
|
||||
with:
|
||||
python-version: ${{ inputs.python-version }}
|
||||
cache-suffix: lint-${{ inputs.working-directory }}
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
|
||||
- name: '📦 Install Lint & Typing Dependencies'
|
||||
# Also installs dev/lint/test/typing dependencies, to ensure we have
|
||||
# type hints for as many of our libraries as possible.
|
||||
# This helps catch errors that require dependencies to be spotted, for example:
|
||||
# https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
|
||||
#
|
||||
# If you change this configuration, make sure to change the `cache-key`
|
||||
# in the `poetry_setup` action above to stop using the old cache.
|
||||
# It doesn't matter how you change it, any change will cause a cache-bust.
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
run: |
|
||||
uv sync --group lint --group typing
|
||||
@@ -57,13 +58,20 @@ jobs:
|
||||
run: |
|
||||
make lint_package
|
||||
|
||||
- name: '📦 Install Test Dependencies (non-partners)'
|
||||
# (For directories NOT starting with libs/partners/)
|
||||
- name: '📦 Install Unit Test Dependencies'
|
||||
# Also installs dev/lint/test/typing dependencies, to ensure we have
|
||||
# type hints for as many of our libraries as possible.
|
||||
# This helps catch errors that require dependencies to be spotted, for example:
|
||||
# https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
|
||||
#
|
||||
# If you change this configuration, make sure to change the `cache-key`
|
||||
# in the `poetry_setup` action above to stop using the old cache.
|
||||
# It doesn't matter how you change it, any change will cause a cache-bust.
|
||||
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
run: |
|
||||
uv sync --inexact --group test
|
||||
- name: '📦 Install Test Dependencies'
|
||||
- name: '📦 Install Unit + Integration Test Dependencies'
|
||||
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
run: |
|
||||
|
||||
.github/workflows/_release.yml (108 changes, vendored)
@@ -1,11 +1,5 @@
|
||||
# Builds and publishes LangChain packages to PyPI.
|
||||
#
|
||||
# Manually triggered, though can be used as a reusable workflow (workflow_call).
|
||||
#
|
||||
# Handles version bumping, building, and publishing to PyPI with authentication.
|
||||
|
||||
name: '🚀 Package Release'
|
||||
run-name: 'Release ${{ inputs.working-directory }} ${{ inputs.release-version }}'
|
||||
run-name: '🚀 Release ${{ inputs.working-directory }} by @${{ github.actor }}'
|
||||
on:
|
||||
workflow_call:
|
||||
inputs:
|
||||
@@ -20,11 +14,6 @@ on:
|
||||
type: string
|
||||
description: "From which folder this pipeline executes"
|
||||
default: 'libs/langchain'
|
||||
release-version:
|
||||
required: true
|
||||
type: string
|
||||
default: '0.1.0'
|
||||
description: "New version of package being released"
|
||||
dangerous-nonmaster-release:
|
||||
required: false
|
||||
type: boolean
|
||||
@@ -49,7 +38,7 @@ jobs:
|
||||
version: ${{ steps.check-version.outputs.version }}
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v5
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Set up Python + uv
|
||||
uses: "./.github/actions/uv_setup"
|
||||
@@ -58,8 +47,8 @@ jobs:
|
||||
|
||||
# We want to keep this build stage *separate* from the release stage,
|
||||
# so that there's no sharing of permissions between them.
|
||||
# (Release stage has trusted publishing and GitHub repo contents write access,
|
||||
#
|
||||
# The release stage has trusted publishing and GitHub repo contents write access,
|
||||
# and we want to keep the scope of that access limited just to the release job.
|
||||
# Otherwise, a malicious `build` step (e.g. via a compromised dependency)
|
||||
# could get access to our GitHub or PyPI credentials.
|
||||
#
|
||||
@@ -98,7 +87,7 @@ jobs:
|
||||
outputs:
|
||||
release-body: ${{ steps.generate-release-body.outputs.release-body }}
|
||||
steps:
|
||||
- uses: actions/checkout@v5
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
repository: langchain-ai/langchain
|
||||
path: langchain
|
||||
@@ -122,7 +111,7 @@ jobs:
|
||||
# Look for the latest release of the same base version
|
||||
REGEX="^$PKG_NAME==$BASE_VERSION\$"
|
||||
PREV_TAG=$(git tag --sort=-creatordate | (grep -P "$REGEX" || true) | head -1)
|
||||
|
||||
|
||||
# If no exact base version match, look for the latest release of any kind
|
||||
if [ -z "$PREV_TAG" ]; then
|
||||
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
|
||||
@@ -133,7 +122,7 @@ jobs:
|
||||
PREV_TAG="$PKG_NAME==${VERSION%.*}.$(( ${VERSION##*.} - 1 ))"; [[ "${VERSION##*.}" -eq 0 ]] && PREV_TAG=""
|
||||
|
||||
# backup case if releasing e.g. 0.3.0, looks up last release
|
||||
# note if last release (chronologically) was e.g. 0.1.47 it will get
|
||||
# note if last release (chronologically) was e.g. 0.1.47 it will get
|
||||
# that instead of the last 0.2 release
|
||||
if [ -z "$PREV_TAG" ]; then
|
||||
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
|
||||
@@ -189,36 +178,13 @@ jobs:
|
||||
needs:
|
||||
- build
|
||||
- release-notes
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
# This permission is used for trusted publishing:
|
||||
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
|
||||
#
|
||||
# Trusted publishing has to also be configured on PyPI for each package:
|
||||
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
|
||||
id-token: write
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v5
|
||||
|
||||
- uses: actions/download-artifact@v5
|
||||
with:
|
||||
name: dist
|
||||
path: ${{ inputs.working-directory }}/dist/
|
||||
|
||||
- name: Publish to test PyPI
|
||||
uses: pypa/gh-action-pypi-publish@release/v1
|
||||
with:
|
||||
packages-dir: ${{ inputs.working-directory }}/dist/
|
||||
verbose: true
|
||||
print-hash: true
|
||||
repository-url: https://test.pypi.org/legacy/
|
||||
# We overwrite any existing distributions with the same name and version.
|
||||
# This is *only for CI use* and is *extremely dangerous* otherwise!
|
||||
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
|
||||
skip-existing: true
|
||||
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
|
||||
attestations: false
|
||||
uses:
|
||||
./.github/workflows/_test_release.yml
|
||||
permissions: write-all
|
||||
with:
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
dangerous-nonmaster-release: ${{ inputs.dangerous-nonmaster-release }}
|
||||
secrets: inherit
|
||||
|
||||
pre-release-checks:
|
||||
needs:
|
||||
@@ -228,7 +194,7 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 20
|
||||
steps:
|
||||
- uses: actions/checkout@v5
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
# We explicitly *don't* set up caching here. This ensures our tests are
|
||||
# maximally sensitive to catching breakage.
|
||||
@@ -249,7 +215,7 @@ jobs:
|
||||
with:
|
||||
python-version: ${{ env.PYTHON_VERSION }}
|
||||
|
||||
- uses: actions/download-artifact@v5
|
||||
- uses: actions/download-artifact@v4
|
||||
with:
|
||||
name: dist
|
||||
path: ${{ inputs.working-directory }}/dist/
|
||||
@@ -294,19 +260,16 @@ jobs:
|
||||
run: |
|
||||
VIRTUAL_ENV=.venv uv pip install dist/*.whl
|
||||
|
||||
- name: Check for prerelease versions
|
||||
# Block release if any dependencies allow prerelease versions
|
||||
# (unless this is itself a prerelease version)
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
run: |
|
||||
uv run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
|
||||
|
||||
- name: Run unit tests
|
||||
run: make tests
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
|
||||
- name: Check for prerelease versions
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
run: |
|
||||
uv run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
|
||||
|
||||
- name: Get minimum versions
|
||||
# Find the minimum published versions that satisfies the given constraints
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
id: min-version
|
||||
run: |
|
||||
@@ -321,8 +284,7 @@ jobs:
|
||||
env:
|
||||
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
|
||||
run: |
|
||||
VIRTUAL_ENV=.venv uv pip install --force-reinstall --editable .
|
||||
VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS
|
||||
VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS --editable .
|
||||
make tests
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
|
||||
@@ -331,7 +293,6 @@ jobs:
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
|
||||
- name: Run integration tests
|
||||
# Uses the Makefile's `integration_tests` target for the specified package
|
||||
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
|
||||
env:
|
||||
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
|
||||
@@ -372,10 +333,7 @@ jobs:
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
|
||||
# Test select published packages against new core
|
||||
# Done when code changes are made to langchain-core
|
||||
test-prior-published-packages-against-new-core:
|
||||
# Installs the new core with old partners: Installs the new unreleased core
|
||||
# alongside the previously published partner packages and runs integration tests
|
||||
needs:
|
||||
- build
|
||||
- release-notes
|
||||
@@ -399,11 +357,10 @@ jobs:
|
||||
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
|
||||
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
|
||||
steps:
|
||||
- uses: actions/checkout@v5
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
# We implement this conditional as Github Actions does not have good support
|
||||
# for conditionally needing steps. https://github.com/actions/runner/issues/491
|
||||
# TODO: this seems to be resolved upstream, so we can probably remove this workaround
|
||||
- name: Check if libs/core
|
||||
run: |
|
||||
if [ "${{ startsWith(inputs.working-directory, 'libs/core') }}" != "true" ]; then
|
||||
@@ -417,7 +374,7 @@ jobs:
|
||||
with:
|
||||
python-version: ${{ env.PYTHON_VERSION }}
|
||||
|
||||
- uses: actions/download-artifact@v5
|
||||
- uses: actions/download-artifact@v4
|
||||
if: startsWith(inputs.working-directory, 'libs/core')
|
||||
with:
|
||||
name: dist
|
||||
@@ -426,12 +383,11 @@ jobs:
|
||||
- name: Test against ${{ matrix.partner }}
|
||||
if: startsWith(inputs.working-directory, 'libs/core')
|
||||
run: |
|
||||
# Identify latest tag, excluding pre-releases
|
||||
# Identify latest tag
|
||||
LATEST_PACKAGE_TAG="$(
|
||||
git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
|
||||
| awk '{print $2}' \
|
||||
| sed 's|refs/tags/||' \
|
||||
| grep -E '==0\.3\.[0-9]+$' \
|
||||
| sort -Vr \
|
||||
| head -n 1
|
||||
)"
|
||||
@@ -458,7 +414,6 @@ jobs:
|
||||
make integration_tests
|
||||
|
||||
publish:
|
||||
# Publishes the package to PyPI
|
||||
needs:
|
||||
- build
|
||||
- release-notes
|
||||
@@ -479,14 +434,14 @@ jobs:
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v5
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Set up Python + uv
|
||||
uses: "./.github/actions/uv_setup"
|
||||
with:
|
||||
python-version: ${{ env.PYTHON_VERSION }}
|
||||
|
||||
- uses: actions/download-artifact@v5
|
||||
- uses: actions/download-artifact@v4
|
||||
with:
|
||||
name: dist
|
||||
path: ${{ inputs.working-directory }}/dist/
|
||||
@@ -501,7 +456,6 @@ jobs:
|
||||
attestations: false
|
||||
|
||||
mark-release:
|
||||
# Marks the GitHub release with the new version tag
|
||||
needs:
|
||||
- build
|
||||
- release-notes
|
||||
@@ -511,7 +465,7 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
# This permission is needed by `ncipollo/release-action` to
|
||||
# create the GitHub release/tag
|
||||
# create the GitHub release.
|
||||
contents: write
|
||||
|
||||
defaults:
|
||||
@@ -519,18 +473,18 @@ jobs:
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v5
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Set up Python + uv
|
||||
uses: "./.github/actions/uv_setup"
|
||||
with:
|
||||
python-version: ${{ env.PYTHON_VERSION }}
|
||||
|
||||
- uses: actions/download-artifact@v5
|
||||
- uses: actions/download-artifact@v4
|
||||
with:
|
||||
name: dist
|
||||
path: ${{ inputs.working-directory }}/dist/
|
||||
|
||||
|
||||
- name: Create Tag
|
||||
uses: ncipollo/release-action@v1
|
||||
with:
|
||||
|
||||
.github/workflows/_test.yml (12 changes, vendored)
@@ -1,7 +1,6 @@
# Runs unit tests with both current and minimum supported dependency versions
# to ensure compatibility across the supported range.

name: '🧪 Unit Testing'
# Runs unit tests with both current and minimum supported dependency versions
# to ensure compatibility across the supported range

on:
workflow_call:
@@ -33,16 +32,13 @@ jobs:
name: 'Python ${{ inputs.python-version }}'
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
uses: actions/checkout@v4

- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
id: setup-python
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}

- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test --dev
@@ -83,4 +79,4 @@ jobs:
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'

.github/workflows/_test_doc_imports.yml (11 changes, vendored)
@@ -1,10 +1,3 @@
# Validates that all import statements in `.ipynb` notebooks are correct and functional.
#
# Called as part of check_diffs.yml.
#
# Installs test dependencies and LangChain packages in editable mode and
# runs check_imports.py.

name: '📑 Documentation Import Testing'

on:
@@ -28,14 +21,12 @@ jobs:
name: '🔍 Check Doc Imports (Python ${{ inputs.python-version }})'
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
uses: actions/checkout@v4

- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-doc-imports-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}

- name: '📦 Install Test Dependencies'
shell: bash
.github/workflows/_test_pydantic.yml (8 changes, vendored)
@@ -1,5 +1,3 @@
# Facilitate unit testing against different Pydantic versions for a provided package.

name: '🐍 Pydantic Version Testing'

on:
@@ -36,14 +34,12 @@ jobs:
name: 'Pydantic ~=${{ inputs.pydantic-version }}'
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
uses: actions/checkout@v4

- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-pydantic-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}

- name: '📦 Install Test Dependencies'
shell: bash
@@ -68,4 +64,4 @@ jobs:

# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
echo "$STATUS" | grep 'nothing to commit, working tree clean'
.github/workflows/_test_release.yml (106 changes, vendored, new file)
@@ -0,0 +1,106 @@
|
||||
name: '🧪 Test Release Package'
|
||||
|
||||
on:
|
||||
workflow_call:
|
||||
inputs:
|
||||
working-directory:
|
||||
required: true
|
||||
type: string
|
||||
description: "From which folder this pipeline executes"
|
||||
dangerous-nonmaster-release:
|
||||
required: false
|
||||
type: boolean
|
||||
default: false
|
||||
description: "Release from a non-master branch (danger!)"
|
||||
|
||||
env:
|
||||
PYTHON_VERSION: "3.11"
|
||||
UV_FROZEN: "true"
|
||||
|
||||
jobs:
|
||||
build:
|
||||
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
|
||||
runs-on: ubuntu-latest
|
||||
|
||||
outputs:
|
||||
pkg-name: ${{ steps.check-version.outputs.pkg-name }}
|
||||
version: ${{ steps.check-version.outputs.version }}
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: '🐍 Set up Python + UV'
|
||||
uses: "./.github/actions/uv_setup"
|
||||
with:
|
||||
python-version: ${{ env.PYTHON_VERSION }}
|
||||
|
||||
# We want to keep this build stage *separate* from the release stage,
|
||||
# so that there's no sharing of permissions between them.
|
||||
# The release stage has trusted publishing and GitHub repo contents write access,
|
||||
# and we want to keep the scope of that access limited just to the release job.
|
||||
# Otherwise, a malicious `build` step (e.g. via a compromised dependency)
|
||||
# could get access to our GitHub or PyPI credentials.
|
||||
#
|
||||
# Per the trusted publishing GitHub Action:
|
||||
# > It is strongly advised to separate jobs for building [...]
|
||||
# > from the publish job.
|
||||
# https://github.com/pypa/gh-action-pypi-publish#non-goals
|
||||
- name: '📦 Build Project for Distribution'
|
||||
run: uv build
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
|
||||
- name: '⬆️ Upload Build Artifacts'
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: test-dist
|
||||
path: ${{ inputs.working-directory }}/dist/
|
||||
|
||||
- name: '🔍 Extract Version Information'
|
||||
id: check-version
|
||||
shell: python
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
run: |
|
||||
import os
|
||||
import tomllib
|
||||
with open("pyproject.toml", "rb") as f:
|
||||
data = tomllib.load(f)
|
||||
pkg_name = data["project"]["name"]
|
||||
version = data["project"]["version"]
|
||||
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
|
||||
f.write(f"pkg-name={pkg_name}\n")
|
||||
f.write(f"version={version}\n")
|
||||
|
||||
publish:
|
||||
needs:
|
||||
- build
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
# This permission is used for trusted publishing:
|
||||
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
|
||||
#
|
||||
# Trusted publishing has to also be configured on PyPI for each package:
|
||||
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
|
||||
id-token: write
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- uses: actions/download-artifact@v4
|
||||
with:
|
||||
name: test-dist
|
||||
path: ${{ inputs.working-directory }}/dist/
|
||||
|
||||
- name: Publish to test PyPI
|
||||
uses: pypa/gh-action-pypi-publish@release/v1
|
||||
with:
|
||||
packages-dir: ${{ inputs.working-directory }}/dist/
|
||||
verbose: true
|
||||
print-hash: true
|
||||
repository-url: https://test.pypi.org/legacy/
|
||||
|
||||
# We overwrite any existing distributions with the same name and version.
|
||||
# This is *only for CI use* and is *extremely dangerous* otherwise!
|
||||
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
|
||||
skip-existing: true
|
||||
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
|
||||
attestations: false
|
||||
.github/workflows/api_doc_build.yml (61 changes, vendored)
@@ -1,19 +1,10 @@
|
||||
# Build the API reference documentation.
|
||||
#
|
||||
# Runs daily. Can also be triggered manually for immediate updates.
|
||||
#
|
||||
# Built HTML pushed to langchain-ai/langchain-api-docs-html.
|
||||
#
|
||||
# Looks for langchain-ai org repos in packages.yml and checks them out.
|
||||
# Calls prep_api_docs_build.py.
|
||||
|
||||
name: '📚 API Docs'
|
||||
run-name: 'Build & Deploy API Reference'
|
||||
name: '📚 API Documentation Build'
|
||||
# Runs daily or can be triggered manually for immediate updates
|
||||
|
||||
on:
|
||||
workflow_dispatch:
|
||||
schedule:
|
||||
- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
|
||||
- cron: '0 13 * * *' # Daily at 1PM UTC
|
||||
env:
|
||||
PYTHON_VERSION: "3.11"
|
||||
|
||||
@@ -25,10 +16,10 @@ jobs:
|
||||
permissions:
|
||||
contents: read
|
||||
steps:
|
||||
- uses: actions/checkout@v5
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
path: langchain
|
||||
- uses: actions/checkout@v5
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
repository: langchain-ai/langchain-api-docs-html
|
||||
path: langchain-api-docs-html
|
||||
@@ -39,8 +30,6 @@ jobs:
|
||||
uses: mikefarah/yq@master
|
||||
with:
|
||||
cmd: |
|
||||
# Extract repos from packages.yml that are in the langchain-ai org
|
||||
# (excluding 'langchain' itself)
|
||||
yq '
|
||||
.packages[]
|
||||
| select(
|
||||
@@ -62,6 +51,7 @@ jobs:
|
||||
run: |
|
||||
# Get unique repositories
|
||||
REPOS=$(echo "$REPOS_UNSORTED" | sort -u)
|
||||
|
||||
# Checkout each unique repository
|
||||
for repo in $REPOS; do
|
||||
# Validate repository format (allow any org with proper format)
|
||||
@@ -69,49 +59,43 @@ jobs:
|
||||
echo "Error: Invalid repository format: $repo"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
|
||||
REPO_NAME=$(echo $repo | cut -d'/' -f2)
|
||||
|
||||
|
||||
# Additional validation for repo name
|
||||
if [[ ! "$REPO_NAME" =~ ^[a-zA-Z0-9_.-]+$ ]]; then
|
||||
echo "Error: Invalid repository name: $REPO_NAME"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Checking out $repo to $REPO_NAME"
|
||||
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
|
||||
done
|
||||
|
||||
- name: '🐍 Setup Python ${{ env.PYTHON_VERSION }}'
|
||||
uses: actions/setup-python@v6
|
||||
uses: actions/setup-python@v5
|
||||
id: setup-python
|
||||
with:
|
||||
python-version: ${{ env.PYTHON_VERSION }}
|
||||
|
||||
- name: '📦 Install Initial Python Dependencies using uv'
|
||||
- name: '📦 Install Initial Python Dependencies'
|
||||
working-directory: langchain
|
||||
run: |
|
||||
python -m pip install -U uv
|
||||
python -m uv pip install --upgrade --no-cache-dir pip setuptools pyyaml
|
||||
|
||||
- name: '📦 Organize Library Directories'
|
||||
# Places cloned partner packages into libs/partners structure
|
||||
run: python langchain/.github/scripts/prep_api_docs_build.py
|
||||
|
||||
- name: '🧹 Clear Prior Build'
|
||||
- name: '🧹 Remove Old HTML Files'
|
||||
run:
|
||||
# Remove artifacts from prior docs build
|
||||
rm -rf langchain-api-docs-html/api_reference_build/html
|
||||
|
||||
- name: '📦 Install Documentation Dependencies using uv'
|
||||
- name: '📦 Install Documentation Dependencies'
|
||||
working-directory: langchain
|
||||
run: |
|
||||
# Install all partner packages in editable mode with overrides
|
||||
python -m uv pip install $(ls ./libs/partners | xargs -I {} echo "./libs/partners/{}") --overrides ./docs/vercel_overrides.txt --prerelease=allow
|
||||
|
||||
# Install core langchain and other main packages
|
||||
python -m uv pip install $(ls ./libs/partners | xargs -I {} echo "./libs/partners/{}") --overrides ./docs/vercel_overrides.txt
|
||||
python -m uv pip install libs/core libs/langchain libs/text-splitters libs/community libs/experimental libs/standard-tests
|
||||
|
||||
# Install Sphinx and related packages for building docs
|
||||
python -m uv pip install -r docs/api_reference/requirements.txt
|
||||
|
||||
- name: '🔧 Configure Git Settings'
|
||||
@@ -123,29 +107,14 @@ jobs:
|
||||
- name: '📚 Build API Documentation'
|
||||
working-directory: langchain
|
||||
run: |
|
||||
# Generate the API reference RST files
|
||||
python docs/api_reference/create_api_rst.py
|
||||
|
||||
# Build the HTML documentation using Sphinx
|
||||
# -T: show full traceback on exception
|
||||
# -E: don't use cached environment (force rebuild, ignore cached doctrees)
|
||||
# -b html: build HTML docs (vs PDS, etc.)
|
||||
# -d: path for the cached environment (parsed document trees / doctrees)
|
||||
# - Separate from output dir for faster incremental builds
|
||||
# -c: path to conf.py
|
||||
# -j auto: parallel build using all available CPU cores
|
||||
python -m sphinx -T -E -b html -d ../langchain-api-docs-html/_build/doctrees -c docs/api_reference docs/api_reference ../langchain-api-docs-html/api_reference_build/html -j auto
|
||||
|
||||
# Post-process the generated HTML
|
||||
python docs/api_reference/scripts/custom_formatter.py ../langchain-api-docs-html/api_reference_build/html
|
||||
|
||||
# Default index page is blank so we copy in the actual home page.
|
||||
cp ../langchain-api-docs-html/api_reference_build/html/{reference,index}.html
|
||||
|
||||
# Removes Sphinx's intermediate build artifacts after the build is complete.
|
||||
rm -rf ../langchain-api-docs-html/_build/
|
||||
|
||||
# Commit and push changes to langchain-api-docs-html repo
|
||||
# https://github.com/marketplace/actions/add-commit
|
||||
- uses: EndBug/add-and-commit@v9
|
||||
with:
|
||||
cwd: langchain-api-docs-html
|
||||
|
||||
.github/workflows/check-broken-links.yml (8 changes, vendored)
@@ -1,11 +1,9 @@
# Runs broken link checker in /docs on a daily schedule.

name: '🔗 Check Broken Links'

on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
- cron: '0 13 * * *'

permissions:
contents: read
@@ -15,9 +13,9 @@ jobs:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: '🟢 Setup Node.js 18.x'
uses: actions/setup-node@v5
uses: actions/setup-node@v4
with:
node-version: 18.x
cache: "yarn"
.github/workflows/check_core_versions.yml (39 changes, vendored)
@@ -1,8 +1,6 @@
# Ensures version numbers in pyproject.toml and version.py stay in sync.
#
# (Prevents releases with mismatched version numbers)

name: '🔍 Check Version Equality'
name: '🔍 Check `core` Version Equality'
# Ensures version numbers in pyproject.toml and version.py stay in sync
# Prevents releases with mismatched version numbers

on:
pull_request:
@@ -18,34 +16,19 @@ jobs:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4

- name: '✅ Verify pyproject.toml & version.py Match'
run: |
# Check core versions
CORE_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
CORE_VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)

# Compare core versions
if [ "$CORE_PYPROJECT_VERSION" != "$CORE_VERSION_PY_VERSION" ]; then
# Compare the two versions
if [ "$PYPROJECT_VERSION" != "$VERSION_PY_VERSION" ]; then
echo "langchain-core versions in pyproject.toml and version.py do not match!"
echo "pyproject.toml version: $CORE_PYPROJECT_VERSION"
echo "version.py version: $CORE_VERSION_PY_VERSION"
echo "pyproject.toml version: $PYPROJECT_VERSION"
echo "version.py version: $VERSION_PY_VERSION"
exit 1
else
echo "Core versions match: $CORE_PYPROJECT_VERSION"
fi

# Check langchain_v1 versions
LANGCHAIN_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/langchain_v1/pyproject.toml)
LANGCHAIN_INIT_PY_VERSION=$(grep -Po '(?<=^__version__ = ")[^"]*' libs/langchain_v1/langchain/__init__.py)

# Compare langchain_v1 versions
if [ "$LANGCHAIN_PYPROJECT_VERSION" != "$LANGCHAIN_INIT_PY_VERSION" ]; then
echo "langchain_v1 versions in pyproject.toml and __init__.py do not match!"
echo "pyproject.toml version: $LANGCHAIN_PYPROJECT_VERSION"
echo "version.py version: $LANGCHAIN_INIT_PY_VERSION"
exit 1
else
echo "Langchain v1 versions match: $LANGCHAIN_PYPROJECT_VERSION"
echo "Versions match: $PYPROJECT_VERSION"
fi
.github/workflows/check_diffs.yml (94 changes, vendored)
@@ -1,18 +1,3 @@
|
||||
# Primary CI workflow.
|
||||
#
|
||||
# Only runs against packages that have changed files.
|
||||
#
|
||||
# Runs:
|
||||
# - Linting (_lint.yml)
|
||||
# - Unit Tests (_test.yml)
|
||||
# - Pydantic compatibility tests (_test_pydantic.yml)
|
||||
# - Documentation import tests (_test_doc_imports.yml)
|
||||
# - Integration test compilation checks (_compile_integration_test.yml)
|
||||
# - Extended test suites that require additional dependencies
|
||||
# - Codspeed benchmarks (if not labeled 'codspeed-ignore')
|
||||
#
|
||||
# Reports status to GitHub checks and PR status.
|
||||
|
||||
name: '🔧 CI'
|
||||
|
||||
on:
|
||||
@@ -26,8 +11,8 @@ on:
|
||||
# cancel the earlier run in favor of the next run.
|
||||
#
|
||||
# There's no point in testing an outdated version of the code. GitHub only allows
|
||||
# a limited number of job runners to be active at the same time, so it's better to
|
||||
# cancel pointless jobs early so that more useful jobs can run sooner.
|
||||
# a limited number of job runners to be active at the same time, so it's better to cancel
|
||||
# pointless jobs early so that more useful jobs can run sooner.
|
||||
concurrency:
|
||||
group: ${{ github.workflow }}-${{ github.ref }}
|
||||
cancel-in-progress: true
|
||||
@@ -45,12 +30,11 @@ jobs:
|
||||
build:
|
||||
name: 'Detect Changes & Set Matrix'
|
||||
runs-on: ubuntu-latest
|
||||
if: ${{ !contains(github.event.pull_request.labels.*.name, 'ci-ignore') }}
|
||||
steps:
|
||||
- name: '📋 Checkout Code'
|
||||
uses: actions/checkout@v5
|
||||
uses: actions/checkout@v4
|
||||
- name: '🐍 Setup Python 3.11'
|
||||
uses: actions/setup-python@v6
|
||||
uses: actions/setup-python@v5
|
||||
with:
|
||||
python-version: '3.11'
|
||||
- name: '📂 Get Changed Files'
|
||||
@@ -69,7 +53,6 @@ jobs:
|
||||
dependencies: ${{ steps.set-matrix.outputs.dependencies }}
|
||||
test-doc-imports: ${{ steps.set-matrix.outputs.test-doc-imports }}
|
||||
test-pydantic: ${{ steps.set-matrix.outputs.test-pydantic }}
|
||||
codspeed: ${{ steps.set-matrix.outputs.codspeed }}
|
||||
# Run linting only on packages that have changed files
|
||||
lint:
|
||||
needs: [ build ]
|
||||
@@ -126,7 +109,6 @@ jobs:
|
||||
|
||||
# Verify integration tests compile without actually running them (faster feedback)
|
||||
compile-integration-tests:
|
||||
name: 'Compile Integration Tests'
|
||||
needs: [ build ]
|
||||
if: ${{ needs.build.outputs.compile-integration-tests != '[]' }}
|
||||
strategy:
|
||||
@@ -155,14 +137,12 @@ jobs:
|
||||
run:
|
||||
working-directory: ${{ matrix.job-configs.working-directory }}
|
||||
steps:
|
||||
- uses: actions/checkout@v5
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: '🐍 Set up Python ${{ matrix.job-configs.python-version }} + UV'
|
||||
uses: "./.github/actions/uv_setup"
|
||||
with:
|
||||
python-version: ${{ matrix.job-configs.python-version }}
|
||||
cache-suffix: extended-tests-${{ matrix.job-configs.working-directory }}
|
||||
working-directory: ${{ matrix.job-configs.working-directory }}
|
||||
|
||||
- name: '📦 Install Dependencies & Run Extended Tests'
|
||||
shell: bash
|
||||
@@ -185,72 +165,10 @@ jobs:
|
||||
# and `set -e` above will cause the step to fail.
|
||||
echo "$STATUS" | grep 'nothing to commit, working tree clean'
|
||||
|
||||
# Run codspeed benchmarks only on packages that have changed files
|
||||
codspeed:
|
||||
name: '⚡ CodSpeed Benchmarks'
|
||||
needs: [ build ]
|
||||
if: ${{ needs.build.outputs.codspeed != '[]' && !contains(github.event.pull_request.labels.*.name, 'codspeed-ignore') }}
|
||||
runs-on: ubuntu-latest
|
||||
strategy:
|
||||
matrix:
|
||||
job-configs: ${{ fromJson(needs.build.outputs.codspeed) }}
|
||||
fail-fast: false
|
||||
steps:
|
||||
- uses: actions/checkout@v5
|
||||
|
||||
# We have to use 3.12 as 3.13 is not yet supported
|
||||
- name: '📦 Install UV Package Manager'
|
||||
uses: astral-sh/setup-uv@v6
|
||||
with:
|
||||
python-version: "3.12"
|
||||
|
||||
- uses: actions/setup-python@v6
|
||||
with:
|
||||
python-version: "3.12"
|
||||
|
||||
- name: '📦 Install Test Dependencies'
|
||||
run: uv sync --group test
|
||||
working-directory: ${{ matrix.job-configs.working-directory }}
|
||||
|
||||
- name: '⚡ Run Benchmarks: ${{ matrix.job-configs.working-directory }}'
|
||||
uses: CodSpeedHQ/action@v4
|
||||
env:
|
||||
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
|
||||
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
|
||||
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
|
||||
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
|
||||
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
|
||||
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
|
||||
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
|
||||
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
|
||||
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
|
||||
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
|
||||
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
|
||||
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
|
||||
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
|
||||
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
|
||||
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
|
||||
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
|
||||
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
|
||||
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
|
||||
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
|
||||
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
|
||||
with:
|
||||
token: ${{ secrets.CODSPEED_TOKEN }}
|
||||
run: |
|
||||
cd ${{ matrix.job-configs.working-directory }}
|
||||
if [ "${{ matrix.job-configs.working-directory }}" = "libs/core" ]; then
|
||||
uv run --no-sync pytest ./tests/benchmarks --codspeed
|
||||
else
|
||||
uv run --no-sync pytest ./tests/ --codspeed
|
||||
fi
|
||||
mode: ${{ matrix.job-configs.working-directory == 'libs/core' && 'walltime' || 'instrumentation' }}
|
||||
|
||||
# Final status check - ensures all required jobs passed before allowing merge
|
||||
ci_success:
|
||||
name: '✅ CI Success'
|
||||
needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic, codspeed]
|
||||
needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic]
|
||||
if: |
|
||||
always()
|
||||
runs-on: ubuntu-latest
|
||||
|
||||
.github/workflows/check_new_docs.yml (7 changes, vendored)
@@ -1,6 +1,3 @@
# For integrations, we run check_templates.py to ensure that new docs use the correct
# templates based on their type. See the script for more details.

name: '📑 Integration Docs Lint'

on:
@@ -25,8 +22,8 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/setup-python@v6
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.10'
- id: files
.github/workflows/codspeed.yml (65 changes, vendored, new file)
@@ -0,0 +1,65 @@
name: '⚡ CodSpeed'

on:
push:
branches:
- master
pull_request:
workflow_dispatch:

permissions:
contents: read

env:
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: foo
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: foo
DEEPSEEK_API_KEY: foo
FIREWORKS_API_KEY: foo

jobs:
codspeed:
name: 'Benchmark'
runs-on: ubuntu-latest
strategy:
matrix:
include:
- working-directory: libs/core
mode: walltime
- working-directory: libs/partners/openai
- working-directory: libs/partners/anthropic
- working-directory: libs/partners/deepseek
- working-directory: libs/partners/fireworks
- working-directory: libs/partners/xai
- working-directory: libs/partners/mistralai
- working-directory: libs/partners/groq
fail-fast: false

steps:
- uses: actions/checkout@v4

# We have to use 3.12 as 3.13 is not yet supported
- name: '📦 Install UV Package Manager'
uses: astral-sh/setup-uv@v6
with:
python-version: "3.12"

- uses: actions/setup-python@v5
with:
python-version: "3.12"

- name: '📦 Install Test Dependencies'
run: uv sync --group test
working-directory: ${{ matrix.working-directory }}

- name: '⚡ Run Benchmarks: ${{ matrix.working-directory }}'
uses: CodSpeedHQ/action@v3
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd ${{ matrix.working-directory }}
if [ "${{ matrix.working-directory }}" = "libs/core" ]; then
uv run --no-sync pytest ./tests/benchmarks --codspeed
else
uv run --no-sync pytest ./tests/ --codspeed
fi
mode: ${{ matrix.mode || 'instrumentation' }}
.github/workflows/extract_ignored_words_list.py (10 changes, vendored, new file)
@@ -0,0 +1,10 @@
import toml

pyproject_toml = toml.load("pyproject.toml")

# Extract the ignore words list (adjust the key as per your TOML structure)
ignore_words_list = (
    pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
)

print(f"::set-output name=ignore_words_list::{ignore_words_list}")
.github/workflows/people.yml (9 changes, vendored)
@@ -1,11 +1,8 @@
# Updates the LangChain People data by fetching the latest info from the LangChain Git.
# TODO: broken/not used

name: '👥 LangChain People'
run-name: 'Update People Data'

on:
schedule:
- cron: "0 14 1 * *" # Runs at 14:00 UTC on the 1st of every month (10AM EDT/7AM PDT)
- cron: "0 14 1 * *"
push:
branches: [jacob/people]
workflow_dispatch:
@@ -21,7 +18,7 @@ jobs:
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- uses: actions/checkout@v5
- uses: actions/checkout@v4
# Ref: https://github.com/actions/runner/issues/2033
- name: '🔧 Fix Git Safe Directory in Container'
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
.github/workflows/pr_labeler_file.yml (28 changes, vendored)
@@ -1,28 +0,0 @@
# Label PRs based on changed files.
#
# See `.github/pr-file-labeler.yml` to see rules for each label/directory.

name: "🏷️ Pull Request Labeler"

on:
# Safe since we're not checking out or running the PR's code
# Never check out the PR's head in a pull_request_target job
pull_request_target:
types: [opened, synchronize, reopened, edited]

jobs:
labeler:
name: 'label'
permissions:
contents: read
pull-requests: write
issues: write
runs-on: ubuntu-latest

steps:
- name: Label Pull Request
uses: actions/labeler@v6
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/pr-file-labeler.yml
sync-labels: false
.github/workflows/pr_labeler_title.yml (28 changes, vendored)
@@ -1,28 +0,0 @@
# Label PRs based on their titles.
#
# See `.github/pr-title-labeler.yml` to see rules for each label/title pattern.

name: "🏷️ PR Title Labeler"

on:
# Safe since we're not checking out or running the PR's code
# Never check out the PR's head in a pull_request_target job
pull_request_target:
types: [opened, synchronize, reopened, edited]

jobs:
pr-title-labeler:
name: 'label'
permissions:
contents: read
pull-requests: write
issues: write
runs-on: ubuntu-latest

steps:
- name: Label PR based on title
# Archived repo; latest commit (v0.1.0)
uses: grafana/pr-labeler-action@f19222d3ef883d2ca5f04420fdfe8148003763f0
with:
token: ${{ secrets.GITHUB_TOKEN }}
configuration-path: .github/pr-title-labeler.yml
72 .github/workflows/pr_lint.yml vendored
@@ -1,43 +1,50 @@
# PR title linting.
# -----------------------------------------------------------------------------
# PR Title Lint Workflow
#
# FORMAT (Conventional Commits 1.0.0):
# Purpose:
#   Enforces Conventional Commits format for pull request titles to maintain a
#   clear, consistent, and machine-readable change history across our repository.
#   This helps with automated changelog generation and semantic versioning.
#
# Enforced Commit Message Format (Conventional Commits 1.0.0):
#   <type>[optional scope]: <description>
#   [optional body]
#   [optional footer(s)]
#
# Examples:
#   feat(core): add multi‐tenant support
#   fix(cli): resolve flag parsing error
#   docs: update API usage examples
#   docs(openai): update API usage examples
#
# Allowed Types:
#   * feat — a new feature (MINOR)
#   * fix — a bug fix (PATCH)
#   * docs — documentation only changes (either in /docs or code comments)
#   * style — formatting, linting, etc.; no code change or typing refactors
#   * refactor — code change that neither fixes a bug nor adds a feature
#   * perf — code change that improves performance
#   * test — adding tests or correcting existing
#   * build — changes that affect the build system/external dependencies
#   * ci — continuous integration/configuration changes
#   * chore — other changes that don't modify source or test files
#   * revert — reverts a previous commit
#   * release — prepare a new release
#   • feat — a new feature (MINOR bump)
#   • fix — a bug fix (PATCH bump)
#   • docs — documentation only changes
#   • style — formatting, missing semi-colons, etc.; no code change
#   • refactor — code change that neither fixes a bug nor adds a feature
#   • perf — code change that improves performance
#   • test — adding missing tests or correcting existing tests
#   • build — changes that affect the build system or external dependencies
#   • ci — continuous integration/configuration changes
#   • chore — other changes that don't modify src or test files
#   • revert — reverts a previous commit
#   • release — prepare a new release
#
# Allowed Scopes (optional):
#   core, cli, langchain, langchain_v1, langchain_legacy, standard-tests,
#   text-splitters, docs, anthropic, chroma, deepseek, exa, fireworks, groq,
#   huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant,
#   xai, infra
#   core, cli, langchain, standard-tests, docs, anthropic, chroma, deepseek,
#   exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai,
#   perplexity, prompty, qdrant, xai
#
# Rules:
#   1. The 'Type' must start with a lowercase letter.
#   2. Breaking changes: append "!" after type/scope (e.g., feat!: drop x support)
# Rules & Tips for New Committers:
#   1. Subject (type) must start with a lowercase letter and, if possible, be
#      followed by a scope wrapped in parenthesis `(scope)`
#   2. Breaking changes:
#      – Append "!" after type/scope (e.g., feat!: drop Node 12 support)
#      – Or include a footer "BREAKING CHANGE: <details>"
#   3. Example PR titles:
#      feat(core): add multi‐tenant support
#      fix(cli): resolve flag parsing error
#      docs: update API usage examples
#      docs(openai): update API usage examples
#
# Enforces Conventional Commits format for pull request titles to maintain a clear and
# machine-readable change history.
# Resources:
#   • Conventional Commits spec: https://www.conventionalcommits.org/en/v1.0.0/
# -----------------------------------------------------------------------------

name: '🏷️ PR Title Lint'

@@ -49,13 +56,13 @@ on:
    types: [opened, edited, synchronize]

jobs:
  # Validates that PR title follows Conventional Commits 1.0.0 specification
  # Validates that PR title follows Conventional Commits specification
  lint-pr-title:
    name: 'validate format'
    name: 'Validate PR Title Format'
    runs-on: ubuntu-latest
    steps:
      - name: '✅ Validate Conventional Commits Format'
        uses: amannn/action-semantic-pull-request@v6
        uses: amannn/action-semantic-pull-request@v5
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
@@ -77,7 +84,6 @@ jobs:
          cli
          langchain
          langchain_v1
          langchain_legacy
          standard-tests
          text-splitters
          docs
12 .github/workflows/run_notebooks.yml vendored
@@ -1,7 +1,5 @@
# Integration tests for documentation notebooks.
name: '📝 Run Documentation Notebooks'

name: '📓 Validate Documentation Notebooks'
run-name: 'Test notebooks in ${{ inputs.working-directory }}'
on:
  workflow_dispatch:
    inputs:
@@ -28,23 +26,21 @@ jobs:
    if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
    name: '📑 Test Documentation Notebooks'
    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4

      - name: '🐍 Set up Python + UV'
        uses: "./.github/actions/uv_setup"
        with:
          python-version: ${{ github.event.inputs.python_version || '3.11' }}
          cache-suffix: run-notebooks-${{ github.event.inputs.working-directory || 'all' }}
          working-directory: ${{ github.event.inputs.working-directory || '**' }}

      - name: '🔐 Authenticate to Google Cloud'
        id: 'auth'
        uses: google-github-actions/auth@v3
        uses: google-github-actions/auth@v2
        with:
          credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'

      - name: '🔐 Configure AWS Credentials'
        uses: aws-actions/configure-aws-credentials@v5
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
29 .github/workflows/scheduled_test.yml vendored
@@ -1,14 +1,7 @@
# Routine integration tests against partner libraries with live API credentials.
#
# Uses `make integration_tests` for each library in the matrix.
#
# Runs daily. Can also be triggered manually for immediate updates.

name: '⏰ Scheduled Integration Tests'
run-name: "Run Integration Tests - ${{ inputs.working-directory-force || 'all libs' }} (Python ${{ inputs.python-version-force || '3.9, 3.11' }})"

on:
  workflow_dispatch:
  workflow_dispatch: # Allows maintainers to trigger the workflow manually in GitHub UI
    inputs:
      working-directory-force:
        type: string
@@ -26,7 +19,7 @@ env:
  POETRY_VERSION: "1.8.4"
  UV_FROZEN: "true"
  DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/xai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
  POETRY_LIBS: ("libs/partners/aws")
  POETRY_LIBS: ("libs/partners/google-vertexai" "libs/partners/google-genai" "libs/partners/aws")

jobs:
  # Generate dynamic test matrix based on input parameters or defaults
@@ -60,13 +53,13 @@ jobs:
          echo $matrix
          echo "matrix=$matrix" >> $GITHUB_OUTPUT
  # Run integration tests against partner libraries with live API credentials
  # Tests are run with Poetry or UV depending on the library's setup
  # Tests are run with both Poetry and UV depending on the library's setup
  build:
    if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
    name: '🐍 Python ${{ matrix.python-version }}: ${{ matrix.working-directory }}'
    runs-on: ubuntu-latest
    needs: [compute-matrix]
    timeout-minutes: 30
    timeout-minutes: 20
    strategy:
      fail-fast: false
      matrix:
@@ -74,14 +67,14 @@ jobs:
        working-directory: ${{ fromJSON(needs.compute-matrix.outputs.matrix).working-directory }}

    steps:
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4
        with:
          path: langchain
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4
        with:
          repository: langchain-ai/langchain-google
          path: langchain-google
      - uses: actions/checkout@v5
      - uses: actions/checkout@v4
        with:
          repository: langchain-ai/langchain-aws
          path: langchain-aws
@@ -112,12 +105,12 @@ jobs:

      - name: '🔐 Authenticate to Google Cloud'
        id: 'auth'
        uses: google-github-actions/auth@v3
        uses: google-github-actions/auth@v2
        with:
          credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'

      - name: '🔐 Configure AWS Credentials'
        uses: aws-actions/configure-aws-credentials@v5
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -167,8 +160,8 @@ jobs:
          make integration_tests

      - name: '🧹 Clean up External Libraries'
        # Clean up external libraries to avoid affecting the following git status check
        run: |
        # Clean up external libraries to avoid affecting git status check
        run: |
          rm -rf \
            langchain/libs/partners/google-genai \
            langchain/libs/partners/google-vertexai \
9 .github/workflows/v1_changes.md vendored
@@ -1,9 +0,0 @@
With the deprecation of v0 docs, the following files will need to be migrated/supported
in the new docs repo:

- run_notebooks.yml: New repo should run Integration tests on code snippets?
- people.yml: Need to fix and somehow display on the new docs site
  - Subsequently, `.github/actions/people/`
- _test_doc_imports.yml
- check_new_docs.yml
- check-broken-links.yml
@@ -11,4 +11,4 @@
  "MD046": {
    "style": "fenced"
  }
}
}
@@ -2,104 +2,110 @@ repos:
  - repo: local
    hooks:
      - id: core
        name: format and lint core
        name: format core
        language: system
        entry: make -C libs/core format lint
        entry: make -C libs/core format
        files: ^libs/core/
        pass_filenames: false
      - id: langchain
        name: format and lint langchain
        name: format langchain
        language: system
        entry: make -C libs/langchain format lint
        entry: make -C libs/langchain format
        files: ^libs/langchain/
        pass_filenames: false
      - id: standard-tests
        name: format and lint standard-tests
        name: format standard-tests
        language: system
        entry: make -C libs/standard-tests format lint
        entry: make -C libs/standard-tests format
        files: ^libs/standard-tests/
        pass_filenames: false
      - id: text-splitters
        name: format and lint text-splitters
        name: format text-splitters
        language: system
        entry: make -C libs/text-splitters format lint
        entry: make -C libs/text-splitters format
        files: ^libs/text-splitters/
        pass_filenames: false
      - id: anthropic
        name: format and lint partners/anthropic
        name: format partners/anthropic
        language: system
        entry: make -C libs/partners/anthropic format lint
        entry: make -C libs/partners/anthropic format
        files: ^libs/partners/anthropic/
        pass_filenames: false
      - id: chroma
        name: format and lint partners/chroma
        name: format partners/chroma
        language: system
        entry: make -C libs/partners/chroma format lint
        entry: make -C libs/partners/chroma format
        files: ^libs/partners/chroma/
        pass_filenames: false
      - id: exa
        name: format and lint partners/exa
      - id: couchbase
        name: format partners/couchbase
        language: system
        entry: make -C libs/partners/exa format lint
        entry: make -C libs/partners/couchbase format
        files: ^libs/partners/couchbase/
        pass_filenames: false
      - id: exa
        name: format partners/exa
        language: system
        entry: make -C libs/partners/exa format
        files: ^libs/partners/exa/
        pass_filenames: false
      - id: fireworks
        name: format and lint partners/fireworks
        name: format partners/fireworks
        language: system
        entry: make -C libs/partners/fireworks format lint
        entry: make -C libs/partners/fireworks format
        files: ^libs/partners/fireworks/
        pass_filenames: false
      - id: groq
        name: format and lint partners/groq
        name: format partners/groq
        language: system
        entry: make -C libs/partners/groq format lint
        entry: make -C libs/partners/groq format
        files: ^libs/partners/groq/
        pass_filenames: false
      - id: huggingface
        name: format and lint partners/huggingface
        name: format partners/huggingface
        language: system
        entry: make -C libs/partners/huggingface format lint
        entry: make -C libs/partners/huggingface format
        files: ^libs/partners/huggingface/
        pass_filenames: false
      - id: mistralai
        name: format and lint partners/mistralai
        name: format partners/mistralai
        language: system
        entry: make -C libs/partners/mistralai format lint
        entry: make -C libs/partners/mistralai format
        files: ^libs/partners/mistralai/
        pass_filenames: false
      - id: nomic
        name: format and lint partners/nomic
        name: format partners/nomic
        language: system
        entry: make -C libs/partners/nomic format lint
        entry: make -C libs/partners/nomic format
        files: ^libs/partners/nomic/
        pass_filenames: false
      - id: ollama
        name: format and lint partners/ollama
        name: format partners/ollama
        language: system
        entry: make -C libs/partners/ollama format lint
        entry: make -C libs/partners/ollama format
        files: ^libs/partners/ollama/
        pass_filenames: false
      - id: openai
        name: format and lint partners/openai
        name: format partners/openai
        language: system
        entry: make -C libs/partners/openai format lint
        entry: make -C libs/partners/openai format
        files: ^libs/partners/openai/
        pass_filenames: false
      - id: prompty
        name: format and lint partners/prompty
        name: format partners/prompty
        language: system
        entry: make -C libs/partners/prompty format lint
        entry: make -C libs/partners/prompty format
        files: ^libs/partners/prompty/
        pass_filenames: false
      - id: qdrant
        name: format and lint partners/qdrant
        name: format partners/qdrant
        language: system
        entry: make -C libs/partners/qdrant format lint
        entry: make -C libs/partners/qdrant format
        files: ^libs/partners/qdrant/
        pass_filenames: false
      - id: root
        name: format and lint docs, cookbook
        name: format docs, cookbook
        language: system
        entry: make format lint
        entry: make format
        files: ^(docs|cookbook)/
        pass_filenames: false
25 .readthedocs.yaml Normal file
@@ -0,0 +1,25 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
version: 2

# Set the version of Python and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"
  commands:
    - mkdir -p $READTHEDOCS_OUTPUT
    - cp -r api_reference_build/* $READTHEDOCS_OUTPUT

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/api_reference/conf.py

# If using Sphinx, optionally build your docs in additional formats such as PDF
formats:
  - pdf

# Optionally declare the Python requirements required to build your docs
python:
  install:
    - requirements: docs/api_reference/requirements.txt
9 .vscode/settings.json vendored
@@ -21,7 +21,7 @@
  "[python]": {
    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {
      "source.organizeImports.ruff": "explicit",
      "source.organizeImports": "explicit",
      "source.fixAll": "explicit"
    },
    "editor.defaultFormatter": "charliermarsh.ruff"
@@ -77,11 +77,4 @@
    "editor.tabSize": 2,
    "editor.insertSpaces": true
  },
  "python.terminal.activateEnvironment": false,
  "python.defaultInterpreterPath": "./.venv/bin/python",
  "github.copilot.chat.commitMessageGeneration.instructions": [
    {
      "file": ".github/workflows/pr_lint.yml"
    }
  ]
}
325 AGENTS.md
@@ -1,325 +0,0 @@
# Global Development Guidelines for LangChain Projects

## Core Development Principles

### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL

**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**

❌ **Bad - Breaking Change:**

```python
def get_user(id, verbose=False):  # Changed from `user_id`
    pass
```

✅ **Good - Stable Interface:**

```python
def get_user(user_id: str, verbose: bool = False) -> User:
    """Retrieve user by ID with optional verbose output."""
    pass
```

**Before making ANY changes to public APIs:**

- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using reStructuredText, like `.. warning::`)

🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
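As a small illustrative sketch of the keyword-only bullet above (the `include_inactive` flag and the lookup body are hypothetical), a new option can be added without disturbing existing positional callers:

```python
from dataclasses import dataclass


@dataclass
class User:
    user_id: str
    active: bool = True


def get_user(user_id: str, verbose: bool = False, *, include_inactive: bool = False) -> User:
    """Retrieve a user by ID; the new flag is keyword-only and defaulted."""
    # Illustrative lookup only; a real implementation would query a store.
    return User(user_id=user_id)


# Existing positional call sites keep working unchanged:
user = get_user("u-123", True)
```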
### 2. Code Quality Standards

**All Python code MUST include type hints and return types.**

❌ **Bad:**

```python
def p(u, d):
    return [x for x in u if x not in d]
```

✅ **Good:**

```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
    """Filter out users that are not in the known users set.

    Args:
        users: List of user identifiers to filter.
        known_users: Set of known/valid user identifiers.

    Returns:
        List of users that are not in the known_users set.
    """
    return [user for user in users if user not in known_users]
```

**Style Requirements:**

- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying

### 3. Testing Requirements

**Every new feature or bugfix MUST be covered by unit tests.**

**Test Organization:**

- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework

**Test Quality Checklist:**

- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)

Checklist questions:

- [ ] Does the test suite fail if your new logic is broken?
- [ ] Are all expected behaviors exercised (happy path, invalid input, etc)?
- [ ] Do tests use fixtures or mocks where needed?

```python
def test_filter_unknown_users():
    """Test filtering unknown users from a list."""
    users = ["alice", "bob", "charlie"]
    known_users = {"alice", "bob"}

    result = filter_unknown_users(users, known_users)

    assert result == ["charlie"]
    assert len(result) == 1
```
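The checklist above asks for fixtures or mocks around external dependencies, but the sample test does not show one. A minimal sketch of that pattern with `pytest` and `unittest.mock` (the `fake_db` fixture and its `save` method are hypothetical):

```python
from unittest.mock import MagicMock

import pytest


@pytest.fixture
def fake_db() -> MagicMock:
    """Stand-in for an external database client so the test never touches a network."""
    db = MagicMock()
    db.save.return_value = {"status": "ok"}
    return db


def test_save_reports_success(fake_db: MagicMock) -> None:
    """Code under test would receive `fake_db` via dependency injection."""
    result = fake_db.save({"id": 1})

    assert result == {"status": "ok"}
    fake_db.save.assert_called_once_with({"id": 1})
```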
### 4. Security and Risk Assessment

**Security Checklist:**

- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`) and use a `msg` variable for error messages
- Remove unreachable/commented code before committing
- Avoid race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)

❌ **Bad:**

```python
def load_config(path):
    with open(path) as f:
        return eval(f.read())  # ⚠️ Never eval config
```

✅ **Good:**

```python
import json

def load_config(path: str) -> dict:
    with open(path) as f:
        return json.load(f)
```

### 5. Documentation Standards

**Use Google-style docstrings with Args section for all public functions.**

❌ **Insufficient Documentation:**

```python
def send_email(to, msg):
    """Send an email to a recipient."""
```

✅ **Complete Documentation:**

```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
    """
    Send an email to a recipient with specified priority.

    Args:
        to: The email address of the recipient.
        msg: The message body to send.
        priority: Email priority level (``'low'``, ``'normal'``, ``'high'``).

    Returns:
        True if email was sent successfully, False otherwise.

    Raises:
        InvalidEmailError: If the email address format is invalid.
        SMTPConnectionError: If unable to connect to email server.
    """
```

**Documentation Guidelines:**

- Types go in function signatures, NOT in docstrings
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Use reStructuredText for docstrings to enable rich formatting

📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.

### 6. Architectural Improvements

**When you encounter code that could be improved, suggest better designs:**

❌ **Poor Design:**

```python
def process_data(data, db_conn, email_client, logger):
    # Function doing too many things
    validated = validate_data(data)
    result = db_conn.save(validated)
    email_client.send_notification(result)
    logger.log(f"Processed {len(data)} items")
    return result
```

✅ **Better Design:**

```python
@dataclass
class ProcessingResult:
    """Result of data processing operation."""
    items_processed: int
    success: bool
    errors: List[str] = field(default_factory=list)

class DataProcessor:
    """Handles data validation, storage, and notification."""

    def __init__(self, db_conn: Database, email_client: EmailClient):
        self.db = db_conn
        self.email = email_client

    def process(self, data: List[dict]) -> ProcessingResult:
        """Process and store data with notifications."""
        validated = self._validate_data(data)
        result = self.db.save(validated)
        self._notify_completion(result)
        return result
```

**Design Improvement Areas:**

If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:

- Reduce code duplication through shared utilities
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection
- Add clarity without adding complexity
- Prefer dataclasses for structured data

## Development Tools & Commands

### Package Management

```bash
# Add package
uv add package-name

# Sync project dependencies
uv sync
uv lock
```

### Testing

```bash
# Run unit tests (no network)
make test

# Don't run integration tests, as API keys must be set

# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```

### Code Quality

```bash
# Lint code
make lint

# Format code
make format

# Type checking
uv run --group lint mypy .
```

### Dependency Management Patterns

**Local Development Dependencies:**

```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```

**For tools, use the `@tool` decorator from `langchain_core.tools`:**

```python
from langchain_core.tools import tool

@tool
def search_database(query: str) -> str:
    """Search the database for relevant information.

    Args:
        query: The search query string.
    """
    # Implementation here
    return results
```

## Commit Standards

**Use Conventional Commits format for PR titles:**

- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `docs: update API usage examples`
- `docs(openai): update API usage examples`

## Framework-Specific Guidelines

- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Implement proper streaming support where applicable
- Avoid deprecated components like legacy `LLMChain`

### Partner Integrations

- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.), as sketched below
- Include comprehensive integration tests
- Document API key requirements and authentication
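A rough sketch of the standard chat-model interface mentioned above (the provider name and echo behaviour are made up; check `langchain-core` for the current `BaseChatModel` contract before relying on it):

```python
from typing import Any, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatResult


class ChatExampleProvider(BaseChatModel):
    """Hypothetical partner chat model illustrating the standard interface."""

    model: str = "example-model-v1"

    @property
    def _llm_type(self) -> str:
        return "chat-example-provider"

    def _generate(
        self,
        messages: list[BaseMessage],
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # A real integration would call the provider's API here and map the
        # response (including usage metadata) onto LangChain message types.
        reply = AIMessage(content=f"echo: {messages[-1].content}")
        return ChatResult(generations=[ChatGeneration(message=reply)])
```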
---

## Quick Reference Checklist

Before submitting code changes:

- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format
325 CLAUDE.md
@@ -1,325 +0,0 @@
(CLAUDE.md was an identical copy of the AGENTS.md shown above; the same 325 lines of global development guidelines are removed here.)
16 Makefile
@@ -1,4 +1,4 @@
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck lint lint_package lint_tests format format_diff
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck spell_check spell_fix lint lint_package lint_tests format format_diff

.EXPORT_ALL_VARIABLES:
UV_FROZEN = true
@@ -78,6 +78,18 @@ api_docs_linkcheck:
	fi
	@echo "✅ API link check complete"

## spell_check: Run codespell on the project.
spell_check:
	@echo "✏️ Checking spelling across project..."
	uv run --group codespell codespell --toml pyproject.toml
	@echo "✅ Spell check complete"

## spell_fix: Run codespell on the project and fix the errors.
spell_fix:
	@echo "✏️ Fixing spelling errors across project..."
	uv run --group codespell codespell --toml pyproject.toml -w
	@echo "✅ Spelling errors fixed"

######################
# LINTING AND FORMATTING
######################
@@ -88,7 +100,7 @@ lint lint_package lint_tests:
	uv run --group lint ruff check docs cookbook
	uv run --group lint ruff format docs cookbook cookbook --diff
	git --no-pager grep 'from langchain import' docs cookbook | grep -vE 'from langchain import (hub)' && echo "Error: no importing langchain from root in docs, except for hub" && exit 1 || exit 0

	git --no-pager grep 'api.python.langchain.com' -- docs/docs ':!docs/docs/additional_resources/arxiv_references.mdx' ':!docs/docs/integrations/document_loaders/sitemap.ipynb' || exit 0 && \
		echo "Error: you should link python.langchain.com/api_reference, not api.python.langchain.com in the docs" && \
		exit 1
114 README.md
@@ -1,75 +1,87 @@
<p align="center">
  <picture>
    <source media="(prefers-color-scheme: light)" srcset="docs/static/img/logo-dark.svg">
    <source media="(prefers-color-scheme: dark)" srcset="docs/static/img/logo-light.svg">
    <img alt="LangChain Logo" src="docs/static/img/logo-dark.svg" width="80%">
  </picture>
</p>
<picture>
  <source media="(prefers-color-scheme: light)" srcset="docs/static/img/logo-dark.svg">
  <source media="(prefers-color-scheme: dark)" srcset="docs/static/img/logo-light.svg">
  <img alt="LangChain Logo" src="docs/static/img/logo-dark.svg" width="80%">
</picture>

<p align="center">
  The platform for reliable agents.
</p>
<div>
  <br>
</div>

<p align="center">
  <a href="https://opensource.org/licenses/MIT" target="_blank">
    <img src="https://img.shields.io/pypi/l/langchain-core?style=flat-square" alt="PyPI - License">
  </a>
  <a href="https://pypistats.org/packages/langchain-core" target="_blank">
    <img src="https://img.shields.io/pepy/dt/langchain" alt="PyPI - Downloads">
  </a>
  <a href="https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain" target="_blank">
    <img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square" alt="Open in Dev Containers">
  </a>
  <a href="https://codespaces.new/langchain-ai/langchain" target="_blank">
    <img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20">
  </a>
  <a href="https://codspeed.io/langchain-ai/langchain" target="_blank">
    <img src="https://img.shields.io/endpoint?url=https://codspeed.io/badge.json" alt="CodSpeed Badge">
  </a>
  <a href="https://twitter.com/langchainai" target="_blank">
    <img src="https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI" alt="Twitter / X">
  </a>
</p>
[](https://github.com/langchain-ai/langchain/releases)
[](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[](https://opensource.org/licenses/MIT)
[](https://pypistats.org/packages/langchain-core)
[](https://star-history.com/#langchain-ai/langchain)
[](https://github.com/langchain-ai/langchain/issues)
[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[](https://twitter.com/langchainai)
[](https://codspeed.io/langchain-ai/langchain)

LangChain is a framework for building LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

LangChain is a framework for building LLM-powered applications. It helps you chain
together interoperable components and third-party integrations to simplify AI
application development — all while future-proofing decisions as the underlying
technology evolves.

```bash
pip install -U langchain
```

---

**Documentation**: To learn more about LangChain, check out [the docs](https://python.langchain.com/docs/introduction/).

If you're looking for more advanced customization or agent orchestration, check out [LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building controllable agent workflows.

> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
To learn more about LangChain, check out
[the docs](https://python.langchain.com/docs/introduction/). If you’re looking for more
advanced customization or agent orchestration, check out
[LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building
controllable agent workflows.

## Why use LangChain?

LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
LangChain helps developers build applications powered by LLMs through a standard
interface for models, embeddings, vector stores, and more.

Use LangChain for:

- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain’s vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application’s needs. As the industry frontier evolves, adapt quickly — LangChain’s abstractions keep you moving without losing momentum.
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and
  external / internal systems, drawing from LangChain’s vast library of integrations with
  model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team
  experiments to find the best choice for your application’s needs. As the industry
  frontier evolves, adapt quickly — LangChain’s abstractions keep you moving without
  losing momentum.

## LangChain’s ecosystem

While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
While the LangChain framework can be used standalone, it also integrates seamlessly
with any LangChain product, giving developers a full suite of tools when building LLM
applications.

To improve your LLM application development, pair LangChain with:

- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows — and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- [LangGraph Platform](https://docs.langchain.com/langgraph-platform) - Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
- [LangSmith](http://www.langchain.com/langsmith) - Helpful for agent evals and
  observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain
  visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can
  reliably handle complex tasks with LangGraph, our low-level agent orchestration
  framework. LangGraph offers customizable architecture, long-term memory, and
  human-in-the-loop workflows — and is trusted in production by companies like LinkedIn,
  Uber, Klarna, and GitLab.
- [LangGraph Platform](https://langchain-ai.github.io/langgraph/concepts/langgraph_platform/) - Deploy
  and scale agents effortlessly with a purpose-built deployment platform for long
  running, stateful workflows. Discover, reuse, configure, and share agents across
  teams — and iterate quickly with visual prototyping in
  [LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).

## Additional resources

- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key concepts behind the LangChain framework.
- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with
  guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code
  snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key
  concepts behind the LangChain framework.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on navigating base packages and integrations for LangChain.
- [Chat LangChain](https://chat.langchain.com/): Ask questions & chat with our documentation.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on
  navigating base packages and integrations for LangChain.
19 SECURITY.md
@@ -4,9 +4,9 @@ LangChain has a large ecosystem of integrations with various external resources

## Best practices

When building such applications, developers should remember to follow good security practices:
When building such applications developers should remember to follow good security practices:

* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc., as appropriate for your application.
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc. as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.

@@ -22,7 +22,9 @@ Example scenarios with mitigation strategies:
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.

If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
If you're building applications that access external resources like file systems, APIs
or databases, consider speaking with your company's security team to determine how to best
design and secure your applications.

## Reporting OSS Vulnerabilities

@@ -30,13 +32,15 @@ LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
a bounty program for our open source projects.

Please report security vulnerabilities associated with the LangChain
open source projects at [huntr](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true).
open source projects [here](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true).

Before reporting a vulnerability, please review:

1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://docs.langchain.com/oss/python/contributing/code#supporting-packages) monorepo structure.
3) The [Best Practices](#best-practices) above to understand what we consider to be a security vulnerability vs. developer responsibility.
2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
3) The [Best Practices](#best-practices) above to
understand what we consider to be a security vulnerability vs. developer
responsibility.

### In-Scope Targets

@@ -63,7 +67,8 @@ All out of scope targets defined by huntr as well as:
  for more details, but generally tools interact with the real world. Developers are
  expected to understand the security implications of their code and are responsible
  for the security of their tools.
* Code documented with security notices. This will be decided on a case-by-case basis, but likely will not be eligible for a bounty as the code is already
* Code documented with security notices. This will be decided on a case by
  case basis, but likely will not be eligible for a bounty as the code is already
  documented with guidelines for developers that should be followed for making their
  application secure.
* Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
@@ -144,7 +144,7 @@
},
{
 "cell_type": "code",
 "execution_count": null,
 "execution_count": 4,
 "id": "kWDWfSDBMPl8",
 "metadata": {},
 "outputs": [
@@ -185,7 +185,7 @@
 " )\n",
 " # Text summary chain\n",
 " model = VertexAI(\n",
 " temperature=0, model_name=\"gemini-2.5-flash\", max_tokens=1024\n",
 " temperature=0, model_name=\"gemini-2.0-flash-lite-001\", max_tokens=1024\n",
 " ).with_fallbacks([empty_response])\n",
 " summarize_chain = {\"element\": lambda x: x} | prompt | model | StrOutputParser()\n",
 "\n",
@@ -235,7 +235,7 @@
},
{
 "cell_type": "code",
 "execution_count": null,
 "execution_count": 6,
 "id": "PeK9bzXv3olF",
 "metadata": {},
 "outputs": [],
@@ -254,7 +254,7 @@
 "\n",
 "def image_summarize(img_base64, prompt):\n",
 " \"\"\"Make image summary\"\"\"\n",
 " model = ChatVertexAI(model=\"gemini-2.5-flash\", max_tokens=1024)\n",
 " model = ChatVertexAI(model=\"gemini-2.0-flash\", max_tokens=1024)\n",
 "\n",
 " msg = model.invoke(\n",
 " [\n",
@@ -431,7 +431,7 @@
},
{
 "cell_type": "code",
 "execution_count": null,
 "execution_count": 9,
 "id": "GlwCErBaCKQW",
 "metadata": {},
 "outputs": [],
@@ -553,7 +553,7 @@
 " \"\"\"\n",
 "\n",
 " # Multi-modal LLM\n",
 " model = ChatVertexAI(temperature=0, model_name=\"gemini-2.5-flash\", max_tokens=1024)\n",
 " model = ChatVertexAI(temperature=0, model_name=\"gemini-2.0-flash\", max_tokens=1024)\n",
 "\n",
 " # RAG pipeline\n",
 " chain = (\n",
@@ -63,5 +63,4 @@ Notebook | Description
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform Retrieval-Augmented-Generation (RAG) on locally downloaded open-source models using LangChain and open-source tools, executed on an Intel Xeon CPU. The example applies RAG to a Llama 2 model and enables it to answer queries related to Intel's Q1 2024 earnings release.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
[contextual_rag.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/contextual_rag.ipynb) | Performs contextual retrieval-augmented generation (RAG) by prepending chunk-specific explanatory context to each chunk before embedding.
[rag-agents-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/local_rag_agents_intel_cpu.ipynb) | Build a RAG agent locally with open source models that routes questions through one of two paths to find answers. The agent generates answers based on documents retrieved either from the vector database or from web search. If the vector database lacks relevant information, the agent opts for web search. Open-source models for the LLM and embeddings run locally on an Intel Xeon CPU to execute this pipeline.
[rag_mlflow_tracking_evaluation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_mlflow_tracking_evaluation.ipynb) | Guide on how to create a RAG pipeline and track + evaluate it with MLflow.
[rag-agents-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/local_rag_agents_intel_cpu.ipynb) | Build a RAG agent locally with open source models that routes questions through one of two paths to find answers. The agent generates answers based on documents retrieved either from the vector database or from web search. If the vector database lacks relevant information, the agent opts for web search. Open-source models for the LLM and embeddings run locally on an Intel Xeon CPU to execute this pipeline.
@@ -79,17 +79,6 @@
|
||||
"tool_executor = ToolExecutor(tools)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "168152fc",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"📘 **Note on `SystemMessage` usage with LangGraph-based agents**\n",
|
||||
"\n",
|
||||
"When constructing the `messages` list for an agent, you *must* manually include any `SystemMessage`s.\n",
|
||||
"Unlike some agent executors in LangChain that set a default, LangGraph requires explicit inclusion."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "fe6e8f78-1ef7-42ad-b2bf-835ed5850553",
|
||||
|
||||
@@ -47,12 +47,10 @@
|
||||
"source": [
|
||||
"### Prerequisites\n",
|
||||
"\n",
|
||||
"You'll need to install `langchain-oracledb` with `python -m pip install -U langchain-oracledb` to use this integration.\n",
|
||||
"\n",
|
||||
"The `python-oracledb` driver is installed automatically as a dependency of langchain-oracledb.\n",
|
||||
"Please install the Oracle Database [python-oracledb driver](https://pypi.org/project/oracledb/) to use LangChain with Oracle AI Vector Search:\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"$ python -m pip install -U langchain-oracledb\n",
|
||||
"$ python -m pip install --upgrade oracledb\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
@@ -219,7 +217,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
|
||||
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
|
||||
"\n",
|
||||
"# please update with your related information\n",
|
||||
"# make sure that you have onnx file in the system\n",
|
||||
@@ -298,7 +296,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oracledb.document_loaders.oracleai import OracleDocLoader\n",
|
||||
"from langchain_community.document_loaders.oracleai import OracleDocLoader\n",
|
||||
"from langchain_core.documents import Document\n",
|
||||
"\n",
|
||||
"# loading from Oracle Database table\n",
|
||||
@@ -356,7 +354,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oracledb.utilities.oracleai import OracleSummary\n",
|
||||
"from langchain_community.utilities.oracleai import OracleSummary\n",
|
||||
"from langchain_core.documents import Document\n",
|
||||
"\n",
|
||||
"# using 'database' provider\n",
|
||||
@@ -397,7 +395,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oracledb.document_loaders.oracleai import OracleTextSplitter\n",
|
||||
"from langchain_community.document_loaders.oracleai import OracleTextSplitter\n",
|
||||
"from langchain_core.documents import Document\n",
|
||||
"\n",
|
||||
"# split by default parameters\n",
|
||||
@@ -454,7 +452,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
|
||||
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
|
||||
"from langchain_core.documents import Document\n",
|
||||
"\n",
|
||||
"# using ONNX model loaded to Oracle Database\n",
|
||||
@@ -500,14 +498,14 @@
|
||||
"import sys\n",
|
||||
"\n",
|
||||
"import oracledb\n",
|
||||
"from langchain_oracledb.document_loaders.oracleai import (\n",
|
||||
"from langchain_community.document_loaders.oracleai import (\n",
|
||||
" OracleDocLoader,\n",
|
||||
" OracleTextSplitter,\n",
|
||||
")\n",
|
||||
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
|
||||
"from langchain_oracledb.utilities.oracleai import OracleSummary\n",
|
||||
"from langchain_oracledb.vectorstores import oraclevs\n",
|
||||
"from langchain_oracledb.vectorstores.oraclevs import OracleVS\n",
|
||||
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
|
||||
"from langchain_community.utilities.oracleai import OracleSummary\n",
|
||||
"from langchain_community.vectorstores import oraclevs\n",
|
||||
"from langchain_community.vectorstores.oraclevs import OracleVS\n",
|
||||
"from langchain_community.vectorstores.utils import DistanceStrategy\n",
|
||||
"from langchain_core.documents import Document"
|
||||
]
|
||||
@@ -679,19 +677,19 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"query = \"What is Oracle AI Vector Store?\"\n",
|
||||
"db_filter = {\"document_id\": \"1\"}\n",
|
||||
"filter = {\"document_id\": [\"1\"]}\n",
|
||||
"\n",
|
||||
"# Similarity search without a filter\n",
|
||||
"print(vectorstore.similarity_search(query, 1))\n",
|
||||
"\n",
|
||||
"# Similarity search with a filter\n",
|
||||
"print(vectorstore.similarity_search(query, 1, filter=db_filter))\n",
|
||||
"print(vectorstore.similarity_search(query, 1, filter=filter))\n",
|
||||
"\n",
|
||||
"# Similarity search with relevance score\n",
|
||||
"print(vectorstore.similarity_search_with_score(query, 1))\n",
|
||||
"\n",
|
||||
"# Similarity search with relevance score with filter\n",
|
||||
"print(vectorstore.similarity_search_with_score(query, 1, filter=db_filter))\n",
|
||||
"print(vectorstore.similarity_search_with_score(query, 1, filter=filter))\n",
|
||||
"\n",
|
||||
"# Max marginal relevance search\n",
|
||||
"print(vectorstore.max_marginal_relevance_search(query, 1, fetch_k=20, lambda_mult=0.5))\n",
|
||||
@@ -699,7 +697,7 @@
|
||||
"# Max marginal relevance search with filter\n",
|
||||
"print(\n",
|
||||
" vectorstore.max_marginal_relevance_search(\n",
|
||||
" query, 1, fetch_k=20, lambda_mult=0.5, filter=db_filter\n",
|
||||
" query, 1, fetch_k=20, lambda_mult=0.5, filter=filter\n",
|
||||
" )\n",
|
||||
")"
|
||||
]
|
||||
|
||||
@@ -1,455 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3716230e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# RAG Pipeline with MLflow Tracking, Tracing & Evaluation\n",
|
||||
"\n",
|
||||
"This notebook demonstrates how to build a complete Retrieval-Augmented Generation (RAG) pipeline using LangChain and integrate it with MLflow for experiment tracking, tracing, and evaluation.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"- **RAG Pipeline Construction**: Build a complete RAG system using LangChain components\n",
|
||||
"- **MLflow Integration**: Track experiments, parameters, and artifacts\n",
|
||||
"- **Tracing**: Monitor inputs, outputs, retrieved documents, scores, prompts, and timings\n",
|
||||
"- **Evaluation**: Use MLflow's built-in scorers to assess RAG performance\n",
|
||||
"- **Best Practices**: Implement proper configuration management and reproducible experiments\n",
|
||||
"\n",
|
||||
"We'll build a RAG system that can answer questions about academic papers by:\n",
|
||||
"1. Loading and chunking documents from ArXiv\n",
|
||||
"2. Creating embeddings and a vector store\n",
|
||||
"3. Setting up a retrieval-augmented generation chain\n",
|
||||
"4. Tracking all experiments with MLflow\n",
|
||||
"5. Evaluating the system's performance\n",
|
||||
"\n",
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2f7561c4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Setup"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "0814ebe9",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -U langchain mlflow langchain-community arxiv pymupdf langchain-text-splitters langchain-openai"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "747399b6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import mlflow\n",
|
||||
"from mlflow.genai.scorers import RelevanceToQuery, Correctness, ExpectationsGuidelines\n",
|
||||
"from langchain_community.document_loaders import ArxivLoader\n",
|
||||
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
|
||||
"from langchain_core.vectorstores import InMemoryVectorStore\n",
|
||||
"from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n",
|
||||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||||
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
|
||||
"from langchain_core.output_parsers import StrOutputParser"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "4141ee05",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"os.environ[\"OPENAI_API_KEY\"] = \"<YOUR OPENAI API KEY>\"\n",
|
||||
"\n",
|
||||
"mlflow.set_experiment(\"LangChain-RAG-MLflow\")\n",
|
||||
"mlflow.langchain.autolog()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "dd5eb41b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Define all hyperparameters and configuration in a centralized dictionary. This makes it easy to:\n",
|
||||
"- Track different experiment configurations\n",
|
||||
"- Reproduce results\n",
|
||||
"- Perform hyperparameter tuning\n",
|
||||
"\n",
|
||||
"**Key Parameters**:\n",
|
||||
"- `chunk_size`: Size of text chunks for document splitting\n",
|
||||
"- `chunk_overlap`: Overlap between consecutive chunks\n",
|
||||
"- `retriever_k`: Number of documents to retrieve\n",
|
||||
"- `embeddings_model`: OpenAI embedding model\n",
|
||||
"- `llm`: Language model for generation\n",
|
||||
"- `temperature`: Sampling temperature for the LLM"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "6dcdc5d8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"CONFIG = {\n",
|
||||
" \"chunk_size\": 400,\n",
|
||||
" \"chunk_overlap\": 80,\n",
|
||||
" \"retriever_k\": 3,\n",
|
||||
" \"embeddings_model\": \"text-embedding-3-small\",\n",
|
||||
" \"system_prompt\": \"You are a helpful assistant. Use the following context to answer the question. Use three sentences maximum and keep the answer concise.\",\n",
|
||||
" \"llm\": \"gpt-5-nano\",\n",
|
||||
" \"temperature\": 0,\n",
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "8a2985f1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### ArXiv Dcoument Loading and Processing"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "1f32aa36",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'Published': '2023-08-02', 'Title': 'Attention Is All You Need', 'Authors': 'Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin', 'Summary': 'The dominant sequence transduction models are based on complex recurrent or\\nconvolutional neural networks in an encoder-decoder configuration. The best\\nperforming models also connect the encoder and decoder through an attention\\nmechanism. We propose a new simple network architecture, the Transformer, based\\nsolely on attention mechanisms, dispensing with recurrence and convolutions\\nentirely. Experiments on two machine translation tasks show these models to be\\nsuperior in quality while being more parallelizable and requiring significantly\\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014\\nEnglish-to-German translation task, improving over the existing best results,\\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\\ntranslation task, our model establishes a new single-model state-of-the-art\\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction\\nof the training costs of the best models from the literature. We show that the\\nTransformer generalizes well to other tasks by applying it successfully to\\nEnglish constituency parsing both with large and limited training data.'}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Load documents from ArXiv\n",
|
||||
"loader = ArxivLoader(\n",
|
||||
" query=\"1706.03762\",\n",
|
||||
" load_max_docs=1,\n",
|
||||
")\n",
|
||||
"docs = loader.load()\n",
|
||||
"print(docs[0].metadata)\n",
|
||||
"\n",
|
||||
"# Split documents into chunks\n",
|
||||
"splitter = RecursiveCharacterTextSplitter(\n",
|
||||
" chunk_size=CONFIG[\"chunk_size\"],\n",
|
||||
" chunk_overlap=CONFIG[\"chunk_overlap\"],\n",
|
||||
")\n",
|
||||
"chunks = splitter.split_documents(docs)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Join chunks into a single string\n",
|
||||
"def join_chunks(chunks):\n",
|
||||
" return \"\\n\\n\".join([chunk.page_content for chunk in chunks])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6e194ab4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Vector Store and Retriever Setup"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "26dfbeaa",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Create embeddings\n",
|
||||
"embeddings = OpenAIEmbeddings(model=CONFIG[\"embeddings_model\"])\n",
|
||||
"\n",
|
||||
"# Create vector store from documents\n",
|
||||
"vectorstore = InMemoryVectorStore.from_documents(\n",
|
||||
" chunks,\n",
|
||||
" embedding=embeddings,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Create retriever\n",
|
||||
"retriever = vectorstore.as_retriever(search_kwargs={\"k\": CONFIG[\"retriever_k\"]})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "bc1f181b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### RAG Chain Construction using [LCEL](https://python.langchain.com/docs/concepts/lcel/)\n",
|
||||
"\n",
|
||||
"Flow:\n",
|
||||
"1. Query → Retriever (finds relevant chunks)\n",
|
||||
"2. Chunks → join_chunks (creates context)\n",
|
||||
"3. Context + Query → Prompt Template\n",
|
||||
"4. Prompt → Language Model → Response\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "6a810dc3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Initialize the language model\n",
|
||||
"llm = ChatOpenAI(model=CONFIG[\"llm\"], temperature=CONFIG[\"temperature\"])\n",
|
||||
"\n",
|
||||
"# Create the prompt template\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [\n",
|
||||
" (\"system\", CONFIG[\"system_prompt\"] + \"\\n\\nContext:\\n{context}\\n\\n\"),\n",
|
||||
" (\"human\", \"\\n{question}\\n\"),\n",
|
||||
" ]\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Construct the RAG chain\n",
|
||||
"rag_chain = (\n",
|
||||
" {\n",
|
||||
" \"context\": retriever | RunnableLambda(join_chunks),\n",
|
||||
" \"question\": RunnablePassthrough(),\n",
|
||||
" }\n",
|
||||
" | prompt\n",
|
||||
" | llm\n",
|
||||
" | StrOutputParser()\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c04bd019",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Prediction Function with MLflow Tracing\n",
|
||||
"\n",
|
||||
"Create a prediction function decorated with `@mlflow.trace` to automatically log:\n",
|
||||
"- Input queries\n",
|
||||
"- Retrieved documents\n",
|
||||
"- Generated responses\n",
|
||||
"- Execution time\n",
|
||||
"- Chain intermediate steps"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "7b45fc04",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Question: What is the main idea of the paper?\n",
|
||||
"Response: The main idea is to replace recurrent/convolutional sequence models with a pure attention-based architecture called the Transformer. It uses self-attention to model dependencies between all positions in the input and output, enabling full parallelization and better handling of long-range relations. This approach achieves strong results on translation and can extend to other modalities.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"@mlflow.trace\n",
|
||||
"def predict_fn(question: str) -> str:\n",
|
||||
" return rag_chain.invoke(question)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Test the prediction function\n",
|
||||
"sample_question = \"What is the main idea of the paper?\"\n",
|
||||
"response = predict_fn(sample_question)\n",
|
||||
"print(f\"Question: {sample_question}\")\n",
|
||||
"print(f\"Response: {response}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "421469de",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Evaluation Dataset and Scoring\n",
|
||||
"\n",
|
||||
"Define an evaluation dataset and run systematic evaluation using [MLflow's built-in scorers](https://mlflow.org/docs/latest/genai/eval-monitor/scorers/llm-judge/predefined/#available-scorers):\n",
|
||||
"\n",
|
||||
"<u>Evaluation Components:</u>\n",
|
||||
"- **Dataset**: Questions with expected concepts and facts\n",
|
||||
"- **Scorers**: \n",
|
||||
" - `RelevanceToQuery`: Measures how relevant the response is to the question\n",
|
||||
" - `Correctness`: Evaluates factual accuracy of the response\n",
|
||||
" - `ExpectationsGuidelines`: Checks that output matches expectation guidelines\n",
|
||||
"\n",
|
||||
"<u>Best Practices:</u>\n",
|
||||
"- Create diverse test cases covering different query types\n",
|
||||
"- Include expected concepts to guide evaluation\n",
|
||||
"- Use multiple scoring metrics for comprehensive assessment"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "5c1dc4f2",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"2025/08/23 20:14:39 INFO mlflow.models.evaluation.utils.trace: Auto tracing is temporarily enabled during the model evaluation for computing some metrics and debugging. To disable tracing, call `mlflow.autolog(disable=True)`.\n",
|
||||
"2025/08/23 20:14:39 INFO mlflow.genai.utils.data_validation: Testing model prediction with the first sample in the dataset.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "2b6c6687efa24796b39c7951d589d481",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
"text/plain": [
|
||||
"Evaluating: 0%| | 0/3 [Elapsed: 00:00, Remaining: ?] "
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"✨ Evaluation completed.\n",
|
||||
"\n",
|
||||
"Metrics and evaluation results are logged to the MLflow run:\n",
|
||||
" Run name: \u001b[94mbaseline_eval\u001b[0m\n",
|
||||
" Run ID: \u001b[94ma2218d9f24c9415f8040d3b77af103a9\u001b[0m\n",
|
||||
"\n",
|
||||
"To view the detailed evaluation results with sample-wise scores,\n",
|
||||
"open the \u001b[93m\u001b[1mTraces\u001b[0m tab in the Run page in the MLflow UI.\n",
|
||||
"\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Define evaluation dataset\n",
|
||||
"eval_dataset = [\n",
|
||||
" {\n",
|
||||
" \"inputs\": {\"question\": \"What is the main idea of the paper?\"},\n",
|
||||
" \"expectations\": {\n",
|
||||
" \"key_concepts\": [\"attention mechanism\", \"transformer\", \"neural network\"],\n",
|
||||
" \"expected_facts\": [\n",
|
||||
" \"attention mechanism is a key component of the transformer model\"\n",
|
||||
" ],\n",
|
||||
" \"guidelines\": [\"The response must be factual and concise\"],\n",
|
||||
" },\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"inputs\": {\n",
|
||||
" \"question\": \"What's the difference between a transformer and a recurrent neural network?\"\n",
|
||||
" },\n",
|
||||
" \"expectations\": {\n",
|
||||
" \"key_concepts\": [\"sequential\", \"attention mechanism\", \"hidden state\"],\n",
|
||||
" \"expected_facts\": [\n",
|
||||
" \"transformer processes data in parallel while RNN processes data sequentially\"\n",
|
||||
" ],\n",
|
||||
" \"guidelines\": [\n",
|
||||
" \"The response must be factual and focus on the difference between the two models\"\n",
|
||||
" ],\n",
|
||||
" },\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"inputs\": {\"question\": \"What does the attention mechanism do?\"},\n",
|
||||
" \"expectations\": {\n",
|
||||
" \"key_concepts\": [\"query\", \"key\", \"value\", \"relationship\", \"similarity\"],\n",
|
||||
" \"expected_facts\": [\n",
|
||||
" \"attention allows the model to weigh the importance of different parts of the input sequence when processing it\"\n",
|
||||
" ],\n",
|
||||
" \"guidelines\": [\n",
|
||||
" \"The response must be factual and explain the concept of attention\"\n",
|
||||
" ],\n",
|
||||
" },\n",
|
||||
" },\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"# Run evaluation with MLflow\n",
|
||||
"with mlflow.start_run(run_name=\"baseline_eval\") as run:\n",
|
||||
" # Log configuration parameters\n",
|
||||
" mlflow.log_params(CONFIG)\n",
|
||||
"\n",
|
||||
" # Run evaluation\n",
|
||||
" results = mlflow.genai.evaluate(\n",
|
||||
" data=eval_dataset,\n",
|
||||
" predict_fn=predict_fn,\n",
|
||||
" scorers=[RelevanceToQuery(), Correctness(), ExpectationsGuidelines()],\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "52b137c7",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Launch MLflow UI to check out the results\n",
|
||||
"\n",
|
||||
"<u>What you'll see in the UI:</u>\n",
|
||||
"- **Experiments**: Compare different RAG configurations\n",
|
||||
"- **Runs**: Individual experiment runs with metrics and parameters\n",
|
||||
"- **Traces**: Detailed execution traces showing retrieval and generation steps\n",
|
||||
"- **Evaluation Results**: Scoring metrics and detailed comparisons\n",
|
||||
"- **Artifacts**: Saved models, datasets, and other files\n",
|
||||
"\n",
|
||||
"Navigate to `http://localhost:5000` after running the command below."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "817c3799",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"!mlflow ui"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c75861e3",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You should see something like this\n",
|
||||
"\n",
|
||||
""
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "venv",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.13.5"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -34,7 +34,7 @@
|
||||
"tools = [multiply, exponentiate, add]\n",
|
||||
"\n",
|
||||
"gpt35 = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0).bind_tools(tools)\n",
|
||||
"claude3 = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\").bind_tools(tools)\n",
|
||||
"claude3 = ChatAnthropic(model=\"claude-3-sonnet-20240229\").bind_tools(tools)\n",
|
||||
"llm_with_tools = gpt35.configurable_alternatives(\n",
|
||||
" ConfigurableField(id=\"llm\"), default_key=\"gpt35\", claude3=claude3\n",
|
||||
")"
|
||||
@@ -113,15 +113,14 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\", additional_kwargs={}, response_metadata={}),\n",
|
||||
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_xuNXwm2P6U2Pp2pAbC1sdIBz', 'function': {'arguments': '{\"x\": 3, \"y\": 5}', 'name': 'add'}, 'type': 'function'}, {'id': 'call_0pImUJUDlYa5zfBcxxuvWyYS', 'function': {'arguments': '{\"x\": 8, \"y\": 2.743}', 'name': 'exponentiate'}, 'type': 'function'}, {'id': 'call_yaownQ9TZK0dkqD1KSFyax4H', 'function': {'arguments': '{\"x\": 17.24, \"y\": -918.1241}', 'name': 'add'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 75, 'prompt_tokens': 131, 'total_tokens': 206, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'id': 'chatcmpl-ByJm2qxSWU3oTTSZQv64J4XQKZhA6', 'service_tier': 'default', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run--35fad027-47f7-44d3-aa8b-99f4fc24098c-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 5}, 'id': 'call_xuNXwm2P6U2Pp2pAbC1sdIBz', 'type': 'tool_call'}, {'name': 'exponentiate', 'args': {'x': 8, 'y': 2.743}, 'id': 'call_0pImUJUDlYa5zfBcxxuvWyYS', 'type': 'tool_call'}, {'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'call_yaownQ9TZK0dkqD1KSFyax4H', 'type': 'tool_call'}], usage_metadata={'input_tokens': 131, 'output_tokens': 75, 'total_tokens': 206, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),\n",
|
||||
" ToolMessage(content='8.0', tool_call_id='call_xuNXwm2P6U2Pp2pAbC1sdIBz'),\n",
|
||||
" ToolMessage(content='300.03770462067547', tool_call_id='call_0pImUJUDlYa5zfBcxxuvWyYS'),\n",
|
||||
" ToolMessage(content='-900.8841', tool_call_id='call_yaownQ9TZK0dkqD1KSFyax4H'),\n",
|
||||
" AIMessage(content='The results are:\\n1. 3 plus 5 is 8.\\n2. 5 raised to the power of 2.743 is approximately 300.04.\\n3. 17.24 minus 918.1241 is approximately -900.88.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 55, 'prompt_tokens': 236, 'total_tokens': 291, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'id': 'chatcmpl-ByJm345MYnpowGS90iAZAlSs7haed', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--5fa66d47-d80e-45d0-9c32-31348c735d72-0', usage_metadata={'input_tokens': 236, 'output_tokens': 55, 'total_tokens': 291, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}"
|
||||
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"),\n",
|
||||
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6yMU2WsS4Bqgi1WxFHxtfJRc', 'function': {'arguments': '{\"x\": 8, \"y\": 2.743}', 'name': 'exponentiate'}, 'type': 'function'}, {'id': 'call_GAL3dQiKFF9XEV0RrRLPTvVp', 'function': {'arguments': '{\"x\": 17.24, \"y\": -918.1241}', 'name': 'add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 58, 'prompt_tokens': 168, 'total_tokens': 226}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-528302fc-7acf-4c11-82c4-119ccf40c573-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 8, 'y': 2.743}, 'id': 'call_6yMU2WsS4Bqgi1WxFHxtfJRc'}, {'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'call_GAL3dQiKFF9XEV0RrRLPTvVp'}]),\n",
|
||||
" ToolMessage(content='300.03770462067547', tool_call_id='call_6yMU2WsS4Bqgi1WxFHxtfJRc'),\n",
|
||||
" ToolMessage(content='-900.8841', tool_call_id='call_GAL3dQiKFF9XEV0RrRLPTvVp'),\n",
|
||||
" AIMessage(content='The result of \\\\(3 + 5^{2.743}\\\\) is approximately 300.04, and the result of \\\\(17.24 - 918.1241\\\\) is approximately -900.88.', response_metadata={'token_usage': {'completion_tokens': 44, 'prompt_tokens': 251, 'total_tokens': 295}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-d1161669-ed09-4b18-94bd-6d8530df5aa8-0')]}"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -147,17 +146,17 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\", additional_kwargs={}, response_metadata={}),\n",
|
||||
" AIMessage(content=[{'text': \"I'll solve these calculations for you.\\n\\nFor the first part, I need to calculate 3 plus 5 raised to the power of 2.743.\\n\\nLet me break this down:\\n1) First, I'll calculate 5 raised to the power of 2.743\\n2) Then add 3 to the result\", 'type': 'text'}, {'id': 'toolu_01L1mXysBQtpPLQ2AZTaCGmE', 'input': {'x': 5, 'y': 2.743}, 'name': 'exponentiate', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01HCbDmuzdg9ATMyKbnecbEE', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 563, 'output_tokens': 146, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--9f6469fb-bcbb-4c1c-9eec-79f6979c38e6-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 5, 'y': 2.743}, 'id': 'toolu_01L1mXysBQtpPLQ2AZTaCGmE', 'type': 'tool_call'}], usage_metadata={'input_tokens': 563, 'output_tokens': 146, 'total_tokens': 709, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),\n",
|
||||
" ToolMessage(content='82.65606421491815', tool_call_id='toolu_01L1mXysBQtpPLQ2AZTaCGmE'),\n",
|
||||
" AIMessage(content=[{'text': \"Now I'll add 3 to this result:\", 'type': 'text'}, {'id': 'toolu_01NARC83e9obV35mZ6jYzBiN', 'input': {'x': 3, 'y': 82.65606421491815}, 'name': 'add', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01ELwyCtVLeGC685PUFqmdz2', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 727, 'output_tokens': 87, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--d5af3d7c-e8b7-4cc2-997a-ad2dafd08751-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 82.65606421491815}, 'id': 'toolu_01NARC83e9obV35mZ6jYzBiN', 'type': 'tool_call'}], usage_metadata={'input_tokens': 727, 'output_tokens': 87, 'total_tokens': 814, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),\n",
|
||||
" ToolMessage(content='85.65606421491815', tool_call_id='toolu_01NARC83e9obV35mZ6jYzBiN'),\n",
|
||||
" AIMessage(content=[{'text': \"For the second part, you asked for 17.24 - 918.1241. I don't have a subtraction function available, but I can rewrite this as adding a negative number: 17.24 + (-918.1241)\", 'type': 'text'}, {'id': 'toolu_01Q6fLcZkBWZpMPCZ55WXR3N', 'input': {'x': 17.24, 'y': -918.1241}, 'name': 'add', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01WkmDwUxWjjaKGnTtdLGJnN', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 832, 'output_tokens': 130, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--39a6fbda-4c81-47a6-b361-524bd4ee5823-0', tool_calls=[{'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'toolu_01Q6fLcZkBWZpMPCZ55WXR3N', 'type': 'tool_call'}], usage_metadata={'input_tokens': 832, 'output_tokens': 130, 'total_tokens': 962, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),\n",
|
||||
" ToolMessage(content='-900.8841', tool_call_id='toolu_01Q6fLcZkBWZpMPCZ55WXR3N'),\n",
|
||||
" AIMessage(content='So, the answers are:\\n1) 3 plus 5 raised to the 2.743 = 85.65606421491815\\n2) 17.24 - 918.1241 = -900.8841', additional_kwargs={}, response_metadata={'id': 'msg_015Yoc62CvdJbANGFouiQ6AQ', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 978, 'output_tokens': 58, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--174c0882-6180-47ea-8f63-d7b747302327-0', usage_metadata={'input_tokens': 978, 'output_tokens': 58, 'total_tokens': 1036, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}})]}"
|
||||
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"),\n",
|
||||
" AIMessage(content=[{'text': \"Okay, let's break this down into two parts:\", 'type': 'text'}, {'id': 'toolu_01DEhqcXkXTtzJAiZ7uMBeDC', 'input': {'x': 3, 'y': 5}, 'name': 'add', 'type': 'tool_use'}], response_metadata={'id': 'msg_01AkLGH8sxMHaH15yewmjwkF', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 450, 'output_tokens': 81}}, id='run-f35bfae8-8ded-4f8a-831b-0940d6ad16b6-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 5}, 'id': 'toolu_01DEhqcXkXTtzJAiZ7uMBeDC'}]),\n",
|
||||
" ToolMessage(content='8.0', tool_call_id='toolu_01DEhqcXkXTtzJAiZ7uMBeDC'),\n",
|
||||
" AIMessage(content=[{'id': 'toolu_013DyMLrvnrto33peAKMGMr1', 'input': {'x': 8.0, 'y': 2.743}, 'name': 'exponentiate', 'type': 'tool_use'}], response_metadata={'id': 'msg_015Fmp8aztwYcce2JDAFfce3', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 545, 'output_tokens': 75}}, id='run-48aaeeeb-a1e5-48fd-a57a-6c3da2907b47-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 8.0, 'y': 2.743}, 'id': 'toolu_013DyMLrvnrto33peAKMGMr1'}]),\n",
|
||||
" ToolMessage(content='300.03770462067547', tool_call_id='toolu_013DyMLrvnrto33peAKMGMr1'),\n",
|
||||
" AIMessage(content=[{'text': 'So 3 plus 5 raised to the 2.743 power is 300.04.\\n\\nFor the second part:', 'type': 'text'}, {'id': 'toolu_01UTmMrGTmLpPrPCF1rShN46', 'input': {'x': 17.24, 'y': -918.1241}, 'name': 'add', 'type': 'tool_use'}], response_metadata={'id': 'msg_015TkhfRBENPib2RWAxkieH6', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 638, 'output_tokens': 105}}, id='run-45fb62e3-d102-4159-881d-241c5dbadeed-0', tool_calls=[{'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'toolu_01UTmMrGTmLpPrPCF1rShN46'}]),\n",
|
||||
" ToolMessage(content='-900.8841', tool_call_id='toolu_01UTmMrGTmLpPrPCF1rShN46'),\n",
|
||||
" AIMessage(content='Therefore, 17.24 - 918.1241 = -900.8841', response_metadata={'id': 'msg_01LgKnRuUcSyADCpxv9tPoYD', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 759, 'output_tokens': 24}}, id='run-1008254e-ccd1-497c-8312-9550dd77bd08-0')]}"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -178,7 +177,7 @@
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "langchain",
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
@@ -192,7 +191,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.16"
|
||||
"version": "3.10.4"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
153
docs/README.md
@@ -1,154 +1,3 @@
|
||||
# LangChain Documentation
|
||||
|
||||
For more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/how_to/documentation).
|
||||
|
||||
## Structure
|
||||
|
||||
The primary documentation is located in the `docs/` directory. This directory contains
|
||||
both the source files for the main documentation as well as the API reference doc
|
||||
build process.
|
||||
|
||||
### API Reference
|
||||
|
||||
API reference documentation is located in `docs/api_reference/` and is generated from
|
||||
the codebase using Sphinx.
|
||||
|
||||
The API reference have additional build steps that differ from the main documentation.
|
||||
|
||||
#### Deployment Process
|
||||
|
||||
Currently, the build process roughly follows these steps:
|
||||
|
||||
1. Using the `api_doc_build.yml` GitHub workflow, the API reference docs are
|
||||
[built](#build-technical-details) and copied to the `langchain-api-docs-html`
|
||||
repository. This workflow is triggered either (1) on a cron routine interval or (2)
|
||||
triggered manually.
|
||||
|
||||
In short, the workflow extracts all `langchain-ai`-org-owned repos defined in
|
||||
`langchain/libs/packages.yml`, clones them locally (in the workflow runner's file
|
||||
system), and then builds the API reference RST files (using `create_api_rst.py`).
|
||||
Following post-processing, the HTML files are pushed to the
|
||||
`langchain-api-docs-html` repository.
|
||||
2. After the HTML files are in the `langchain-api-docs-html` repository, they are **not**
|
||||
automatically published to the [live docs site](https://python.langchain.com/api_reference/).
|
||||
|
||||
The docs site is served by Vercel. The Vercel deployment process copies the HTML
|
||||
files from the `langchain-api-docs-html` repository and deploys them to the live
|
||||
site. Deployments are triggered on each new commit pushed to `v0.3`.
|
||||
|
||||
#### Build Technical Details
|
||||
|
||||
The build process creates a virtual monorepo by syncing multiple repositories, then generates comprehensive API documentation:
|
||||
|
||||
1. **Repository Sync Phase:**
|
||||
- `.github/scripts/prep_api_docs_build.py` - Clones external partner repos and organizes them into the `libs/partners/` structure to create a virtual monorepo for documentation building
|
||||
|
||||
2. **RST Generation Phase:**
|
||||
- `docs/api_reference/create_api_rst.py` - Main script that **generates RST files** from Python source code
|
||||
- Scans `libs/` directories and extracts classes/functions from each module (using `inspect`; see the sketch after this list)
|
||||
- Creates `.rst` files using specialized templates for different object types
|
||||
- Templates in `docs/api_reference/templates/` (`pydantic.rst`, `runnable_pydantic.rst`, etc.)
|
||||
|
||||
3. **HTML Build Phase:**
|
||||
- Sphinx-based, uses `sphinx.ext.autodoc` (auto-extracts docstrings from the codebase)
|
||||
- `docs/api_reference/conf.py` (sphinx config) configures `autodoc` and other extensions
|
||||
- `sphinx-build` processes the generated `.rst` files into HTML using autodoc
|
||||
- `docs/api_reference/scripts/custom_formatter.py` - Post-processes the generated HTML
|
||||
- Copies `reference.html` to `index.html` to create the default landing page (artifact? might not need to do this - just put everything in index directly?)
|
||||
|
||||
4. **Deployment:**
|
||||
- `.github/workflows/api_doc_build.yml` - Workflow responsible for orchestrating the entire build and deployment process
|
||||
- Built HTML files are committed and pushed to the `langchain-api-docs-html` repository
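As a rough sketch of the member-extraction step in phase 2 (illustrative only, not the actual `create_api_rst.py` implementation), the generator essentially imports each module and classifies its public members with `inspect`:

```python
import importlib
import inspect


def load_module_members(namespace: str) -> dict:
    """Collect public classes and functions from a module, similar in spirit
    to what create_api_rst.py does before rendering the RST templates."""
    module = importlib.import_module(namespace)
    classes, functions = [], []
    for name, obj in inspect.getmembers(module):
        if name.startswith("_"):  # private members are skipped
            continue
        if inspect.isclass(obj):
            classes.append(f"{namespace}.{name}")
        elif inspect.isfunction(obj):
            functions.append(f"{namespace}.{name}")
    return {"classes": classes, "functions": functions}


# Example (assumes langchain_core is installed):
# print(load_module_members("langchain_core.documents"))
```
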
|
||||
|
||||
#### Local Build
|
||||
|
||||
For local development and testing of API documentation, use the Makefile targets in the repository root:
|
||||
|
||||
```bash
|
||||
# Full build
|
||||
make api_docs_build
|
||||
```
|
||||
|
||||
Like the CI process, this target:
|
||||
|
||||
- Installs the CLI package in editable mode
|
||||
- Generates RST files for all packages using `create_api_rst.py`
|
||||
- Builds HTML documentation with Sphinx
|
||||
- Post-processes the HTML with `custom_formatter.py`
|
||||
- Opens the built documentation (`reference.html`) in your browser
|
||||
|
||||
**Quick Preview:**
|
||||
|
||||
```bash
|
||||
make api_docs_quick_preview API_PKG=openai
|
||||
```
|
||||
|
||||
- Generates RST files for only the specified package (default: `text-splitters`)
|
||||
- Builds and post-processes HTML documentation
|
||||
- Opens the preview in your browser
|
||||
|
||||
Both targets automatically clean previous builds and handle the complete build pipeline locally, mirroring the CI process but for faster iteration during development.
|
||||
|
||||
#### Documentation Standards
|
||||
|
||||
**Docstring Format:**
|
||||
The API reference uses **Google-style docstrings** with reStructuredText markup. Sphinx processes these through the `sphinx.ext.napoleon` extension to generate documentation.
|
||||
|
||||
**Required format:**
|
||||
|
||||
```python
|
||||
def example_function(param1: str, param2: int = 5) -> bool:
|
||||
"""Brief description of the function.
|
||||
|
||||
Longer description can go here. Use reStructuredText syntax for
|
||||
rich formatting like **bold** and *italic*.
|
||||
|
||||
TODO: code: figure out what works?
|
||||
|
||||
Args:
|
||||
param1: Description of the first parameter.
|
||||
param2: Description of the second parameter with default value.
|
||||
|
||||
Returns:
|
||||
Description of the return value.
|
||||
|
||||
Raises:
|
||||
ValueError: When param1 is empty.
|
||||
TypeError: When param2 is not an integer.
|
||||
|
||||
.. warning::
|
||||
This function is experimental and may change.
|
||||
"""
|
||||
```
|
||||
|
||||
**Special Markers:**

- `:private:` in docstrings excludes members from documentation (see the sketch below)
- `.. warning::` adds warning admonitions

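For illustration, a hypothetical docstring using these markers might look like the following (the helper itself is made up):

```python
def _resolve_cache_key(namespace: str, key: str) -> str:
    """Build the cache key for a namespaced value.

    :private:

    This helper is excluded from the generated API reference because of the
    ``:private:`` marker above.

    .. warning::
        Internal utility; the key format may change without notice.
    """
    return f"{namespace}:{key}"
```
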
#### Site Styling and Assets
|
||||
|
||||
**Theme and Styling:**
|
||||
|
||||
- Uses [**PyData Sphinx Theme**](https://pydata-sphinx-theme.readthedocs.io/en/stable/index.html) (`pydata_sphinx_theme`)
|
||||
- Custom CSS in `docs/api_reference/_static/css/custom.css` with LangChain-specific:
|
||||
- Color palette
|
||||
- Inter font family
|
||||
- Custom navbar height and sidebar formatting
|
||||
- Deprecated/beta feature styling
|
||||
|
||||
**Static Assets:**
|
||||
|
||||
- Logos: `_static/wordmark-api.svg` (light) and `_static/wordmark-api-dark.svg` (dark mode)
|
||||
- Favicon: `_static/img/brand/favicon.png`
|
||||
- Custom CSS: `_static/css/custom.css`
|
||||
|
||||
**Post-Processing:**
|
||||
|
||||
- `scripts/custom_formatter.py` cleans up generated HTML:
|
||||
- Shortens TOC entries from `ClassName.method()` to `method()`
|
||||
|
||||
**Analytics and Integration:**
|
||||
|
||||
- GitHub integration (source links, edit buttons)
|
||||
- Example backlinking through custom `ExampleLinksDirective`
|
||||
For more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/how_to/documentation)
|
||||
|
||||
@@ -50,7 +50,7 @@ class GalleryGridDirective(SphinxDirective):
|
||||
individual cards + ["image", "header", "content", "title"].
|
||||
|
||||
Danger:
|
||||
This directive can only be used in the context of a MyST documentation page as
|
||||
This directive can only be used in the context of a Myst documentation page as
|
||||
the templates use Markdown flavored formatting.
|
||||
"""
|
||||
|
||||
|
||||
@@ -1,5 +1,7 @@
|
||||
"""Configuration file for the Sphinx documentation builder."""
|
||||
|
||||
# Configuration file for the Sphinx documentation builder.
|
||||
#
|
||||
# This file only contains a selection of the most common options. For a full
|
||||
# list see the documentation:
|
||||
# https://www.sphinx-doc.org/en/master/usage/configuration.html
|
||||
@@ -18,18 +20,16 @@ from docutils.parsers.rst.directives.admonitions import BaseAdmonition
|
||||
from docutils.statemachine import StringList
|
||||
from sphinx.util.docutils import SphinxDirective
|
||||
|
||||
# Add paths to Python import system so Sphinx can import LangChain modules
|
||||
# This allows autodoc to introspect and document the actual code
|
||||
_DIR = Path(__file__).parent.absolute()
|
||||
sys.path.insert(0, os.path.abspath(".")) # Current directory
|
||||
sys.path.insert(0, os.path.abspath("../../libs/langchain")) # LangChain main package
|
||||
# If extensions (or modules to document with autodoc) are in another directory,
|
||||
# add these directories to sys.path here. If the directory is relative to the
|
||||
# documentation root, use os.path.abspath to make it absolute, like shown here.
|
||||
|
||||
_DIR = Path(__file__).parent.absolute()
|
||||
sys.path.insert(0, os.path.abspath("."))
|
||||
sys.path.insert(0, os.path.abspath("../../libs/langchain"))
|
||||
|
||||
# Load package metadata from pyproject.toml (for version info, etc.)
|
||||
with (_DIR.parents[1] / "libs" / "langchain" / "pyproject.toml").open("r") as f:
|
||||
data = toml.load(f)
|
||||
|
||||
# Load mapping of classes to example notebooks for backlinking
|
||||
# This file is generated by scripts that scan our tutorial/example notebooks
|
||||
with (_DIR / "guide_imports.json").open("r") as f:
|
||||
imported_classes = json.load(f)
|
||||
|
||||
@@ -86,7 +86,6 @@ class Beta(BaseAdmonition):
|
||||
|
||||
|
||||
def setup(app):
|
||||
"""Register custom directives and hooks with Sphinx."""
|
||||
app.add_directive("example_links", ExampleLinksDirective)
|
||||
app.add_directive("beta", Beta)
|
||||
app.connect("autodoc-skip-member", skip_private_members)
|
||||
@@ -98,7 +97,7 @@ def skip_private_members(app, what, name, obj, skip, options):
|
||||
if hasattr(obj, "__doc__") and obj.__doc__ and ":private:" in obj.__doc__:
|
||||
return True
|
||||
if name == "__init__" and obj.__objclass__ is object:
|
||||
# don't document default init
|
||||
# dont document default init
|
||||
return True
|
||||
return None
|
||||
|
||||
@@ -126,7 +125,7 @@ extensions = [
|
||||
"sphinx.ext.viewcode",
|
||||
"sphinxcontrib.autodoc_pydantic",
|
||||
"IPython.sphinxext.ipython_console_highlighting",
|
||||
"myst_parser", # For generated index.md and reference.md
|
||||
"myst_parser",
|
||||
"_extensions.gallery_directive",
|
||||
"sphinx_design",
|
||||
"sphinx_copybutton",
|
||||
@@ -259,7 +258,6 @@ html_static_path = ["_static"]
|
||||
html_css_files = ["css/custom.css"]
|
||||
html_use_index = False
|
||||
|
||||
# Only used on the generated index.md and reference.md files
|
||||
myst_enable_extensions = ["colon_fence"]
|
||||
|
||||
# generate autosummary even if no references
|
||||
@@ -270,11 +268,11 @@ autosummary_ignore_module_all = False
|
||||
html_copy_source = False
|
||||
html_show_sourcelink = False
|
||||
|
||||
googleanalytics_id = "G-9B66JQQH2F"
|
||||
|
||||
# Set canonical URL from the Read the Docs Domain
|
||||
html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "")
|
||||
|
||||
googleanalytics_id = "G-9B66JQQH2F"
|
||||
|
||||
# Tell Jinja2 templates the build is running on Read the Docs
|
||||
if os.environ.get("READTHEDOCS", "") == "True":
|
||||
html_context["READTHEDOCS"] = True
|
||||
|
||||
@@ -1,41 +1,4 @@
|
||||
"""Auto-generate API reference documentation (RST files) for LangChain packages.
|
||||
|
||||
* Automatically discovers all packages in `libs/` and `libs/partners/`
|
||||
* For each package, recursively walks the filesystem to:
|
||||
* Load Python modules using importlib
|
||||
* Extract classes and functions using Python's inspect module
|
||||
* Classify objects by type (Pydantic models, Runnables, TypedDicts, etc.)
|
||||
* Filter out private members (names starting with '_') and deprecated items
|
||||
* Creates structured RST files with:
|
||||
* Module-level documentation pages with autosummary tables
|
||||
* Different Sphinx templates based on object type (see templates/ directory)
|
||||
* Proper cross-references and navigation structure
|
||||
* Separation of current vs deprecated APIs
|
||||
* Generates a directory tree like:
|
||||
```
|
||||
docs/api_reference/
|
||||
├── index.md # Main landing page with package gallery
|
||||
├── reference.md # Package overview and navigation
|
||||
├── core/ # langchain-core documentation
|
||||
│ ├── index.rst
|
||||
│ ├── callbacks.rst
|
||||
│ └── ...
|
||||
├── langchain/ # langchain documentation
|
||||
│ ├── index.rst
|
||||
│ └── ...
|
||||
└── partners/ # Integration packages
|
||||
├── openai/
|
||||
├── anthropic/
|
||||
└── ...
|
||||
```
|
||||
|
||||
## Key Features
|
||||
|
||||
* Respects privacy markers:
|
||||
* Modules with `:private:` in docstring are excluded entirely
|
||||
* Objects with `:private:` in docstring are filtered out
|
||||
* Names starting with '_' are treated as private
|
||||
"""
|
||||
"""Script for auto-generating api_reference.rst."""
|
||||
|
||||
import importlib
|
||||
import inspect
|
||||
@@ -134,7 +97,7 @@ def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
|
||||
if type(type_) is typing_extensions._TypedDictMeta: # type: ignore
|
||||
kind: ClassKind = "TypedDict"
|
||||
elif type(type_) is typing._TypedDictMeta: # type: ignore
|
||||
kind = "TypedDict"
|
||||
kind: ClassKind = "TypedDict"
|
||||
elif (
|
||||
issubclass(type_, Runnable)
|
||||
and issubclass(type_, BaseModel)
|
||||
@@ -214,20 +177,19 @@ def _load_package_modules(
|
||||
Traversal based on the file system makes it easy to determine which
|
||||
of the modules/packages are part of the package vs. 3rd party or built-in.
|
||||
|
||||
Args:
|
||||
package_directory: Path to the package directory.
|
||||
submodule: Optional name of submodule to load.
|
||||
Parameters:
|
||||
package_directory (Union[str, Path]): Path to the package directory.
submodule (Optional[str]): Optional name of submodule to load.

Returns:
A dictionary where keys are module names and values are `ModuleMembers`
objects.
Dict[str, ModuleMembers]: A dictionary where keys are module names and values are ModuleMembers objects.
"""
package_path = (
Path(package_directory)
if isinstance(package_directory, str)
else package_directory
)
modules_by_namespace: Dict[str, ModuleMembers] = {}
modules_by_namespace = {}

# Get the high level package name
package_name = package_path.name
@@ -237,13 +199,12 @@ def _load_package_modules(
package_path = package_path / submodule

for file_path in package_path.rglob("*.py"):
# Skip private modules
if file_path.name.startswith("_"):
continue

# Skip integration_template and project_template directories (for libs/cli)
if "integration_template" in file_path.parts:
continue

if "project_template" in file_path.parts:
continue

@@ -254,13 +215,8 @@ def _load_package_modules(
continue

# Get the full namespace of the module
# Example: langchain_core/schema/output_parsers.py ->
# langchain_core.schema.output_parsers
namespace = str(relative_module_name).replace(".py", "").replace("/", ".")

# Keep only the top level namespace
# Example: langchain_core.schema.output_parsers ->
# langchain_core
top_namespace = namespace.split(".")[0]

try:
@@ -297,16 +253,16 @@ def _construct_doc(
members_by_namespace: Dict[str, ModuleMembers],
package_version: str,
) -> List[typing.Tuple[str, str]]:
"""Construct the contents of the `reference.rst` for the given package.
"""Construct the contents of the reference.rst file for the given package.

Args:
package_namespace: The package top level namespace
members_by_namespace: The members of the package dict organized by top level.
Module contains a list of classes and functions inside of the top level
namespace.
members_by_namespace: The members of the package, dict organized by top level
module contains a list of classes and functions
inside of the top level namespace.

Returns:
The string contents of the reference.rst file.
The contents of the reference.rst file.
"""
docs = []
index_doc = f"""\
@@ -327,7 +283,7 @@ def _construct_doc(
.. toctree::
:hidden:
:maxdepth: 2


"""
index_autosummary = """
"""
@@ -409,9 +365,9 @@ def _construct_doc(

module_doc += f"""\
:template: {template}


{class_["qualified_name"]}


"""
index_autosummary += f"""
{class_["qualified_name"]}
@@ -509,13 +465,10 @@ def _construct_doc(


def _build_rst_file(package_name: str = "langchain") -> None:
"""Create a rst file for a given package.
"""Create a rst file for building of documentation.

Args:
package_name: Name of the package to create the rst file for.

Returns:
The rst file is created in the same directory as this script.
package_name: Can be either "langchain" or "core" or "experimental".
"""
package_dir = _package_dir(package_name)
package_members = _load_package_modules(package_dir)
@@ -534,7 +487,7 @@ def _package_namespace(package_name: str) -> str:
"""Returns the package name used.

Args:
package_name: Can be either "langchain" or "core"
package_name: Can be either "langchain" or "core" or "experimental".

Returns:
modified package_name: Can be either "langchain" or "langchain_{package_name}"
@@ -547,10 +500,7 @@ def _package_namespace(package_name: str) -> str:


def _package_dir(package_name: str = "langchain") -> Path:
"""Return the path to the directory containing the documentation.

Attempts to find the package in `libs/` first, then `libs/partners/`.
"""
"""Return the path to the directory containing the documentation."""
if (ROOT_DIR / "libs" / package_name).exists():
return ROOT_DIR / "libs" / package_name / _package_namespace(package_name)
else:
@@ -564,7 +514,7 @@ def _package_dir(package_name: str = "langchain") -> Path:


def _get_package_version(package_dir: Path) -> str:
"""Return the version of the package by reading the `pyproject.toml`."""
"""Return the version of the package."""
try:
with open(package_dir.parent / "pyproject.toml", "r") as f:
pyproject = toml.load(f)
@@ -590,39 +540,22 @@ def _out_file_path(package_name: str) -> Path:


def _build_index(dirs: List[str]) -> None:
"""Build the index.md file for the API reference.

Args:
dirs: List of package directories to include in the index.

Returns:
The index.md file is created in the same directory as this script.
"""

custom_names = {
"aws": "AWS",
"ai21": "AI21",
"ibm": "IBM",
}
ordered = [
"core",
"langchain",
"text-splitters",
"community",
"standard-tests",
]
ordered = ["core", "langchain", "text-splitters", "community", "experimental"]
main_ = [dir_ for dir_ in ordered if dir_ in dirs]
integrations = sorted(dir_ for dir_ in dirs if dir_ not in main_)
doc = """# LangChain Python API Reference

Welcome to the LangChain v0.3 Python API reference. This is a reference for all
`langchain-x` packages.
Welcome to the LangChain Python API reference. This is a reference for all
`langchain-x` packages.
These pages refer to the v0.3 versions of LangChain packages and integrations. To
visit the documentation for the latest versions of LangChain, visit [https://docs.langchain.com](https://docs.langchain.com)
and [https://reference.langchain.com/python/](https://reference.langchain.com/python/) (for references).
For user guides see [https://python.langchain.com](https://python.langchain.com).

For the legacy API reference (<v0.3) hosted on ReadTheDocs see [https://api.python.langchain.com/](https://api.python.langchain.com/).
For the legacy API reference hosted on ReadTheDocs see [https://api.python.langchain.com/](https://api.python.langchain.com/).
"""

if main_:
@@ -708,14 +641,9 @@ See the full list of integrations in the Section Navigation.
{integration_tree}
```
"""
# Write the reference.md file
with open(HERE / "reference.md", "w") as f:
f.write(doc)

# Write a dummy index.md file that points to reference.md
# Sphinx requires an index file to exist in each doc directory
# TODO: investigate why we don't just put everything in index.md directly?
# if it works it works I guess
dummy_index = """\
# API reference

@@ -731,11 +659,8 @@ Reference<reference>


def main(dirs: Optional[list] = None) -> None:
"""Generate the `api_reference.rst` file for each package.

If dirs is None, generate for all packages in `libs/` and `libs/partners/`.
Otherwise generate only for the specified package(s).
"""
"""Generate the api_reference.rst file for each package."""
print("Starting to build API reference files.")
if not dirs:
dirs = [
p.parent.name
@@ -744,17 +669,18 @@ def main(dirs: Optional[list] = None) -> None:
if p.parent.parent.name in ("libs", "partners")
]
for dir_ in sorted(dirs):
# Skip any hidden directories prefixed with a dot
# Skip any hidden directories
# Some of these could be present by mistake in the code base
# (e.g., .pytest_cache from running tests from the wrong location)
# e.g., .pytest_cache from running tests from the wrong location.
if dir_.startswith("."):
print("Skipping dir:", dir_)
continue
else:
print("Building:", dir_)
print("Building package:", dir_)
_build_rst_file(package_name=dir_)

_build_index(sorted(dirs))
print("API reference files built.")


if __name__ == "__main__":
File diff suppressed because one or more lines are too long
@@ -1,12 +1,12 @@
autodoc_pydantic>=2,<3
sphinx>=8,<9
myst-parser>=3
sphinx-autobuild>=2024
pydata-sphinx-theme>=0.15
toml>=0.10.2
myst-nb>=1.1.1
pyyaml
sphinx-design
sphinx-copybutton
sphinxcontrib-googleanalytics
pydata-sphinx-theme>=0.15
myst-parser>=3
myst-nb>=1.1.1
toml>=0.10.2
pyyaml
beautifulsoup4
sphinxcontrib-googleanalytics

@@ -1,10 +1,3 @@
"""Post-process generated HTML files to clean up table-of-contents headers.

Runs after Sphinx generates the API reference HTML. It finds TOC entries like
"ClassName.method_name()" and shortens them to just "method_name()" for better
readability in the sidebar navigation.
"""

import sys
from glob import glob
from pathlib import Path
@@ -1,10 +1,10 @@
# arXiv

LangChain implements the latest research in the field of Natural Language Processing.
This page contains `arXiv` papers referenced in the LangChain Documentation, API Reference,
Templates, and Cookbooks.

From the opposite direction, scientists use `LangChain` in research and reference it in the research papers.
From the opposite direction, scientists use `LangChain` in research and reference it in the research papers.

`arXiv` papers with references to:
[LangChain](https://arxiv.org/search/?query=langchain&searchtype=all&source=header) | [LangGraph](https://arxiv.org/search/?query=langgraph&searchtype=all&source=header) | [LangSmith](https://arxiv.org/search/?query=langsmith&searchtype=all&source=header)
@@ -83,7 +83,7 @@ a set of open-domain QA datasets, covering multiple query complexities, and
show that ours enhances the overall efficiency and accuracy of QA systems,
compared to relevant baselines including the adaptive retrieval approaches.
Code is available at: https://github.com/starsuzi/Adaptive-RAG.

## Self-Discover: Large Language Models Self-Compose Reasoning Structures

- **Authors:** Pei Zhou, Jay Pujara, Xiang Ren, et al.
@@ -106,7 +106,7 @@ than 20%, while requiring 10-40x fewer inference compute. Finally, we show that
the self-discovered reasoning structures are universally applicable across
model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share
commonalities with human reasoning patterns.

## RAG-Fusion: a New Take on Retrieval-Augmented Generation

- **Authors:** Zackary Rackauckas
@@ -129,7 +129,7 @@ the generated queries' relevance to the original query is insufficient. This
research marks significant progress in artificial intelligence (AI) and natural
language processing (NLP) applications and demonstrates transformations in a
global and multi-industry context.

## RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval

- **Authors:** Parth Sarthi, Salman Abdullah, Aditi Tuli, et al.
@@ -152,7 +152,7 @@ tasks. On question-answering tasks that involve complex, multi-step reasoning,
we show state-of-the-art results; for example, by coupling RAPTOR retrieval
with the use of GPT-4, we can improve the best performance on the QuALITY
benchmark by 20% in absolute accuracy.

## Corrective Retrieval Augmented Generation

- **Authors:** Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al.
@@ -180,7 +180,7 @@ them. CRAG is plug-and-play and can be seamlessly coupled with various
RAG-based approaches. Experiments on four datasets covering short- and
long-form generation tasks show that CRAG can significantly improve the
performance of RAG-based approaches.

## Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering

- **Authors:** Tal Ridnik, Dedy Kredo, Itamar Friedman
@@ -206,7 +206,7 @@ to 44% with the AlphaCodium flow. Many of the principles and best practices
acquired in this work, we believe, are broadly applicable to general code
generation tasks. Full implementation is available at:
https://github.com/Codium-ai/AlphaCodium

## Mixtral of Experts

- **Authors:** Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al.
@@ -229,7 +229,7 @@ multilingual benchmarks. We also provide a model fine-tuned to follow
instructions, Mixtral 8x7B - Instruct, that surpasses GPT-3.5 Turbo,
Claude-2.1, Gemini Pro, and Llama 2 70B - chat model on human benchmarks. Both
the base and instruct models are released under the Apache 2.0 license.

## Dense X Retrieval: What Retrieval Granularity Should We Use?

- **Authors:** Tong Chen, Hongwei Wang, Sihao Chen, et al.
@@ -255,7 +255,7 @@ also enhances the performance of downstream QA tasks, since the retrieved texts
are more condensed with question-relevant information, reducing the need for
lengthy input tokens and minimizing the inclusion of extraneous, irrelevant
information.

## Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models

- **Authors:** Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al.
@@ -286,7 +286,7 @@ with CoN significantly outperform standard RALMs. Notably, CoN achieves an
average improvement of +7.9 in EM score given entirely noisy retrieved
documents and +10.5 in rejection rates for real-time questions that fall
outside the pre-training knowledge scope.

## Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection

- **Authors:** Akari Asai, Zeqiu Wu, Yizhong Wang, et al.
@@ -317,7 +317,7 @@ outperforms ChatGPT and retrieval-augmented Llama2-chat on Open-domain QA,
reasoning and fact verification tasks, and it shows significant gains in
improving factuality and citation accuracy for long-form generations relative
to these models.

## Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models

- **Authors:** Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al.
@@ -338,7 +338,7 @@ substantial performance gains on various challenging reasoning-intensive tasks
including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back
Prompting improves PaLM-2L performance on MMLU (Physics and Chemistry) by 7%
and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.

## Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation

- **Authors:** Xuefei Ning, Zinan Lin, Zixuan Zhou, et al.
@@ -359,7 +359,7 @@ potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
showcases the potential of eliciting high-quality answers by explicitly
planning the answer structure in language.

## Llama 2: Open Foundation and Fine-Tuned Chat Models

- **Authors:** Hugo Touvron, Louis Martin, Kevin Stone, et al.
@@ -377,7 +377,7 @@ safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs.

## Lost in the Middle: How Language Models Use Long Contexts

- **Authors:** Nelson F. Liu, Kevin Lin, John Hewitt, et al.
@@ -399,7 +399,7 @@ significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models.

## Query Rewriting for Retrieval-Augmented Large Language Models

- **Authors:** Xinbei Ma, Yeyun Gong, Pengcheng He, et al.
@@ -426,7 +426,7 @@ Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice
QA. Experiments results show consistent performance improvement, indicating
that our framework is proven effective and scalable, and brings a new framework
for retrieval-augmented LLM.

## Large Language Model Guided Tree-of-Thought

- **Authors:** Jieyi Long
@@ -452,7 +452,7 @@ the effectiveness of the proposed technique, we implemented a ToT-based solver
for the Sudoku Puzzle. Experimental results show that the ToT framework can
significantly increase the success rate of Sudoku puzzle solving. Our
implementation of the ToT-based Sudoku solver is available on [GitHub](https://github.com/jieyilong/tree-of-thought-puzzle-solver).

## Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models

- **Authors:** Lei Wang, Wanyu Xu, Yihuai Lan, et al.
@@ -482,7 +482,7 @@ by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought
Prompting, and has comparable performance with 8-shot CoT prompting on the math
reasoning problem. The code can be found at
https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.

## Zero-Shot Listwise Document Reranking with a Large Language Model

- **Authors:** Xueguang Ma, Xinyu Zhang, Ronak Pradeep, et al.
@@ -506,7 +506,7 @@ results, but can also act as a final-stage reranker to improve the top-ranked
results of a pointwise method for improved efficiency. Additionally, we apply
our approach to subsets of MIRACL, a recent multilingual retrieval dataset,
with results showing its potential to generalize across different languages.

## Visual Instruction Tuning

- **Authors:** Haotian Liu, Chunyuan Li, Qingyang Wu, et al.
@@ -530,7 +530,7 @@ instruction-following dataset. When fine-tuned on Science QA, the synergy of
LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make
GPT-4 generated visual instruction tuning data, our model and code base
publicly available.

## Generative Agents: Interactive Simulacra of Human Behavior

- **Authors:** Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al.
@@ -563,7 +563,7 @@ architecture--observation, planning, and reflection--each contribute critically
to the believability of agent behavior. By fusing large language models with
computational, interactive agents, this work introduces architectural and
interaction patterns for enabling believable simulations of human behavior.

## CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society

- **Authors:** Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al.
@@ -590,7 +590,7 @@ include introducing a novel communicative agent framework, offering a scalable
approach for studying the cooperative behaviors and capabilities of multi-agent
systems, and open-sourcing our library to support research on communicative
agents and beyond: https://github.com/camel-ai/camel.

## HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face

- **Authors:** Yongliang Shen, Kaitao Song, Xu Tan, et al.
@@ -619,7 +619,7 @@ HuggingGPT can tackle a wide range of sophisticated AI tasks spanning different
modalities and domains and achieve impressive results in language, vision,
speech, and other challenging tasks, which paves a new way towards the
realization of artificial general intelligence.

## A Watermark for Large Language Models

- **Authors:** John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al.
@@ -641,7 +641,7 @@ interpretable p-values, and derive an information-theoretic framework for
analyzing the sensitivity of the watermark. We test the watermark using a
multi-billion parameter model from the Open Pretrained Transformer (OPT)
family, and discuss robustness and security.

## Precise Zero-Shot Dense Retrieval without Relevance Labels

- **Authors:** Luyu Gao, Xueguang Ma, Jimmy Lin, et al.
@@ -670,7 +670,7 @@ details. Our experiments show that HyDE significantly outperforms the
state-of-the-art unsupervised dense retriever Contriever and shows strong
performance comparable to fine-tuned retrievers, across various tasks (e.g. web
search, QA, fact verification) and languages~(e.g. sw, ko, ja).

## Constitutional AI: Harmlessness from AI Feedback

- **Authors:** Yuntao Bai, Saurav Kadavath, Sandipan Kundu, et al.
@@ -697,7 +697,7 @@ and RL methods can leverage chain-of-thought style reasoning to improve the
human-judged performance and transparency of AI decision making. These methods
make it possible to control AI behavior more precisely and with far fewer human
labels.

## Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments

- **Authors:** Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al.
@@ -727,7 +727,7 @@ components and fallacy classes, indicating that fallacy identification is a
challenging task that may require specialized forms of reasoning to capture
various classes. We share our open-source code and data on GitHub to support
further work on logical fallacy identification.

## Complementary Explanations for Effective In-Context Learning

- **Authors:** Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al.
@@ -752,7 +752,7 @@ performance. Therefore, we propose a maximal marginal relevance-based exemplar
selection approach for constructing exemplar sets that are both relevant as
well as complementary, which successfully improves the in-context learning
performance across three real-world tasks on multiple LLMs.

## PAL: Program-aided Language Models

- **Authors:** Luyu Gao, Aman Madaan, Shuyan Zhou, et al.
@@ -784,7 +784,7 @@ larger models. For example, PAL using Codex achieves state-of-the-art few-shot
accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B
which uses chain-of-thought by absolute 15% top-1. Our code and data are
publicly available at http://reasonwithpal.com/ .

## An Analysis of Fusion Functions for Hybrid Retrieval

- **Authors:** Sebastian Bruch, Siyu Gai, Amir Ingber
@@ -803,7 +803,7 @@ learning of a CC fusion is generally agnostic to the choice of score
normalization; that CC outperforms RRF in in-domain and out-of-domain settings;
and finally, that CC is sample efficient, requiring only a small set of
training examples to tune its only parameter to a target domain.

## ReAct: Synergizing Reasoning and Acting in Language Models

- **Authors:** Shunyu Yao, Jeffrey Zhao, Dian Yu, et al.
@@ -835,7 +835,7 @@ benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and
reinforcement learning methods by an absolute success rate of 34% and 10%
respectively, while being prompted with only one or two in-context examples.
Project site with code: https://react-lm.github.io

## Deep Lake: a Lakehouse for Deep Learning

- **Authors:** Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al.
@@ -860,7 +860,7 @@ streams the data over the network to (a) Tensor Query Language, (b) in-browser
visualization engine, or (c) deep learning frameworks without sacrificing GPU
utilization. Datasets stored in Deep Lake can be accessed from PyTorch,
TensorFlow, JAX, and integrate with numerous MLOps tools.

## Matryoshka Representation Learning

- **Authors:** Aditya Kusupati, Gantavya Bhatt, Aniket Rege, et al.
@@ -891,7 +891,7 @@ representations. Finally, we show that MRL extends seamlessly to web-scale
datasets (ImageNet, JFT) across various modalities -- vision (ViT, ResNet),
vision + language (ALIGN) and language (BERT). MRL code and pretrained models
are open-sourced at https://github.com/RAIVNLab/MRL.

## Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages

- **Authors:** Kevin Heffernan, Onur Çelebi, Holger Schwenk
@@ -917,7 +917,7 @@ which is valuable in the low-resource setting.
very low-resource languages and handle 50 African languages, many of which are
not covered by any other model. For these languages, we train sentence
encoders, mine bitexts, and validate the bitexts by training NMT systems.

## Evaluating the Text-to-SQL Capabilities of Large Language Models

- **Authors:** Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau
@@ -934,7 +934,7 @@ this setting. Furthermore, we demonstrate on the GeoQuery and Scholar
benchmarks that a small number of in-domain examples provided in the prompt
enables Codex to perform better than state-of-the-art models finetuned on such
few-shot examples.

## Locally Typical Sampling

- **Authors:** Clara Meister, Tiago Pimentel, Gian Wiher, et al.
@@ -963,7 +963,7 @@ human evaluations show that, in comparison to nucleus and top-k sampling,
locally typical sampling offers competitive performance (in both abstractive
summarization and story generation) in terms of quality while consistently
reducing degenerate repetitions.

## ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction

- **Authors:** Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, et al.
@@ -985,7 +985,7 @@ improve the quality and space footprint of late interaction. We evaluate
ColBERTv2 across a wide range of benchmarks, establishing state-of-the-art
quality within and outside the training domain while reducing the space
footprint of late interaction models by 6--10$\times$.

## Learning Transferable Visual Models From Natural Language Supervision

- **Authors:** Alec Radford, Jong Wook Kim, Chris Hallacy, et al.
@@ -1014,7 +1014,7 @@ For instance, we match the accuracy of the original ResNet-50 on ImageNet
zero-shot without needing to use any of the 1.28 million training examples it
was trained on. We release our code and pre-trained model weights at
https://github.com/OpenAI/CLIP.

## Language Models are Few-Shot Learners

- **Authors:** Tom B. Brown, Benjamin Mann, Nick Ryder, et al.
@@ -1047,7 +1047,7 @@ training on large web corpora. Finally, we find that GPT-3 can generate samples
of news articles which human evaluators have difficulty distinguishing from
articles written by humans. We discuss broader societal impacts of this finding
and of GPT-3 in general.

## Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

- **Authors:** Patrick Lewis, Ethan Perez, Aleksandra Piktus, et al.
@@ -1078,7 +1078,7 @@ parametric seq2seq models and task-specific retrieve-and-extract architectures.
For language generation tasks, we find that RAG models generate more specific,
diverse and factual language than a state-of-the-art parametric-only seq2seq
baseline.

## CTRL: A Conditional Transformer Language Model for Controllable Generation

- **Authors:** Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al.
@@ -1098,3 +1098,4 @@ codes also allow CTRL to predict which parts of the training data are most
likely given a sequence. This provides a potential method for analyzing large
amounts of data via model-based source attribution. We have released multiple
full-sized, pretrained versions of CTRL at https://github.com/salesforce/ctrl.
@@ -7,4 +7,4 @@
- `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
- `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
- `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
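
As a quick illustration of the replacement methods, here is a minimal migration sketch; it assumes `langchain-openai`'s `ChatOpenAI` purely as an example model, and any chat model follows the same pattern:

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o", temperature=0)  # example model, assumed for illustration

# Deprecated style (removed in 0.2.0):
#   model.predict("Hello!")
#   model("Hello!")

# Preferred style:
response = model.invoke("Hello!")

# Async counterpart of the deprecated `apredict`:
#   response = await model.ainvoke("Hello!")
```
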
@@ -90,4 +90,4 @@ Deprecated classes and methods will be removed in 0.2.0
| OpenAIMultiFunctionsAgent | create_openai_tools_agent | Use LCEL builder over a class |
| SelfAskWithSearchAgent | create_self_ask_with_search | Use LCEL builder over a class |
| StructuredChatAgent | create_structured_chat_agent | Use LCEL builder over a class |
| XMLAgent | create_xml_agent | Use LCEL builder over a class |
| XMLAgent | create_xml_agent | Use LCEL builder over a class |

@@ -11,8 +11,8 @@ Please see the following resources for more information:

## Legacy agent concept: AgentExecutor

LangChain previously introduced the `AgentExecutor` as a runtime for agents.
While it served as an excellent starting point, its limitations became apparent when dealing with more sophisticated and customized agents.
LangChain previously introduced the `AgentExecutor` as a runtime for agents.
While it served as an excellent starting point, its limitations became apparent when dealing with more sophisticated and customized agents.
As a result, we're gradually phasing out `AgentExecutor` in favor of more flexible solutions in LangGraph.

### Transitioning from AgentExecutor to LangGraph
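
A minimal sketch of the LangGraph-based replacement, assuming `langgraph`'s prebuilt ReAct agent and a hypothetical `get_word_length` tool defined only for illustration:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)


model = ChatOpenAI(model="gpt-4o", temperature=0)

# The prebuilt ReAct agent plays the role that AgentExecutor used to play.
agent = create_react_agent(model, [get_word_length])

result = agent.invoke(
    {"messages": [("user", "How many letters are in the word 'LangChain'?")]}
)
```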

@@ -1,4 +1,4 @@
# Async programming with LangChain
# Async programming with langchain

:::info Prerequisites
* [Runnable interface](/docs/concepts/runnables)
@@ -12,7 +12,7 @@ You are expected to be familiar with asynchronous programming in Python before r
This guide specifically focuses on what you need to know to work with LangChain in an asynchronous context, assuming that you are already familiar with asynchronous programming.
:::

## LangChain asynchronous APIs
## Langchain asynchronous APIs

Many LangChain APIs are designed to be asynchronous, allowing you to build efficient and responsive applications.
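
For example, most runnables expose `ainvoke` (plus `astream` and `abatch`) alongside their synchronous counterparts. A minimal sketch, assuming `langchain-openai`'s `ChatOpenAI` as the example model:

```python
import asyncio

from langchain_openai import ChatOpenAI


async def main() -> None:
    model = ChatOpenAI(model="gpt-4o", temperature=0)
    # ainvoke is the asynchronous counterpart of invoke.
    response = await model.ainvoke("Tell me a joke about parrots.")
    print(response.content)


asyncio.run(main())
```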

@@ -70,4 +70,4 @@ This is a common reason why you may fail to see events being emitted from custom
runnables or tools.
:::

For specifics on how to use callbacks, see the [relevant how-to guides here](/docs/how_to/#callbacks).
For specifics on how to use callbacks, see the [relevant how-to guides here](/docs/how_to/#callbacks).

@@ -26,7 +26,7 @@ A full conversation often involves a combination of two patterns of alternating

Since chat models have a maximum limit on input size, it's important to manage chat history and trim it as needed to avoid exceeding the [context window](/docs/concepts/chat_models/#context-window).

While processing chat history, it's essential to preserve a correct conversation structure.
While processing chat history, it's essential to preserve a correct conversation structure.
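
A minimal trimming sketch using the `trim_messages` utility from `langchain_core`; the message contents and the message-counting `token_counter=len` are illustrative assumptions:

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)

history = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Hi, I'm Lance."),
    AIMessage(content="Hello Lance! How can I help you today?"),
    HumanMessage(content="What's the capital of France?"),
]

# Keep only the most recent messages that fit the budget, always retaining
# the system message and starting the kept window on a human turn.
trimmed = trim_messages(
    history,
    strategy="last",
    token_counter=len,  # counts messages rather than tokens in this sketch
    max_tokens=2,
    include_system=True,
    start_on="human",
)
```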

Key guidelines for managing chat history:

@@ -127,7 +127,7 @@ If the input exceeds the context window, the model may not be able to process th
The size of the input is measured in [tokens](/docs/concepts/tokens) which are the unit of processing that the model uses.

## Advanced topics

### Rate-limiting

Many chat model providers impose a limit on the number of requests that can be made in a given time period.

@@ -15,9 +15,9 @@ Embedding models can also be [multimodal](/docs/concepts/multimodality) though s

Imagine being able to capture the essence of any text - a tweet, document, or book - in a single, compact representation.
This is the power of embedding models, which lie at the heart of many retrieval systems.
Embedding models transform human language into a format that machines can understand and compare with speed and accuracy.
Embedding models transform human language into a format that machines can understand and compare with speed and accuracy.
These models take text as input and produce a fixed-length array of numbers, a numerical fingerprint of the text's semantic meaning.
Embeddings allow search systems to find relevant documents not just based on keyword matches, but on semantic understanding.
Embeddings allow search systems to find relevant documents not just based on keyword matches, but on semantic understanding.

## Key concepts

@@ -27,16 +27,16 @@ Embeddings allow search system to find relevant documents not just based on keyw

(2) **Measure similarity**: Embedding vectors can be compared using simple mathematical operations.

## Embedding
## Embedding

### Historical context
### Historical context

The landscape of embedding models has evolved significantly over the years.
A pivotal moment came in 2018 when Google introduced [BERT (Bidirectional Encoder Representations from Transformers)](https://www.nvidia.com/en-us/glossary/bert/).
The landscape of embedding models has evolved significantly over the years.
A pivotal moment came in 2018 when Google introduced [BERT (Bidirectional Encoder Representations from Transformers)](https://www.nvidia.com/en-us/glossary/bert/).
BERT applied transformer models to embed text as a simple vector representation, which led to unprecedented performance across various NLP tasks.
However, BERT wasn't optimized for generating sentence embeddings efficiently.
However, BERT wasn't optimized for generating sentence embeddings efficiently.
This limitation spurred the creation of [SBERT (Sentence-BERT)](https://www.sbert.net/examples/training/sts/README.html), which adapted the BERT architecture to generate semantically rich sentence embeddings, easily comparable via similarity metrics like cosine similarity, dramatically reducing the computational overhead for tasks like finding similar sentences.
Today, the embedding model ecosystem is diverse, with numerous providers offering their own implementations.
Today, the embedding model ecosystem is diverse, with numerous providers offering their own implementations.
To navigate this variety, researchers and practitioners often turn to benchmarks like the Massive Text Embedding Benchmark (MTEB) [here](https://huggingface.co/blog/mteb) for objective comparisons.

:::info[Further reading]
@@ -93,9 +93,9 @@ LangChain offers many embedding model integrations which you can find [on the em
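
As a brief illustration of the embedding interface, a hedged sketch using the OpenAI embeddings integration (the model name is an assumption):

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")  # assumed model name

# Each call returns a fixed-length list of floats representing the text.
query_result = embeddings.embed_query("What is the meaning of life?")
document_result = embeddings.embed_query("The meaning of life is 42.")
print(len(query_result))
```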

## Measure similarity

Each embedding is essentially a set of coordinates, often in a high-dimensional space.
Each embedding is essentially a set of coordinates, often in a high-dimensional space.
In this space, the position of each point (embedding) reflects the meaning of its corresponding text.
Just as similar words might be close to each other in a thesaurus, similar concepts end up close to each other in this embedding space.
Just as similar words might be close to each other in a thesaurus, similar concepts end up close to each other in this embedding space.
This allows for intuitive comparisons between different pieces of text.
By reducing text to these numerical representations, we can use simple mathematical operations to quickly measure how alike two pieces of text are, regardless of their original length or structure.
Some common similarity metrics include:
@@ -118,7 +118,7 @@ def cosine_similarity(vec1, vec2):

similarity = cosine_similarity(query_result, document_result)
print("Cosine Similarity:", similarity)
```
```
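
Since only the tail of the example appears above, here is a self-contained sketch of the same cosine-similarity helper; the NumPy implementation is an assumption rather than the exact code used in the guide:

```python
import numpy as np


def cosine_similarity(vec1, vec2):
    """Cosine similarity between two equal-length embedding vectors."""
    vec1, vec2 = np.array(vec1), np.array(vec2)
    return float(np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)))
```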

:::info[Further reading]

@@ -127,4 +127,4 @@ print("Cosine Similarity:", similarity)
* See Pinecone's [blog post](https://www.pinecone.io/learn/vector-similarity/) on similarity metrics.
* See OpenAI's [FAQ](https://platform.openai.com/docs/guides/embeddings/faq) on what similarity metric to use with OpenAI embeddings.

:::
:::

@@ -14,3 +14,4 @@ This process is vital for building reliable applications.
- It allows you to track results over time and automatically run your evaluators on a schedule or as part of CI/Code

To learn more, check out [this LangSmith guide](https://docs.smith.langchain.com/concepts/evaluation).

@@ -17,4 +17,4 @@ Sometimes these examples are hardcoded into the prompt, but for more advanced si

## Related resources

* [Example selector how-to guides](/docs/how_to/#example-selectors)
* [Example selector how-to guides](/docs/how_to/#example-selectors)
@@ -31,7 +31,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[Vector stores](/docs/concepts/vectorstores)**: Storage of and efficient search over vectors and associated metadata.
- **[Retriever](/docs/concepts/retrievers)**: A component that returns relevant documents from a knowledge base in response to a query.
- **[Retrieval Augmented Generation (RAG)](/docs/concepts/rag)**: A technique that enhances language models by combining them with external knowledge bases.
- **[Agents](/docs/concepts/agents)**: Use a [language model](/docs/concepts/chat_models) to choose a sequence of actions to take. Agents can interact with external resources via [tools](/docs/concepts/tools).
- **[Agents](/docs/concepts/agents)**: Use a [language model](/docs/concepts/chat_models) to choose a sequence of actions to take. Agents can interact with external resources via [tool](/docs/concepts/tools).
- **[Prompt templates](/docs/concepts/prompt_templates)**: Component for factoring out the static parts of a model "prompt" (usually a sequence of messages). Useful for serializing, versioning, and reusing these static parts.
- **[Output parsers](/docs/concepts/output_parsers)**: Responsible for taking the output of a model and transforming it into a more suitable format for downstream tasks. Output parsers were primarily useful prior to the general availability of [tool calling](/docs/concepts/tool_calling) and [structured outputs](/docs/concepts/structured_outputs).
- **[Few-shot prompting](/docs/concepts/few_shot_prompting)**: A technique for improving model performance by providing a few examples of the task to perform in the prompt.
@@ -48,7 +48,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[AIMessage](/docs/concepts/messages#aimessage)**: Represents a complete response from an AI model.
- **[astream_events](/docs/concepts/chat_models#key-methods)**: Stream granular information from [LCEL](/docs/concepts/lcel) chains.
- **[BaseTool](/docs/concepts/tools/#tool-interface)**: The base class for all tools in LangChain.
- **[batch](/docs/concepts/runnables)**: Used to execute a runnable with batch inputs.
- **[batch](/docs/concepts/runnables)**: Use to execute a runnable with batch inputs.
- **[bind_tools](/docs/concepts/tool_calling/#tool-binding)**: Allows models to interact with tools.
- **[Caching](/docs/concepts/chat_models#caching)**: Storing results to avoid redundant calls to a chat model.
- **[Chat models](/docs/concepts/multimodality/#multimodality-in-chat-models)**: Chat models that handle multiple data modalities.

@@ -147,7 +147,7 @@ An `AIMessage` has the following attributes. The attributes which are **standard
| `tool_calls` | Standardized | Tool calls associated with the message. See [tool calling](/docs/concepts/tool_calling) for details. |
| `invalid_tool_calls` | Standardized | Tool calls with parsing errors associated with the message. See [tool calling](/docs/concepts/tool_calling) for details. |
| `usage_metadata` | Standardized | Usage metadata for a message, such as [token counts](/docs/concepts/tokens). See [Usage Metadata API Reference](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.UsageMetadata.html). |
| `id` | Standardized | An optional unique identifier for the message, ideally provided by the provider/model that created the message. See [Message IDs](#message-ids) for details. |
| `id` | Standardized | An optional unique identifier for the message, ideally provided by the provider/model that created the message. |
| `response_metadata` | Raw | Response metadata, e.g., response headers, logprobs, token counts. |

#### content
@@ -243,37 +243,3 @@ At the moment, the output of the model will be in terms of LangChain messages, s
need OpenAI format for the output as well.

The [convert_to_openai_messages](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.convert_to_openai_messages.html) utility function can be used to convert from LangChain messages to OpenAI format.
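
A minimal sketch of that conversion (message contents are illustrative):

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_core.messages.utils import convert_to_openai_messages

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?"),
    AIMessage(content="The capital of France is Paris."),
]

# Produces a list of dicts such as {"role": "system", "content": "..."}.
openai_messages = convert_to_openai_messages(messages)
```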

## Message IDs

LangChain messages include an optional `id` field that serves as a unique identifier. Understanding when and how these IDs are assigned can be helpful for debugging, tracing, and working with message history.

### When Messages Get IDs

Messages receive IDs in the following scenarios:

**Automatically assigned by LangChain:**
- When generated through chat model invocation (`.invoke()`, `.stream()`, `.astream()`) with an active run manager/tracing context
- IDs follow the format:
  - `run-$RUN_ID` (e.g., `run-ba48f958-6402-41a5-b461-5e250a4ebd36-0`)
  - `run-$RUN_ID-$IDX` (e.g., `run-ba48f958-6402-41a5-b461-5e250a4ebd36-1`) when there are multiple generations from a single chat model invocation.

**Provider-assigned IDs (highest priority):**
- When the model provider assigns its own ID to the message
- These take precedence over LangChain-generated run IDs
- Format varies by provider

### When Messages Don't Get IDs

Messages will **not** receive IDs in these situations:

- **Manual message creation**: Messages created directly (e.g., `AIMessage(content="hello")`) without going through chat models
- **No run manager context**: When there's no active callback/tracing infrastructure

### ID Priority System

LangChain follows a clear precedence system for message IDs, as illustrated in the sketch after this list:

1. **Provider-assigned IDs** (highest priority): IDs from the model provider
2. **LangChain run IDs** (medium priority): IDs starting with `run-`
3. **Manual IDs** (lowest priority): IDs explicitly set by users
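
A small sketch contrasting manually created messages with model-generated ones; the model class is an assumption, and the exact ID format depends on the provider and tracing context:

```python
from langchain_core.messages import AIMessage
from langchain_openai import ChatOpenAI

# Manually created message: no ID unless you set one yourself.
manual = AIMessage(content="hello")
print(manual.id)  # None

# Model-generated message: carries either a provider-assigned ID or a
# LangChain `run-...` identifier, depending on the invocation context.
model = ChatOpenAI(model="gpt-4o")
response = model.invoke("hi")
print(response.id)
```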

@@ -14,7 +14,7 @@
* [Chat models](/docs/concepts/chat_models)
* [Messages](/docs/concepts/messages)
:::

LangChain supports multimodal data as input to chat models:

1. Following provider-specific formats

@@ -53,29 +53,17 @@ This is how you use MessagesPlaceholder.

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage
from langchain_core.messages import HumanMessage

prompt_template = ChatPromptTemplate([
    ("system", "You are a helpful assistant"),
    MessagesPlaceholder("msgs")
])

# Simple example with one message
prompt_template.invoke({"msgs": [HumanMessage(content="hi!")]})

# More complex example with conversation history
messages_to_pass = [
    HumanMessage(content="What's the capital of France?"),
    AIMessage(content="The capital of France is Paris."),
    HumanMessage(content="And what about Germany?")
]

formatted_prompt = prompt_template.invoke({"msgs": messages_to_pass})
print(formatted_prompt)
```

This will produce a list of four messages total: the system message plus the three messages we passed in (two HumanMessages and one AIMessage).
This will produce a list of two messages, the first one being a system message, and the second one being the HumanMessage we passed in.
If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in).
This is useful for letting a list of messages be slotted into a particular spot.

@@ -8,7 +8,7 @@

## Overview

Retrieval Augmented Generation (RAG) is a powerful technique that enhances [language models](/docs/concepts/chat_models/) by combining them with external knowledge bases.
Retrieval Augmented Generation (RAG) is a powerful technique that enhances [language models](/docs/concepts/chat_models/) by combining them with external knowledge bases.
RAG addresses [a key limitation of models](https://www.glean.com/blog/how-to-build-an-ai-assistant-for-the-enterprise): models rely on fixed training datasets, which can lead to outdated or incomplete information.
When given a query, RAG systems first search a knowledge base for relevant information.
The system then incorporates this retrieved information into the model's prompt.
@@ -44,7 +44,7 @@ See our conceptual guide on [retrieval](/docs/concepts/retrieval/).

## Adding external knowledge

With a retrieval system in place, we need to pass knowledge from this system to the model.
With a retrieval system in place, we need to pass knowledge from this system to the model.
A RAG pipeline typically achieves this following these steps:

- Receive an input query.
@@ -59,12 +59,12 @@ from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

# Define a system prompt that tells the model how to use the retrieved context
system_prompt = """You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question.
If you don't know the answer, just say that you don't know.
system_prompt = """You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question.
If you don't know the answer, just say that you don't know.
Use three sentences maximum and keep the answer concise.
Context: {context}:"""


# Define a question
question = """What are the main components of an LLM-powered autonomous agent system?"""

@@ -78,7 +78,7 @@ docs_text = "".join(d.page_content for d in docs)
system_prompt_fmt = system_prompt.format(context=docs_text)

# Create a model
model = ChatOpenAI(model="gpt-4o", temperature=0)
model = ChatOpenAI(model="gpt-4o", temperature=0)

# Generate a response
questions = model.invoke([SystemMessage(content=system_prompt_fmt),
@@ -10,28 +10,28 @@
:::

:::danger[Security]

Some of the concepts reviewed here utilize models to generate queries (e.g., for SQL or graph databases).
There are inherent risks in doing this.
Make sure that your database connection permissions are scoped as narrowly as possible for your application's needs.
This will mitigate, though not eliminate, the risks of building a model-driven system capable of querying databases.
There are inherent risks in doing this.
Make sure that your database connection permissions are scoped as narrowly as possible for your application's needs.
This will mitigate, though not eliminate, the risks of building a model-driven system capable of querying databases.
For more on general security best practices, see our [security guide](/docs/security/).

:::

## Overview
## Overview

Retrieval systems are fundamental to many AI applications, efficiently identifying relevant information from large datasets.
Retrieval systems are fundamental to many AI applications, efficiently identifying relevant information from large datasets.
These systems accommodate various data formats:

- Unstructured text (e.g., documents) is often stored in vector stores or lexical search indexes.
- Structured data is typically housed in relational or graph databases with defined schemas.

Despite the growing diversity in data formats, modern AI applications increasingly aim to make all types of data accessible through natural language interfaces.
Models play a crucial role in this process by translating natural language queries into formats compatible with the underlying search index or database.
Despite the growing diversity in data formats, modern AI applications increasingly aim to make all types of data accessible through natural language interfaces.
Models play a crucial role in this process by translating natural language queries into formats compatible with the underlying search index or database.
This translation enables more intuitive and flexible interactions with complex data structures.

## Key concepts
## Key concepts



@@ -39,20 +39,20 @@ This translation enables more intuitive and flexible interactions with complex d

(2) **Information retrieval**: Search queries are used to fetch information from various retrieval systems.

## Query analysis
## Query analysis

While users typically prefer to interact with retrieval systems using natural language, these systems may require specific query syntax or benefit from certain keywords.
While users typically prefer to interact with retrieval systems using natural language, these systems may require specific query syntax or benefit from certain keywords.
Query analysis serves as a bridge between raw user input and optimized search queries. Some common applications of query analysis include:

1. **Query Re-writing**: Queries can be re-written or expanded to improve semantic or lexical searches.
2. **Query Construction**: Search indexes may require structured queries (e.g., SQL for databases).

Query analysis employs models to transform or construct optimized search queries from raw user input.
Query analysis employs models to transform or construct optimized search queries from raw user input.

### Query re-writing

Retrieval systems should ideally handle a wide spectrum of user inputs, from simple and poorly worded queries to complex, multi-faceted questions.
To achieve this versatility, a popular approach is to use models to transform raw user queries into more effective search queries.
Retrieval systems should ideally handle a wide spectrum of user inputs, from simple and poorly worded queries to complex, multi-faceted questions.
To achieve this versatility, a popular approach is to use models to transform raw user queries into more effective search queries.
This transformation can range from simple keyword extraction to sophisticated query expansion and reformulation.
Here are some key benefits of using models for query analysis in unstructured data retrieval:

@@ -87,7 +87,7 @@ class Questions(BaseModel):
)

# Create an instance of the model and enforce the output structure
model = ChatOpenAI(model="gpt-4o", temperature=0)
model = ChatOpenAI(model="gpt-4o", temperature=0)
structured_model = model.with_structured_output(Questions)

# Define the system prompt
@@ -111,7 +111,7 @@ See our RAG from Scratch videos for a few different specific approaches:

### Query construction

Query analysis also can focus on translating natural language queries into specialized query languages or filters.
Query analysis also can focus on translating natural language queries into specialized query languages or filters.
This translation is crucial for effectively interacting with various types of databases that house structured or semi-structured data.

1. **Structured Data examples**: For relational and graph databases, Domain-Specific Languages (DSLs) are used to query data.
@@ -129,10 +129,10 @@ These approaches leverage models to bridge the gap between user intent and the s
| [Text to SQL](/docs/tutorials/sql_qa/) | If users are asking questions that require information housed in a relational database, accessible via SQL. | This uses an LLM to transform user input into a SQL query. |
| [Text-to-Cypher](/docs/tutorials/graph/) | If users are asking questions that require information housed in a graph database, accessible via Cypher. | This uses an LLM to transform user input into a Cypher query. |

As an example, here is how to use the `SelfQueryRetriever` to convert natural language queries into metadata filters.
As an example, here is how to use the `SelfQueryRetriever` to convert natural language queries into metadata filters.

```python
metadata_field_info = schema_for_metadata
metadata_field_info = schema_for_metadata
document_content_description = "Brief summary of a movie"
llm = ChatOpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
@@ -149,20 +149,20 @@ retriever = SelfQueryRetriever.from_llm(
* See our [blog post overview](https://blog.langchain.dev/query-construction/).
* See our RAG from Scratch video on [query construction](https://youtu.be/kl6NwWYxvbM?feature=shared).

:::
:::

## Information retrieval
## Information retrieval

### Common retrieval systems

#### Lexical search indexes

Many search engines are based upon matching words in a query to the words in each document.
Many search engines are based upon matching words in a query to the words in each document.
This approach is called lexical retrieval, using search [algorithms that are typically based upon word frequencies](https://cameronrwolfe.substack.com/p/the-basics-of-ai-powered-vector-search?utm_source=profile&utm_medium=reader2).
The intuition is simple: if a word appears frequently both in the user's query and in a particular document, then this document might be a good match.

The particular data structure used to implement this is often an [*inverted index*](https://www.geeksforgeeks.org/inverted-index/).
This type of index contains a list of words and a mapping of each word to a list of locations at which it occurs in various documents.
This type of index contains a list of words and a mapping of each word to a list of locations at which it occurs in various documents.
Using this data structure, it is possible to efficiently match the words in search queries to the documents in which they appear.
[BM25](https://en.wikipedia.org/wiki/Okapi_BM25#:~:text=BM25%20is%20a%20bag%2Dof,slightly%20different%20components%20and%20parameters.) and [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) are [two popular lexical search algorithms](https://cameronrwolfe.substack.com/p/the-basics-of-ai-powered-vector-search?utm_source=profile&utm_medium=reader2).
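
A toy sketch of an inverted index with word-overlap scoring (deliberately simpler than BM25 or TF-IDF, which additionally weight by word frequency and document length):

```python
from collections import defaultdict

documents = {
    "doc1": "the quick brown fox",
    "doc2": "the lazy dog",
    "doc3": "the quick dog jumps",
}

# Build the inverted index: word -> set of document IDs containing it.
inverted_index: dict[str, set[str]] = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        inverted_index[word].add(doc_id)

# Lexical retrieval: rank documents by how many query words they contain.
query = "quick dog"
scores: dict[str, int] = defaultdict(int)
for word in query.lower().split():
    for doc_id in inverted_index.get(word, set()):
        scores[doc_id] += 1

print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```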

@@ -171,13 +171,13 @@ Using this data structure, it is possible to efficiently match the words in sear
* See the [BM25](/docs/integrations/retrievers/bm25/) retriever integration.
* See the [Elasticsearch](/docs/integrations/retrievers/elasticsearch_retriever/) retriever integration.

:::
:::

#### Vector indexes

Vector indexes are an alternative way to index and store unstructured data.
See our conceptual guide on [vectorstores](/docs/concepts/vectorstores/) for a detailed overview.
In short, rather than using word frequencies, vectorstores use an [embedding model](/docs/concepts/embedding_models/) to compress documents into high-dimensional vector representation.
See our conceptual guide on [vectorstores](/docs/concepts/vectorstores/) for a detailed overview.
In short, rather than using word frequencies, vectorstores use an [embedding model](/docs/concepts/embedding_models/) to compress documents into high-dimensional vector representation.
This allows for efficient similarity search over embedding vectors using simple mathematical operations like cosine similarity.
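
A hedged sketch using the in-memory vector store from `langchain-core` together with an OpenAI embedding model (both the store choice and the model name are assumptions for illustration):

```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

vector_store = InMemoryVectorStore.from_texts(
    [
        "LangChain provides abstractions for working with LLMs.",
        "Paris is the capital of France.",
    ],
    embedding=OpenAIEmbeddings(model="text-embedding-3-small"),
)

# The query is embedded and compared against the stored vectors.
docs = vector_store.similarity_search("What is LangChain?", k=1)
print(docs[0].page_content)
```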
|
||||
|
||||
:::info[Further reading]
|
||||
@@ -190,9 +190,9 @@ This allows for efficient similarity search over embedding vectors using simple
|
||||
|
||||
#### Relational databases

Relational databases are a fundamental type of structured data storage used in many applications.
They organize data into tables with predefined schemas, where each table represents an entity or relationship.
Data is stored in rows (records) and columns (attributes), allowing for efficient querying and manipulation through SQL (Structured Query Language).
Relational databases excel at maintaining data integrity, supporting complex queries, and handling relationships between different data entities.

:::info[Further reading]

@@ -204,8 +204,8 @@ Relational databases excel at maintaining data integrity, supporting complex que
#### Graph databases

Graph databases are a specialized type of database designed to store and manage highly interconnected data.
Unlike traditional relational databases, graph databases use a flexible structure consisting of nodes (entities), edges (relationships), and properties.
This structure allows for efficient representation and querying of complex, interconnected data.
Graph databases store data in a graph structure, with nodes, edges, and properties.
They are particularly useful for storing and querying complex relationships between data points, such as social networks, supply-chain management, fraud detection, and recommendation services.

@@ -213,12 +213,12 @@ They are particularly useful for storing and querying complex relationships betw

:::info[Further reading]

* See our [tutorial](/docs/tutorials/graph/) for working with graph databases.
* See our [list of graph database integrations](/docs/integrations/graphs/).
* See Neo4j's [starter kit for LangChain](https://neo4j.com/developer-blog/langchain-neo4j-starter-kit/).

:::
### Retriever

LangChain provides a unified interface for interacting with various retrieval systems through the [retriever](/docs/concepts/retrievers/) concept. The interface is straightforward:

@@ -23,16 +23,16 @@ The LangChain [retriever](/docs/concepts/retrievers/) interface is straightforwa

## Key concept

![Retriever](/img/retriever_concept.png)

All retrievers implement a simple interface for retrieving documents using natural language queries.

## Interface

The only requirement for a retriever is the ability to accept a query and return documents.
In particular, [LangChain's retriever class](https://python.langchain.com/api_reference/core/retrievers/langchain_core.retrievers.BaseRetriever.html#) only requires that the `_get_relevant_documents` method is implemented, which takes a `query: str` and returns a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects that are most relevant to the query.
The underlying logic used to get relevant documents is specified by the retriever and can be whatever is most useful for the application.
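For example, a toy retriever that does simple keyword matching might look like the following sketch (the class name and matching logic are illustrative assumptions, in the spirit of the custom retriever how-to guide):

```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class ToyRetriever(BaseRetriever):
    """Toy retriever that returns the first k documents containing the query string."""

    documents: List[Document]
    k: int = 3

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        matches = [
            doc for doc in self.documents
            if query.lower() in doc.page_content.lower()
        ]
        return matches[: self.k]
```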
A LangChain retriever is a [runnable](/docs/how_to/lcel_cheatsheet/), which is a standard interface for LangChain components.
This means that it has a few common methods, including `invoke`, that are used to interact with it. A retriever can be invoked with a query:

```python
@@ -42,23 +42,23 @@ docs = retriever.invoke(query)

Retrievers return a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects, which have two attributes:

* `page_content`: The content of this document. Currently a string.
* `metadata`: Arbitrary metadata associated with this document (e.g., document id, file name, source, etc).

:::info[Further reading]

* See our [how-to guide](/docs/how_to/custom_retriever/) on building your own custom retriever.

:::
## Common types

Despite the flexibility of the retriever interface, a few common types of retrieval systems are frequently used.

### Search APIs

It's important to note that retrievers don't need to actually *store* documents.
For example, we can build retrievers on top of search APIs that simply return search results!
See our retriever integrations with [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/) or [Wikipedia Search](/docs/integrations/retrievers/wikipedia/).

### Relational or graph database

@@ -75,7 +75,7 @@ For example, you can build a retriever for a SQL database using text-to-SQL conv

### Lexical search

As discussed in our conceptual review of [retrieval](/docs/concepts/retrieval/), many search engines are based upon matching words in a query to the words in each document.
[BM25](https://en.wikipedia.org/wiki/Okapi_BM25#:~:text=BM25%20is%20a%20bag%2Dof,slightly%20different%20components%20and%20parameters.) and [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) are [two popular lexical search algorithms](https://cameronrwolfe.substack.com/p/the-basics-of-ai-powered-vector-search?utm_source=profile&utm_medium=reader2).
LangChain has retrievers for many popular lexical search algorithms / engines.
@@ -85,11 +85,11 @@ LangChain has retrievers for many popular lexical search algorithms / engines.

* See the [TF-IDF](/docs/integrations/retrievers/tf_idf/) retriever integration.
* See the [Elasticsearch](/docs/integrations/retrievers/elasticsearch_retriever/) retriever integration.

:::

### Vector store

[Vector stores](/docs/concepts/vectorstores/) are a powerful and efficient way to index and retrieve unstructured data.
A vectorstore can be used as a retriever by calling the `as_retriever()` method.

```python
@@ -99,7 +99,7 @@ retriever = vectorstore.as_retriever()

## Advanced retrieval patterns

### Ensemble

Because the retriever interface is so simple, returning a list of `Document` objects given a search query, it is possible to combine multiple retrievers using ensembling.
This is particularly useful when you have multiple retrievers that are good at finding different types of relevant documents.
@@ -112,24 +112,24 @@ ensemble_retriever = EnsembleRetriever(
)
```

When ensembling, how do we combine search results from many retrievers?
This motivates the concept of re-ranking, which takes the output of multiple retrievers and combines them using a more sophisticated algorithm such as [Reciprocal Rank Fusion (RRF)](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf).
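The scoring formula behind RRF is simple enough to sketch in a few lines. The following is an illustrative implementation of the idea, not LangChain's own code (the `EnsembleRetriever` applies a weighted variant of the same scheme under the hood):

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of document ids into a single ranking.

    Each document scores 1 / (k + rank) in every list it appears in;
    k=60 is the constant suggested in the RRF paper.
    """
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.__getitem__, reverse=True)

fused = reciprocal_rank_fusion([["a", "b", "c"], ["b", "c", "d"]])
# ['b', 'c', 'a', 'd'] -- 'b' and 'c' rank highest because both lists agree on them
```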
### Source document retention

Many retrievers utilize some kind of index to make documents easily searchable.
The process of indexing can include a transformation step (e.g., vectorstores often use document splitting).
Whatever transformation is used, it can be very useful to retain a link between the *transformed document* and the original, giving the retriever the ability to return the *original* document.

![Retrieval with full documents](/img/retrieval_full_docs.png)

This is particularly useful in AI applications, because it ensures no loss in document context for the model.
For example, you may use a small chunk size for indexing documents in a vectorstore.
If you return *only* the chunks as the retrieval result, then the model will have lost the original document context for the chunks.

LangChain has two different retrievers that can be used to address this challenge.
The [Multi-Vector](/docs/how_to/multi_vector/) retriever allows the user to use any document transformation (e.g., use an LLM to write a summary of the document) for indexing while retaining linkage to the source document.
The [ParentDocument](/docs/how_to/parent_document_retriever/) retriever links document chunks from a text-splitter transformation for indexing while retaining linkage to the source document.

| Name | Index Type | Uses an LLM | When to Use | Description |
|------|------------|-------------|-------------|-------------|
@@ -107,7 +107,7 @@ The Runnable interface provides methods to get the [JSON Schema](https://json-sc

These APIs are mostly used internally for unit-testing and by [LangServe](/docs/concepts/architecture#langserve) which uses the APIs for input validation and generation of [OpenAPI documentation](https://www.openapis.org/).

In addition to the input and output types, some Runnables have been set up with additional run time configuration options.
There are corresponding APIs to get the Pydantic Schema and JSON Schema of the configuration options for the Runnable.
Please see the [Configurable Runnables](#configurable-runnables) section for more information.

@@ -151,12 +151,12 @@ Passing `config` to the `invoke` method is done like so:

```python
some_runnable.invoke(
    some_input,
    config={
        'run_name': 'my_run',
        'tags': ['tag1', 'tag2'],
        'metadata': {'key': 'value'}
    }
)
```
@@ -185,13 +185,13 @@ There are two main patterns by which new `Runnables` are created:

foo_runnable = RunnableLambda(foo)
```

LangChain will try to propagate `RunnableConfig` automatically for both of the patterns.

For handling the second pattern, LangChain relies on Python's [contextvars](https://docs.python.org/3/library/contextvars.html).

In Python 3.11 and above, this works out of the box, and you do not need to do anything special to propagate the `RunnableConfig` to the sub-calls.

In Python 3.9 and 3.10, if you are using **async code**, you need to manually pass the `RunnableConfig` through to the `Runnable` when invoking it.

This is due to a limitation in [asyncio's tasks](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task) in Python 3.9 and 3.10 which did
not accept a `context` argument.

@@ -201,7 +201,7 @@ Propagating the `RunnableConfig` manually is done like so:

```python
async def foo(input, config):  # <-- Note the config argument
    return await bar_runnable.ainvoke(input, config=config)

foo_runnable = RunnableLambda(foo)
```
@@ -235,7 +235,7 @@ The attributes will also be propagated to [callbacks](/docs/concepts/callbacks),

This is an advanced feature that is unnecessary for most users.
:::

You may need to set a custom `run_id` for a given run, in case you want
to reference it later or correlate it with other systems.

The `run_id` MUST be a valid UUID string and **unique** for each run. It is used to identify
@@ -249,7 +249,7 @@ import uuid

run_id = uuid.uuid4()

some_runnable.invoke(
    some_input,
    config={
        'run_id': run_id
    }
@@ -292,7 +292,7 @@ In addition, you can use it to specify any custom configuration options to pass

### Setting callbacks

Use this option to configure [callbacks](/docs/concepts/callbacks) for the runnable at
runtime. The callbacks will be passed to all sub-calls made by the runnable.
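For example, a handler can be attached at runtime through the `callbacks` key of the config (a minimal sketch; `some_runnable` and `some_input` are placeholders, and `StdOutCallbackHandler` is just one readily available handler):

```python
from langchain_core.callbacks import StdOutCallbackHandler

handler = StdOutCallbackHandler()

some_runnable.invoke(
    some_input,
    config={
        'callbacks': [handler]
    }
)
```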
@@ -52,7 +52,7 @@ In addition, there is a **legacy** async [astream_log](https://python.langchain.

The `stream()` method returns an iterator that yields chunks of output synchronously as they are produced. You can use a `for` loop to process each chunk in real-time. For example, when using an LLM, this allows the output to be streamed incrementally as it is generated, reducing the wait time for users.

The type of chunk yielded by the `stream()` and `astream()` methods depends on the component being streamed. For example, when streaming from an [LLM](/docs/concepts/chat_models) each component will be an [AIMessageChunk](/docs/concepts/messages#aimessagechunk); however, for other components, the chunk may be different.

The `stream()` method returns an iterator that yields these chunks as they are produced. For example,
@@ -99,7 +99,7 @@ If you compose multiple Runnables using [LangChain’s Expression Language (LCEL
<span data-heading-keywords="astream_events,stream_events,stream events"></span>

:::tip
Use the `astream_events` API to access custom data and intermediate outputs from LLM applications built entirely with [LCEL](/docs/concepts/lcel).

While this API is available for use with [LangGraph](/docs/concepts/architecture#langgraph) as well, it is usually not necessary when working with LangGraph, as the `stream` and `astream` methods provide comprehensive streaming capabilities for LangGraph graphs.
:::
@@ -119,7 +119,7 @@ from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-7-sonnet-20250219")

prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
parser = StrOutputParser()
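Composing these pieces and consuming `astream_events` might look roughly like the following sketch (it assumes the `prompt`, `model`, and `parser` defined above and filters for chat-model token events):

```python
import asyncio

chain = prompt | model | parser

async def main():
    async for event in chain.astream_events({"topic": "parrot"}, version="v2"):
        if event["event"] == "on_chat_model_stream":
            # Each chunk is an AIMessageChunk; print tokens as they arrive.
            print(event["data"]["chunk"].content, end="", flush=True)

asyncio.run(main())
```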
@@ -148,7 +148,7 @@ LangChain simplifies streaming from [chat models](/docs/concepts/chat_models) by

### How It Works

When you call the `invoke` (or `ainvoke`) method on a chat model, LangChain will automatically switch to streaming mode if it detects that you are trying to stream the overall application.

Under the hood, it'll have `invoke` (or `ainvoke`) use the `stream` (or `astream`) method to generate its output. The result of the invocation will be the same as far as the code that was using `invoke` is concerned; however, while the chat model is being streamed, LangChain will take care of invoking `on_llm_new_token` events in LangChain's [callback system](/docs/concepts/callbacks). These callback events
allow LangGraph `stream`/`astream` and `astream_events` to surface the chat model's output in real-time.

@@ -158,14 +158,14 @@ Example:

```python
def node(state):
    ...
    # The code below uses the invoke method, but LangChain will
    # automatically switch to streaming mode
    # when it detects that the overall
    # application is being streamed.
    ai_message = model.invoke(state["messages"])
    ...

for chunk in compiled_graph.stream(..., mode="messages"):
    ...
```

## Async Programming
@@ -1,15 +1,15 @@
# Structured outputs

## Overview

For many applications, such as chatbots, models need to respond to users directly in natural language.
However, there are scenarios where we need models to output in a *structured format*.
For example, we might want to store the model output in a database and ensure that the output conforms to the database schema.
This need motivates the concept of structured output, where models can be instructed to respond with a particular output structure.

![Structured output](/img/structured_output.png)

## Key concepts

1. **Schema definition:** The output structure is represented as a schema, which can be defined in several ways.<br/>
2. **Returning structured output:** The model is given this schema, and is instructed to return output that conforms to it.

@@ -18,7 +18,7 @@ This need motivates the concept of structured output, where models can be instru

This pseudocode illustrates the recommended workflow when using structured output.
LangChain provides a method, [`with_structured_output()`](/docs/how_to/structured_output/#the-with_structured_output-method), that automates the process of binding the schema to the [model](/docs/concepts/chat_models/) and parsing the output.
This helper function is available for all model providers that support structured output.

```python
# Define schema
@@ -29,25 +29,9 @@ model_with_structure = model.with_structured_output(schema)
structured_output = model_with_structure.invoke(user_input)
```
:::warning[Tool Order Matters]

When combining structured output with additional tools, bind tools **first**, then apply structured output:

```python
# Correct
model_with_tools = model.bind_tools([tool1, tool2])
structured_model = model_with_tools.with_structured_output(schema)

# Incorrect - will cause tool resolution errors
structured_model = model.with_structured_output(schema)
broken_model = structured_model.bind_tools([tool1, tool2])
```

:::
## Schema definition

The central concept is that the output structure of model responses needs to be represented in some way.
While the types of objects you can use depend on the model you're working with, there are common types of objects that are typically allowed or recommended for structured output in Python.

The simplest and most common format for structured output is a JSON-like structure, which in Python can be represented as a dictionary (dict) or list (list).
@@ -61,7 +45,7 @@ JSON objects (or dicts in Python) are often used directly when the tool requires
```

As a second example, [Pydantic](https://docs.pydantic.dev/latest/) is particularly useful for defining structured output schemas because it offers type hints and validation.
Here's an example of a Pydantic schema:

```python
from pydantic import BaseModel, Field
@@ -75,7 +59,7 @@ class ResponseFormatter(BaseModel):

## Returning structured output

With a schema defined, we need a way to instruct the model to use it.
While one approach is to include this schema in the prompt and *ask nicely* for the model to use it, this is not recommended.
Several more powerful methods that utilize native features in the model provider's API are available.

### Using tool calling

@@ -94,7 +78,7 @@ model_with_tools = model.bind_tools([ResponseFormatter])
ai_msg = model_with_tools.invoke("What is the powerhouse of the cell?")
```

The arguments of the tool call are already extracted as a dictionary.
This dictionary can be optionally parsed into a Pydantic object, matching our original `ResponseFormatter` schema.

```python
@@ -108,7 +92,7 @@ pydantic_object = ResponseFormatter.model_validate(ai_msg.tool_calls[0]["args"])
### JSON mode

In addition to tool calling, some model providers support a feature called `JSON mode`.
This supports JSON schema definition as input and constrains the model to produce a conforming JSON output.
You can find a table of model providers that support JSON mode [here](/docs/integrations/chat/).
Here is an example of how to use JSON mode with OpenAI:
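A minimal sketch of such a call (the model name and the `response_format` kwarg are assumptions based on OpenAI's JSON mode; check your provider's integration page for the exact option):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o").bind(response_format={"type": "json_object"})

ai_msg = model.invoke(
    "Return a JSON object with key 'random_ints' and a value of 10 random ints in [0-99]"
)
ai_msg.content  # a JSON string that can be parsed with json.loads()
```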
@@ -121,21 +105,21 @@ ai_msg
{'random_ints': [45, 67, 12, 34, 89, 23, 78, 56, 90, 11]}
```

## Structured output method

There are a few challenges when producing structured output with the above methods:

1. When tool calling is used, tool call arguments need to be parsed from a dictionary back to the original schema.<br/>

2. In addition, the model needs to be instructed to *always* use the tool when we want to enforce structured output, which is a provider-specific setting.<br/>

3. When JSON mode is used, the output needs to be parsed into a JSON object.

With these challenges in mind, LangChain provides a helper function (`with_structured_output()`) to streamline the process.

![Structured output method](/img/with_structured_output.png)

This both binds the schema to the model as a tool and parses the output to the specified output schema.
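A concrete end-to-end sketch, assuming the `ResponseFormatter` Pydantic schema from above and an OpenAI chat model (any provider supporting structured output should work similarly):

```python
from langchain_openai import ChatOpenAI

# Bind the schema to the model
model = ChatOpenAI(model="gpt-4o")
model_with_structure = model.with_structured_output(ResponseFormatter)

# Invoke the model; the output is parsed into a ResponseFormatter instance
structured_output = model_with_structure.invoke("What is the powerhouse of the cell?")
# e.g. structured_output.answer, assuming the schema defines an `answer` field
```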
@@ -23,9 +23,9 @@ def test_convert_to_openai_messages():
        ToolCall(name='parrot_multiply_tool', id='1', args={'a': 2, 'b': 3}),
    ]
)

result = convert_to_openai_messages(ai_message)

expected = {
    "role": "assistant",
    "tool_calls": [
@@ -7,4 +7,4 @@ You are probably looking for the [Chat Model Concept Guide](/docs/concepts/chat_
LangChain has implementations for older language models that take a string as input and return a string as output. These models are typically named without the "Chat" prefix (e.g., `Ollama`, `Anthropic`, `OpenAI`, etc.), and may include the "LLM" suffix (e.g., `OllamaLLM`, `AnthropicLLM`, `OpenAILLM`, etc.). These models implement the [BaseLLM](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.llms.BaseLLM.html#langchain_core.language_models.llms.BaseLLM) interface.

Users should be using almost exclusively the newer [Chat Models](/docs/concepts/chat_models) as most
model providers have adopted a chat-like interface for interacting with language models.
@@ -69,7 +69,7 @@ texts = text_splitter.split_text(document)

### Text-structured based

Text is naturally organized into hierarchical units such as paragraphs, sentences, and words.
We can leverage this inherent structure to inform our splitting strategy, creating splits that maintain natural language flow, preserve semantic coherence within each split, and adapt to varying levels of text granularity.
LangChain's [`RecursiveCharacterTextSplitter`](/docs/how_to/recursive_text_splitter/) implements this concept:
- The `RecursiveCharacterTextSplitter` attempts to keep larger units (e.g., paragraphs) intact.
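A minimal usage sketch (the chunk sizes are arbitrary, and `document` is assumed to be a string loaded elsewhere):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,    # maximum characters per chunk
    chunk_overlap=20,  # characters shared between consecutive chunks
)
texts = text_splitter.split_text(document)
```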
@@ -92,7 +92,7 @@ texts = text_splitter.split_text(document)

### Document-structured based

Some documents have an inherent structure, such as HTML, Markdown, or JSON files.
In these cases, it's beneficial to split the document based on its structure, as it often naturally groups semantically related text.
Key benefits of structure-based splitting:
- Preserves the logical organization of the document
@@ -116,7 +116,7 @@ Examples of structure-based splitting:

### Semantic meaning based

Unlike the previous methods, semantic-based splitting actually considers the *content* of the text.
While other approaches use document or text structure as proxies for semantic meaning, this method directly analyzes the text's semantics.
There are several ways to implement this, but conceptually the approach is to split text when there are significant changes in text *meaning*.
As an example, we can use a sliding window approach to generate embeddings, and compare the embeddings to find significant differences:
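A rough sketch of that idea, assuming `embed` is any function that maps a sentence to an embedding vector and using an arbitrary similarity threshold:

```python
import numpy as np

def semantic_breakpoints(sentences, embed, threshold=0.75):
    """Return indices where a new chunk should start, based on drops in
    cosine similarity between adjacent sentence embeddings."""
    vectors = [np.asarray(embed(sentence), dtype=float) for sentence in sentences]
    breakpoints = []
    for i in range(1, len(vectors)):
        a, b = vectors[i - 1], vectors[i]
        cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        if cosine < threshold:
            breakpoints.append(i)  # meaning shifted: start a new chunk here
    return breakpoints
```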
@@ -55,4 +55,4 @@ According to the OpenAI post, the approximate token counts for English text are

* 1 token ~= 4 chars in English
* 1 token ~= ¾ words
* 100 tokens ~= 75 words
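These are only rules of thumb; to count tokens exactly you can use a tokenizer such as [tiktoken](https://github.com/openai/tiktoken) (a sketch assuming the `cl100k_base` encoding used by many OpenAI models):

```python
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
text = "LangChain splits documents into chunks that are measured in tokens."
print(len(text), len(encoding.encode(text)))  # character count vs. token count
```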
@@ -6,7 +6,7 @@
:::

## Overview

Many AI applications interact directly with humans. In these cases, it is appropriate for models to respond in natural language.
But what about cases where we want a model to also interact *directly* with systems, such as databases or an API?
@@ -14,12 +14,12 @@ These systems often have a particular input schema; for example, APIs frequently
This need motivates the concept of *tool calling*. You can use [tool calling](https://platform.openai.com/docs/guides/function-calling/example-use-cases) to request model responses that match a particular schema.

:::info
You will sometimes hear the term `function calling`. We use this term interchangeably with `tool calling`.
:::

![Tool calling concept](/img/tool_calling_concept.png)

## Key concepts

1. **Tool Creation:** Use the [@tool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html) decorator to create a [tool](/docs/concepts/tools). A tool is an association between a function and its schema.<br/>
2. **Tool Binding:** The tool needs to be connected to a model that supports tool calling. This gives the model awareness of the tool and the associated input schema required by the tool.<br/>
@@ -40,7 +40,7 @@ The tool call arguments can be passed directly to the tool.
tools = [my_tool]
# Tool binding
model_with_tools = model.bind_tools(tools)
# Tool calling
response = model_with_tools.invoke(user_input)
```
@@ -65,16 +65,16 @@ def multiply(a: int, b: int) -> int:

:::

## Tool binding

[Many](https://platform.openai.com/docs/guides/function-calling) [model providers](https://platform.openai.com/docs/guides/function-calling) support tool calling.

:::tip
See our [model integration page](/docs/integrations/chat/) for a list of providers that support tool calling.
:::

The central concept to understand is that LangChain provides a standardized interface for connecting tools to models.
The `.bind_tools()` method can be used to specify which tools are available for a model to call.

```python
model_with_tools = model.bind_tools(tools_list)
@@ -113,7 +113,7 @@ However, if we pass an input *relevant to the tool*, the model should choose to
result = llm_with_tools.invoke("What is 2 multiplied by 3?")
```

As before, the output `result` will be an `AIMessage`.
But, if the tool was called, `result` will have a `tool_calls` [attribute](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.tool_calls).
This attribute includes everything needed to execute the tool, including the tool name and input arguments:
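The shape of the attribute is roughly the following (the exact `id` is generated by the provider, so the value below is illustrative):

```python
result.tool_calls
# [{'name': 'multiply', 'args': {'a': 2, 'b': 3}, 'id': 'call_abc123', 'type': 'tool_call'}]
```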
@@ -6,7 +6,7 @@

## Overview

The **tool** abstraction in LangChain associates a Python **function** with a **schema** that defines the function's **name**, **description** and **expected arguments**.

**Tools** can be passed to [chat models](/docs/concepts/chat_models) that support [tool calling](/docs/concepts/tool_calling), allowing the model to request the execution of a specific function with specific inputs.

@@ -31,7 +31,7 @@ The key attributes that correspond to the tool's **schema**:
The key methods to execute the function associated with the **tool**:

- **invoke**: Invokes the tool with the given arguments.
- **ainvoke**: Invokes the tool with the given arguments, asynchronously. Used for [async programming with LangChain](/docs/concepts/async).

## Create tools using the `@tool` decorator

@@ -68,10 +68,10 @@ You can also inspect the tool's schema and other properties:
```python
print(multiply.name) # multiply
print(multiply.description) # Multiply two numbers.
print(multiply.args)
# {
#     'type': 'object',
#     'properties': {'a': {'type': 'integer'}, 'b': {'type': 'integer'}},
#     'required': ['a', 'b']
# }
```
@@ -89,14 +89,14 @@ Please see the [API reference for @tool](https://python.langchain.com/api_refere

## Tool artifacts

**Tools** are utilities that can be called by a model, and whose outputs are designed to be fed back to a model. Sometimes, however, there are artifacts of a tool's execution that we want to make accessible to downstream components in our chain or agent, but that we don't want to expose to the model itself. For example if a tool returns a custom object, a dataframe or an image, we may want to pass some metadata about this output to the model without passing the actual output to the model. At the same time, we may want to be able to access this full output elsewhere, for example in downstream tools.

```python
@tool(response_format="content_and_artifact")
def some_tool(...) -> Tuple[str, Any]:
    """Tool that does something."""
    ...
    return 'Message for chat model', some_artifact
```

See [how to return artifacts from tools](/docs/how_to/tool_artifacts/) for more details.

@@ -134,7 +134,7 @@ def user_specific_tool(input_data: str, user_id: InjectedToolArg) -> str:

Annotating the `user_id` argument with `InjectedToolArg` tells LangChain that this argument should not be exposed as part of the
tool's schema.

See [how to pass run time values to tools](/docs/how_to/tool_runtime/) for more details on how to use `InjectedToolArg`.
### RunnableConfig

@@ -171,26 +171,6 @@ Please see the [InjectedState](https://langchain-ai.github.io/langgraph/referenc

Please see the [InjectedStore](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.InjectedStore) documentation for more details.

## Tool Artifacts vs. Injected State

Although similar conceptually, tool artifacts in LangChain and [injected state in LangGraph](https://langchain-ai.github.io/langgraph/reference/agents/#langgraph.prebuilt.tool_node.InjectedState) serve different purposes and operate at different levels of abstraction.

**Tool Artifacts**

- **Purpose:** Store and pass data between tool executions within a single chain/workflow
- **Scope:** Limited to tool-to-tool communication
- **Lifecycle:** Tied to individual tool calls and their immediate context
- **Usage:** Temporary storage for intermediate results that tools need to share

**Injected State (LangGraph)**

- **Purpose:** Maintain persistent state across the entire graph execution
- **Scope:** Global to the entire graph workflow
- **Lifecycle:** Persists throughout the entire graph execution and can be saved/restored
- **Usage:** Long-term state management, conversation memory, user context, workflow checkpointing

Tool artifacts are ephemeral data passed between tools, while injected state is persistent workflow-level state that survives across multiple steps, tool calls, and even execution sessions in LangGraph.

## Best practices

When designing tools to be used by models, keep the following in mind:
@@ -7,4 +7,4 @@ Traces contain individual steps called `runs`. These can be individual calls fro
tool, or sub-chains.
Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues.

For a deeper dive, check out [this LangSmith conceptual guide](https://docs.smith.langchain.com/concepts/tracing).

@@ -9,7 +9,7 @@
:::
:::info[Note]

This conceptual overview focuses on text-based indexing and retrieval for simplicity.
However, embedding models can be [multi-modal](https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-multimodal-embeddings)
and vector stores can be used to store and retrieve a variety of data types beyond text.
:::
@@ -125,7 +125,7 @@ to the documentation of the specific vectorstore you are using to see what simil

Given a similarity metric to measure the distance between the embedded query and any embedded document, we need an algorithm to efficiently search over *all* the embedded documents to find the most similar ones.
There are various ways to do this. As an example, many vectorstores implement [HNSW (Hierarchical Navigable Small World)](https://www.pinecone.io/learn/series/faiss/hnsw/), a graph-based index structure that allows for efficient similarity search.
Regardless of the search algorithm used under the hood, the LangChain vectorstore interface has a `similarity_search` method for all integrations.
This will take the search query, create an embedding, find similar documents, and return them as a list of [Documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html).

```python
@@ -166,7 +166,7 @@ vectorstore.similarity_search(
    k=2,
    filter={"source": "tweet"},
)
```

:::info[Further reading]
@@ -179,7 +179,7 @@ vectorstore.similarity_search(

While algorithms like HNSW provide the foundation for efficient similarity search in many cases, additional techniques can be employed to improve search quality and diversity.
For example, [maximal marginal relevance](https://python.langchain.com/v0.1/docs/modules/model_io/prompts/example_selectors/mmr/) is a re-ranking algorithm used to diversify search results, which is applied after the initial similarity search to ensure a more diverse set of results.
As a second example, some [vector stores](/docs/integrations/retrievers/pinecone_hybrid_search/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity search, which marries the benefits of both approaches.
At the moment, there is no unified way to perform hybrid search using LangChain vectorstores, but it is generally exposed as a keyword argument that is passed in with `similarity_search`.
See this [how-to guide on hybrid search](/docs/how_to/hybrid/) for more details.
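MMR, by contrast, is typically exposed on LangChain vectorstores via a dedicated method (a sketch; `vectorstore` is assumed to be any initialized vectorstore that supports it, and the parameter values are arbitrary):

```python
docs = vectorstore.max_marginal_relevance_search(
    "LangChain provides abstractions to make working with LLMs easy",
    k=2,         # number of diverse results to return
    fetch_k=10,  # number of candidates fetched before re-ranking for diversity
)
```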
@@ -188,4 +188,4 @@ See this [how-to guide on hybrid search](/docs/how_to/hybrid/) for more details.

| [Hybrid search](/docs/integrations/retrievers/pinecone_hybrid_search/) | When combining keyword-based and semantic similarity. | Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. [Paper](https://arxiv.org/abs/2210.11934). |
| [Maximal Marginal Relevance (MMR)](https://python.langchain.com/api_reference/pinecone/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html#langchain_pinecone.vectorstores.PineconeVectorStore.max_marginal_relevance_search) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |

@@ -18,7 +18,7 @@ LangChain exposes a standard interface for key components, making it easy to swi

3. **Observability and evaluation:** As applications become more complex, it becomes increasingly difficult to understand what is happening within them.
Furthermore, the pace of development can become rate-limited by the [paradox of choice](https://en.wikipedia.org/wiki/Paradox_of_choice).
For example, developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost.
[Observability](https://en.wikipedia.org/wiki/Observability) and evaluations can help developers monitor their applications and rapidly answer these types of questions with confidence.
@@ -29,10 +29,10 @@ As an example, all [chat models](/docs/concepts/chat_models/) implement the [Bas
This provides a standard way to interact with chat models, supporting important but often provider-specific features like [tool calling](/docs/concepts/tool_calling/) and [structured outputs](/docs/concepts/structured_outputs/).

### Example: chat models

Many [model providers](/docs/concepts/chat_models/) support [tool calling](/docs/concepts/tool_calling/), a critical feature for many applications (e.g., [agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/)), that allows a developer to request model responses that match a particular schema.
The APIs for each provider differ.
LangChain's [chat model](/docs/concepts/chat_models/) interface provides a common way to bind [tools](/docs/concepts/tools) to a model in order to support [tool calling](/docs/concepts/tool_calling/):

```python
@@ -42,7 +42,7 @@ tools = [my_tool]
model_with_tools = model.bind_tools(tools)
```

Similarly, getting models to produce [structured outputs](/docs/concepts/structured_outputs/) is an extremely common use case.
Providers support different approaches for this, including [JSON mode or tool calling](https://platform.openai.com/docs/guides/structured-outputs), with different APIs.
LangChain's [chat model](/docs/concepts/chat_models/) interface provides a common way to produce structured outputs using the `with_structured_output()` method:
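For example (a minimal sketch; the `Answer` schema is a hypothetical Pydantic model used only for illustration):

```python
from pydantic import BaseModel

class Answer(BaseModel):
    answer: str

structured_model = model.with_structured_output(Answer)
```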
@@ -62,9 +62,9 @@ The underlying implementation of the retriever depends on the type of data store
documents = my_retriever.invoke("What is the meaning of life?")
```

## Orchestration

While standardization for individual components is useful, we've increasingly seen that developers want to *combine* components into more complex applications.
This motivates the need for [orchestration](https://en.wikipedia.org/wiki/Orchestration_(computing)).
There are several common characteristics of LLM applications that this orchestration layer should support:

@@ -75,7 +75,7 @@ There are several common characteristics of LLM applications that this orchestra
The recommended way to orchestrate components for complex applications is [LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/).
LangGraph is a library that gives developers a high degree of control by expressing the flow of the application as a set of nodes and edges.
LangGraph comes with built-in support for [persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/), [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/), [memory](https://langchain-ai.github.io/langgraph/concepts/memory/), and other features.
It's particularly well suited for building [agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/) or [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent/) applications.
Importantly, individual LangChain components can be used as LangGraph nodes, but you can also use LangGraph **without** using LangChain components.

:::info[Further reading]

@@ -86,8 +86,8 @@ Have a look at our free course, [Introduction to LangGraph](https://academy.lang

## Observability and evaluation

The pace of AI application development is often rate-limited by high-quality evaluations because there is a paradox of choice.
Developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost.
High quality tracing and evaluations can help you rapidly answer these types of questions with confidence.
[LangSmith](https://docs.smith.langchain.com/) is our platform that supports observability and evaluation for AI applications.
See our conceptual guides on [evaluations](https://docs.smith.langchain.com/concepts/evaluation) and [tracing](https://docs.smith.langchain.com/concepts/tracing) for more details.
@@ -3,9 +3,9 @@

Here are some things to keep in mind for all types of contributions:

- Follow the ["fork and pull request"](https://docs.github.com/en/get-started/exploring-projects-on-github/contributing-to-a-project) workflow.
- Fill out the checked-in pull request template when opening pull requests. Note related issues and tag relevant maintainers.
- Ensure your PR passes formatting, linting, and testing checks before requesting a review.
- If you would like comments or feedback on your current progress, please open an issue or discussion and tag a maintainer.
- See the sections on [Testing](setup.mdx#testing) and [Formatting and Linting](setup.mdx#formatting-and-linting) for how to run these checks locally.
- Backwards compatibility is key. Your changes must not be breaking, except in case of critical bug and security fixes.
- Look for duplicate PRs or issues that have already been opened before opening a new one.
@@ -9,14 +9,6 @@ This project utilizes [uv](https://docs.astral.sh/uv/) v0.5+ as a dependency man

Install `uv`: **[documentation on how to install it](https://docs.astral.sh/uv/getting-started/installation/)**.

### Windows Users

If you're on Windows and don't have `make` installed, you can install it via:
- **Option 1**: Install via [Chocolatey](https://chocolatey.org/): `choco install make`
- **Option 2**: Install via [Scoop](https://scoop.sh/): `scoop install make`
- **Option 3**: Use [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/)
- **Option 4**: Use the direct `uv` commands shown in the sections below

## Different packages

This repository contains multiple packages:

@@ -56,11 +48,7 @@ uv sync

Then verify dependency installation:

```bash
# If you have `make` installed:
make test

# If you don't have `make` (Windows alternative):
uv run --group test pytest -n auto --disable-socket --allow-unix-socket tests/unit_tests
```

## Testing

@@ -73,11 +61,7 @@ If you add new logic, please add a unit test.

To run unit tests:

```bash
# If you have `make` installed:
make test

# If you don't have make (Windows alternative):
uv run --group test pytest -n auto --disable-socket --allow-unix-socket tests/unit_tests
```

There are also [integration tests and code-coverage](../testing.mdx) available.

@@ -88,12 +72,7 @@ If you are only developing `langchain_core`, you can simply install the dependen

```bash
cd libs/core

# If you have `make` installed:
make test

# If you don't have `make` (Windows alternative):
uv run --group test pytest -n auto --disable-socket --allow-unix-socket tests/unit_tests
```

## Formatting and linting
@@ -107,37 +86,20 @@ Formatting for this project is done via [ruff](https://docs.astral.sh/ruff/rules

To run formatting for docs, cookbook and templates:

```bash
# If you have `make` installed:
make format

# If you don't have make (Windows alternative):
uv run --all-groups ruff format .
uv run --all-groups ruff check --fix .
```

To run formatting for a library, run the same command from the relevant library directory:

```bash
cd libs/{LIBRARY}

# If you have `make` installed:
make format

# If you don't have make (Windows alternative):
uv run --all-groups ruff format .
uv run --all-groups ruff check --fix .
```

Additionally, you can run the formatter only on the files that have been modified in your current branch as compared to the master branch using the format_diff command:

```bash
# If you have `make` installed:
make format_diff

# If you don't have `make` (Windows alternative):
# First, get the list of modified files:
git diff --relative=libs/langchain --name-only --diff-filter=d master | grep -E '\.py$|\.ipynb$' | xargs uv run --all-groups ruff format
git diff --relative=libs/langchain --name-only --diff-filter=d master | grep -E '\.py$|\.ipynb$' | xargs uv run --all-groups ruff check --fix
```

This is especially useful when you have made changes to a subset of the project and want to ensure your changes are properly formatted without affecting the rest of the codebase.

@@ -149,89 +111,52 @@ Linting for this project is done via a combination of [ruff](https://docs.astral

To run linting for docs, cookbook and templates:

```bash
# If you have `make` installed:
make lint

# If you don't have `make` (Windows alternative):
uv run --all-groups ruff check .
uv run --all-groups ruff format . --diff
uv run --all-groups mypy . --cache-dir .mypy_cache
```

To run linting for a library, run the same command from the relevant library directory:

```bash
cd libs/{LIBRARY}

# If you have `make` installed:
make lint

# If you don't have `make` (Windows alternative):
uv run --all-groups ruff check .
uv run --all-groups ruff format . --diff
uv run --all-groups mypy . --cache-dir .mypy_cache
```

In addition, you can run the linter only on the files that have been modified in your current branch as compared to the master branch using the lint_diff command:

```bash
# If you have `make` installed:
make lint_diff

# If you don't have `make` (Windows alternative):
# First, get the list of modified files:
git diff --relative=libs/langchain --name-only --diff-filter=d master | grep -E '\.py$|\.ipynb$' | xargs uv run --all-groups ruff check
git diff --relative=libs/langchain --name-only --diff-filter=d master | grep -E '\.py$|\.ipynb$' | xargs uv run --all-groups ruff format --diff
git diff --relative=libs/langchain --name-only --diff-filter=d master | grep -E '\.py$|\.ipynb$' | xargs uv run --all-groups mypy --cache-dir .mypy_cache
```

This can be very helpful when you've made changes to only certain parts of the project and want to ensure your changes meet the linting standards without having to check the entire codebase.

We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Pre-commit
|
||||
### Spellcheck
|
||||
|
||||
We use [pre-commit](https://pre-commit.com/) to ensure commits are formatted/linted.
|
||||
Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
|
||||
Note that `codespell` finds common typos, so it could have false-positive (correctly spelled but rarely used) and false-negatives (not finding misspelled) words.
|
||||
|
||||
#### Installing Pre-commit
|
||||
|
||||
First, install pre-commit:
|
||||
To check spelling for this project:
|
||||
|
||||
```bash
|
||||
# Option 1: Using uv (recommended)
|
||||
uv tool install pre-commit
|
||||
|
||||
# Option 2: Using Homebrew (globally for macOS/Linux)
|
||||
brew install pre-commit
|
||||
|
||||
# Option 3: Using pip
|
||||
pip install pre-commit
|
||||
make spell_check
|
||||
```
|
||||
|
||||
Then install the git hook scripts:
|
||||
To fix spelling in place:
|
||||
|
||||
```bash
|
||||
pre-commit install
|
||||
make spell_fix
|
||||
```
|
||||
|
||||
#### How Pre-commit Works
|
||||
If codespell is incorrectly flagging a word, you can skip spellcheck for that word by adding it to the codespell config in the `pyproject.toml` file.
|
||||
|
||||
Once installed, pre-commit will automatically run on every `git commit`. Hooks are specified in `.pre-commit-config.yaml` and will:
|
||||
|
||||
- Format code using `ruff` for the specific library/package you're modifying
|
||||
- Only run on files that have changed
|
||||
- Prevent commits if formatting fails
|
||||
|
||||
#### Skipping Pre-commit
|
||||
|
||||
In exceptional cases, you can skip pre-commit hooks with:
|
||||
|
||||
```bash
|
||||
git commit --no-verify
|
||||
```python
|
||||
[tool.codespell]
|
||||
...
|
||||
# Add here:
|
||||
ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogyny,unsecure'
|
||||
```
|
||||
|
||||
However, this is discouraged as the CI system will still enforce the same formatting rules.
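
You can also run the hooks manually at any time, for example before pushing a branch. Below is a minimal sketch using standard pre-commit CLI options; the file path is just an example:

```bash
# Run every configured hook against all files in the repository
# (the first run is slower while pre-commit builds the hook environments):
pre-commit run --all-files

# Or run the hooks against specific files only:
pre-commit run --files libs/langchain/langchain/__init__.py
```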

### Spellcheck

Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
Note that `codespell` finds common typos, so it can produce false positives (correctly spelled but rarely used words) and false negatives (misspellings it does not catch).

To check spelling for this project:

```bash
make spell_check
```

To fix spelling in place:

```bash
make spell_fix
```

If codespell is incorrectly flagging a word, you can skip spellcheck for that word by adding it to the codespell config in the `pyproject.toml` file.

```toml
[tool.codespell]
...
# Add here:
ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogyny,unsecure'
```
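
If you don't have `make`, the same checks can usually be invoked through codespell directly. A rough sketch follows; the flags shown are standard codespell options, but check the project's Makefile for the exact invocation it uses:

```bash
# Check spelling using the configuration stored in pyproject.toml:
uv run --all-groups codespell --toml pyproject.toml

# Apply the suggested fixes in place:
uv run --all-groups codespell --toml pyproject.toml --write-changes
```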

## Working with optional dependencies

`langchain`, `langchain-community`, and `langchain-experimental` rely on optional dependencies to keep these packages lightweight.
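
For illustration, one common way to keep such a dependency out of the required set is to add it to a separate dependency group or extra. The package and group names below are placeholders, and the full guide describes the conventions this repo actually uses:

```bash
# Hypothetical example - "some-optional-sdk" and the "test" group are placeholders.
# Adding the package to a dependency group keeps it out of the package's
# core (required) dependencies:
uv add --group test some-optional-sdk
```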
@@ -1,6 +1,6 @@

# Contribute documentation

Documentation is a vital part of LangChain. We welcome both new documentation for new features and
community improvements to our current documentation. Please read the resources below before getting started:

- [Documentation style guide](style_guide.mdx)

@@ -35,7 +35,7 @@ Some examples include:
- [Build a Retrieval Augmented Generation (RAG) App](/docs/tutorials/rag/)

A good structural rule of thumb is to follow the structure of this [example from Numpy](https://numpy.org/numpy-tutorials/content/tutorial-svd.html).

Here are some high-level tips on writing a good tutorial:

- Focus on guiding the user to get something done, but keep in mind the end-goal is more to impart principles than to create a perfect production system.
@@ -49,7 +49,7 @@ Here are some high-level tips on writing a good tutorial:
- The first time you mention a LangChain concept, use its full name (e.g. "LangChain Expression Language (LCEL)"), and link to its conceptual/other documentation page.
- It's also helpful to add a prerequisite callout that links to any pages with necessary background information.
- End with a recap/next steps section summarizing what the tutorial covered and future reading, such as related how-to guides.

### How-to guides

A how-to guide, as the name implies, demonstrates how to do something discrete and specific.
@@ -79,7 +79,7 @@ Here are some high-level tips on writing a good how-to guide:

### Conceptual guide

LangChain's conceptual guides fall under the **Explanation** quadrant of Diataxis. These guides should cover LangChain terms and concepts
LangChain's conceptual guide falls under the **Explanation** quadrant of Diataxis. These guides should cover LangChain terms and concepts
in a more abstract way than how-to guides or tutorials, targeting curious users interested in
gaining a deeper understanding and insights of the framework. Try to avoid excessively large code examples as the primary goal is to
provide perspective to the user rather than to finish a practical project. These guides should cover **why** things work the way they do.
@@ -105,7 +105,7 @@ Here are some high-level tips on writing a good conceptual guide:
### References

References contain detailed, low-level information that describes exactly what functionality exists and how to use it.
In LangChain, these are mainly our API reference pages, which are populated from docstrings within code.
In LangChain, this is mainly our API reference pages, which are populated from docstrings within code.
References pages are generally not read end-to-end, but are consulted as necessary when a user needs to know
how to use something specific.

@@ -119,7 +119,7 @@ but here are some high-level tips on writing a good docstring:
- Be concise
- Discuss special cases and deviations from a user's expectations
- Go into detail on required inputs and outputs
- Light details on when one might use the feature are fine, but in-depth details belong in other sections
- Light details on when one might use the feature are fine, but in-depth details belong in other sections.

Each category serves a distinct purpose and requires a specific approach to writing and structuring the content.

@@ -127,17 +127,17 @@ Each category serves a distinct purpose and requires a specific approach to writ

Here are some other guidelines you should think about when writing and organizing documentation.

We generally do not merge new tutorials from outside contributors without an acute need.
We generally do not merge new tutorials from outside contributors without an actue need.
We welcome updates as well as new integration docs, how-tos, and references.

### Avoid duplication

Multiple pages that cover the same material in depth are difficult to maintain and cause confusion. There should
be only one (very rarely two) canonical pages for a given concept or feature. Instead, you should link to other guides.
be only one (very rarely two), canonical pages for a given concept or feature. Instead, you should link to other guides.

### Link to other sections

Because sections of the docs do not exist in a vacuum, it is important to link to other sections frequently
Because sections of the docs do not exist in a vacuum, it is important to link to other sections frequently,
to allow a developer to learn more about an unfamiliar topic within the flow of reading.

This includes linking to the API references and conceptual sections!

Some files were not shown because too many files have changed in this diff.