Compare commits

22 Commits

Author          SHA1        Message                           Date
Chester Curme   ebcee67e1c  x                                 2025-05-08 13:07:12 -04:00
Chester Curme   395008456e  x                                 2025-05-08 13:05:28 -04:00
Chester Curme   d9715ad797  x                                 2025-05-08 13:03:19 -04:00
Chester Curme   3516c2c394  x                                 2025-05-08 13:01:28 -04:00
Chester Curme   e01c52d2d0  x                                 2025-05-08 12:59:31 -04:00
Chester Curme   2f39398736  x                                 2025-05-08 12:56:12 -04:00
Chester Curme   8868c6b1ff  x                                 2025-05-08 11:06:22 -04:00
Chester Curme   18c1d8a50c  x                                 2025-05-08 11:05:09 -04:00
Chester Curme   fc797fadf4  x                                 2025-05-08 11:03:31 -04:00
Chester Curme   c678528334  x                                 2025-05-08 11:00:10 -04:00
Chester Curme   a074bd08f7  x                                 2025-05-08 10:58:36 -04:00
Chester Curme   e7fef3bd8e  Revert "xx" (reverts 08574848cc)  2025-05-08 10:56:38 -04:00
Chester Curme   08574848cc  xx                                2025-05-08 10:55:08 -04:00
Chester Curme   c65571136f  x                                 2025-05-08 10:52:16 -04:00
Chester Curme   6acd55b216  x                                 2025-05-08 10:50:14 -04:00
Chester Curme   c225f1a4cb  x                                 2025-05-08 10:47:57 -04:00
Chester Curme   50ce3bf5d9  x                                 2025-05-08 10:45:55 -04:00
Chester Curme   f23b8008ce  x                                 2025-05-08 10:42:52 -04:00
Chester Curme   31077f5df2  x                                 2025-05-08 10:41:10 -04:00
Chester Curme   1a43bfb55b  x                                 2025-05-08 10:39:42 -04:00
Chester Curme   3c2e05e43f  x                                 2025-05-08 10:38:05 -04:00
Chester Curme   ad68ddbf40  test stream twice                 2025-05-08 10:35:31 -04:00
2350 changed files with 66781 additions and 143038 deletions


@@ -5,31 +5,26 @@ This project includes a [dev container](https://containers.dev/), which lets you
You can use the dev container configuration in this folder to build and run the app without needing to install any of its tools locally! You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).
## GitHub Codespaces
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
You may use the button above, or follow these steps to open this repo in a Codespace:
1. Click the **Code** drop-down menu at the top of <https://github.com/langchain-ai/langchain>.
1. Click the **Code** drop-down menu at the top of https://github.com/langchain-ai/langchain.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master**.
For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).
## VS Code Dev Containers
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
> [!NOTE]
> If you click the link above you will open the main repo (`langchain-ai/langchain`) and *not* your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
```txt
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<YOUR_USERNAME>/<YOUR_CLONED_REPO_NAME>
Note: If you click the link above you will open the main repo (langchain-ai/langchain) and not your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
```
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>
```
Then you will have a local cloned repo where you can contribute and then create pull requests.
If you already have VS Code and Docker installed, you can use the button above to get started. This will use VSCode to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
Alternatively you can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:
@@ -45,5 +40,5 @@ You can learn more in the [Dev Containers documentation](https://code.visualstud
## Tips and tricks
- If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
- If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.
* If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.


@@ -1,58 +1,36 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-docker-compose
{
// Name for the dev container
"name": "langchain",
// Point to a Docker Compose file
"dockerComposeFile": "./docker-compose.yaml",
// Required when using Docker Compose. The name of the service to connect to once running
"service": "langchain",
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/langchain",
"mounts": [
"source=langchain-workspaces,target=/workspaces/langchain,type=volume"
],
// Prevent the container from shutting down
"overrideCommand": true,
// Features to add to the dev container. More info: https://containers.dev/features
"features": {
"ghcr.io/devcontainers/features/git:1": {},
"ghcr.io/devcontainers/features/github-cli:1": {}
},
"containerEnv": {
"UV_LINK_MODE": "copy"
},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Run commands after the container is created
"postCreateCommand": "uv sync && echo 'LangChain (Python) dev environment ready!'",
// Configure tool-specific properties.
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-python.debugpy",
"ms-python.mypy-type-checker",
"ms-python.isort",
"unifiedjs.vscode-mdx",
"davidanson.vscode-markdownlint",
"ms-toolsai.jupyter",
"GitHub.copilot",
"GitHub.copilot-chat"
],
"settings": {
"python.defaultInterpreterPath": ".venv/bin/python",
"python.formatting.provider": "none",
"[python]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports": true
}
}
}
}
}
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
// Name for the dev container
"name": "langchain",
// Point to a Docker Compose file
"dockerComposeFile": "./docker-compose.yaml",
// Required when using Docker Compose. The name of the service to connect to once running
"service": "langchain",
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/langchain",
// Prevent the container from shutting down
"overrideCommand": true
// Features to add to the dev container. More info: https://containers.dev/features
// "features": {
// "ghcr.io/devcontainers-contrib/features/poetry:2": {}
// }
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Uncomment the next line to run commands after the container is created.
// "postCreateCommand": "cat /etc/os-release",
// Configure tool-specific properties.
// "customizations": {},
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
}


@@ -4,9 +4,26 @@ services:
build:
dockerfile: libs/langchain/dev.Dockerfile
context: ..
volumes:
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/workspaces/langchain:cached
networks:
- langchain-network
# environment:
# MONGO_ROOT_USERNAME: root
# MONGO_ROOT_PASSWORD: example123
# depends_on:
# - mongo
# mongo:
# image: mongo
# restart: unless-stopped
# environment:
# MONGO_INITDB_ROOT_USERNAME: root
# MONGO_INITDB_ROOT_PASSWORD: example123
# ports:
# - "27017:27017"
# networks:
# - langchain-network
networks:
langchain-network:


@@ -1,52 +0,0 @@
# top-most EditorConfig file
root = true
# All files
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
# Python files
[*.py]
indent_style = space
indent_size = 4
max_line_length = 88
# JSON files
[*.json]
indent_style = space
indent_size = 2
# YAML files
[*.{yml,yaml}]
indent_style = space
indent_size = 2
# Markdown files
[*.md]
indent_style = space
indent_size = 2
trim_trailing_whitespace = false
# Configuration files
[*.{toml,ini,cfg}]
indent_style = space
indent_size = 4
# Shell scripts
[*.sh]
indent_style = space
indent_size = 2
# Makefile
[Makefile]
indent_style = tab
indent_size = 4
# Jupyter notebooks
[*.ipynb]
# Jupyter may include trailing whitespace in cell
# outputs that's semantically meaningful
trim_trailing_whitespace = false

.github/CODEOWNERS

@@ -1,3 +1,2 @@
/.github/ @baskaryan @ccurme @eyurtsev
/libs/core/ @eyurtsev
/.github/ @baskaryan @ccurme
/libs/packages.yml @ccurme


@@ -129,4 +129,4 @@ For answers to common questions about this code of conduct, see the FAQ at
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
[translations]: https://www.contributor-covenant.org/translations


@@ -3,4 +3,4 @@
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
To learn how to contribute to LangChain, please follow the [contribution guide here](https://docs.langchain.com/oss/python/contributing).
To learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/).

.github/DISCUSSION_TEMPLATE/ideas.yml (new file)

@@ -0,0 +1,38 @@
labels: [idea]
body:
- type: checkboxes
id: checks
attributes:
label: Checked
description: Please confirm and check all the following options.
options:
- label: I searched existing ideas and did not find a similar one
required: true
- label: I added a very descriptive title
required: true
- label: I've clearly described the feature request and motivation for it
required: true
- type: textarea
id: feature-request
validations:
required: true
attributes:
label: Feature request
description: |
A clear and concise description of the feature proposal. Please provide links to any relevant GitHub repos, papers, or other resources if relevant.
- type: textarea
id: motivation
validations:
required: true
attributes:
label: Motivation
description: |
Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too.
- type: textarea
id: proposal
validations:
required: false
attributes:
label: Proposal (If applicable)
description: |
If you would like to propose a solution, please describe it here.

.github/DISCUSSION_TEMPLATE/q-a.yml (new file)

@@ -0,0 +1,122 @@
labels: [Question]
body:
- type: markdown
attributes:
value: |
Thanks for your interest in LangChain 🦜️🔗!
Please follow these instructions, fill every question, and do every step. 🙏
We're asking for this because answering questions and solving problems in GitHub takes a lot of time --
this is time that we cannot spend on adding new features, fixing bugs, writing documentation or reviewing pull requests.
By asking questions in a structured way (following this) it will be much easier for us to help you.
There's a high chance that by following this process, you'll find the solution on your own, eliminating the need to submit a question and wait for an answer. 😎
As there are many questions submitted every day, we will **DISCARD** and close the incomplete ones.
That will allow us (and others) to focus on helping people like you that follow the whole process. 🤓
Relevant links to check before opening a question to see if your question has already been answered, fixed or
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this question.
required: true
- label: I searched the LangChain documentation with the integrated search.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- type: checkboxes
id: help
attributes:
label: Commit to Help
description: |
After submitting this, I commit to one of:
* Read open questions until I find 2 where I can help someone and add a comment to help there.
* I already hit the "watch" button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.
* Once my question is answered, I will mark the answer as "accepted".
options:
- label: I commit to help with one of those options 👆
required: true
- type: textarea
id: example
attributes:
label: Example Code
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
**Important!**
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
from langchain_core.runnables import RunnableLambda
def bad_code(inputs) -> int:
raise NotImplementedError('For demo purpose')
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
render: python
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
What is the problem, question, or error?
Write a short description explaining what you are doing, what you expect to happen, and what is currently happening.
placeholder: |
* I'm trying to use the `langchain` library to do X.
* I expect to see Y.
* Instead, it does Z.
validations:
required: true
- type: textarea
id: system-info
attributes:
label: System Info
description: |
Please share your system info with us.
"pip freeze | grep langchain"
platform (windows / linux / mac)
python version
OR if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
placeholder: |
"pip freeze | grep langchain"
platform
python version
Alternatively, if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
These will only surface LangChain packages, don't forget to include any other relevant
packages you're using (if you're not sure what's relevant, you can paste the entire output of `pip freeze`).
validations:
required: true


@@ -1,32 +1,33 @@
name: "\U0001F41B Bug Report"
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the LangChain forum.
labels: ["bug"]
type: bug
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the GitHub Discussions.
labels: ["02 Bug Report"]
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to file a bug report.
Use this to report BUGS in LangChain. For usage questions, feature requests and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
value: >
Thank you for taking the time to file a bug report.
Use this to report bugs in LangChain.
If you're not certain that your issue is due to a bug in LangChain, please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions)
to ask for help with your issue.
Relevant links to check before filing a bug report to see if your issue has already been reported, fixed or
if there's another way to solve your problem:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://python.langchain.com/api_reference/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: This is a bug, not a usage question.
required: true
- label: I added a clear and descriptive title that summarizes this issue.
- label: I added a very descriptive title to this issue.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
@@ -34,10 +35,6 @@ body:
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- label: This is not related to the langchain-community package.
required: true
- label: I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
required: true
- label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
required: true
- type: textarea
@@ -48,25 +45,25 @@ body:
label: Example Code
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
**Important!**
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
**Important!**
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
The following code:
The following code:
```python
from langchain_core.runnables import RunnableLambda
def bad_code(inputs) -> int:
raise NotImplementedError('For demo purpose')
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
```
@@ -102,18 +99,16 @@ body:
Please share your system info with us. Do NOT skip this step and please don't trim
the output. Most users don't include enough information here and it makes it harder
for us to help you.
Run the following command in your terminal and paste the output here:
`python -m langchain_core.sys_info`
python -m langchain_core.sys_info
or if you have an existing python interpreter running:
```python
from langchain_core import sys_info
sys_info.print_sys_info()
```
alternatively, put the entire output of `pip freeze` here.
placeholder: |
python -m langchain_core.sys_info


@@ -1,9 +1,12 @@
blank_issues_enabled: false
version: 2.1
contact_links:
- name: 📚 Documentation
url: https://github.com/langchain-ai/docs/issues/new?template=langchain.yml
about: Report an issue related to the LangChain documentation
- name: 💬 LangChain Forum
url: https://forum.langchain.com/
about: General community discussions and support
- name: 🤔 Question or Problem
about: Ask a question or ask about a problem in GitHub Discussions.
url: https://www.github.com/langchain-ai/langchain/discussions/categories/q-a
- name: Feature Request
url: https://www.github.com/langchain-ai/langchain/discussions/categories/ideas
about: Suggest a feature or an idea
- name: Show and tell
about: Show what you built with LangChain
url: https://www.github.com/langchain-ai/langchain/discussions/categories/show-and-tell


@@ -0,0 +1,58 @@
name: Documentation
description: Report an issue related to the LangChain documentation.
title: "DOC: <Please write a comprehensive title after the 'DOC: ' prefix>"
labels: [03 - Documentation]
body:
- type: markdown
attributes:
value: >
Thank you for taking the time to report an issue in the documentation.
Only report issues with documentation here, explain if there are
any missing topics or if you found a mistake in the documentation.
Do **NOT** use this to ask usage questions or reporting issues with your code.
If you have usage questions or need help solving some problem,
please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions).
If you're in the wrong place, here are some helpful links to find a better
place to ask your question:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: input
id: url
attributes:
label: URL
description: URL to documentation
validations:
required: false
- type: checkboxes
id: checks
attributes:
label: Checklist
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I included a link to the documentation page I am referring to (if applicable).
required: true
- type: textarea
attributes:
label: "Issue with current documentation:"
description: >
Please make sure to leave a reference to the document/code you're
referring to. Feel free to include names of classes, functions, methods
or concepts you'd like to see documented more.
- type: textarea
attributes:
label: "Idea or request for content:"
description: >
Please describe as clearly as possible what topics you think are missing
from the current documentation.


@@ -1,118 +0,0 @@
name: "✨ Feature Request"
description: Request a new feature or enhancement for LangChain. For questions, please use the LangChain forum.
labels: ["feature request"]
type: feature
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to request a new feature.
Use this to request NEW FEATURES or ENHANCEMENTS in LangChain. For bug reports, please use the bug report template. For usage questions and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Relevant links to check before filing a feature request to see if your request has already been made or
if there's another way to achieve what you want:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://python.langchain.com/api_reference/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: This is a feature request, not a bug report or usage question.
required: true
- label: I added a clear and descriptive title that summarizes the feature request.
required: true
- label: I used the GitHub search to find a similar feature request and didn't find it.
required: true
- label: I checked the LangChain documentation and API reference to see if this feature already exists.
required: true
- label: This is not related to the langchain-community package.
required: true
- type: textarea
id: feature-description
validations:
required: true
attributes:
label: Feature Description
description: |
Please provide a clear and concise description of the feature you would like to see added to LangChain.
What specific functionality are you requesting? Be as detailed as possible.
placeholder: |
I would like LangChain to support...
This feature would allow users to...
- type: textarea
id: use-case
validations:
required: true
attributes:
label: Use Case
description: |
Describe the specific use case or problem this feature would solve.
Why do you need this feature? What problem does it solve for you or other users?
placeholder: |
I'm trying to build an application that...
Currently, I have to work around this by...
This feature would help me/users to...
- type: textarea
id: proposed-solution
validations:
required: false
attributes:
label: Proposed Solution
description: |
If you have ideas about how this feature could be implemented, please describe them here.
This is optional but can be helpful for maintainers to understand your vision.
placeholder: |
I think this could be implemented by...
The API could look like...
```python
# Example of how the feature might work
```
- type: textarea
id: alternatives
validations:
required: false
attributes:
label: Alternatives Considered
description: |
Have you considered any alternative solutions or workarounds?
What other approaches have you tried or considered?
placeholder: |
I've tried using...
Alternative approaches I considered:
1. ...
2. ...
But these don't work because...
- type: textarea
id: additional-context
validations:
required: false
attributes:
label: Additional Context
description: |
Add any other context, screenshots, examples, or references that would help explain your feature request.
placeholder: |
Related issues: #...
Similar features in other libraries:
- ...
Additional context or examples:
- ...


@@ -4,7 +4,12 @@ body:
- type: markdown
attributes:
value: |
If you are not a LangChain maintainer, employee, or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.
Thanks for your interest in LangChain! 🚀
If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation in a [Question in GitHub Discussions](https://github.com/langchain-ai/langchain/discussions/categories/q-a) instead.
You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
or are a regular contributor to LangChain with previous merged pull requests.
- type: checkboxes
id: privileged
attributes:


@@ -1,91 +0,0 @@
name: "📋 Task"
description: Create a task for project management and tracking by LangChain maintainers. If you are not a maintainer, please use other templates or the forum.
labels: ["task"]
type: task
body:
- type: markdown
attributes:
value: |
Thanks for creating a task to help organize LangChain development.
This template is for **maintainer tasks** such as project management, development planning, refactoring, documentation updates, and other organizational work.
If you are not a LangChain maintainer or were not asked directly by a maintainer to create a task, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead or use the appropriate bug report or feature request templates on the previous page.
- type: checkboxes
id: maintainer
attributes:
label: Maintainer task
description: Confirm that you are allowed to create a task here.
options:
- label: I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create a task here.
required: true
- type: textarea
id: task-description
attributes:
label: Task Description
description: |
Provide a clear and detailed description of the task.
What needs to be done? Be specific about the scope and requirements.
placeholder: |
This task involves...
The goal is to...
Specific requirements:
- ...
- ...
validations:
required: true
- type: textarea
id: acceptance-criteria
attributes:
label: Acceptance Criteria
description: |
Define the criteria that must be met for this task to be considered complete.
What are the specific deliverables or outcomes expected?
placeholder: |
This task will be complete when:
- [ ] ...
- [ ] ...
- [ ] ...
validations:
required: true
- type: textarea
id: context
attributes:
label: Context and Background
description: |
Provide any relevant context, background information, or links to related issues/PRs.
Why is this task needed? What problem does it solve?
placeholder: |
Background:
- ...
Related issues/PRs:
- #...
Additional context:
- ...
validations:
required: false
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: |
List any dependencies or blockers for this task.
Are there other tasks, issues, or external factors that need to be completed first?
placeholder: |
This task depends on:
- [ ] Issue #...
- [ ] PR #...
- [ ] External dependency: ...
Blocked by:
- ...
validations:
required: false


@@ -1,33 +1,28 @@
(Replace this entire block of text)
Thank you for contributing to LangChain!
Thank you for contributing to LangChain! Follow these steps to mark your pull request as ready for review. **If any of these steps are not completed, your PR will not be considered for review.**
- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
- Example: "core: add foobar LLM"
- [ ] **PR title**: Follows the format: {TYPE}({SCOPE}): {DESCRIPTION}
- Examples:
- feat(core): add multi-tenant support
- fix(cli): resolve flag parsing error
- docs(openai): update API usage examples
- Allowed `{TYPE}` values:
- feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert, release
- Allowed `{SCOPE}` values (optional):
- core, cli, langchain, standard-tests, docs, anthropic, chroma, deepseek, exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant, xai
- *Note:* the `{DESCRIPTION}` must not start with an uppercase letter.
- Once you've written the title, please delete this checklist item; do not include it in the PR.
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
- **Description:** a description of the change. Include a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) if applicable to a relevant issue.
- **Issue:** the issue # it fixes, if applicable (e.g. Fixes #123)
- **Dependencies:** any dependencies required for this change
- **Description:** a description of the change
- **Issue:** the issue # it fixes, if applicable
- **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!
- [ ] **Add tests and docs**: If you're adding a new integration, you must include:
1. A test for the integration, preferably unit tests that do not rely on network access,
2. An example notebook showing its use. It lives in `docs/docs/integrations` directory.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. **We will not consider a PR unless these three are passing in CI.** See [contribution guidelines](https://python.langchain.com/docs/contributing/) for more.
- [ ] **Add tests and docs**: If you're adding a new integration, please include
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/
Additional guidelines:
- Most PRs should not touch more than one package.
- Please do not add dependencies to `pyproject.toml` files (even optional ones) unless they are **required** for unit tests.
- Changes should be backwards compatible.
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
If no one reviews your PR within a few days, please @-mention one of baskaryan, eyurtsev, ccurme, vbarda, hwchase17.


@@ -4,4 +4,4 @@ RUN pip install httpx PyGithub "pydantic==2.0.2" pydantic-settings "pyyaml>=5.3.
COPY ./app /app
CMD ["python", "/app/main.py"]
CMD ["python", "/app/main.py"]


@@ -1,13 +1,11 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/action.yml
# TODO: fix this, migrate to new docs repo?
name: "Generate LangChain People"
description: "Generate the data for the LangChain People page"
author: "Jacob Lee <jacob@langchain.dev>"
inputs:
token:
description: "User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}"
description: 'User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}'
required: true
runs:
using: "docker"
image: "Dockerfile"
using: 'docker'
image: 'Dockerfile'


@@ -1,24 +1,12 @@
# Helper to set up Python and uv with caching
# TODO: https://docs.astral.sh/uv/guides/integration/github/#caching
name: uv-install
description: Set up Python and uv with caching
description: Set up Python and uv
inputs:
python-version:
description: Python version, supporting MAJOR.MINOR only
required: true
enable-cache:
description: Enable caching for uv dependencies
required: false
default: "true"
cache-suffix:
description: Custom cache key suffix for cache invalidation
required: false
default: ""
working-directory:
description: Working directory for cache glob scoping
required: false
default: "**"
env:
UV_VERSION: "0.5.25"
@@ -27,13 +15,7 @@ runs:
using: composite
steps:
- name: Install uv and set the python version
uses: astral-sh/setup-uv@v6
uses: astral-sh/setup-uv@v5
with:
version: ${{ env.UV_VERSION }}
python-version: ${{ inputs.python-version }}
enable-cache: ${{ inputs.enable-cache }}
cache-dependency-glob: |
${{ inputs.working-directory }}/pyproject.toml
${{ inputs.working-directory }}/uv.lock
${{ inputs.working-directory }}/requirements*.txt
cache-suffix: ${{ inputs.cache-suffix }}


@@ -1,325 +0,0 @@
# Global Development Guidelines for LangChain Projects
## Core Development Principles
### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL
**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**
**Bad - Breaking Change:**
```python
def get_user(id, verbose=False): # Changed from `user_id`
pass
```
**Good - Stable Interface:**
```python
def get_user(user_id: str, verbose: bool = False) -> User:
"""Retrieve user by ID with optional verbose output."""
pass
```
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"` (see the sketch after this list)
- Mark experimental features clearly with docstring warnings (using reStructuredText, like `.. warning::`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
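A minimal sketch of the keyword-only-argument guidance above, extending the `get_user` example without breaking existing call sites (names and the new `timeout` parameter are illustrative):

```python
from typing import Optional

def get_user(user_id: str, verbose: bool = False, *, timeout: Optional[float] = None) -> "User":
    """Retrieve user by ID with optional verbose output.

    Args:
        user_id: Identifier of the user to fetch.
        verbose: Whether to emit verbose output.
        timeout: Request timeout in seconds (new parameter, keyword-only).
    """
    ...

# Existing positional call sites keep working, because the new
# `timeout` parameter can only ever be passed by keyword:
user = get_user("u-123", True)
```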
### 2. Code Quality Standards
**All Python code MUST include type hints and return types.**
**Bad:**
```python
def p(u, d):
return [x for x in u if x not in d]
```
**Good:**
```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
"""Filter out users that are not in the known users set.
Args:
users: List of user identifiers to filter.
known_users: Set of known/valid user identifiers.
Returns:
List of users that are not in the known_users set.
"""
return [user for user in users if user not in known_users]
```
**Style Requirements:**
- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying
### 3. Testing Requirements
**Every new feature or bugfix MUST be covered by unit tests.**
**Test Organization:**
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework
**Test Quality Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
Checklist questions:
- [ ] Does the test suite fail if your new logic is broken?
- [ ] Are all expected behaviors exercised (happy path, invalid input, etc)?
- [ ] Do tests use fixtures or mocks where needed?
```python
def test_filter_unknown_users():
"""Test filtering unknown users from a list."""
users = ["alice", "bob", "charlie"]
known_users = {"alice", "bob"}
result = filter_unknown_users(users, known_users)
assert result == ["charlie"]
assert len(result) == 1
```
### 4. Security and Risk Assessment
**Security Checklist:**
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`) and use a `msg` variable for error messages
- Remove unreachable/commented code before committing
- Avoid race conditions and resource leaks; ensure proper cleanup of file handles, sockets, and threads
**Bad:**
```python
def load_config(path):
with open(path) as f:
return eval(f.read()) # ⚠️ Never eval config
```
**Good:**
```python
import json
def load_config(path: str) -> dict:
with open(path) as f:
return json.load(f)
```
### 5. Documentation Standards
**Use Google-style docstrings with Args section for all public functions.**
**Insufficient Documentation:**
```python
def send_email(to, msg):
"""Send an email to a recipient."""
```
**Complete Documentation:**
```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
"""
Send an email to a recipient with specified priority.
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level (``'low'``, ``'normal'``, ``'high'``).
Returns:
True if email was sent successfully, False otherwise.
Raises:
InvalidEmailError: If the email address format is invalid.
SMTPConnectionError: If unable to connect to email server.
"""
```
**Documentation Guidelines:**
- Types go in function signatures, NOT in docstrings
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Use reStructuredText for docstrings to enable rich formatting
📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.
### 6. Architectural Improvements
**When you encounter code that could be improved, suggest better designs:**
**Poor Design:**
```python
def process_data(data, db_conn, email_client, logger):
# Function doing too many things
validated = validate_data(data)
result = db_conn.save(validated)
email_client.send_notification(result)
logger.log(f"Processed {len(data)} items")
return result
```
**Better Design:**
```python
@dataclass
class ProcessingResult:
"""Result of data processing operation."""
items_processed: int
success: bool
errors: List[str] = field(default_factory=list)
class DataProcessor:
"""Handles data validation, storage, and notification."""
def __init__(self, db_conn: Database, email_client: EmailClient):
self.db = db_conn
self.email = email_client
def process(self, data: List[dict]) -> ProcessingResult:
"""Process and store data with notifications."""
validated = self._validate_data(data)
result = self.db.save(validated)
self._notify_completion(result)
return result
```
**Design Improvement Areas:**
If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:
- Reduce code duplication through shared utilities
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection
- Add clarity without adding complexity
- Prefer dataclasses for structured data
## Development Tools & Commands
### Package Management
```bash
# Add package
uv add package-name
# Sync project dependencies
uv sync
uv lock
```
### Testing
```bash
# Run unit tests (no network)
make test
# Don't run integration tests, as API keys must be set
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
### Code Quality
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
### Dependency Management Patterns
**Local Development Dependencies:**
```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```
**For tools, use the `@tool` decorator from `langchain_core.tools`:**
```python
from langchain_core.tools import tool
@tool
def search_database(query: str) -> str:
"""Search the database for relevant information.
Args:
query: The search query string.
"""
# Implementation here
return results
```
## Commit Standards
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Implement proper streaming support where applicable
- Avoid deprecated components like legacy `LLMChain`
### Partner Integrations
- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.)
- Include comprehensive integration tests
- Document API key requirements and authentication
---
## Quick Reference Checklist
Before submitting code changes:
- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format


@@ -1,11 +0,0 @@
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
# and
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"


@@ -1,80 +0,0 @@
# Label PRs (config)
# Automatically applies labels based on changed files and branch patterns
# Core packages
core:
- changed-files:
- any-glob-to-any-file:
- "libs/core/**/*"
langchain:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain/**/*"
- "libs/langchain_v1/**/*"
v1:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain_v1/**/*"
cli:
- changed-files:
- any-glob-to-any-file:
- "libs/cli/**/*"
standard-tests:
- changed-files:
- any-glob-to-any-file:
- "libs/standard-tests/**/*"
# Partner integrations
integration:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/**/*"
# Infrastructure and DevOps
infra:
- changed-files:
- any-glob-to-any-file:
- ".github/**/*"
- "Makefile"
- ".pre-commit-config.yaml"
- "scripts/**/*"
- "docker/**/*"
- "Dockerfile*"
github_actions:
- changed-files:
- any-glob-to-any-file:
- ".github/workflows/**/*"
- ".github/actions/**/*"
dependencies:
- changed-files:
- any-glob-to-any-file:
- "**/pyproject.toml"
- "uv.lock"
- "**/requirements*.txt"
- "**/poetry.lock"
# Documentation
documentation:
- changed-files:
- any-glob-to-any-file:
- "docs/**/*"
- "**/*.md"
- "**/*.rst"
- "**/README*"
# Security related changes
security:
- changed-files:
- any-glob-to-any-file:
- "**/*security*"
- "**/*auth*"
- "**/*credential*"
- "**/*secret*"
- "**/*token*"
- ".github/workflows/security*"

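A minimal sketch of how a changed-files labeler of this shape evaluates its config, approximating the action's glob matching with `fnmatch` (the helper names are hypothetical, not the action's code):

```python
from fnmatch import fnmatch

# Trimmed-down copy of the mapping above: label -> glob patterns.
LABEL_GLOBS: dict[str, list[str]] = {
    "core": ["libs/core/**/*"],
    "cli": ["libs/cli/**/*"],
    "infra": [".github/**/*", "Makefile"],
}

def labels_for(changed_files: list[str]) -> set[str]:
    """Return every label whose patterns match at least one changed file.

    Note: fnmatch's `*` crosses path separators, so this only approximates
    the minimatch-style `**` globbing the real action uses.
    """
    return {
        label
        for label, patterns in LABEL_GLOBS.items()
        if any(fnmatch(path, pattern) for path in changed_files for pattern in patterns)
    }

print(sorted(labels_for(["libs/cli/langchain_cli/cli.py", "Makefile"])))
# ['cli', 'infra']
```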

@@ -1,41 +0,0 @@
# PR title labeler config
#
# Labels PRs based on conventional commit patterns in titles
#
# Format: type(scope): description or type!: description (breaking)
add-missing-labels: true
clear-prexisting: false
include-commits: false
include-title: true
label-for-breaking-changes: breaking
label-mapping:
documentation: ["docs"]
feature: ["feat"]
fix: ["fix"]
infra: ["build", "ci", "chore"]
integration:
[
"anthropic",
"chroma",
"deepseek",
"exa",
"fireworks",
"groq",
"huggingface",
"mistralai",
"nomic",
"ollama",
"openai",
"perplexity",
"prompty",
"qdrant",
"xai",
]
linting: ["style"]
performance: ["perf"]
refactor: ["refactor"]
release: ["release"]
revert: ["revert"]
tests: ["test"]

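A hedged sketch of the title parsing this configuration implies, including the `breaking` label for `!`-suffixed titles (the regex and the subset mapping are illustrative, not the labeler's actual implementation):

```python
import re

# Subset of the label-mapping above: commit type -> label to apply.
TYPE_TO_LABEL = {"feat": "feature", "fix": "fix", "docs": "documentation", "perf": "performance"}

TITLE_RE = re.compile(r"^(?P<type>[a-z]+)(?:\((?P<scope>[^)]+)\))?(?P<breaking>!)?: .+")

def labels_for_title(title: str) -> list[str]:
    """Map a conventional-commit PR title to labels."""
    match = TITLE_RE.match(title)
    if match is None:
        return []
    labels = []
    if (label := TYPE_TO_LABEL.get(match["type"])) is not None:
        labels.append(label)
    if match["breaking"]:
        labels.append("breaking")
    return labels

print(labels_for_title("feat(core)!: add multi-tenant support"))  # ['feature', 'breaking']
```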

@@ -1,38 +1,24 @@
"""Analyze git diffs to determine which directories need to be tested.
Intelligently determines which LangChain packages and directories need to be tested,
linted, or built based on the changes. Handles dependency relationships between
packages, maps file changes to appropriate CI job configurations, and outputs JSON
configurations for GitHub Actions.
- Maps changed files to affected package directories (libs/core, libs/partners/*, etc.)
- Builds dependency graph to include dependent packages when core components change
- Generates test matrix configurations with appropriate Python versions
- Handles special cases for Pydantic version testing and performance benchmarks
Used as part of the check_diffs workflow.
"""
import glob
import json
import os
import sys
from collections import defaultdict
from pathlib import Path
from typing import Dict, List, Set
from pathlib import Path
import tomllib
from get_min_versions import get_min_version_from_toml
from packaging.requirements import Requirement
from get_min_versions import get_min_version_from_toml
LANGCHAIN_DIRS = [
"libs/core",
"libs/text-splitters",
"libs/langchain",
"libs/langchain_v1",
]
# When set to True, we are ignoring core dependents
# when set to True, we are ignoring core dependents
# in order to be able to get CI to pass for each individual
# package that depends on core
# e.g. if you touch core, we don't then add textsplitters/etc to CI
@@ -51,7 +37,8 @@ IGNORED_PARTNERS = [
]
PY_312_MAX_PACKAGES = [
"libs/partners/chroma", # https://github.com/chroma-core/chroma/issues/4382
"libs/partners/voyageai",
"libs/partners/chroma", # https://github.com/chroma-core/chroma/issues/4382
]
@@ -64,9 +51,9 @@ def all_package_dirs() -> Set[str]:
def dependents_graph() -> dict:
"""Construct a mapping of package -> dependents
Done such that we can run tests on all dependents of a package when a change is made.
"""
Construct a mapping of package -> dependents, such that we can
run tests on all dependents of a package when a change is made.
"""
dependents = defaultdict(set)
@@ -98,9 +85,9 @@ def dependents_graph() -> dict:
for depline in extended_deps:
if depline.startswith("-e "):
# editable dependency
assert depline.startswith("-e ../partners/"), (
"Extended test deps should only editable install partner packages"
)
assert depline.startswith(
"-e ../partners/"
), "Extended test deps should only editable install partner packages"
partner = depline.split("partners/")[1]
dep = f"langchain-{partner}"
else:
@@ -133,21 +120,18 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
if job == "test-pydantic":
return _get_pydantic_test_configs(dir_)
if job == "codspeed":
py_versions = ["3.12"] # 3.13 is not yet supported
elif dir_ == "libs/core":
if dir_ == "libs/core":
py_versions = ["3.9", "3.10", "3.11", "3.12", "3.13"]
# custom logic for specific directories
elif dir_ == "libs/partners/milvus":
# milvus doesn't allow 3.12 because they declare deps in funny way
py_versions = ["3.9", "3.11"]
elif dir_ in PY_312_MAX_PACKAGES:
py_versions = ["3.9", "3.12"]
elif dir_ == "libs/langchain" and job == "extended-tests":
py_versions = ["3.9", "3.13"]
elif dir_ == "libs/langchain_v1":
py_versions = ["3.10", "3.13"]
elif dir_ in {"libs/cli"}:
py_versions = ["3.10", "3.13"]
elif dir_ == ".":
# unable to install with 3.13 because tokenizers doesn't support 3.13 yet
@@ -227,8 +211,6 @@ def _get_configs_for_multi_dirs(
)
elif job == "extended-tests":
dirs = list(dirs_to_run["extended-test"])
elif job == "codspeed":
dirs = list(dirs_to_run["codspeed"])
else:
raise ValueError(f"Unknown job: {job}")
@@ -244,7 +226,6 @@ if __name__ == "__main__":
"lint": set(),
"test": set(),
"extended-test": set(),
"codspeed": set(),
}
docs_edited = False
@@ -268,8 +249,6 @@ if __name__ == "__main__":
dirs_to_run["extended-test"].update(LANGCHAIN_DIRS)
dirs_to_run["lint"].add(".")
if file.startswith("libs/core"):
dirs_to_run["codspeed"].add(f"libs/core")
if any(file.startswith(dir_) for dir_ in LANGCHAIN_DIRS):
# add that dir and all dirs after in LANGCHAIN_DIRS
# for extended testing
@@ -285,7 +264,7 @@ if __name__ == "__main__":
dirs_to_run["extended-test"].add(dir_)
elif file.startswith("libs/standard-tests"):
# TODO: update to include all packages that rely on standard-tests (all partner packages)
# Note: won't run on external repo partners
# note: won't run on external repo partners
dirs_to_run["lint"].add("libs/standard-tests")
dirs_to_run["test"].add("libs/standard-tests")
dirs_to_run["lint"].add("libs/cli")
@@ -299,7 +278,7 @@ if __name__ == "__main__":
elif file.startswith("libs/cli"):
dirs_to_run["lint"].add("libs/cli")
dirs_to_run["test"].add("libs/cli")
elif file.startswith("libs/partners"):
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}") and [
@@ -308,7 +287,6 @@ if __name__ == "__main__":
if not filename.startswith(".")
] != ["README.md"]:
dirs_to_run["test"].add(f"libs/partners/{partner_dir}")
dirs_to_run["codspeed"].add(f"libs/partners/{partner_dir}")
# Skip if the directory was deleted or is just a tombstone readme
elif file == "libs/packages.yml":
continue
@@ -317,10 +295,7 @@ if __name__ == "__main__":
f"Unknown lib: {file}. check_diff.py likely needs "
"an update for this new library!"
)
elif file.startswith("docs/") or file in [
"pyproject.toml",
"uv.lock",
]: # docs or root uv files
elif file.startswith("docs/") or file in ["pyproject.toml", "uv.lock"]: # docs or root uv files
docs_edited = True
dirs_to_run["lint"].add(".")
@@ -337,7 +312,6 @@ if __name__ == "__main__":
"compile-integration-tests",
"dependencies",
"test-pydantic",
"codspeed",
]
}
map_job_to_configs["test-doc-imports"] = (

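To make the dependents-graph behavior described in the check_diff.py docstring concrete, here is a small self-contained sketch (the hard-coded edges below are illustrative; the real script derives them from each package's pyproject.toml and test requirements):

```python
from collections import defaultdict

# package -> packages it depends on (illustrative edges only)
DEPENDENCIES = {
    "libs/langchain": {"libs/core", "libs/text-splitters"},
    "libs/text-splitters": {"libs/core"},
    "libs/partners/openai": {"libs/core"},
}

def dependents_graph() -> dict[str, set[str]]:
    """Invert the dependency edges: package -> packages that depend on it."""
    dependents: dict[str, set[str]] = defaultdict(set)
    for package, deps in DEPENDENCIES.items():
        for dep in deps:
            dependents[dep].add(package)
    return dependents

# A change to libs/core should trigger tests for everything downstream.
print(sorted(dependents_graph()["libs/core"]))
# ['libs/langchain', 'libs/partners/openai', 'libs/text-splitters']
```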

@@ -1,21 +1,19 @@
"""Check that no dependencies allow prereleases unless we're releasing a prerelease."""
import sys
import tomllib
if __name__ == "__main__":
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
# read toml file
with open(toml_file, "rb") as file:
toml_data = tomllib.load(file)
# See if we're releasing an rc or dev version
# see if we're releasing an rc
version = toml_data["project"]["version"]
releasing_rc = "rc" in version or "dev" in version
# If not, iterate through dependencies and make sure none allow prereleases
# if not, iterate through dependencies and make sure none allow prereleases
if not releasing_rc:
dependencies = toml_data["project"]["dependencies"]
for dep_version in dependencies:

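A hedged sketch of the check the truncated loop above goes on to perform, assuming the intent stated in the module docstring (the helper below is illustrative, not the script's verbatim logic):

```python
from packaging.requirements import Requirement
from packaging.version import InvalidVersion, Version

def allows_prerelease(dep: str) -> bool:
    """Return True if any version bound in the requirement string is a prerelease."""
    for spec in Requirement(dep).specifier:
        try:
            if Version(spec.version).is_prerelease:
                return True
        except InvalidVersion:  # e.g. wildcard bounds like "==1.*"
            continue
    return False

assert allows_prerelease("langchain-core>=0.3.0rc1")
assert not allows_prerelease("langchain-core>=0.2.43,<0.4.0")
```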

@@ -1,22 +1,24 @@
"""Get minimum versions of dependencies from a pyproject.toml file."""
import sys
from collections import defaultdict
import sys
from typing import Optional
if sys.version_info >= (3, 11):
import tomllib
else:
# For Python 3.10 and below, which doesnt have stdlib tomllib
# for python 3.10 and below, which doesnt have stdlib tomllib
import tomli as tomllib
import re
from typing import List
import requests
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.version import Version, parse
from packaging.version import Version
import requests
from packaging.version import parse
from typing import List
import re
MIN_VERSION_LIBS = [
"langchain-core",
@@ -36,13 +38,14 @@ SKIP_IF_PULL_REQUEST = [
def get_pypi_versions(package_name: str) -> List[str]:
"""Fetch all available versions for a package from PyPI.
"""
Fetch all available versions for a package from PyPI.
Args:
package_name: Name of the package
package_name (str): Name of the package
Returns:
List of all available versions
List[str]: List of all available versions
Raises:
requests.exceptions.RequestException: If PyPI API request fails
@@ -55,26 +58,25 @@ def get_pypi_versions(package_name: str) -> List[str]:
def get_minimum_version(package_name: str, spec_string: str) -> Optional[str]:
"""Find the minimum published version that satisfies the given constraints.
"""
Find the minimum published version that satisfies the given constraints.
Args:
package_name: Name of the package
spec_string: Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")
package_name (str): Name of the package
spec_string (str): Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")
Returns:
Minimum compatible version or None if no compatible version found
Optional[str]: Minimum compatible version or None if no compatible version found
"""
# Rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
# rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
spec_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec_string)
# Rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
# rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
for y in range(1, 10):
spec_string = re.sub(
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}", spec_string
)
# Rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
spec_string = re.sub(rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}", spec_string)
# rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
spec_string = re.sub(
rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x + 1}", spec_string
rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x+1}", spec_string
)
spec_set = SpecifierSet(spec_string)
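As a worked example of the caret rewrites above (illustrative input, not from this diff), `^0.2.43` becomes `>=0.2.43,<0.3` on the `y == 2` loop iteration:

```python
# Worked example of the caret-to-range rewrite for a 0.y.z constraint
import re

spec = "^0.2.43"
spec = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec)         # no-op here
spec = re.sub(r"\^0\.2\.(\d+)", r">=0.2.\1,<0.3", spec)  # y == 2 iteration
print(spec)  # ">=0.2.43,<0.3"
```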
@@ -154,28 +156,25 @@ def get_min_version_from_toml(
def check_python_version(version_string, constraint_string):
"""Check if the given Python version matches the given constraints.
"""
Check if the given Python version matches the given constraints.
Args:
version_string: A string representing the Python version (e.g. "3.8.5").
constraint_string: A string representing the package's Python version
constraints (e.g. ">=3.6, <4.0").
Returns:
True if the version matches the constraints
:param version_string: A string representing the Python version (e.g. "3.8.5").
:param constraint_string: A string representing the package's Python version constraints (e.g. ">=3.6, <4.0").
:return: True if the version matches the constraints, False otherwise.
"""
# Rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
# rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
constraint_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", constraint_string)
# Rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
# rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
for y in range(1, 10):
constraint_string = re.sub(
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}.0", constraint_string
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}.0", constraint_string
)
# Rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
# rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
constraint_string = re.sub(
rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x + 1}.0.0", constraint_string
rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x+1}.0.0", constraint_string
)
try:

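The body of check_python_version is truncated above; a hedged equivalent of the comparison it documents, using `packaging` (inputs are illustrative):

```python
# Sketch: does a Python version satisfy a constraint string?
from packaging.specifiers import SpecifierSet
from packaging.version import Version

print(Version("3.8.5") in SpecifierSet(">=3.6, <4.0"))  # True
print(Version("4.1.0") in SpecifierSet(">=3.6, <4.0"))  # False
```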
View File

@@ -1,19 +1,15 @@
#!/usr/bin/env python
"""Sync libraries from various repositories into this monorepo.
Moves cloned partner packages into libs/partners structure.
"""
"""Script to sync libraries from various repositories into the main langchain repository."""
import os
import shutil
from pathlib import Path
from typing import Any, Dict
import yaml
from pathlib import Path
from typing import Dict, Any
def load_packages_yaml() -> Dict[str, Any]:
"""Load and parse packages.yml."""
"""Load and parse the packages.yml file."""
with open("langchain/libs/packages.yml", "r") as f:
return yaml.safe_load(f)
@@ -32,6 +28,7 @@ def get_target_dir(package_name: str) -> Path:
def clean_target_directories(packages: list) -> None:
"""Remove old directories that will be replaced."""
for package in packages:
target_dir = get_target_dir(package["name"])
if target_dir.exists():
print(f"Removing {target_dir}")
@@ -41,6 +38,7 @@ def clean_target_directories(packages: list) -> None:
def move_libraries(packages: list) -> None:
"""Move libraries from their source locations to the target directories."""
for package in packages:
repo_name = package["repo"].split("/")[1]
source_path = package["path"]
target_dir = get_target_dir(package["name"])
@@ -64,46 +62,29 @@ def move_libraries(packages: list) -> None:
def main():
"""Orchestrate the library sync process."""
"""Main function to orchestrate the library sync process."""
try:
# Load packages configuration
package_yaml = load_packages_yaml()
# Clean/empty target directories in preparation for moving new ones
#
# Only for packages in the langchain-ai org or explicitly included via
# include_in_api_ref, excluding 'langchain' itself and 'langchain-ai21'
clean_target_directories(
[
p
for p in package_yaml["packages"]
if (
p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref")
)
and p["repo"] != "langchain-ai/langchain"
and p["name"]
!= "langchain-ai21" # Skip AI21 due to dependency conflicts
]
)
# Clean target directories
clean_target_directories([
p
for p in package_yaml["packages"]
if (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
])
# Move cloned libraries to their new locations, only for packages in the
# langchain-ai org or explicitly included via include_in_api_ref,
# excluding 'langchain' itself and 'langchain-ai21'
move_libraries(
[
p
for p in package_yaml["packages"]
if not p.get("disabled", False)
and (
p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref")
)
and p["repo"] != "langchain-ai/langchain"
and p["name"]
!= "langchain-ai21" # Skip AI21 due to dependency conflicts
]
)
# Move libraries to their new locations
move_libraries([
p
for p in package_yaml["packages"]
if not p.get("disabled", False)
and (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
])
# Delete partner packages without a pyproject.toml
# Delete ones without a pyproject.toml
for partner in Path("langchain/libs/partners").iterdir():
if partner.is_dir() and not (partner / "pyproject.toml").exists():
print(f"Removing {partner} as it does not have a pyproject.toml")

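To illustrate the filter used in both calls above, here is a sketch with made-up packages.yml entries (the entries are assumptions for illustration, not taken from the repo):

```python
# Illustrative packages.yml entries and whether the filter keeps them
packages = [
    {"name": "langchain-openai", "repo": "langchain-ai/langchain-openai"},  # kept
    {"name": "langchain", "repo": "langchain-ai/langchain"},  # excluded: the monorepo itself
    {"name": "langchain-foo", "repo": "other-org/foo"},  # excluded unless include_in_api_ref
]
kept = [
    p for p in packages
    if (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
    and p["repo"] != "langchain-ai/langchain"
]
print([p["name"] for p in kept])  # ['langchain-openai']
```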
View File

@@ -81,93 +81,56 @@ import time
__version__ = "2022.12+dev"
# Update symlinks only if the platform supports not following them
UPDATE_SYMLINKS = bool(os.utime in getattr(os, "supports_follow_symlinks", []))
UPDATE_SYMLINKS = bool(os.utime in getattr(os, 'supports_follow_symlinks', []))
# Call os.path.normpath() only if not in a POSIX platform (Windows)
NORMALIZE_PATHS = os.path.sep != "/"
NORMALIZE_PATHS = (os.path.sep != '/')
# How many files to process in each batch when re-trying merge commits
STEPMISSING = 100
# (Extra) keywords for the os.utime() call performed by touch()
UTIME_KWS = {} if not UPDATE_SYMLINKS else {"follow_symlinks": False}
UTIME_KWS = {} if not UPDATE_SYMLINKS else {'follow_symlinks': False}
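A minimal sketch of how UTIME_KWS feeds the os.utime() call mentioned above, assuming touch() wraps os.utime() directly (the wrapper shown here is an assumption):

```python
# Sketch: update a file's timestamps, not following symlinks where supported
import os

UPDATE_SYMLINKS = bool(os.utime in getattr(os, "supports_follow_symlinks", []))
UTIME_KWS = {} if not UPDATE_SYMLINKS else {"follow_symlinks": False}

def touch(path: str, mtime: float) -> None:
    # Sets both atime and mtime to the commit time
    os.utime(path, (mtime, mtime), **UTIME_KWS)
```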
# Command-line interface ######################################################
def parse_args():
parser = argparse.ArgumentParser(description=__doc__.split("\n---")[0])
parser = argparse.ArgumentParser(
description=__doc__.split('\n---')[0])
group = parser.add_mutually_exclusive_group()
group.add_argument(
"--quiet",
"-q",
dest="loglevel",
action="store_const",
const=logging.WARNING,
default=logging.INFO,
help="Suppress informative messages and summary statistics.",
)
group.add_argument(
"--verbose",
"-v",
action="count",
help="""
group.add_argument('--quiet', '-q', dest='loglevel',
action="store_const", const=logging.WARNING, default=logging.INFO,
help="Suppress informative messages and summary statistics.")
group.add_argument('--verbose', '-v', action="count", help="""
Print additional information for each processed file.
Specify twice to further increase verbosity.
""",
)
""")
parser.add_argument(
"--cwd",
"-C",
metavar="DIRECTORY",
help="""
parser.add_argument('--cwd', '-C', metavar="DIRECTORY", help="""
Run as if %(prog)s was started in directory %(metavar)s.
This affects how --work-tree, --git-dir and PATHSPEC arguments are handled.
See 'man 1 git' or 'git --help' for more information.
""",
)
""")
parser.add_argument(
"--git-dir",
dest="gitdir",
metavar="GITDIR",
help="""
parser.add_argument('--git-dir', dest='gitdir', metavar="GITDIR", help="""
Path to the git repository, by default auto-discovered by searching
the current directory and its parents for a .git/ subdirectory.
""",
)
""")
parser.add_argument(
"--work-tree",
dest="workdir",
metavar="WORKTREE",
help="""
parser.add_argument('--work-tree', dest='workdir', metavar="WORKTREE", help="""
Path to the work tree root, by default the parent of GITDIR if it's
automatically discovered, or the current directory if GITDIR is set.
""",
)
""")
parser.add_argument(
"--force",
"-f",
default=False,
action="store_true",
help="""
parser.add_argument('--force', '-f', default=False, action="store_true", help="""
Force updating files with uncommitted modifications.
Untracked files and uncommitted deletions, renames and additions are
always ignored.
""",
)
""")
parser.add_argument(
"--merge",
"-m",
default=False,
action="store_true",
help="""
parser.add_argument('--merge', '-m', default=False, action="store_true", help="""
Include merge commits.
Leads to more recent times and more files per commit, thus with the same
time, which may or may not be what you want.
@@ -175,130 +138,71 @@ def parse_args():
are found sooner, which can improve performance, sometimes substantially.
But as merge commits are usually huge, processing them may also take longer.
By default, merge commits are only used for files missing from regular commits.
""",
)
""")
parser.add_argument(
"--first-parent",
default=False,
action="store_true",
help="""
parser.add_argument('--first-parent', default=False, action="store_true", help="""
Consider only the first parent, the "main branch", when evaluating merge commits.
Only effective when merge commits are processed, either when --merge is
used or when finding missing files after the first regular log search.
See --skip-missing.
""",
)
""")
parser.add_argument(
"--skip-missing",
"-s",
dest="missing",
default=True,
action="store_false",
help="""
parser.add_argument('--skip-missing', '-s', dest="missing", default=True,
action="store_false", help="""
Do not try to find missing files.
If merge commits were not evaluated with --merge and some files were
not found in regular commits, by default %(prog)s searches for these
files again in the merge commits.
This option disables this retry, so files found only in merge commits
will not have their timestamp updated.
""",
)
""")
parser.add_argument(
"--no-directories",
"-D",
dest="dirs",
default=True,
action="store_false",
help="""
parser.add_argument('--no-directories', '-D', dest='dirs', default=True,
action="store_false", help="""
Do not update directory timestamps.
By default, use the time of its most recently created, renamed or deleted file.
Note that just modifying a file will NOT update its directory time.
""",
)
""")
parser.add_argument(
"--test",
"-t",
default=False,
action="store_true",
help="Test run: do not actually update any file timestamp.",
)
parser.add_argument('--test', '-t', default=False, action="store_true",
help="Test run: do not actually update any file timestamp.")
parser.add_argument(
"--commit-time",
"-c",
dest="commit_time",
default=False,
action="store_true",
help="Use commit time instead of author time.",
)
parser.add_argument('--commit-time', '-c', dest='commit_time', default=False,
action='store_true', help="Use commit time instead of author time.")
parser.add_argument(
"--oldest-time",
"-o",
dest="reverse_order",
default=False,
action="store_true",
help="""
parser.add_argument('--oldest-time', '-o', dest='reverse_order', default=False,
action='store_true', help="""
Update times based on the oldest, instead of the most recent commit of a file.
This reverses the order in which the git log is processed to emulate a
file "creation" date. Note this will be inaccurate for files deleted and
re-created at later dates.
""",
)
""")
parser.add_argument(
"--skip-older-than",
metavar="SECONDS",
type=int,
help="""
parser.add_argument('--skip-older-than', metavar='SECONDS', type=int, help="""
Ignore files that are currently older than %(metavar)s.
Useful in workflows that assume such files already have a correct timestamp,
as it may improve performance by processing fewer files.
""",
)
""")
parser.add_argument(
"--skip-older-than-commit",
"-N",
default=False,
action="store_true",
help="""
parser.add_argument('--skip-older-than-commit', '-N', default=False,
action='store_true', help="""
Ignore files older than the timestamp it would be updated to.
Such files may be considered "original", likely in the author's repository.
""",
)
""")
parser.add_argument(
"--unique-times",
default=False,
action="store_true",
help="""
parser.add_argument('--unique-times', default=False, action="store_true", help="""
Set the microseconds to a unique value per commit.
Allows telling apart changes that would otherwise have identical timestamps,
as git's time accuracy is in seconds.
""",
)
""")
parser.add_argument(
"pathspec",
nargs="*",
metavar="PATHSPEC",
help="""
parser.add_argument('pathspec', nargs='*', metavar='PATHSPEC', help="""
Only modify paths matching %(metavar)s, relative to current directory.
By default, update all but untracked files and submodules.
""",
)
""")
parser.add_argument(
"--version",
"-V",
action="version",
version="%(prog)s version {version}".format(version=get_version()),
)
parser.add_argument('--version', '-V', action='version',
version='%(prog)s version {version}'.format(version=get_version()))
args_ = parser.parse_args()
if args_.verbose:
@@ -308,18 +212,17 @@ def parse_args():
def get_version(version=__version__):
if not version.endswith("+dev"):
if not version.endswith('+dev'):
return version
try:
cwd = os.path.dirname(os.path.realpath(__file__))
return Git(cwd=cwd, errors=False).describe().lstrip("v")
return Git(cwd=cwd, errors=False).describe().lstrip('v')
except Git.Error:
return "-".join((version, "unknown"))
return '-'.join((version, "unknown"))
# Helper functions ############################################################
def setup_logging():
"""Add TRACE logging level and corresponding method, return the root logger"""
logging.TRACE = TRACE = logging.DEBUG // 2
@@ -352,13 +255,11 @@ def normalize(path):
if path and path[0] == '"':
# Python 2: path = path[1:-1].decode("string-escape")
# Python 3: https://stackoverflow.com/a/46650050/624066
path = (
path[1:-1] # Remove enclosing double quotes
.encode("latin1") # Convert to bytes, required by 'unicode-escape'
.decode("unicode-escape") # Perform the actual octal-escaping decode
.encode("latin1") # 1:1 mapping to bytes, UTF-8 encoded
.decode("utf8", "surrogateescape")
) # Decode from UTF-8
path = (path[1:-1] # Remove enclosing double quotes
.encode('latin1') # Convert to bytes, required by 'unicode-escape'
.decode('unicode-escape') # Perform the actual octal-escaping decode
.encode('latin1') # 1:1 mapping to bytes, UTF-8 encoded
.decode('utf8', 'surrogateescape')) # Decode from UTF-8
if NORMALIZE_PATHS:
# Make sure the slash matches the OS; for Windows we need a backslash
path = os.path.normpath(path)
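A worked example of the quoted-path decoding chain above, for a non-ASCII path as git prints it (the filename is illustrative):

```python
# Decode a git-quoted path with octal escapes, e.g. "caf\303\251.txt"
quoted = '"caf\\303\\251.txt"'  # raw line as printed by git
path = (quoted[1:-1]                      # strip enclosing double quotes
        .encode("latin1")                 # to bytes, for 'unicode-escape'
        .decode("unicode-escape")         # resolve the octal escapes
        .encode("latin1")                 # 1:1 back to UTF-8 bytes
        .decode("utf8", "surrogateescape"))
print(path)  # café.txt
```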
@@ -381,12 +282,12 @@ def touch_ns(path, mtime_ns):
def isodate(secs: int):
# time.localtime() accepts floats, but discards fractional part
return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(secs))
return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(secs))
def isodate_ns(ns: int):
# for integers fromtimestamp() is equivalent and ~16% slower than isodate()
return datetime.datetime.fromtimestamp(ns / 1000000000).isoformat(sep=" ")
return datetime.datetime.fromtimestamp(ns / 1000000000).isoformat(sep=' ')
def get_mtime_ns(secs: int, idx: int):
@@ -404,49 +305,35 @@ def get_mtime_path(path):
# Git class and parse_log(), the heart of the script ##########################
class Git:
def __init__(self, workdir=None, gitdir=None, cwd=None, errors=True):
self.gitcmd = ["git"]
self.gitcmd = ['git']
self.errors = errors
self._proc = None
if workdir:
self.gitcmd.extend(("--work-tree", workdir))
if gitdir:
self.gitcmd.extend(("--git-dir", gitdir))
if cwd:
self.gitcmd.extend(("-C", cwd))
if workdir: self.gitcmd.extend(('--work-tree', workdir))
if gitdir: self.gitcmd.extend(('--git-dir', gitdir))
if cwd: self.gitcmd.extend(('-C', cwd))
self.workdir, self.gitdir = self._get_repo_dirs()
def ls_files(self, paths: list = None):
return (normalize(_) for _ in self._run("ls-files --full-name", paths))
return (normalize(_) for _ in self._run('ls-files --full-name', paths))
def ls_dirty(self, force=False):
return (
normalize(_[3:].split(" -> ", 1)[-1])
for _ in self._run("status --porcelain")
if _[:2] != "??" and (not force or (_[0] in ("R", "A") or _[1] == "D"))
)
return (normalize(_[3:].split(' -> ', 1)[-1])
for _ in self._run('status --porcelain')
if _[:2] != '??' and (not force or (_[0] in ('R', 'A')
or _[1] == 'D')))
def log(
self,
merge=False,
first_parent=False,
commit_time=False,
reverse_order=False,
paths: list = None,
):
cmd = "whatchanged --pretty={}".format("%ct" if commit_time else "%at")
if merge:
cmd += " -m"
if first_parent:
cmd += " --first-parent"
if reverse_order:
cmd += " --reverse"
def log(self, merge=False, first_parent=False, commit_time=False,
reverse_order=False, paths: list = None):
cmd = 'whatchanged --pretty={}'.format('%ct' if commit_time else '%at')
if merge: cmd += ' -m'
if first_parent: cmd += ' --first-parent'
if reverse_order: cmd += ' --reverse'
return self._run(cmd, paths)
def describe(self):
return self._run("describe --tags", check=True)[0]
return self._run('describe --tags', check=True)[0]
def terminate(self):
if self._proc is None:
@@ -458,22 +345,18 @@ class Git:
pass
def _get_repo_dirs(self):
return (
os.path.normpath(_)
for _ in self._run(
"rev-parse --show-toplevel --absolute-git-dir", check=True
)
)
return (os.path.normpath(_) for _ in
self._run('rev-parse --show-toplevel --absolute-git-dir', check=True))
def _run(self, cmdstr: str, paths: list = None, output=True, check=False):
cmdlist = self.gitcmd + shlex.split(cmdstr)
if paths:
cmdlist.append("--")
cmdlist.append('--')
cmdlist.extend(paths)
popen_args = dict(universal_newlines=True, encoding="utf8")
popen_args = dict(universal_newlines=True, encoding='utf8')
if not self.errors:
popen_args["stderr"] = subprocess.DEVNULL
log.trace("Executing: %s", " ".join(cmdlist))
popen_args['stderr'] = subprocess.DEVNULL
log.trace("Executing: %s", ' '.join(cmdlist))
if not output:
return subprocess.call(cmdlist, **popen_args)
if check:
@@ -496,26 +379,30 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
mtime = 0
datestr = isodate(0)
for line in git.log(
merge, args.first_parent, args.commit_time, args.reverse_order, filterlist
merge,
args.first_parent,
args.commit_time,
args.reverse_order,
filterlist
):
stats["loglines"] += 1
stats['loglines'] += 1
# Blank line between Date and list of files
if not line:
continue
# Date line
if line[0] != ":": # Faster than `not line.startswith(':')`
stats["commits"] += 1
if line[0] != ':': # Faster than `not line.startswith(':')`
stats['commits'] += 1
mtime = int(line)
if args.unique_times:
mtime = get_mtime_ns(mtime, stats["commits"])
mtime = get_mtime_ns(mtime, stats['commits'])
if args.debug:
datestr = isodate(mtime)
continue
# File line: three tokens if it describes a renaming, otherwise two
tokens = line.split("\t")
tokens = line.split('\t')
# Possible statuses:
# M: Modified (content changed)
@@ -524,7 +411,7 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
# T: Type changed: to/from regular file, symlinks, submodules
# R099: Renamed (moved), with % of unchanged content. 100 = pure rename
# Not possible in log: C=Copied, U=Unmerged, X=Unknown, B=pairing Broken
status = tokens[0].split(" ")[-1]
status = tokens[0].split(' ')[-1]
file = tokens[-1]
# Handles non-ASCII chars and OS path separator
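For reference, file lines in git's raw log output are tab-separated; a sketch of how the tokenizing above applies to one such line (hashes and path are illustrative):

```python
# Example raw line from `git whatchanged --pretty=%at` and how it parses
line = ":100644 100644 1a2b3c4 5d6e7f8 M\tlibs/core/pyproject.toml"
tokens = line.split("\t")
status = tokens[0].split(" ")[-1]  # "M" (modified)
file = tokens[-1]                  # "libs/core/pyproject.toml"
```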
@@ -532,76 +419,56 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
def do_file():
if args.skip_older_than_commit and get_mtime_path(file) <= mtime:
stats["skip"] += 1
stats['skip'] += 1
return
if args.debug:
log.debug(
"%d\t%d\t%d\t%s\t%s",
stats["loglines"],
stats["commits"],
stats["files"],
datestr,
file,
)
log.debug("%d\t%d\t%d\t%s\t%s",
stats['loglines'], stats['commits'], stats['files'],
datestr, file)
try:
touch(os.path.join(git.workdir, file), mtime)
stats["touches"] += 1
stats['touches'] += 1
except Exception as e:
log.error("ERROR: %s: %s", e, file)
stats["errors"] += 1
stats['errors'] += 1
def do_dir():
if args.debug:
log.debug(
"%d\t%d\t-\t%s\t%s",
stats["loglines"],
stats["commits"],
datestr,
"{}/".format(dirname or "."),
)
log.debug("%d\t%d\t-\t%s\t%s",
stats['loglines'], stats['commits'],
datestr, "{}/".format(dirname or '.'))
try:
touch(os.path.join(git.workdir, dirname), mtime)
stats["dirtouches"] += 1
stats['dirtouches'] += 1
except Exception as e:
log.error("ERROR: %s: %s", e, dirname)
stats["direrrors"] += 1
stats['direrrors'] += 1
if file in filelist:
stats["files"] -= 1
stats['files'] -= 1
filelist.remove(file)
do_file()
if args.dirs and status in ("A", "D"):
if args.dirs and status in ('A', 'D'):
dirname = os.path.dirname(file)
if dirname in dirlist:
dirlist.remove(dirname)
do_dir()
# All files done?
if not stats["files"]:
if not stats['files']:
git.terminate()
return
# Main Logic ##################################################################
def main():
start = time.time() # yes, Wall time. CPU time is not realistic for users.
stats = {
_: 0
for _ in (
"loglines",
"commits",
"touches",
"skip",
"errors",
"dirtouches",
"direrrors",
)
}
stats = {_: 0 for _ in ('loglines', 'commits', 'touches', 'skip', 'errors',
'dirtouches', 'direrrors')}
logging.basicConfig(level=args.loglevel, format="%(message)s")
logging.basicConfig(level=args.loglevel, format='%(message)s')
log.trace("Arguments: %s", args)
# First things first: Where and Who are we?
@@ -632,16 +499,13 @@ def main():
# Symlink (to file, to dir or broken - git handles the same way)
if not UPDATE_SYMLINKS and os.path.islink(fullpath):
log.warning(
"WARNING: Skipping symlink, no OS support for updates: %s", path
)
log.warning("WARNING: Skipping symlink, no OS support for updates: %s",
path)
continue
# skip files which are older than given threshold
if (
args.skip_older_than
and start - get_mtime_path(fullpath) > args.skip_older_than
):
if (args.skip_older_than
and start - get_mtime_path(fullpath) > args.skip_older_than):
continue
# Always add files relative to worktree root
@@ -655,17 +519,15 @@ def main():
else:
dirty = set(git.ls_dirty())
if dirty:
log.warning(
"WARNING: Modified files in the working directory were ignored."
"\nTo include such files, commit your changes or use --force."
)
log.warning("WARNING: Modified files in the working directory were ignored."
"\nTo include such files, commit your changes or use --force.")
filelist -= dirty
# Build dir list to be processed
dirlist = set(os.path.dirname(_) for _ in filelist) if args.dirs else set()
stats["totalfiles"] = stats["files"] = len(filelist)
log.info("{0:,} files to be processed in work dir".format(stats["totalfiles"]))
stats['totalfiles'] = stats['files'] = len(filelist)
log.info("{0:,} files to be processed in work dir".format(stats['totalfiles']))
if not filelist:
# Nothing to do. Exit silently and without errors, just like git does
@@ -682,18 +544,10 @@ def main():
if args.missing and not args.merge:
filterlist = list(filelist)
missing = len(filterlist)
log.info(
"{0:,} files not found in log, trying merge commits".format(missing)
)
log.info("{0:,} files not found in log, trying merge commits".format(missing))
for i in range(0, missing, STEPMISSING):
parse_log(
filelist,
dirlist,
stats,
git,
merge=True,
filterlist=filterlist[i : i + STEPMISSING],
)
parse_log(filelist, dirlist, stats, git,
merge=True, filterlist=filterlist[i:i + STEPMISSING])
# Still missing some?
for file in filelist:
@@ -702,33 +556,29 @@ def main():
# Final statistics
# Suggestion: use git-log --before=mtime to brag about skipped log entries
def log_info(msg, *a, width=13):
ifmt = "{:%d,}" % (width,) # not using 'n' for consistency with ffmt
ffmt = "{:%d,.2f}" % (width,)
ifmt = '{:%d,}' % (width,) # not using 'n' for consistency with ffmt
ffmt = '{:%d,.2f}' % (width,)
# %-formatting lacks a thousand separator, must pre-render with .format()
log.info(msg.replace("%d", ifmt).replace("%f", ffmt).format(*a))
log.info(msg.replace('%d', ifmt).replace('%f', ffmt).format(*a))
log_info(
"Statistics:\n%f seconds\n%d log lines processed\n%d commits evaluated",
time.time() - start,
stats["loglines"],
stats["commits"],
)
"Statistics:\n"
"%f seconds\n"
"%d log lines processed\n"
"%d commits evaluated",
time.time() - start, stats['loglines'], stats['commits'])
if args.dirs:
if stats["direrrors"]:
log_info("%d directory update errors", stats["direrrors"])
log_info("%d directories updated", stats["dirtouches"])
if stats['direrrors']: log_info("%d directory update errors", stats['direrrors'])
log_info("%d directories updated", stats['dirtouches'])
if stats["touches"] != stats["totalfiles"]:
log_info("%d files", stats["totalfiles"])
if stats["skip"]:
log_info("%d files skipped", stats["skip"])
if stats["files"]:
log_info("%d files missing", stats["files"])
if stats["errors"]:
log_info("%d file update errors", stats["errors"])
if stats['touches'] != stats['totalfiles']:
log_info("%d files", stats['totalfiles'])
if stats['skip']: log_info("%d files skipped", stats['skip'])
if stats['files']: log_info("%d files missing", stats['files'])
if stats['errors']: log_info("%d file update errors", stats['errors'])
log_info("%d files updated", stats["touches"])
log_info("%d files updated", stats['touches'])
if args.test:
log.info("TEST RUN - No files modified!")

.github/workflows/.codespell-exclude vendored Normal file (6 lines)
View File

@@ -0,0 +1,6 @@
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx
self.assertIn(
from trulens_eval import Tru
tru = Tru()

View File

@@ -1,12 +1,4 @@
# Validates that a package's integration tests compile without syntax or import errors.
#
# (If an integration test fails to compile, it won't run.)
#
# Called as part of check_diffs.yml workflow
#
# Runs pytest with compile marker to check syntax/imports.
name: '🔗 Compile Integration Tests'
name: compile-integration-test
on:
workflow_call:
@@ -20,9 +12,6 @@ on:
type: string
description: "Python version to use"
permissions:
contents: read
env:
UV_FROZEN: "true"
@@ -33,26 +22,24 @@ jobs:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: 'Python ${{ inputs.python-version }}'
name: "uv run pytest -m compile tests/integration_tests #${{ inputs.python-version }}"
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
- name: Set up Python ${{ inputs.python-version }} + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: compile-integration-tests-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Integration Dependencies'
- name: Install integration dependencies
shell: bash
run: uv sync --group test --group test_integration
- name: '🔗 Check Integration Tests Compile'
- name: Check integration tests compile
shell: bash
run: uv run pytest -m compile tests/integration_tests
- name: '🧹 Verify Clean Working Directory'
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu

View File

@@ -1,12 +1,4 @@
# Runs `make integration_tests` on the specified package.
#
# Manually triggered via workflow_dispatch for testing with real APIs.
#
# Installs integration test dependencies and executes full test suite.
name: '🚀 Integration Tests'
run-name: 'Test ${{ inputs.working-directory }} on Python ${{ inputs.python-version }}'
name: Integration tests
on:
workflow_dispatch:
@@ -19,10 +11,6 @@ on:
required: true
type: string
description: "Python version to use"
default: "3.11"
permissions:
contents: read
env:
UV_FROZEN: "true"
@@ -33,30 +21,26 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: 'Python ${{ inputs.python-version }}'
name: Python ${{ inputs.python-version }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
- name: Set up Python ${{ inputs.python-version }} + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: integration-tests-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Integration Dependencies'
- name: Install dependencies
shell: bash
run: uv sync --group test --group test_integration
- name: '🚀 Run Integration Tests'
- name: Run integration tests
shell: bash
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
@@ -83,6 +67,7 @@ jobs:
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
@@ -90,7 +75,7 @@ jobs:
run: |
make integration_tests
- name: 'Ensure testing did not create/modify files'
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu

View File

@@ -1,11 +1,4 @@
# Runs linting.
#
# Uses the package's Makefile to run the checks, specifically the
# `lint_package` and `lint_tests` targets.
#
# Called as part of check_diffs.yml workflow.
name: '🧹 Linting'
name: lint
on:
workflow_call:
@@ -19,9 +12,6 @@ on:
type: string
description: "Python version to use"
permissions:
contents: read
env:
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
@@ -31,45 +21,56 @@ env:
UV_FROZEN: "true"
jobs:
# Linting job - runs quality checks on package and test code
build:
name: 'Python ${{ inputs.python-version }}'
name: "make lint #${{ inputs.python-version }}"
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
- name: Set up Python ${{ inputs.python-version }} + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: lint-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Lint & Typing Dependencies'
- name: Install dependencies
# Also installs dev/lint/test/typing dependencies, to ensure we have
# type hints for as many of our libraries as possible.
# This helps catch errors that require dependencies to be spotted, for example:
# https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
#
# If you change this configuration, make sure to change the `cache-key`
# in the `poetry_setup` action above to stop using the old cache.
# It doesn't matter how you change it, any change will cause a cache-bust.
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --group lint --group typing
- name: '🔍 Analyze Package Code with Linters'
- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}
run: |
make lint_package
- name: '📦 Install Test Dependencies (non-partners)'
# (For directories NOT starting with libs/partners/)
- name: Install unit test dependencies
# Also installs dev/lint/test/typing dependencies, to ensure we have
# type hints for as many of our libraries as possible.
# This helps catch errors that require dependencies to be spotted, for example:
# https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
#
# If you change this configuration, make sure to change the `cache-key`
# in the `poetry_setup` action above to stop using the old cache.
# It doesn't matter how you change it, any change will cause a cache-bust.
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --inexact --group test
- name: '📦 Install Test Dependencies'
- name: Install unit+integration test dependencies
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --inexact --group test --group test_integration
- name: '🔍 Analyze Test Code with Linters'
- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}
run: |
make lint_tests

View File

@@ -1,11 +1,5 @@
# Builds and publishes LangChain packages to PyPI.
#
# Manually triggered, though can be used as a reusable workflow (workflow_call).
#
# Handles version bumping, building, and publishing to PyPI with authentication.
name: '🚀 Package Release'
run-name: 'Release ${{ inputs.working-directory }} ${{ inputs.release-version }}'
name: release
run-name: Release ${{ inputs.working-directory }} by @${{ github.actor }}
on:
workflow_call:
inputs:
@@ -20,16 +14,11 @@ on:
type: string
description: "From which folder this pipeline executes"
default: 'libs/langchain'
release-version:
required: true
type: string
default: '0.1.0'
description: "New version of package being released"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!) - Only use for hotfixes"
description: "Release from a non-master branch (danger!)"
env:
PYTHON_VERSION: "3.11"
@@ -37,8 +26,6 @@ env:
UV_NO_SYNC: "true"
jobs:
# Build the distribution package and extract version info
# Runs in isolated environment with minimal permissions for security
build:
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
environment: Scheduled testing
@@ -49,7 +36,7 @@ jobs:
version: ${{ steps.check-version.outputs.version }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
@@ -58,8 +45,8 @@ jobs:
# We want to keep this build stage *separate* from the release stage,
# so that there's no sharing of permissions between them.
# (Release stage has trusted publishing and GitHub repo contents write access,
#
# The release stage has trusted publishing and GitHub repo contents write access,
# and we want to keep the scope of that access limited just to the release job.
# Otherwise, a malicious `build` step (e.g. via a compromised dependency)
# could get access to our GitHub or PyPI credentials.
#
@@ -77,7 +64,7 @@ jobs:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Check version
- name: Check Version
id: check-version
shell: python
working-directory: ${{ inputs.working-directory }}
@@ -98,7 +85,7 @@ jobs:
outputs:
release-body: ${{ steps.generate-release-body.outputs.release-body }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain
path: langchain
@@ -106,7 +93,7 @@ jobs:
${{ inputs.working-directory }}
ref: ${{ github.ref }} # this scopes to just ref'd branch
fetch-depth: 0 # this fetches entire commit history
- name: Check tags
- name: Check Tags
id: check-tags
shell: bash
working-directory: langchain/${{ inputs.working-directory }}
@@ -122,7 +109,7 @@ jobs:
# Look for the latest release of the same base version
REGEX="^$PKG_NAME==$BASE_VERSION\$"
PREV_TAG=$(git tag --sort=-creatordate | (grep -P "$REGEX" || true) | head -1)
# If no exact base version match, look for the latest release of any kind
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
@@ -133,7 +120,7 @@ jobs:
PREV_TAG="$PKG_NAME==${VERSION%.*}.$(( ${VERSION##*.} - 1 ))"; [[ "${VERSION##*.}" -eq 0 ]] && PREV_TAG=""
# backup case if releasing e.g. 0.3.0, looks up last release
# note if last release (chronologically) was e.g. 0.1.47 it will get
# note if last release (chronologically) was e.g. 0.1.47 it will get
# that instead of the last 0.2 release
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
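A sketch of what the patch-decrement fallback above computes, in Python for clarity (values are illustrative):

```python
# Decrement the patch number to guess the previous tag; empty if patch is 0
pkg_name, version = "langchain-core", "0.3.5"
base, patch = version.rsplit(".", 1)
prev_tag = "" if patch == "0" else f"{pkg_name}=={base}.{int(patch) - 1}"
print(prev_tag)  # langchain-core==0.3.4
```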
@@ -189,36 +176,13 @@ jobs:
needs:
- build
- release-notes
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
#
# Trusted publishing has to also be configured on PyPI for each package:
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
id-token: write
steps:
- uses: actions/checkout@v5
- uses: actions/download-artifact@v5
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Publish to test PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
repository-url: https://test.pypi.org/legacy/
# We overwrite any existing distributions with the same name and version.
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false
uses:
./.github/workflows/_test_release.yml
permissions: write-all
with:
working-directory: ${{ inputs.working-directory }}
dangerous-nonmaster-release: ${{ inputs.dangerous-nonmaster-release }}
secrets: inherit
pre-release-checks:
needs:
@@ -228,7 +192,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
# We explicitly *don't* set up caching here. This ensures our tests are
# maximally sensitive to catching breakage.
@@ -249,7 +213,7 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v4
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -294,19 +258,16 @@ jobs:
run: |
VIRTUAL_ENV=.venv uv pip install dist/*.whl
- name: Check for prerelease versions
# Block release if any dependencies allow prerelease versions
# (unless this is itself a prerelease version)
working-directory: ${{ inputs.working-directory }}
run: |
uv run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
- name: Run unit tests
run: make tests
working-directory: ${{ inputs.working-directory }}
- name: Check for prerelease versions
working-directory: ${{ inputs.working-directory }}
run: |
uv run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
- name: Get minimum versions
# Find the minimum published versions that satisfies the given constraints
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
@@ -321,8 +282,7 @@ jobs:
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
VIRTUAL_ENV=.venv uv pip install --force-reinstall --editable .
VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS
VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS --editable .
make tests
working-directory: ${{ inputs.working-directory }}
@@ -331,7 +291,6 @@ jobs:
working-directory: ${{ inputs.working-directory }}
- name: Run integration tests
# Uses the Makefile's `integration_tests` target for the specified package
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
@@ -363,6 +322,7 @@ jobs:
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
@@ -372,11 +332,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
# Test select published packages against new core
# Done when code changes are made to langchain-core
test-prior-published-packages-against-new-core:
# Installs the new core with old partners: Installs the new unreleased core
# alongside the previously published partner packages and runs integration tests
if: github.ref != 'refs/heads/v0.3'
needs:
- build
- release-notes
@@ -389,8 +345,6 @@ jobs:
fail-fast: false # Continue testing other partners if one fails
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
@@ -400,11 +354,10 @@ jobs:
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
# We implement this conditional as Github Actions does not have good support
# for conditionally needing steps. https://github.com/actions/runner/issues/491
# TODO: this seems to be resolved upstream, so we can probably remove this workaround
- name: Check if libs/core
run: |
if [ "${{ startsWith(inputs.working-directory, 'libs/core') }}" != "true" ]; then
@@ -418,7 +371,7 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v4
if: startsWith(inputs.working-directory, 'libs/core')
with:
name: dist
@@ -427,12 +380,11 @@ jobs:
- name: Test against ${{ matrix.partner }}
if: startsWith(inputs.working-directory, 'libs/core')
run: |
# Identify latest tag, excluding pre-releases
# Identify latest tag
LATEST_PACKAGE_TAG="$(
git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
| awk '{print $2}' \
| sed 's|refs/tags/||' \
| grep -E '==0\.3\.[0-9]+$' \
| sort -Vr \
| head -n 1
)"
@@ -459,12 +411,12 @@ jobs:
make integration_tests
publish:
# Publishes the package to PyPI
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- test-prior-published-packages-against-new-core
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
@@ -479,14 +431,14 @@ jobs:
working-directory: ${{ inputs.working-directory }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v4
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -501,7 +453,6 @@ jobs:
attestations: false
mark-release:
# Marks the GitHub release with the new version tag
needs:
- build
- release-notes
@@ -511,7 +462,7 @@ jobs:
runs-on: ubuntu-latest
permissions:
# This permission is needed by `ncipollo/release-action` to
# create the GitHub release/tag
# create the GitHub release.
contents: write
defaults:
@@ -519,18 +470,18 @@ jobs:
working-directory: ${{ inputs.working-directory }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v4
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Create Tag
uses: ncipollo/release-action@v1
with:

View File

@@ -1,7 +1,4 @@
# Runs unit tests with both current and minimum supported dependency versions
# to ensure compatibility across the supported range.
name: '🧪 Unit Testing'
name: test
on:
workflow_call:
@@ -15,44 +12,36 @@ on:
type: string
description: "Python version to use"
permissions:
contents: read
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
# Main test job - runs unit tests with current deps, then retests with minimum versions
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: 'Python ${{ inputs.python-version }}'
name: "make test #${{ inputs.python-version }}"
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
- name: Set up Python ${{ inputs.python-version }} + uv
uses: "./.github/actions/uv_setup"
id: setup-python
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Test Dependencies'
- name: Install dependencies
shell: bash
run: uv sync --group test --dev
- name: '🧪 Run Core Unit Tests'
- name: Run core tests
shell: bash
run: |
make test
- name: '🔍 Calculate Minimum Dependency Versions'
- name: Get minimum versions
working-directory: ${{ inputs.working-directory }}
id: min-version
shell: bash
@@ -63,7 +52,7 @@ jobs:
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
- name: '🧪 Run Tests with Minimum Dependencies'
- name: Run unit tests with minimum dependency versions
if: ${{ steps.min-version.outputs.min-versions != '' }}
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
@@ -72,7 +61,7 @@ jobs:
make tests
working-directory: ${{ inputs.working-directory }}
- name: '🧹 Verify Clean Working Directory'
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
@@ -83,4 +72,4 @@ jobs:
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'

View File

@@ -1,11 +1,4 @@
# Validates that all import statements in `.ipynb` notebooks are correct and functional.
#
# Called as part of check_diffs.yml.
#
# Installs test dependencies and LangChain packages in editable mode and
# runs check_imports.py.
name: '📑 Documentation Import Testing'
name: test_doc_imports
on:
workflow_call:
@@ -15,9 +8,6 @@ on:
type: string
description: "Python version to use"
permissions:
contents: read
env:
UV_FROZEN: "true"
@@ -25,32 +15,29 @@ jobs:
build:
runs-on: ubuntu-latest
timeout-minutes: 20
name: '🔍 Check Doc Imports (Python ${{ inputs.python-version }})'
name: "check doc imports #${{ inputs.python-version }}"
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
- name: Set up Python ${{ inputs.python-version }} + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-doc-imports-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Test Dependencies'
- name: Install dependencies
shell: bash
run: uv sync --group test
- name: '📦 Install LangChain in Editable Mode'
- name: Install langchain editable
run: |
VIRTUAL_ENV=.venv uv pip install langchain-experimental langchain-community -e libs/core libs/langchain
- name: '🔍 Validate Documentation Import Statements'
- name: Check doc imports
shell: bash
run: |
uv run python docs/scripts/check_imports.py
- name: '🧹 Verify Clean Working Directory'
- name: Ensure the test did not create any additional files
shell: bash
run: |
set -eu

View File

@@ -1,6 +1,4 @@
# Facilitate unit testing against different Pydantic versions for a provided package.
name: '🐍 Pydantic Version Testing'
name: test pydantic intermediate versions
on:
workflow_call:
@@ -19,9 +17,6 @@ on:
type: string
description: "Pydantic version to test."
permissions:
contents: read
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
@@ -33,32 +28,29 @@ jobs:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: 'Pydantic ~=${{ inputs.pydantic-version }}'
name: "make test # pydantic: ~=${{ inputs.pydantic-version }}, python: ${{ inputs.python-version }}, "
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
- name: Set up Python ${{ inputs.python-version }} + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-pydantic-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Test Dependencies'
- name: Install dependencies
shell: bash
run: uv sync --group test
- name: '🔄 Install Specific Pydantic Version'
- name: Overwrite pydantic version
shell: bash
run: VIRTUAL_ENV=.venv uv pip install pydantic~=${{ inputs.pydantic-version }}
- name: '🧪 Run Core Tests'
- name: Run core tests
shell: bash
run: |
make test
- name: '🧹 Verify Clean Working Directory'
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
@@ -68,4 +60,4 @@ jobs:
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
echo "$STATUS" | grep 'nothing to commit, working tree clean'

.github/workflows/_test_release.yml vendored Normal file (106 lines)
View File

@@ -0,0 +1,106 @@
name: test-release
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
env:
PYTHON_VERSION: "3.11"
UV_FROZEN: "true"
jobs:
build:
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
runs-on: ubuntu-latest
outputs:
pkg-name: ${{ steps.check-version.outputs.pkg-name }}
version: ${{ steps.check-version.outputs.version }}
steps:
- uses: actions/checkout@v4
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
# We want to keep this build stage *separate* from the release stage,
# so that there's no sharing of permissions between them.
# The release stage has trusted publishing and GitHub repo contents write access,
# and we want to keep the scope of that access limited just to the release job.
# Otherwise, a malicious `build` step (e.g. via a compromised dependency)
# could get access to our GitHub or PyPI credentials.
#
# Per the trusted publishing GitHub Action:
# > It is strongly advised to separate jobs for building [...]
# > from the publish job.
# https://github.com/pypa/gh-action-pypi-publish#non-goals
- name: Build project for distribution
run: uv build
working-directory: ${{ inputs.working-directory }}
- name: Upload build
uses: actions/upload-artifact@v4
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
- name: Check Version
id: check-version
shell: python
working-directory: ${{ inputs.working-directory }}
run: |
import os
import tomllib
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
pkg_name = data["project"]["name"]
version = data["project"]["version"]
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
f.write(f"pkg-name={pkg_name}\n")
f.write(f"version={version}\n")
publish:
needs:
- build
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
#
# Trusted publishing has to also be configured on PyPI for each package:
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
id-token: write
steps:
- uses: actions/checkout@v4
- uses: actions/download-artifact@v4
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
- name: Publish to test PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
repository-url: https://test.pypi.org/legacy/
# We overwrite any existing distributions with the same name and version.
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false

View File

@@ -1,46 +1,32 @@
# Build the API reference documentation.
#
# Runs daily. Can also be triggered manually for immediate updates.
#
# Built HTML pushed to langchain-ai/langchain-api-docs-html.
#
# Looks for langchain-ai org repos in packages.yml and checks them out.
# Calls prep_api_docs_build.py.
name: '📚 API Docs'
run-name: 'Build & Deploy API Reference'
name: API docs build
on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
- cron: '0 13 * * *'
env:
PYTHON_VERSION: "3.11"
jobs:
# Only runs on main repository to prevent unnecessary builds on forks
build:
if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
runs-on: ubuntu-latest
permissions:
contents: read
permissions: write-all
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
path: langchain
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-api-docs-html
path: langchain-api-docs-html
token: ${{ secrets.TOKEN_GITHUB_API_DOCS_HTML }}
- name: '📋 Extract Repository List with yq'
- name: Get repos with yq
id: get-unsorted-repos
uses: mikefarah/yq@master
with:
cmd: |
# Extract repos from packages.yml that are in the langchain-ai org
# (excluding 'langchain' itself)
yq '
.packages[]
| select(
@@ -55,97 +41,66 @@ jobs:
| .repo
' langchain/libs/packages.yml
- name: '📋 Parse YAML & Checkout Repositories'
- name: Parse YAML and checkout repos
env:
REPOS_UNSORTED: ${{ steps.get-unsorted-repos.outputs.result }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Get unique repositories
REPOS=$(echo "$REPOS_UNSORTED" | sort -u)
# Checkout each unique repository
# Checkout each unique repository that is in langchain-ai org
for repo in $REPOS; do
# Validate repository format (allow any org with proper format)
if [[ ! "$repo" =~ ^[a-zA-Z0-9_.-]+/[a-zA-Z0-9_.-]+$ ]]; then
echo "Error: Invalid repository format: $repo"
exit 1
fi
REPO_NAME=$(echo $repo | cut -d'/' -f2)
# Additional validation for repo name
if [[ ! "$REPO_NAME" =~ ^[a-zA-Z0-9_.-]+$ ]]; then
echo "Error: Invalid repository name: $REPO_NAME"
exit 1
fi
echo "Checking out $repo to $REPO_NAME"
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
done
- name: '🐍 Setup Python ${{ env.PYTHON_VERSION }}'
uses: actions/setup-python@v6
- name: Setup python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@v5
id: setup-python
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: '📦 Install Initial Python Dependencies using uv'
- name: Install initial py deps
working-directory: langchain
run: |
python -m pip install -U uv
python -m uv pip install --upgrade --no-cache-dir pip setuptools pyyaml
- name: '📦 Organize Library Directories'
# Places cloned partner packages into libs/partners structure
- name: Move libs with script
run: python langchain/.github/scripts/prep_api_docs_build.py
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: '🧹 Clear Prior Build'
- name: Rm old html
run:
# Remove artifacts from prior docs build
rm -rf langchain-api-docs-html/api_reference_build/html
- name: '📦 Install Documentation Dependencies using uv'
- name: Install dependencies
working-directory: langchain
run: |
# Install all partner packages in editable mode with overrides
python -m uv pip install $(ls ./libs/partners | xargs -I {} echo "./libs/partners/{}") --overrides ./docs/vercel_overrides.txt --prerelease=allow
# Install core langchain and other main packages
python -m uv pip install $(ls ./libs/partners | xargs -I {} echo "./libs/partners/{}") --overrides ./docs/vercel_overrides.txt
python -m uv pip install libs/core libs/langchain libs/text-splitters libs/community libs/experimental libs/standard-tests
# Install Sphinx and related packages for building docs
python -m uv pip install -r docs/api_reference/requirements.txt
- name: '🔧 Configure Git Settings'
- name: Set Git config
working-directory: langchain
run: |
git config --local user.email "actions@github.com"
git config --local user.name "Github Actions"
- name: '📚 Build API Documentation'
- name: Build docs
working-directory: langchain
run: |
# Generate the API reference RST files
python docs/api_reference/create_api_rst.py
# Build the HTML documentation using Sphinx
# -T: show full traceback on exception
# -E: don't use cached environment (force rebuild, ignore cached doctrees)
# -b html: build HTML docs (vs. PDF, etc.)
# -d: path for the cached environment (parsed document trees / doctrees)
# - Separate from output dir for faster incremental builds
# -c: path to conf.py
# -j auto: parallel build using all available CPU cores
python -m sphinx -T -E -b html -d ../langchain-api-docs-html/_build/doctrees -c docs/api_reference docs/api_reference ../langchain-api-docs-html/api_reference_build/html -j auto
# Post-process the generated HTML
python docs/api_reference/scripts/custom_formatter.py ../langchain-api-docs-html/api_reference_build/html
# Default index page is blank so we copy in the actual home page.
cp ../langchain-api-docs-html/api_reference_build/html/{reference,index}.html
# Removes Sphinx's intermediate build artifacts after the build is complete.
rm -rf ../langchain-api-docs-html/_build/
# Commit and push changes to langchain-api-docs-html repo
# https://github.com/marketplace/actions/add-commit
- uses: EndBug/add-and-commit@v9
with:
cwd: langchain-api-docs-html


@@ -1,30 +1,25 @@
# Runs broken link checker in /docs on a daily schedule.
name: '🔗 Check Broken Links'
name: Check Broken Links
on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
permissions:
contents: read
- cron: '0 13 * * *'
jobs:
check-links:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- name: '🟢 Setup Node.js 18.x'
uses: actions/setup-node@v5
- uses: actions/checkout@v4
- name: Use Node.js 18.x
uses: actions/setup-node@v3
with:
node-version: 18.x
cache: "yarn"
cache-dependency-path: ./docs/yarn.lock
- name: '📦 Install Node Dependencies'
- name: Install dependencies
run: yarn install --immutable --mode=skip-build
working-directory: ./docs
- name: '🔍 Scan Documentation for Broken Links'
- name: Check broken links
run: yarn check-broken-links
working-directory: ./docs


@@ -1,8 +1,4 @@
# Ensures version numbers in pyproject.toml and version.py stay in sync.
#
# (Prevents releases with mismatched version numbers)
name: '🔍 Check Version Equality'
name: Check `langchain-core` version equality
on:
pull_request:
@@ -10,42 +6,24 @@ on:
- 'libs/core/pyproject.toml'
- 'libs/core/langchain_core/version.py'
permissions:
contents: read
jobs:
check_version_equality:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: '✅ Verify pyproject.toml & version.py Match'
- name: Check version equality
run: |
# Check core versions
CORE_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
CORE_VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
# Compare core versions
if [ "$CORE_PYPROJECT_VERSION" != "$CORE_VERSION_PY_VERSION" ]; then
# Compare the two versions
if [ "$PYPROJECT_VERSION" != "$VERSION_PY_VERSION" ]; then
echo "langchain-core versions in pyproject.toml and version.py do not match!"
echo "pyproject.toml version: $CORE_PYPROJECT_VERSION"
echo "version.py version: $CORE_VERSION_PY_VERSION"
echo "pyproject.toml version: $PYPROJECT_VERSION"
echo "version.py version: $VERSION_PY_VERSION"
exit 1
else
echo "Core versions match: $CORE_PYPROJECT_VERSION"
fi
# Check langchain_v1 versions
LANGCHAIN_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/langchain_v1/pyproject.toml)
LANGCHAIN_INIT_PY_VERSION=$(grep -Po '(?<=^__version__ = ")[^"]*' libs/langchain_v1/langchain/__init__.py)
# Compare langchain_v1 versions
if [ "$LANGCHAIN_PYPROJECT_VERSION" != "$LANGCHAIN_INIT_PY_VERSION" ]; then
echo "langchain_v1 versions in pyproject.toml and __init__.py do not match!"
echo "pyproject.toml version: $LANGCHAIN_PYPROJECT_VERSION"
echo "version.py version: $LANGCHAIN_INIT_PY_VERSION"
exit 1
else
echo "Langchain v1 versions match: $LANGCHAIN_PYPROJECT_VERSION"
echo "Versions match: $PYPROJECT_VERSION"
fi
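The extraction above relies on GNU grep's Perl mode; a standalone sketch of the technique against a throwaway file:

```bash
# -P: Perl-compatible regex; -o: print only the matched text.
# The lookbehind (?<=^version = ") anchors on the key and captures just the value.
printf 'version = "0.3.21"\n' > /tmp/pyproject_snippet.toml
grep -Po '(?<=^version = ")[^"]*' /tmp/pyproject_snippet.toml   # prints: 0.3.21
```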


@@ -1,19 +1,4 @@
# Primary CI workflow.
#
# Only runs against packages that have changed files.
#
# Runs:
# - Linting (_lint.yml)
# - Unit Tests (_test.yml)
# - Pydantic compatibility tests (_test_pydantic.yml)
# - Documentation import tests (_test_doc_imports.yml)
# - Integration test compilation checks (_compile_integration_test.yml)
# - Extended test suites that require additional dependencies
# - Codspeed benchmarks (if not labeled 'codspeed-ignore')
#
# Reports status to GitHub checks and PR status.
name: '🔧 CI'
name: CI
on:
push:
@@ -21,43 +6,31 @@ on:
pull_request:
merge_group:
# Optimizes CI performance by canceling redundant workflow runs
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to
# cancel pointless jobs early so that more useful jobs can run sooner.
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
# This job analyzes which files changed and creates a dynamic test matrix
# to only run tests/lints for the affected packages, improving CI efficiency
build:
name: 'Detect Changes & Set Matrix'
runs-on: ubuntu-latest
if: ${{ !contains(github.event.pull_request.labels.*.name, 'ci-ignore') }}
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
- name: '🐍 Setup Python 3.11'
uses: actions/setup-python@v6
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: '📂 Get Changed Files'
id: files
uses: Ana06/get-changed-files@v2.3.0
- name: '🔍 Analyze Changed Files & Generate Build Matrix'
id: set-matrix
- id: files
uses: Ana06/get-changed-files@v2.2.0
- id: set-matrix
run: |
python -m pip install packaging requests
python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
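`check_diff.py` itself is not shown here; the general mechanism is that each `key=value` line appended to `$GITHUB_OUTPUT` becomes an output of the step. A sketch with illustrative values (not the script's real output):

```bash
# Later consumable as steps.set-matrix.outputs.lint / steps.set-matrix.outputs.test
echo 'lint=[{"working-directory": "libs/core", "python-version": "3.11"}]' >> "$GITHUB_OUTPUT"
echo 'test=[]' >> "$GITHUB_OUTPUT"
```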
@@ -69,9 +42,8 @@ jobs:
dependencies: ${{ steps.set-matrix.outputs.dependencies }}
test-doc-imports: ${{ steps.set-matrix.outputs.test-doc-imports }}
test-pydantic: ${{ steps.set-matrix.outputs.test-pydantic }}
codspeed: ${{ steps.set-matrix.outputs.codspeed }}
# Run linting only on packages that have changed files
lint:
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.lint != '[]' }}
strategy:
@@ -84,8 +56,8 @@ jobs:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Run unit tests only on packages that have changed files
test:
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.test != '[]' }}
strategy:
@@ -98,8 +70,8 @@ jobs:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Test compatibility with different Pydantic versions for affected packages
test-pydantic:
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.test-pydantic != '[]' }}
strategy:
@@ -120,13 +92,12 @@ jobs:
job-configs: ${{ fromJson(needs.build.outputs.test-doc-imports) }}
fail-fast: false
uses: ./.github/workflows/_test_doc_imports.yml
secrets: inherit
with:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Verify integration tests compile without actually running them (faster feedback)
compile-integration-tests:
name: 'Compile Integration Tests'
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.compile-integration-tests != '[]' }}
strategy:
@@ -139,9 +110,8 @@ jobs:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Run extended test suites that require additional dependencies
extended-tests:
name: 'Extended Tests'
name: "cd ${{ matrix.job-configs.working-directory }} / make extended_tests #${{ matrix.job-configs.python-version }}"
needs: [ build ]
if: ${{ needs.build.outputs.extended-tests != '[]' }}
strategy:
@@ -155,16 +125,14 @@ jobs:
run:
working-directory: ${{ matrix.job-configs.working-directory }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: '🐍 Set up Python ${{ matrix.job-configs.python-version }} + UV'
- name: Set up Python ${{ matrix.job-configs.python-version }} + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ matrix.job-configs.python-version }}
cache-suffix: extended-tests-${{ matrix.job-configs.working-directory }}
working-directory: ${{ matrix.job-configs.working-directory }}
- name: '📦 Install Dependencies & Run Extended Tests'
- name: Install dependencies and run extended tests
shell: bash
run: |
echo "Running extended tests, installing dependencies with uv..."
@@ -173,7 +141,7 @@ jobs:
VIRTUAL_ENV=.venv uv pip install -r extended_testing_deps.txt
VIRTUAL_ENV=.venv make extended_tests
- name: '🧹 Verify Clean Working Directory'
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
@@ -184,73 +152,9 @@ jobs:
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
# Run codspeed benchmarks only on packages that have changed files
codspeed:
name: '⚡ CodSpeed Benchmarks'
needs: [ build ]
if: ${{ needs.build.outputs.codspeed != '[]' && !contains(github.event.pull_request.labels.*.name, 'codspeed-ignore') }}
runs-on: ubuntu-latest
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.codspeed) }}
fail-fast: false
steps:
- uses: actions/checkout@v5
# We have to use 3.12 as 3.13 is not yet supported
- name: '📦 Install UV Package Manager'
uses: astral-sh/setup-uv@v6
with:
python-version: "3.12"
- uses: actions/setup-python@v6
with:
python-version: "3.12"
- name: '📦 Install Test Dependencies'
run: uv sync --group test
working-directory: ${{ matrix.job-configs.working-directory }}
- name: '⚡ Run Benchmarks: ${{ matrix.job-configs.working-directory }}'
uses: CodSpeedHQ/action@v4
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd ${{ matrix.job-configs.working-directory }}
if [ "${{ matrix.job-configs.working-directory }}" = "libs/core" ]; then
uv run --no-sync pytest ./tests/benchmarks --codspeed
else
uv run --no-sync pytest ./tests/ --codspeed
fi
mode: ${{ matrix.job-configs.working-directory == 'libs/core' && 'walltime' || 'instrumentation' }}
# Final status check - ensures all required jobs passed before allowing merge
ci_success:
name: '✅ CI Success'
needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic, codspeed]
name: "CI Success"
needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic]
if: |
always()
runs-on: ubuntu-latest
@@ -259,7 +163,7 @@ jobs:
RESULTS_JSON: ${{ toJSON(needs.*.result) }}
EXIT_CODE: ${{!contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && '0' || '1'}}
steps:
- name: '🎉 All Checks Passed'
- name: "CI Success"
run: |
echo $JOBS_JSON
echo $RESULTS_JSON


@@ -1,7 +1,4 @@
# For integrations, we run check_templates.py to ensure that new docs use the correct
# templates based on their type. See the script for more details.
name: '📑 Integration Docs Lint'
name: Integration docs lint
on:
push:
@@ -18,24 +15,21 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/setup-python@v6
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.10'
- id: files
uses: Ana06/get-changed-files@v2.3.0
uses: Ana06/get-changed-files@v2.2.0
with:
filter: |
*.ipynb
*.md
*.mdx
- name: '🔍 Check New Documentation Templates'
- name: Check new docs
run: |
python docs/scripts/check_templates.py ${{ steps.files.outputs.added }}

.github/workflows/codespell.yml

@@ -0,0 +1,35 @@
name: CI / cd . / make spell_check
on:
push:
branches: [master, v0.1, v0.2]
pull_request:
permissions:
contents: read
jobs:
codespell:
name: (Check for spelling errors)
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install Dependencies
run: |
pip install toml
- name: Extract Ignore Words List
run: |
# Use a Python script to extract the ignore words list from pyproject.toml
python .github/workflows/extract_ignored_words_list.py
id: extract_ignore_words
# - name: Codespell
# uses: codespell-project/actions-codespell@v2
# with:
# skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
# ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
# exclude_file: ./.github/workflows/codespell-exclude

.github/workflows/codspeed.yml

@@ -0,0 +1,44 @@
name: CodSpeed
on:
push:
branches:
- master
pull_request:
paths:
- 'libs/core/**'
# `workflow_dispatch` allows CodSpeed to trigger backtest
# performance analysis in order to generate initial data.
workflow_dispatch:
jobs:
codspeed:
name: Run benchmarks
if: (github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'run-codspeed-benchmarks')) || github.event_name == 'workflow_dispatch' || github.event_name == 'push'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
# We have to use 3.12, as 3.13 is not yet supported
- name: Install uv
uses: astral-sh/setup-uv@v5
with:
python-version: "3.12"
# Using this action is still necessary for CodSpeed to work
- uses: actions/setup-python@v3
with:
python-version: "3.12"
- name: install deps
run: uv sync --group test
working-directory: ./libs/core
- name: Run benchmarks
uses: CodSpeedHQ/action@v3
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd libs/core
uv run --no-sync pytest ./tests/benchmarks --codspeed
mode: walltime

.github/workflows/extract_ignored_words_list.py

@@ -0,0 +1,10 @@
import toml
pyproject_toml = toml.load("pyproject.toml")
# Extract the ignore words list (adjust the key as per your TOML structure)
ignore_words_list = (
pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
)
print(f"::set-output name=ignore_words_list::{ignore_words_list}")


@@ -1,11 +1,8 @@
# Updates the LangChain People data by fetching the latest contributor info from GitHub.
# TODO: broken/not used
name: LangChain People
name: '👥 LangChain People'
run-name: 'Update People Data'
on:
schedule:
- cron: "0 14 1 * *" # Runs at 14:00 UTC on the 1st of every month (10AM EDT/7AM PDT)
- cron: "0 14 1 * *"
push:
branches: [jacob/people]
workflow_dispatch:
@@ -14,17 +11,16 @@ jobs:
langchain-people:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
permissions:
contents: write
permissions: write-all
steps:
- name: '📋 Dump GitHub Context'
- name: Dump GitHub context
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- uses: actions/checkout@v5
- uses: actions/checkout@v4
# Ref: https://github.com/actions/runner/issues/2033
- name: '🔧 Fix Git Safe Directory in Container'
- name: Fix git safe.directory in container
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
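The printf above seeds a minimal `.gitconfig` in the container's home directory; assuming `git` is available at that point, the same effect can be had with a one-liner:

```bash
# Mark the workspace as safe so git operations work across UID boundaries.
git config --global --add safe.directory /github/workspace
```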
- uses: ./.github/actions/people
with:
token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}
token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}


@@ -1,28 +0,0 @@
# Label PRs based on changed files.
#
# See `.github/pr-file-labeler.yml` to see rules for each label/directory.
name: "🏷️ Pull Request Labeler"
on:
# Safe since we're not checking out or running the PR's code
# Never check out the PR's head in a pull_request_target job
pull_request_target:
types: [opened, synchronize, reopened, edited]
jobs:
labeler:
name: 'label'
permissions:
contents: read
pull-requests: write
issues: write
runs-on: ubuntu-latest
steps:
- name: Label Pull Request
uses: actions/labeler@v6
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/pr-file-labeler.yml
sync-labels: false


@@ -1,28 +0,0 @@
# Label PRs based on their titles.
#
# See `.github/pr-title-labeler.yml` to see rules for each label/title pattern.
name: "🏷️ PR Title Labeler"
on:
# Safe since we're not checking out or running the PR's code
# Never check out the PR's head in a pull_request_target job
pull_request_target:
types: [opened, synchronize, reopened, edited]
jobs:
pr-title-labeler:
name: 'label'
permissions:
contents: read
pull-requests: write
issues: write
runs-on: ubuntu-latest
steps:
- name: Label PR based on title
# Archived repo; latest commit (v0.1.0)
uses: grafana/pr-labeler-action@f19222d3ef883d2ca5f04420fdfe8148003763f0
with:
token: ${{ secrets.GITHUB_TOKEN }}
configuration-path: .github/pr-title-labeler.yml


@@ -1,105 +0,0 @@
# PR title linting.
#
# FORMAT (Conventional Commits 1.0.0):
#
# <type>[optional scope]: <description>
# [optional body]
# [optional footer(s)]
#
# Examples:
# feat(core): add multitenant support
# fix(cli): resolve flag parsing error
# docs: update API usage examples
# docs(openai): update API usage examples
#
# Allowed Types:
# * feat — a new feature (MINOR)
# * fix — a bug fix (PATCH)
# * docs — documentation only changes (either in /docs or code comments)
# * style — formatting, linting, etc.; no code change or typing refactors
# * refactor — code change that neither fixes a bug nor adds a feature
# * perf — code change that improves performance
# * test — adding tests or correcting existing
# * build — changes that affect the build system/external dependencies
# * ci — continuous integration/configuration changes
# * chore — other changes that don't modify source or test files
# * revert — reverts a previous commit
# * release — prepare a new release
#
# Allowed Scopes (optional):
# core, cli, langchain, langchain_v1, langchain_legacy, standard-tests,
# text-splitters, docs, anthropic, chroma, deepseek, exa, fireworks, groq,
# huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant,
# xai, infra
#
# Rules:
# 1. The 'Type' must start with a lowercase letter.
# 2. Breaking changes: append "!" after type/scope (e.g., feat!: drop x support)
#
# Enforces Conventional Commits format for pull request titles to maintain a clear and
# machine-readable change history.
name: '🏷️ PR Title Lint'
permissions:
pull-requests: read
on:
pull_request:
types: [opened, edited, synchronize]
jobs:
# Validates that PR title follows Conventional Commits 1.0.0 specification
lint-pr-title:
name: 'validate format'
runs-on: ubuntu-latest
steps:
- name: '✅ Validate Conventional Commits Format'
uses: amannn/action-semantic-pull-request@v6
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
types: |
feat
fix
docs
style
refactor
perf
test
build
ci
chore
revert
release
scopes: |
core
cli
langchain
langchain_v1
langchain_legacy
standard-tests
text-splitters
docs
anthropic
chroma
deepseek
exa
fireworks
groq
huggingface
mistralai
nomic
ollama
openai
perplexity
prompty
qdrant
xai
infra
requireScope: false
disallowScopes: |
release
[A-Z]+
ignoreLabels: |
ignore-lint-pr-title


@@ -1,7 +1,5 @@
# Integration tests for documentation notebooks.
name: Run notebooks
name: '📓 Validate Documentation Notebooks'
run-name: 'Test notebooks in ${{ inputs.working-directory }}'
on:
workflow_dispatch:
inputs:
@@ -16,9 +14,6 @@ on:
schedule:
- cron: '0 13 * * *'
permissions:
contents: read
env:
UV_FROZEN: "true"
@@ -26,45 +21,43 @@ jobs:
build:
runs-on: ubuntu-latest
if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
name: '📑 Test Documentation Notebooks'
name: "Test docs"
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: '🐍 Set up Python + UV'
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ github.event.inputs.python_version || '3.11' }}
cache-suffix: run-notebooks-${{ github.event.inputs.working-directory || 'all' }}
working-directory: ${{ github.event.inputs.working-directory || '**' }}
- name: '🔐 Authenticate to Google Cloud'
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v3
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: '🔐 Configure AWS Credentials'
uses: aws-actions/configure-aws-credentials@v5
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: '📦 Install Dependencies'
- name: Install dependencies
run: |
uv sync --group dev --group test
- name: '📦 Pre-download Test Files'
- name: Pre-download files
run: |
uv run python docs/scripts/cache_data.py
curl -s https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql | sqlite3 docs/docs/how_to/Chinook.db
cp docs/docs/how_to/Chinook.db docs/docs/tutorials/Chinook.db
- name: '🔧 Prepare Notebooks for CI'
- name: Prepare notebooks
run: |
uv run python docs/scripts/prepare_notebooks_for_ci.py --comment-install-cells --working-directory ${{ github.event.inputs.working-directory || 'all' }}
- name: '🚀 Execute Notebooks'
- name: Run notebooks
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}


@@ -1,14 +1,7 @@
# Routine integration tests against partner libraries with live API credentials.
#
# Uses `make integration_tests` for each library in the matrix.
#
# Runs daily. Can also be triggered manually for immediate updates.
name: '⏰ Scheduled Integration Tests'
run-name: "Run Integration Tests - ${{ inputs.working-directory-force || 'all libs' }} (Python ${{ inputs.python-version-force || '3.9, 3.11' }})"
name: Scheduled tests
on:
workflow_dispatch:
workflow_dispatch: # Allows triggering the workflow manually from the GitHub UI
inputs:
working-directory-force:
type: string
@@ -17,28 +10,23 @@ on:
type: string
description: "Python version to use - defaults to 3.9 and 3.11 in matrix - example value: 3.9"
schedule:
- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
permissions:
contents: read
- cron: '0 13 * * *'
env:
POETRY_VERSION: "1.8.4"
UV_FROZEN: "true"
DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/xai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
POETRY_LIBS: ("libs/partners/aws")
POETRY_LIBS: ("libs/partners/google-vertexai" "libs/partners/google-genai" "libs/partners/aws")
jobs:
# Generate dynamic test matrix based on input parameters or defaults
# Only runs on the main repo (for scheduled runs) or when manually triggered
compute-matrix:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
name: '📋 Compute Test Matrix'
name: Compute matrix
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: '🔢 Generate Python & Library Matrix'
- name: Set matrix
id: set-matrix
env:
DEFAULT_LIBS: ${{ env.DEFAULT_LIBS }}
@@ -59,14 +47,12 @@ jobs:
matrix="{\"python-version\": $python_version, \"working-directory\": $working_directory}"
echo $matrix
echo "matrix=$matrix" >> $GITHUB_OUTPUT
# Run integration tests against partner libraries with live API credentials
# Tests are run with Poetry or UV depending on the library's setup
build:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
name: '🐍 Python ${{ matrix.python-version }}: ${{ matrix.working-directory }}'
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
runs-on: ubuntu-latest
needs: [compute-matrix]
timeout-minutes: 30
timeout-minutes: 20
strategy:
fail-fast: false
matrix:
@@ -74,19 +60,19 @@ jobs:
working-directory: ${{ fromJSON(needs.compute-matrix.outputs.matrix).working-directory }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
path: langchain
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-google
path: langchain-google
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-aws
path: langchain-aws
- name: '📦 Organize External Libraries'
- name: Move libs
run: |
rm -rf \
langchain/libs/partners/google-genai \
@@ -95,7 +81,7 @@ jobs:
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-aws/libs/aws langchain/libs/partners/aws
- name: '🐍 Set up Python ${{ matrix.python-version }} + Poetry'
- name: Set up Python ${{ matrix.python-version }} with poetry
if: contains(env.POETRY_LIBS, matrix.working-directory)
uses: "./langchain/.github/actions/poetry_setup"
with:
@@ -104,45 +90,43 @@ jobs:
working-directory: langchain/${{ matrix.working-directory }}
cache-key: scheduled
- name: '🐍 Set up Python ${{ matrix.python-version }} + UV'
- name: Set up Python ${{ matrix.python-version }} + uv
if: "!contains(env.POETRY_LIBS, matrix.working-directory)"
uses: "./langchain/.github/actions/uv_setup"
with:
python-version: ${{ matrix.python-version }}
- name: '🔐 Authenticate to Google Cloud'
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v3
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: '🔐 Configure AWS Credentials'
uses: aws-actions/configure-aws-credentials@v5
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: '📦 Install Dependencies (Poetry)'
- name: Install dependencies (poetry)
if: contains(env.POETRY_LIBS, matrix.working-directory)
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
cd langchain/${{ matrix.working-directory }}
poetry install --with=test_integration,test
- name: '📦 Install Dependencies (UV)'
- name: Install dependencies (uv)
if: "!contains(env.POETRY_LIBS, matrix.working-directory)"
run: |
echo "Running scheduled tests, installing dependencies with uv..."
cd langchain/${{ matrix.working-directory }}
uv sync --group test --group test_integration
- name: '🚀 Run Integration Tests'
- name: Run integration tests
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
@@ -166,15 +150,14 @@ jobs:
cd langchain/${{ matrix.working-directory }}
make integration_tests
- name: '🧹 Clean up External Libraries'
# Clean up external libraries to avoid affecting the following git status check
run: |
- name: Remove external libraries
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/aws
- name: '🧹 Verify Clean Working Directory'
- name: Ensure the tests did not create any additional files
working-directory: langchain
run: |
set -eu


@@ -1,9 +0,0 @@
With the deprecation of v0 docs, the following files will need to be migrated/supported
in the new docs repo:
- run_notebooks.yml: New repo should run Integration tests on code snippets?
- people.yml: Need to fix and somehow display on the new docs site
- Subsequently, `.github/actions/people/`
- _test_doc_imports.yml
- check_new_docs.yml
- check-broken-links.yml

.gitignore

@@ -1,4 +1,5 @@
.vs/
.vscode/
.idea/
# Byte-compiled / optimized / DLL files
__pycache__/


@@ -1,14 +0,0 @@
{
"MD013": false,
"MD024": {
"siblings_only": true
},
"MD025": false,
"MD033": false,
"MD034": false,
"MD036": false,
"MD041": false,
"MD046": {
"style": "fenced"
}
}


@@ -1,105 +1,117 @@
repos:
- repo: local
hooks:
- id: core
name: format and lint core
language: system
entry: make -C libs/core format lint
files: ^libs/core/
pass_filenames: false
- id: langchain
name: format and lint langchain
language: system
entry: make -C libs/langchain format lint
files: ^libs/langchain/
pass_filenames: false
- id: standard-tests
name: format and lint standard-tests
language: system
entry: make -C libs/standard-tests format lint
files: ^libs/standard-tests/
pass_filenames: false
- id: text-splitters
name: format and lint text-splitters
language: system
entry: make -C libs/text-splitters format lint
files: ^libs/text-splitters/
pass_filenames: false
- id: anthropic
name: format and lint partners/anthropic
language: system
entry: make -C libs/partners/anthropic format lint
files: ^libs/partners/anthropic/
pass_filenames: false
- id: chroma
name: format and lint partners/chroma
language: system
entry: make -C libs/partners/chroma format lint
files: ^libs/partners/chroma/
pass_filenames: false
- id: exa
name: format and lint partners/exa
language: system
entry: make -C libs/partners/exa format lint
files: ^libs/partners/exa/
pass_filenames: false
- id: fireworks
name: format and lint partners/fireworks
language: system
entry: make -C libs/partners/fireworks format lint
files: ^libs/partners/fireworks/
pass_filenames: false
- id: groq
name: format and lint partners/groq
language: system
entry: make -C libs/partners/groq format lint
files: ^libs/partners/groq/
pass_filenames: false
- id: huggingface
name: format and lint partners/huggingface
language: system
entry: make -C libs/partners/huggingface format lint
files: ^libs/partners/huggingface/
pass_filenames: false
- id: mistralai
name: format and lint partners/mistralai
language: system
entry: make -C libs/partners/mistralai format lint
files: ^libs/partners/mistralai/
pass_filenames: false
- id: nomic
name: format and lint partners/nomic
language: system
entry: make -C libs/partners/nomic format lint
files: ^libs/partners/nomic/
pass_filenames: false
- id: ollama
name: format and lint partners/ollama
language: system
entry: make -C libs/partners/ollama format lint
files: ^libs/partners/ollama/
pass_filenames: false
- id: openai
name: format and lint partners/openai
language: system
entry: make -C libs/partners/openai format lint
files: ^libs/partners/openai/
pass_filenames: false
- id: prompty
name: format and lint partners/prompty
language: system
entry: make -C libs/partners/prompty format lint
files: ^libs/partners/prompty/
pass_filenames: false
- id: qdrant
name: format and lint partners/qdrant
language: system
entry: make -C libs/partners/qdrant format lint
files: ^libs/partners/qdrant/
pass_filenames: false
- id: root
name: format and lint docs, cookbook
language: system
entry: make format lint
files: ^(docs|cookbook)/
pass_filenames: false
- repo: local
hooks:
- id: core
name: format core
language: system
entry: make -C libs/core format
files: ^libs/core/
pass_filenames: false
- id: langchain
name: format langchain
language: system
entry: make -C libs/langchain format
files: ^libs/langchain/
pass_filenames: false
- id: standard-tests
name: format standard-tests
language: system
entry: make -C libs/standard-tests format
files: ^libs/standard-tests/
pass_filenames: false
- id: text-splitters
name: format text-splitters
language: system
entry: make -C libs/text-splitters format
files: ^libs/text-splitters/
pass_filenames: false
- id: anthropic
name: format partners/anthropic
language: system
entry: make -C libs/partners/anthropic format
files: ^libs/partners/anthropic/
pass_filenames: false
- id: chroma
name: format partners/chroma
language: system
entry: make -C libs/partners/chroma format
files: ^libs/partners/chroma/
pass_filenames: false
- id: couchbase
name: format partners/couchbase
language: system
entry: make -C libs/partners/couchbase format
files: ^libs/partners/couchbase/
pass_filenames: false
- id: exa
name: format partners/exa
language: system
entry: make -C libs/partners/exa format
files: ^libs/partners/exa/
pass_filenames: false
- id: fireworks
name: format partners/fireworks
language: system
entry: make -C libs/partners/fireworks format
files: ^libs/partners/fireworks/
pass_filenames: false
- id: groq
name: format partners/groq
language: system
entry: make -C libs/partners/groq format
files: ^libs/partners/groq/
pass_filenames: false
- id: huggingface
name: format partners/huggingface
language: system
entry: make -C libs/partners/huggingface format
files: ^libs/partners/huggingface/
pass_filenames: false
- id: mistralai
name: format partners/mistralai
language: system
entry: make -C libs/partners/mistralai format
files: ^libs/partners/mistralai/
pass_filenames: false
- id: nomic
name: format partners/nomic
language: system
entry: make -C libs/partners/nomic format
files: ^libs/partners/nomic/
pass_filenames: false
- id: ollama
name: format partners/ollama
language: system
entry: make -C libs/partners/ollama format
files: ^libs/partners/ollama/
pass_filenames: false
- id: openai
name: format partners/openai
language: system
entry: make -C libs/partners/openai format
files: ^libs/partners/openai/
pass_filenames: false
- id: prompty
name: format partners/prompty
language: system
entry: make -C libs/partners/prompty format
files: ^libs/partners/prompty/
pass_filenames: false
- id: qdrant
name: format partners/qdrant
language: system
entry: make -C libs/partners/qdrant format
files: ^libs/partners/qdrant/
pass_filenames: false
- id: voyageai
name: format partners/voyageai
language: system
entry: make -C libs/partners/voyageai format
files: ^libs/partners/voyageai/
pass_filenames: false
- id: root
name: format docs, cookbook
language: system
entry: make format
files: ^(docs|cookbook)/
pass_filenames: false
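Either variant of this config is driven through pre-commit in the usual way (a sketch, assuming `pre-commit` is installed locally):

```bash
pre-commit install          # wire the hooks into .git/hooks once
pre-commit run --all-files  # run every configured hook across the repo
```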

.readthedocs.yaml

@@ -0,0 +1,25 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
version: 2
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.11"
commands:
- mkdir -p $READTHEDOCS_OUTPUT
- cp -r api_reference_build/* $READTHEDOCS_OUTPUT
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/api_reference/conf.py
# If using Sphinx, optionally build your docs in additional formats such as PDF
formats:
- pdf
# Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: docs/api_reference/requirements.txt


@@ -1,21 +0,0 @@
{
"recommendations": [
"ms-python.python",
"charliermarsh.ruff",
"ms-python.mypy-type-checker",
"ms-toolsai.jupyter",
"ms-toolsai.jupyter-keymap",
"ms-toolsai.jupyter-renderers",
"ms-toolsai.vscode-jupyter-cell-tags",
"ms-toolsai.vscode-jupyter-slideshow",
"yzhang.markdown-all-in-one",
"davidanson.vscode-markdownlint",
"bierner.markdown-mermaid",
"bierner.markdown-preview-github-styles",
"eamodio.gitlens",
"github.vscode-pull-request-github",
"github.vscode-github-actions",
"redhat.vscode-yaml",
"editorconfig.editorconfig",
],
}

.vscode/settings.json

@@ -1,87 +0,0 @@
{
"python.analysis.include": [
"libs/**",
"docs/**",
"cookbook/**"
],
"python.analysis.exclude": [
"**/node_modules",
"**/__pycache__",
"**/.pytest_cache",
"**/.*",
"_dist/**",
"docs/_build/**",
"docs/api_reference/_build/**"
],
"python.analysis.autoImportCompletions": true,
"python.analysis.typeCheckingMode": "basic",
"python.testing.cwd": "${workspaceFolder}",
"python.linting.enabled": true,
"python.linting.ruffEnabled": true,
"[python]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports.ruff": "explicit",
"source.fixAll": "explicit"
},
"editor.defaultFormatter": "charliermarsh.ruff"
},
"editor.rulers": [
88
],
"editor.tabSize": 4,
"editor.insertSpaces": true,
"editor.trimAutoWhitespace": true,
"files.trimTrailingWhitespace": true,
"files.insertFinalNewline": true,
"files.exclude": {
"**/__pycache__": true,
"**/.pytest_cache": true,
"**/*.pyc": true,
"**/.mypy_cache": true,
"**/.ruff_cache": true,
"_dist/**": true,
"docs/_build/**": true,
"docs/api_reference/_build/**": true,
"**/node_modules": true,
"**/.git": false
},
"search.exclude": {
"**/__pycache__": true,
"**/*.pyc": true,
"_dist/**": true,
"docs/_build/**": true,
"docs/api_reference/_build/**": true,
"**/node_modules": true,
"**/.git": true,
"uv.lock": true,
"yarn.lock": true
},
"git.autofetch": true,
"git.enableSmartCommit": true,
"jupyter.askForKernelRestart": false,
"jupyter.interactiveWindow.textEditor.executeSelection": true,
"[markdown]": {
"editor.wordWrap": "on",
"editor.quickSuggestions": {
"comments": "off",
"strings": "off",
"other": "off"
}
},
"[yaml]": {
"editor.tabSize": 2,
"editor.insertSpaces": true
},
"[json]": {
"editor.tabSize": 2,
"editor.insertSpaces": true
},
"python.terminal.activateEnvironment": false,
"python.defaultInterpreterPath": "./.venv/bin/python",
"github.copilot.chat.commitMessageGeneration.instructions": [
{
"file": ".github/workflows/pr_lint.yml"
}
]
}

AGENTS.md

@@ -1,325 +0,0 @@
# Global Development Guidelines for LangChain Projects
## Core Development Principles
### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL
**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**
**Bad - Breaking Change:**
```python
def get_user(id, verbose=False): # Changed from `user_id`
pass
```
**Good - Stable Interface:**
```python
def get_user(user_id: str, verbose: bool = False) -> User:
"""Retrieve user by ID with optional verbose output."""
pass
```
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using reStructuredText, like `.. warning::`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
### 2. Code Quality Standards
**All Python code MUST include type hints and return types.**
**Bad:**
```python
def p(u, d):
return [x for x in u if x not in d]
```
**Good:**
```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
"""Filter out users that are not in the known users set.
Args:
users: List of user identifiers to filter.
known_users: Set of known/valid user identifiers.
Returns:
List of users that are not in the known_users set.
"""
return [user for user in users if user not in known_users]
```
**Style Requirements:**
- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying
### 3. Testing Requirements
**Every new feature or bugfix MUST be covered by unit tests.**
**Test Organization:**
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework
**Test Quality Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
Checklist questions:
- [ ] Does the test suite fail if your new logic is broken?
- [ ] Are all expected behaviors exercised (happy path, invalid input, etc)?
- [ ] Do tests use fixtures or mocks where needed?
```python
def test_filter_unknown_users():
"""Test filtering unknown users from a list."""
users = ["alice", "bob", "charlie"]
known_users = {"alice", "bob"}
result = filter_unknown_users(users, known_users)
assert result == ["charlie"]
assert len(result) == 1
```
### 4. Security and Risk Assessment
**Security Checklist:**
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Use proper exception handling (no bare `except:`) and a `msg` variable for error messages
- Remove unreachable/commented code before committing
- Avoid race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)
**Bad:**
```python
def load_config(path):
with open(path) as f:
return eval(f.read()) # ⚠️ Never eval config
```
**Good:**
```python
import json
def load_config(path: str) -> dict:
with open(path) as f:
return json.load(f)
```
### 5. Documentation Standards
**Use Google-style docstrings with Args section for all public functions.**
**Insufficient Documentation:**
```python
def send_email(to, msg):
"""Send an email to a recipient."""
```
**Complete Documentation:**
```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
"""
Send an email to a recipient with specified priority.
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level (``'low'``, ``'normal'``, ``'high'``).
Returns:
True if email was sent successfully, False otherwise.
Raises:
InvalidEmailError: If the email address format is invalid.
SMTPConnectionError: If unable to connect to email server.
"""
```
**Documentation Guidelines:**
- Types go in function signatures, NOT in docstrings
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Use reStructuredText for docstrings to enable rich formatting
📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.
### 6. Architectural Improvements
**When you encounter code that could be improved, suggest better designs:**
**Poor Design:**
```python
def process_data(data, db_conn, email_client, logger):
# Function doing too many things
validated = validate_data(data)
result = db_conn.save(validated)
email_client.send_notification(result)
logger.log(f"Processed {len(data)} items")
return result
```
**Better Design:**
```python
@dataclass
class ProcessingResult:
"""Result of data processing operation."""
items_processed: int
success: bool
errors: List[str] = field(default_factory=list)
class DataProcessor:
"""Handles data validation, storage, and notification."""
def __init__(self, db_conn: Database, email_client: EmailClient):
self.db = db_conn
self.email = email_client
def process(self, data: List[dict]) -> ProcessingResult:
"""Process and store data with notifications."""
validated = self._validate_data(data)
result = self.db.save(validated)
self._notify_completion(result)
return result
```
**Design Improvement Areas:**
If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:
- Reduce code duplication through shared utilities
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection
- Add clarity without adding complexity
- Prefer dataclasses for structured data
## Development Tools & Commands
### Package Management
```bash
# Add package
uv add package-name
# Sync project dependencies
uv sync
uv lock
```
### Testing
```bash
# Run unit tests (no network)
make test
# Don't run integration tests, as API keys must be set
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
### Code Quality
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
### Dependency Management Patterns
**Local Development Dependencies:**
```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```
**For tools, use the `@tool` decorator from `langchain_core.tools`:**
```python
from langchain_core.tools import tool
@tool
def search_database(query: str) -> str:
"""Search the database for relevant information.
Args:
query: The search query string.
"""
# Implementation here
return results
```
## Commit Standards
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Implement proper streaming support where applicable
- Avoid deprecated components like legacy `LLMChain`
### Partner Integrations
- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.)
- Include comprehensive integration tests
- Document API key requirements and authentication
---
## Quick Reference Checklist
Before submitting code changes:
- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format

CLAUDE.md

@@ -1,325 +0,0 @@
[Deleted file; its 325 lines were identical, line for line, to AGENTS.md above.]

@@ -7,5 +7,5 @@ Please see the following guides for migrating LangChain code:
* Migrating from [LangChain 0.0.x Chains](https://python.langchain.com/docs/versions/migrating_chains/)
* Upgrade to [LangGraph Memory](https://python.langchain.com/docs/versions/migrating_memory/)
The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help you automatically upgrade your code to use non-deprecated imports.
The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help you automatically upgrade your code to use non-deprecated imports.
This will be especially helpful if you're still on either version 0.0.x or 0.1.x of LangChain.


@@ -1,4 +1,4 @@
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck lint lint_package lint_tests format format_diff
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck spell_check spell_fix lint lint_package lint_tests format format_diff
.EXPORT_ALL_VARIABLES:
UV_FROZEN = true
@@ -8,6 +8,9 @@ help: Makefile
@printf "\n\033[1mUsage: make <TARGETS> ...\033[0m\n\n\033[1mTargets:\033[0m\n\n"
@sed -n 's/^## //p' $< | awk -F':' '{printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort | sed -e 's/^/ /'
## all: Default target, shows help.
all: help
## clean: Clean documentation and API documentation artifacts.
clean: docs_clean api_docs_clean
@@ -16,67 +19,49 @@ clean: docs_clean api_docs_clean
######################
## docs_build: Build the documentation.
docs_build: docs_clean
@echo "📚 Building LangChain documentation..."
docs_build:
cd docs && make build
@echo "✅ Documentation build complete!"
## docs_clean: Clean the documentation build artifacts.
docs_clean:
@echo "🧹 Cleaning documentation artifacts..."
cd docs && make clean
@echo "✅ LangChain documentation cleaned"
## docs_linkcheck: Run linkchecker on the documentation.
docs_linkcheck:
@echo "🔗 Checking documentation links..."
@if [ -d _dist/docs ]; then \
uv run --group test linkchecker _dist/docs/ --ignore-url node_modules; \
else \
echo "⚠️ Documentation not built. Run 'make docs_build' first."; \
exit 1; \
fi
@echo "✅ Link check complete"
uv run --no-group test linkchecker _dist/docs/ --ignore-url node_modules
## api_docs_build: Build the API Reference documentation.
api_docs_build: clean
@echo "📖 Building API Reference documentation..."
uv pip install -e libs/cli
uv run --group docs python docs/api_reference/create_api_rst.py
cd docs/api_reference && uv run --group docs make html
uv run --group docs python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
@echo "✅ API documentation built"
@echo "🌐 Opening documentation in browser..."
open docs/api_reference/_build/html/reference.html
api_docs_build:
uv run --no-group test python docs/api_reference/create_api_rst.py
cd docs/api_reference && uv run --no-group test make html
uv run --no-group test python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
API_PKG ?= text-splitters
api_docs_quick_preview: clean
@echo "⚡ Building quick API preview for $(API_PKG)..."
uv run --group docs python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && uv run --group docs make html
uv run --group docs python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
@echo "🌐 Opening preview in browser..."
api_docs_quick_preview:
uv run --no-group test python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && uv run make html
uv run --no-group test python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
open docs/api_reference/_build/html/reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
@echo "🧹 Cleaning API documentation artifacts..."
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
rm -f docs/api_reference/index.md
@echo "✅ API documentation cleaned"
rm docs/api_reference/index.md
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:
@echo "🔗 Checking API documentation links..."
@if [ -f docs/api_reference/_build/html/index.html ]; then \
uv run --group test linkchecker docs/api_reference/_build/html/index.html; \
else \
echo "⚠️ API documentation not built. Run 'make api_docs_build' first."; \
exit 1; \
fi
@echo "✅ API link check complete"
uv run --no-group test linkchecker docs/api_reference/_build/html/index.html
## spell_check: Run codespell on the project.
spell_check:
uv run --no-group test codespell --toml pyproject.toml
## spell_fix: Run codespell on the project and fix the errors.
spell_fix:
uv run --no-group test codespell --toml pyproject.toml -w
######################
# LINTING AND FORMATTING
@@ -84,24 +69,19 @@ api_docs_linkcheck:
## lint: Run linting on the project.
lint lint_package lint_tests:
@echo "🔍 Running code linting and checks..."
uv run --group lint ruff check docs cookbook
uv run --group lint ruff format docs cookbook --diff
uv run --group lint ruff check --select I docs cookbook
git --no-pager grep 'from langchain import' docs cookbook | grep -vE 'from langchain import (hub)' && echo "Error: no importing langchain from root in docs, except for hub" && exit 1 || exit 0
git --no-pager grep 'api.python.langchain.com' -- docs/docs ':!docs/docs/additional_resources/arxiv_references.mdx' ':!docs/docs/integrations/document_loaders/sitemap.ipynb' || exit 0 && \
echo "Error: you should link python.langchain.com/api_reference, not api.python.langchain.com in the docs" && \
exit 1
@echo "✅ Linting complete"
## format: Format the project files.
format format_diff:
@echo "🎨 Formatting project files..."
uv run --group lint ruff format docs cookbook
uv run --group lint ruff check --fix docs cookbook
@echo "✅ Formatting complete"
uv run --group lint ruff check --select I --fix docs cookbook
update-package-downloads:
@echo "📊 Updating package download statistics..."
uv run python docs/scripts/packages_yml_get_downloads.py
@echo "✅ Package downloads updated"

README.md
View File

@@ -1,75 +1,83 @@
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="docs/static/img/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset="docs/static/img/logo-light.svg">
<img alt="LangChain Logo" src="docs/static/img/logo-dark.svg" width="80%">
</picture>
</p>
<picture>
<source media="(prefers-color-scheme: light)" srcset="docs/static/img/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset="docs/static/img/logo-light.svg">
<img alt="LangChain Logo" src="docs/static/img/logo-dark.svg" width="80%">
</picture>
<p align="center">
The platform for reliable agents.
</p>
<div>
<br>
</div>
<p align="center">
<a href="https://opensource.org/licenses/MIT" target="_blank">
<img src="https://img.shields.io/pypi/l/langchain-core?style=flat-square" alt="PyPI - License">
</a>
<a href="https://pypistats.org/packages/langchain-core" target="_blank">
<img src="https://img.shields.io/pepy/dt/langchain" alt="PyPI - Downloads">
</a>
<a href="https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain" target="_blank">
<img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square" alt="Open in Dev Containers">
</a>
<a href="https://codespaces.new/langchain-ai/langchain" target="_blank">
<img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20">
</a>
<a href="https://codspeed.io/langchain-ai/langchain" target="_blank">
<img src="https://img.shields.io/endpoint?url=https://codspeed.io/badge.json" alt="CodSpeed Badge">
</a>
<a href="https://twitter.com/langchainai" target="_blank">
<img src="https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI" alt="Twitter / X">
</a>
</p>
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-core?style=flat-square)](https://pypistats.org/packages/langchain-core)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![CodSpeed Badge](https://img.shields.io/endpoint?url=https://codspeed.io/badge.json)](https://codspeed.io/langchain-ai/langchain)
LangChain is a framework for building LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
LangChain is a framework for building LLM-powered applications. It helps you chain
together interoperable components and third-party integrations to simplify AI
application development — all while future-proofing decisions as the underlying
technology evolves.
```bash
pip install -U langchain
```
---
**Documentation**: To learn more about LangChain, check out [the docs](https://python.langchain.com/docs/introduction/).
If you're looking for more advanced customization or agent orchestration, check out [LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building controllable agent workflows.
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
To learn more about LangChain, check out
[the docs](https://python.langchain.com/docs/introduction/). If you're looking for more
advanced customization or agent orchestration, check out
[LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building
controllable agent workflows.
## Why use LangChain?
LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
LangChain helps developers build applications powered by LLMs through a standard
interface for models, embeddings, vector stores, and more.
Use LangChain for:
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly — LangChain's abstractions keep you moving without losing momentum.
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and
external / internal systems, drawing from LangChain's vast library of integrations with
model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team
experiments to find the best choice for your application's needs. As the industry
frontier evolves, adapt quickly — LangChain's abstractions keep you moving without
losing momentum.
## LangChain's ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
While the LangChain framework can be used standalone, it also integrates seamlessly
with any LangChain product, giving developers a full suite of tools when building LLM
applications.
To improve your LLM application development, pair LangChain with:
- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows — and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- [LangGraph Platform](https://docs.langchain.com/langgraph-platform) - Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
- [LangSmith](http://www.langchain.com/langsmith) - Helpful for agent evals and
observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain
visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can
reliably handle complex tasks with LangGraph, our low-level agent orchestration
framework. LangGraph offers customizable architecture, long-term memory, and
human-in-the-loop workflows — and is trusted in production by companies like LinkedIn,
Uber, Klarna, and GitLab.
- [LangGraph Platform](https://langchain-ai.github.io/langgraph/concepts/#langgraph-platform) - Deploy
and scale agents effortlessly with a purpose-built deployment platform for long
running, stateful workflows. Discover, reuse, configure, and share agents across
teams — and iterate quickly with visual prototyping in
[LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
## Additional resources
- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key concepts behind the LangChain framework.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on navigating base packages and integrations for LangChain.
- [Chat LangChain](https://chat.langchain.com/): Ask questions & chat with our documentation.
- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with
guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code
snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key
concepts behind the LangChain framework.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on
navigating base packages and integrations for LangChain.

View File

@@ -4,14 +4,13 @@ LangChain has a large ecosystem of integrations with various external resources
## Best practices
When building such applications, developers should remember to follow good security practices:
When building such applications developers should remember to follow good security practices:
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc., as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc. as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Risks of not doing so include, but are not limited to:
* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.
@@ -22,58 +21,65 @@ Example scenarios with mitigation strategies:
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.
If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
If you're building applications that access external resources like file systems, APIs
or databases, consider speaking with your company's security team to determine how to best
design and secure your applications.
## Reporting OSS Vulnerabilities
LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
a bounty program for our open source projects.
Please report security vulnerabilities associated with the LangChain
open source projects at [huntr](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true).
Please report security vulnerabilities associated with the LangChain
open source projects by visiting the following link:
[https://huntr.com/bounties/disclose/](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true)
Before reporting a vulnerability, please review:
1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://docs.langchain.com/oss/python/contributing/code#supporting-packages) monorepo structure.
3) The [Best Practices](#best-practices) above to understand what we consider to be a security vulnerability vs. developer responsibility.
2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
3) The [Best practices](#best-practices) above to
understand what we consider to be a security vulnerability vs. developer
responsibility.
### In-Scope Targets
The following packages and repositories are eligible for bug bounties:
* langchain-core
* langchain (see exceptions)
* langchain-community (see exceptions)
* langgraph
* langserve
- langchain-core
- langchain (see exceptions)
- langchain-community (see exceptions)
- langgraph
- langserve
### Out of Scope Targets
All out of scope targets defined by huntr as well as:
* **langchain-experimental**: This repository is for experimental code and is not
- **langchain-experimental**: This repository is for experimental code and is not
eligible for bug bounties (see [package warning](https://pypi.org/project/langchain-experimental/)), bug reports to it will be marked as interesting or waste of
time and published with no bounty attached.
* **tools**: Tools in either langchain or langchain-community are not eligible for bug
- **tools**: Tools in either langchain or langchain-community are not eligible for bug
bounties. This includes the following directories
* libs/langchain/langchain/tools
* libs/community/langchain_community/tools
* Please review the [Best Practices](#best-practices)
- libs/langchain/langchain/tools
- libs/community/langchain_community/tools
- Please review the [best practices](#best-practices)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
for the security of their tools.
* Code documented with security notices. This will be decided on a case-by-case basis, but likely will not be eligible for a bounty as the code is already
- Code documented with security notices. This will be decided on a case-by-case
basis, but likely will not be eligible for a bounty as the code is already
documented with guidelines for developers that should be followed for making their
application secure.
* Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
- Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
## Reporting LangSmith Vulnerabilities
Please report security vulnerabilities associated with LangSmith by email to `security@langchain.dev`.
* LangSmith site: [https://smith.langchain.com](https://smith.langchain.com)
* SDK client: [https://github.com/langchain-ai/langsmith-sdk](https://github.com/langchain-ai/langsmith-sdk)
- LangSmith site: https://smith.langchain.com
- SDK client: https://github.com/langchain-ai/langsmith-sdk
### Other Security Concerns

View File

@@ -47,7 +47,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "6a75a5c6-34ee-4ab9-a664-d9b432d812ee",
"metadata": {},
"outputs": [
@@ -61,7 +61,7 @@
],
"source": [
"# Local\n",
"from langchain_ollama import ChatOllama\n",
"from langchain_community.chat_models import ChatOllama\n",
"\n",
"llama2_chat = ChatOllama(model=\"llama2:13b-chat\")\n",
"llama2_code = ChatOllama(model=\"codellama:7b-instruct\")\n",

View File

@@ -144,7 +144,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"id": "kWDWfSDBMPl8",
"metadata": {},
"outputs": [
@@ -185,7 +185,7 @@
" )\n",
" # Text summary chain\n",
" model = VertexAI(\n",
" temperature=0, model_name=\"gemini-2.5-flash\", max_tokens=1024\n",
" temperature=0, model_name=\"gemini-pro\", max_tokens=1024\n",
" ).with_fallbacks([empty_response])\n",
" summarize_chain = {\"element\": lambda x: x} | prompt | model | StrOutputParser()\n",
"\n",
@@ -235,7 +235,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"id": "PeK9bzXv3olF",
"metadata": {},
"outputs": [],
@@ -254,7 +254,7 @@
"\n",
"def image_summarize(img_base64, prompt):\n",
" \"\"\"Make image summary\"\"\"\n",
" model = ChatVertexAI(model=\"gemini-2.5-flash\", max_tokens=1024)\n",
" model = ChatVertexAI(model=\"gemini-pro-vision\", max_tokens=1024)\n",
"\n",
" msg = model.invoke(\n",
" [\n",
@@ -394,7 +394,7 @@
"# The vectorstore to use to index the summaries\n",
"vectorstore = Chroma(\n",
" collection_name=\"mm_rag_cj_blog\",\n",
" embedding_function=VertexAIEmbeddings(model_name=\"text-embedding-005\"),\n",
" embedding_function=VertexAIEmbeddings(model_name=\"textembedding-gecko@latest\"),\n",
")\n",
"\n",
"# Create retriever\n",
@@ -431,7 +431,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 9,
"id": "GlwCErBaCKQW",
"metadata": {},
"outputs": [],
@@ -553,7 +553,7 @@
" \"\"\"\n",
"\n",
" # Multi-modal LLM\n",
" model = ChatVertexAI(temperature=0, model_name=\"gemini-2.5-flash\", max_tokens=1024)\n",
" model = ChatVertexAI(temperature=0, model_name=\"gemini-pro-vision\", max_tokens=1024)\n",
"\n",
" # RAG pipeline\n",
" chain = (\n",

View File

@@ -63,5 +63,4 @@ Notebook | Description
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform Retrieval-Augmented-Generation (RAG) on locally downloaded open-source models using LangChain and open-source tools, executed on an Intel Xeon CPU. We show an example of how to apply RAG to the Llama 2 model, enabling it to answer queries related to the Intel Q1 2024 earnings release.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
[contextual_rag.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/contextual_rag.ipynb) | Performs contextual retrieval-augmented generation (RAG) prepending chunk-specific explanatory context to each chunk before embedding.
[rag-agents-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/local_rag_agents_intel_cpu.ipynb) | Build a RAG agent locally with open source models that routes questions through one of two paths to find answers. The agent generates answers based on documents retrieved from either the vector database or retrieved from web search. If the vector database lacks relevant information, the agent opts for web search. Open-source models for LLM and embeddings are used locally on an Intel Xeon CPU to execute this pipeline.
[rag_mlflow_tracking_evaluation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_mlflow_tracking_evaluation.ipynb) | Guide on how to create a RAG pipeline and track + evaluate it with MLflow.

View File

@@ -204,14 +204,14 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"id": "523e6ed2-2132-4748-bdb7-db765f20648d",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatOllama\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_ollama import ChatOllama"
"from langchain_core.prompts import ChatPromptTemplate"
]
},
{

View File

@@ -22,19 +22,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e8d63d14-138d-4aa5-a741-7fd3537d00aa",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 16,
"id": "2e87c10a",
"metadata": {},
"outputs": [],
@@ -49,7 +37,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 17,
"id": "0b7b772b",
"metadata": {},
"outputs": [],
@@ -66,10 +54,19 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 18,
"id": "f2675861",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"\n",
@@ -84,7 +81,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"id": "bc5403d4",
"metadata": {},
"outputs": [],
@@ -96,25 +93,17 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"id": "1431cded",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"USER_AGENT environment variable not set, consider setting it to identify your requests.\n"
]
}
],
"outputs": [],
"source": [
"from langchain_community.document_loaders import WebBaseLoader"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 6,
"id": "915d3ff3",
"metadata": {},
"outputs": [],
@@ -124,20 +113,16 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 7,
"id": "96a2edf8",
"metadata": {},
"outputs": [
{
"name": "stderr",
"name": "stdout",
"output_type": "stream",
"text": [
"Created a chunk of size 2122, which is longer than the specified 1000\n",
"Created a chunk of size 3187, which is longer than the specified 1000\n",
"Created a chunk of size 1017, which is longer than the specified 1000\n",
"Created a chunk of size 1049, which is longer than the specified 1000\n",
"Created a chunk of size 1256, which is longer than the specified 1000\n",
"Created a chunk of size 2321, which is longer than the specified 1000\n"
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
@@ -150,6 +135,14 @@
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "71ecef90",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "c0a6c031",
@@ -160,30 +153,31 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 43,
"id": "eb142786",
"metadata": {},
"outputs": [],
"source": [
"# Import things that are needed generically\n",
"from langchain.agents import Tool"
"from langchain.agents import AgentType, Tool, initialize_agent\n",
"from langchain_openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 44,
"id": "850bc4e9",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name=\"state_of_union_qa_system\",\n",
" name=\"State of Union QA System\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question.\",\n",
" ),\n",
" Tool(\n",
" name=\"ruff_qa_system\",\n",
" name=\"Ruff QA System\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question.\",\n",
" ),\n",
@@ -192,116 +186,94 @@
},
{
"cell_type": "code",
"execution_count": 11,
"id": "70c461d8-aaca-4f2a-9a93-bf35841cc615",
"execution_count": 45,
"id": "fc47f230",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
"# Construct the agent. We will use the default agent type here.\n",
"# See documentation for a full list of options.\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "a6d2b911-3044-4430-a35b-75832bb45334",
"execution_count": 46,
"id": "10ca2db8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What did biden say about ketanji brown jackson in the state of the union address?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" state_of_union_qa_system (call_26QlRdsptjEJJZjFsAUjEbaH)\n",
" Call ID: call_26QlRdsptjEJJZjFsAUjEbaH\n",
" Args:\n",
" __arg1: What did Biden say about Ketanji Brown Jackson in the state of the union address?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: state_of_union_qa_system\n",
"\n",
" Biden said that he nominated Ketanji Brown Jackson for the United States Supreme Court and praised her as one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out what Biden said about Ketanji Brown Jackson in the State of the Union address.\n",
"Action: State of Union QA System\n",
"Action Input: What did Biden say about Ketanji Brown Jackson in the State of the Union address?\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\u001b[0m\n",
"\n",
"In the State of the Union address, Biden said that he nominated Ketanji Brown Jackson for the United States Supreme Court and praised her as one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence.\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\""
]
},
"execution_count": 46,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"What did biden say about ketanji brown jackson in the state of the union address?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
"agent.run(\n",
" \"What did biden say about ketanji brown jackson in the state of the union address?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "e836b4cd-abf7-49eb-be0e-b9ad501213f3",
"execution_count": 47,
"id": "4e91b811",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Why use ruff over flake8?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" ruff_qa_system (call_KqDoWeO9bo9OAXdxOsCb6msC)\n",
" Call ID: call_KqDoWeO9bo9OAXdxOsCb6msC\n",
" Args:\n",
" __arg1: Why use ruff over flake8?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: ruff_qa_system\n",
"\n",
"\n",
"There are a few reasons why someone might choose to use Ruff over Flake8:\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out the advantages of using ruff over flake8\n",
"Action: Ruff QA System\n",
"Action Input: What are the advantages of using ruff over flake8?\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.\u001b[0m\n",
"\n",
"1. Larger rule set: Ruff implements over 800 rules, while Flake8 only implements around 200. This means that Ruff can catch more potential issues in your code.\n",
"\n",
"2. Better compatibility with other tools: Ruff is designed to work well with other tools like Black, isort, and type checkers like Mypy. This means that you can use Ruff alongside these tools to get more comprehensive feedback on your code.\n",
"\n",
"3. Automatic fixing of lint violations: Unlike Flake8, Ruff is capable of automatically fixing its own lint violations. This can save you time and effort when fixing issues in your code.\n",
"\n",
"4. Native implementation of popular Flake8 plugins: Ruff re-implements some of the most popular Flake8 plugins natively, which means you don't have to install and configure multiple plugins to get the same functionality.\n",
"\n",
"Overall, Ruff offers a more comprehensive and user-friendly experience compared to Flake8, making it a popular choice for many developers.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"You might choose to use Ruff over Flake8 for several reasons:\n",
"\n",
"1. Ruff has a much larger rule set, implementing over 800 rules compared to Flake8's roughly 200, so it can catch more potential issues.\n",
"2. Ruff is designed to work better with other tools like Black, isort, and type checkers like Mypy, providing more comprehensive code feedback.\n",
"3. Ruff can automatically fix its own lint violations, which Flake8 cannot, saving time and effort.\n",
"4. Ruff natively implements some popular Flake8 plugins, so you don't need to install and configure multiple plugins separately.\n",
"\n",
"Overall, Ruff offers a more comprehensive and user-friendly experience compared to Flake8.\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.'"
]
},
"execution_count": 47,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"Why use ruff over flake8?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
"agent.run(\"Why use ruff over flake8?\")"
]
},
{
@@ -324,20 +296,20 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 48,
"id": "f59b377e",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name=\"state_of_union_qa_system\",\n",
" name=\"State of Union QA System\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question.\",\n",
" return_direct=True,\n",
" ),\n",
" Tool(\n",
" name=\"ruff_qa_system\",\n",
" name=\"Ruff QA System\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question.\",\n",
" return_direct=True,\n",
@@ -347,92 +319,90 @@
},
{
"cell_type": "code",
"execution_count": 15,
"id": "06f69c0f-c83d-4b7f-a1c8-7614aced3bae",
"execution_count": 49,
"id": "8615707a",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "a6b38c12-ac25-43c0-b9c2-2b1985ab4825",
"execution_count": 50,
"id": "36e718a9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What did biden say about ketanji brown jackson in the state of the union address?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" state_of_union_qa_system (call_yjxh11OnZiauoyTAn9npWdxj)\n",
" Call ID: call_yjxh11OnZiauoyTAn9npWdxj\n",
" Args:\n",
" __arg1: What did Biden say about Ketanji Brown Jackson in the state of the union address?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: state_of_union_qa_system\n",
"\n",
" Biden said that he nominated Ketanji Brown Jackson for the United States Supreme Court and praised her as one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence.\n"
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out what Biden said about Ketanji Brown Jackson in the State of the Union address.\n",
"Action: State of Union QA System\n",
"Action Input: What did Biden say about Ketanji Brown Jackson in the State of the Union address?\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\""
]
},
"execution_count": 50,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"What did biden say about ketanji brown jackson in the state of the union address?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
"agent.run(\n",
" \"What did biden say about ketanji brown jackson in the state of the union address?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "88f08d86-7972-4148-8128-3ac8898ad68a",
"execution_count": 51,
"id": "edfd0a1a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Why use ruff over flake8?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" ruff_qa_system (call_GiWWfwF6wbbRFQrHlHbhRtGW)\n",
" Call ID: call_GiWWfwF6wbbRFQrHlHbhRtGW\n",
" Args:\n",
" __arg1: What are the advantages of using ruff over flake8 for Python linting?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: ruff_qa_system\n",
"\n",
" Ruff has a larger rule set, supports automatic fixing of lint violations, and does not require the installation of additional plugins. It also has better compatibility with Black and can be used alongside a type checker for more comprehensive code analysis.\n"
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out the advantages of using ruff over flake8\n",
"Action: Ruff QA System\n",
"Action Input: What are the advantages of using ruff over flake8?\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.'"
]
},
"execution_count": 51,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"Why use ruff over flake8?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
"agent.run(\"Why use ruff over flake8?\")"
]
},
{
@@ -447,19 +417,19 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 57,
"id": "d397a233",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name=\"state_of_union_qa_system\",\n",
" name=\"State of Union QA System\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question, not referencing any obscure pronouns from the conversation before.\",\n",
" ),\n",
" Tool(\n",
" name=\"ruff_qa_system\",\n",
" name=\"Ruff QA System\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question, not referencing any obscure pronouns from the conversation before.\",\n",
" ),\n",
@@ -468,60 +438,60 @@
},
{
"cell_type": "code",
"execution_count": 19,
"id": "41743f29-150d-40ba-aa8e-3a63c32216aa",
"execution_count": 58,
"id": "06157240",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
"# Construct the agent. We will use the default agent type here.\n",
"# See documentation for a full list of options.\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e20e81dd-284a-4d07-9160-63a84b65cba8",
"execution_count": 59,
"id": "b492b520",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What tool does ruff use to run over Jupyter Notebooks? Did the president mention that tool in the state of the union?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" ruff_qa_system (call_VOnxiOEehauQyVOTjDJkR5L2)\n",
" Call ID: call_VOnxiOEehauQyVOTjDJkR5L2\n",
" Args:\n",
" __arg1: What tool does ruff use to run over Jupyter Notebooks?\n",
" state_of_union_qa_system (call_AbSsXAxwe4JtCRhga926SxOZ)\n",
" Call ID: call_AbSsXAxwe4JtCRhga926SxOZ\n",
" Args:\n",
" __arg1: Did the president mention the tool that ruff uses to run over Jupyter Notebooks in the state of the union?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: state_of_union_qa_system\n",
"\n",
" No, the president did not mention the tool that ruff uses to run over Jupyter Notebooks in the state of the union.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out what tool ruff uses to run over Jupyter Notebooks, and if the president mentioned it in the state of the union.\n",
"Action: Ruff QA System\n",
"Action Input: What tool does ruff use to run over Jupyter Notebooks?\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m Ruff is integrated into nbQA, a tool for running linters and code formatters over Jupyter Notebooks. After installing ruff and nbqa, you can run Ruff over a notebook like so: > nbqa ruff Untitled.html\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to find out if the president mentioned this tool in the state of the union.\n",
"Action: State of Union QA System\n",
"Action Input: Did the president mention nbQA in the state of the union?\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m No, the president did not mention nbQA in the state of the union.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: No, the president did not mention nbQA in the state of the union.\u001b[0m\n",
"\n",
"Ruff does not support source.organizeImports and source.fixAll code actions in Jupyter Notebooks. Additionally, the president did not mention the tool that ruff uses to run over Jupyter Notebooks in the state of the union.\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'No, the president did not mention nbQA in the state of the union.'"
]
},
"execution_count": 59,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"What tool does ruff use to run over Jupyter Notebooks? Did the president mention that tool in the state of the union?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
"agent.run(\n",
" \"What tool does ruff use to run over Jupyter Notebooks? Did the president mention that tool in the state of the union?\"\n",
")"
]
},
{
@@ -549,7 +519,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
"version": "3.10.1"
}
},
"nbformat": 4,

View File

@@ -229,9 +229,9 @@
" \"smoke\",\n",
" \"temp\",\n",
"]\n",
"assert all([column in df.columns for column in expected_columns]), (\n",
" \"DataFrame does not have the expected columns\"\n",
")"
"assert all(\n",
" [column in df.columns for column in expected_columns]\n",
"), \"DataFrame does not have the expected columns\""
]
},
{

View File

@@ -487,7 +487,7 @@
" print(\"*\" * 40)\n",
" print(\n",
" colored(\n",
" f\"After {i + 1} observations, Tommie's summary is:\\n{tommie.get_summary(force_refresh=True)}\",\n",
" f\"After {i+1} observations, Tommie's summary is:\\n{tommie.get_summary(force_refresh=True)}\",\n",
" \"blue\",\n",
" )\n",
" )\n",

View File

@@ -20,7 +20,11 @@
"cell_type": "markdown",
"id": "5939a54c-3198-4ba4-8346-1cc088c473c0",
"metadata": {},
"source": "##### You can embed text in the same VectorDB space as images, and retrieve text and images as well based on input text or image.\n##### Following link demonstrates that.\n<a> https://python.langchain.com/v0.2/docs/integrations/text_embedding/open_clip/ </a>"
"source": [
"##### You can embed text in the same VectorDB space as images, and retreive text and images as well based on input text or image.\n",
"##### Following link demonstrates that.\n",
"<a> https://python.langchain.com/v0.2/docs/integrations/text_embedding/open_clip/ </a>"
]
},
{
"cell_type": "markdown",
@@ -385,7 +389,7 @@
" ax = axs[idx]\n",
" ax.imshow(img)\n",
" # Assuming similarity is not available in the new data, removed sim_score\n",
" ax.title.set_text(f\"\\nProduct ID: {data['id']}\\n Score: {score}\")\n",
" ax.title.set_text(f\"\\nProduct ID: {data[\"id\"]}\\n Score: {score}\")\n",
" ax.axis(\"off\") # Turn off axis\n",
"\n",
" # Hide any remaining empty subplots\n",
@@ -596,4 +600,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -79,17 +79,6 @@
"tool_executor = ToolExecutor(tools)"
]
},
{
"cell_type": "markdown",
"id": "168152fc",
"metadata": {},
"source": [
"📘 **Note on `SystemMessage` usage with LangGraph-based agents**\n",
"\n",
"When constructing the `messages` list for an agent, you *must* manually include any `SystemMessage`s.\n",
"Unlike some agent executors in LangChain that set a default, LangGraph requires explicit inclusion."
]
},
{
"cell_type": "markdown",
"id": "fe6e8f78-1ef7-42ad-b2bf-835ed5850553",

View File

@@ -148,11 +148,11 @@
"\n",
" instructions = \"None\"\n",
" for i in range(max_meta_iters):\n",
" print(f\"[Episode {i + 1}/{max_meta_iters}]\")\n",
" print(f\"[Episode {i+1}/{max_meta_iters}]\")\n",
" chain = initialize_chain(instructions, memory=None)\n",
" output = chain.predict(human_input=task)\n",
" for j in range(max_iters):\n",
" print(f\"(Step {j + 1}/{max_iters})\")\n",
" print(f\"(Step {j+1}/{max_iters})\")\n",
" print(f\"Assistant: {output}\")\n",
" print(\"Human: \")\n",
" human_input = input()\n",

View File

@@ -183,7 +183,7 @@
"outputs": [],
"source": [
"game_description = f\"\"\"Here is the topic for a Dungeons & Dragons game: {quest}.\n",
" The characters are: {(*character_names,)}.\n",
" The characters are: {*character_names,}.\n",
" The story is narrated by the storyteller, {storyteller_name}.\"\"\"\n",
"\n",
"player_descriptor_system_message = SystemMessage(\n",
@@ -334,7 +334,7 @@
" You are the storyteller, {storyteller_name}.\n",
" Please make the quest more specific. Be creative and imaginative.\n",
" Please reply with the specified quest in {word_limit} words or less. \n",
" Speak directly to the characters: {(*character_names,)}.\n",
" Speak directly to the characters: {*character_names,}.\n",
" Do not add anything else.\"\"\"\n",
" ),\n",
"]\n",

View File

@@ -200,7 +200,7 @@
"outputs": [],
"source": [
"game_description = f\"\"\"Here is the topic for the presidential debate: {topic}.\n",
"The presidential candidates are: {\", \".join(character_names)}.\"\"\"\n",
"The presidential candidates are: {', '.join(character_names)}.\"\"\"\n",
"\n",
"player_descriptor_system_message = SystemMessage(\n",
" content=\"You can add detail to the description of each presidential candidate.\"\n",
@@ -595,7 +595,7 @@
" Frame the debate topic as a problem to be solved.\n",
" Be creative and imaginative.\n",
" Please reply with the specified topic in {word_limit} words or less. \n",
" Speak directly to the presidential candidates: {(*character_names,)}.\n",
" Speak directly to the presidential candidates: {*character_names,}.\n",
" Do not add anything else.\"\"\"\n",
" ),\n",
"]\n",

View File

@@ -215,8 +215,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatOllama\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_ollama import ChatOllama\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Prompt\n",

View File

@@ -395,7 +395,8 @@
"prompt_messages = [\n",
" SystemMessage(\n",
" content=(\n",
" \"You are a world class algorithm to answer questions in a specific format.\"\n",
" \"You are a world class algorithm to answer \"\n",
" \"questions in a specific format.\"\n",
" )\n",
" ),\n",
" HumanMessage(content=\"Answer question using the following context\"),\n",

View File

@@ -25,7 +25,7 @@
" * [Oracle Blockchain](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_blockchain_table.html#GUID-B469E277-978E-4378-A8C1-26D3FF96C9A6)\n",
" * [JSON](https://docs.oracle.com/en/database/oracle/oracle-database/23/adjsn/json-in-oracle-database.html)\n",
"\n",
"This guide demonstrates how Oracle AI Vector Search can be used with LangChain to serve an end-to-end RAG pipeline. This guide goes through examples of:\n",
"This guide demonstrates how Oracle AI Vector Search can be used with Langchain to serve an end-to-end RAG pipeline. This guide goes through examples of:\n",
"\n",
" * Loading the documents from various sources using OracleDocLoader\n",
" * Summarizing them within/outside the database using OracleSummary\n",
@@ -47,21 +47,7 @@
"source": [
"### Prerequisites\n",
"\n",
"You'll need to install `langchain-oracledb` with `python -m pip install -U langchain-oracledb` to use this integration.\n",
"\n",
"The `python-oracledb` driver is installed automatically as a dependency of langchain-oracledb.\n",
"\n",
"```\n",
"$ python -m pip install -U langchain-oracledb\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Demo User\n",
"First, connect as a privileged user to create a demo user with all the required privileges. Change the credentials for your environment. Also set the DEMO_PY_DIR path to a directory on the database host where your model file is located:"
"Please install Oracle Python Client driver to use Langchain with Oracle AI Vector Search. "
]
},
{
@@ -70,30 +56,65 @@
"metadata": {},
"outputs": [],
"source": [
"# pip install oracledb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Demo User\n",
"First, create a demo user with all the required privileges. "
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connection successful!\n",
"User setup done!\n"
]
}
],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"\n",
"# Please update with your SYSTEM (or privileged user) username, password, and database connection string\n",
"username = \"SYSTEM\"\n",
"# Update with your username, password, hostname, and service_name\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
"\n",
"with oracledb.connect(user=username, password=password, dsn=dsn) as connection:\n",
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"\n",
" with connection.cursor() as cursor:\n",
" cursor = conn.cursor()\n",
" try:\n",
" cursor.execute(\n",
" \"\"\"\n",
" begin\n",
" -- Drop user\n",
" execute immediate 'drop user if exists testuser cascade';\n",
"\n",
" begin\n",
" execute immediate 'drop user testuser cascade';\n",
" exception\n",
" when others then\n",
" dbms_output.put_line('Error dropping user: ' || SQLERRM);\n",
" end;\n",
" \n",
" -- Create user and grant privileges\n",
" execute immediate 'create user testuser identified by testuser';\n",
" execute immediate 'grant connect, unlimited tablespace, create credential, create procedure, create any index to testuser';\n",
" execute immediate 'create or replace directory DEMO_PY_DIR as ''/home/yourname/demo/orachain''';\n",
" execute immediate 'create or replace directory DEMO_PY_DIR as ''/scratch/hroy/view_storage/hroy_devstorage/demo/orachain''';\n",
" execute immediate 'grant read, write on directory DEMO_PY_DIR to public';\n",
" execute immediate 'grant create mining model to testuser';\n",
"\n",
" \n",
" -- Network access\n",
" begin\n",
" DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(\n",
@@ -106,7 +127,15 @@
" end;\n",
" \"\"\"\n",
" )\n",
" print(\"User setup done!\")"
" print(\"User setup done!\")\n",
" except Exception as e:\n",
" print(f\"User setup failed with error: {e}\")\n",
" finally:\n",
" cursor.close()\n",
" conn.close()\n",
"except Exception as e:\n",
" print(f\"Connection failed with error: {e}\")\n",
" sys.exit(1)"
]
},
{
@@ -114,13 +143,13 @@
"metadata": {},
"source": [
"## Process Documents using Oracle AI\n",
"Consider the following scenario: users possess documents stored either in an Oracle Database or a file system and intend to utilize this data with Oracle AI Vector Search powered by LangChain.\n",
"Consider the following scenario: users possess documents stored either in an Oracle Database or a file system and intend to utilize this data with Oracle AI Vector Search powered by Langchain.\n",
"\n",
"To prepare the documents for analysis, a comprehensive preprocessing workflow is necessary. Initially, the documents must be retrieved, summarized (if required), and chunked as needed. Subsequent steps involve generating embeddings for these chunks and integrating them into the Oracle AI Vector Store. Users can then conduct semantic searches on this data.\n",
"\n",
"The Oracle AI Vector Search LangChain library encompasses a suite of document processing tools that facilitate document loading, chunking, summary generation, and embedding creation.\n",
"The Oracle AI Vector Search Langchain library encompasses a suite of document processing tools that facilitate document loading, chunking, summary generation, and embedding creation.\n",
"\n",
"In the sections that follow, we will detail the utilization of Oracle AI LangChain APIs to effectively implement each of these processes."
"In the sections that follow, we will detail the utilization of Oracle AI Langchain APIs to effectively implement each of these processes."
]
},
{
@@ -128,24 +157,38 @@
"metadata": {},
"source": [
"### Connect to Demo User\n",
"The following sample code shows how to connect to Oracle Database using the python-oracledb driver. By default, python-oracledb runs in a Thin mode which connects directly to Oracle Database. This mode does not need Oracle Client libraries. However, some additional functionality is available when python-oracledb uses them. Python-oracledb is said to be in Thick mode when Oracle Client libraries are used. Both modes have comprehensive functionality supporting the Python Database API v2.0 Specification. See the following [guide](https://python-oracledb.readthedocs.io/en/latest/user_guide/appendix_a.html#featuresummary) that talks about features supported in each mode. You can switch to Thick mode if you are unable to use Thin mode."
"The following sample code will show how to connect to Oracle Database. By default, python-oracledb runs in a Thin mode which connects directly to Oracle Database. This mode does not need Oracle Client libraries. However, some additional functionality is available when python-oracledb uses them. Python-oracledb is said to be in Thick mode when Oracle Client libraries are used. Both modes have comprehensive functionality supporting the Python Database API v2.0 Specification. See the following [guide](https://python-oracledb.readthedocs.io/en/latest/user_guide/appendix_a.html#featuresummary) that talks about features supported in each mode. You might want to switch to thick-mode if you are unable to use thin-mode."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 45,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connection successful!\n"
]
}
],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"\n",
"# please update with your username, password, and database connection string\n",
"username = \"testuser\"\n",
"# please update with your username, password, hostname and service_name\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
"\n",
"connection = oracledb.connect(user=username, password=password, dsn=dsn)\n",
"print(\"Connection successful!\")"
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"except Exception as e:\n",
" print(\"Connection failed!\")\n",
" sys.exit(1)"
]
},
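If Thin mode cannot be used, python-oracledb can be switched to Thick mode before connecting. A sketch, assuming Oracle Client libraries are installed; the `lib_dir` path and connection details are assumptions and vary by platform:

```python
import oracledb

# Loading the Oracle Client libraries switches python-oracledb to Thick mode.
oracledb.init_oracle_client(lib_dir="/opt/oracle/instantclient")

connection = oracledb.connect(
    user="testuser", password="testuser", dsn="localhost/freepdb1"
)
print(connection.thin)  # False once Thick mode is active
```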
{
@@ -158,12 +201,22 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 46,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Table created and populated.\n"
]
}
],
"source": [
"with connection.cursor() as cursor:\n",
" drop_table_sql = \"\"\"drop table if exists demo_tab\"\"\"\n",
"try:\n",
" cursor = conn.cursor()\n",
"\n",
" drop_table_sql = \"\"\"drop table demo_tab\"\"\"\n",
" cursor.execute(drop_table_sql)\n",
"\n",
" create_table_sql = \"\"\"create table demo_tab (id number, data clob)\"\"\"\n",
@@ -186,9 +239,15 @@
" ]\n",
" cursor.executemany(insert_row_sql, rows_to_insert)\n",
"\n",
"connection.commit()\n",
" conn.commit()\n",
"\n",
"print(\"Table created and populated.\")"
" print(\"Table created and populated.\")\n",
" cursor.close()\n",
"except Exception as e:\n",
" print(\"Table creation failed.\")\n",
" cursor.close()\n",
" conn.close()\n",
" sys.exit(1)"
]
},
{
@@ -202,24 +261,32 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load the ONNX Model\n",
"### Load ONNX Model\n",
"\n",
"Oracle accommodates a variety of embedding providers, enabling you to choose between proprietary database solutions and third-party services such as Oracle Generative AI Service and HuggingFace. This selection dictates the methodology for generating and managing embeddings.\n",
"Oracle accommodates a variety of embedding providers, enabling users to choose between proprietary database solutions and third-party services such as OCIGENAI and HuggingFace. This selection dictates the methodology for generating and managing embeddings.\n",
"\n",
"***Important*** : Should you opt for the database option, you must upload an ONNX model into the Oracle Database. Conversely, if a third-party provider is selected for embedding generation, uploading an ONNX model to Oracle Database is not required.\n",
"***Important*** : Should users opt for the database option, they must upload an ONNX model into the Oracle Database. Conversely, if a third-party provider is selected for embedding generation, uploading an ONNX model to Oracle Database is not required.\n",
"\n",
"A significant advantage of utilizing an ONNX model directly within Oracle Database is the enhanced security and performance it offers by eliminating the need to transmit data to external parties. Additionally, this method avoids the latency typically associated with network or REST API calls.\n",
"A significant advantage of utilizing an ONNX model directly within Oracle is the enhanced security and performance it offers by eliminating the need to transmit data to external parties. Additionally, this method avoids the latency typically associated with network or REST API calls.\n",
"\n",
"Below is the example code to upload an ONNX model into Oracle Database:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 47,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ONNX model loaded.\n"
]
}
],
"source": [
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"\n",
"# please update with your related information\n",
"# make sure that you have onnx file in the system\n",
@@ -227,8 +294,12 @@
"onnx_file = \"tinybert.onnx\"\n",
"model_name = \"demo_model\"\n",
"\n",
"OracleEmbeddings.load_onnx_model(connection, onnx_dir, onnx_file, model_name)\n",
"print(\"ONNX model loaded.\")"
"try:\n",
" OracleEmbeddings.load_onnx_model(conn, onnx_dir, onnx_file, model_name)\n",
" print(\"ONNX model loaded.\")\n",
"except Exception as e:\n",
" print(\"ONNX model loading failed!\")\n",
" sys.exit(1)"
]
},
{
@@ -250,7 +321,8 @@
"metadata": {},
"outputs": [],
"source": [
"with connection.cursor() as cursor:\n",
"try:\n",
" cursor = conn.cursor()\n",
" cursor.execute(\n",
" \"\"\"\n",
" declare\n",
@@ -277,7 +349,12 @@
" params => json(jo.to_string));\n",
" end;\n",
" \"\"\"\n",
" )"
" )\n",
" cursor.close()\n",
" print(\"Credentials created.\")\n",
"except Exception as ex:\n",
" cursor.close()\n",
" raise"
]
},
{
@@ -285,24 +362,33 @@
"metadata": {},
"source": [
"### Load Documents\n",
"You have the flexibility to load documents from either the Oracle Database, a file system, or both, by appropriately configuring the loader parameters. For comprehensive details on these parameters, please consult the [Oracle AI Vector Search Guide](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-73397E89-92FB-48ED-94BB-1AD960C4EA1F).\n",
"Users have the flexibility to load documents from either the Oracle Database, a file system, or both, by appropriately configuring the loader parameters. For comprehensive details on these parameters, please consult the [Oracle AI Vector Search Guide](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-73397E89-92FB-48ED-94BB-1AD960C4EA1F).\n",
"\n",
"A significant advantage of utilizing OracleDocLoader is its capability to process over 150 distinct file formats, eliminating the need for multiple loaders for different document types. For a complete list of the supported formats, please refer to the [Oracle Text Supported Document Formats](https://docs.oracle.com/en/database/oracle/oracle-database/23/ccref/oracle-text-supported-document-formats.html).\n",
"\n",
"Below is a sample code snippet that demonstrates how to use OracleDocLoader:"
"Below is a sample code snippet that demonstrates how to use OracleDocLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 48,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of docs loaded: 3\n"
]
}
],
"source": [
"from langchain_oracledb.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_community.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_core.documents import Document\n",
"\n",
"# loading from Oracle Database table\n",
"# make sure you have the table with this specification\n",
"loader_params = {}\n",
"loader_params = {\n",
" \"owner\": \"testuser\",\n",
" \"tablename\": \"demo_tab\",\n",
@@ -310,7 +396,7 @@
"}\n",
"\n",
"\"\"\" load the docs \"\"\"\n",
"loader = OracleDocLoader(conn=connection, params=loader_params)\n",
"loader = OracleDocLoader(conn=conn, params=loader_params)\n",
"docs = loader.load()\n",
"\n",
"\"\"\" verify \"\"\"\n",
@@ -323,23 +409,23 @@
"metadata": {},
"source": [
"### Generate Summary\n",
"Now that you have loaded the documents, you may want to generate a summary for each document. The Oracle AI Vector Search LangChain library offers a suite of APIs designed for document summarization. It supports multiple summarization providers such as Database, Oracle Generative AI Service, HuggingFace, among others, allowing you to select the provider that best meets their needs. To utilize these capabilities, you must configure the summary parameters as specified. For detailed information on these parameters, please consult the [Oracle AI Vector Search Guide book](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-EC9DDB58-6A15-4B36-BA66-ECBA20D2CE57)."
"Now that the user loaded the documents, they may want to generate a summary for each document. The Oracle AI Vector Search Langchain library offers a suite of APIs designed for document summarization. It supports multiple summarization providers such as Database, OCIGENAI, HuggingFace, among others, allowing users to select the provider that best meets their needs. To utilize these capabilities, users must configure the summary parameters as specified. For detailed information on these parameters, please consult the [Oracle AI Vector Search Guide book](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-EC9DDB58-6A15-4B36-BA66-ECBA20D2CE57)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** You may need to set proxy if you want to use some 3rd party summary generation providers other than Oracle's in-house and default provider: 'database'. If you don't have proxy, please remove the proxy parameter when you instantiate the OracleSummary."
"***Note:*** The users may need to set proxy if they want to use some 3rd party summary generation providers other than Oracle's in-house and default provider: 'database'. If you don't have proxy, please remove the proxy parameter when you instantiate the OracleSummary."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"# proxy to be used when we instantiate summary and embedder objects\n",
"# proxy to be used when we instantiate summary and embedder object\n",
"proxy = \"\""
]
},
@@ -347,16 +433,24 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The following sample code shows how to generate a summary:"
"The following sample code will show how to generate summary:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 49,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of Summaries: 3\n"
]
}
],
"source": [
"from langchain_oracledb.utilities.oracleai import OracleSummary\n",
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_core.documents import Document\n",
"\n",
"# using 'database' provider\n",
@@ -369,7 +463,7 @@
"\n",
"# get the summary instance\n",
"# Remove proxy if not required\n",
"summ = OracleSummary(conn=connection, params=summary_params, proxy=proxy)\n",
"summ = OracleSummary(conn=conn, params=summary_params, proxy=proxy)\n",
"\n",
"list_summary = []\n",
"for doc in docs:\n",
@@ -393,18 +487,26 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 50,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of Chunks: 3\n"
]
}
],
"source": [
"from langchain_oracledb.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_community.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_core.documents import Document\n",
"\n",
"# split by default parameters\n",
"splitter_params = {\"normalize\": \"all\"}\n",
"\n",
"\"\"\" get the splitter instance \"\"\"\n",
"splitter = OracleTextSplitter(conn=connection, params=splitter_params)\n",
"splitter = OracleTextSplitter(conn=conn, params=splitter_params)\n",
"\n",
"list_chunks = []\n",
"for doc in docs:\n",
@@ -421,19 +523,19 @@
"metadata": {},
"source": [
"### Generate Embeddings\n",
"Now that the documents are chunked as per requirements, you may want to generate embeddings for these chunks. Oracle AI Vector Search provides multiple methods for generating embeddings, utilizing either locally hosted ONNX models or third-party APIs. For comprehensive instructions on configuring these alternatives, please refer to the [Oracle AI Vector Search Guide](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-C6439E94-4E86-4ECD-954E-4B73D53579DE)."
"Now that the documents are chunked as per requirements, the users may want to generate embeddings for these chunks. Oracle AI Vector Search provides multiple methods for generating embeddings, utilizing either locally hosted ONNX models or third-party APIs. For comprehensive instructions on configuring these alternatives, please refer to the [Oracle AI Vector Search Guide](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-C6439E94-4E86-4ECD-954E-4B73D53579DE)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** You may need to configure a proxy to utilize third-party embedding generation providers, excluding the 'database' provider that utilizes an ONNX model."
"***Note:*** Users may need to configure a proxy to utilize third-party embedding generation providers, excluding the 'database' provider that utilizes an ONNX model."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
@@ -445,16 +547,24 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The following sample code shows how to generate embeddings:"
"The following sample code will show how to generate embeddings:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 51,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of embeddings: 3\n"
]
}
],
"source": [
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_core.documents import Document\n",
"\n",
"# using ONNX model loaded to Oracle Database\n",
@@ -462,7 +572,7 @@
"\n",
"# get the embedding instance\n",
"# Remove proxy if not required\n",
"embedder = OracleEmbeddings(conn=connection, params=embedder_params, proxy=proxy)\n",
"embedder = OracleEmbeddings(conn=conn, params=embedder_params, proxy=proxy)\n",
"\n",
"embeddings = []\n",
"for doc in docs:\n",
@@ -481,33 +591,33 @@
"metadata": {},
"source": [
"## Create Oracle AI Vector Store\n",
"Now that you know how to use Oracle AI LangChain library APIs individually to process the documents, let us show how to integrate with Oracle AI Vector Store to facilitate the semantic searches."
"Now that you know how to use Oracle AI Langchain library APIs individually to process the documents, let us show how to integrate with Oracle AI Vector Store to facilitate the semantic searches."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, let's import all the dependencies:"
"First, let's import all the dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 52,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"from langchain_oracledb.document_loaders.oracleai import (\n",
"from langchain_community.document_loaders.oracleai import (\n",
" OracleDocLoader,\n",
" OracleTextSplitter,\n",
")\n",
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_oracledb.utilities.oracleai import OracleSummary\n",
"from langchain_oracledb.vectorstores import oraclevs\n",
"from langchain_oracledb.vectorstores.oraclevs import OracleVS\n",
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_community.vectorstores import oraclevs\n",
"from langchain_community.vectorstores.oraclevs import OracleVS\n",
"from langchain_community.vectorstores.utils import DistanceStrategy\n",
"from langchain_core.documents import Document"
]
@@ -516,80 +626,100 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, let's combine all document processing stages together. Here is the sample code:"
"Next, let's combine all document processing stages together. Here is the sample code below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 53,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connection successful!\n",
"ONNX model loaded.\n",
"Number of total chunks with metadata: 3\n"
]
}
],
"source": [
"\"\"\"\n",
"In this sample example, we will use 'database' provider for both summary and embeddings\n",
"so, we don't need to do the following:\n",
"In this sample example, we will use 'database' provider for both summary and embeddings.\n",
"So, we don't need to do the followings:\n",
" - set proxy for 3rd party providers\n",
" - create credential for 3rd party providers\n",
"\n",
"If you choose to use 3rd party provider, please follow the necessary steps for proxy and credential.\n",
"If you choose to use 3rd party provider, \n",
"please follow the necessary steps for proxy and credential.\n",
"\"\"\"\n",
"\n",
"# please update with your username, password, and database connection string\n",
"# oracle connection\n",
"# please update with your username, password, hostname, and service_name\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
"\n",
"with oracledb.connect(user=username, password=password, dsn=dsn) as connection:\n",
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"except Exception as e:\n",
" print(\"Connection failed!\")\n",
" sys.exit(1)\n",
"\n",
" # load onnx model\n",
" # please update with your related information\n",
" onnx_dir = \"DEMO_PY_DIR\"\n",
" onnx_file = \"tinybert.onnx\"\n",
" model_name = \"demo_model\"\n",
" OracleEmbeddings.load_onnx_model(connection, onnx_dir, onnx_file, model_name)\n",
"\n",
"# load onnx model\n",
"# please update with your related information\n",
"onnx_dir = \"DEMO_PY_DIR\"\n",
"onnx_file = \"tinybert.onnx\"\n",
"model_name = \"demo_model\"\n",
"try:\n",
" OracleEmbeddings.load_onnx_model(conn, onnx_dir, onnx_file, model_name)\n",
" print(\"ONNX model loaded.\")\n",
"except Exception as e:\n",
" print(\"ONNX model loading failed!\")\n",
" sys.exit(1)\n",
"\n",
" # params\n",
" # please update necessary fields with related information\n",
" loader_params = {\n",
" \"owner\": \"testuser\",\n",
" \"tablename\": \"demo_tab\",\n",
" \"colname\": \"data\",\n",
" }\n",
" summary_params = {\n",
" \"provider\": \"database\",\n",
" \"glevel\": \"S\",\n",
" \"numParagraphs\": 1,\n",
" \"language\": \"english\",\n",
" }\n",
" splitter_params = {\"normalize\": \"all\"}\n",
" embedder_params = {\"provider\": \"database\", \"model\": \"demo_model\"}\n",
"\n",
" # instantiate loader, summary, splitter, and embedder\n",
" loader = OracleDocLoader(conn=connection, params=loader_params)\n",
" summary = OracleSummary(conn=connection, params=summary_params)\n",
" splitter = OracleTextSplitter(conn=connection, params=splitter_params)\n",
" embedder = OracleEmbeddings(conn=connection, params=embedder_params)\n",
"# params\n",
"# please update necessary fields with related information\n",
"loader_params = {\n",
" \"owner\": \"testuser\",\n",
" \"tablename\": \"demo_tab\",\n",
" \"colname\": \"data\",\n",
"}\n",
"summary_params = {\n",
" \"provider\": \"database\",\n",
" \"glevel\": \"S\",\n",
" \"numParagraphs\": 1,\n",
" \"language\": \"english\",\n",
"}\n",
"splitter_params = {\"normalize\": \"all\"}\n",
"embedder_params = {\"provider\": \"database\", \"model\": \"demo_model\"}\n",
"\n",
" # process the documents\n",
" chunks_with_mdata = []\n",
" for id, doc in enumerate(docs, start=1):\n",
" summ = summary.get_summary(doc.page_content)\n",
" chunks = splitter.split_text(doc.page_content)\n",
" for ic, chunk in enumerate(chunks, start=1):\n",
" chunk_metadata = doc.metadata.copy()\n",
" chunk_metadata[\"id\"] = (\n",
" chunk_metadata[\"_oid\"] + \"$\" + str(id) + \"$\" + str(ic)\n",
" )\n",
" chunk_metadata[\"document_id\"] = str(id)\n",
" chunk_metadata[\"document_summary\"] = str(summ[0])\n",
" chunks_with_mdata.append(\n",
" Document(page_content=str(chunk), metadata=chunk_metadata)\n",
" )\n",
"# instantiate loader, summary, splitter, and embedder\n",
"loader = OracleDocLoader(conn=conn, params=loader_params)\n",
"summary = OracleSummary(conn=conn, params=summary_params)\n",
"splitter = OracleTextSplitter(conn=conn, params=splitter_params)\n",
"embedder = OracleEmbeddings(conn=conn, params=embedder_params)\n",
"\n",
" \"\"\" verify \"\"\"\n",
" print(f\"Number of total chunks with metadata: {len(chunks_with_mdata)}\")"
"# process the documents\n",
"chunks_with_mdata = []\n",
"for id, doc in enumerate(docs, start=1):\n",
" summ = summary.get_summary(doc.page_content)\n",
" chunks = splitter.split_text(doc.page_content)\n",
" for ic, chunk in enumerate(chunks, start=1):\n",
" chunk_metadata = doc.metadata.copy()\n",
" chunk_metadata[\"id\"] = chunk_metadata[\"_oid\"] + \"$\" + str(id) + \"$\" + str(ic)\n",
" chunk_metadata[\"document_id\"] = str(id)\n",
" chunk_metadata[\"document_summary\"] = str(summ[0])\n",
" chunks_with_mdata.append(\n",
" Document(page_content=str(chunk), metadata=chunk_metadata)\n",
" )\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of total chunks with metadata: {len(chunks_with_mdata)}\")"
]
},
{
@@ -603,15 +733,23 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 55,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Vector Store Table: oravs\n"
]
}
],
"source": [
"# create Oracle AI Vector Store\n",
"vectorstore = OracleVS.from_documents(\n",
" chunks_with_mdata,\n",
" embedder,\n",
" client=connection,\n",
" client=conn,\n",
" table_name=\"oravs\",\n",
" distance_strategy=DistanceStrategy.DOT_PRODUCT,\n",
")\n",
@@ -640,12 +778,12 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"oraclevs.create_index(\n",
" connection, vectorstore, params={\"idx_name\": \"hnsw_oravs\", \"idx_type\": \"HNSW\"}\n",
" conn, vectorstore, params={\"idx_name\": \"hnsw_oravs\", \"idx_type\": \"HNSW\"}\n",
")\n",
"\n",
"print(\"Index created.\")"
@@ -655,7 +793,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This example demonstrates the creation of a default HNSW index on embeddings within the 'oravs' table. You may adjust various parameters according to your specific needs. For detailed information on these parameters, please consult the [Oracle AI Vector Search Guide book](https://docs.oracle.com/en/database/oracle/oracle-database/23/vecse/manage-different-categories-vector-indexes.html).\n",
"This example demonstrates the creation of a default HNSW index on embeddings within the 'oravs' table. Users may adjust various parameters according to their specific needs. For detailed information on these parameters, please consult the [Oracle AI Vector Search Guide book](https://docs.oracle.com/en/database/oracle/oracle-database/23/vecse/manage-different-categories-vector-indexes.html).\n",
"\n",
"Additionally, various types of vector indices can be created to meet diverse requirements. More details can be found in our [comprehensive guide](https://python.langchain.com/v0.1/docs/integrations/vectorstores/oracle/).\n"
]
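As one illustration of the adjustable parameters mentioned above, here is a sketch of creating an IVF index instead of the default HNSW, reusing the `connection` and `vectorstore` from earlier cells. The parameter names follow the linked guide; the values are assumptions to tune for your own data:

```python
# A sketch: an IVF index with explicit partitioning and accuracy targets
# instead of the HNSW defaults (values here are assumptions, not recommendations).
oraclevs.create_index(
    connection,
    vectorstore,
    params={
        "idx_name": "ivf_oravs",
        "idx_type": "IVF",
        "neighbor_part": 64,
        "accuracy": 90,
    },
)
print("IVF index created.")
```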
@@ -667,31 +805,44 @@
"## Perform Semantic Search\n",
"All set!\n",
"\n",
"You have successfully processed the documents and stored them in the vector store, followed by the creation of an index to enhance query performance. You are now prepared to proceed with semantic searches.\n",
"We have successfully processed the documents and stored them in the vector store, followed by the creation of an index to enhance query performance. We are now prepared to proceed with semantic searches.\n",
"\n",
"Below is the sample code for this process:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 58,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[Document(page_content='The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table. Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.', metadata={'_oid': '662f2f257677f3c2311a8ff999fd34e5', '_rowid': 'AAAR/xAAEAAAAAnAAC', 'id': '662f2f257677f3c2311a8ff999fd34e5$3$1', 'document_id': '3', 'document_summary': 'Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.\\n\\n'})]\n",
"[]\n",
"[(Document(page_content='The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table. Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.', metadata={'_oid': '662f2f257677f3c2311a8ff999fd34e5', '_rowid': 'AAAR/xAAEAAAAAnAAC', 'id': '662f2f257677f3c2311a8ff999fd34e5$3$1', 'document_id': '3', 'document_summary': 'Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.\\n\\n'}), 0.055675752460956573)]\n",
"[]\n",
"[Document(page_content='If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.', metadata={'_oid': '662f2f253acf96b33b430b88699490a2', '_rowid': 'AAAR/xAAEAAAAAnAAA', 'id': '662f2f253acf96b33b430b88699490a2$1$1', 'document_id': '1', 'document_summary': 'If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.\\n\\n'})]\n",
"[Document(page_content='If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.', metadata={'_oid': '662f2f253acf96b33b430b88699490a2', '_rowid': 'AAAR/xAAEAAAAAnAAA', 'id': '662f2f253acf96b33b430b88699490a2$1$1', 'document_id': '1', 'document_summary': 'If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.\\n\\n'})]\n"
]
}
],
"source": [
"query = \"What is Oracle AI Vector Store?\"\n",
"db_filter = {\"document_id\": \"1\"}\n",
"filter = {\"document_id\": [\"1\"]}\n",
"\n",
"# Similarity search without a filter\n",
"print(vectorstore.similarity_search(query, 1))\n",
"\n",
"# Similarity search with a filter\n",
"print(vectorstore.similarity_search(query, 1, filter=db_filter))\n",
"print(vectorstore.similarity_search(query, 1, filter=filter))\n",
"\n",
"# Similarity search with relevance score\n",
"print(vectorstore.similarity_search_with_score(query, 1))\n",
"\n",
"# Similarity search with relevance score with filter\n",
"print(vectorstore.similarity_search_with_score(query, 1, filter=db_filter))\n",
"print(vectorstore.similarity_search_with_score(query, 1, filter=filter))\n",
"\n",
"# Max marginal relevance search\n",
"print(vectorstore.max_marginal_relevance_search(query, 1, fetch_k=20, lambda_mult=0.5))\n",
@@ -699,7 +850,7 @@
"# Max marginal relevance search with filter\n",
"print(\n",
" vectorstore.max_marginal_relevance_search(\n",
" query, 1, fetch_k=20, lambda_mult=0.5, filter=db_filter\n",
" query, 1, fetch_k=20, lambda_mult=0.5, filter=filter\n",
" )\n",
")"
]
@@ -721,7 +872,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.3"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -552,7 +552,9 @@
"cell_type": "markdown",
"id": "77deb6a0-0950-450a-916a-f2a029676c20",
"metadata": {},
"source": "**Appending all retrieved documents in a single document**"
"source": [
"**Appending all retreived documents in a single document**"
]
},
{
"cell_type": "code",
@@ -756,4 +758,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}
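The renamed heading above refers to collapsing all retrieved documents into one; a minimal sketch of that pattern (the helper name is hypothetical):

```python
from langchain_core.documents import Document

def combine_docs(docs: list[Document]) -> Document:
    # Merge every retrieved chunk into a single document, blank-line separated.
    return Document(page_content="\n\n".join(d.page_content for d in docs))

docs = [Document(page_content="chunk one"), Document(page_content="chunk two")]
print(combine_docs(docs).page_content)
```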

View File

@@ -1,455 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3716230e",
"metadata": {},
"source": [
"# RAG Pipeline with MLflow Tracking, Tracing & Evaluation\n",
"\n",
"This notebook demonstrates how to build a complete Retrieval-Augmented Generation (RAG) pipeline using LangChain and integrate it with MLflow for experiment tracking, tracing, and evaluation.\n",
"\n",
"\n",
"- **RAG Pipeline Construction**: Build a complete RAG system using LangChain components\n",
"- **MLflow Integration**: Track experiments, parameters, and artifacts\n",
"- **Tracing**: Monitor inputs, outputs, retrieved documents, scores, prompts, and timings\n",
"- **Evaluation**: Use MLflow's built-in scorers to assess RAG performance\n",
"- **Best Practices**: Implement proper configuration management and reproducible experiments\n",
"\n",
"We'll build a RAG system that can answer questions about academic papers by:\n",
"1. Loading and chunking documents from ArXiv\n",
"2. Creating embeddings and a vector store\n",
"3. Setting up a retrieval-augmented generation chain\n",
"4. Tracking all experiments with MLflow\n",
"5. Evaluating the system's performance\n",
"\n",
"![System Diagram](https://miro.medium.com/v2/resize:fit:720/format:webp/1*eiw86PP4hrBBxhjTjP0JUQ.png)"
]
},
{
"cell_type": "markdown",
"id": "2f7561c4",
"metadata": {},
"source": [
"#### Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0814ebe9",
"metadata": {},
"outputs": [],
"source": [
"%pip install -U langchain mlflow langchain-community arxiv pymupdf langchain-text-splitters langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "747399b6",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import mlflow\n",
"from mlflow.genai.scorers import RelevanceToQuery, Correctness, ExpectationsGuidelines\n",
"from langchain_community.document_loaders import ArxivLoader\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_core.output_parsers import StrOutputParser"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4141ee05",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = \"<YOUR OPENAI API KEY>\"\n",
"\n",
"mlflow.set_experiment(\"LangChain-RAG-MLflow\")\n",
"mlflow.langchain.autolog()"
]
},
{
"cell_type": "markdown",
"id": "dd5eb41b",
"metadata": {},
"source": [
"Define all hyperparameters and configuration in a centralized dictionary. This makes it easy to:\n",
"- Track different experiment configurations\n",
"- Reproduce results\n",
"- Perform hyperparameter tuning\n",
"\n",
"**Key Parameters**:\n",
"- `chunk_size`: Size of text chunks for document splitting\n",
"- `chunk_overlap`: Overlap between consecutive chunks\n",
"- `retriever_k`: Number of documents to retrieve\n",
"- `embeddings_model`: OpenAI embedding model\n",
"- `llm`: Language model for generation\n",
"- `temperature`: Sampling temperature for the LLM"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6dcdc5d8",
"metadata": {},
"outputs": [],
"source": [
"CONFIG = {\n",
" \"chunk_size\": 400,\n",
" \"chunk_overlap\": 80,\n",
" \"retriever_k\": 3,\n",
" \"embeddings_model\": \"text-embedding-3-small\",\n",
" \"system_prompt\": \"You are a helpful assistant. Use the following context to answer the question. Use three sentences maximum and keep the answer concise.\",\n",
" \"llm\": \"gpt-5-nano\",\n",
" \"temperature\": 0,\n",
"}"
]
},
{
"cell_type": "markdown",
"id": "8a2985f1",
"metadata": {},
"source": [
"#### ArXiv Dcoument Loading and Processing"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1f32aa36",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'Published': '2023-08-02', 'Title': 'Attention Is All You Need', 'Authors': 'Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin', 'Summary': 'The dominant sequence transduction models are based on complex recurrent or\\nconvolutional neural networks in an encoder-decoder configuration. The best\\nperforming models also connect the encoder and decoder through an attention\\nmechanism. We propose a new simple network architecture, the Transformer, based\\nsolely on attention mechanisms, dispensing with recurrence and convolutions\\nentirely. Experiments on two machine translation tasks show these models to be\\nsuperior in quality while being more parallelizable and requiring significantly\\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014\\nEnglish-to-German translation task, improving over the existing best results,\\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\\ntranslation task, our model establishes a new single-model state-of-the-art\\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction\\nof the training costs of the best models from the literature. We show that the\\nTransformer generalizes well to other tasks by applying it successfully to\\nEnglish constituency parsing both with large and limited training data.'}\n"
]
}
],
"source": [
"# Load documents from ArXiv\n",
"loader = ArxivLoader(\n",
" query=\"1706.03762\",\n",
" load_max_docs=1,\n",
")\n",
"docs = loader.load()\n",
"print(docs[0].metadata)\n",
"\n",
"# Split documents into chunks\n",
"splitter = RecursiveCharacterTextSplitter(\n",
" chunk_size=CONFIG[\"chunk_size\"],\n",
" chunk_overlap=CONFIG[\"chunk_overlap\"],\n",
")\n",
"chunks = splitter.split_documents(docs)\n",
"\n",
"\n",
"# Join chunks into a single string\n",
"def join_chunks(chunks):\n",
" return \"\\n\\n\".join([chunk.page_content for chunk in chunks])"
]
},
{
"cell_type": "markdown",
"id": "6e194ab4",
"metadata": {},
"source": [
"#### Vector Store and Retriever Setup"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "26dfbeaa",
"metadata": {},
"outputs": [],
"source": [
"# Create embeddings\n",
"embeddings = OpenAIEmbeddings(model=CONFIG[\"embeddings_model\"])\n",
"\n",
"# Create vector store from documents\n",
"vectorstore = InMemoryVectorStore.from_documents(\n",
" chunks,\n",
" embedding=embeddings,\n",
")\n",
"\n",
"# Create retriever\n",
"retriever = vectorstore.as_retriever(search_kwargs={\"k\": CONFIG[\"retriever_k\"]})"
]
},
{
"cell_type": "markdown",
"id": "bc1f181b",
"metadata": {},
"source": [
"#### RAG Chain Construction using [LCEL](https://python.langchain.com/docs/concepts/lcel/)\n",
"\n",
"Flow:\n",
"1. Query → Retriever (finds relevant chunks)\n",
"2. Chunks → join_chunks (creates context)\n",
"3. Context + Query → Prompt Template\n",
"4. Prompt → Language Model → Response\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "6a810dc3",
"metadata": {},
"outputs": [],
"source": [
"# Initialize the language model\n",
"llm = ChatOpenAI(model=CONFIG[\"llm\"], temperature=CONFIG[\"temperature\"])\n",
"\n",
"# Create the prompt template\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", CONFIG[\"system_prompt\"] + \"\\n\\nContext:\\n{context}\\n\\n\"),\n",
" (\"human\", \"\\n{question}\\n\"),\n",
" ]\n",
")\n",
"\n",
"# Construct the RAG chain\n",
"rag_chain = (\n",
" {\n",
" \"context\": retriever | RunnableLambda(join_chunks),\n",
" \"question\": RunnablePassthrough(),\n",
" }\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c04bd019",
"metadata": {},
"source": [
"#### Prediction Function with MLflow Tracing\n",
"\n",
"Create a prediction function decorated with `@mlflow.trace` to automatically log:\n",
"- Input queries\n",
"- Retrieved documents\n",
"- Generated responses\n",
"- Execution time\n",
"- Chain intermediate steps"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "7b45fc04",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Question: What is the main idea of the paper?\n",
"Response: The main idea is to replace recurrent/convolutional sequence models with a pure attention-based architecture called the Transformer. It uses self-attention to model dependencies between all positions in the input and output, enabling full parallelization and better handling of long-range relations. This approach achieves strong results on translation and can extend to other modalities.\n"
]
}
],
"source": [
"@mlflow.trace\n",
"def predict_fn(question: str) -> str:\n",
" return rag_chain.invoke(question)\n",
"\n",
"\n",
"# Test the prediction function\n",
"sample_question = \"What is the main idea of the paper?\"\n",
"response = predict_fn(sample_question)\n",
"print(f\"Question: {sample_question}\")\n",
"print(f\"Response: {response}\")"
]
},
{
"cell_type": "markdown",
"id": "421469de",
"metadata": {},
"source": [
"#### Evaluation Dataset and Scoring\n",
"\n",
"Define an evaluation dataset and run systematic evaluation using [MLflow's built-in scorers](https://mlflow.org/docs/latest/genai/eval-monitor/scorers/llm-judge/predefined/#available-scorers):\n",
"\n",
"<u>Evaluation Components:</u>\n",
"- **Dataset**: Questions with expected concepts and facts\n",
"- **Scorers**: \n",
" - `RelevanceToQuery`: Measures how relevant the response is to the question\n",
" - `Correctness`: Evaluates factual accuracy of the response\n",
" - `ExpectationsGuidelines`: Checks that output matches expectation guidelines\n",
"\n",
"<u>Best Practices:</u>\n",
"- Create diverse test cases covering different query types\n",
"- Include expected concepts to guide evaluation\n",
"- Use multiple scoring metrics for comprehensive assessment"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5c1dc4f2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2025/08/23 20:14:39 INFO mlflow.models.evaluation.utils.trace: Auto tracing is temporarily enabled during the model evaluation for computing some metrics and debugging. To disable tracing, call `mlflow.autolog(disable=True)`.\n",
"2025/08/23 20:14:39 INFO mlflow.genai.utils.data_validation: Testing model prediction with the first sample in the dataset.\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "2b6c6687efa24796b39c7951d589d481",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Evaluating: 0%| | 0/3 [Elapsed: 00:00, Remaining: ?] "
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"✨ Evaluation completed.\n",
"\n",
"Metrics and evaluation results are logged to the MLflow run:\n",
" Run name: \u001b[94mbaseline_eval\u001b[0m\n",
" Run ID: \u001b[94ma2218d9f24c9415f8040d3b77af103a9\u001b[0m\n",
"\n",
"To view the detailed evaluation results with sample-wise scores,\n",
"open the \u001b[93m\u001b[1mTraces\u001b[0m tab in the Run page in the MLflow UI.\n",
"\n"
]
}
],
"source": [
"# Define evaluation dataset\n",
"eval_dataset = [\n",
" {\n",
" \"inputs\": {\"question\": \"What is the main idea of the paper?\"},\n",
" \"expectations\": {\n",
" \"key_concepts\": [\"attention mechanism\", \"transformer\", \"neural network\"],\n",
" \"expected_facts\": [\n",
" \"attention mechanism is a key component of the transformer model\"\n",
" ],\n",
" \"guidelines\": [\"The response must be factual and concise\"],\n",
" },\n",
" },\n",
" {\n",
" \"inputs\": {\n",
" \"question\": \"What's the difference between a transformer and a recurrent neural network?\"\n",
" },\n",
" \"expectations\": {\n",
" \"key_concepts\": [\"sequential\", \"attention mechanism\", \"hidden state\"],\n",
" \"expected_facts\": [\n",
" \"transformer processes data in parallel while RNN processes data sequentially\"\n",
" ],\n",
" \"guidelines\": [\n",
" \"The response must be factual and focus on the difference between the two models\"\n",
" ],\n",
" },\n",
" },\n",
" {\n",
" \"inputs\": {\"question\": \"What does the attention mechanism do?\"},\n",
" \"expectations\": {\n",
" \"key_concepts\": [\"query\", \"key\", \"value\", \"relationship\", \"similarity\"],\n",
" \"expected_facts\": [\n",
" \"attention allows the model to weigh the importance of different parts of the input sequence when processing it\"\n",
" ],\n",
" \"guidelines\": [\n",
" \"The response must be factual and explain the concept of attention\"\n",
" ],\n",
" },\n",
" },\n",
"]\n",
"\n",
"# Run evaluation with MLflow\n",
"with mlflow.start_run(run_name=\"baseline_eval\") as run:\n",
" # Log configuration parameters\n",
" mlflow.log_params(CONFIG)\n",
"\n",
" # Run evaluation\n",
" results = mlflow.genai.evaluate(\n",
" data=eval_dataset,\n",
" predict_fn=predict_fn,\n",
" scorers=[RelevanceToQuery(), Correctness(), ExpectationsGuidelines()],\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "52b137c7",
"metadata": {},
"source": [
"#### Launch MLflow UI to check out the results\n",
"\n",
"<u>What you'll see in the UI:</u>\n",
"- **Experiments**: Compare different RAG configurations\n",
"- **Runs**: Individual experiment runs with metrics and parameters\n",
"- **Traces**: Detailed execution traces showing retrieval and generation steps\n",
"- **Evaluation Results**: Scoring metrics and detailed comparisons\n",
"- **Artifacts**: Saved models, datasets, and other files\n",
"\n",
"Navigate to `http://localhost:5000` after running the command below."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "817c3799",
"metadata": {},
"outputs": [],
"source": [
"!mlflow ui"
]
},
{
"cell_type": "markdown",
"id": "c75861e3",
"metadata": {},
"source": [
"You should see something like this\n",
"\n",
"![MLflow UI image](https://miro.medium.com/v2/resize:fit:720/format:webp/1*Cx7MMy53pAP7150x_hvztA.png)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -53,7 +53,7 @@
"id": "f5ccda4e-7af5-4355-b9c4-25547edf33f9",
"metadata": {},
"source": [
"Let's first load up this paper, and split into text chunks of size 1000."
"Lets first load up this paper, and split into text chunks of size 1000."
]
},
{
@@ -241,7 +241,7 @@
"id": "360b2837-8024-47e0-a4ba-592505a9a5c8",
"metadata": {},
"source": [
"With our embedder in place, let's define our retriever:"
"With our embedder in place, lets define our retriever:"
]
},
{
@@ -312,7 +312,7 @@
"id": "d84ea8f4-a5de-4d76-b44d-85e56583f489",
"metadata": {},
"source": [
"Let's write our documents into our new store. This will use our embedder on each document."
"Lets write our documents into our new store. This will use our embedder on each document."
]
},
{
@@ -339,7 +339,7 @@
"id": "580bc212-8ecd-4d28-8656-b96fcd0d7eb6",
"metadata": {},
"source": [
"Great! Our retriever is good to go. Let's load up an LLM, that will reason over the retrieved documents:"
"Great! Our retriever is good to go. Lets load up an LLM, that will reason over the retrieved documents:"
]
},
{
@@ -430,7 +430,7 @@
"id": "3bc53602-86d6-420f-91b1-fc2effa7e986",
"metadata": {},
"source": [
"Excellent! Let's ask it a question.\n",
"Excellent! lets ask it a question.\n",
"We will also use a verbose and debug, to check which documents were used by the model to produce the answer."
]
},

View File

@@ -34,7 +34,7 @@
"tools = [multiply, exponentiate, add]\n",
"\n",
"gpt35 = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0).bind_tools(tools)\n",
"claude3 = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\").bind_tools(tools)\n",
"claude3 = ChatAnthropic(model=\"claude-3-sonnet-20240229\").bind_tools(tools)\n",
"llm_with_tools = gpt35.configurable_alternatives(\n",
" ConfigurableField(id=\"llm\"), default_key=\"gpt35\", claude3=claude3\n",
")"
@@ -113,15 +113,14 @@
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\", additional_kwargs={}, response_metadata={}),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_xuNXwm2P6U2Pp2pAbC1sdIBz', 'function': {'arguments': '{\"x\": 3, \"y\": 5}', 'name': 'add'}, 'type': 'function'}, {'id': 'call_0pImUJUDlYa5zfBcxxuvWyYS', 'function': {'arguments': '{\"x\": 8, \"y\": 2.743}', 'name': 'exponentiate'}, 'type': 'function'}, {'id': 'call_yaownQ9TZK0dkqD1KSFyax4H', 'function': {'arguments': '{\"x\": 17.24, \"y\": -918.1241}', 'name': 'add'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 75, 'prompt_tokens': 131, 'total_tokens': 206, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'id': 'chatcmpl-ByJm2qxSWU3oTTSZQv64J4XQKZhA6', 'service_tier': 'default', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run--35fad027-47f7-44d3-aa8b-99f4fc24098c-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 5}, 'id': 'call_xuNXwm2P6U2Pp2pAbC1sdIBz', 'type': 'tool_call'}, {'name': 'exponentiate', 'args': {'x': 8, 'y': 2.743}, 'id': 'call_0pImUJUDlYa5zfBcxxuvWyYS', 'type': 'tool_call'}, {'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'call_yaownQ9TZK0dkqD1KSFyax4H', 'type': 'tool_call'}], usage_metadata={'input_tokens': 131, 'output_tokens': 75, 'total_tokens': 206, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),\n",
" ToolMessage(content='8.0', tool_call_id='call_xuNXwm2P6U2Pp2pAbC1sdIBz'),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='call_0pImUJUDlYa5zfBcxxuvWyYS'),\n",
" ToolMessage(content='-900.8841', tool_call_id='call_yaownQ9TZK0dkqD1KSFyax4H'),\n",
" AIMessage(content='The results are:\\n1. 3 plus 5 is 8.\\n2. 5 raised to the power of 2.743 is approximately 300.04.\\n3. 17.24 minus 918.1241 is approximately -900.88.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 55, 'prompt_tokens': 236, 'total_tokens': 291, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'id': 'chatcmpl-ByJm345MYnpowGS90iAZAlSs7haed', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--5fa66d47-d80e-45d0-9c32-31348c735d72-0', usage_metadata={'input_tokens': 236, 'output_tokens': 55, 'total_tokens': 291, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}"
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6yMU2WsS4Bqgi1WxFHxtfJRc', 'function': {'arguments': '{\"x\": 8, \"y\": 2.743}', 'name': 'exponentiate'}, 'type': 'function'}, {'id': 'call_GAL3dQiKFF9XEV0RrRLPTvVp', 'function': {'arguments': '{\"x\": 17.24, \"y\": -918.1241}', 'name': 'add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 58, 'prompt_tokens': 168, 'total_tokens': 226}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-528302fc-7acf-4c11-82c4-119ccf40c573-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 8, 'y': 2.743}, 'id': 'call_6yMU2WsS4Bqgi1WxFHxtfJRc'}, {'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'call_GAL3dQiKFF9XEV0RrRLPTvVp'}]),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='call_6yMU2WsS4Bqgi1WxFHxtfJRc'),\n",
" ToolMessage(content='-900.8841', tool_call_id='call_GAL3dQiKFF9XEV0RrRLPTvVp'),\n",
" AIMessage(content='The result of \\\\(3 + 5^{2.743}\\\\) is approximately 300.04, and the result of \\\\(17.24 - 918.1241\\\\) is approximately -900.88.', response_metadata={'token_usage': {'completion_tokens': 44, 'prompt_tokens': 251, 'total_tokens': 295}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-d1161669-ed09-4b18-94bd-6d8530df5aa8-0')]}"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -147,17 +146,17 @@
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\", additional_kwargs={}, response_metadata={}),\n",
" AIMessage(content=[{'text': \"I'll solve these calculations for you.\\n\\nFor the first part, I need to calculate 3 plus 5 raised to the power of 2.743.\\n\\nLet me break this down:\\n1) First, I'll calculate 5 raised to the power of 2.743\\n2) Then add 3 to the result\", 'type': 'text'}, {'id': 'toolu_01L1mXysBQtpPLQ2AZTaCGmE', 'input': {'x': 5, 'y': 2.743}, 'name': 'exponentiate', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01HCbDmuzdg9ATMyKbnecbEE', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 563, 'output_tokens': 146, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--9f6469fb-bcbb-4c1c-9eec-79f6979c38e6-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 5, 'y': 2.743}, 'id': 'toolu_01L1mXysBQtpPLQ2AZTaCGmE', 'type': 'tool_call'}], usage_metadata={'input_tokens': 563, 'output_tokens': 146, 'total_tokens': 709, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),\n",
" ToolMessage(content='82.65606421491815', tool_call_id='toolu_01L1mXysBQtpPLQ2AZTaCGmE'),\n",
" AIMessage(content=[{'text': \"Now I'll add 3 to this result:\", 'type': 'text'}, {'id': 'toolu_01NARC83e9obV35mZ6jYzBiN', 'input': {'x': 3, 'y': 82.65606421491815}, 'name': 'add', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01ELwyCtVLeGC685PUFqmdz2', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 727, 'output_tokens': 87, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--d5af3d7c-e8b7-4cc2-997a-ad2dafd08751-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 82.65606421491815}, 'id': 'toolu_01NARC83e9obV35mZ6jYzBiN', 'type': 'tool_call'}], usage_metadata={'input_tokens': 727, 'output_tokens': 87, 'total_tokens': 814, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),\n",
" ToolMessage(content='85.65606421491815', tool_call_id='toolu_01NARC83e9obV35mZ6jYzBiN'),\n",
" AIMessage(content=[{'text': \"For the second part, you asked for 17.24 - 918.1241. I don't have a subtraction function available, but I can rewrite this as adding a negative number: 17.24 + (-918.1241)\", 'type': 'text'}, {'id': 'toolu_01Q6fLcZkBWZpMPCZ55WXR3N', 'input': {'x': 17.24, 'y': -918.1241}, 'name': 'add', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01WkmDwUxWjjaKGnTtdLGJnN', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 832, 'output_tokens': 130, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--39a6fbda-4c81-47a6-b361-524bd4ee5823-0', tool_calls=[{'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'toolu_01Q6fLcZkBWZpMPCZ55WXR3N', 'type': 'tool_call'}], usage_metadata={'input_tokens': 832, 'output_tokens': 130, 'total_tokens': 962, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),\n",
" ToolMessage(content='-900.8841', tool_call_id='toolu_01Q6fLcZkBWZpMPCZ55WXR3N'),\n",
" AIMessage(content='So, the answers are:\\n1) 3 plus 5 raised to the 2.743 = 85.65606421491815\\n2) 17.24 - 918.1241 = -900.8841', additional_kwargs={}, response_metadata={'id': 'msg_015Yoc62CvdJbANGFouiQ6AQ', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 978, 'output_tokens': 58, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--174c0882-6180-47ea-8f63-d7b747302327-0', usage_metadata={'input_tokens': 978, 'output_tokens': 58, 'total_tokens': 1036, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}})]}"
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"),\n",
" AIMessage(content=[{'text': \"Okay, let's break this down into two parts:\", 'type': 'text'}, {'id': 'toolu_01DEhqcXkXTtzJAiZ7uMBeDC', 'input': {'x': 3, 'y': 5}, 'name': 'add', 'type': 'tool_use'}], response_metadata={'id': 'msg_01AkLGH8sxMHaH15yewmjwkF', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 450, 'output_tokens': 81}}, id='run-f35bfae8-8ded-4f8a-831b-0940d6ad16b6-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 5}, 'id': 'toolu_01DEhqcXkXTtzJAiZ7uMBeDC'}]),\n",
" ToolMessage(content='8.0', tool_call_id='toolu_01DEhqcXkXTtzJAiZ7uMBeDC'),\n",
" AIMessage(content=[{'id': 'toolu_013DyMLrvnrto33peAKMGMr1', 'input': {'x': 8.0, 'y': 2.743}, 'name': 'exponentiate', 'type': 'tool_use'}], response_metadata={'id': 'msg_015Fmp8aztwYcce2JDAFfce3', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 545, 'output_tokens': 75}}, id='run-48aaeeeb-a1e5-48fd-a57a-6c3da2907b47-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 8.0, 'y': 2.743}, 'id': 'toolu_013DyMLrvnrto33peAKMGMr1'}]),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='toolu_013DyMLrvnrto33peAKMGMr1'),\n",
" AIMessage(content=[{'text': 'So 3 plus 5 raised to the 2.743 power is 300.04.\\n\\nFor the second part:', 'type': 'text'}, {'id': 'toolu_01UTmMrGTmLpPrPCF1rShN46', 'input': {'x': 17.24, 'y': -918.1241}, 'name': 'add', 'type': 'tool_use'}], response_metadata={'id': 'msg_015TkhfRBENPib2RWAxkieH6', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 638, 'output_tokens': 105}}, id='run-45fb62e3-d102-4159-881d-241c5dbadeed-0', tool_calls=[{'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'toolu_01UTmMrGTmLpPrPCF1rShN46'}]),\n",
" ToolMessage(content='-900.8841', tool_call_id='toolu_01UTmMrGTmLpPrPCF1rShN46'),\n",
" AIMessage(content='Therefore, 17.24 - 918.1241 = -900.8841', response_metadata={'id': 'msg_01LgKnRuUcSyADCpxv9tPoYD', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 759, 'output_tokens': 24}}, id='run-1008254e-ccd1-497c-8312-9550dd77bd08-0')]}"
]
},
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -178,7 +177,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -192,7 +191,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -227,7 +227,7 @@
"outputs": [],
"source": [
"conversation_description = f\"\"\"Here is the topic of conversation: {topic}\n",
"The participants are: {\", \".join(names.keys())}\"\"\"\n",
"The participants are: {', '.join(names.keys())}\"\"\"\n",
"\n",
"agent_descriptor_system_message = SystemMessage(\n",
" content=\"You can add detail to the description of the conversation participant.\"\n",
@@ -396,7 +396,7 @@
" You are the moderator.\n",
" Please make the topic more specific.\n",
" Please reply with the specified quest in {word_limit} words or less. \n",
" Speak directly to the participants: {(*names,)}.\n",
" Speak directly to the participants: {*names,}.\n",
" Do not add anything else.\"\"\"\n",
" ),\n",
"]\n",


@@ -1,9 +1,9 @@
# We build the docs in these stages:
# 1. Install vercel and python dependencies
# 2. Copy files from "source dir" to "intermediate dir"
# 2. Generate files like model feat table, etc in "intermediate dir"
# 3. Copy files to their right spots (e.g. langserve readme) in "intermediate dir"
# 4. Build the docs from "intermediate dir" to "output dir"
# we build the docs in these stages:
# 1. install vercel and python dependencies
# 2. copy files from "source dir" to "intermediate dir"
# 2. generate files like model feat table, etc in "intermediate dir"
# 3. copy files to their right spots (e.g. langserve readme) in "intermediate dir"
# 4. build the docs from "intermediate dir" to "output dir"
SOURCE_DIR = docs/
INTERMEDIATE_DIR = build/intermediate/docs
@@ -18,45 +18,32 @@ PORT ?= 3001
clean:
rm -rf build
clean-cache:
rm -rf build .venv/deps_installed
install-vercel-deps:
yum -y -q update
yum -y -q install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y
install-py-deps:
@echo "📦 Installing Python dependencies..."
@if [ ! -d .venv ]; then python3 -m venv .venv; fi
@if [ ! -f .venv/deps_installed ]; then \
$(PYTHON) -m pip install -q --upgrade pip --disable-pip-version-check; \
$(PYTHON) -m pip install -q --upgrade uv; \
$(PYTHON) -m uv pip install -q --pre -r vercel_requirements.txt; \
$(PYTHON) -m uv pip install -q --pre $$($(PYTHON) scripts/partner_deps_list.py) --overrides vercel_overrides.txt; \
touch .venv/deps_installed; \
fi
@echo "✅ Dependencies installed"
python3 -m venv .venv
$(PYTHON) -m pip install -q --upgrade pip
$(PYTHON) -m pip install -q --upgrade uv
$(PYTHON) -m uv pip install -q --pre -r vercel_requirements.txt
$(PYTHON) -m uv pip install -q --pre $$($(PYTHON) scripts/partner_deps_list.py) --overrides vercel_overrides.txt
generate-files:
@echo "📄 Generating documentation files..."
mkdir -p $(INTERMEDIATE_DIR)
cp -rp $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
@if [ ! -f build/langserve_readme_cache.md ] || [ $$(find build/langserve_readme_cache.md -mtime +1 -print) ]; then \
echo "🌐 Downloading LangServe README..."; \
curl -s https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md | sed 's/<=/\&lt;=/g' > build/langserve_readme_cache.md; \
fi
cp build/langserve_readme_cache.md $(INTERMEDIATE_DIR)/langserve.md
$(PYTHON) scripts/tool_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/kv_store_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR)
curl https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md | sed 's/<=/\&lt;=/g' > $(INTERMEDIATE_DIR)/langserve.md
cp ../SECURITY.md $(INTERMEDIATE_DIR)/security.md
@echo "🔧 Generating feature tables and processing links..."
$(PYTHON) scripts/tool_feat_table.py $(INTERMEDIATE_DIR) & \
$(PYTHON) scripts/kv_store_feat_table.py $(INTERMEDIATE_DIR) & \
$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR) & \
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/ & \
wait
@echo "✅ Files generated"
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/
copy-infra:
@echo "📂 Copying infrastructure files..."
mkdir -p $(OUTPUT_NEW_DIR)
cp -r src $(OUTPUT_NEW_DIR)
cp vercel.json $(OUTPUT_NEW_DIR)
@@ -68,22 +55,15 @@ copy-infra:
cp -r static $(OUTPUT_NEW_DIR)
cp -r ../libs/cli/langchain_cli/integration_template $(OUTPUT_NEW_DIR)/src/theme
cp yarn.lock $(OUTPUT_NEW_DIR)
@echo "✅ Infrastructure files copied"
render:
@echo "📓 Converting notebooks (this may take a while)..."
$(PYTHON) scripts/notebook_convert.py $(INTERMEDIATE_DIR) $(OUTPUT_NEW_DOCS_DIR)
@echo "✅ Notebooks converted"
md-sync:
@echo "📝 Syncing markdown files..."
rsync -avmq --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --include="*/_category_.yml" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
@echo "✅ Markdown files synced"
append-related:
@echo "🔗 Appending related links..."
$(PYTHON) scripts/append_related_links.py $(OUTPUT_NEW_DOCS_DIR)
@echo "✅ Related links appended"
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)
@@ -91,10 +71,6 @@ generate-references:
update-md: generate-files md-sync
build: install-py-deps generate-files copy-infra render md-sync append-related
@echo ""
@echo "🎉 Documentation build complete!"
@echo "📖 To view locally, run: cd docs && make start"
@echo ""
vercel-build: install-vercel-deps build generate-references
rm -rf docs
@@ -108,9 +84,4 @@ vercel-build: install-vercel-deps build generate-references
NODE_OPTIONS="--max-old-space-size=5000" yarn run docusaurus build
start:
@echo "🚀 Starting documentation server on port $(PORT)..."
@echo "📖 Installing Node.js dependencies..."
cd $(OUTPUT_NEW_DIR) && yarn install --silent
@echo "🌐 Starting server at http://localhost:$(PORT)"
@echo "Press Ctrl+C to stop the server"
cd $(OUTPUT_NEW_DIR) && yarn start --port=$(PORT)
cd $(OUTPUT_NEW_DIR) && yarn && yarn start --port=$(PORT)


@@ -1,154 +1,3 @@
# LangChain Documentation
For more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/how_to/documentation).
## Structure
The primary documentation is located in the `docs/` directory. This directory contains
both the source files for the main documentation as well as the API reference doc
build process.
### API Reference
API reference documentation is located in `docs/api_reference/` and is generated from
the codebase using Sphinx.
The API reference has additional build steps that differ from those of the main documentation.
#### Deployment Process
Currently, the build process roughly follows these steps:
1. Using the `api_doc_build.yml` GitHub workflow, the API reference docs are
[built](#build-technical-details) and copied to the `langchain-api-docs-html`
repository. This workflow is triggered either (1) on a routine cron schedule or (2)
manually.
In short, the workflow extracts all `langchain-ai`-org-owned repos defined in
`langchain/libs/packages.yml`, clones them locally (in the workflow runner's file
system), and then builds the API reference RST files (using `create_api_rst.py`).
Following post-processing, the HTML files are pushed to the
`langchain-api-docs-html` repository.
2. After the HTML files are in the `langchain-api-docs-html` repository, they are **not**
automatically published to the [live docs site](https://python.langchain.com/api_reference/).
The docs site is served by Vercel. The Vercel deployment process copies the HTML
files from the `langchain-api-docs-html` repository and deploys them to the live
site. Deployments are triggered on each new commit pushed to `v0.3`.
#### Build Technical Details
The build process creates a virtual monorepo by syncing multiple repositories, then generates comprehensive API documentation:
1. **Repository Sync Phase:**
- `.github/scripts/prep_api_docs_build.py` - Clones external partner repos and organizes them into the `libs/partners/` structure to create a virtual monorepo for documentation building
2. **RST Generation Phase:**
- `docs/api_reference/create_api_rst.py` - Main script that **generates RST files** from Python source code
- Scans `libs/` directories and extracts classes/functions from each module (using `inspect`)
- Creates `.rst` files using specialized templates for different object types
- Templates in `docs/api_reference/templates/` (`pydantic.rst`, `runnable_pydantic.rst`, etc.); see the sketch after this list
3. **HTML Build Phase:**
- Sphinx-based, uses `sphinx.ext.autodoc` (auto-extracts docstrings from the codebase)
- `docs/api_reference/conf.py` (sphinx config) configures `autodoc` and other extensions
- `sphinx-build` processes the generated `.rst` files into HTML using autodoc
- `docs/api_reference/scripts/custom_formatter.py` - Post-processes the generated HTML
- Copies `reference.html` to `index.html` to create the default landing page (artifact? might not need to do this - just put everything in index directly?)
4. **Deployment:**
- `.github/workflows/api_doc_build.yml` - Workflow responsible for orchestrating the entire build and deployment process
- Built HTML files are committed and pushed to the `langchain-api-docs-html` repository
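As a rough illustration of the RST generation phase (step 2 above), this minimal sketch shows the kind of `inspect`-based member classification the script performs; `classify_members` and its string buckets are illustrative placeholders, not the actual API of `create_api_rst.py`:
```python
import inspect
import typing

def classify_members(module) -> dict[str, str]:
    """Bucket a module's public members by kind (illustrative helper).

    Mirrors the checks in create_api_rst.py: TypedDicts are detected via
    their metaclass; the kind later selects a Sphinx template such as
    pydantic.rst or runnable_pydantic.rst.
    """
    members: dict[str, str] = {}
    for name, obj in inspect.getmembers(module):
        if name.startswith("_"):  # private names are filtered out
            continue
        if inspect.isclass(obj):
            if type(obj) is typing._TypedDictMeta:  # same check the script uses
                members[name] = "TypedDict"
            else:
                members[name] = "class"
        elif inspect.isfunction(obj):
            members[name] = "function"
    return members
```
The real script additionally separates deprecated members and distinguishes Pydantic models and Runnables so that each object type is rendered with the matching template.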
#### Local Build
For local development and testing of API documentation, use the Makefile targets in the repository root:
```bash
# Full build
make api_docs_build
```
Like the CI process, this target:
- Installs the CLI package in editable mode
- Generates RST files for all packages using `create_api_rst.py`
- Builds HTML documentation with Sphinx
- Post-processes the HTML with `custom_formatter.py`
- Opens the built documentation (`reference.html`) in your browser
**Quick Preview:**
```bash
make api_docs_quick_preview API_PKG=openai
```
- Generates RST files for only the specified package (default: `text-splitters`)
- Builds and post-processes HTML documentation
- Opens the preview in your browser
Both targets automatically clean previous builds and run the complete build pipeline locally, mirroring the CI process while allowing faster iteration during development.
#### Documentation Standards
**Docstring Format:**
The API reference uses **Google-style docstrings** with reStructuredText markup. Sphinx processes these through the `sphinx.ext.napoleon` extension to generate documentation.
**Required format:**
```python
def example_function(param1: str, param2: int = 5) -> bool:
    """Brief description of the function.

    Longer description can go here. Use reStructuredText syntax for
    rich formatting like **bold** and *italic*.

    TODO: code: figure out what works?

    Args:
        param1: Description of the first parameter.
        param2: Description of the second parameter with default value.

    Returns:
        Description of the return value.

    Raises:
        ValueError: When param1 is empty.
        TypeError: When param2 is not an integer.

    .. warning::
        This function is experimental and may change.
    """
```
**Special Markers:**
- `:private:` in docstrings excludes members from documentation (see the hook sketch after this list)
- `.. warning::` adds warning admonitions
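Under the hood, the `:private:` marker is enforced through Sphinx's `autodoc-skip-member` event. A minimal version of the hook, mirroring `skip_private_members` in `docs/api_reference/conf.py` (the `getattr` guard is a small defensive addition):
```python
def skip_private_members(app, what, name, obj, skip, options):
    """Tell autodoc to skip members whose docstring contains `:private:`."""
    if hasattr(obj, "__doc__") and obj.__doc__ and ":private:" in obj.__doc__:
        return True
    if name == "__init__" and getattr(obj, "__objclass__", None) is object:
        # don't document the default object.__init__
        return True
    return None  # defer to autodoc's default decision

def setup(app):
    # conf.py registers this alongside its custom directives.
    app.connect("autodoc-skip-member", skip_private_members)
```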
#### Site Styling and Assets
**Theme and Styling:**
- Uses [**PyData Sphinx Theme**](https://pydata-sphinx-theme.readthedocs.io/en/stable/index.html) (`pydata_sphinx_theme`)
- Custom CSS in `docs/api_reference/_static/css/custom.css` with LangChain-specific styling:
- Color palette
- Inter font family
- Custom navbar height and sidebar formatting
- Deprecated/beta feature styling
**Static Assets:**
- Logos: `_static/wordmark-api.svg` (light) and `_static/wordmark-api-dark.svg` (dark mode)
- Favicon: `_static/img/brand/favicon.png`
- Custom CSS: `_static/css/custom.css`
**Post-Processing:**
- `scripts/custom_formatter.py` cleans up generated HTML:
- Shortens TOC entries from `ClassName.method()` to `method()`
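A minimal sketch of that TOC-shortening pass, assuming a BeautifulSoup walk over the built HTML (`beautifulsoup4` is already a docs build dependency; the real `custom_formatter.py` may structure this differently):
```python
from pathlib import Path

from bs4 import BeautifulSoup

def shorten_toc_entries(html_dir: str) -> None:
    """Rewrite TOC links like `ClassName.method()` to just `method()`."""
    for page in Path(html_dir).rglob("*.html"):
        soup = BeautifulSoup(page.read_text(encoding="utf-8"), "html.parser")
        changed = False
        # Assumption: the PyData theme renders the page TOC inside <nav> tags.
        for link in soup.select("nav a"):
            text = link.get_text(strip=True)
            if text.endswith("()") and "." in text:
                link.string = text.rsplit(".", 1)[-1]  # keep only "method()"
                changed = True
        if changed:
            page.write_text(str(soup), encoding="utf-8")
```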
**Analytics and Integration:**
- GitHub integration (source links, edit buttons)
- Example backlinking through custom `ExampleLinksDirective`
For more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/how_to/documentation).


@@ -50,7 +50,7 @@ class GalleryGridDirective(SphinxDirective):
individual cards + ["image", "header", "content", "title"].
Danger:
This directive can only be used in the context of a MyST documentation page as
This directive can only be used in the context of a Myst documentation page as
the templates use Markdown flavored formatting.
"""
@@ -108,7 +108,7 @@ class GalleryGridDirective(SphinxDirective):
# Parse the template with Sphinx Design to create an output container
# Prep the options for the template grid
class_ = "gallery-directive" + f" {self.options.get('class-container', '')}"
class_ = "gallery-directive" + f' {self.options.get("class-container", "")}'
options = {"gutter": 2, "class-container": class_}
options_str = "\n".join(f":{k}: {v}" for k, v in options.items())


@@ -394,21 +394,3 @@ p {
font-size: 0.9rem;
margin-bottom: 0.5rem;
}
/* Deprecation announcement banner styling */
.bd-header-announcement {
background-color: #790000 !important;
color: white !important;
font-weight: 600;
padding: 0.75rem 1rem;
text-align: center;
}
.bd-header-announcement a {
color: white !important;
text-decoration: underline !important;
}
.bd-header-announcement a:hover {
color: #f0f0f0 !important;
}


@@ -1,5 +1,7 @@
"""Configuration file for the Sphinx documentation builder."""
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
@@ -18,18 +20,16 @@ from docutils.parsers.rst.directives.admonitions import BaseAdmonition
from docutils.statemachine import StringList
from sphinx.util.docutils import SphinxDirective
# Add paths to Python import system so Sphinx can import LangChain modules
# This allows autodoc to introspect and document the actual code
_DIR = Path(__file__).parent.absolute()
sys.path.insert(0, os.path.abspath(".")) # Current directory
sys.path.insert(0, os.path.abspath("../../libs/langchain")) # LangChain main package
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
_DIR = Path(__file__).parent.absolute()
sys.path.insert(0, os.path.abspath("."))
sys.path.insert(0, os.path.abspath("../../libs/langchain"))
# Load package metadata from pyproject.toml (for version info, etc.)
with (_DIR.parents[1] / "libs" / "langchain" / "pyproject.toml").open("r") as f:
data = toml.load(f)
# Load mapping of classes to example notebooks for backlinking
# This file is generated by scripts that scan our tutorial/example notebooks
with (_DIR / "guide_imports.json").open("r") as f:
imported_classes = json.load(f)
@@ -86,7 +86,6 @@ class Beta(BaseAdmonition):
def setup(app):
"""Register custom directives and hooks with Sphinx."""
app.add_directive("example_links", ExampleLinksDirective)
app.add_directive("beta", Beta)
app.connect("autodoc-skip-member", skip_private_members)
@@ -98,7 +97,7 @@ def skip_private_members(app, what, name, obj, skip, options):
if hasattr(obj, "__doc__") and obj.__doc__ and ":private:" in obj.__doc__:
return True
if name == "__init__" and obj.__objclass__ is object:
# don't document default init
# dont document default init
return True
return None
@@ -126,7 +125,7 @@ extensions = [
"sphinx.ext.viewcode",
"sphinxcontrib.autodoc_pydantic",
"IPython.sphinxext.ipython_console_highlighting",
"myst_parser", # For generated index.md and reference.md
"myst_parser",
"_extensions.gallery_directive",
"sphinx_design",
"sphinx_copybutton",
@@ -218,7 +217,7 @@ html_theme_options = {
# # Use :html_theme.sidebar_secondary.remove: for file-wide removal
# "secondary_sidebar_items": {"**": ["page-toc", "sourcelink"]},
# "show_version_warning_banner": True,
"announcement": "⚠️ THESE DOCS ARE OUTDATED. <a href='https://docs.langchain.com/oss/python/langchain/overview' target='_blank' style='color: white; text-decoration: underline;'>Visit the new v1.0 docs</a> and new <a href='https://reference.langchain.com/python' target='_blank' style='color: white; text-decoration: underline;'>reference docs</a>",
# "announcement": None,
"icon_links": [
{
# Label for this link
@@ -259,22 +258,19 @@ html_static_path = ["_static"]
html_css_files = ["css/custom.css"]
html_use_index = False
# Only used on the generated index.md and reference.md files
myst_enable_extensions = ["colon_fence"]
# generate autosummary even if no references
autosummary_generate = True
# Don't fail on autosummary import warnings
autosummary_ignore_module_all = False
html_copy_source = False
html_show_sourcelink = False
googleanalytics_id = "G-9B66JQQH2F"
# Set canonical URL from the Read the Docs Domain
html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "")
googleanalytics_id = "G-9B66JQQH2F"
# Tell Jinja2 templates the build is running on Read the Docs
if os.environ.get("READTHEDOCS", "") == "True":
html_context["READTHEDOCS"] = True


@@ -1,41 +1,4 @@
"""Auto-generate API reference documentation (RST files) for LangChain packages.
* Automatically discovers all packages in `libs/` and `libs/partners/`
* For each package, recursively walks the filesystem to:
* Load Python modules using importlib
* Extract classes and functions using Python's inspect module
* Classify objects by type (Pydantic models, Runnables, TypedDicts, etc.)
* Filter out private members (names starting with '_') and deprecated items
* Creates structured RST files with:
* Module-level documentation pages with autosummary tables
* Different Sphinx templates based on object type (see templates/ directory)
* Proper cross-references and navigation structure
* Separation of current vs deprecated APIs
* Generates a directory tree like:
```
docs/api_reference/
├── index.md # Main landing page with package gallery
├── reference.md # Package overview and navigation
├── core/ # langchain-core documentation
│ ├── index.rst
│ ├── callbacks.rst
│ └── ...
├── langchain/ # langchain documentation
│ ├── index.rst
│ └── ...
└── partners/ # Integration packages
├── openai/
├── anthropic/
└── ...
```
## Key Features
* Respects privacy markers:
* Modules with `:private:` in docstring are excluded entirely
* Objects with `:private:` in docstring are filtered out
* Names starting with '_' are treated as private
"""
"""Script for auto-generating api_reference.rst."""
import importlib
import inspect
@@ -134,7 +97,7 @@ def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
if type(type_) is typing_extensions._TypedDictMeta: # type: ignore
kind: ClassKind = "TypedDict"
elif type(type_) is typing._TypedDictMeta: # type: ignore
kind = "TypedDict"
kind: ClassKind = "TypedDict"
elif (
issubclass(type_, Runnable)
and issubclass(type_, BaseModel)
@@ -214,20 +177,19 @@ def _load_package_modules(
Traversal based on the file system makes it easy to determine which
of the modules/packages are part of the package vs. 3rd party or built-in.
Args:
package_directory: Path to the package directory.
submodule: Optional name of submodule to load.
Parameters:
package_directory (Union[str, Path]): Path to the package directory.
submodule (Optional[str]): Optional name of submodule to load.
Returns:
A dictionary where keys are module names and values are `ModuleMembers`
objects.
Dict[str, ModuleMembers]: A dictionary where keys are module names and values are ModuleMembers objects.
"""
package_path = (
Path(package_directory)
if isinstance(package_directory, str)
else package_directory
)
modules_by_namespace: Dict[str, ModuleMembers] = {}
modules_by_namespace = {}
# Get the high level package name
package_name = package_path.name
@@ -237,16 +199,9 @@ def _load_package_modules(
package_path = package_path / submodule
for file_path in package_path.rglob("*.py"):
# Skip private modules
if file_path.name.startswith("_"):
continue
# Skip integration_template and project_template directories (for libs/cli)
if "integration_template" in file_path.parts:
continue
if "project_template" in file_path.parts:
continue
relative_module_name = file_path.relative_to(package_path)
# Skip if any module part starts with an underscore
@@ -254,13 +209,8 @@ def _load_package_modules(
continue
# Get the full namespace of the module
# Example: langchain_core/schema/output_parsers.py ->
# langchain_core.schema.output_parsers
namespace = str(relative_module_name).replace(".py", "").replace("/", ".")
# Keep only the top level namespace
# Example: langchain_core.schema.output_parsers ->
# langchain_core
top_namespace = namespace.split(".")[0]
try:
@@ -297,16 +247,16 @@ def _construct_doc(
members_by_namespace: Dict[str, ModuleMembers],
package_version: str,
) -> List[typing.Tuple[str, str]]:
"""Construct the contents of the `reference.rst` for the given package.
"""Construct the contents of the reference.rst file for the given package.
Args:
package_namespace: The package top level namespace
members_by_namespace: The members of the package dict organized by top level.
Module contains a list of classes and functions inside of the top level
namespace.
members_by_namespace: The members of the package, dict organized by top level
module contains a list of classes and functions
inside of the top level namespace.
Returns:
The string contents of the reference.rst file.
The contents of the reference.rst file.
"""
docs = []
index_doc = f"""\
@@ -317,7 +267,7 @@ def _construct_doc(
.. _{package_namespace}:
======================================
{package_namespace.replace("_", "-")}: {package_version}
{package_namespace.replace('_', '-')}: {package_version}
======================================
.. automodule:: {package_namespace}
@@ -327,7 +277,7 @@ def _construct_doc(
.. toctree::
:hidden:
:maxdepth: 2
"""
index_autosummary = """
"""
@@ -375,7 +325,7 @@ def _construct_doc(
index_autosummary += f"""
:ref:`{package_namespace}_{module}`
{"^" * (len(package_namespace) + len(module) + 8)}
{'^' * (len(package_namespace) + len(module) + 8)}
"""
if classes:
@@ -409,12 +359,12 @@ def _construct_doc(
module_doc += f"""\
:template: {template}
{class_["qualified_name"]}
"""
index_autosummary += f"""
{class_["qualified_name"]}
{class_['qualified_name']}
"""
if functions:
@@ -477,7 +427,7 @@ def _construct_doc(
"""
index_autosummary += f"""
{class_["qualified_name"]}
{class_['qualified_name']}
"""
if deprecated_functions:
@@ -509,13 +459,10 @@ def _construct_doc(
def _build_rst_file(package_name: str = "langchain") -> None:
"""Create a rst file for a given package.
"""Create a rst file for building of documentation.
Args:
package_name: Name of the package to create the rst file for.
Returns:
The rst file is created in the same directory as this script.
package_name: Can be either "langchain" or "core" or "experimental".
"""
package_dir = _package_dir(package_name)
package_members = _load_package_modules(package_dir)
@@ -534,7 +481,7 @@ def _package_namespace(package_name: str) -> str:
"""Returns the package name used.
Args:
package_name: Can be either "langchain" or "core"
package_name: Can be either "langchain" or "core" or "experimental".
Returns:
modified package_name: Can be either "langchain" or "langchain_{package_name}"
@@ -547,11 +494,16 @@ def _package_namespace(package_name: str) -> str:
def _package_dir(package_name: str = "langchain") -> Path:
"""Return the path to the directory containing the documentation.
Attempts to find the package in `libs/` first, then `libs/partners/`.
"""
if (ROOT_DIR / "libs" / package_name).exists():
"""Return the path to the directory containing the documentation."""
if package_name in (
"langchain",
"experimental",
"community",
"core",
"cli",
"text-splitters",
"standard-tests",
):
return ROOT_DIR / "libs" / package_name / _package_namespace(package_name)
else:
return (
@@ -564,7 +516,7 @@ def _package_dir(package_name: str = "langchain") -> Path:
def _get_package_version(package_dir: Path) -> str:
"""Return the version of the package by reading the `pyproject.toml`."""
"""Return the version of the package."""
try:
with open(package_dir.parent / "pyproject.toml", "r") as f:
pyproject = toml.load(f)
@@ -590,42 +542,22 @@ def _out_file_path(package_name: str) -> Path:
def _build_index(dirs: List[str]) -> None:
"""Build the index.md file for the API reference.
Args:
dirs: List of package directories to include in the index.
Returns:
The index.md file is created in the same directory as this script.
"""
custom_names = {
"aws": "AWS",
"ai21": "AI21",
"ibm": "IBM",
}
ordered = [
"core",
"langchain",
"text-splitters",
"community",
"standard-tests",
]
ordered = ["core", "langchain", "text-splitters", "community", "experimental"]
main_ = [dir_ for dir_ in ordered if dir_ in dirs]
integrations = sorted(dir_ for dir_ in dirs if dir_ not in main_)
doc = """# LangChain Python API Reference
Welcome to the LangChain v0.3 Python API reference. This is a reference for all
`langchain-x` packages.
Welcome to the LangChain Python API reference. This is a reference for all
`langchain-x` packages.
```{danger}
These pages refer to the the v0.3 versions of LangChain packages and integrations. To
visit the documentation for the latest versions of LangChain, visit [https://docs.langchain.com](https://docs.langchain.com)
and [https://reference.langchain.com/python/](https://reference.langchain.com/python/) (for references.)
For the legacy API reference (<v0.3) hosted on ReadTheDocs see [https://api.python.langchain.com/](https://api.python.langchain.com/).
```
For user guides see [https://python.langchain.com](https://python.langchain.com).
For the legacy API reference hosted on ReadTheDocs see [https://api.python.langchain.com/](https://api.python.langchain.com/).
"""
if main_:
@@ -660,12 +592,7 @@ For the legacy API reference (<v0.3) hosted on ReadTheDocs see [https://api.pyth
if integrations:
integration_headers = [
" ".join(
custom_names.get(
x,
x.title().replace("db", "DB")
if dir_ == "langchain_v1"
else x.title().replace("ai", "AI").replace("db", "DB"),
)
custom_names.get(x, x.title().replace("ai", "AI").replace("db", "DB"))
for x in dir_.split("-")
)
for dir_ in integrations
@@ -711,14 +638,9 @@ See the full list of integrations in the Section Navigation.
{integration_tree}
```
"""
# Write the reference.md file
with open(HERE / "reference.md", "w") as f:
f.write(doc)
# Write a dummy index.md file that points to reference.md
# Sphinx requires an index file to exist in each doc directory
# TODO: investigate why we don't just put everything in index.md directly?
# if it works it works I guess
dummy_index = """\
# API reference
@@ -734,30 +656,33 @@ Reference<reference>
def main(dirs: Optional[list] = None) -> None:
"""Generate the `api_reference.rst` file for each package.
If dirs is None, generate for all packages in `libs/` and `libs/partners/`.
Otherwise generate only for the specified package(s).
"""
"""Generate the api_reference.rst file for each package."""
print("Starting to build API reference files.")
if not dirs:
dirs = [
p.parent.name
for p in (ROOT_DIR / "libs").rglob("pyproject.toml")
# Exclude packages that are not directly under libs/ or libs/partners/
if p.parent.parent.name in ("libs", "partners")
dir_
for dir_ in os.listdir(ROOT_DIR / "libs")
if dir_ not in ("cli", "partners", "packages.yml")
]
for dir_ in sorted(dirs):
# Skip any hidden directories prefixed with a dot
dirs += [
dir_
for dir_ in os.listdir(ROOT_DIR / "libs" / "partners")
if os.path.isdir(ROOT_DIR / "libs" / "partners" / dir_)
and "pyproject.toml" in os.listdir(ROOT_DIR / "libs" / "partners" / dir_)
]
for dir_ in dirs:
# Skip any hidden directories
# Some of these could be present by mistake in the code base
# (e.g., .pytest_cache from running tests from the wrong location)
# e.g., .pytest_cache from running tests from the wrong location.
if dir_.startswith("."):
print("Skipping dir:", dir_)
continue
else:
print("Building:", dir_)
print("Building package:", dir_)
_build_rst_file(package_name=dir_)
_build_index(sorted(dirs))
_build_index(dirs)
print("API reference files built.")
if __name__ == "__main__":

File diff suppressed because one or more lines are too long


@@ -1,12 +1,12 @@
autodoc_pydantic>=2,<3
sphinx>=8,<9
myst-parser>=3
sphinx-autobuild>=2024
pydata-sphinx-theme>=0.15
toml>=0.10.2
myst-nb>=1.1.1
pyyaml
sphinx-design
sphinx-copybutton
sphinxcontrib-googleanalytics
pydata-sphinx-theme>=0.15
myst-parser>=3
myst-nb>=1.1.1
toml>=0.10.2
pyyaml
beautifulsoup4
sphinxcontrib-googleanalytics


@@ -1,10 +1,3 @@
"""Post-process generated HTML files to clean up table-of-contents headers.
Runs after Sphinx generates the API reference HTML. It finds TOC entries like
"ClassName.method_name()" and shortens them to just "method_name()" for better
readability in the sidebar navigation.
"""
import sys
from glob import glob
from pathlib import Path








File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@

Some files were not shown because too many files have changed in this diff.