Compare commits


1 Commit

Author: Harrison Chase
SHA1: 34d3d5d807
Message: add xml agent
Date: 2023-07-29 21:09:26 -07:00
6321 changed files with 508049 additions and 812195 deletions

View File

@@ -5,33 +5,25 @@ This project includes a [dev container](https://containers.dev/), which lets you
You can use the dev container configuration in this folder to build and run the app without needing to install any of its tools locally! You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).
## GitHub Codespaces
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/hwchase17/langchain)
You may use the button above, or follow these steps to open this repo in a Codespace:
1. Click the **Code** drop-down menu at the top of <https://github.com/langchain-ai/langchain>.
1. Click the **Code** drop-down menu at the top of https://github.com/hwchase17/langchain.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master**.
1. Click **Create codespace on master** .
For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).
## VS Code Dev Containers
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
> [!NOTE]
> If you click the link above you will open the main repo (`langchain-ai/langchain`) and *not* your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
Note: If you click this link you will open the main repo and not your local cloned repo, you can use this link and replace with your username and cloned repo name:
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>
```txt
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<YOUR_USERNAME>/<YOUR_CLONED_REPO_NAME>
```
Then you will have a local cloned repo where you can contribute and then create pull requests.
If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
If you already have VS Code and Docker installed, you can use the button above to get started. This will use VSCode to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
Alternatively you can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:
You can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:
1. If this is your first time using a development container, please ensure your system meets the pre-reqs (i.e. have Docker installed) in the [getting started steps](https://aka.ms/vscode-remote/containers/getting-started).
@@ -45,5 +37,5 @@ You can learn more in the [Dev Containers documentation](https://code.visualstud
## Tips and tricks
- If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
- If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.
* If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.

View File

@@ -1,58 +1,36 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-docker-compose
{
// Name for the dev container
"name": "langchain",
// Point to a Docker Compose file
"dockerComposeFile": "./docker-compose.yaml",
// Required when using Docker Compose. The name of the service to connect to once running
"service": "langchain",
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/langchain",
"mounts": [
"source=langchain-workspaces,target=/workspaces/langchain,type=volume"
],
// Prevent the container from shutting down
"overrideCommand": true,
// Features to add to the dev container. More info: https://containers.dev/features
"features": {
"ghcr.io/devcontainers/features/git:1": {},
"ghcr.io/devcontainers/features/github-cli:1": {}
},
"containerEnv": {
"UV_LINK_MODE": "copy"
},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Run commands after the container is created
"postCreateCommand": "uv sync && echo 'LangChain (Python) dev environment ready!'",
// Configure tool-specific properties.
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-python.debugpy",
"ms-python.mypy-type-checker",
"ms-python.isort",
"unifiedjs.vscode-mdx",
"davidanson.vscode-markdownlint",
"ms-toolsai.jupyter",
"GitHub.copilot",
"GitHub.copilot-chat"
],
"settings": {
"python.defaultInterpreterPath": ".venv/bin/python",
"python.formatting.provider": "none",
"[python]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports": true
}
}
}
}
}
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
// Name for the dev container
"name": "langchain",
// Point to a Docker Compose file
"dockerComposeFile": "./docker-compose.yaml",
// Required when using Docker Compose. The name of the service to connect to once running
"service": "langchain",
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
// Prevent the container from shutting down
"overrideCommand": true
// Features to add to the dev container. More info: https://containers.dev/features
// "features": {
// "ghcr.io/devcontainers-contrib/features/poetry:2": {}
// }
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Uncomment the next line to run commands after the container is created.
// "postCreateCommand": "cat /etc/os-release",
// Configure tool-specific properties.
// "customizations": {},
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
}

View File

@@ -4,10 +4,29 @@ services:
build:
dockerfile: libs/langchain/dev.Dockerfile
context: ..
volumes:
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/workspaces:cached
networks:
- langchain-network
- langchain-network
# environment:
# MONGO_ROOT_USERNAME: root
# MONGO_ROOT_PASSWORD: example123
# depends_on:
# - mongo
# mongo:
# image: mongo
# restart: unless-stopped
# environment:
# MONGO_INITDB_ROOT_USERNAME: root
# MONGO_INITDB_ROOT_PASSWORD: example123
# ports:
# - "27017:27017"
# networks:
# - langchain-network
networks:
langchain-network:
driver: bridge

View File

@@ -1,52 +0,0 @@
# top-most EditorConfig file
root = true
# All files
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
# Python files
[*.py]
indent_style = space
indent_size = 4
max_line_length = 88
# JSON files
[*.json]
indent_style = space
indent_size = 2
# YAML files
[*.{yml,yaml}]
indent_style = space
indent_size = 2
# Markdown files
[*.md]
indent_style = space
indent_size = 2
trim_trailing_whitespace = false
# Configuration files
[*.{toml,ini,cfg}]
indent_style = space
indent_size = 4
# Shell scripts
[*.sh]
indent_style = space
indent_size = 2
# Makefile
[Makefile]
indent_style = tab
indent_size = 4
# Jupyter notebooks
[*.ipynb]
# Jupyter may include trailing whitespace in cell
# outputs that's semantically meaningful
trim_trailing_whitespace = false

.github/CODEOWNERS vendored
View File

@@ -1,3 +0,0 @@
/.github/ @baskaryan @ccurme @eyurtsev
/libs/core/ @eyurtsev
/libs/packages.yml @ccurme

View File

@@ -1,132 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or advances of
any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
conduct@langchain.dev.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations

View File

@@ -1,6 +1,310 @@
# Contributing to LangChain
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
As an open source project in a rapidly developing field, we are extremely open
to contributions, whether they be in the form of new features, improved infra, better documentation, or bug fixes.
## 🗺️ Guidelines
### 👩‍💻 Contributing Code
To contribute to this project, please follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
Please do not try to push directly to this repo unless you are a maintainer.
Please follow the checked-in pull request template when opening pull requests. Note related issues and tag relevant
maintainers.
Pull requests cannot land without passing the formatting, linting and testing checks first. See
[Common Tasks](#-common-tasks) for how to run these checks locally.
It's essential that we maintain great documentation and testing. If you:
- Fix a bug
- Add a relevant unit or integration test when possible. These live in `tests/unit_tests` and `tests/integration_tests`.
- Make an improvement
- Update any affected example notebooks and documentation. These live in `docs`.
- Update unit and integration tests when relevant.
- Add a feature
- Add a demo notebook in `docs/modules`.
- Add unit and integration tests.
We're a small, building-oriented team. If there's something you'd like to add or change, opening a pull request is the
best way to get our attention.
### 🚩GitHub Issues
Our [issues](https://github.com/hwchase17/langchain/issues) page is kept up to date
with bugs, improvements, and feature requests.
There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help
organize issues.
If you start working on an issue, please assign it to yourself.
If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature.
If two issues are related, or blocking, please link them rather than combining them.
We will try to keep these issues as up to date as possible, though
with the rapid rate of development in this field some may get out of date.
If you notice this happening, please let us know.
### 🙋Getting Help
Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting set up, please
contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is
smooth for future contributors.
In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase.
If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
we do not want these to get in the way of getting good code into the codebase.
## 🚀 Quick Start
> **Note:** You can run this repository locally (which is described below) or in a [development container](https://containers.dev/) (which is described in the [.devcontainer folder](https://github.com/hwchase17/langchain/tree/master/.devcontainer)).
This project uses [Poetry](https://python-poetry.org/) as a dependency manager. Check out Poetry's [documentation on how to install it](https://python-poetry.org/docs/#installation) on your system before proceeding.
❗Note: If you use `Conda` or `Pyenv` as your environment / package manager, avoid dependency conflicts by doing the following first:
1. *Before installing Poetry*, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`)
2. Install Poetry (see above)
3. Tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
4. Continue with the following steps.
There are two separate projects in this repository:
- `langchain`: core langchain code, abstractions, and use cases
- `langchain.experimental`: more experimental code
Each of these has its OWN development environment.
In order to run any of the commands below, please move into their respective directories.
For example, to contribute to `langchain` run `cd libs/langchain` before getting started with the below.
To install requirements:
```bash
poetry install -E all
```
This will install all requirements for running the package, examples, linting, formatting, tests, and coverage. Note the `-E all` flag will install all optional dependencies necessary for integration testing.
❗Note: If you're running Poetry 1.4.1 and receive a `WheelFileValidationError` for `debugpy` during installation, you can try either downgrading to Poetry 1.4.0 or disabling "modern installation" (`poetry config installer.modern-installation false`) and re-install requirements. See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.
Now, you should be able to run the common tasks in the following section. To double check, run `make test`; all tests should pass. If they don't, you may need to `pip install` additional dependencies, such as `numexpr` and `openapi_schema_pydantic`.
## ✅ Common Tasks
Type `make` for a list of common tasks.
### Code Formatting
Formatting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/) and [isort](https://pycqa.github.io/isort/).
To run formatting for this project:
```bash
make format
```
Additionally, you can run the formatter only on the files that have been modified in your current branch as compared to the master branch using the `format_diff` command:
```bash
make format_diff
```
This is especially useful when you have made changes to a subset of the project and want to ensure your changes are properly formatted without affecting the rest of the codebase.
### Linting
Linting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/), [isort](https://pycqa.github.io/isort/), [flake8](https://flake8.pycqa.org/en/latest/), and [mypy](http://mypy-lang.org/).
To run linting for this project:
```bash
make lint
```
In addition, you can run the linter only on the files that have been modified in your current branch as compared to the master branch using the `lint_diff` command:
```bash
make lint_diff
```
This can be very helpful when you've made changes to only certain parts of the project and want to ensure your changes meet the linting standards without having to check the entire codebase.
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Spellcheck
Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
Note that `codespell` finds common typos, so it can produce false positives (flagging correctly spelled but rarely used words) and false negatives (missing genuinely misspelled words).
To check spelling for this project:
```bash
make spell_check
```
To fix spelling in place:
```bash
make spell_fix
```
If codespell is incorrectly flagging a word, you can skip spellcheck for that word by adding it to the codespell config in the `pyproject.toml` file.
```toml
[tool.codespell]
...
# Add here:
ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogyny,unsecure'
```
### Coverage
Code coverage (i.e. the amount of code that is covered by unit tests) helps identify areas of the code that are potentially more or less brittle.
To get a report of current coverage, run the following:
```bash
make coverage
```
### Working with Optional Dependencies
LangChain relies heavily on optional dependencies to keep the LangChain package lightweight.
If you're adding a new dependency to LangChain, assume that it will be an optional dependency, and
that most users won't have it installed.
Users that do not have the dependency installed should be able to **import** your code without
any side effects (no warnings, no errors, no exceptions).
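For example, a minimal sketch of the guarded-import pattern, assuming a hypothetical optional package `fancysdk` (both the package and the wrapper class are illustrative, not part of the codebase):
```python
class FancyWrapper:
    """Wrapper around the hypothetical optional `fancysdk` package."""

    def __init__(self) -> None:
        try:
            # Import lazily, inside the method, so merely importing this
            # module never fails when fancysdk isn't installed.
            import fancysdk
        except ImportError as e:
            raise ImportError(
                "Could not import fancysdk. "
                "Please install it with `pip install fancysdk`."
            ) from e
        self.client = fancysdk.Client()
```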
To introduce the dependency to the pyproject.toml file correctly, please do the following:
1. Add the dependency to the main group as an optional dependency
```bash
poetry add --optional [package_name]
```
2. Open pyproject.toml and add the dependency to the `extended_testing` extra
3. Relock the poetry file to update the extra.
```bash
poetry lock --no-update
```
4. Add a unit test that at the very least attempts to import the new code. Ideally, the unit
test makes use of lightweight fixtures to test the logic of the code.
5. Please use the `@pytest.mark.requires(package_name)` decorator for any tests that require the dependency.
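A sketch of such a test, reusing the hypothetical `fancysdk` example above (the module path is illustrative only):
```python
import pytest


@pytest.mark.requires("fancysdk")
def test_fancy_wrapper_instantiates() -> None:
    # Runs only when fancysdk is installed; at minimum this exercises the import.
    from langchain.utilities.fancy import FancyWrapper  # hypothetical path

    assert FancyWrapper() is not None
```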
### Testing
See section about optional dependencies.
#### Unit Tests
Unit tests cover modular logic that does not require calls to outside APIs.
To run unit tests:
```bash
make test
```
To run unit tests in Docker:
```bash
make docker_tests
```
If you add new logic, please add a unit test.
#### Integration Tests
Integration tests cover logic that requires making calls to outside APIs (often integration with other services).
**Warning:** Almost no tests should be integration tests.
Tests that require making network connections make it difficult for other
developers to test the code.
Instead, favor relying on the `responses` library and/or `mock.patch` to mock
requests using small fixtures.
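For instance, a unit test might stub out the network layer like this (the endpoint and helper function are hypothetical):
```python
import requests
import responses


def fetch_title(url: str) -> str:
    """Hypothetical helper that calls an external JSON API."""
    return requests.get(url, timeout=10).json()["title"]


@responses.activate
def test_fetch_title() -> None:
    # Register a canned response; no real network connection is made.
    responses.add(
        responses.GET,
        "https://api.example.com/page",
        json={"title": "Hello"},
        status=200,
    )
    assert fetch_title("https://api.example.com/page") == "Hello"
```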
To run integration tests:
```bash
make integration_tests
```
If you add support for a new external API, please add a new integration test.
### Adding a Jupyter Notebook
If you are adding a Jupyter notebook example, you'll want to install the optional `dev` dependencies.
To install dev dependencies:
```bash
poetry install --with dev
```
Launch a notebook:
```bash
poetry run jupyter notebook
```
When you run `poetry install`, the `langchain` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook.
## Documentation
While the code is split between `langchain` and `langchain.experimental`, the documentation is maintained as a single, holistic resource.
This covers how to get started contributing to documentation.
### Contribute Documentation
The docs directory contains Documentation and API Reference.
Documentation is built using [Docusaurus 2](https://docusaurus.io/).
The API Reference is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code.
For that reason, we ask that you add good documentation to all classes and methods.
Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Build Documentation Locally
In the following commands, the prefix `api_` indicates that those are operations for the API Reference.
Before building the documentation, it is always a good idea to clean the build directory:
```bash
make docs_clean
make api_docs_clean
```
Next, you can build the documentation as outlined below:
```bash
make docs_build
make api_docs_build
```
Finally, you can run the linkchecker to make sure all links are valid:
```bash
make docs_linkcheck
make api_docs_linkcheck
```
## 🏭 Release Process
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by
a developer and published to [PyPI](https://pypi.org/project/langchain/).
LangChain follows the [semver](https://semver.org/) versioning standard. However, as pre-1.0 software,
even patch releases may contain [non-backwards-compatible changes](https://semver.org/#spec-item-4).
### 🌟 Recognition
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or in another manner.
To learn how to contribute to LangChain, please follow the [contribution guide here](https://docs.langchain.com/oss/python/contributing).

View File

@@ -1,125 +1,106 @@
name: "\U0001F41B Bug Report"
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the LangChain forum.
labels: ["bug"]
type: bug
description: Submit a bug report to help us improve LangChain
labels: ["02 Bug Report"]
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to file a bug report.
value: >
Thank you for taking the time to file a bug report. Before creating a new
issue, please make sure to take a few moments to check the issue tracker
for existing issues about the bug.
Use this to report BUGS in LangChain. For usage questions, feature requests and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Relevant links to check before filing a bug report to see if your issue has already been reported, fixed, or
if there's another way to solve your problem:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://python.langchain.com/api_reference/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
- type: checkboxes
id: checks
- type: textarea
id: system-info
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
label: System Info
description: Please share your system info with us.
placeholder: LangChain version, platform, python version, ...
validations:
required: true
- type: textarea
id: who-can-help
attributes:
label: Who can help?
description: |
Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way; otherwise, here is a rough guide of **who to tag**.
The core maintainers strive to read all issues, but tagging them will help them prioritize.
Please tag fewer than 3 people.
@hwchase17 - project lead
Tracing / Callbacks
- @agola11
Async
- @agola11
DataLoader Abstractions
- @eyurtsev
LLM/Chat Wrappers
- @hwchase17
- @agola11
Tools / Toolkits
- ...
placeholder: "@Username ..."
- type: checkboxes
id: information-scripts-examples
attributes:
label: Information
description: "The problem arises when using:"
options:
- label: This is a bug, not a usage question.
required: true
- label: I added a clear and descriptive title that summarizes this issue.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- label: I am sure that this is a bug in LangChain rather than my code.
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- label: This is not related to the langchain-community package.
required: true
- label: I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
required: true
- label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
required: true
- label: "The official example notebooks/scripts"
- label: "My own modified scripts"
- type: checkboxes
id: related-components
attributes:
label: Related Components
description: "Select the components related to the issue (if applicable):"
options:
- label: "LLMs/Chat Models"
- label: "Embedding Models"
- label: "Prompts / Prompt Templates / Prompt Selectors"
- label: "Output Parsers"
- label: "Document Loaders"
- label: "Vector Stores / Retrievers"
- label: "Memory"
- label: "Agents / Agent Executors"
- label: "Tools / Toolkits"
- label: "Chains"
- label: "Callbacks/Tracing"
- label: "Async"
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Example Code
label: Reproduction
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
**Important!**
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
Please provide a [code sample](https://stackoverflow.com/help/minimal-reproducible-example) that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
The following code:
Steps to reproduce the behavior:
```python
from langchain_core.runnables import RunnableLambda
1.
2.
3.
def bad_code(inputs) -> int:
raise NotImplementedError('For demo purpose')
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
```
- type: textarea
id: error
validations:
required: false
attributes:
label: Error Message and Stack Trace (if applicable)
description: |
If you are reporting an error, please include the full error message and stack trace.
placeholder: |
Exception + full stack trace
- type: textarea
id: description
attributes:
label: Description
description: |
What is the problem, question, or error?
Write a short description telling what you are doing, what you expect to happen, and what is currently happening.
placeholder: |
* I'm trying to use the `langchain` library to do X.
* I expect to see Y.
* Instead, it does Z.
id: expected-behavior
validations:
required: true
- type: textarea
id: system-info
attributes:
label: System Info
description: |
Please share your system info with us. Do NOT skip this step and please don't trim
the output. Most users don't include enough information here and it makes it harder
for us to help you.
Run the following command in your terminal and paste the output here:
`python -m langchain_core.sys_info`
or if you have an existing python interpreter running:
```python
from langchain_core import sys_info
sys_info.print_sys_info()
```
Alternatively, put the entire output of `pip freeze` here.
placeholder: |
python -m langchain_core.sys_info
validations:
required: true
label: Expected behavior
description: "A clear and concise description of what you would expect to happen."

View File

@@ -1,9 +1,6 @@
blank_issues_enabled: false
blank_issues_enabled: true
version: 2.1
contact_links:
- name: 📚 Documentation
url: https://github.com/langchain-ai/docs/issues/new?template=langchain.yml
about: Report an issue related to the LangChain documentation
- name: 💬 LangChain Forum
url: https://forum.langchain.com/
about: General community discussions and support
- name: Discord
url: https://discord.gg/6adMQxSpJS
about: General community discussions

View File

@@ -0,0 +1,19 @@
name: Documentation
description: Report an issue related to the LangChain documentation.
title: "DOC: <Please write a comprehensive title after the 'DOC: ' prefix>"
labels: [03 - Documentation]
body:
- type: textarea
attributes:
label: "Issue with current documentation:"
description: >
Please make sure to leave a reference to the document/code you're
referring to.
- type: textarea
attributes:
label: "Idea or request for content:"
description: >
Please describe as clearly as possible what topics you think are missing
from the current documentation.

View File

@@ -1,118 +1,30 @@
name: " Feature Request"
description: Request a new feature or enhancement for LangChain. For questions, please use the LangChain forum.
labels: ["feature request"]
type: feature
name: "\U0001F680 Feature request"
description: Submit a proposal/request for a new LangChain feature
labels: ["02 Feature Request"]
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to request a new feature.
Use this to request NEW FEATURES or ENHANCEMENTS in LangChain. For bug reports, please use the bug report template. For usage questions and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Relevant links to check before filing a feature request to see if your request has already been made or
if there's another way to achieve what you want:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://python.langchain.com/api_reference/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: This is a feature request, not a bug report or usage question.
required: true
- label: I added a clear and descriptive title that summarizes the feature request.
required: true
- label: I used the GitHub search to find a similar feature request and didn't find it.
required: true
- label: I checked the LangChain documentation and API reference to see if this feature already exists.
required: true
- label: This is not related to the langchain-community package.
required: true
- type: textarea
id: feature-description
id: feature-request
validations:
required: true
attributes:
label: Feature Description
label: Feature request
description: |
Please provide a clear and concise description of the feature you would like to see added to LangChain.
What specific functionality are you requesting? Be as detailed as possible.
placeholder: |
I would like LangChain to support...
This feature would allow users to...
A clear and concise description of the feature proposal. Please provide links to any relevant GitHub repos, papers, or other resources if relevant.
- type: textarea
id: use-case
id: motivation
validations:
required: true
attributes:
label: Use Case
label: Motivation
description: |
Describe the specific use case or problem this feature would solve.
Why do you need this feature? What problem does it solve for you or other users?
placeholder: |
I'm trying to build an application that...
Currently, I have to work around this by...
This feature would help me/users to...
Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too.
- type: textarea
id: proposed-solution
id: contribution
validations:
required: false
required: true
attributes:
label: Proposed Solution
label: Your contribution
description: |
If you have ideas about how this feature could be implemented, please describe them here.
This is optional but can be helpful for maintainers to understand your vision.
placeholder: |
I think this could be implemented by...
The API could look like...
```python
# Example of how the feature might work
```
- type: textarea
id: alternatives
validations:
required: false
attributes:
label: Alternatives Considered
description: |
Have you considered any alternative solutions or workarounds?
What other approaches have you tried or considered?
placeholder: |
I've tried using...
Alternative approaches I considered:
1. ...
2. ...
But these don't work because...
- type: textarea
id: additional-context
validations:
required: false
attributes:
label: Additional Context
description: |
Add any other context, screenshots, examples, or references that would help explain your feature request.
placeholder: |
Related issues: #...
Similar features in other libraries:
- ...
Additional context or examples:
- ...
Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.md [readme](https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md)

.github/ISSUE_TEMPLATE/other.yml vendored Normal file
View File

@@ -0,0 +1,18 @@
name: Other Issue
description: Raise an issue that wouldn't be covered by the other templates.
title: "Issue: <Please write a comprehensive title after the 'Issue: ' prefix>"
labels: [04 - Other]
body:
- type: textarea
attributes:
label: "Issue you'd like to raise."
description: >
Please describe the issue you'd like to raise as clearly as possible.
Make sure to include any relevant links or references.
- type: textarea
attributes:
label: "Suggestion:"
description: >
Please outline a suggestion to improve the issue here.

View File

@@ -1,20 +0,0 @@
name: 🔒 Privileged
description: You are a LangChain maintainer, or were asked directly by a maintainer to create an issue here. If not, check the other options.
body:
- type: markdown
attributes:
value: |
If you are not a LangChain maintainer, employee, or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.
- type: checkboxes
id: privileged
attributes:
label: Privileged issue
description: Confirm that you are allowed to create an issue here.
options:
- label: I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
required: true
- type: textarea
id: content
attributes:
label: Issue Content
description: Add the content of the issue here.

View File

@@ -1,91 +0,0 @@
name: "📋 Task"
description: Create a task for project management and tracking by LangChain maintainers. If you are not a maintainer, please use other templates or the forum.
labels: ["task"]
type: task
body:
- type: markdown
attributes:
value: |
Thanks for creating a task to help organize LangChain development.
This template is for **maintainer tasks** such as project management, development planning, refactoring, documentation updates, and other organizational work.
If you are not a LangChain maintainer or were not asked directly by a maintainer to create a task, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead or use the appropriate bug report or feature request templates on the previous page.
- type: checkboxes
id: maintainer
attributes:
label: Maintainer task
description: Confirm that you are allowed to create a task here.
options:
- label: I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create a task here.
required: true
- type: textarea
id: task-description
attributes:
label: Task Description
description: |
Provide a clear and detailed description of the task.
What needs to be done? Be specific about the scope and requirements.
placeholder: |
This task involves...
The goal is to...
Specific requirements:
- ...
- ...
validations:
required: true
- type: textarea
id: acceptance-criteria
attributes:
label: Acceptance Criteria
description: |
Define the criteria that must be met for this task to be considered complete.
What are the specific deliverables or outcomes expected?
placeholder: |
This task will be complete when:
- [ ] ...
- [ ] ...
- [ ] ...
validations:
required: true
- type: textarea
id: context
attributes:
label: Context and Background
description: |
Provide any relevant context, background information, or links to related issues/PRs.
Why is this task needed? What problem does it solve?
placeholder: |
Background:
- ...
Related issues/PRs:
- #...
Additional context:
- ...
validations:
required: false
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: |
List any dependencies or blockers for this task.
Are there other tasks, issues, or external factors that need to be completed first?
placeholder: |
This task depends on:
- [ ] Issue #...
- [ ] PR #...
- [ ] External dependency: ...
Blocked by:
- ...
validations:
required: false

View File

@@ -1,33 +1,28 @@
(Replace this entire block of text)
<!-- Thank you for contributing to LangChain!
Thank you for contributing to LangChain! Follow these steps to mark your pull request as ready for review. **If any of these steps are not completed, your PR will not be considered for review.**
Replace this comment with:
- Description: a description of the change,
- Issue: the issue # it fixes (if applicable),
- Dependencies: any dependencies required for this change,
- Tag maintainer: for a quicker response, tag the relevant maintainer (see below),
- Twitter handle: we announce bigger features on Twitter. If your PR gets announced and you'd like a mention, we'll gladly shout you out!
- [ ] **PR title**: Follows the format: {TYPE}({SCOPE}): {DESCRIPTION}
- Examples:
- feat(core): add multi-tenant support
- fix(cli): resolve flag parsing error
- docs(openai): update API usage examples
- Allowed `{TYPE}` values:
- feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert, release
- Allowed `{SCOPE}` values (optional):
- core, cli, langchain, standard-tests, docs, anthropic, chroma, deepseek, exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant, xai
- *Note:* the `{DESCRIPTION}` must not start with an uppercase letter.
- Once you've written the title, please delete this checklist item; do not include it in the PR.
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally.
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
- **Description:** a description of the change. Include a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) if applicable to a relevant issue.
- **Issue:** the issue # it fixes, if applicable (e.g. Fixes #123)
- **Dependencies:** any dependencies required for this change
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use.
- [ ] **Add tests and docs**: If you're adding a new integration, you must include:
1. A test for the integration, preferably unit tests that do not rely on network access,
2. An example notebook showing its use. It lives in `docs/docs/integrations` directory.
Maintainer responsibilities:
- General / Misc / if you don't know who to tag: @baskaryan
- DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
- Models / Prompts: @hwchase17, @baskaryan
- Memory: @hwchase17
- Agents / Tools / Toolkits: @hinthornw
- Tracing / Callbacks: @agola11
- Async: @agola11
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. **We will not consider a PR unless these three are passing in CI.** See [contribution guidelines](https://python.langchain.com/docs/contributing/) for more.
If no one reviews your PR within a few days, feel free to @-mention the same people again.
Additional guidelines:
- Most PRs should not touch more than one package.
- Please do not add dependencies to `pyproject.toml` files (even optional ones) unless they are **required** for unit tests.
- Changes should be backwards compatible.
- Make sure optional dependencies are imported within a function.
See contribution guidelines for more information on how to write/run tests, lint, etc: https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
-->

View File

@@ -1,7 +0,0 @@
FROM python:3.9
RUN pip install httpx PyGithub "pydantic==2.0.2" pydantic-settings "pyyaml>=5.3.1,<6.0.0"
COPY ./app /app
CMD ["python", "/app/main.py"]

View File

@@ -1,13 +0,0 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/action.yml
# TODO: fix this, migrate to new docs repo?
name: "Generate LangChain People"
description: "Generate the data for the LangChain People page"
author: "Jacob Lee <jacob@langchain.dev>"
inputs:
token:
description: "User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}"
required: true
runs:
using: "docker"
image: "Dockerfile"

View File

@@ -1,646 +0,0 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/app/main.py
import logging
import subprocess
import sys
from collections import Counter
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Container, Dict, List, Set, Union
import httpx
import yaml
from github import Github
from pydantic import BaseModel, SecretStr
from pydantic_settings import BaseSettings
github_graphql_url = "https://api.github.com/graphql"
questions_category_id = "DIC_kwDOIPDwls4CS6Ve"
# discussions_query = """
# query Q($after: String, $category_id: ID) {
# repository(name: "langchain", owner: "langchain-ai") {
# discussions(first: 100, after: $after, categoryId: $category_id) {
# edges {
# cursor
# node {
# number
# author {
# login
# avatarUrl
# url
# }
# title
# createdAt
# comments(first: 100) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# isAnswer
# replies(first: 10) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# }
# }
# }
# }
# }
# }
# }
# }
# }
# """
# issues_query = """
# query Q($after: String) {
# repository(name: "langchain", owner: "langchain-ai") {
# issues(first: 100, after: $after) {
# edges {
# cursor
# node {
# number
# author {
# login
# avatarUrl
# url
# }
# title
# createdAt
# state
# comments(first: 100) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# }
# }
# }
# }
# }
# }
# }
# """
prs_query = """
query Q($after: String) {
repository(name: "langchain", owner: "langchain-ai") {
pullRequests(first: 100, after: $after, states: MERGED) {
edges {
cursor
node {
changedFiles
additions
deletions
number
labels(first: 100) {
nodes {
name
}
}
author {
login
avatarUrl
url
... on User {
twitterUsername
}
}
title
createdAt
state
reviews(first:100) {
nodes {
author {
login
avatarUrl
url
... on User {
twitterUsername
}
}
state
}
}
}
}
}
}
}
"""
class Author(BaseModel):
login: str
avatarUrl: str
url: str
twitterUsername: Union[str, None] = None
# Issues and Discussions
class CommentsNode(BaseModel):
createdAt: datetime
author: Union[Author, None] = None
class Replies(BaseModel):
nodes: List[CommentsNode]
class DiscussionsCommentsNode(CommentsNode):
replies: Replies
class Comments(BaseModel):
nodes: List[CommentsNode]
class DiscussionsComments(BaseModel):
nodes: List[DiscussionsCommentsNode]
class IssuesNode(BaseModel):
number: int
author: Union[Author, None] = None
title: str
createdAt: datetime
state: str
comments: Comments
class DiscussionsNode(BaseModel):
number: int
author: Union[Author, None] = None
title: str
createdAt: datetime
comments: DiscussionsComments
class IssuesEdge(BaseModel):
cursor: str
node: IssuesNode
class DiscussionsEdge(BaseModel):
cursor: str
node: DiscussionsNode
class Issues(BaseModel):
edges: List[IssuesEdge]
class Discussions(BaseModel):
edges: List[DiscussionsEdge]
class IssuesRepository(BaseModel):
issues: Issues
class DiscussionsRepository(BaseModel):
discussions: Discussions
class IssuesResponseData(BaseModel):
repository: IssuesRepository
class DiscussionsResponseData(BaseModel):
repository: DiscussionsRepository
class IssuesResponse(BaseModel):
data: IssuesResponseData
class DiscussionsResponse(BaseModel):
data: DiscussionsResponseData
# PRs
class LabelNode(BaseModel):
name: str
class Labels(BaseModel):
nodes: List[LabelNode]
class ReviewNode(BaseModel):
author: Union[Author, None] = None
state: str
class Reviews(BaseModel):
nodes: List[ReviewNode]
class PullRequestNode(BaseModel):
number: int
labels: Labels
author: Union[Author, None] = None
changedFiles: int
additions: int
deletions: int
title: str
createdAt: datetime
state: str
reviews: Reviews
# comments: Comments
class PullRequestEdge(BaseModel):
cursor: str
node: PullRequestNode
class PullRequests(BaseModel):
edges: List[PullRequestEdge]
class PRsRepository(BaseModel):
pullRequests: PullRequests
class PRsResponseData(BaseModel):
repository: PRsRepository
class PRsResponse(BaseModel):
data: PRsResponseData
class Settings(BaseSettings):
input_token: SecretStr
github_repository: str
httpx_timeout: int = 30
def get_graphql_response(
*,
settings: Settings,
query: str,
after: Union[str, None] = None,
category_id: Union[str, None] = None,
) -> Dict[str, Any]:
headers = {"Authorization": f"token {settings.input_token.get_secret_value()}"}
# category_id is only used by one query, but GraphQL allows unused variables, so
# keep it here for simplicity
variables = {"after": after, "category_id": category_id}
response = httpx.post(
github_graphql_url,
headers=headers,
timeout=settings.httpx_timeout,
json={"query": query, "variables": variables, "operationName": "Q"},
)
if response.status_code != 200:
logging.error(
f"Response was not 200, after: {after}, category_id: {category_id}"
)
logging.error(response.text)
raise RuntimeError(response.text)
data = response.json()
if "errors" in data:
logging.error(f"Errors in response, after: {after}, category_id: {category_id}")
logging.error(data["errors"])
logging.error(response.text)
raise RuntimeError(response.text)
return data
# def get_graphql_issue_edges(*, settings: Settings, after: Union[str, None] = None):
# data = get_graphql_response(settings=settings, query=issues_query, after=after)
# graphql_response = IssuesResponse.model_validate(data)
# return graphql_response.data.repository.issues.edges
# def get_graphql_question_discussion_edges(
# *,
# settings: Settings,
# after: Union[str, None] = None,
# ):
# data = get_graphql_response(
# settings=settings,
# query=discussions_query,
# after=after,
# category_id=questions_category_id,
# )
# graphql_response = DiscussionsResponse.model_validate(data)
# return graphql_response.data.repository.discussions.edges
def get_graphql_pr_edges(*, settings: Settings, after: Union[str, None] = None):
if after is None:
print("Querying PRs...")
else:
print(f"Querying PRs with cursor {after}...")
data = get_graphql_response(settings=settings, query=prs_query, after=after)
graphql_response = PRsResponse.model_validate(data)
return graphql_response.data.repository.pullRequests.edges
# def get_issues_experts(settings: Settings):
# issue_nodes: List[IssuesNode] = []
# issue_edges = get_graphql_issue_edges(settings=settings)
# while issue_edges:
# for edge in issue_edges:
# issue_nodes.append(edge.node)
# last_edge = issue_edges[-1]
# issue_edges = get_graphql_issue_edges(settings=settings, after=last_edge.cursor)
# commentors = Counter()
# last_month_commentors = Counter()
# authors: Dict[str, Author] = {}
# now = datetime.now(tz=timezone.utc)
# one_month_ago = now - timedelta(days=30)
# for issue in issue_nodes:
# issue_author_name = None
# if issue.author:
# authors[issue.author.login] = issue.author
# issue_author_name = issue.author.login
# issue_commentors = set()
# for comment in issue.comments.nodes:
# if comment.author:
# authors[comment.author.login] = comment.author
# if comment.author.login != issue_author_name:
# issue_commentors.add(comment.author.login)
# for author_name in issue_commentors:
# commentors[author_name] += 1
# if issue.createdAt > one_month_ago:
# last_month_commentors[author_name] += 1
# return commentors, last_month_commentors, authors
# def get_discussions_experts(settings: Settings):
# discussion_nodes: List[DiscussionsNode] = []
# discussion_edges = get_graphql_question_discussion_edges(settings=settings)
# while discussion_edges:
# for discussion_edge in discussion_edges:
# discussion_nodes.append(discussion_edge.node)
# last_edge = discussion_edges[-1]
# discussion_edges = get_graphql_question_discussion_edges(
# settings=settings, after=last_edge.cursor
# )
# commentors = Counter()
# last_month_commentors = Counter()
# authors: Dict[str, Author] = {}
# now = datetime.now(tz=timezone.utc)
# one_month_ago = now - timedelta(days=30)
# for discussion in discussion_nodes:
# discussion_author_name = None
# if discussion.author:
# authors[discussion.author.login] = discussion.author
# discussion_author_name = discussion.author.login
# discussion_commentors = set()
# for comment in discussion.comments.nodes:
# if comment.author:
# authors[comment.author.login] = comment.author
# if comment.author.login != discussion_author_name:
# discussion_commentors.add(comment.author.login)
# for reply in comment.replies.nodes:
# if reply.author:
# authors[reply.author.login] = reply.author
# if reply.author.login != discussion_author_name:
# discussion_commentors.add(reply.author.login)
# for author_name in discussion_commentors:
# commentors[author_name] += 1
# if discussion.createdAt > one_month_ago:
# last_month_commentors[author_name] += 1
# return commentors, last_month_commentors, authors
# def get_experts(settings: Settings):
# (
# discussions_commentors,
# discussions_last_month_commentors,
# discussions_authors,
# ) = get_discussions_experts(settings=settings)
# commentors = discussions_commentors
# last_month_commentors = discussions_last_month_commentors
# authors = {**discussions_authors}
# return commentors, last_month_commentors, authors
def _logistic(x, k):
return x / (x + k)
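# Score from `_logistic` saturates toward 1.0 as x grows past k: e.g.
# _logistic(20, 20) == 0.5, so a PR touching 20 files and 100 changed lines
# scores 0.5 + 0.5 = 1.0 below, and even a huge PR contributes less than 2.0.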
def get_contributors(settings: Settings):
pr_nodes: List[PullRequestNode] = []
pr_edges = get_graphql_pr_edges(settings=settings)
while pr_edges:
for edge in pr_edges:
pr_nodes.append(edge.node)
last_edge = pr_edges[-1]
pr_edges = get_graphql_pr_edges(settings=settings, after=last_edge.cursor)
contributors = Counter()
contributor_scores = Counter()
recent_contributor_scores = Counter()
reviewers = Counter()
authors: Dict[str, Author] = {}
for pr in pr_nodes:
pr_reviewers: Set[str] = set()
for review in pr.reviews.nodes:
if review.author:
authors[review.author.login] = review.author
pr_reviewers.add(review.author.login)
for reviewer in pr_reviewers:
reviewers[reviewer] += 1
if pr.author:
authors[pr.author.login] = pr.author
contributors[pr.author.login] += 1
files_changed = pr.changedFiles
lines_changed = pr.additions + pr.deletions
score = _logistic(files_changed, 20) + _logistic(lines_changed, 100)
contributor_scores[pr.author.login] += score
three_months_ago = datetime.now(timezone.utc) - timedelta(days=3 * 30)
if pr.createdAt > three_months_ago:
recent_contributor_scores[pr.author.login] += score
return (
contributors,
contributor_scores,
recent_contributor_scores,
reviewers,
authors,
)
def get_top_users(
*,
counter: Counter,
min_count: int,
authors: Dict[str, Author],
skip_users: Container[str],
):
users = []
for commentor, count in counter.most_common():
if commentor in skip_users:
continue
if count >= min_count:
author = authors[commentor]
users.append(
{
"login": commentor,
"count": count,
"avatarUrl": author.avatarUrl,
"twitterUsername": author.twitterUsername,
"url": author.url,
}
)
return users
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
settings = Settings()
logging.info(f"Using config: {settings.model_dump_json()}")
g = Github(settings.input_token.get_secret_value())
repo = g.get_repo(settings.github_repository)
# question_commentors, question_last_month_commentors, question_authors = get_experts(
# settings=settings
# )
(
contributors,
contributor_scores,
recent_contributor_scores,
reviewers,
pr_authors,
) = get_contributors(settings=settings)
# authors = {**question_authors, **pr_authors}
authors = {**pr_authors}
maintainers_logins = {
"hwchase17",
"agola11",
"baskaryan",
"hinthornw",
"nfcampos",
"efriis",
"eyurtsev",
"rlancemartin",
"ccurme",
"vbarda",
}
hidden_logins = {
"dev2049",
"vowelparrot",
"obi1kenobi",
"langchain-infra",
"jacoblee93",
"isahers1",
"dqbd",
"bracesproul",
"akira",
}
bot_names = {"dosubot", "github-actions", "CodiumAI-Agent"}
maintainers = []
for login in maintainers_logins:
user = authors[login]
maintainers.append(
{
"login": login,
"count": contributors[login], # + question_commentors[login],
"avatarUrl": user.avatarUrl,
"twitterUsername": user.twitterUsername,
"url": user.url,
}
)
# min_count_expert = 10
# min_count_last_month = 3
min_score_contributor = 1
min_count_reviewer = 5
skip_users = maintainers_logins | bot_names | hidden_logins
# experts = get_top_users(
# counter=question_commentors,
# min_count=min_count_expert,
# authors=authors,
# skip_users=skip_users,
# )
# last_month_active = get_top_users(
# counter=question_last_month_commentors,
# min_count=min_count_last_month,
# authors=authors,
# skip_users=skip_users,
# )
top_recent_contributors = get_top_users(
counter=recent_contributor_scores,
min_count=min_score_contributor,
authors=authors,
skip_users=skip_users,
)
top_contributors = get_top_users(
counter=contributor_scores,
min_count=min_score_contributor,
authors=authors,
skip_users=skip_users,
)
top_reviewers = get_top_users(
counter=reviewers,
min_count=min_count_reviewer,
authors=authors,
skip_users=skip_users,
)
people = {
"maintainers": maintainers,
# "experts": experts,
# "last_month_active": last_month_active,
"top_recent_contributors": top_recent_contributors,
"top_contributors": top_contributors,
"top_reviewers": top_reviewers,
}
people_path = Path("./docs/data/people.yml")
people_old_content = people_path.read_text(encoding="utf-8")
new_people_content = yaml.dump(
people, sort_keys=False, width=200, allow_unicode=True
)
if people_old_content == new_people_content:
logging.info("The LangChain People data hasn't changed, finishing.")
sys.exit(0)
people_path.write_text(new_people_content, encoding="utf-8")
logging.info("Setting up GitHub Actions git user")
subprocess.run(["git", "config", "user.name", "github-actions"], check=True)
subprocess.run(
["git", "config", "user.email", "github-actions@github.com"], check=True
)
branch_name = "langchain/langchain-people"
logging.info(f"Creating a new branch {branch_name}")
subprocess.run(["git", "checkout", "-B", branch_name], check=True)
logging.info("Adding updated file")
subprocess.run(["git", "add", str(people_path)], check=True)
logging.info("Committing updated file")
message = "👥 Update LangChain people data"
result = subprocess.run(["git", "commit", "-m", message], check=True)
logging.info("Pushing branch")
subprocess.run(["git", "push", "origin", branch_name, "-f"], check=True)
logging.info("Creating PR")
pr = repo.create_pull(title=message, body=message, base="master", head=branch_name)
logging.info(f"Created PR: {pr.number}")
logging.info("Finished")

View File

@@ -15,79 +15,64 @@ inputs:
description: Poetry version
required: true
install-command:
description: Command run for installing dependencies
required: false
default: poetry install
cache-key:
description: Cache key to use for manual handling of caching
required: true
working-directory:
description: Directory whose poetry.lock file should be cached
required: true
description: Directory to run install-command in
required: false
default: ""
runs:
using: composite
steps:
- uses: actions/setup-python@v5
name: Setup python ${{ inputs.python-version }}
id: setup-python
- uses: actions/setup-python@v4
      name: Setup python ${{ inputs.python-version }}
with:
python-version: ${{ inputs.python-version }}
- uses: actions/cache@v4
id: cache-bin-poetry
name: Cache Poetry binary - Python ${{ inputs.python-version }}
- uses: actions/cache@v3
id: cache-pip
name: Cache Pip ${{ inputs.python-version }}
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "1"
with:
path: |
/opt/pipx/venvs/poetry
# This step caches the poetry installation, so make sure it's keyed on the poetry version as well.
key: bin-poetry-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-${{ inputs.poetry-version }}
- name: Refresh shell hashtable and fixup softlinks
if: steps.cache-bin-poetry.outputs.cache-hit == 'true'
shell: bash
env:
POETRY_VERSION: ${{ inputs.poetry-version }}
PYTHON_VERSION: ${{ inputs.python-version }}
run: |
set -eux
# Refresh the shell hashtable, to ensure correct `which` output.
hash -r
# `actions/cache@v3` doesn't always seem able to correctly unpack softlinks.
# Delete and recreate the softlinks pipx expects to have.
rm /opt/pipx/venvs/poetry/bin/python
cd /opt/pipx/venvs/poetry/bin
ln -s "$(which "python$PYTHON_VERSION")" python
chmod +x python
cd /opt/pipx_bin/
ln -s /opt/pipx/venvs/poetry/bin/poetry poetry
chmod +x poetry
# Ensure everything got set up correctly.
/opt/pipx/venvs/poetry/bin/python --version
/opt/pipx_bin/poetry --version
- name: Install poetry
if: steps.cache-bin-poetry.outputs.cache-hit != 'true'
shell: bash
env:
POETRY_VERSION: ${{ inputs.poetry-version }}
PYTHON_VERSION: ${{ inputs.python-version }}
# Install poetry using the python version installed by setup-python step.
run: pipx install "poetry==$POETRY_VERSION" --python '${{ steps.setup-python.outputs.python-path }}' --verbose
- name: Restore pip and poetry cached dependencies
uses: actions/cache@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "4"
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "15"
with:
path: |
~/.cache/pip
key: pip-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}
- run: pipx install poetry==${{ inputs.poetry-version }} --python python${{ inputs.python-version }}
shell: bash
- name: Check Poetry File
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
poetry check
- name: Check lock file
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
poetry lock --check
- uses: actions/cache@v3
id: cache-poetry
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "15"
with:
path: |
~/.cache/pypoetry/virtualenvs
~/.cache/pypoetry/cache
~/.cache/pypoetry/artifacts
${{ env.WORKDIR }}/.venv
key: py-deps-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-poetry-${{ inputs.poetry-version }}-${{ inputs.cache-key }}-${{ hashFiles(format('{0}/**/poetry.lock', env.WORKDIR)) }}
key: poetry-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-poetry-${{ inputs.poetry-version }}-${{ inputs.cache-key }}-${{ hashFiles('poetry.lock') }}
- run: ${{ inputs.install-command }}
working-directory: ${{ inputs.working-directory }}
shell: bash

View File

@@ -1,39 +0,0 @@
# Helper to set up Python and uv with caching
name: uv-install
description: Set up Python and uv with caching
inputs:
python-version:
description: Python version, supporting MAJOR.MINOR only
required: true
enable-cache:
description: Enable caching for uv dependencies
required: false
default: "true"
cache-suffix:
description: Custom cache key suffix for cache invalidation
required: false
default: ""
working-directory:
description: Working directory for cache glob scoping
required: false
default: "**"
env:
UV_VERSION: "0.5.25"
runs:
using: composite
steps:
- name: Install uv and set the python version
uses: astral-sh/setup-uv@v6
with:
version: ${{ env.UV_VERSION }}
python-version: ${{ inputs.python-version }}
enable-cache: ${{ inputs.enable-cache }}
cache-dependency-glob: |
${{ inputs.working-directory }}/pyproject.toml
${{ inputs.working-directory }}/uv.lock
${{ inputs.working-directory }}/requirements*.txt
cache-suffix: ${{ inputs.cache-suffix }}

View File

@@ -1,325 +0,0 @@
# Global Development Guidelines for LangChain Projects
## Core Development Principles
### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL
**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**
**Bad - Breaking Change:**
```python
def get_user(id, verbose=False): # Changed from `user_id`
pass
```
**Good - Stable Interface:**
```python
def get_user(user_id: str, verbose: bool = False) -> User:
"""Retrieve user by ID with optional verbose output."""
pass
```
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using reStructuredText, like `.. warning::`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
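**Safe Evolution - Keyword-Only Addition** (a minimal sketch; `include_inactive` is a hypothetical new parameter):
```python
def get_user(user_id: str, verbose: bool = False, *, include_inactive: bool = False) -> User:
    """Retrieve user by ID with optional verbose output.

    Args:
        user_id: Identifier of the user to look up.
        verbose: Whether to include verbose output.
        include_inactive: Whether to also match deactivated accounts.
    """
    pass
```
Existing calls such as `get_user("u1", True)` keep working because the new parameter can only be passed by keyword.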
### 2. Code Quality Standards
**All Python code MUST include type hints and return types.**
**Bad:**
```python
def p(u, d):
return [x for x in u if x not in d]
```
**Good:**
```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
"""Filter out users that are not in the known users set.
Args:
users: List of user identifiers to filter.
known_users: Set of known/valid user identifiers.
Returns:
List of users that are not in the known_users set.
"""
return [user for user in users if user not in known_users]
```
**Style Requirements:**
- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying
### 3. Testing Requirements
**Every new feature or bugfix MUST be covered by unit tests.**
**Test Organization:**
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework
**Test Quality Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
Checklist questions:
- [ ] Does the test suite fail if your new logic is broken?
- [ ] Are all expected behaviors exercised (happy path, invalid input, etc)?
- [ ] Do tests use fixtures or mocks where needed?
```python
def test_filter_unknown_users():
"""Test filtering unknown users from a list."""
users = ["alice", "bob", "charlie"]
known_users = {"alice", "bob"}
result = filter_unknown_users(users, known_users)
assert result == ["charlie"]
assert len(result) == 1
```
### 4. Security and Risk Assessment
**Security Checklist:**
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Handle exceptions properly (no bare `except:`) and use a `msg` variable for error messages
- Remove unreachable/commented code before committing
- Avoid race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)
**Bad:**
```python
def load_config(path):
with open(path) as f:
return eval(f.read()) # ⚠️ Never eval config
```
**Good:**
```python
import json
def load_config(path: str) -> dict:
with open(path) as f:
return json.load(f)
```
### 5. Documentation Standards
**Use Google-style docstrings with Args section for all public functions.**
**Insufficient Documentation:**
```python
def send_email(to, msg):
"""Send an email to a recipient."""
```
**Complete Documentation:**
```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
"""
Send an email to a recipient with specified priority.
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level (``'low'``, ``'normal'``, ``'high'``).
Returns:
True if email was sent successfully, False otherwise.
Raises:
InvalidEmailError: If the email address format is invalid.
SMTPConnectionError: If unable to connect to email server.
"""
```
**Documentation Guidelines:**
- Types go in function signatures, NOT in docstrings
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Use reStructuredText for docstrings to enable rich formatting
📌 *Tip:* Only document return values when they are non-obvious.
### 6. Architectural Improvements
**When you encounter code that could be improved, suggest better designs:**
**Poor Design:**
```python
def process_data(data, db_conn, email_client, logger):
# Function doing too many things
validated = validate_data(data)
result = db_conn.save(validated)
email_client.send_notification(result)
logger.log(f"Processed {len(data)} items")
return result
```
**Better Design:**
```python
@dataclass
class ProcessingResult:
"""Result of data processing operation."""
items_processed: int
success: bool
errors: List[str] = field(default_factory=list)
class DataProcessor:
"""Handles data validation, storage, and notification."""
def __init__(self, db_conn: Database, email_client: EmailClient):
self.db = db_conn
self.email = email_client
def process(self, data: List[dict]) -> ProcessingResult:
"""Process and store data with notifications."""
validated = self._validate_data(data)
result = self.db.save(validated)
self._notify_completion(result)
return result
```
**Design Improvement Areas:**
If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:
- Reduce code duplication through shared utilities
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection
- Add clarity without adding complexity
- Prefer dataclasses for structured data
## Development Tools & Commands
### Package Management
```bash
# Add package
uv add package-name
# Sync project dependencies
uv sync
uv lock
```
### Testing
```bash
# Run unit tests (no network)
make test
# Don't run integration tests, as API keys must be set
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
### Code Quality
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
### Dependency Management Patterns
**Local Development Dependencies:**
```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```
**For tools, use the `@tool` decorator from `langchain_core.tools`:**
```python
from langchain_core.tools import tool
@tool
def search_database(query: str) -> str:
"""Search the database for relevant information.
Args:
query: The search query string.
"""
# Implementation here
return results
```
## Commit Standards
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Implement proper streaming support where applicable
- Avoid deprecated components like legacy `LLMChain`
### Partner Integrations
- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.)
- Include comprehensive integration tests
- Document API key requirements and authentication
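
For orientation, a minimal sketch of the chat-model interface (assuming the current `langchain_core` base classes; `ExampleChatModel` is hypothetical, and a real integration would call the provider's API, support streaming, and ship integration tests):
```python
from typing import Any, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatResult


class ExampleChatModel(BaseChatModel):
    """Toy chat model that echoes the last message, shown only to illustrate the interface."""

    @property
    def _llm_type(self) -> str:
        return "example-chat-model"

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # A real integration would forward `messages` to the provider API here.
        text = str(messages[-1].content) if messages else ""
        return ChatResult(generations=[ChatGeneration(message=AIMessage(content=text))])
```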
---
## Quick Reference Checklist
Before submitting code changes:
- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format

View File

@@ -1,11 +0,0 @@
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
# and
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"

View File

@@ -1,80 +0,0 @@
# Label PRs (config)
# Automatically applies labels based on changed files and branch patterns
# Core packages
core:
- changed-files:
- any-glob-to-any-file:
- "libs/core/**/*"
langchain:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain/**/*"
- "libs/langchain_v1/**/*"
v1:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain_v1/**/*"
cli:
- changed-files:
- any-glob-to-any-file:
- "libs/cli/**/*"
standard-tests:
- changed-files:
- any-glob-to-any-file:
- "libs/standard-tests/**/*"
# Partner integrations
integration:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/**/*"
# Infrastructure and DevOps
infra:
- changed-files:
- any-glob-to-any-file:
- ".github/**/*"
- "Makefile"
- ".pre-commit-config.yaml"
- "scripts/**/*"
- "docker/**/*"
- "Dockerfile*"
github_actions:
- changed-files:
- any-glob-to-any-file:
- ".github/workflows/**/*"
- ".github/actions/**/*"
dependencies:
- changed-files:
- any-glob-to-any-file:
- "**/pyproject.toml"
- "uv.lock"
- "**/requirements*.txt"
- "**/poetry.lock"
# Documentation
documentation:
- changed-files:
- any-glob-to-any-file:
- "docs/**/*"
- "**/*.md"
- "**/*.rst"
- "**/README*"
# Security related changes
security:
- changed-files:
- any-glob-to-any-file:
- "**/*security*"
- "**/*auth*"
- "**/*credential*"
- "**/*secret*"
- "**/*token*"
- ".github/workflows/security*"

View File

@@ -1,41 +0,0 @@
# PR title labeler config
#
# Labels PRs based on conventional commit patterns in titles
#
# Format: type(scope): description or type!: description (breaking)
add-missing-labels: true
clear-prexisting: false
include-commits: false
include-title: true
label-for-breaking-changes: breaking
label-mapping:
documentation: ["docs"]
feature: ["feat"]
fix: ["fix"]
infra: ["build", "ci", "chore"]
integration:
[
"anthropic",
"chroma",
"deepseek",
"exa",
"fireworks",
"groq",
"huggingface",
"mistralai",
"nomic",
"ollama",
"openai",
"perplexity",
"prompty",
"qdrant",
"xai",
]
linting: ["style"]
performance: ["perf"]
refactor: ["refactor"]
release: ["release"]
revert: ["revert"]
tests: ["test"]

View File

@@ -1,349 +0,0 @@
"""Analyze git diffs to determine which directories need to be tested.
Intelligently determines which LangChain packages and directories need to be tested,
linted, or built based on the changes. Handles dependency relationships between
packages, maps file changes to appropriate CI job configurations, and outputs JSON
configurations for GitHub Actions.
- Maps changed files to affected package directories (libs/core, libs/partners/*, etc.)
- Builds dependency graph to include dependent packages when core components change
- Generates test matrix configurations with appropriate Python versions
- Handles special cases for Pydantic version testing and performance benchmarks
Used as part of the check_diffs workflow.
"""
import glob
import json
import os
import sys
from collections import defaultdict
from pathlib import Path
from typing import Dict, List, Set
import tomllib
from get_min_versions import get_min_version_from_toml
from packaging.requirements import Requirement
LANGCHAIN_DIRS = [
"libs/core",
"libs/text-splitters",
"libs/langchain",
"libs/langchain_v1",
]
# When set to True, we are ignoring core dependents
# in order to be able to get CI to pass for each individual
# package that depends on core
# e.g. if you touch core, we don't then add textsplitters/etc to CI
IGNORE_CORE_DEPENDENTS = False
# ignored partners are removed from dependents
# but still run if directly edited
IGNORED_PARTNERS = [
# remove huggingface from dependents because of CI instability
# specifically in huggingface jobs
# https://github.com/langchain-ai/langchain/issues/25558
"huggingface",
# prompty exhibiting issues with numpy for Python 3.13
# https://github.com/langchain-ai/langchain/actions/runs/12651104685/job/35251034969?pr=29065
"prompty",
]
PY_312_MAX_PACKAGES = [
"libs/partners/chroma", # https://github.com/chroma-core/chroma/issues/4382
]
def all_package_dirs() -> Set[str]:
return {
"/".join(path.split("/")[:-1]).lstrip("./")
for path in glob.glob("./libs/**/pyproject.toml", recursive=True)
if "libs/cli" not in path and "libs/standard-tests" not in path
}
def dependents_graph() -> dict:
"""Construct a mapping of package -> dependents
Done such that we can run tests on all dependents of a package when a change is made.
"""
dependents = defaultdict(set)
for path in glob.glob("./libs/**/pyproject.toml", recursive=True):
if "template" in path:
continue
# load regular and test deps from pyproject.toml
with open(path, "rb") as f:
pyproject = tomllib.load(f)
pkg_dir = "libs" + "/".join(path.split("libs")[1].split("/")[:-1])
for dep in [
*pyproject["project"]["dependencies"],
*pyproject["dependency-groups"]["test"],
]:
requirement = Requirement(dep)
package_name = requirement.name
if "langchain" in dep:
dependents[package_name].add(pkg_dir)
continue
# load extended deps from extended_testing_deps.txt
package_path = Path(path).parent
extended_requirement_path = package_path / "extended_testing_deps.txt"
if extended_requirement_path.exists():
with open(extended_requirement_path, "r") as f:
extended_deps = f.read().splitlines()
for depline in extended_deps:
if depline.startswith("-e "):
# editable dependency
assert depline.startswith("-e ../partners/"), (
"Extended test deps should only editable install partner packages"
)
partner = depline.split("partners/")[1]
dep = f"langchain-{partner}"
else:
dep = depline.split("==")[0]
if "langchain" in dep:
dependents[dep].add(pkg_dir)
for k in dependents:
for partner in IGNORED_PARTNERS:
if f"libs/partners/{partner}" in dependents[k]:
dependents[k].remove(f"libs/partners/{partner}")
return dependents
def add_dependents(dirs_to_eval: Set[str], dependents: dict) -> List[str]:
updated = set()
for dir_ in dirs_to_eval:
# handle core manually because it has so many dependents
if "core" in dir_:
updated.add(dir_)
continue
pkg = "langchain-" + dir_.split("/")[-1]
updated.update(dependents[pkg])
updated.add(dir_)
return list(updated)
def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
if job == "test-pydantic":
return _get_pydantic_test_configs(dir_)
if job == "codspeed":
py_versions = ["3.12"] # 3.13 is not yet supported
elif dir_ == "libs/core":
py_versions = ["3.9", "3.10", "3.11", "3.12", "3.13"]
# custom logic for specific directories
elif dir_ in PY_312_MAX_PACKAGES:
py_versions = ["3.9", "3.12"]
elif dir_ == "libs/langchain" and job == "extended-tests":
py_versions = ["3.9", "3.13"]
elif dir_ == "libs/langchain_v1":
py_versions = ["3.10", "3.13"]
elif dir_ in {"libs/cli"}:
py_versions = ["3.10", "3.13"]
elif dir_ == ".":
# unable to install with 3.13 because tokenizers doesn't support 3.13 yet
py_versions = ["3.9", "3.12"]
else:
py_versions = ["3.9", "3.13"]
return [{"working-directory": dir_, "python-version": py_v} for py_v in py_versions]
def _get_pydantic_test_configs(
dir_: str, *, python_version: str = "3.11"
) -> List[Dict[str, str]]:
with open("./libs/core/uv.lock", "rb") as f:
core_uv_lock_data = tomllib.load(f)
for package in core_uv_lock_data["package"]:
if package["name"] == "pydantic":
core_max_pydantic_minor = package["version"].split(".")[1]
break
with open(f"./{dir_}/uv.lock", "rb") as f:
dir_uv_lock_data = tomllib.load(f)
for package in dir_uv_lock_data["package"]:
if package["name"] == "pydantic":
dir_max_pydantic_minor = package["version"].split(".")[1]
break
core_min_pydantic_version = get_min_version_from_toml(
"./libs/core/pyproject.toml", "release", python_version, include=["pydantic"]
)["pydantic"]
core_min_pydantic_minor = (
core_min_pydantic_version.split(".")[1]
if "." in core_min_pydantic_version
else "0"
)
dir_min_pydantic_version = get_min_version_from_toml(
f"./{dir_}/pyproject.toml", "release", python_version, include=["pydantic"]
).get("pydantic", "0.0.0")
dir_min_pydantic_minor = (
dir_min_pydantic_version.split(".")[1]
if "." in dir_min_pydantic_version
else "0"
)
max_pydantic_minor = min(
int(dir_max_pydantic_minor),
int(core_max_pydantic_minor),
)
min_pydantic_minor = max(
int(dir_min_pydantic_minor),
int(core_min_pydantic_minor),
)
configs = [
{
"working-directory": dir_,
"pydantic-version": f"2.{v}.0",
"python-version": python_version,
}
for v in range(min_pydantic_minor, max_pydantic_minor + 1)
]
return configs
def _get_configs_for_multi_dirs(
job: str, dirs_to_run: Dict[str, Set[str]], dependents: dict
) -> List[Dict[str, str]]:
if job == "lint":
dirs = add_dependents(
dirs_to_run["lint"] | dirs_to_run["test"] | dirs_to_run["extended-test"],
dependents,
)
elif job in ["test", "compile-integration-tests", "dependencies", "test-pydantic"]:
dirs = add_dependents(
dirs_to_run["test"] | dirs_to_run["extended-test"], dependents
)
elif job == "extended-tests":
dirs = list(dirs_to_run["extended-test"])
elif job == "codspeed":
dirs = list(dirs_to_run["codspeed"])
else:
raise ValueError(f"Unknown job: {job}")
return [
config for dir_ in dirs for config in _get_configs_for_single_dir(job, dir_)
]
if __name__ == "__main__":
files = sys.argv[1:]
dirs_to_run: Dict[str, set] = {
"lint": set(),
"test": set(),
"extended-test": set(),
"codspeed": set(),
}
docs_edited = False
if len(files) >= 300:
# max diff length is 300 files - there are likely files missing
dirs_to_run["lint"] = all_package_dirs()
dirs_to_run["test"] = all_package_dirs()
dirs_to_run["extended-test"] = set(LANGCHAIN_DIRS)
for file in files:
if any(
file.startswith(dir_)
for dir_ in (
".github/workflows",
".github/tools",
".github/actions",
".github/scripts/check_diff.py",
)
):
# add all LANGCHAIN_DIRS for infra changes
dirs_to_run["extended-test"].update(LANGCHAIN_DIRS)
dirs_to_run["lint"].add(".")
if file.startswith("libs/core"):
dirs_to_run["codspeed"].add(f"libs/core")
if any(file.startswith(dir_) for dir_ in LANGCHAIN_DIRS):
# add that dir and all dirs after in LANGCHAIN_DIRS
# for extended testing
found = False
for dir_ in LANGCHAIN_DIRS:
if dir_ == "libs/core" and IGNORE_CORE_DEPENDENTS:
dirs_to_run["extended-test"].add(dir_)
continue
if file.startswith(dir_):
found = True
if found:
dirs_to_run["extended-test"].add(dir_)
elif file.startswith("libs/standard-tests"):
# TODO: update to include all packages that rely on standard-tests (all partner packages)
# Note: won't run on external repo partners
dirs_to_run["lint"].add("libs/standard-tests")
dirs_to_run["test"].add("libs/standard-tests")
dirs_to_run["lint"].add("libs/cli")
dirs_to_run["test"].add("libs/cli")
dirs_to_run["test"].add("libs/partners/mistralai")
dirs_to_run["test"].add("libs/partners/openai")
dirs_to_run["test"].add("libs/partners/anthropic")
dirs_to_run["test"].add("libs/partners/fireworks")
dirs_to_run["test"].add("libs/partners/groq")
elif file.startswith("libs/cli"):
dirs_to_run["lint"].add("libs/cli")
dirs_to_run["test"].add("libs/cli")
elif file.startswith("libs/partners"):
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}") and [
filename
for filename in os.listdir(f"libs/partners/{partner_dir}")
if not filename.startswith(".")
] != ["README.md"]:
dirs_to_run["test"].add(f"libs/partners/{partner_dir}")
dirs_to_run["codspeed"].add(f"libs/partners/{partner_dir}")
# Skip if the directory was deleted or is just a tombstone readme
elif file == "libs/packages.yml":
continue
elif file.startswith("libs/"):
raise ValueError(
f"Unknown lib: {file}. check_diff.py likely needs "
"an update for this new library!"
)
elif file.startswith("docs/") or file in [
"pyproject.toml",
"uv.lock",
]: # docs or root uv files
docs_edited = True
dirs_to_run["lint"].add(".")
dependents = dependents_graph()
# we now have dirs_by_job
# todo: clean this up
map_job_to_configs = {
job: _get_configs_for_multi_dirs(job, dirs_to_run, dependents)
for job in [
"lint",
"test",
"extended-tests",
"compile-integration-tests",
"dependencies",
"test-pydantic",
"codspeed",
]
}
map_job_to_configs["test-doc-imports"] = (
[{"python-version": "3.12"}] if docs_edited else []
)
for key, value in map_job_to_configs.items():
json_output = json.dumps(value)
print(f"{key}={json_output}")

View File

@@ -1,36 +0,0 @@
"""Check that no dependencies allow prereleases unless we're releasing a prerelease."""
import sys
import tomllib
if __name__ == "__main__":
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
with open(toml_file, "rb") as file:
toml_data = tomllib.load(file)
# See if we're releasing an rc or dev version
version = toml_data["project"]["version"]
releasing_rc = "rc" in version or "dev" in version
# If not, iterate through dependencies and make sure none allow prereleases
if not releasing_rc:
dependencies = toml_data["project"]["dependencies"]
for dep_version in dependencies:
dep_version_string = (
dep_version["version"] if isinstance(dep_version, dict) else dep_version
)
if "rc" in dep_version_string:
raise ValueError(
f"Dependency {dep_version} has a prerelease version. Please remove this."
)
if isinstance(dep_version, dict) and dep_version.get(
"allow-prereleases", False
):
raise ValueError(
f"Dependency {dep_version} has allow-prereleases set to true. Please remove this."
)

View File

@@ -1,200 +0,0 @@
"""Get minimum versions of dependencies from a pyproject.toml file."""
import sys
from collections import defaultdict
from typing import Optional
if sys.version_info >= (3, 11):
import tomllib
else:
    # For Python 3.10 and below, which doesn't have stdlib tomllib
import tomli as tomllib
import re
from typing import List
import requests
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.version import Version, parse
MIN_VERSION_LIBS = [
"langchain-core",
"langchain",
"langchain-text-splitters",
"numpy",
"SQLAlchemy",
]
# some libs only get checked on release because of simultaneous changes in
# multiple libs
SKIP_IF_PULL_REQUEST = [
"langchain-core",
"langchain-text-splitters",
"langchain",
]
def get_pypi_versions(package_name: str) -> List[str]:
"""Fetch all available versions for a package from PyPI.
Args:
package_name: Name of the package
Returns:
List of all available versions
Raises:
requests.exceptions.RequestException: If PyPI API request fails
KeyError: If package not found or response format unexpected
"""
pypi_url = f"https://pypi.org/pypi/{package_name}/json"
response = requests.get(pypi_url)
response.raise_for_status()
return list(response.json()["releases"].keys())
def get_minimum_version(package_name: str, spec_string: str) -> Optional[str]:
"""Find the minimum published version that satisfies the given constraints.
Args:
package_name: Name of the package
spec_string: Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")
Returns:
Minimum compatible version or None if no compatible version found
"""
# Rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
spec_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec_string)
# Rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
for y in range(1, 10):
spec_string = re.sub(
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}", spec_string
)
# Rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
spec_string = re.sub(
rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x + 1}", spec_string
)
spec_set = SpecifierSet(spec_string)
all_versions = get_pypi_versions(package_name)
valid_versions = []
for version_str in all_versions:
try:
version = parse(version_str)
if spec_set.contains(version):
valid_versions.append(version)
except ValueError:
continue
return str(min(valid_versions)) if valid_versions else None
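# Example (illustrative): get_minimum_version("langchain-core", "^0.1.7") rewrites the
# caret constraint to ">=0.1.7,<0.2" and returns the smallest PyPI release satisfying it.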
def _check_python_version_from_requirement(
requirement: Requirement, python_version: str
) -> bool:
if not requirement.marker:
return True
else:
marker_str = str(requirement.marker)
if "python_version" or "python_full_version" in marker_str:
python_version_str = "".join(
char
for char in marker_str
if char.isdigit() or char in (".", "<", ">", "=", ",")
)
return check_python_version(python_version, python_version_str)
return True
def get_min_version_from_toml(
toml_path: str,
versions_for: str,
python_version: str,
*,
include: Optional[list] = None,
):
# Parse the TOML file
with open(toml_path, "rb") as file:
toml_data = tomllib.load(file)
dependencies = defaultdict(list)
for dep in toml_data["project"]["dependencies"]:
requirement = Requirement(dep)
dependencies[requirement.name].append(requirement)
# Initialize a dictionary to store the minimum versions
min_versions = {}
# Iterate over the libs in MIN_VERSION_LIBS
for lib in set(MIN_VERSION_LIBS + (include or [])):
if versions_for == "pull_request" and lib in SKIP_IF_PULL_REQUEST:
# some libs only get checked on release because of simultaneous
# changes in multiple libs
continue
# Check if the lib is present in the dependencies
if lib in dependencies:
if include and lib not in include:
continue
requirements = dependencies[lib]
for requirement in requirements:
if _check_python_version_from_requirement(requirement, python_version):
version_string = str(requirement.specifier)
break
# Use parse_version to get the minimum supported version from version_string
min_version = get_minimum_version(lib, version_string)
# Store the minimum version in the min_versions dictionary
min_versions[lib] = min_version
return min_versions
def check_python_version(version_string, constraint_string):
"""Check if the given Python version matches the given constraints.
Args:
version_string: A string representing the Python version (e.g. "3.8.5").
constraint_string: A string representing the package's Python version
constraints (e.g. ">=3.6, <4.0").
Returns:
True if the version matches the constraints
"""
# Rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
constraint_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", constraint_string)
# Rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
for y in range(1, 10):
constraint_string = re.sub(
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}.0", constraint_string
)
# Rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
    for x in range(1, 10):
        constraint_string = re.sub(
            rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x + 1}.0.0", constraint_string
        )
try:
version = Version(version_string)
constraints = SpecifierSet(constraint_string)
return version in constraints
except Exception as e:
print(f"Error: {e}")
return False
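# Example: check_python_version("3.9.5", ">=3.8.1,<4.0") -> True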
if __name__ == "__main__":
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
versions_for = sys.argv[2]
python_version = sys.argv[3]
assert versions_for in ["release", "pull_request"]
# Call the function to get the minimum versions
min_versions = get_min_version_from_toml(toml_file, versions_for, python_version)
print(" ".join([f"{lib}=={version}" for lib, version in min_versions.items()]))

View File

@@ -1,120 +0,0 @@
#!/usr/bin/env python
"""Sync libraries from various repositories into this monorepo.
Moves cloned partner packages into libs/partners structure.
"""
import os
import shutil
from pathlib import Path
from typing import Any, Dict
import yaml
def load_packages_yaml() -> Dict[str, Any]:
"""Load and parse packages.yml."""
with open("langchain/libs/packages.yml", "r") as f:
return yaml.safe_load(f)
def get_target_dir(package_name: str) -> Path:
"""Get the target directory for a given package."""
package_name_short = package_name.replace("langchain-", "")
base_path = Path("langchain/libs")
if package_name_short == "experimental":
return base_path / "experimental"
if package_name_short == "community":
return base_path / "community"
return base_path / "partners" / package_name_short
def clean_target_directories(packages: list) -> None:
"""Remove old directories that will be replaced."""
for package in packages:
target_dir = get_target_dir(package["name"])
if target_dir.exists():
print(f"Removing {target_dir}")
shutil.rmtree(target_dir)
def move_libraries(packages: list) -> None:
"""Move libraries from their source locations to the target directories."""
for package in packages:
repo_name = package["repo"].split("/")[1]
source_path = package["path"]
target_dir = get_target_dir(package["name"])
# Handle root path case
if source_path == ".":
source_dir = repo_name
else:
source_dir = f"{repo_name}/{source_path}"
print(f"Moving {source_dir} to {target_dir}")
# Ensure target directory exists
os.makedirs(os.path.dirname(target_dir), exist_ok=True)
try:
# Move the directory
shutil.move(source_dir, target_dir)
except Exception as e:
print(f"Error moving {source_dir} to {target_dir}: {e}")
def main():
"""Orchestrate the library sync process."""
try:
# Load packages configuration
package_yaml = load_packages_yaml()
# Clean/empty target directories in preparation for moving new ones
#
# Only for packages in the langchain-ai org or explicitly included via
# include_in_api_ref, excluding 'langchain' itself and 'langchain-ai21'
clean_target_directories(
[
p
for p in package_yaml["packages"]
if (
p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref")
)
and p["repo"] != "langchain-ai/langchain"
and p["name"]
!= "langchain-ai21" # Skip AI21 due to dependency conflicts
]
)
# Move cloned libraries to their new locations, only for packages in the
# langchain-ai org or explicitly included via include_in_api_ref,
# excluding 'langchain' itself and 'langchain-ai21'
move_libraries(
[
p
for p in package_yaml["packages"]
if not p.get("disabled", False)
and (
p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref")
)
and p["repo"] != "langchain-ai/langchain"
and p["name"]
!= "langchain-ai21" # Skip AI21 due to dependency conflicts
]
)
# Delete partner packages without a pyproject.toml
for partner in Path("langchain/libs/partners").iterdir():
if partner.is_dir() and not (partner / "pyproject.toml").exists():
print(f"Removing {partner} as it does not have a pyproject.toml")
shutil.rmtree(partner)
print("Library sync completed successfully!")
except Exception as e:
print(f"Error during library sync: {e}")
raise
if __name__ == "__main__":
main()

View File

@@ -1,756 +0,0 @@
#!/usr/bin/env python3
#
# git-restore-mtime - Change mtime of files based on commit date of last change
#
# Copyright (C) 2012 Rodrigo Silva (MestreLion) <linux@rodrigosilva.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. See <http://www.gnu.org/licenses/gpl.html>
#
# Source: https://github.com/MestreLion/git-tools
# Version: July 13, 2023 (commit hash 5f832e72453e035fccae9d63a5056918d64476a2)
"""
Change the modification time (mtime) of files in work tree, based on the
date of the most recent commit that modified the file, including renames.
Ignores untracked files and uncommitted deletions, additions and renames, and
by default modifications too.
---
Useful prior to generating release tarballs, so each file is archived with a
date that is similar to the date when the file was actually last modified,
assuming the actual modification date and its commit date are close.
"""
# TODO:
# - Add -z on git whatchanged/ls-files, so we don't deal with filename decoding
# - When Python is bumped to 3.7, use text instead of universal_newlines on subprocess
# - Update "Statistics for some large projects" with modern hardware and repositories.
# - Create a README.md for git-restore-mtime alone. It deserves extensive documentation
# - Move Statistics there
# - See git-extras as a good example on project structure and documentation
# FIXME:
# - When current dir is outside the worktree, e.g. using --work-tree, `git ls-files`
# assume any relative pathspecs are to worktree root, not the current dir. As such,
# relative pathspecs may not work.
# - Renames are tricky:
# - R100 should not change mtime, but original name is not on filelist. Should
# track renames until a valid (A, M) mtime found and then set on current name.
# - Should set mtime for both current and original directories.
# - Check mode changes with unchanged blobs?
# - Check file (A, D) for the directory mtime is not sufficient:
# - Renames also change dir mtime, unless rename was on a parent dir
# - If most recent change of all files in a dir was a Modification (M),
# dir might not be touched at all.
# - Dirs containing only subdirectories but no direct files will also
# not be touched. They're files' [grand]parent dir, but never their dirname().
# - Some solutions:
# - After files done, perform some dir processing for missing dirs, finding latest
# file (A, D, R)
# - Simple approach: dir mtime is the most recent child (dir or file) mtime
# - Use a virtual concept of "created at most at" to fill missing info, bubble up
# to parents and grandparents
# - When handling [grand]parent dirs, stay inside <pathspec>
# - Better handling of merge commits. `-m` is plain *wrong*. `-c/--cc` is perfect, but
# painfully slow. First pass without merge commits is not accurate. Maybe add a new
# `--accurate` mode for `--cc`?
if __name__ != "__main__":
raise ImportError("{} should not be used as a module.".format(__name__))
import argparse
import datetime
import logging
import os.path
import shlex
import signal
import subprocess
import sys
import time
__version__ = "2022.12+dev"
# Update symlinks only if the platform supports not following them
UPDATE_SYMLINKS = bool(os.utime in getattr(os, "supports_follow_symlinks", []))
# Call os.path.normpath() only if not in a POSIX platform (Windows)
NORMALIZE_PATHS = os.path.sep != "/"
# How many files to process in each batch when re-trying merge commits
STEPMISSING = 100
# (Extra) keywords for the os.utime() call performed by touch()
UTIME_KWS = {} if not UPDATE_SYMLINKS else {"follow_symlinks": False}
# Command-line interface ######################################################
def parse_args():
parser = argparse.ArgumentParser(description=__doc__.split("\n---")[0])
group = parser.add_mutually_exclusive_group()
group.add_argument(
"--quiet",
"-q",
dest="loglevel",
action="store_const",
const=logging.WARNING,
default=logging.INFO,
help="Suppress informative messages and summary statistics.",
)
group.add_argument(
"--verbose",
"-v",
action="count",
help="""
Print additional information for each processed file.
Specify twice to further increase verbosity.
""",
)
parser.add_argument(
"--cwd",
"-C",
metavar="DIRECTORY",
help="""
Run as if %(prog)s was started in directory %(metavar)s.
This affects how --work-tree, --git-dir and PATHSPEC arguments are handled.
See 'man 1 git' or 'git --help' for more information.
""",
)
parser.add_argument(
"--git-dir",
dest="gitdir",
metavar="GITDIR",
help="""
Path to the git repository, by default auto-discovered by searching
the current directory and its parents for a .git/ subdirectory.
""",
)
parser.add_argument(
"--work-tree",
dest="workdir",
metavar="WORKTREE",
help="""
Path to the work tree root, by default the parent of GITDIR if it's
automatically discovered, or the current directory if GITDIR is set.
""",
)
parser.add_argument(
"--force",
"-f",
default=False,
action="store_true",
help="""
Force updating files with uncommitted modifications.
Untracked files and uncommitted deletions, renames and additions are
always ignored.
""",
)
parser.add_argument(
"--merge",
"-m",
default=False,
action="store_true",
help="""
Include merge commits.
Leads to more recent times and more files per commit, thus with the same
time, which may or may not be what you want.
Including merge commits may lead to fewer commits being evaluated as files
are found sooner, which can improve performance, sometimes substantially.
But as merge commits are usually huge, processing them may also take longer.
By default, merge commits are only used for files missing from regular commits.
""",
)
parser.add_argument(
"--first-parent",
default=False,
action="store_true",
help="""
Consider only the first parent, the "main branch", when evaluating merge commits.
Only effective when merge commits are processed, either when --merge is
used or when finding missing files after the first regular log search.
See --skip-missing.
""",
)
parser.add_argument(
"--skip-missing",
"-s",
dest="missing",
default=True,
action="store_false",
help="""
Do not try to find missing files.
If merge commits were not evaluated with --merge and some files were
not found in regular commits, by default %(prog)s searches for these
files again in the merge commits.
This option disables this retry, so files found only in merge commits
will not have their timestamp updated.
""",
)
parser.add_argument(
"--no-directories",
"-D",
dest="dirs",
default=True,
action="store_false",
help="""
Do not update directory timestamps.
By default, use the time of its most recently created, renamed or deleted file.
Note that just modifying a file will NOT update its directory time.
""",
)
parser.add_argument(
"--test",
"-t",
default=False,
action="store_true",
help="Test run: do not actually update any file timestamp.",
)
parser.add_argument(
"--commit-time",
"-c",
dest="commit_time",
default=False,
action="store_true",
help="Use commit time instead of author time.",
)
parser.add_argument(
"--oldest-time",
"-o",
dest="reverse_order",
default=False,
action="store_true",
help="""
Update times based on the oldest, instead of the most recent commit of a file.
This reverses the order in which the git log is processed to emulate a
file "creation" date. Note this will be inaccurate for files deleted and
re-created at later dates.
""",
)
parser.add_argument(
"--skip-older-than",
metavar="SECONDS",
type=int,
help="""
Ignore files that are currently older than %(metavar)s.
Useful in workflows that assume such files already have a correct timestamp,
as it may improve performance by processing fewer files.
""",
)
parser.add_argument(
"--skip-older-than-commit",
"-N",
default=False,
action="store_true",
help="""
Ignore files older than the timestamp it would be updated to.
Such files may be considered "original", likely in the author's repository.
""",
)
parser.add_argument(
"--unique-times",
default=False,
action="store_true",
help="""
Set the microseconds to a unique value per commit.
Allows telling apart changes that would otherwise have identical timestamps,
as git's time accuracy is in seconds.
""",
)
parser.add_argument(
"pathspec",
nargs="*",
metavar="PATHSPEC",
help="""
Only modify paths matching %(metavar)s, relative to current directory.
By default, update all but untracked files and submodules.
""",
)
parser.add_argument(
"--version",
"-V",
action="version",
version="%(prog)s version {version}".format(version=get_version()),
)
args_ = parser.parse_args()
if args_.verbose:
args_.loglevel = max(logging.TRACE, logging.DEBUG // args_.verbose)
args_.debug = args_.loglevel <= logging.DEBUG
return args_
def get_version(version=__version__):
if not version.endswith("+dev"):
return version
try:
cwd = os.path.dirname(os.path.realpath(__file__))
return Git(cwd=cwd, errors=False).describe().lstrip("v")
except Git.Error:
return "-".join((version, "unknown"))
# Helper functions ############################################################
def setup_logging():
"""Add TRACE logging level and corresponding method, return the root logger"""
logging.TRACE = TRACE = logging.DEBUG // 2
logging.Logger.trace = lambda _, m, *a, **k: _.log(TRACE, m, *a, **k)
return logging.getLogger()
def normalize(path):
r"""Normalize paths from git, handling non-ASCII characters.
Git stores paths as UTF-8 normalization form C.
If path contains non-ASCII or non-printable characters, git outputs the UTF-8
in octal-escaped notation, escaping double-quotes and backslashes, and then
double-quoting the whole path.
https://git-scm.com/docs/git-config#Documentation/git-config.txt-corequotePath
This function reverts this encoding, so:
normalize(r'"Back\\slash_double\"quote_a\303\247a\303\255"') =>
r'Back\slash_double"quote_açaí')
Paths with invalid UTF-8 encoding, such as single 0x80-0xFF bytes (e.g, from
Latin1/Windows-1251 encoding) are decoded using surrogate escape, the same
method used by Python for filesystem paths. So 0xE6 ("æ" in Latin1, r'\\346'
from Git) is decoded as "\udce6". See https://peps.python.org/pep-0383/ and
https://vstinner.github.io/painful-history-python-filesystem-encoding.html
Also see notes on `windows/non-ascii-paths.txt` about path encodings on
non-UTF-8 platforms and filesystems.
"""
if path and path[0] == '"':
# Python 2: path = path[1:-1].decode("string-escape")
# Python 3: https://stackoverflow.com/a/46650050/624066
path = (
path[1:-1] # Remove enclosing double quotes
.encode("latin1") # Convert to bytes, required by 'unicode-escape'
.decode("unicode-escape") # Perform the actual octal-escaping decode
.encode("latin1") # 1:1 mapping to bytes, UTF-8 encoded
.decode("utf8", "surrogateescape")
) # Decode from UTF-8
if NORMALIZE_PATHS:
# Make sure the slash matches the OS; for Windows we need a backslash
path = os.path.normpath(path)
return path
def dummy(*_args, **_kwargs):
"""No-op function used in dry-run tests"""
def touch(path, mtime):
"""The actual mtime update"""
os.utime(path, (mtime, mtime), **UTIME_KWS)
def touch_ns(path, mtime_ns):
"""The actual mtime update, using nanoseconds for unique timestamps"""
os.utime(path, None, ns=(mtime_ns, mtime_ns), **UTIME_KWS)
def isodate(secs: int):
# time.localtime() accepts floats, but discards fractional part
return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(secs))
def isodate_ns(ns: int):
# for integers fromtimestamp() is equivalent and ~16% slower than isodate()
return datetime.datetime.fromtimestamp(ns / 1000000000).isoformat(sep=" ")
def get_mtime_ns(secs: int, idx: int):
# Time resolution for filesystems and functions:
# ext-4 and other POSIX filesystems: 1 nanosecond
# NTFS (Windows default): 100 nanoseconds
# datetime.datetime() (due to 64-bit float epoch): 1 microsecond
us = idx % 1000000 # 10**6
return 1000 * (1000000 * secs + us)
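# Example: get_mtime_ns(1_600_000_000, 42) -> 1_600_000_000_000_042_000, i.e. the
# commit index becomes the microseconds field so per-commit timestamps stay unique.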
def get_mtime_path(path):
return os.path.getmtime(path)
# Git class and parse_log(), the heart of the script ##########################
class Git:
def __init__(self, workdir=None, gitdir=None, cwd=None, errors=True):
self.gitcmd = ["git"]
self.errors = errors
self._proc = None
if workdir:
self.gitcmd.extend(("--work-tree", workdir))
if gitdir:
self.gitcmd.extend(("--git-dir", gitdir))
if cwd:
self.gitcmd.extend(("-C", cwd))
self.workdir, self.gitdir = self._get_repo_dirs()
def ls_files(self, paths: list = None):
return (normalize(_) for _ in self._run("ls-files --full-name", paths))
def ls_dirty(self, force=False):
return (
normalize(_[3:].split(" -> ", 1)[-1])
for _ in self._run("status --porcelain")
if _[:2] != "??" and (not force or (_[0] in ("R", "A") or _[1] == "D"))
)
def log(
self,
merge=False,
first_parent=False,
commit_time=False,
reverse_order=False,
paths: list = None,
):
cmd = "whatchanged --pretty={}".format("%ct" if commit_time else "%at")
if merge:
cmd += " -m"
if first_parent:
cmd += " --first-parent"
if reverse_order:
cmd += " --reverse"
return self._run(cmd, paths)
def describe(self):
return self._run("describe --tags", check=True)[0]
def terminate(self):
if self._proc is None:
return
try:
self._proc.terminate()
except OSError:
# Avoid errors on OpenBSD
pass
def _get_repo_dirs(self):
return (
os.path.normpath(_)
for _ in self._run(
"rev-parse --show-toplevel --absolute-git-dir", check=True
)
)
def _run(self, cmdstr: str, paths: list = None, output=True, check=False):
cmdlist = self.gitcmd + shlex.split(cmdstr)
if paths:
cmdlist.append("--")
cmdlist.extend(paths)
popen_args = dict(universal_newlines=True, encoding="utf8")
if not self.errors:
popen_args["stderr"] = subprocess.DEVNULL
log.trace("Executing: %s", " ".join(cmdlist))
if not output:
return subprocess.call(cmdlist, **popen_args)
if check:
try:
stdout: str = subprocess.check_output(cmdlist, **popen_args)
return stdout.splitlines()
except subprocess.CalledProcessError as e:
raise self.Error(e.returncode, e.cmd, e.output, e.stderr)
self._proc = subprocess.Popen(cmdlist, stdout=subprocess.PIPE, **popen_args)
return (_.rstrip() for _ in self._proc.stdout)
def __del__(self):
self.terminate()
class Error(subprocess.CalledProcessError):
"""Error from git executable"""
def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
mtime = 0
datestr = isodate(0)
for line in git.log(
merge, args.first_parent, args.commit_time, args.reverse_order, filterlist
):
stats["loglines"] += 1
# Blank line between Date and list of files
if not line:
continue
# Date line
if line[0] != ":": # Faster than `not line.startswith(':')`
stats["commits"] += 1
mtime = int(line)
if args.unique_times:
mtime = get_mtime_ns(mtime, stats["commits"])
if args.debug:
datestr = isodate(mtime)
continue
# File line: three tokens if it describes a renaming, otherwise two
tokens = line.split("\t")
# Possible statuses:
# M: Modified (content changed)
# A: Added (created)
# D: Deleted
# T: Type changed: to/from regular file, symlinks, submodules
# R099: Renamed (moved), with % of unchanged content. 100 = pure rename
# Not possible in log: C=Copied, U=Unmerged, X=Unknown, B=pairing Broken
status = tokens[0].split(" ")[-1]
file = tokens[-1]
# Handles non-ASCII chars and OS path separator
file = normalize(file)
def do_file():
if args.skip_older_than_commit and get_mtime_path(file) <= mtime:
stats["skip"] += 1
return
if args.debug:
log.debug(
"%d\t%d\t%d\t%s\t%s",
stats["loglines"],
stats["commits"],
stats["files"],
datestr,
file,
)
try:
touch(os.path.join(git.workdir, file), mtime)
stats["touches"] += 1
except Exception as e:
log.error("ERROR: %s: %s", e, file)
stats["errors"] += 1
def do_dir():
if args.debug:
log.debug(
"%d\t%d\t-\t%s\t%s",
stats["loglines"],
stats["commits"],
datestr,
"{}/".format(dirname or "."),
)
try:
touch(os.path.join(git.workdir, dirname), mtime)
stats["dirtouches"] += 1
except Exception as e:
log.error("ERROR: %s: %s", e, dirname)
stats["direrrors"] += 1
if file in filelist:
stats["files"] -= 1
filelist.remove(file)
do_file()
if args.dirs and status in ("A", "D"):
dirname = os.path.dirname(file)
if dirname in dirlist:
dirlist.remove(dirname)
do_dir()
# All files done?
if not stats["files"]:
git.terminate()
return
# Main Logic ##################################################################
def main():
start = time.time() # yes, Wall time. CPU time is not realistic for users.
stats = {
_: 0
for _ in (
"loglines",
"commits",
"touches",
"skip",
"errors",
"dirtouches",
"direrrors",
)
}
logging.basicConfig(level=args.loglevel, format="%(message)s")
log.trace("Arguments: %s", args)
# First things first: Where and Who are we?
if args.cwd:
log.debug("Changing directory: %s", args.cwd)
try:
os.chdir(args.cwd)
except OSError as e:
log.critical(e)
return e.errno
# Using both os.chdir() and `git -C` is redundant, but might prevent side effects
# `git -C` alone could be enough if we make sure that:
# - all paths, including args.pathspec, are processed by git: ls-files, rev-parse
# - touch() / os.utime() path argument is always prepended with git.workdir
try:
git = Git(workdir=args.workdir, gitdir=args.gitdir, cwd=args.cwd)
except Git.Error as e:
# Not in a git repository, and git already informed user on stderr. So we just...
return e.returncode
# Get the files managed by git and build file list to be processed
if UPDATE_SYMLINKS and not args.skip_older_than:
filelist = set(git.ls_files(args.pathspec))
else:
filelist = set()
for path in git.ls_files(args.pathspec):
fullpath = os.path.join(git.workdir, path)
# Symlink (to file, to dir or broken - git handles the same way)
if not UPDATE_SYMLINKS and os.path.islink(fullpath):
log.warning(
"WARNING: Skipping symlink, no OS support for updates: %s", path
)
continue
# skip files which are older than given threshold
if (
args.skip_older_than
and start - get_mtime_path(fullpath) > args.skip_older_than
):
continue
# Always add files relative to worktree root
filelist.add(path)
# If --force, silently ignore uncommitted deletions (not in the filesystem)
# and renames / additions (will not be found in log anyway)
if args.force:
filelist -= set(git.ls_dirty(force=True))
# Otherwise, ignore any dirty files
else:
dirty = set(git.ls_dirty())
if dirty:
log.warning(
"WARNING: Modified files in the working directory were ignored."
"\nTo include such files, commit your changes or use --force."
)
filelist -= dirty
# Build dir list to be processed
dirlist = set(os.path.dirname(_) for _ in filelist) if args.dirs else set()
stats["totalfiles"] = stats["files"] = len(filelist)
log.info("{0:,} files to be processed in work dir".format(stats["totalfiles"]))
if not filelist:
# Nothing to do. Exit silently and without errors, just like git does
return
# Process the log until all files are 'touched'
log.debug("Line #\tLog #\tF.Left\tModification Time\tFile Name")
parse_log(filelist, dirlist, stats, git, args.merge, args.pathspec)
# Missing files
if filelist:
# Try to find them in merge logs, if not done already
# (usually HUGE, thus MUCH slower!)
if args.missing and not args.merge:
filterlist = list(filelist)
missing = len(filterlist)
log.info(
"{0:,} files not found in log, trying merge commits".format(missing)
)
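# Re-query the log in chunks of STEPMISSING paths (a constant defined earlier
# in the script) to keep each git command line reasonably short.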
for i in range(0, missing, STEPMISSING):
parse_log(
filelist,
dirlist,
stats,
git,
merge=True,
filterlist=filterlist[i : i + STEPMISSING],
)
# Still missing some?
for file in filelist:
log.warning("WARNING: not found in the log: %s", file)
# Final statistics
# Suggestion: use git-log --before=mtime to brag about skipped log entries
def log_info(msg, *a, width=13):
ifmt = "{:%d,}" % (width,) # not using 'n' for consistency with ffmt
ffmt = "{:%d,.2f}" % (width,)
# %-formatting lacks a thousand separator, must pre-render with .format()
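# e.g. with width=13, "%d files" becomes "{:13,} files", so 1234567
# renders as "    1,234,567 files"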
log.info(msg.replace("%d", ifmt).replace("%f", ffmt).format(*a))
log_info(
"Statistics:\n%f seconds\n%d log lines processed\n%d commits evaluated",
time.time() - start,
stats["loglines"],
stats["commits"],
)
if args.dirs:
if stats["direrrors"]:
log_info("%d directory update errors", stats["direrrors"])
log_info("%d directories updated", stats["dirtouches"])
if stats["touches"] != stats["totalfiles"]:
log_info("%d files", stats["totalfiles"])
if stats["skip"]:
log_info("%d files skipped", stats["skip"])
if stats["files"]:
log_info("%d files missing", stats["files"])
if stats["errors"]:
log_info("%d file update errors", stats["errors"])
log_info("%d files updated", stats["touches"])
if args.test:
log.info("TEST RUN - No files modified!")
# Keep only essential, global assignments here. Any other logic must be in main()
log = setup_logging()
args = parse_args()
# Set the actual touch() and other functions based on command-line arguments
if args.unique_times:
touch = touch_ns
isodate = isodate_ns
# Make sure this is always set last to ensure --test behaves as intended
if args.test:
touch = dummy
# UI done, it's showtime!
try:
sys.exit(main())
except KeyboardInterrupt:
log.info("\nAborting")
signal.signal(signal.SIGINT, signal.SIG_DFL)
os.kill(os.getpid(), signal.SIGINT)


@@ -1,65 +0,0 @@
# Validates that a package's integration tests compile without syntax or import errors.
#
# (If an integration test fails to compile, it won't run.)
#
# Called as part of check_diffs.yml workflow
#
# Runs pytest with compile marker to check syntax/imports.
name: '🔗 Compile Integration Tests'
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
python-version:
required: true
type: string
description: "Python version to use"
permissions:
contents: read
env:
UV_FROZEN: "true"
jobs:
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: 'Python ${{ inputs.python-version }}'
steps:
- uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: compile-integration-tests-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Integration Dependencies'
shell: bash
run: uv sync --group test --group test_integration
- name: '🔗 Check Integration Tests Compile'
shell: bash
run: uv run pytest -m compile tests/integration_tests
- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -1,103 +0,0 @@
# Runs `make integration_tests` on the specified package.
#
# Manually triggered via workflow_dispatch for testing with real APIs.
#
# Installs integration test dependencies and executes full test suite.
name: '🚀 Integration Tests'
run-name: 'Test ${{ inputs.working-directory }} on Python ${{ inputs.python-version }}'
on:
workflow_dispatch:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
python-version:
required: true
type: string
description: "Python version to use"
default: "3.11"
permissions:
contents: read
env:
UV_FROZEN: "true"
jobs:
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: 'Python ${{ inputs.python-version }}'
steps:
- uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: integration-tests-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Integration Dependencies'
shell: bash
run: uv sync --group test --group test_integration
- name: '🚀 Run Integration Tests'
shell: bash
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: |
make integration_tests
- name: 'Ensure testing did not create/modify files'
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -1,11 +1,4 @@
# Runs linting.
#
# Uses the package's Makefile to run the checks, specifically the
# `lint_package` and `lint_tests` targets.
#
# Called as part of check_diffs.yml workflow.
name: '🧹 Linting'
name: lint
on:
workflow_call:
@@ -14,62 +7,40 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
python-version:
required: true
type: string
description: "Python version to use"
permissions:
contents: read
env:
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
# This env var allows us to get inline annotations when ruff has complaints.
RUFF_OUTPUT_FORMAT: github
UV_FROZEN: "true"
POETRY_VERSION: "1.4.2"
jobs:
# Linting job - runs quality checks on package and test code
build:
name: 'Python ${{ inputs.python-version }}'
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
- uses: actions/checkout@v3
- name: Install poetry
run: |
pipx install poetry==$POETRY_VERSION
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ inputs.python-version }}
cache-suffix: lint-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Lint & Typing Dependencies'
working-directory: ${{ inputs.working-directory }}
python-version: ${{ matrix.python-version }}
cache: poetry
- name: Install dependencies
run: |
uv sync --group lint --group typing
- name: '🔍 Analyze Package Code with Linters'
working-directory: ${{ inputs.working-directory }}
poetry install
- name: Install langchain editable
if: ${{ inputs.working-directory != 'langchain' }}
run: |
make lint_package
- name: '📦 Install Test Dependencies (non-partners)'
# (For directories NOT starting with libs/partners/)
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
pip install -e ../langchain
- name: Analysing the code with our lint
run: |
uv sync --inexact --group test
- name: '📦 Install Test Dependencies'
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --inexact --group test --group test_integration
- name: '🔍 Analyze Test Code with Linters'
working-directory: ${{ inputs.working-directory }}
run: |
make lint_tests
make lint


@@ -1,11 +1,5 @@
# Builds and publishes LangChain packages to PyPI.
#
# Manually triggered, though can be used as a reusable workflow (workflow_call).
#
# Handles version bumping, building, and publishing to PyPI with authentication.
name: release
name: '🚀 Package Release'
run-name: 'Release ${{ inputs.working-directory }} ${{ inputs.release-version }}'
on:
workflow_call:
inputs:
@@ -13,531 +7,46 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
workflow_dispatch:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
default: 'libs/langchain'
release-version:
required: true
type: string
default: '0.1.0'
description: "New version of package being released"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!) - Only use for hotfixes"
env:
PYTHON_VERSION: "3.11"
UV_FROZEN: "true"
UV_NO_SYNC: "true"
POETRY_VERSION: "1.4.2"
jobs:
# Build the distribution package and extract version info
# Runs in isolated environment with minimal permissions for security
build:
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
environment: Scheduled testing
if_release:
if: |
${{ github.event.pull_request.merged == true }}
&& ${{ contains(github.event.pull_request.labels.*.name, 'release') }}
runs-on: ubuntu-latest
outputs:
pkg-name: ${{ steps.check-version.outputs.pkg-name }}
version: ${{ steps.check-version.outputs.version }}
defaults:
run:
working-directory: ${{ inputs.working-directory }}
steps:
- uses: actions/checkout@v5
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
- uses: actions/checkout@v3
- name: Install poetry
run: pipx install poetry==$POETRY_VERSION
- name: Set up Python 3.10
uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
# We want to keep this build stage *separate* from the release stage,
# so that there's no sharing of permissions between them.
# (Release stage has trusted publishing and GitHub repo contents write access.)
#
# Otherwise, a malicious `build` step (e.g. via a compromised dependency)
# could get access to our GitHub or PyPI credentials.
#
# Per the trusted publishing GitHub Action:
# > It is strongly advised to separate jobs for building [...]
# > from the publish job.
# https://github.com/pypa/gh-action-pypi-publish#non-goals
python-version: "3.10"
cache: "poetry"
- name: Build project for distribution
run: uv build
working-directory: ${{ inputs.working-directory }}
- name: Upload build
uses: actions/upload-artifact@v4
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Check version
run: poetry build
- name: Check Version
id: check-version
shell: python
working-directory: ${{ inputs.working-directory }}
run: |
import os
import tomllib
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
pkg_name = data["project"]["name"]
version = data["project"]["version"]
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
f.write(f"pkg-name={pkg_name}\n")
f.write(f"version={version}\n")
release-notes:
needs:
- build
runs-on: ubuntu-latest
outputs:
release-body: ${{ steps.generate-release-body.outputs.release-body }}
steps:
- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain
path: langchain
sparse-checkout: | # this only grabs files for relevant dir
${{ inputs.working-directory }}
ref: ${{ github.ref }} # this scopes to just ref'd branch
fetch-depth: 0 # this fetches entire commit history
- name: Check tags
id: check-tags
shell: bash
working-directory: langchain/${{ inputs.working-directory }}
env:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
# Handle regular versions and pre-release versions differently
if [[ "$VERSION" == *"-"* ]]; then
# This is a pre-release version (contains a hyphen)
# Extract the base version without the pre-release suffix
BASE_VERSION=${VERSION%%-*}
# Look for the latest release of the same base version
REGEX="^$PKG_NAME==$BASE_VERSION\$"
PREV_TAG=$(git tag --sort=-creatordate | (grep -P "$REGEX" || true) | head -1)
# If no exact base version match, look for the latest release of any kind
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
PREV_TAG=$(git tag --sort=-creatordate | (grep -P "$REGEX" || true) | head -1)
fi
else
# Regular version handling
PREV_TAG="$PKG_NAME==${VERSION%.*}.$(( ${VERSION##*.} - 1 ))"; [[ "${VERSION##*.}" -eq 0 ]] && PREV_TAG=""
# backup case if releasing e.g. 0.3.0, looks up last release
# note if last release (chronologically) was e.g. 0.1.47 it will get
# that instead of the last 0.2 release
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | (grep -P $REGEX || true) | head -1)
fi
fi
# if PREV_TAG is empty, let it be empty
if [ -z "$PREV_TAG" ]; then
echo "No previous tag found - first release"
else
# confirm prev-tag actually exists in git repo with git tag
GIT_TAG_RESULT=$(git tag -l "$PREV_TAG")
if [ -z "$GIT_TAG_RESULT" ]; then
echo "Previous tag $PREV_TAG not found in git repo"
exit 1
fi
fi
TAG="${PKG_NAME}==${VERSION}"
if [ "$TAG" == "$PREV_TAG" ]; then
echo "No new version to release"
exit 1
fi
echo tag="$TAG" >> $GITHUB_OUTPUT
echo prev-tag="$PREV_TAG" >> $GITHUB_OUTPUT
- name: Generate release body
id: generate-release-body
working-directory: langchain
env:
WORKING_DIR: ${{ inputs.working-directory }}
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
TAG: ${{ steps.check-tags.outputs.tag }}
PREV_TAG: ${{ steps.check-tags.outputs.prev-tag }}
run: |
PREAMBLE="Changes since $PREV_TAG"
# if PREV_TAG is empty, then we are releasing the first version
if [ -z "$PREV_TAG" ]; then
PREAMBLE="Initial release"
PREV_TAG=$(git rev-list --max-parents=0 HEAD)
fi
{
echo 'release-body<<EOF'
echo $PREAMBLE
echo
git log --format="%s" "$PREV_TAG"..HEAD -- $WORKING_DIR
echo EOF
} >> "$GITHUB_OUTPUT"
test-pypi-publish:
needs:
- build
- release-notes
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
#
# Trusted publishing has to also be configured on PyPI for each package:
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
id-token: write
steps:
- uses: actions/checkout@v5
- uses: actions/download-artifact@v5
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Publish to test PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
repository-url: https://test.pypi.org/legacy/
# We overwrite any existing distributions with the same name and version.
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false
pre-release-checks:
needs:
- build
- release-notes
- test-pypi-publish
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- uses: actions/checkout@v5
# We explicitly *don't* set up caching here. This ensures our tests are
# maximally sensitive to catching breakage.
#
# For example, here's a way that caching can cause a falsely-passing test:
# - Make the langchain package manifest no longer list a dependency package
# as a requirement. This means it won't be installed by `pip install`,
# and attempting to use it would cause a crash.
# - That dependency used to be required, so it may have been cached.
# When restoring the venv packages from cache, that dependency gets included.
# - Tests pass, because the dependency is present even though it wasn't specified.
# - The package is published, and it breaks on the missing dependency when
# used in the real world.
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
id: setup-python
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v5
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Import dist package
shell: bash
working-directory: ${{ inputs.working-directory }}
env:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
# Here we use:
# - The default regular PyPI index as the *primary* index, meaning
# that it takes priority (https://pypi.org/simple)
# - The test PyPI index as an extra index, so that any dependencies that
# are not found on test PyPI can be resolved and installed anyway.
# (https://test.pypi.org/simple). This will include the PKG_NAME==VERSION
# package because VERSION will not have been uploaded to regular PyPI yet.
# - attempt install again after 5 seconds if it fails because there is
# sometimes a delay in availability on test pypi
run: |
uv venv
VIRTUAL_ENV=.venv uv pip install dist/*.whl
# Replace all dashes in the package name with underscores,
# since that's how Python imports packages with dashes in the name.
# also remove _official suffix
IMPORT_NAME="$(echo "$PKG_NAME" | sed s/-/_/g | sed s/_official//g)"
uv run python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
- name: Import test dependencies
run: uv sync --group test
working-directory: ${{ inputs.working-directory }}
# Overwrite the local version of the package with the built version
- name: Import published package (again)
working-directory: ${{ inputs.working-directory }}
shell: bash
env:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
VIRTUAL_ENV=.venv uv pip install dist/*.whl
- name: Check for prerelease versions
# Block release if any dependencies allow prerelease versions
# (unless this is itself a prerelease version)
working-directory: ${{ inputs.working-directory }}
run: |
uv run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
- name: Run unit tests
run: make tests
working-directory: ${{ inputs.working-directory }}
- name: Get minimum versions
# Find the minimum published versions that satisfies the given constraints
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
VIRTUAL_ENV=.venv uv pip install packaging requests
python_version="$(uv run python --version | awk '{print $2}')"
min_versions="$(uv run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml release $python_version)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
- name: Run unit tests with minimum dependency versions
if: ${{ steps.min-version.outputs.min-versions != '' }}
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
VIRTUAL_ENV=.venv uv pip install --force-reinstall --editable .
VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
- name: Import integration test dependencies
run: uv sync --group test --group test_integration
working-directory: ${{ inputs.working-directory }}
- name: Run integration tests
# Uses the Makefile's `integration_tests` target for the specified package
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
# Test select published packages against new core
# Done when code changes are made to langchain-core
test-prior-published-packages-against-new-core:
# Installs the new core with old partners: Installs the new unreleased core
# alongside the previously published partner packages and runs integration tests
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
runs-on: ubuntu-latest
strategy:
matrix:
partner: [openai, anthropic]
fail-fast: false # Continue testing other partners if one fails
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
steps:
- uses: actions/checkout@v5
# We implement this conditional as Github Actions does not have good support
# for conditionally needing steps. https://github.com/actions/runner/issues/491
# TODO: this seems to be resolved upstream, so we can probably remove this workaround
- name: Check if libs/core
run: |
if [ "${{ startsWith(inputs.working-directory, 'libs/core') }}" != "true" ]; then
echo "Not in libs/core. Exiting successfully."
exit 0
fi
- name: Set up Python + uv
if: startsWith(inputs.working-directory, 'libs/core')
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v5
if: startsWith(inputs.working-directory, 'libs/core')
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Test against ${{ matrix.partner }}
if: startsWith(inputs.working-directory, 'libs/core')
run: |
# Identify latest tag, excluding pre-releases
LATEST_PACKAGE_TAG="$(
git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
| awk '{print $2}' \
| sed 's|refs/tags/||' \
| grep -E '[0-9]+\.[0-9]+\.[0-9]+$' \
| sort -Vr \
| head -n 1
)"
echo "Latest package tag: $LATEST_PACKAGE_TAG"
# Shallow-fetch just that single tag
git fetch --depth=1 origin tag "$LATEST_PACKAGE_TAG"
# Checkout the latest package files
rm -rf $GITHUB_WORKSPACE/libs/partners/${{ matrix.partner }}/*
rm -rf $GITHUB_WORKSPACE/libs/standard-tests/*
cd $GITHUB_WORKSPACE/libs/
git checkout "$LATEST_PACKAGE_TAG" -- standard-tests/
git checkout "$LATEST_PACKAGE_TAG" -- partners/${{ matrix.partner }}/
cd partners/${{ matrix.partner }}
# Print as a sanity check
echo "Version number from pyproject.toml: "
cat pyproject.toml | grep "version = "
# Run tests
uv sync --group test --group test_integration
uv pip install ../../core/dist/*.whl
make integration_tests
publish:
# Publishes the package to PyPI
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- test-prior-published-packages-against-new-core
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
#
# Trusted publishing has to also be configured on PyPI for each package:
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
id-token: write
defaults:
run:
working-directory: ${{ inputs.working-directory }}
steps:
- uses: actions/checkout@v5
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v5
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Publish package distributions to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false
mark-release:
# Marks the GitHub release with the new version tag
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- publish
runs-on: ubuntu-latest
permissions:
# This permission is needed by `ncipollo/release-action` to
# create the GitHub release/tag
contents: write
defaults:
run:
working-directory: ${{ inputs.working-directory }}
steps:
- uses: actions/checkout@v5
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v5
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Create Tag
echo version=$(poetry version --short) >> $GITHUB_OUTPUT
- name: Create Release
uses: ncipollo/release-action@v1
if: ${{ inputs.working-directory == 'libs/langchain' }}
with:
artifacts: "dist/*"
token: ${{ secrets.GITHUB_TOKEN }}
generateReleaseNotes: false
tag: ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}
body: ${{ needs.release-notes.outputs.release-body }}
commit: ${{ github.sha }}
makeLatest: ${{ needs.build.outputs.pkg-name == 'langchain-core'}}
draft: false
generateReleaseNotes: true
tag: v${{ steps.check-version.outputs.version }}
commit: master
- name: Publish to PyPI
env:
POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_API_TOKEN }}
run: |
poetry publish


@@ -1,7 +1,4 @@
# Runs unit tests with both current and minimum supported dependency versions
# to ensure compatibility across the supported range.
name: '🧪 Unit Testing'
name: test
on:
workflow_call:
@@ -10,77 +7,55 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
python-version:
required: true
test_type:
type: string
description: "Python version to use"
permissions:
contents: read
description: "Test types to run"
default: '["core", "extended"]'
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
POETRY_VERSION: "1.4.2"
jobs:
# Main test job - runs unit tests with current deps, then retests with minimum versions
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: 'Python ${{ inputs.python-version }}'
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
test_type: ${{ fromJSON(inputs.test_type) }}
name: Python ${{ matrix.python-version }} ${{ matrix.test_type }}
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
id: setup-python
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-${{ inputs.working-directory }}
python-version: ${{ matrix.python-version }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test --dev
- name: '🧪 Run Core Unit Tests'
shell: bash
poetry-version: "1.4.2"
cache-key: ${{ matrix.test_type }}
install-command: |
if [ "${{ matrix.test_type }}" == "core" ]; then
echo "Running core tests, installing dependencies with poetry..."
poetry install
else
echo "Running extended tests, installing dependencies with poetry..."
poetry install -E extended_testing
fi
- name: Install langchain editable
if: ${{ inputs.working-directory != 'langchain' }}
run: |
make test
- name: '🔍 Calculate Minimum Dependency Versions'
working-directory: ${{ inputs.working-directory }}
id: min-version
pip install -e ../langchain
- name: Run ${{matrix.test_type}} tests
run: |
if [ "${{ matrix.test_type }}" == "core" ]; then
make test
else
make extended_tests
fi
shell: bash
run: |
VIRTUAL_ENV=.venv uv pip install packaging tomli requests
python_version="$(uv run python --version | awk '{print $2}')"
min_versions="$(uv run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml pull_request $python_version)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
- name: '🧪 Run Tests with Minimum Dependencies'
if: ${{ steps.min-version.outputs.min-versions != '' }}
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
VIRTUAL_ENV=.venv uv pip install $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -1,63 +0,0 @@
# Validates that all import statements in `.ipynb` notebooks are correct and functional.
#
# Called as part of check_diffs.yml.
#
# Installs test dependencies and LangChain packages in editable mode and
# runs check_imports.py.
name: '📑 Documentation Import Testing'
on:
workflow_call:
inputs:
python-version:
required: true
type: string
description: "Python version to use"
permissions:
contents: read
env:
UV_FROZEN: "true"
jobs:
build:
runs-on: ubuntu-latest
timeout-minutes: 20
name: '🔍 Check Doc Imports (Python ${{ inputs.python-version }})'
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-doc-imports-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test
- name: '📦 Install LangChain in Editable Mode'
run: |
VIRTUAL_ENV=.venv uv pip install langchain-experimental langchain-community -e libs/core libs/langchain
- name: '🔍 Validate Documentation Import Statements'
shell: bash
run: |
uv run python docs/scripts/check_imports.py
- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -1,71 +0,0 @@
# Facilitate unit testing against different Pydantic versions for a provided package.
name: '🐍 Pydantic Version Testing'
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
python-version:
required: false
type: string
description: "Python version to use"
default: "3.11"
pydantic-version:
required: true
type: string
description: "Pydantic version to test."
permissions:
contents: read
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: 'Pydantic ~=${{ inputs.pydantic-version }}'
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-pydantic-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test
- name: '🔄 Install Specific Pydantic Version'
shell: bash
run: VIRTUAL_ENV=.venv uv pip install pydantic~=${{ inputs.pydantic-version }}
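# '~=' is a compatible-release pin: e.g. pydantic~=2.7 installs the newest 2.x release >= 2.7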
- name: '🧪 Run Core Tests'
shell: bash
run: |
make test
- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -1,152 +0,0 @@
# Build the API reference documentation.
#
# Runs daily. Can also be triggered manually for immediate updates.
#
# Built HTML pushed to langchain-ai/langchain-api-docs-html.
#
# Looks for langchain-ai org repos in packages.yml and checks them out.
# Calls prep_api_docs_build.py.
name: '📚 API Docs'
run-name: 'Build & Deploy API Reference'
on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
env:
PYTHON_VERSION: "3.11"
jobs:
# Only runs on main repository to prevent unnecessary builds on forks
build:
if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- uses: actions/checkout@v5
with:
path: langchain
- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain-api-docs-html
path: langchain-api-docs-html
token: ${{ secrets.TOKEN_GITHUB_API_DOCS_HTML }}
- name: '📋 Extract Repository List with yq'
id: get-unsorted-repos
uses: mikefarah/yq@master
with:
cmd: |
# Extract repos from packages.yml that are in the langchain-ai org
# (excluding 'langchain' itself)
yq '
.packages[]
| select(
(
(.repo | test("^langchain-ai/"))
and
(.repo != "langchain-ai/langchain")
)
or
(.include_in_api_ref // false)
)
| .repo
' langchain/libs/packages.yml
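# yq emits one repo per line, e.g. "langchain-ai/langchain-google" (illustrative)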
- name: '📋 Parse YAML & Checkout Repositories'
env:
REPOS_UNSORTED: ${{ steps.get-unsorted-repos.outputs.result }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Get unique repositories
REPOS=$(echo "$REPOS_UNSORTED" | sort -u)
# Checkout each unique repository
for repo in $REPOS; do
# Validate repository format (allow any org with proper format)
if [[ ! "$repo" =~ ^[a-zA-Z0-9_.-]+/[a-zA-Z0-9_.-]+$ ]]; then
echo "Error: Invalid repository format: $repo"
exit 1
fi
REPO_NAME=$(echo $repo | cut -d'/' -f2)
# Additional validation for repo name
if [[ ! "$REPO_NAME" =~ ^[a-zA-Z0-9_.-]+$ ]]; then
echo "Error: Invalid repository name: $REPO_NAME"
exit 1
fi
echo "Checking out $repo to $REPO_NAME"
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
done
- name: '🐍 Setup Python ${{ env.PYTHON_VERSION }}'
uses: actions/setup-python@v6
id: setup-python
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: '📦 Install Initial Python Dependencies using uv'
working-directory: langchain
run: |
python -m pip install -U uv
python -m uv pip install --upgrade --no-cache-dir pip setuptools pyyaml
- name: '📦 Organize Library Directories'
# Places cloned partner packages into libs/partners structure
run: python langchain/.github/scripts/prep_api_docs_build.py
- name: '🧹 Clear Prior Build'
run: |
# Remove artifacts from prior docs build
rm -rf langchain-api-docs-html/api_reference_build/html
- name: '📦 Install Documentation Dependencies using uv'
working-directory: langchain
run: |
# Install all partner packages in editable mode with overrides
python -m uv pip install $(ls ./libs/partners | xargs -I {} echo "./libs/partners/{}") --overrides ./docs/vercel_overrides.txt
# Install core langchain and other main packages
python -m uv pip install libs/core libs/langchain libs/text-splitters libs/community libs/experimental libs/standard-tests
# Install Sphinx and related packages for building docs
python -m uv pip install -r docs/api_reference/requirements.txt
- name: '🔧 Configure Git Settings'
working-directory: langchain
run: |
git config --local user.email "actions@github.com"
git config --local user.name "Github Actions"
- name: '📚 Build API Documentation'
working-directory: langchain
run: |
# Generate the API reference RST files
python docs/api_reference/create_api_rst.py
# Build the HTML documentation using Sphinx
# -T: show full traceback on exception
# -E: don't use cached environment (force rebuild, ignore cached doctrees)
# -b html: build HTML docs (vs PDF, etc.)
# -d: path for the cached environment (parsed document trees / doctrees)
# - Separate from output dir for faster incremental builds
# -c: path to conf.py
# -j auto: parallel build using all available CPU cores
python -m sphinx -T -E -b html -d ../langchain-api-docs-html/_build/doctrees -c docs/api_reference docs/api_reference ../langchain-api-docs-html/api_reference_build/html -j auto
# Post-process the generated HTML
python docs/api_reference/scripts/custom_formatter.py ../langchain-api-docs-html/api_reference_build/html
# Default index page is blank so we copy in the actual home page.
cp ../langchain-api-docs-html/api_reference_build/html/{reference,index}.html
# Removes Sphinx's intermediate build artifacts after the build is complete.
rm -rf ../langchain-api-docs-html/_build/
# Commit and push changes to langchain-api-docs-html repo
- uses: EndBug/add-and-commit@v9
with:
cwd: langchain-api-docs-html
message: 'Update API docs build'


@@ -1,30 +0,0 @@
# Runs broken link checker in /docs on a daily schedule.
name: '🔗 Check Broken Links'
on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
permissions:
contents: read
jobs:
check-links:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- name: '🟢 Setup Node.js 18.x'
uses: actions/setup-node@v5
with:
node-version: 18.x
cache: "yarn"
cache-dependency-path: ./docs/yarn.lock
- name: '📦 Install Node Dependencies'
run: yarn install --immutable --mode=skip-build
working-directory: ./docs
- name: '🔍 Scan Documentation for Broken Links'
run: yarn check-broken-links
working-directory: ./docs


@@ -1,51 +0,0 @@
# Ensures version numbers in pyproject.toml and version.py stay in sync.
#
# (Prevents releases with mismatched version numbers)
name: '🔍 Check Version Equality'
on:
pull_request:
paths:
- 'libs/core/pyproject.toml'
- 'libs/core/langchain_core/version.py'
permissions:
contents: read
jobs:
check_version_equality:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- name: '✅ Verify pyproject.toml & version.py Match'
run: |
# Check core versions
CORE_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
CORE_VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
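# grep -P enables Perl regex; the (?<=...) lookbehind matches after the prefix and
# prints only the quoted value, e.g. "0.3.29" (illustrative)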
# Compare core versions
if [ "$CORE_PYPROJECT_VERSION" != "$CORE_VERSION_PY_VERSION" ]; then
echo "langchain-core versions in pyproject.toml and version.py do not match!"
echo "pyproject.toml version: $CORE_PYPROJECT_VERSION"
echo "version.py version: $CORE_VERSION_PY_VERSION"
exit 1
else
echo "Core versions match: $CORE_PYPROJECT_VERSION"
fi
# Check langchain_v1 versions
LANGCHAIN_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/langchain_v1/pyproject.toml)
LANGCHAIN_INIT_PY_VERSION=$(grep -Po '(?<=^__version__ = ")[^"]*' libs/langchain_v1/langchain/__init__.py)
# Compare langchain_v1 versions
if [ "$LANGCHAIN_PYPROJECT_VERSION" != "$LANGCHAIN_INIT_PY_VERSION" ]; then
echo "langchain_v1 versions in pyproject.toml and __init__.py do not match!"
echo "pyproject.toml version: $LANGCHAIN_PYPROJECT_VERSION"
echo "version.py version: $LANGCHAIN_INIT_PY_VERSION"
exit 1
else
echo "Langchain v1 versions match: $LANGCHAIN_PYPROJECT_VERSION"
fi


@@ -1,267 +0,0 @@
# Primary CI workflow.
#
# Only runs against packages that have changed files.
#
# Runs:
# - Linting (_lint.yml)
# - Unit Tests (_test.yml)
# - Pydantic compatibility tests (_test_pydantic.yml)
# - Documentation import tests (_test_doc_imports.yml)
# - Integration test compilation checks (_compile_integration_test.yml)
# - Extended test suites that require additional dependencies
# - Codspeed benchmarks (if not labeled 'codspeed-ignore')
#
# Reports status to GitHub checks and PR status.
name: '🔧 CI'
on:
push:
branches: [master]
pull_request:
merge_group:
# Optimizes CI performance by canceling redundant workflow runs
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to
# cancel pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
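# e.g. a PR run gets group "🔧 CI-refs/pull/1234/merge" (illustrative PR number),
# so a later push to the same PR cancels the earlier run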
permissions:
contents: read
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
# This job analyzes which files changed and creates a dynamic test matrix
# to only run tests/lints for the affected packages, improving CI efficiency
build:
name: 'Detect Changes & Set Matrix'
runs-on: ubuntu-latest
if: ${{ !contains(github.event.pull_request.labels.*.name, 'ci-ignore') }}
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
- name: '🐍 Setup Python 3.11'
uses: actions/setup-python@v6
with:
python-version: '3.11'
- name: '📂 Get Changed Files'
id: files
uses: Ana06/get-changed-files@v2.3.0
- name: '🔍 Analyze Changed Files & Generate Build Matrix'
id: set-matrix
run: |
python -m pip install packaging requests
python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
outputs:
lint: ${{ steps.set-matrix.outputs.lint }}
test: ${{ steps.set-matrix.outputs.test }}
extended-tests: ${{ steps.set-matrix.outputs.extended-tests }}
compile-integration-tests: ${{ steps.set-matrix.outputs.compile-integration-tests }}
dependencies: ${{ steps.set-matrix.outputs.dependencies }}
test-doc-imports: ${{ steps.set-matrix.outputs.test-doc-imports }}
test-pydantic: ${{ steps.set-matrix.outputs.test-pydantic }}
codspeed: ${{ steps.set-matrix.outputs.codspeed }}
# Run linting only on packages that have changed files
lint:
needs: [ build ]
if: ${{ needs.build.outputs.lint != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.lint) }}
fail-fast: false
uses: ./.github/workflows/_lint.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Run unit tests only on packages that have changed files
test:
needs: [ build ]
if: ${{ needs.build.outputs.test != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.test) }}
fail-fast: false
uses: ./.github/workflows/_test.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Test compatibility with different Pydantic versions for affected packages
test-pydantic:
needs: [ build ]
if: ${{ needs.build.outputs.test-pydantic != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.test-pydantic) }}
fail-fast: false
uses: ./.github/workflows/_test_pydantic.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
pydantic-version: ${{ matrix.job-configs.pydantic-version }}
secrets: inherit
test-doc-imports:
needs: [ build ]
if: ${{ needs.build.outputs.test-doc-imports != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.test-doc-imports) }}
fail-fast: false
uses: ./.github/workflows/_test_doc_imports.yml
with:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Verify integration tests compile without actually running them (faster feedback)
compile-integration-tests:
name: 'Compile Integration Tests'
needs: [ build ]
if: ${{ needs.build.outputs.compile-integration-tests != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.compile-integration-tests) }}
fail-fast: false
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Run extended test suites that require additional dependencies
extended-tests:
name: 'Extended Tests'
needs: [ build ]
if: ${{ needs.build.outputs.extended-tests != '[]' }}
strategy:
matrix:
# note different variable for extended test dirs
job-configs: ${{ fromJson(needs.build.outputs.extended-tests) }}
fail-fast: false
runs-on: ubuntu-latest
timeout-minutes: 20
defaults:
run:
working-directory: ${{ matrix.job-configs.working-directory }}
steps:
- uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ matrix.job-configs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ matrix.job-configs.python-version }}
cache-suffix: extended-tests-${{ matrix.job-configs.working-directory }}
working-directory: ${{ matrix.job-configs.working-directory }}
- name: '📦 Install Dependencies & Run Extended Tests'
shell: bash
run: |
echo "Running extended tests, installing dependencies with uv..."
uv venv
uv sync --group test
VIRTUAL_ENV=.venv uv pip install -r extended_testing_deps.txt
VIRTUAL_ENV=.venv make extended_tests
- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
# Run codspeed benchmarks only on packages that have changed files
codspeed:
name: '⚡ CodSpeed Benchmarks'
needs: [ build ]
if: ${{ needs.build.outputs.codspeed != '[]' && !contains(github.event.pull_request.labels.*.name, 'codspeed-ignore') }}
runs-on: ubuntu-latest
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.codspeed) }}
fail-fast: false
steps:
- uses: actions/checkout@v5
# We have to use 3.12 as 3.13 is not yet supported
- name: '📦 Install UV Package Manager'
uses: astral-sh/setup-uv@v6
with:
python-version: "3.12"
- uses: actions/setup-python@v6
with:
python-version: "3.12"
- name: '📦 Install Test Dependencies'
run: uv sync --group test
working-directory: ${{ matrix.job-configs.working-directory }}
- name: '⚡ Run Benchmarks: ${{ matrix.job-configs.working-directory }}'
uses: CodSpeedHQ/action@v4
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd ${{ matrix.job-configs.working-directory }}
if [ "${{ matrix.job-configs.working-directory }}" = "libs/core" ]; then
uv run --no-sync pytest ./tests/benchmarks --codspeed
else
uv run --no-sync pytest ./tests/ --codspeed
fi
mode: ${{ matrix.job-configs.working-directory == 'libs/core' && 'walltime' || 'instrumentation' }}
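# GitHub expressions have no ternary operator; `cond && a || b` emulates one
# (safe here because 'walltime' is always truthy)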
# Final status check - ensures all required jobs passed before allowing merge
ci_success:
name: '✅ CI Success'
needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic, codspeed]
if: |
always()
runs-on: ubuntu-latest
env:
JOBS_JSON: ${{ toJSON(needs) }}
RESULTS_JSON: ${{ toJSON(needs.*.result) }}
EXIT_CODE: ${{!contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && '0' || '1'}}
steps:
- name: '🎉 All Checks Passed'
run: |
echo $JOBS_JSON
echo $RESULTS_JSON
echo "Exiting with $EXIT_CODE"
exit $EXIT_CODE


@@ -1,41 +0,0 @@
# For integrations, we run check_templates.py to ensure that new docs use the correct
# templates based on their type. See the script for more details.
name: '📑 Integration Docs Lint'
on:
push:
branches: [master]
pull_request:
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/setup-python@v6
with:
python-version: '3.10'
- id: files
uses: Ana06/get-changed-files@v2.3.0
with:
filter: |
*.ipynb
*.md
*.mdx
- name: '🔍 Check New Documentation Templates'
run: |
python docs/scripts/check_templates.py ${{ steps.files.outputs.added }}

24 .github/workflows/codespell.yml vendored Normal file

@@ -0,0 +1,24 @@
---
name: Codespell
on:
push:
branches: [master]
pull_request:
branches: [master]
permissions:
contents: read
jobs:
codespell:
name: Check for spelling errors
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Codespell
uses: codespell-project/actions-codespell@v2
with:
skip: guide_imports.json

27 .github/workflows/langchain_ci.yml vendored Normal file

@@ -0,0 +1,27 @@
---
name: libs/langchain CI
on:
push:
branches: [ master ]
pull_request:
paths:
- '.github/workflows/_lint.yml'
- '.github/workflows/_test.yml'
- '.github/workflows/langchain_ci.yml'
- 'libs/langchain/**'
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
jobs:
lint:
uses:
./.github/workflows/_lint.yml
with:
working-directory: libs/langchain
secrets: inherit
test:
uses:
./.github/workflows/_test.yml
with:
working-directory: libs/langchain
secrets: inherit


@@ -0,0 +1,29 @@
---
name: libs/langchain-experimental CI
on:
push:
branches: [ master ]
pull_request:
paths:
- '.github/workflows/_lint.yml'
- '.github/workflows/_test.yml'
- '.github/workflows/langchain_experimental_ci.yml'
- 'libs/langchain/**'
- 'libs/experimental/**'
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
jobs:
lint:
uses:
./.github/workflows/_lint.yml
with:
working-directory: libs/experimental
secrets: inherit
test:
uses:
./.github/workflows/_test.yml
with:
working-directory: libs/experimental
test_type: '["core"]'
secrets: inherit


@@ -0,0 +1,20 @@
---
name: libs/langchain-experimental Release
on:
pull_request:
types:
- closed
branches:
- master
paths:
- 'libs/experimental/pyproject.toml'
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/experimental
secrets: inherit

20
.github/workflows/langchain_release.yml vendored Normal file
View File

@@ -0,0 +1,20 @@
---
name: libs/langchain Release
on:
pull_request:
types:
- closed
branches:
- master
paths:
- 'libs/langchain/pyproject.toml'
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/langchain
secrets: inherit

View File

@@ -1,30 +0,0 @@
# Updates the LangChain People data by fetching the latest info from the LangChain GitHub org.
# TODO: broken/not used
name: '👥 LangChain People'
run-name: 'Update People Data'
on:
schedule:
- cron: "0 14 1 * *" # Runs at 14:00 UTC on the 1st of every month (10AM EDT/7AM PDT)
push:
branches: [jacob/people]
workflow_dispatch:
jobs:
langchain-people:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- name: '📋 Dump GitHub Context'
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- uses: actions/checkout@v5
# Ref: https://github.com/actions/runner/issues/2033
- name: '🔧 Fix Git Safe Directory in Container'
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
- uses: ./.github/actions/people
with:
token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}

View File

@@ -1,28 +0,0 @@
# Label PRs based on changed files.
#
# See `.github/pr-file-labeler.yml` to see rules for each label/directory.
name: "🏷️ Pull Request Labeler"
on:
# Safe since we're not checking out or running the PR's code
# Never check out the PR's head in a pull_request_target job
pull_request_target:
types: [opened, synchronize, reopened, edited]
jobs:
labeler:
name: 'label'
permissions:
contents: read
pull-requests: write
issues: write
runs-on: ubuntu-latest
steps:
- name: Label Pull Request
uses: actions/labeler@v6
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/pr-file-labeler.yml
sync-labels: false

View File

@@ -1,28 +0,0 @@
# Label PRs based on their titles.
#
# See `.github/pr-title-labeler.yml` to see rules for each label/title pattern.
name: "🏷️ PR Title Labeler"
on:
# Safe since we're not checking out or running the PR's code
# Never check out the PR's head in a pull_request_target job
pull_request_target:
types: [opened, synchronize, reopened, edited]
jobs:
pr-title-labeler:
name: 'label'
permissions:
contents: read
pull-requests: write
issues: write
runs-on: ubuntu-latest
steps:
- name: Label PR based on title
# Archived repo; latest commit (v0.1.0)
uses: grafana/pr-labeler-action@f19222d3ef883d2ca5f04420fdfe8148003763f0
with:
token: ${{ secrets.GITHUB_TOKEN }}
configuration-path: .github/pr-title-labeler.yml

View File

@@ -1,105 +0,0 @@
# PR title linting.
#
# FORMAT (Conventional Commits 1.0.0):
#
# <type>[optional scope]: <description>
# [optional body]
# [optional footer(s)]
#
# Examples:
# feat(core): add multitenant support
# fix(cli): resolve flag parsing error
# docs: update API usage examples
# docs(openai): update API usage examples
#
# Allowed Types:
# * feat — a new feature (MINOR)
# * fix — a bug fix (PATCH)
# * docs — documentation only changes (either in /docs or code comments)
# * style — formatting, linting, etc.; no code change or typing refactors
# * refactor — code change that neither fixes a bug nor adds a feature
# * perf — code change that improves performance
# * test — adding tests or correcting existing
# * build — changes that affect the build system/external dependencies
# * ci — continuous integration/configuration changes
# * chore — other changes that don't modify source or test files
# * revert — reverts a previous commit
# * release — prepare a new release
#
# Allowed Scopes (optional):
# core, cli, langchain, langchain_v1, langchain_legacy, standard-tests,
# text-splitters, docs, anthropic, chroma, deepseek, exa, fireworks, groq,
# huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant,
# xai, infra
#
# Rules:
# 1. The 'Type' must start with a lowercase letter.
# 2. Breaking changes: append "!" after type/scope (e.g., feat!: drop x support)
#
# Enforces Conventional Commits format for pull request titles to maintain a clear and
# machine-readable change history.
name: '🏷️ PR Title Lint'
permissions:
pull-requests: read
on:
pull_request:
types: [opened, edited, synchronize]
jobs:
# Validates that PR title follows Conventional Commits 1.0.0 specification
lint-pr-title:
name: 'validate format'
runs-on: ubuntu-latest
steps:
- name: '✅ Validate Conventional Commits Format'
uses: amannn/action-semantic-pull-request@v6
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
types: |
feat
fix
docs
style
refactor
perf
test
build
ci
chore
revert
release
scopes: |
core
cli
langchain
langchain_v1
langchain_legacy
standard-tests
text-splitters
docs
anthropic
chroma
deepseek
exa
fireworks
groq
huggingface
mistralai
nomic
ollama
openai
perplexity
prompty
qdrant
xai
infra
requireScope: false
disallowScopes: |
release
[A-Z]+
ignoreLabels: |
ignore-lint-pr-title
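For intuition, the title format enforced above boils down to a pattern like the following (a rough sketch, not the action's actual implementation):

```python
import re

# Sketch of the Conventional Commits check on PR titles; illustrative only.
TYPES = "feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert|release"
TITLE_RE = re.compile(rf"^({TYPES})(\([a-z0-9_-]+\))?!?: .+")

assert TITLE_RE.match("feat(core): add multitenant support")
assert TITLE_RE.match("feat!: drop x support")
assert not TITLE_RE.match("Feat: uppercase type is rejected")
```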

View File

@@ -1,79 +0,0 @@
# Integration tests for documentation notebooks.
name: '📓 Validate Documentation Notebooks'
run-name: 'Test notebooks in ${{ inputs.working-directory }}'
on:
workflow_dispatch:
inputs:
python_version:
description: 'Python version'
required: false
default: '3.11'
working-directory:
description: 'Working directory or subset (e.g., docs/docs/tutorials/llm_chain.ipynb or docs/docs/how_to)'
required: false
default: 'all'
schedule:
- cron: '0 13 * * *'
permissions:
contents: read
env:
UV_FROZEN: "true"
jobs:
build:
runs-on: ubuntu-latest
if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
name: '📑 Test Documentation Notebooks'
steps:
- uses: actions/checkout@v5
- name: '🐍 Set up Python + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ github.event.inputs.python_version || '3.11' }}
cache-suffix: run-notebooks-${{ github.event.inputs.working-directory || 'all' }}
working-directory: ${{ github.event.inputs.working-directory || '**' }}
- name: '🔐 Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v3
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: '🔐 Configure AWS Credentials'
uses: aws-actions/configure-aws-credentials@v5
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: '📦 Install Dependencies'
run: |
uv sync --group dev --group test
- name: '📦 Pre-download Test Files'
run: |
uv run python docs/scripts/cache_data.py
curl -s https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql | sqlite3 docs/docs/how_to/Chinook.db
cp docs/docs/how_to/Chinook.db docs/docs/tutorials/Chinook.db
- name: '🔧 Prepare Notebooks for CI'
run: |
uv run python docs/scripts/prepare_notebooks_for_ci.py --comment-install-cells --working-directory ${{ github.event.inputs.working-directory || 'all' }}
- name: '🚀 Execute Notebooks'
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
WORKING_DIRECTORY: ${{ github.event.inputs.working-directory || 'all' }}
run: |
./docs/scripts/execute_notebooks.sh $WORKING_DIRECTORY

View File

@@ -1,187 +0,0 @@
# Routine integration tests against partner libraries with live API credentials.
#
# Uses `make integration_tests` for each library in the matrix.
#
# Runs daily. Can also be triggered manually for immediate updates.
name: '⏰ Scheduled Integration Tests'
run-name: "Run Integration Tests - ${{ inputs.working-directory-force || 'all libs' }} (Python ${{ inputs.python-version-force || '3.9, 3.11' }})"
on:
workflow_dispatch:
inputs:
working-directory-force:
type: string
description: "From which folder this pipeline executes - defaults to all in matrix - example value: libs/partners/anthropic"
python-version-force:
type: string
description: "Python version to use - defaults to 3.9 and 3.11 in matrix - example value: 3.9"
schedule:
- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
permissions:
contents: read
env:
POETRY_VERSION: "1.8.4"
UV_FROZEN: "true"
DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/xai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
POETRY_LIBS: ("libs/partners/aws")
jobs:
# Generate dynamic test matrix based on input parameters or defaults
# Only runs on the main repo (for scheduled runs) or when manually triggered
compute-matrix:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
name: '📋 Compute Test Matrix'
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: '🔢 Generate Python & Library Matrix'
id: set-matrix
env:
DEFAULT_LIBS: ${{ env.DEFAULT_LIBS }}
WORKING_DIRECTORY_FORCE: ${{ github.event.inputs.working-directory-force || '' }}
PYTHON_VERSION_FORCE: ${{ github.event.inputs.python-version-force || '' }}
run: |
# echo "matrix=..." where matrix is a json formatted str with keys python-version and working-directory
# python-version should default to 3.9 and 3.11, but is overridden to [PYTHON_VERSION_FORCE] if set
# working-directory should default to DEFAULT_LIBS, but is overridden to [WORKING_DIRECTORY_FORCE] if set
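# Illustrative example of the emitted value (abbreviated):
# matrix={"python-version": ["3.9", "3.11"], "working-directory": ["libs/partners/openai", "libs/partners/anthropic", ...]}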
python_version='["3.9", "3.11"]'
working_directory="$DEFAULT_LIBS"
if [ -n "$PYTHON_VERSION_FORCE" ]; then
python_version="[\"$PYTHON_VERSION_FORCE\"]"
fi
if [ -n "$WORKING_DIRECTORY_FORCE" ]; then
working_directory="[\"$WORKING_DIRECTORY_FORCE\"]"
fi
matrix="{\"python-version\": $python_version, \"working-directory\": $working_directory}"
echo $matrix
echo "matrix=$matrix" >> $GITHUB_OUTPUT
# Run integration tests against partner libraries with live API credentials
# Tests are run with Poetry or UV depending on the library's setup
build:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
name: '🐍 Python ${{ matrix.python-version }}: ${{ matrix.working-directory }}'
runs-on: ubuntu-latest
needs: [compute-matrix]
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
python-version: ${{ fromJSON(needs.compute-matrix.outputs.matrix).python-version }}
working-directory: ${{ fromJSON(needs.compute-matrix.outputs.matrix).working-directory }}
steps:
- uses: actions/checkout@v5
with:
path: langchain
- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain-google
path: langchain-google
- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain-aws
path: langchain-aws
- name: '📦 Organize External Libraries'
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai
mv langchain-google/libs/genai langchain/libs/partners/google-genai
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-aws/libs/aws langchain/libs/partners/aws
- name: '🐍 Set up Python ${{ matrix.python-version }} + Poetry'
if: contains(env.POETRY_LIBS, matrix.working-directory)
uses: "./langchain/.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: langchain/${{ matrix.working-directory }}
cache-key: scheduled
- name: '🐍 Set up Python ${{ matrix.python-version }} + UV'
if: "!contains(env.POETRY_LIBS, matrix.working-directory)"
uses: "./langchain/.github/actions/uv_setup"
with:
python-version: ${{ matrix.python-version }}
- name: '🔐 Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v3
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: '🔐 Configure AWS Credentials'
uses: aws-actions/configure-aws-credentials@v5
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: '📦 Install Dependencies (Poetry)'
if: contains(env.POETRY_LIBS, matrix.working-directory)
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
cd langchain/${{ matrix.working-directory }}
poetry install --with=test_integration,test
- name: '📦 Install Dependencies (UV)'
if: "!contains(env.POETRY_LIBS, matrix.working-directory)"
run: |
echo "Running scheduled tests, installing dependencies with uv..."
cd langchain/${{ matrix.working-directory }}
uv sync --group test --group test_integration
- name: '🚀 Run Integration Tests'
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: |
cd langchain/${{ matrix.working-directory }}
make integration_tests
- name: '🧹 Clean up External Libraries'
# Clean up external libraries to avoid affecting the following git status check
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/aws
- name: '🧹 Verify Clean Working Directory'
working-directory: langchain
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'

View File

@@ -1,9 +0,0 @@
With the deprecation of v0 docs, the following files will need to be migrated/supported
in the new docs repo:
- run_notebooks.yml: New repo should run Integration tests on code snippets?
- people.yml: Need to fix and somehow display on the new docs site
- Subsequently, `.github/actions/people/`
- _test_doc_imports.yml
- check_new_docs.yml
- check-broken-links.yml

32
.gitignore vendored
View File

@@ -1,4 +1,5 @@
.vs/
.vscode/
.idea/
# Byte-compiled / optimized / DLL files
__pycache__/
@@ -29,12 +30,6 @@ share/python-wheels/
*.egg
MANIFEST
# Google GitHub Actions credentials files created by:
# https://github.com/google-github-actions/auth
#
# That action recommends adding this gitignore to prevent accidentally committing keys.
gha-creds-*.json
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
@@ -58,7 +53,6 @@ coverage.xml
*.py,cover
.hypothesis/
.pytest_cache/
.codspeed/
# Translations
*.mo
@@ -115,11 +109,13 @@ celerybeat.pid
# Environments
.env
.envrc
.venv*
venv*
.venv
.venvs
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
@@ -133,7 +129,6 @@ env.bak/
# mypy
.mypy_cache/
.mypy_cache_test/
.dmypy.json
dmypy.json
@@ -166,20 +161,13 @@ docs/node_modules/
docs/.docusaurus/
docs/.cache-loader/
docs/_dist
docs/api_reference/*api_reference.rst
docs/api_reference/*.md
docs/api_reference/api_reference.rst
docs/api_reference/experimental_api_reference.rst
docs/api_reference/_build
docs/api_reference/*/
!docs/api_reference/_static/
!docs/api_reference/templates/
!docs/api_reference/themes/
!docs/api_reference/_extensions/
!docs/api_reference/scripts/
docs/docs/build
docs/docs/node_modules
docs/docs/yarn.lock
_dist
docs/docs/templates
prof
virtualenv/
docs/docs_skeleton/build
docs/docs_skeleton/node_modules
docs/docs_skeleton/yarn.lock

4
.gitmodules vendored Normal file
View File

@@ -0,0 +1,4 @@
[submodule "docs/_docs_skeleton"]
path = docs/_docs_skeleton
url = https://github.com/langchain-ai/langchain-shared-docs
branch = main
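(For reference: after cloning, a declared submodule like this is fetched with the standard `git submodule update --init docs/_docs_skeleton`.)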

View File

@@ -1,14 +0,0 @@
{
"MD013": false,
"MD024": {
"siblings_only": true
},
"MD025": false,
"MD033": false,
"MD034": false,
"MD036": false,
"MD041": false,
"MD046": {
"style": "fenced"
}
}

View File

@@ -1,105 +0,0 @@
repos:
- repo: local
hooks:
- id: core
name: format and lint core
language: system
entry: make -C libs/core format lint
files: ^libs/core/
pass_filenames: false
- id: langchain
name: format and lint langchain
language: system
entry: make -C libs/langchain format lint
files: ^libs/langchain/
pass_filenames: false
- id: standard-tests
name: format and lint standard-tests
language: system
entry: make -C libs/standard-tests format lint
files: ^libs/standard-tests/
pass_filenames: false
- id: text-splitters
name: format and lint text-splitters
language: system
entry: make -C libs/text-splitters format lint
files: ^libs/text-splitters/
pass_filenames: false
- id: anthropic
name: format and lint partners/anthropic
language: system
entry: make -C libs/partners/anthropic format lint
files: ^libs/partners/anthropic/
pass_filenames: false
- id: chroma
name: format and lint partners/chroma
language: system
entry: make -C libs/partners/chroma format lint
files: ^libs/partners/chroma/
pass_filenames: false
- id: exa
name: format and lint partners/exa
language: system
entry: make -C libs/partners/exa format lint
files: ^libs/partners/exa/
pass_filenames: false
- id: fireworks
name: format and lint partners/fireworks
language: system
entry: make -C libs/partners/fireworks format lint
files: ^libs/partners/fireworks/
pass_filenames: false
- id: groq
name: format and lint partners/groq
language: system
entry: make -C libs/partners/groq format lint
files: ^libs/partners/groq/
pass_filenames: false
- id: huggingface
name: format and lint partners/huggingface
language: system
entry: make -C libs/partners/huggingface format lint
files: ^libs/partners/huggingface/
pass_filenames: false
- id: mistralai
name: format and lint partners/mistralai
language: system
entry: make -C libs/partners/mistralai format lint
files: ^libs/partners/mistralai/
pass_filenames: false
- id: nomic
name: format and lint partners/nomic
language: system
entry: make -C libs/partners/nomic format lint
files: ^libs/partners/nomic/
pass_filenames: false
- id: ollama
name: format and lint partners/ollama
language: system
entry: make -C libs/partners/ollama format lint
files: ^libs/partners/ollama/
pass_filenames: false
- id: openai
name: format and lint partners/openai
language: system
entry: make -C libs/partners/openai format lint
files: ^libs/partners/openai/
pass_filenames: false
- id: prompty
name: format and lint partners/prompty
language: system
entry: make -C libs/partners/prompty format lint
files: ^libs/partners/prompty/
pass_filenames: false
- id: qdrant
name: format and lint partners/qdrant
language: system
entry: make -C libs/partners/qdrant format lint
files: ^libs/partners/qdrant/
pass_filenames: false
- id: root
name: format and lint docs, cookbook
language: system
entry: make format lint
files: ^(docs|cookbook)/
pass_filenames: false

29
.readthedocs.yaml Normal file
View File

@@ -0,0 +1,29 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.11"
jobs:
pre_build:
- python docs/api_reference/create_api_rst.py
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/api_reference/conf.py
# If using Sphinx, optionally build your docs in additional formats such as PDF
# formats:
# - pdf
# Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: docs/api_reference/requirements.txt
- method: pip
path: .

View File

@@ -1,21 +0,0 @@
{
"recommendations": [
"ms-python.python",
"charliermarsh.ruff",
"ms-python.mypy-type-checker",
"ms-toolsai.jupyter",
"ms-toolsai.jupyter-keymap",
"ms-toolsai.jupyter-renderers",
"ms-toolsai.vscode-jupyter-cell-tags",
"ms-toolsai.vscode-jupyter-slideshow",
"yzhang.markdown-all-in-one",
"davidanson.vscode-markdownlint",
"bierner.markdown-mermaid",
"bierner.markdown-preview-github-styles",
"eamodio.gitlens",
"github.vscode-pull-request-github",
"github.vscode-github-actions",
"redhat.vscode-yaml",
"editorconfig.editorconfig",
],
}

87
.vscode/settings.json vendored
View File

@@ -1,87 +0,0 @@
{
"python.analysis.include": [
"libs/**",
"docs/**",
"cookbook/**"
],
"python.analysis.exclude": [
"**/node_modules",
"**/__pycache__",
"**/.pytest_cache",
"**/.*",
"_dist/**",
"docs/_build/**",
"docs/api_reference/_build/**"
],
"python.analysis.autoImportCompletions": true,
"python.analysis.typeCheckingMode": "basic",
"python.testing.cwd": "${workspaceFolder}",
"python.linting.enabled": true,
"python.linting.ruffEnabled": true,
"[python]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports.ruff": "explicit",
"source.fixAll": "explicit"
},
"editor.defaultFormatter": "charliermarsh.ruff"
},
"editor.rulers": [
88
],
"editor.tabSize": 4,
"editor.insertSpaces": true,
"editor.trimAutoWhitespace": true,
"files.trimTrailingWhitespace": true,
"files.insertFinalNewline": true,
"files.exclude": {
"**/__pycache__": true,
"**/.pytest_cache": true,
"**/*.pyc": true,
"**/.mypy_cache": true,
"**/.ruff_cache": true,
"_dist/**": true,
"docs/_build/**": true,
"docs/api_reference/_build/**": true,
"**/node_modules": true,
"**/.git": false
},
"search.exclude": {
"**/__pycache__": true,
"**/*.pyc": true,
"_dist/**": true,
"docs/_build/**": true,
"docs/api_reference/_build/**": true,
"**/node_modules": true,
"**/.git": true,
"uv.lock": true,
"yarn.lock": true
},
"git.autofetch": true,
"git.enableSmartCommit": true,
"jupyter.askForKernelRestart": false,
"jupyter.interactiveWindow.textEditor.executeSelection": true,
"[markdown]": {
"editor.wordWrap": "on",
"editor.quickSuggestions": {
"comments": "off",
"strings": "off",
"other": "off"
}
},
"[yaml]": {
"editor.tabSize": 2,
"editor.insertSpaces": true
},
"[json]": {
"editor.tabSize": 2,
"editor.insertSpaces": true
},
"python.terminal.activateEnvironment": false,
"python.defaultInterpreterPath": "./.venv/bin/python",
"github.copilot.chat.commitMessageGeneration.instructions": [
{
"file": ".github/workflows/pr_lint.yml"
}
]
}

325
AGENTS.md
View File

@@ -1,325 +0,0 @@
# Global Development Guidelines for LangChain Projects
## Core Development Principles
### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL
**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**
**Bad - Breaking Change:**
```python
def get_user(id, verbose=False): # Changed from `user_id`
pass
```
**Good - Stable Interface:**
```python
def get_user(user_id: str, verbose: bool = False) -> User:
"""Retrieve user by ID with optional verbose output."""
pass
```
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using reStructuredText, like `.. warning::`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
### 2. Code Quality Standards
**All Python code MUST include type hints and return types.**
**Bad:**
```python
def p(u, d):
return [x for x in u if x not in d]
```
**Good:**
```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
"""Filter out users that are not in the known users set.
Args:
users: List of user identifiers to filter.
known_users: Set of known/valid user identifiers.
Returns:
List of users that are not in the known_users set.
"""
return [user for user in users if user not in known_users]
```
**Style Requirements:**
- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying
### 3. Testing Requirements
**Every new feature or bugfix MUST be covered by unit tests.**
**Test Organization:**
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework
**Test Quality Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
Checklist questions:
- [ ] Does the test suite fail if your new logic is broken?
- [ ] Are all expected behaviors exercised (happy path, invalid input, etc)?
- [ ] Do tests use fixtures or mocks where needed?
```python
def test_filter_unknown_users():
"""Test filtering unknown users from a list."""
users = ["alice", "bob", "charlie"]
known_users = {"alice", "bob"}
result = filter_unknown_users(users, known_users)
assert result == ["charlie"]
assert len(result) == 1
```
### 4. Security and Risk Assessment
**Security Checklist:**
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`) and use a `msg` variable for error messages
- Remove unreachable/commented code before committing
- Avoid race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)
**Bad:**
```python
def load_config(path):
with open(path) as f:
return eval(f.read()) # ⚠️ Never eval config
```
**Good:**
```python
import json
def load_config(path: str) -> dict:
with open(path) as f:
return json.load(f)
```
### 5. Documentation Standards
**Use Google-style docstrings with Args section for all public functions.**
**Insufficient Documentation:**
```python
def send_email(to, msg):
"""Send an email to a recipient."""
```
**Complete Documentation:**
```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
"""
Send an email to a recipient with specified priority.
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level (``'low'``, ``'normal'``, ``'high'``).
Returns:
True if email was sent successfully, False otherwise.
Raises:
InvalidEmailError: If the email address format is invalid.
SMTPConnectionError: If unable to connect to email server.
"""
```
**Documentation Guidelines:**
- Types go in function signatures, NOT in docstrings
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Use reStructuredText for docstrings to enable rich formatting
📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.
### 6. Architectural Improvements
**When you encounter code that could be improved, suggest better designs:**
**Poor Design:**
```python
def process_data(data, db_conn, email_client, logger):
# Function doing too many things
validated = validate_data(data)
result = db_conn.save(validated)
email_client.send_notification(result)
logger.log(f"Processed {len(data)} items")
return result
```
**Better Design:**
```python
from dataclasses import dataclass, field
from typing import List
@dataclass
class ProcessingResult:
"""Result of data processing operation."""
items_processed: int
success: bool
errors: List[str] = field(default_factory=list)
class DataProcessor:
"""Handles data validation, storage, and notification."""
def __init__(self, db_conn: Database, email_client: EmailClient):
self.db = db_conn
self.email = email_client
def process(self, data: List[dict]) -> ProcessingResult:
"""Process and store data with notifications."""
validated = self._validate_data(data)
result = self.db.save(validated)
self._notify_completion(result)
return result
```
**Design Improvement Areas:**
If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:
- Reduce code duplication through shared utilities
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection
- Add clarity without adding complexity
- Prefer dataclasses for structured data
## Development Tools & Commands
### Package Management
```bash
# Add package
uv add package-name
# Sync project dependencies
uv sync
uv lock
```
### Testing
```bash
# Run unit tests (no network)
make test
# Don't run integration tests, as API keys must be set
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
### Code Quality
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
### Dependency Management Patterns
**Local Development Dependencies:**
```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```
**For tools, use the `@tool` decorator from `langchain_core.tools`:**
```python
from langchain_core.tools import tool
@tool
def search_database(query: str) -> str:
"""Search the database for relevant information.
Args:
query: The search query string.
"""
# Implementation stubbed for illustration
results = f"Results for: {query}"
return results
```
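A quick way to sanity-check the tool (a sketch; assumes a current `langchain_core`, where `@tool` wraps the function as a runnable tool):

```python
# Hypothetical invocation of the tool defined above.
print(search_database.invoke("recent orders"))
```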
## Commit Standards
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Implement proper streaming support where applicable
- Avoid deprecated components like legacy `LLMChain`
### Partner Integrations
- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.)
- Include comprehensive integration tests
- Document API key requirements and authentication
---
## Quick Reference Checklist
Before submitting code changes:
- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format

View File

@@ -5,4 +5,4 @@ authors:
given-names: "Harrison"
title: "LangChain"
date-released: 2022-10-17
url: "https://github.com/langchain-ai/langchain"
url: "https://github.com/hwchase17/langchain"

325
CLAUDE.md
View File

@@ -1,325 +0,0 @@
# Global Development Guidelines for LangChain Projects
## Core Development Principles
### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL
**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**
**Bad - Breaking Change:**
```python
def get_user(id, verbose=False): # Changed from `user_id`
pass
```
**Good - Stable Interface:**
```python
def get_user(user_id: str, verbose: bool = False) -> User:
"""Retrieve user by ID with optional verbose output."""
pass
```
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using reStructuredText, like `.. warning::`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
### 2. Code Quality Standards
**All Python code MUST include type hints and return types.**
**Bad:**
```python
def p(u, d):
return [x for x in u if x not in d]
```
**Good:**
```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
"""Filter out users that are not in the known users set.
Args:
users: List of user identifiers to filter.
known_users: Set of known/valid user identifiers.
Returns:
List of users that are not in the known_users set.
"""
return [user for user in users if user not in known_users]
```
**Style Requirements:**
- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying
### 3. Testing Requirements
**Every new feature or bugfix MUST be covered by unit tests.**
**Test Organization:**
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework
**Test Quality Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
Checklist questions:
- [ ] Does the test suite fail if your new logic is broken?
- [ ] Are all expected behaviors exercised (happy path, invalid input, etc)?
- [ ] Do tests use fixtures or mocks where needed?
```python
def test_filter_unknown_users():
"""Test filtering unknown users from a list."""
users = ["alice", "bob", "charlie"]
known_users = {"alice", "bob"}
result = filter_unknown_users(users, known_users)
assert result == ["charlie"]
assert len(result) == 1
```
### 4. Security and Risk Assessment
**Security Checklist:**
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`) and use a `msg` variable for error messages
- Remove unreachable/commented code before committing
- Avoid race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)
**Bad:**
```python
def load_config(path):
with open(path) as f:
return eval(f.read()) # ⚠️ Never eval config
```
**Good:**
```python
import json
def load_config(path: str) -> dict:
with open(path) as f:
return json.load(f)
```
### 5. Documentation Standards
**Use Google-style docstrings with Args section for all public functions.**
**Insufficient Documentation:**
```python
def send_email(to, msg):
"""Send an email to a recipient."""
```
**Complete Documentation:**
```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
"""
Send an email to a recipient with specified priority.
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level (``'low'``, ``'normal'``, ``'high'``).
Returns:
True if email was sent successfully, False otherwise.
Raises:
InvalidEmailError: If the email address format is invalid.
SMTPConnectionError: If unable to connect to email server.
"""
```
**Documentation Guidelines:**
- Types go in function signatures, NOT in docstrings
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Use reStructuredText for docstrings to enable rich formatting
📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.
### 6. Architectural Improvements
**When you encounter code that could be improved, suggest better designs:**
**Poor Design:**
```python
def process_data(data, db_conn, email_client, logger):
# Function doing too many things
validated = validate_data(data)
result = db_conn.save(validated)
email_client.send_notification(result)
logger.log(f"Processed {len(data)} items")
return result
```
**Better Design:**
```python
from dataclasses import dataclass, field
from typing import List
@dataclass
class ProcessingResult:
"""Result of data processing operation."""
items_processed: int
success: bool
errors: List[str] = field(default_factory=list)
class DataProcessor:
"""Handles data validation, storage, and notification."""
def __init__(self, db_conn: Database, email_client: EmailClient):
self.db = db_conn
self.email = email_client
def process(self, data: List[dict]) -> ProcessingResult:
"""Process and store data with notifications."""
validated = self._validate_data(data)
result = self.db.save(validated)
self._notify_completion(result)
return result
```
**Design Improvement Areas:**
If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:
- Reduce code duplication through shared utilities
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection
- Add clarity without adding complexity
- Prefer dataclasses for structured data
## Development Tools & Commands
### Package Management
```bash
# Add package
uv add package-name
# Sync project dependencies
uv sync
uv lock
```
### Testing
```bash
# Run unit tests (no network)
make test
# Don't run integration tests, as API keys must be set
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
### Code Quality
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
### Dependency Management Patterns
**Local Development Dependencies:**
```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```
**For tools, use the `@tool` decorator from `langchain_core.tools`:**
```python
from langchain_core.tools import tool
@tool
def search_database(query: str) -> str:
"""Search the database for relevant information.
Args:
query: The search query string.
"""
# Implementation stubbed for illustration
results = f"Results for: {query}"
return results
```
## Commit Standards
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Implement proper streaming support where applicable
- Avoid deprecated components like legacy `LLMChain`
### Partner Integrations
- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.)
- Include comprehensive integration tests
- Document API key requirements and authentication
---
## Quick Reference Checklist
Before submitting code changes:
- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format

12
LICENSE
View File

@@ -1,6 +1,6 @@
MIT License
The MIT License
Copyright (c) LangChain, Inc.
Copyright (c) Harrison Chase
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -9,13 +9,13 @@ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@@ -1,11 +1,61 @@
# Migrating
# Migrating to `langchain_experimental`
Please see the following guides for migrating LangChain code:
We are moving any experimental components of LangChain, or components with vulnerability issues, into `langchain_experimental`.
This guide covers how to migrate.
* Migrate to [LangChain v0.3](https://python.langchain.com/docs/versions/v0_3/)
* Migrate to [LangChain v0.2](https://python.langchain.com/docs/versions/v0_2/)
* Migrating from [LangChain 0.0.x Chains](https://python.langchain.com/docs/versions/migrating_chains/)
* Upgrade to [LangGraph Memory](https://python.langchain.com/docs/versions/migrating_memory/)
## Installation
The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help you automatically upgrade your code to use non-deprecated imports.
This will be especially helpful if you're still on either version 0.0.x or 0.1.x of LangChain.
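For example (a sketch; the commands assume a recent `langchain-cli` release, and the path is a placeholder):
`pip install -U langchain-cli`
`langchain-cli migrate ./my_project`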
Previously:
`pip install -U langchain`
Now (only if you want to access things in experimental):
`pip install -U langchain langchain_experimental`
## Things in `langchain.experimental`
Previously:
`from langchain.experimental import ...`
Now:
`from langchain_experimental import ...`
## PALChain
Previously:
`from langchain.chains import PALChain`
Now:
`from langchain_experimental.pal_chain import PALChain`
## SQLDatabaseChain
Previously:
`from langchain.chains import SQLDatabaseChain`
Now:
`from langchain_experimental.sql import SQLDatabaseChain`
Alternatively, if you are just interested in using the query generation part of the SQL chain, you can check out [`create_sql_query_chain`](https://github.com/langchain-ai/langchain/blob/master/docs/extras/use_cases/tabular/sql_query.ipynb)
`from langchain.chains import create_sql_query_chain`
## `load_prompt` for Python files
Note: this only applies if you want to load Python files as prompts.
If you want to load json/yaml files, no change is needed.
Previously:
`from langchain.prompts import load_prompt`
Now:
`from langchain_experimental.prompts import load_prompt`
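Putting the mappings above together, a migrated module might begin like this (a sketch; the classes themselves are unchanged, only the import paths move):

```python
# After: pip install -U langchain langchain_experimental
from langchain_experimental.pal_chain import PALChain    # was: from langchain.chains import PALChain
from langchain_experimental.sql import SQLDatabaseChain  # was: from langchain.chains import SQLDatabaseChain
from langchain_experimental.prompts import load_prompt   # was: from langchain.prompts import load_prompt (Python-file prompts only)
```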

116
Makefile
View File

@@ -1,107 +1,49 @@
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck lint lint_package lint_tests format format_diff
.PHONY: all clean docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck
.EXPORT_ALL_VARIABLES:
UV_FROZEN = true
# Default target executed when no arguments are given to make.
all: help
## help: Show this help info.
help: Makefile
@printf "\n\033[1mUsage: make <TARGETS> ...\033[0m\n\n\033[1mTargets:\033[0m\n\n"
@sed -n 's/^## //p' $< | awk -F':' '{printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort | sed -e 's/^/ /'
## clean: Clean documentation and API documentation artifacts.
clean: docs_clean api_docs_clean
######################
# DOCUMENTATION
######################
## docs_build: Build the documentation.
docs_build: docs_clean
@echo "📚 Building LangChain documentation..."
cd docs && make build
@echo "✅ Documentation build complete!"
clean: docs_clean api_docs_clean
docs_build:
docs/.local_build.sh
## docs_clean: Clean the documentation build artifacts.
docs_clean:
@echo "🧹 Cleaning documentation artifacts..."
cd docs && make clean
@echo "✅ LangChain documentation cleaned"
rm -r docs/_dist
## docs_linkcheck: Run linkchecker on the documentation.
docs_linkcheck:
@echo "🔗 Checking documentation links..."
@if [ -d _dist/docs ]; then \
uv run --group test linkchecker _dist/docs/ --ignore-url node_modules; \
else \
echo "⚠️ Documentation not built. Run 'make docs_build' first."; \
exit 1; \
fi
@echo "✅ Link check complete"
poetry run linkchecker docs/_dist/docs_skeleton/ --ignore-url node_modules
## api_docs_build: Build the API Reference documentation.
api_docs_build: clean
@echo "📖 Building API Reference documentation..."
uv pip install -e libs/cli
uv run --group docs python docs/api_reference/create_api_rst.py
cd docs/api_reference && uv run --group docs make html
uv run --group docs python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
@echo "✅ API documentation built"
@echo "🌐 Opening documentation in browser..."
open docs/api_reference/_build/html/reference.html
api_docs_build:
poetry run python docs/api_reference/create_api_rst.py
cd docs/api_reference && poetry run make html
API_PKG ?= text-splitters
api_docs_quick_preview: clean
@echo "⚡ Building quick API preview for $(API_PKG)..."
uv run --group docs python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && uv run --group docs make html
uv run --group docs python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
@echo "🌐 Opening preview in browser..."
open docs/api_reference/_build/html/reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
@echo "🧹 Cleaning API documentation artifacts..."
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
rm -f docs/api_reference/index.md
@echo "✅ API documentation cleaned"
rm -f docs/api_reference/api_reference.rst
cd docs/api_reference && poetry run make clean
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:
@echo "🔗 Checking API documentation links..."
@if [ -f docs/api_reference/_build/html/index.html ]; then \
uv run --group test linkchecker docs/api_reference/_build/html/index.html; \
else \
echo "⚠️ API documentation not built. Run 'make api_docs_build' first."; \
exit 1; \
fi
@echo "✅ API link check complete"
poetry run linkchecker docs/api_reference/_build/html/index.html
spell_check:
poetry run codespell --toml pyproject.toml
spell_fix:
poetry run codespell --toml pyproject.toml -w
######################
# LINTING AND FORMATTING
# HELP
######################
## lint: Run linting on the project.
lint lint_package lint_tests:
@echo "🔍 Running code linting and checks..."
uv run --group lint ruff check docs cookbook
uv run --group lint ruff format docs cookbook --diff
git --no-pager grep 'from langchain import' docs cookbook | grep -vE 'from langchain import (hub)' && echo "Error: no importing langchain from root in docs, except for hub" && exit 1 || exit 0
git --no-pager grep 'api.python.langchain.com' -- docs/docs ':!docs/docs/additional_resources/arxiv_references.mdx' ':!docs/docs/integrations/document_loaders/sitemap.ipynb' || exit 0 && \
echo "Error: you should link python.langchain.com/api_reference, not api.python.langchain.com in the docs" && \
exit 1
@echo "✅ Linting complete"
## format: Format the project files.
format format_diff:
@echo "🎨 Formatting project files..."
uv run --group lint ruff format docs cookbook
uv run --group lint ruff check --fix docs cookbook
@echo "✅ Formatting complete"
update-package-downloads:
@echo "📊 Updating package download statistics..."
uv run python docs/scripts/packages_yml_get_downloads.py
@echo "✅ Package downloads updated"
help:
@echo '----'
@echo 'coverage - run unit tests and generate coverage report'
@echo 'docs_build - build the documentation'
@echo 'docs_clean - clean the documentation build artifacts'
@echo 'docs_linkcheck - run linkchecker on the documentation'

152
README.md
View File

@@ -1,83 +1,103 @@
<picture>
<source media="(prefers-color-scheme: light)" srcset="docs/static/img/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset="docs/static/img/logo-light.svg">
<img alt="LangChain Logo" src="docs/static/img/logo-dark.svg" width="80%">
</picture>
# 🦜️🔗 LangChain
<div>
<br>
</div>
⚡ Building applications with LLMs through composability ⚡
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain)](https://pypistats.org/packages/langchain-core)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[![CodSpeed Badge](https://img.shields.io/endpoint?url=https://codspeed.io/badge.json)](https://codspeed.io/langchain-ai/langchain)
[![Release Notes](https://img.shields.io/github/release/hwchase17/langchain)](https://github.com/hwchase17/langchain/releases)
[![CI](https://github.com/hwchase17/langchain/actions/workflows/langchain_ci.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/langchain_ci.yml)
[![Experimental CI](https://github.com/hwchase17/langchain/actions/workflows/langchain_experimental_ci.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/langchain_experimental_ci.yml)
[![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/hwchase17/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/hwchase17/langchain)
[![GitHub star chart](https://img.shields.io/github/stars/hwchase17/langchain?style=social)](https://star-history.com/#hwchase17/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/hwchase17/langchain)](https://github.com/hwchase17/langchain/issues)
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
LangChain is a framework for building LLM-powered applications. It helps you chain
together interoperable components and third-party integrations to simplify AI
application development — all while future-proofing decisions as the underlying
technology evolves.
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/hwchase17/langchainjs).
```bash
pip install -U langchain
```
**Production Support:** As you move your LangChains into production, we'd love to offer more comprehensive support.
Please fill out [this form](https://6w1pwbss0py.typeform.com/to/rrbrdTH2) and we'll set up a dedicated support Slack channel.
To learn more about LangChain, check out
[the docs](https://python.langchain.com/docs/introduction/). If you're looking for more
advanced customization or agent orchestration, check out
[LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building
controllable agent workflows.
## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28
## Why use LangChain?
In an effort to make `langchain` leaner and safer, we are moving select chains to `langchain_experimental`.
This migration has already started, but we are remaining backwards compatible until 7/28.
On that date, we will remove functionality from `langchain`.
Read more about the motivation and the progress [here](https://github.com/hwchase17/langchain/discussions/8043).
Read how to migrate your code [here](MIGRATE.md).
LangChain helps developers build applications powered by LLMs through a standard
interface for models, embeddings, vector stores, and more.
## Quick Install
Use LangChain for:
`pip install langchain`
or
`pip install langsmith && conda install langchain -c conda-forge`
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and
external/internal systems, drawing from LangChain's vast library of integrations with
model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team
experiments to find the best choice for your application's needs. As the industry
frontier evolves, adapt quickly — LangChain's abstractions keep you moving without
losing momentum.
## 🤔 What is this?
## LangChain's ecosystem
Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. However, using these LLMs in isolation is often insufficient for creating a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.
While the LangChain framework can be used standalone, it also integrates seamlessly
with any LangChain product, giving developers a full suite of tools when building LLM
applications.
This library aims to assist in the development of those types of applications. Common examples of these applications include:
To improve your LLM application development, pair LangChain with:
**❓ Question Answering over specific documents**
- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and
observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain
visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can
reliably handle complex tasks with LangGraph, our low-level agent orchestration
framework. LangGraph offers customizable architecture, long-term memory, and
human-in-the-loop workflows — and is trusted in production by companies like LinkedIn,
Uber, Klarna, and GitLab.
- [LangGraph Platform](https://docs.langchain.com/langgraph-platform) - Deploy
and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across
teams — and iterate quickly with visual prototyping in
[LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- End-to-end Example: [Question Answering over Notion Database](https://github.com/hwchase17/notion-qa)
## Additional resources
**💬 Chatbots**
- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with
guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code
snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key
concepts behind the LangChain framework.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on
navigating base packages and integrations for LangChain.
- [Chat LangChain](https://chat.langchain.com/): Ask questions & chat with our documentation.
- [Documentation](https://python.langchain.com/docs/use_cases/chatbots/)
- End-to-end Example: [Chat-LangChain](https://github.com/hwchase17/chat-langchain)
**🤖 Agents**
- [Documentation](https://python.langchain.com/docs/modules/agents/)
- End-to-end Example: [GPT+WolframAlpha](https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain)
## 📖 Documentation
Please see [here](https://python.langchain.com) for full documentation on:
- Getting started (installation, setting up the environment, simple examples)
- How-To examples (demos, integrations, helper functions)
- Reference (full API docs)
- Resources (high-level explanation of core concepts)
## 🚀 What can this help with?
There are six main areas that LangChain is designed to help with.
These are, in increasing order of complexity:
**📃 LLMs and Prompts:**
This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.
**🔗 Chains:**
Chains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
**📚 Data Augmented Generation:**
Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.
**🤖 Agents:**
Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
**🧠 Memory:**
Memory refers to persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
**🧐 Evaluation:**
[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
For more information on these concepts, please see our [full documentation](https://python.langchain.com).
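Below is a minimal sketch of the standard model interface described above. It assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set in your environment; the model name and prompt are illustrative, not prescriptive.

```python
# A minimal sketch, not the canonical quickstart: any chat model that
# implements LangChain's standard interface can be swapped in.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # assumes `pip install langchain-openai`

model = ChatOpenAI(model="gpt-4o-mini")  # illustrative model name
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")

# The prompt and model compose into a chain; `invoke` runs it end to end.
chain = prompt | model
print(chain.invoke({"text": "LangChain provides a standard interface for LLMs."}).content)
```

The same `prompt | model` composition works unchanged if you swap `ChatOpenAI` for another provider's chat model, which is exactly what the standard interface buys you.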
## Additional resources

- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with
guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code
snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key
concepts behind the LangChain framework.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on
navigating base packages and integrations for LangChain.
- [Chat LangChain](https://chat.langchain.com/): Ask questions & chat with our documentation.

## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see [here](.github/CONTRIBUTING.md).

View File

@@ -1,80 +0,0 @@
# Security Policy
LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.
## Best practices
When building such applications, developers should remember to follow good security practices:
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc., as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Risks of not doing so include, but are not limited to:
* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.
Example scenarios with mitigation strategies:
* A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container. A minimal sketch of this directory scoping is shown after this list.
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.
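For the file-system scenario above, here is a minimal sketch of directory scoping. It assumes `langchain-community` is installed; the sandbox path and tool selection are illustrative, and this is one layer of defense, not a complete security solution.

```python
# A minimal sketch: the toolkit scopes file operations to root_dir, and we
# deliberately select only read-oriented tools for the agent.
from langchain_community.agent_toolkits import FileManagementToolkit

toolkit = FileManagementToolkit(
    root_dir="/tmp/agent_sandbox",  # illustrative sandbox directory
    selected_tools=["read_file", "list_directory"],  # no write or delete tools
)
tools = toolkit.get_tools()  # pass these, and only these, to the agent
```

The database scenario has an analogous control: alongside read-only credentials, `SQLDatabase.from_uri` accepts an `include_tables` argument so the agent only ever sees the tables it needs.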
If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
## Reporting OSS Vulnerabilities
LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
a bounty program for our open source projects.
Please report security vulnerabilities associated with the LangChain
open source projects at [huntr](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true).
Before reporting a vulnerability, please review:
1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://docs.langchain.com/oss/python/contributing/code#supporting-packages) monorepo structure.
3) The [Best Practices](#best-practices) above to understand what we consider to be a security vulnerability vs. developer responsibility.
### In-Scope Targets
The following packages and repositories are eligible for bug bounties:
* langchain-core
* langchain (see exceptions)
* langchain-community (see exceptions)
* langgraph
* langserve
### Out of Scope Targets
All out of scope targets defined by huntr as well as:
* **langchain-experimental**: This repository is for experimental code and is not
eligible for bug bounties (see the [package warning](https://pypi.org/project/langchain-experimental/)). Bug reports against it will be marked as *interesting* or
*waste of time* and published with no bounty attached.
* **tools**: Tools in either langchain or langchain-community are not eligible for bug
bounties. This includes the following directories:
* libs/langchain/langchain/tools
* libs/community/langchain_community/tools
* Please review the [Best Practices](#best-practices)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
for the security of their tools.
* Code documented with security notices. This will be decided on a case-by-case basis, but will likely not be eligible for a bounty, as the code already
documents the guidelines developers should follow to keep their
applications secure.
* Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
## Reporting LangSmith Vulnerabilities
Please report security vulnerabilities associated with LangSmith by email to `security@langchain.dev`.
* LangSmith site: [https://smith.langchain.com](https://smith.langchain.com)
* SDK client: [https://github.com/langchain-ai/langsmith-sdk](https://github.com/langchain-ai/langsmith-sdk)
### Other Security Concerns
For any other security concerns, please contact us at `security@langchain.dev`.

View File

@@ -1,932 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "BYejgj8Zf-LG",
"tags": []
},
"source": [
"## Getting started with LangChain and Gemma, running locally or in the Cloud"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2IxjMb9-jIJ8"
},
"source": [
"### Installing dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 9436,
"status": "ok",
"timestamp": 1708975187360,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "XZaTsXfcheTF",
"outputId": "eb21d603-d824-46c5-f99f-087fb2f618b1",
"tags": []
},
"outputs": [],
"source": [
"!pip install --upgrade langchain langchain-google-vertexai"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IXmAujvC3Kwp"
},
"source": [
"### Running the model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CI8Elyc5gBQF"
},
"source": [
"Go to the VertexAI Model Garden on Google Cloud [console](https://pantheon.corp.google.com/vertex-ai/publishers/google/model-garden/335), and deploy the desired version of Gemma to VertexAI. It will take a few minutes, and after the endpoint is ready, you need to copy its number."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "gv1j8FrVftsC"
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"project: str = \"PUT_YOUR_PROJECT_ID_HERE\" # @param {type:\"string\"}\n",
"endpoint_id: str = \"PUT_YOUR_ENDPOINT_ID_HERE\" # @param {type:\"string\"}\n",
"location: str = \"PUT_YOUR_ENDPOINT_LOCAtION_HERE\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"executionInfo": {
"elapsed": 3,
"status": "ok",
"timestamp": 1708975440503,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "bhIHsFGYjtFt",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 17:15:10.457149: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 17:15:10.508925: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 17:15:10.508957: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 17:15:10.510289: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 17:15:10.518898: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import (\n",
" GemmaChatVertexAIModelGarden,\n",
" GemmaVertexAIModelGarden,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"executionInfo": {
"elapsed": 351,
"status": "ok",
"timestamp": 1708975440852,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "WJv-UVWwh0lk",
"tags": []
},
"outputs": [],
"source": [
"llm = GemmaVertexAIModelGarden(\n",
" endpoint_id=endpoint_id,\n",
" project=project,\n",
" location=location,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 714,
"status": "ok",
"timestamp": 1708975441564,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "6kM7cEFdiN9h",
"outputId": "fb420c56-5614-4745-cda8-0ee450a3e539",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt:\n",
"What is the meaning of life?\n",
"Output:\n",
" Who am I? Why do I exist? These are questions I have struggled with\n"
]
}
],
"source": [
"output = llm.invoke(\"What is the meaning of life?\")\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zzep9nfmuUcO"
},
"source": [
"We can also use Gemma as a multi-turn chat model:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 964,
"status": "ok",
"timestamp": 1708976298189,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "8tPHoM5XiZOl",
"outputId": "7b8fb652-9aed-47b0-c096-aa1abfc3a2a9",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\n8-years old.<end_of_turn>\\n\\n<start_of'\n",
"content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nPrompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\n8-years old.<end_of_turn>\\n\\n<start_of<end_of_turn>\\n<start_of_turn>user\\nHow much is 3+3?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\nOutput:\\n3-years old.<end_of_turn>\\n\\n<'\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"llm = GemmaChatVertexAIModelGarden(\n",
" endpoint_id=endpoint_id,\n",
" project=project,\n",
" location=location,\n",
")\n",
"\n",
"message1 = HumanMessage(content=\"How much is 2+2?\")\n",
"answer1 = llm.invoke([message1])\n",
"print(answer1)\n",
"\n",
"message2 = HumanMessage(content=\"How much is 3+3?\")\n",
"answer2 = llm.invoke([message1, answer1, message2])\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can post-process response to avoid repetitions:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='Output:\\n<<humming>>: 2+2 = 4.\\n<end'\n",
"content='Output:\\nOutput:\\n<<humming>>: 3+3 = 6.'\n"
]
}
],
"source": [
"answer1 = llm.invoke([message1], parse_response=True)\n",
"print(answer1)\n",
"\n",
"answer2 = llm.invoke([message1, answer1, message2], parse_response=True)\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VEfjqo7fjARR"
},
"source": [
"## Running Gemma locally from Kaggle"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gVW8QDzHu7TA"
},
"source": [
"In order to run Gemma locally, you can download it from Kaggle first. In order to do this, you'll need to login into the Kaggle platform, create a API key and download a `kaggle.json` Read more about Kaggle auth [here](https://www.kaggle.com/docs/api)."
]
},
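  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Alternatively, a sketch assuming the standard Kaggle API environment variables (nothing LangChain-specific): you can supply credentials without copying `kaggle.json`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical alternative to copying kaggle.json: the Kaggle client also\n",
    "# reads credentials from these environment variables.\n",
    "import os\n",
    "\n",
    "os.environ[\"KAGGLE_USERNAME\"] = \"PUT_YOUR_KAGGLE_USERNAME_HERE\"  # illustrative placeholder\n",
    "os.environ[\"KAGGLE_KEY\"] = \"PUT_YOUR_KAGGLE_API_KEY_HERE\"  # illustrative placeholder"
   ]
  },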
{
"cell_type": "markdown",
"metadata": {
"id": "S1EsXQ3XvZkQ"
},
"source": [
"### Installation"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"executionInfo": {
"elapsed": 335,
"status": "ok",
"timestamp": 1708976305471,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "p8SMwpKRvbef",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/conda/lib/python3.10/pty.py:89: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.\n",
" pid, fd = os.forkpty()\n"
]
}
],
"source": [
"!mkdir -p ~/.kaggle && cp kaggle.json ~/.kaggle/kaggle.json"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"executionInfo": {
"elapsed": 7802,
"status": "ok",
"timestamp": 1708976363010,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "Yr679aePv9Fq",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/conda/lib/python3.10/pty.py:89: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.\n",
" pid, fd = os.forkpty()\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"tensorstore 0.1.54 requires ml-dtypes>=0.3.1, but you have ml-dtypes 0.2.0 which is incompatible.\u001b[0m\u001b[31m\n",
"\u001b[0m"
]
}
],
"source": [
"!pip install keras>=3 keras_nlp"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "E9zn8nYpv3QZ"
},
"source": [
"### Usage"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"executionInfo": {
"elapsed": 8536,
"status": "ok",
"timestamp": 1708976601206,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "0LFRmY8TjCkI",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:38:40.797559: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 16:38:40.848444: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 16:38:40.848478: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 16:38:40.849728: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 16:38:40.857936: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import GemmaLocalKaggle"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "v-o7oXVavdMQ"
},
"source": [
"You can specify the keras backend (by default it's `tensorflow`, but you can change it be `jax` or `torch`)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"executionInfo": {
"elapsed": 9,
"status": "ok",
"timestamp": 1708976601206,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "vvTUH8DNj5SF",
"tags": []
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"keras_backend: str = \"jax\" # @param {type:\"string\"}\n",
"model_name: str = \"gemma_2b_en\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"executionInfo": {
"elapsed": 40836,
"status": "ok",
"timestamp": 1708976761257,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "YOmrqxo5kHXK",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:23:14.661164: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1929] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 20549 MB memory: -> device: 0, name: NVIDIA L4, pci bus id: 0000:00:03.0, compute capability: 8.9\n",
"normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
]
}
],
"source": [
"llm = GemmaLocalKaggle(model_name=model_name, keras_backend=keras_backend)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "Zu6yPDUgkQtQ",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"W0000 00:00:1709051129.518076 774855 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"What is the meaning of life?\n",
"\n",
"The question is one of the most important questions in the world.\n",
"\n",
"Its the question that has\n"
]
}
],
"source": [
"output = llm.invoke(\"What is the meaning of life?\", max_tokens=30)\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ChatModel"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MSctpRE4u43N"
},
"source": [
"Same as above, using Gemma locally as a multi-turn chat model. You might need to re-start the notebook and clean your GPU memory in order to avoid OOM errors:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:58:22.331067: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 16:58:22.382948: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 16:58:22.382978: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 16:58:22.384312: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 16:58:22.392767: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import GemmaChatLocalKaggle"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"keras_backend: str = \"jax\" # @param {type:\"string\"}\n",
"model_name: str = \"gemma_2b_en\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:58:29.001922: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1929] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 20549 MB memory: -> device: 0, name: NVIDIA L4, pci bus id: 0000:00:03.0, compute capability: 8.9\n",
"normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
]
}
],
"source": [
"llm = GemmaChatLocalKaggle(model_name=model_name, keras_backend=keras_backend)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"executionInfo": {
"elapsed": 3,
"status": "aborted",
"timestamp": 1708976382957,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "JrJmvZqwwLqj"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:58:49.848412: I external/local_xla/xla/service/service.cc:168] XLA service 0x55adc0cf2c10 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
"2024-02-27 16:58:49.848458: I external/local_xla/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA L4, Compute Capability 8.9\n",
"2024-02-27 16:58:50.116614: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.\n",
"2024-02-27 16:58:54.389324: I external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:454] Loaded cuDNN version 8900\n",
"WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\n",
"I0000 00:00:1709053145.225207 784891 device_compiler.h:186] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.\n",
"W0000 00:00:1709053145.284227 784891 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n Tampoco\\nI'm a model.\"\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"message1 = HumanMessage(content=\"Hi! Who are you?\")\n",
"answer1 = llm.invoke([message1], max_tokens=30)\n",
"print(answer1)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\n<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n Tampoco\\nI'm a model.<end_of_turn>\\n<start_of_turn>user\\nWhat can you help me with?<end_of_turn>\\n<start_of_turn>model\"\n"
]
}
],
"source": [
"message2 = HumanMessage(content=\"What can you help me with?\")\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=60)\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can post-process the response if you want to avoid multi-turn statements:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"I'm a model.\\n Tampoco\\nI'm a model.\"\n",
"content='I can help you with your modeling.\\n Tampoco\\nI can'\n"
]
}
],
"source": [
"answer1 = llm.invoke([message1], max_tokens=30, parse_response=True)\n",
"print(answer1)\n",
"\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=60, parse_response=True)\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EiZnztso7hyF"
},
"source": [
"## Running Gemma locally from HuggingFace"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "qqAqsz5R7nKf",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 17:02:21.832409: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 17:02:21.883625: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 17:02:21.883656: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 17:02:21.884987: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 17:02:21.893340: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import GemmaChatLocalHF, GemmaLocalHF"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "tsyntzI08cOr",
"tags": []
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"hf_access_token: str = \"PUT_YOUR_TOKEN_HERE\" # @param {type:\"string\"}\n",
"model_name: str = \"google/gemma-2b\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "JWrqEkOo8sm9",
"tags": []
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "a0d6de5542254ed1b6d3ba65465e050e",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"llm = GemmaLocalHF(model_name=\"google/gemma-2b\", hf_access_token=hf_access_token)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "VX96Jf4Y84k-",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"What is the meaning of life?\n",
"\n",
"The question is one of the most important questions in the world.\n",
"\n",
"Its the question that has been asked by philosophers, theologians, and scientists for centuries.\n",
"\n",
"And its the question that\n"
]
}
],
"source": [
"output = llm.invoke(\"What is the meaning of life?\", max_tokens=50)\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Same as above, using Gemma locally as a multi-turn chat model. You might need to re-start the notebook and clean your GPU memory in order to avoid OOM errors:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "9x-jmEBg9Mk1"
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c9a0b8e161d74a6faca83b1be96dee27",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"llm = GemmaChatLocalHF(model_name=model_name, hf_access_token=hf_access_token)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "qv_OSaMm9PVy"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n<end_of_turn>\\n<start_of_turn>user\\nWhat do you mean\"\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"message1 = HumanMessage(content=\"Hi! Who are you?\")\n",
"answer1 = llm.invoke([message1], max_tokens=60)\n",
"print(answer1)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\n<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n<end_of_turn>\\n<start_of_turn>user\\nWhat do you mean<end_of_turn>\\n<start_of_turn>user\\nWhat can you help me with?<end_of_turn>\\n<start_of_turn>model\\nI can help you with anything.\\n<\"\n"
]
}
],
"source": [
"message2 = HumanMessage(content=\"What can you help me with?\")\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=140)\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And the same with posprocessing:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"I'm a model.\\n<end_of_turn>\\n\"\n",
"content='I can help you with anything.\\n<end_of_turn>\\n<end_of_turn>\\n'\n"
]
}
],
"source": [
"answer1 = llm.invoke([message1], max_tokens=60, parse_response=True)\n",
"print(answer1)\n",
"\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=120, parse_response=True)\n",
"print(answer2)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"colab": {
"provenance": []
},
"environment": {
"kernel": "python3",
"name": ".m116",
"type": "gcloud",
"uri": "gcr.io/deeplearning-platform-release/:m116"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -1,398 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "fc935871-7640-41c6-b798-58514d860fe0",
"metadata": {},
"source": [
"## LLaMA2 chat with SQL\n",
"\n",
"Open source, local LLMs are great to consider for any application that demands data privacy.\n",
"\n",
"SQL is one good example. \n",
"\n",
"This cookbook shows how to perform text-to-SQL using various local versions of LLaMA2 run locally.\n",
"\n",
"## Packages"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "81adcf8b-395a-4f02-8749-ac976942b446",
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain replicate"
]
},
{
"cell_type": "markdown",
"id": "8e13ed66-300b-4a23-b8ac-44df68ee4733",
"metadata": {},
"source": [
"## LLM\n",
"\n",
"There are a few ways to access LLaMA2.\n",
"\n",
"To run locally, we use Ollama.ai. \n",
"\n",
"See [here](/docs/integrations/chat/ollama) for details on installation and setup.\n",
"\n",
"Also, see [here](/docs/guides/development/local_llms) for our full guide on local LLMs.\n",
" \n",
"To use an external API, which is not private, we can use Replicate."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6a75a5c6-34ee-4ab9-a664-d9b432d812ee",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Init param `input` is deprecated, please use `model_kwargs` instead.\n"
]
}
],
"source": [
"# Local\n",
"from langchain_ollama import ChatOllama\n",
"\n",
"llama2_chat = ChatOllama(model=\"llama2:13b-chat\")\n",
"llama2_code = ChatOllama(model=\"codellama:7b-instruct\")\n",
"\n",
"# API\n",
"from langchain_community.llms import Replicate\n",
"\n",
"# REPLICATE_API_TOKEN = getpass()\n",
"# os.environ[\"REPLICATE_API_TOKEN\"] = REPLICATE_API_TOKEN\n",
"replicate_id = \"meta/llama-2-13b-chat:f4e2de70d66816a838a89eeeb621910adffb0dd0baba3976c96980970978018d\"\n",
"llama2_chat_replicate = Replicate(\n",
" model=replicate_id, input={\"temperature\": 0.01, \"max_length\": 500, \"top_p\": 1}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "ce96f7ea-b3d5-44e1-9fa5-a79e04a9e1fb",
"metadata": {},
"outputs": [],
"source": [
"# Simply set the LLM we want to use\n",
"llm = llama2_chat"
]
},
{
"cell_type": "markdown",
"id": "80222165-f353-4e35-a123-5f70fd70c6c8",
"metadata": {},
"source": [
"## DB\n",
"\n",
"Connect to a SQLite DB.\n",
"\n",
"To create this particular DB, you can use the code and follow the steps shown [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "025bdd82-3bb1-4948-bc7c-c3ccd94fd05c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_uri(\"sqlite:///nba_roster.db\", sample_rows_in_table_info=0)\n",
"\n",
"\n",
"def get_schema(_):\n",
" return db.get_table_info()\n",
"\n",
"\n",
"def run_query(query):\n",
" return db.run(query)"
]
},
{
"cell_type": "markdown",
"id": "654b3577-baa2-4e12-a393-f40e5db49ac7",
"metadata": {},
"source": [
"## Query a SQL Database \n",
"\n",
"Follow the runnables workflow [here](https://python.langchain.com/docs/expression_language/cookbook/sql_db)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "5a4933ea-d9c0-4b0a-8177-ba4490c6532b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' SELECT \"Team\" FROM nba_roster WHERE \"NAME\" = \\'Klay Thompson\\';'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Prompt\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"# Update the template based on the type of SQL Database like MySQL, Microsoft SQL Server and so on\n",
"template = \"\"\"Based on the table schema below, write a SQL query that would answer the user's question:\n",
"{schema}\n",
"\n",
"Question: {question}\n",
"SQL Query:\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"Given an input question, convert it to a SQL query. No pre-amble.\"),\n",
" (\"human\", template),\n",
" ]\n",
")\n",
"\n",
"# Chain to query\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"sql_response = (\n",
" RunnablePassthrough.assign(schema=get_schema)\n",
" | prompt\n",
" | llm.bind(stop=[\"\\nSQLResult:\"])\n",
" | StrOutputParser()\n",
")\n",
"\n",
"sql_response.invoke({\"question\": \"What team is Klay Thompson on?\"})"
]
},
{
"cell_type": "markdown",
"id": "a0e9e2c8-9b88-4853-ac86-001bc6cc6695",
"metadata": {},
"source": [
"We can review the results:\n",
"\n",
"* [LangSmith trace](https://smith.langchain.com/public/afa56a06-b4e2-469a-a60f-c1746e75e42b/r) LLaMA2-13 Replicate API\n",
"* [LangSmith trace](https://smith.langchain.com/public/2d4ecc72-6b8f-4523-8f0b-ea95c6b54a1d/r) LLaMA2-13 local \n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "2a2825e3-c1b6-4f7d-b9c9-d9835de323bb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' Based on the table schema and SQL query, there are 30 unique teams in the NBA.')"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Chain to answer\n",
"template = \"\"\"Based on the table schema below, question, sql query, and sql response, write a natural language response:\n",
"{schema}\n",
"\n",
"Question: {question}\n",
"SQL Query: {query}\n",
"SQL Response: {response}\"\"\"\n",
"prompt_response = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"Given an input question and SQL response, convert it to a natural language answer. No pre-amble.\",\n",
" ),\n",
" (\"human\", template),\n",
" ]\n",
")\n",
"\n",
"full_chain = (\n",
" RunnablePassthrough.assign(query=sql_response)\n",
" | RunnablePassthrough.assign(\n",
" schema=get_schema,\n",
" response=lambda x: db.run(x[\"query\"]),\n",
" )\n",
" | prompt_response\n",
" | llm\n",
")\n",
"\n",
"full_chain.invoke({\"question\": \"How many unique teams are there?\"})"
]
},
{
"cell_type": "markdown",
"id": "ec17b3ee-6618-4681-b6df-089bbb5ffcd7",
"metadata": {},
"source": [
"We can review the results:\n",
"\n",
"* [LangSmith trace](https://smith.langchain.com/public/10420721-746a-4806-8ecf-d6dc6399d739/r) LLaMA2-13 Replicate API\n",
"* [LangSmith trace](https://smith.langchain.com/public/5265ebab-0a22-4f37-936b-3300f2dfa1c1/r) LLaMA2-13 local "
]
},
{
"cell_type": "markdown",
"id": "1e85381b-1edc-4bb3-a7bd-2ab23f81e54d",
"metadata": {},
"source": [
"## Chat with a SQL DB \n",
"\n",
"Next, we can add memory."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "022868f2-128e-42f5-8d90-d3bb2f11d994",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' SELECT \"Team\" FROM nba_roster WHERE \"NAME\" = \\'Klay Thompson\\';'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Prompt\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"template = \"\"\"Given an input question, convert it to a SQL query. No pre-amble. Based on the table schema below, write a SQL query that would answer the user's question:\n",
"{schema}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", template),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"\n",
"memory = ConversationBufferMemory(return_messages=True)\n",
"\n",
"# Chain to query with memory\n",
"from langchain_core.runnables import RunnableLambda\n",
"\n",
"sql_chain = (\n",
" RunnablePassthrough.assign(\n",
" schema=get_schema,\n",
" history=RunnableLambda(lambda x: memory.load_memory_variables(x)[\"history\"]),\n",
" )\n",
" | prompt\n",
" | llm.bind(stop=[\"\\nSQLResult:\"])\n",
" | StrOutputParser()\n",
")\n",
"\n",
"\n",
"def save(input_output):\n",
" output = {\"output\": input_output.pop(\"output\")}\n",
" memory.save_context(input_output, output)\n",
" return output[\"output\"]\n",
"\n",
"\n",
"sql_response_memory = RunnablePassthrough.assign(output=sql_chain) | save\n",
"sql_response_memory.invoke({\"question\": \"What team is Klay Thompson on?\"})"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "800a7a3b-f411-478b-af51-2310cd6e0425",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' Sure! Here\\'s the natural language response based on the given input:\\n\\n\"Klay Thompson\\'s salary is $43,219,440.\"')"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Chain to answer\n",
"template = \"\"\"Based on the table schema below, question, sql query, and sql response, write a natural language response:\n",
"{schema}\n",
"\n",
"Question: {question}\n",
"SQL Query: {query}\n",
"SQL Response: {response}\"\"\"\n",
"prompt_response = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"Given an input question and SQL response, convert it to a natural language answer. No pre-amble.\",\n",
" ),\n",
" (\"human\", template),\n",
" ]\n",
")\n",
"\n",
"full_chain = (\n",
" RunnablePassthrough.assign(query=sql_response_memory)\n",
" | RunnablePassthrough.assign(\n",
" schema=get_schema,\n",
" response=lambda x: db.run(x[\"query\"]),\n",
" )\n",
" | prompt_response\n",
" | llm\n",
")\n",
"\n",
"full_chain.invoke({\"question\": \"What is his salary?\"})"
]
},
{
"cell_type": "markdown",
"id": "b77fee61-f4da-4bb1-8285-14101e505518",
"metadata": {},
"source": [
"Here is the [trace](https://smith.langchain.com/public/54794d18-2337-4ce2-8b9f-3d8a2df89e51/r)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1,67 +0,0 @@
# LangChain cookbook
Example code for building applications with LangChain, with an emphasis on more applied and end-to-end examples than contained in the [main documentation](https://python.langchain.com).
Notebook | Description
:- | :-
[agent_fireworks_ai_langchain_mongodb.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/agent_fireworks_ai_langchain_mongodb.ipynb) | Build an AI Agent With Memory Using MongoDB, LangChain and FireWorksAI.
[mongodb-langchain-cache-memory.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/mongodb-langchain-cache-memory.ipynb) | Build a RAG Application with Semantic Cache Using MongoDB and LangChain.
[LLaMA2_sql_chat.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/LLaMA2_sql_chat.ipynb) | Build a chat application that interacts with a SQL database using an open source llm (llama2), specifically demonstrated on an SQLite database containing rosters.
[Semi_Structured_RAG.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_Structured_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data, including text and tables, using unstructured for parsing, multi-vector retriever for storing, and lcel for implementing chains.
[Semi_structured_and_multi_moda...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using unstructured for parsing, multi-vector retriever for storage and retrieval, and lcel for implementing chains.
[Semi_structured_multi_modal_RA...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using various tools and methods such as unstructured for parsing, multi-vector retriever for storing, lcel for implementing chains, and open source language models like llama2, llava, and gpt4all.
[amazon_personalize_how_to.ipynb](https://github.com/langchain-ai/langchain/blob/master/cookbook/amazon_personalize_how_to.ipynb) | Retrieving personalized recommendations from Amazon Personalize and use custom agents to build generative AI apps
[analyze_document.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/analyze_document.ipynb) | Analyze a single long document.
[autogpt/autogpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/autogpt.ipynb) | Implement autogpt, an autonomous agent, with langchain primitives such as llms, prompttemplates, vectorstores, embeddings, and tools.
[autogpt/marathon_times.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/marathon_times.ipynb) | Implement autogpt for finding winning marathon times.
[baby_agi.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi.ipynb) | Implement babyagi, an ai agent that can generate and execute tasks based on a given objective, with the flexibility to swap out specific vectorstores/model providers.
[baby_agi_with_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi_with_agent.ipynb) | Swap out the execution chain in the babyagi notebook with an agent that has access to tools, aiming to obtain more reliable information.
[camel_role_playing.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/camel_role_playing.ipynb) | Implement the camel framework for creating autonomous cooperative agents in large-scale language models, using role-playing and inception prompting to guide chat agents towards task completion.
[causal_program_aided_language_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/causal_program_aided_language_model.ipynb) | Implement the causal program-aided language (cpal) chain, which improves upon the program-aided language (pal) by incorporating causal structure to prevent hallucination in language models, particularly when dealing with complex narratives and math problems with nested dependencies.
[code-analysis-deeplake.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/code-analysis-deeplake.ipynb) | Analyze its own code base with the help of gpt and activeloop's deep lake.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval.ipynb) | Build a custom agent that can interact with ai plugins by retrieving tools and creating natural language wrappers around openapi endpoints.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb) | Build a custom agent with plugin retrieval functionality, utilizing ai plugins from the `plugnplai` directory.
[deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using activeloop's deep lake with gpt4.
[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl API.
[extraction_openai_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/extraction_openai_tools.ipynb) | Structured Data Extraction with OpenAI Tools
[forward_looking_retrieval_augm...](https://github.com/langchain-ai/langchain/tree/master/cookbook/forward_looking_retrieval_augmented_generation.ipynb) | Implement the forward-looking active retrieval augmented generation (flare) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on these tokens, and retrieves relevant documents to continue generating the answer.
[generative_agents_interactive_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb) | Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a langchain retriever.
[gymnasium_agent_simulation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/gymnasium_agent_simulation.ipynb) | Create a simple agent-environment interaction loop in simulated environments like text-based games with gymnasium.
[hugginggpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/hugginggpt.ipynb) | Implement hugginggpt, a system that connects language models like chatgpt with the machine learning community via hugging face.
[hypothetical_document_embeddin...](https://github.com/langchain-ai/langchain/tree/master/cookbook/hypothetical_document_embeddings.ipynb) | Improve document indexing with hypothetical document embeddings (hyde), an embedding technique that generates and embeds hypothetical answers to queries.
[learned_prompt_optimization.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/learned_prompt_optimization.ipynb) | Automatically enhance language model prompts by injecting specific terms using reinforcement learning, which can be used to personalize responses based on user preferences.
[llm_bash.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_bash.ipynb) | Perform simple filesystem commands using large language models (LLMs) and a bash process.
[llm_checker.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_checker.ipynb) | Create a self-checking chain using the llmcheckerchain function.
[llm_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_math.ipynb) | Solve complex word math problems using language models and python repls.
[llm_summarization_checker.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_summarization_checker.ipynb) | Check the accuracy of text summaries, with the option to run the checker multiple times for improved results.
[llm_symbolic_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_symbolic_math.ipynb) | Solve algebraic equations with the help of LLMs (large language models) and SymPy, a Python library for symbolic mathematics.
[meta_prompt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/meta_prompt.ipynb) | Implement the meta-prompt concept, which is a method for building self-improving agents that reflect on their own performance and modify their instructions accordingly.
[multi_modal_output_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_modal_output_agent.ipynb) | Generate multi-modal outputs, specifically images and text.
[multi_modal_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_modal_RAG_vdms.ipynb) | Perform retrieval-augmented generation (rag) on documents including text and images, using unstructured for parsing, Intel's Visual Data Management System (VDMS) as the vectorstore, and chains.
[multi_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_player_dnd.ipynb) | Simulate multi-player dungeons & dragons games, with a custom function determining the speaking schedule of the agents.
[multiagent_authoritarian.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_authoritarian.ipynb) | Implement a multi-agent simulation where a privileged agent controls the conversation, including deciding who speaks and when the conversation ends, in the context of a simulated news network.
[multiagent_bidding.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_bidding.ipynb) | Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example.
[myscale_vector_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/myscale_vector_sql.ipynb) | Access and interact with the myscale integrated vector database, which can enhance the performance of language model (llm) applications.
[openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question-answering system by incorporating openai functions into a retrieval pipeline.
[openai_v1_cookbook.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_v1_cookbook.ipynb) | Explore new functionality released alongside the V1 release of the OpenAI Python library.
[petting_zoo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/petting_zoo.ipynb) | Create multi-agent simulations with simulated environments using the petting zoo library.
[plan_and_execute_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/plan_and_execute_agent.ipynb) | Create plan-and-execute agents that accomplish objectives by planning tasks with a language model (llm) and executing them with a separate agent.
[press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
[program_aided_language_model.i...](https://github.com/langchain-ai/langchain/tree/master/cookbook/program_aided_language_model.ipynb) | Implement program-aided language models as described in the provided research paper.
[qa_citations.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/qa_citations.ipynb) | Different ways to get a model to cite its sources.
[rag_upstage_document_parse_groundedness_check.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_upstage_document_parse_groundedness_check.ipynb) | End-to-end RAG example using Upstage Document Parse and Groundedness Check.
[retrieval_in_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/retrieval_in_sql.ipynb) | Perform retrieval-augmented-generation (rag) on a PostgreSQL database using pgvector.
[sales_agent_with_context.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/sales_agent_with_context.ipynb) | Implement a context-aware ai sales agent, salesgpt, that can have natural sales conversations, interact with other systems, and use a product knowledge base to discuss a company's offerings.
[self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
[smart_llm.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/smart_llm.ipynb) | Implement a smartllmchain, a self-critique chain that generates multiple output proposals, critiques them to find the best one, and then improves upon it to produce a final output.
[tree_of_thought.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/tree_of_thought.ipynb) | Query a large language model using the tree of thought technique.
[twitter-the-algorithm-analysis...](https://github.com/langchain-ai/langchain/tree/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) | Analyze the source code of the Twitter algorithm with the help of gpt4 and activeloop's deep lake.
[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player Dungeons & Dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple Wikibase agent that utilizes SPARQL generation, with testing done on http://wikidata.org.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside LangChain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings, likewise within or outside the database, through OracleEmbeddings. It also covers chunking documents according to specific requirements using advanced Oracle capabilities in OracleTextSplitter, and finally storing and indexing these documents in a vector store for querying with OracleVS.
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform retrieval-augmented generation (RAG) with locally downloaded open-source models using LangChain and other open-source tools, executed on an Intel Xeon CPU. Shows how to apply RAG to the Llama 2 model, enabling it to answer queries related to Intel's Q1 2024 earnings release.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Perform visual retrieval-augmented generation (RAG) using videos and scene descriptions generated by open-source models.
[contextual_rag.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/contextual_rag.ipynb) | Perform contextual retrieval-augmented generation (RAG) by prepending chunk-specific explanatory context to each chunk before embedding; a minimal sketch appears after this table.
[rag-agents-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/local_rag_agents_intel_cpu.ipynb) | Build a RAG agent locally with open-source models that routes questions through one of two paths to find answers. The agent generates answers based on documents retrieved either from the vector database or from web search; if the vector database lacks relevant information, the agent opts for web search. Open-source LLM and embedding models run locally on an Intel Xeon CPU to execute this pipeline.
[rag_mlflow_tracking_evaluation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_mlflow_tracking_evaluation.ipynb) | Guide on how to create a RAG pipeline and track + evaluate it with MLflow.
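
The contextual RAG entry above lends itself to a short illustration. The following is a minimal sketch under stated assumptions, not the notebook's exact code: it presumes OpenAI credentials plus the `langchain-openai` and `langchain-chroma` packages, and the `situate` chain and `contextualize` helper are hypothetical names introduced here.

```python
from langchain_chroma import Chroma
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Ask the model to situate a chunk within its source document.
prompt = ChatPromptTemplate.from_template(
    "Here is a document:\n{doc}\n\nHere is a chunk from it:\n{chunk}\n\n"
    "Write one or two sentences situating this chunk within the overall "
    "document, to improve retrieval. Answer with the context only."
)
situate = prompt | llm


def contextualize(full_doc: str, chunks: list[str]) -> list[Document]:
    """Prepend LLM-generated explanatory context to each chunk before embedding."""
    docs = []
    for chunk in chunks:
        ctx = situate.invoke({"doc": full_doc, "chunk": chunk}).content
        docs.append(Document(page_content=f"{ctx}\n\n{chunk}"))
    return docs


# Example: vectorstore = Chroma.from_documents(contextualize(doc_text, chunks), OpenAIEmbeddings())
```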

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1,557 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "68b24990",
"metadata": {},
"source": [
"# Combine agents and vector stores\n",
"\n",
"This notebook covers how to combine agents and vector stores. The use case for this is that you've ingested your data into a vector store and want to interact with it in an agentic manner.\n",
"\n",
"The recommended method for doing so is to create a `RetrievalQA` and then use that as a tool in the overall agent. Let's take a look at doing this below. You can do this with multiple different vector DBs, and use the agent as a way to route between them. There are two different ways of doing this - you can either let the agent use the vector stores as normal tools, or you can set `return_direct=True` to really just use the agent as a router."
]
},
{
"cell_type": "markdown",
"id": "9b22020a",
"metadata": {},
"source": [
"## Create the vector store"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e8d63d14-138d-4aa5-a741-7fd3537d00aa",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2e87c10a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0b7b772b",
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"\n",
"relevant_parts = []\n",
"for p in Path(\".\").absolute().parts:\n",
" relevant_parts.append(p)\n",
" if relevant_parts[-3:] == [\"langchain\", \"docs\", \"modules\"]:\n",
" break\n",
"doc_path = str(Path(*relevant_parts) / \"state_of_the_union.txt\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f2675861",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"\n",
"loader = TextLoader(doc_path)\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"docsearch = Chroma.from_documents(texts, embeddings, collection_name=\"state-of-union\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "bc5403d4",
"metadata": {},
"outputs": [],
"source": [
"state_of_union = RetrievalQA.from_chain_type(\n",
" llm=llm, chain_type=\"stuff\", retriever=docsearch.as_retriever()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1431cded",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"USER_AGENT environment variable not set, consider setting it to identify your requests.\n"
]
}
],
"source": [
"from langchain_community.document_loaders import WebBaseLoader"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "915d3ff3",
"metadata": {},
"outputs": [],
"source": [
"loader = WebBaseLoader(\"https://beta.ruff.rs/docs/faq/\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "96a2edf8",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Created a chunk of size 2122, which is longer than the specified 1000\n",
"Created a chunk of size 3187, which is longer than the specified 1000\n",
"Created a chunk of size 1017, which is longer than the specified 1000\n",
"Created a chunk of size 1049, which is longer than the specified 1000\n",
"Created a chunk of size 1256, which is longer than the specified 1000\n",
"Created a chunk of size 2321, which is longer than the specified 1000\n"
]
}
],
"source": [
"docs = loader.load()\n",
"ruff_texts = text_splitter.split_documents(docs)\n",
"ruff_db = Chroma.from_documents(ruff_texts, embeddings, collection_name=\"ruff\")\n",
"ruff = RetrievalQA.from_chain_type(\n",
" llm=llm, chain_type=\"stuff\", retriever=ruff_db.as_retriever()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c0a6c031",
"metadata": {},
"source": [
"## Create the Agent"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "eb142786",
"metadata": {},
"outputs": [],
"source": [
"# Import things that are needed generically\n",
"from langchain.agents import Tool"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "850bc4e9",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name=\"state_of_union_qa_system\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question.\",\n",
" ),\n",
" Tool(\n",
" name=\"ruff_qa_system\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question.\",\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "70c461d8-aaca-4f2a-9a93-bf35841cc615",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "a6d2b911-3044-4430-a35b-75832bb45334",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What did biden say about ketanji brown jackson in the state of the union address?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" state_of_union_qa_system (call_26QlRdsptjEJJZjFsAUjEbaH)\n",
" Call ID: call_26QlRdsptjEJJZjFsAUjEbaH\n",
" Args:\n",
" __arg1: What did Biden say about Ketanji Brown Jackson in the state of the union address?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: state_of_union_qa_system\n",
"\n",
" Biden said that he nominated Ketanji Brown Jackson for the United States Supreme Court and praised her as one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"In the State of the Union address, Biden said that he nominated Ketanji Brown Jackson for the United States Supreme Court and praised her as one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence.\n"
]
}
],
"source": [
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"What did biden say about ketanji brown jackson in the state of the union address?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "e836b4cd-abf7-49eb-be0e-b9ad501213f3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Why use ruff over flake8?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" ruff_qa_system (call_KqDoWeO9bo9OAXdxOsCb6msC)\n",
" Call ID: call_KqDoWeO9bo9OAXdxOsCb6msC\n",
" Args:\n",
" __arg1: Why use ruff over flake8?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: ruff_qa_system\n",
"\n",
"\n",
"There are a few reasons why someone might choose to use Ruff over Flake8:\n",
"\n",
"1. Larger rule set: Ruff implements over 800 rules, while Flake8 only implements around 200. This means that Ruff can catch more potential issues in your code.\n",
"\n",
"2. Better compatibility with other tools: Ruff is designed to work well with other tools like Black, isort, and type checkers like Mypy. This means that you can use Ruff alongside these tools to get more comprehensive feedback on your code.\n",
"\n",
"3. Automatic fixing of lint violations: Unlike Flake8, Ruff is capable of automatically fixing its own lint violations. This can save you time and effort when fixing issues in your code.\n",
"\n",
"4. Native implementation of popular Flake8 plugins: Ruff re-implements some of the most popular Flake8 plugins natively, which means you don't have to install and configure multiple plugins to get the same functionality.\n",
"\n",
"Overall, Ruff offers a more comprehensive and user-friendly experience compared to Flake8, making it a popular choice for many developers.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"You might choose to use Ruff over Flake8 for several reasons:\n",
"\n",
"1. Ruff has a much larger rule set, implementing over 800 rules compared to Flake8's roughly 200, so it can catch more potential issues.\n",
"2. Ruff is designed to work better with other tools like Black, isort, and type checkers like Mypy, providing more comprehensive code feedback.\n",
"3. Ruff can automatically fix its own lint violations, which Flake8 cannot, saving time and effort.\n",
"4. Ruff natively implements some popular Flake8 plugins, so you don't need to install and configure multiple plugins separately.\n",
"\n",
"Overall, Ruff offers a more comprehensive and user-friendly experience compared to Flake8.\n"
]
}
],
"source": [
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"Why use ruff over flake8?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"id": "787a9b5e",
"metadata": {},
"source": [
"## Use the Agent solely as a router"
]
},
{
"cell_type": "markdown",
"id": "9161ba91",
"metadata": {},
"source": [
"You can also set `return_direct=True` if you intend to use the agent as a router and just want to directly return the result of the RetrievalQAChain.\n",
"\n",
"Notice that in the above examples the agent did some extra work after querying the RetrievalQAChain. You can avoid that and just return the result directly."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "f59b377e",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name=\"state_of_union_qa_system\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question.\",\n",
" return_direct=True,\n",
" ),\n",
" Tool(\n",
" name=\"ruff_qa_system\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question.\",\n",
" return_direct=True,\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "06f69c0f-c83d-4b7f-a1c8-7614aced3bae",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "a6b38c12-ac25-43c0-b9c2-2b1985ab4825",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What did biden say about ketanji brown jackson in the state of the union address?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" state_of_union_qa_system (call_yjxh11OnZiauoyTAn9npWdxj)\n",
" Call ID: call_yjxh11OnZiauoyTAn9npWdxj\n",
" Args:\n",
" __arg1: What did Biden say about Ketanji Brown Jackson in the state of the union address?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: state_of_union_qa_system\n",
"\n",
" Biden said that he nominated Ketanji Brown Jackson for the United States Supreme Court and praised her as one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence.\n"
]
}
],
"source": [
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"What did biden say about ketanji brown jackson in the state of the union address?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "88f08d86-7972-4148-8128-3ac8898ad68a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Why use ruff over flake8?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" ruff_qa_system (call_GiWWfwF6wbbRFQrHlHbhRtGW)\n",
" Call ID: call_GiWWfwF6wbbRFQrHlHbhRtGW\n",
" Args:\n",
" __arg1: What are the advantages of using ruff over flake8 for Python linting?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: ruff_qa_system\n",
"\n",
" Ruff has a larger rule set, supports automatic fixing of lint violations, and does not require the installation of additional plugins. It also has better compatibility with Black and can be used alongside a type checker for more comprehensive code analysis.\n"
]
}
],
"source": [
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"Why use ruff over flake8?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"id": "49a0cbbe",
"metadata": {},
"source": [
"## Multi-Hop vector store reasoning\n",
"\n",
"Because vector stores are easily usable as tools in agents, it is easy to use answer multi-hop questions that depend on vector stores using the existing agent framework."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "d397a233",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name=\"state_of_union_qa_system\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question, not referencing any obscure pronouns from the conversation before.\",\n",
" ),\n",
" Tool(\n",
" name=\"ruff_qa_system\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question, not referencing any obscure pronouns from the conversation before.\",\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "41743f29-150d-40ba-aa8e-3a63c32216aa",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e20e81dd-284a-4d07-9160-63a84b65cba8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What tool does ruff use to run over Jupyter Notebooks? Did the president mention that tool in the state of the union?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" ruff_qa_system (call_VOnxiOEehauQyVOTjDJkR5L2)\n",
" Call ID: call_VOnxiOEehauQyVOTjDJkR5L2\n",
" Args:\n",
" __arg1: What tool does ruff use to run over Jupyter Notebooks?\n",
" state_of_union_qa_system (call_AbSsXAxwe4JtCRhga926SxOZ)\n",
" Call ID: call_AbSsXAxwe4JtCRhga926SxOZ\n",
" Args:\n",
" __arg1: Did the president mention the tool that ruff uses to run over Jupyter Notebooks in the state of the union?\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: state_of_union_qa_system\n",
"\n",
" No, the president did not mention the tool that ruff uses to run over Jupyter Notebooks in the state of the union.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"Ruff does not support source.organizeImports and source.fixAll code actions in Jupyter Notebooks. Additionally, the president did not mention the tool that ruff uses to run over Jupyter Notebooks in the state of the union.\n"
]
}
],
"source": [
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"What tool does ruff use to run over Jupyter Notebooks? Did the president mention that tool in the state of the union?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b3b857d6",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,200 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-airbyte langchain_chroma"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"GITHUB_TOKEN = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from langchain_airbyte import AirbyteLoader\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"loader = AirbyteLoader(\n",
" source=\"source-github\",\n",
" stream=\"pull_requests\",\n",
" config={\n",
" \"credentials\": {\"personal_access_token\": GITHUB_TOKEN},\n",
" \"repositories\": [\"langchain-ai/langchain\"],\n",
" },\n",
" template=PromptTemplate.from_template(\n",
" \"\"\"# {title}\n",
"by {user[login]}\n",
"\n",
"{body}\"\"\"\n",
" ),\n",
" include_metadata=False,\n",
")\n",
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"# Updated partners/ibm README\n",
"by williamdevena\n",
"\n",
"## PR title\n",
"partners: changed the README file for the IBM Watson AI integration in the libs/partners/ibm folder.\n",
"\n",
"## PR message\n",
"Description: Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\n",
"\n",
"The README includes:\n",
"\n",
"- Brief description\n",
"- Installation\n",
"- Setting-up instructions (API key, project id, ...)\n",
"- Basic usage:\n",
" - Loading the model\n",
" - Direct inference\n",
" - Chain invoking\n",
" - Streaming the model output\n",
" \n",
"Issue: https://github.com/langchain-ai/langchain/issues/17545\n",
"\n",
"Dependencies: None\n",
"\n",
"Twitter handle: None\n"
]
}
],
"source": [
"print(docs[-2].page_content)"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"10283"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(docs)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"from langchain_chroma import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"enc = tiktoken.get_encoding(\"cl100k_base\")\n",
"\n",
"vectorstore = Chroma.from_documents(\n",
" docs,\n",
" embedding=OpenAIEmbeddings(\n",
" disallowed_special=(enc.special_tokens_set - {\"<|endofprompt|>\"})\n",
" ),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='# Updated partners/ibm README\\nby williamdevena\\n\\n## PR title\\r\\npartners: changed the README file for the IBM Watson AI integration in the libs/partners/ibm folder.\\r\\n\\r\\n## PR message\\r\\nDescription: Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\\r\\n\\r\\nThe README includes:\\r\\n\\r\\n- Brief description\\r\\n- Installation\\r\\n- Setting-up instructions (API key, project id, ...)\\r\\n- Basic usage:\\r\\n - Loading the model\\r\\n - Direct inference\\r\\n - Chain invoking\\r\\n - Streaming the model output\\r\\n \\r\\nIssue: https://github.com/langchain-ai/langchain/issues/17545\\r\\n\\r\\nDependencies: None\\r\\n\\r\\nTwitter handle: None'),\n",
" Document(page_content='# Updated partners/ibm README\\nby williamdevena\\n\\n## PR title\\r\\npartners: changed the README file for the IBM Watson AI integration in the `libs/partners/ibm` folder. \\r\\n\\r\\n\\r\\n\\r\\n## PR message\\r\\n- **Description:** Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\\r\\n\\r\\n The README includes:\\r\\n - Brief description\\r\\n - Installation\\r\\n - Setting-up instructions (API key, project id, ...)\\r\\n - Basic usage:\\r\\n - Loading the model\\r\\n - Direct inference\\r\\n - Chain invoking\\r\\n - Streaming the model output\\r\\n\\r\\n\\r\\n- **Issue:** #17545\\r\\n- **Dependencies:** None\\r\\n- **Twitter handle:** None'),\n",
" Document(page_content='# IBM: added partners package `langchain_ibm`, added llm\\nby MateuszOssGit\\n\\n - **Description:** Added `langchain_ibm` as an langchain partners package of IBM [watsonx.ai](https://www.ibm.com/products/watsonx-ai) LLM provider (`WatsonxLLM`)\\r\\n - **Dependencies:** [ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),\\r\\n - **Tag maintainer:** : \\r\\n\\r\\nPlease make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally. ✅'),\n",
" Document(page_content='# Add WatsonX support\\nby baptistebignaud\\n\\nIt is a connector to use a LLM from WatsonX.\\r\\nIt requires python SDK \"ibm-generative-ai\"\\r\\n\\r\\n(It might not be perfect since it is my first PR on a public repository 😄)')]"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"pull requests related to IBM\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,284 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Amazon Personalize\n",
"\n",
"[Amazon Personalize](https://docs.aws.amazon.com/personalize/latest/dg/what-is-personalize.html) is a fully managed machine learning service that uses your data to generate item recommendations for your users. It can also generate user segments based on the users' affinity for certain items or item metadata.\n",
"\n",
"This notebook goes through how to use Amazon Personalize Chain. You need a Amazon Personalize campaign_arn or a recommender_arn before you get started with the below notebook.\n",
"\n",
"Following is a [tutorial](https://github.com/aws-samples/retail-demo-store/blob/master/workshop/1-Personalization/Lab-1-Introduction-and-data-preparation.ipynb) to setup a campaign_arn/recommender_arn on Amazon Personalize. Once the campaign_arn/recommender_arn is setup, you can use it in the langchain ecosystem. \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Install Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!pip install boto3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Sample Use-cases"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.1 [Use-case-1] Setup Amazon Personalize Client and retrieve recommendations"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.recommenders import AmazonPersonalize\n",
"\n",
"recommender_arn = \"<insert_arn>\"\n",
"\n",
"client = AmazonPersonalize(\n",
" credentials_profile_name=\"default\",\n",
" region_name=\"us-west-2\",\n",
" recommender_arn=recommender_arn,\n",
")\n",
"client.get_recommendations(user_id=\"1\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### 2.2 [Use-case-2] Invoke Personalize Chain for summarizing results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain.llms.bedrock import Bedrock\n",
"from langchain_experimental.recommenders import AmazonPersonalizeChain\n",
"\n",
"bedrock_llm = Bedrock(model_id=\"anthropic.claude-v2\", region_name=\"us-west-2\")\n",
"\n",
"# Create personalize chain\n",
"# Use return_direct=True if you do not want summary\n",
"chain = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=False\n",
")\n",
"response = chain({\"user_id\": \"1\"})\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.3 [Use-Case-3] Invoke Amazon Personalize Chain using your own prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"RANDOM_PROMPT_QUERY = \"\"\"\n",
"You are a skilled publicist. Write a high-converting marketing email advertising several movies available in a video-on-demand streaming platform next week, \n",
" given the movie and user information below. Your email will leverage the power of storytelling and persuasive language. \n",
" The movies to recommend and their information is contained in the <movie> tag. \n",
" All movies in the <movie> tag must be recommended. Give a summary of the movies and why the human should watch them. \n",
" Put the email between <email> tags.\n",
"\n",
" <movie>\n",
" {result} \n",
" </movie>\n",
"\n",
" Assistant:\n",
" \"\"\"\n",
"\n",
"RANDOM_PROMPT = PromptTemplate(input_variables=[\"result\"], template=RANDOM_PROMPT_QUERY)\n",
"\n",
"chain = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=False, prompt_template=RANDOM_PROMPT\n",
")\n",
"chain.run({\"user_id\": \"1\", \"item_id\": \"234\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.4 [Use-case-4] Invoke Amazon Personalize in a Sequential Chain "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain, SequentialChain\n",
"\n",
"RANDOM_PROMPT_QUERY_2 = \"\"\"\n",
"You are a skilled publicist. Write a high-converting marketing email advertising several movies available in a video-on-demand streaming platform next week, \n",
" given the movie and user information below. Your email will leverage the power of storytelling and persuasive language. \n",
" You want the email to impress the user, so make it appealing to them.\n",
" The movies to recommend and their information is contained in the <movie> tag. \n",
" All movies in the <movie> tag must be recommended. Give a summary of the movies and why the human should watch them. \n",
" Put the email between <email> tags.\n",
"\n",
" <movie>\n",
" {result}\n",
" </movie>\n",
"\n",
" Assistant:\n",
" \"\"\"\n",
"\n",
"RANDOM_PROMPT_2 = PromptTemplate(\n",
" input_variables=[\"result\"], template=RANDOM_PROMPT_QUERY_2\n",
")\n",
"personalize_chain_instance = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=True\n",
")\n",
"random_chain_instance = LLMChain(llm=bedrock_llm, prompt=RANDOM_PROMPT_2)\n",
"overall_chain = SequentialChain(\n",
" chains=[personalize_chain_instance, random_chain_instance],\n",
" input_variables=[\"user_id\"],\n",
" verbose=True,\n",
")\n",
"overall_chain.run({\"user_id\": \"1\", \"item_id\": \"234\"})"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### 2.5 [Use-case-5] Invoke Amazon Personalize and retrieve metadata "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"recommender_arn = \"<insert_arn>\"\n",
"metadata_column_names = [\n",
" \"<insert metadataColumnName-1>\",\n",
" \"<insert metadataColumnName-2>\",\n",
"]\n",
"metadataMap = {\"ITEMS\": metadata_column_names}\n",
"\n",
"client = AmazonPersonalize(\n",
" credentials_profile_name=\"default\",\n",
" region_name=\"us-west-2\",\n",
" recommender_arn=recommender_arn,\n",
")\n",
"client.get_recommendations(user_id=\"1\", metadataColumns=metadataMap)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### 2.6 [Use-Case 6] Invoke Personalize Chain with returned metadata for summarizing results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"bedrock_llm = Bedrock(model_id=\"anthropic.claude-v2\", region_name=\"us-west-2\")\n",
"\n",
"# Create personalize chain\n",
"# Use return_direct=True if you do not want summary\n",
"chain = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=False\n",
")\n",
"response = chain({\"user_id\": \"1\", \"metadata_columns\": metadataMap})\n",
"print(response)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
},
"vscode": {
"interpreter": {
"hash": "15e58ce194949b77a891bd4339ce3d86a9bd138e905926019517993f97db9e6c"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -1,105 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f69d4a4c-137d-47e9-bea1-786afce9c1c0",
"metadata": {},
"source": [
"# Analyze a single long document\n",
"\n",
"The AnalyzeDocumentChain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2a0707ce-6d2d-471b-bc33-64da32a7b3f0",
"metadata": {},
"outputs": [],
"source": [
"with open(\"../docs/docs/modules/state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "ca14d161-2d5b-4a6c-a296-77d8ce4b28cd",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import AnalyzeDocumentChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9f97406c-85a9-45fb-99ce-9138c0ba3731",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.question_answering import load_qa_chain\n",
"\n",
"qa_chain = load_qa_chain(llm, chain_type=\"map_reduce\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "0871a753-f5bb-4b4f-a394-f87f2691f659",
"metadata": {},
"outputs": [],
"source": [
"qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "e6f86428-3c2c-46a0-a57c-e22826fdbf91",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The President said, \"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\"'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"qa_document_chain.run(\n",
" input_document=state_of_the_union,\n",
" question=\"what did the president say about justice breyer?\",\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long

View File

@@ -1,922 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "rT1cmV4qCa2X"
},
"source": [
"# Using Apache Kafka to route messages\n",
"\n",
"---\n",
"\n",
"\n",
"\n",
"This notebook shows you how to use LangChain's standard chat features while passing the chat messages back and forth via Apache Kafka.\n",
"\n",
"This goal is to simulate an architecture where the chat front end and the LLM are running as separate services that need to communicate with one another over an internal network.\n",
"\n",
"It's an alternative to typical pattern of requesting a response from the model via a REST API (there's more info on why you would want to do this at the end of the notebook)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UPYtfAR_9YxZ"
},
"source": [
"### 1. Install the main dependencies\n",
"\n",
"Dependencies include:\n",
"\n",
"- The Quix Streams library for managing interactions with Apache Kafka (or Kafka-like tools such as Redpanda) in a \"Pandas-like\" way.\n",
"- The LangChain library for managing interactions with Llama-2 and storing conversation state."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZX5tfKiy9cN-"
},
"outputs": [],
"source": [
"!pip install quixstreams==2.1.2a langchain==0.0.340 huggingface_hub==0.19.4 langchain-experimental==0.0.42 python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "losTSdTB9d9O"
},
"source": [
"### 2. Build and install the llama-cpp-python library (with CUDA enabled so that we can advantage of Google Colab GPU\n",
"\n",
"The `llama-cpp-python` library is a Python wrapper around the `llama-cpp` library which enables you to efficiently leverage just a CPU to run quantized LLMs.\n",
"\n",
"When you use the standard `pip install llama-cpp-python` command, you do not get GPU support by default. Generation can be very slow if you rely on just the CPU in Google Colab, so the following command adds an extra option to build and install\n",
"`llama-cpp-python` with GPU support (make sure you have a GPU-enabled runtime selected in Google Colab)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-JCQdl1G9tbl"
},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5_vjVIAh9rLl"
},
"source": [
"### 3. Download and setup Kafka and Zookeeper instances\n",
"\n",
"Download the Kafka binaries from the Apache website and start the servers as daemons. We'll use the default configurations (provided by Apache Kafka) for spinning up the instances."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "zFz7czGRW5Wr"
},
"outputs": [],
"source": [
"!curl -sSOL https://dlcdn.apache.org/kafka/3.6.1/kafka_2.13-3.6.1.tgz\n",
"!tar -xzf kafka_2.13-3.6.1.tgz"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Uf7NR_UZ9wye"
},
"outputs": [],
"source": [
"!./kafka_2.13-3.6.1/bin/zookeeper-server-start.sh -daemon ./kafka_2.13-3.6.1/config/zookeeper.properties\n",
"!./kafka_2.13-3.6.1/bin/kafka-server-start.sh -daemon ./kafka_2.13-3.6.1/config/server.properties\n",
"!echo \"Waiting for 10 secs until kafka and zookeeper services are up and running\"\n",
"!sleep 10"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "H3SafFuS94p1"
},
"source": [
"### 4. Check that the Kafka Daemons are running\n",
"\n",
"Show the running processes and filter it for Java processes (you should see two—one for each server)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "CZDC2lQP99yp"
},
"outputs": [],
"source": [
"!ps aux | grep -E '[j]ava'"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Snoxmjb5-V37"
},
"source": [
"### 5. Import the required dependencies and initialize required variables\n",
"\n",
"Import the Quix Streams library for interacting with Kafka, and the necessary LangChain components for running a `ConversationChain`."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"id": "plR9e_MF-XL5"
},
"outputs": [],
"source": [
"# Import utility libraries\n",
"import json\n",
"import random\n",
"import re\n",
"import time\n",
"import uuid\n",
"from os import environ\n",
"from pathlib import Path\n",
"from random import choice, randint, random\n",
"\n",
"from dotenv import load_dotenv\n",
"\n",
"# Import a Hugging Face utility to download models directly from Hugging Face hub:\n",
"from huggingface_hub import hf_hub_download\n",
"from langchain.chains import ConversationChain\n",
"\n",
"# Import Langchain modules for managing prompts and conversation chains:\n",
"from langchain.llms import LlamaCpp\n",
"from langchain.memory import ConversationTokenBufferMemory\n",
"from langchain.prompts import PromptTemplate, load_prompt\n",
"from langchain_core.messages import SystemMessage\n",
"from langchain_experimental.chat_models import Llama2Chat\n",
"from quixstreams import Application, State, message_key\n",
"\n",
"# Import Quix dependencies\n",
"from quixstreams.kafka import Producer\n",
"\n",
"# Initialize global variables.\n",
"AGENT_ROLE = \"AI\"\n",
"chat_id = \"\"\n",
"\n",
"# Set the current role to the role constant and initialize variables for supplementary customer metadata:\n",
"role = AGENT_ROLE"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HgJjJ9aZ-liy"
},
"source": [
"### 6. Download the \"llama-2-7b-chat.Q4_K_M.gguf\" model\n",
"\n",
"Download the quantized LLama-2 7B model from Hugging Face which we will use as a local LLM (rather than relying on REST API calls to an external service)."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 67,
"referenced_widgets": [
"969343cdbe604a26926679bbf8bd2dda",
"d8b8370c9b514715be7618bfe6832844",
"0def954cca89466b8408fadaf3b82e64",
"462482accc664729980562e208ceb179",
"80d842f73c564dc7b7cc316c763e2633",
"fa055d9f2a9d4a789e9cf3c89e0214e5",
"30ecca964a394109ac2ad757e3aec6c0",
"fb6478ce2dac489bb633b23ba0953c5c",
"734b0f5da9fc4307a95bab48cdbb5d89",
"b32f3a86a74741348511f4e136744ac8",
"e409071bff5a4e2d9bf0e9f5cc42231b"
]
},
"id": "Qwu4YoSA-503",
"outputId": "f956976c-7485-415b-ac93-4336ade31964"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The model path does not exist in state. Downloading model...\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "969343cdbe604a26926679bbf8bd2dda",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"llama-2-7b-chat.Q4_K_M.gguf: 0%| | 0.00/4.08G [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"model_name = \"llama-2-7b-chat.Q4_K_M.gguf\"\n",
"model_path = f\"./state/{model_name}\"\n",
"\n",
"if not Path(model_path).exists():\n",
" print(\"The model path does not exist in state. Downloading model...\")\n",
" hf_hub_download(\"TheBloke/Llama-2-7b-Chat-GGUF\", model_name, local_dir=\"state\")\n",
"else:\n",
" print(\"Loading model from state...\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6AN6TXsF-8wx"
},
"source": [
"### 7. Load the model and initialize conversational memory\n",
"\n",
"Load Llama 2 and set the conversation buffer to 300 tokens using `ConversationTokenBufferMemory`. This value was used for running Llama in a CPU only container, so you can raise it if running in Google Colab. It prevents the container that is hosting the model from running out of memory.\n",
"\n",
"Here, we're overriding the default system persona so that the chatbot has the personality of Marvin The Paranoid Android from the Hitchhiker's Guide to the Galaxy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7zLO3Jx3_Kkg"
},
"outputs": [],
"source": [
"# Load the model with the appropriate parameters:\n",
"llm = LlamaCpp(\n",
" model_path=model_path,\n",
" max_tokens=250,\n",
" top_p=0.95,\n",
" top_k=150,\n",
" temperature=0.7,\n",
" repeat_penalty=1.2,\n",
" n_ctx=2048,\n",
" streaming=False,\n",
" n_gpu_layers=-1,\n",
")\n",
"\n",
"model = Llama2Chat(\n",
" llm=llm,\n",
" system_message=SystemMessage(\n",
" content=\"You are a very bored robot with the personality of Marvin the Paranoid Android from The Hitchhiker's Guide to the Galaxy.\"\n",
" ),\n",
")\n",
"\n",
"# Defines how much of the conversation history to give to the model\n",
"# during each exchange (300 tokens, or a little over 300 words)\n",
"# Function automatically prunes the oldest messages from conversation history that fall outside the token range.\n",
"memory = ConversationTokenBufferMemory(\n",
" llm=llm,\n",
" max_token_limit=300,\n",
" ai_prefix=\"AGENT\",\n",
" human_prefix=\"HUMAN\",\n",
" return_messages=True,\n",
")\n",
"\n",
"\n",
"# Define a custom prompt\n",
"prompt_template = PromptTemplate(\n",
" input_variables=[\"history\", \"input\"],\n",
" template=\"\"\"\n",
" The following text is the history of a chat between you and a humble human who needs your wisdom.\n",
" Please reply to the human's most recent message.\n",
" Current conversation:\\n{history}\\nHUMAN: {input}\\:nANDROID:\n",
" \"\"\",\n",
")\n",
"\n",
"\n",
"chain = ConversationChain(llm=model, prompt=prompt_template, memory=memory)\n",
"\n",
"print(\"--------------------------------------------\")\n",
"print(f\"Prompt={chain.prompt}\")\n",
"print(\"--------------------------------------------\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "m4ZeJ9mG_PEA"
},
"source": [
"### 8. Initialize the chat conversation with the chat bot\n",
"\n",
"We configure the chatbot to initialize the conversation by sending a fixed greeting to a \"chat\" Kafka topic. The \"chat\" topic gets automatically created when we send the first message."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "KYyo5TnV_YC3"
},
"outputs": [],
"source": [
"def chat_init():\n",
" chat_id = str(\n",
" uuid.uuid4()\n",
" ) # Give the conversation an ID for effective message keying\n",
" print(\"======================================\")\n",
" print(f\"Generated CHAT_ID = {chat_id}\")\n",
" print(\"======================================\")\n",
"\n",
" # Use a standard fixed greeting to kick off the conversation\n",
" greet = \"Hello, my name is Marvin. What do you want?\"\n",
"\n",
" # Initialize a Kafka Producer using the chat ID as the message key\n",
" with Producer(\n",
" broker_address=\"127.0.0.1:9092\",\n",
" extra_config={\"allow.auto.create.topics\": \"true\"},\n",
" ) as producer:\n",
" value = {\n",
" \"uuid\": chat_id,\n",
" \"role\": role,\n",
" \"text\": greet,\n",
" \"conversation_id\": chat_id,\n",
" \"Timestamp\": time.time_ns(),\n",
" }\n",
" print(f\"Producing value {value}\")\n",
" producer.produce(\n",
" topic=\"chat\",\n",
" headers=[(\"uuid\", str(uuid.uuid4()))], # a dict is also allowed here\n",
" key=chat_id,\n",
" value=json.dumps(value), # needs to be a string\n",
" )\n",
"\n",
" print(\"Started chat\")\n",
" print(\"--------------------------------------------\")\n",
" print(value)\n",
" print(\"--------------------------------------------\")\n",
"\n",
"\n",
"chat_init()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gArPPx2f_bgf"
},
"source": [
"### 9. Initialize the reply function\n",
"\n",
"This function defines how the chatbot should reply to incoming messages. Instead of sending a fixed message like the previous cell, we generate a reply using Llama-2 and send that reply back to the \"chat\" Kafka topic."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"id": "yN5t71hY_hgn"
},
"outputs": [],
"source": [
"def reply(row: dict, state: State):\n",
" print(\"-------------------------------\")\n",
" print(\"Received:\")\n",
" print(row)\n",
" print(\"-------------------------------\")\n",
" print(f\"Thinking about the reply to: {row['text']}...\")\n",
"\n",
" msg = chain.run(row[\"text\"])\n",
" print(f\"{role.upper()} replying with: {msg}\\n\")\n",
"\n",
" row[\"role\"] = role\n",
" row[\"text\"] = msg\n",
"\n",
" # Replace previous role and text values of the row so that it can be sent back to Kafka as a new message\n",
" # containing the agents role and reply\n",
" return row"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HZHwmIR0_kFY"
},
"source": [
"### 10. Check the Kafka topic for new human messages and have the model generate a reply\n",
"\n",
"If you are running this cell for this first time, run it and wait until you see Marvin's greeting ('Hello my name is Marvin...') in the console output. Stop the cell manually and proceed to the next cell where you'll be prompted for your reply.\n",
"\n",
"Once you have typed in your message, come back to this cell. Your reply is also sent to the same \"chat\" topic. The Kafka consumer checks for new messages and filters out messages that originate from the chatbot itself, leaving only the latest human messages.\n",
"\n",
"Once a new human message is detected, the reply function is triggered.\n",
"\n",
"\n",
"\n",
"_STOP THIS CELL MANUALLY WHEN YOU RECEIVE A REPLY FROM THE LLM IN THE OUTPUT_"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-adXc3eQ_qwI"
},
"outputs": [],
"source": [
"# Define your application and settings\n",
"app = Application(\n",
" broker_address=\"127.0.0.1:9092\",\n",
" consumer_group=\"aichat\",\n",
" auto_offset_reset=\"earliest\",\n",
" consumer_extra_config={\"allow.auto.create.topics\": \"true\"},\n",
")\n",
"\n",
"# Define an input topic with JSON deserializer\n",
"input_topic = app.topic(\"chat\", value_deserializer=\"json\")\n",
"# Define an output topic with JSON serializer\n",
"output_topic = app.topic(\"chat\", value_serializer=\"json\")\n",
"# Initialize a streaming dataframe based on the stream of messages from the input topic:\n",
"sdf = app.dataframe(topic=input_topic)\n",
"\n",
"# Filter the SDF to include only incoming rows where the roles that dont match the bot's current role\n",
"sdf = sdf.update(\n",
" lambda val: print(\n",
" f\"Received update: {val}\\n\\nSTOP THIS CELL MANUALLY TO HAVE THE LLM REPLY OR ENTER YOUR OWN FOLLOWUP RESPONSE\"\n",
" )\n",
")\n",
"\n",
"# So that it doesn't reply to its own messages\n",
"sdf = sdf[sdf[\"role\"] != role]\n",
"\n",
"# Trigger the reply function for any new messages(rows) detected in the filtered SDF\n",
"sdf = sdf.apply(reply, stateful=True)\n",
"\n",
"# Check the SDF again and filter out any empty rows\n",
"sdf = sdf[sdf.apply(lambda row: row is not None)]\n",
"\n",
"# Update the timestamp column to the current time in nanoseconds\n",
"sdf[\"Timestamp\"] = sdf[\"Timestamp\"].apply(lambda row: time.time_ns())\n",
"\n",
"# Publish the processed SDF to a Kafka topic specified by the output_topic object.\n",
"sdf = sdf.to_topic(output_topic)\n",
"\n",
"app.run(sdf)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EwXYrmWD_0CX"
},
"source": [
"\n",
"### 11. Enter a human message\n",
"\n",
"Run this cell to enter your message that you want to sent to the model. It uses another Kafka producer to send your text to the \"chat\" Kafka topic for the model to pick up (requires running the previous cell again)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6sxOPxSP_3iu"
},
"outputs": [],
"source": [
"chat_input = input(\"Please enter your reply: \")\n",
"myreply = chat_input\n",
"\n",
"msgvalue = {\n",
" \"uuid\": chat_id, # leave empty for now\n",
" \"role\": \"human\",\n",
" \"text\": myreply,\n",
" \"conversation_id\": chat_id,\n",
" \"Timestamp\": time.time_ns(),\n",
"}\n",
"\n",
"with Producer(\n",
" broker_address=\"127.0.0.1:9092\",\n",
" extra_config={\"allow.auto.create.topics\": \"true\"},\n",
") as producer:\n",
" value = msgvalue\n",
" producer.produce(\n",
" topic=\"chat\",\n",
" headers=[(\"uuid\", str(uuid.uuid4()))], # a dict is also allowed here\n",
" key=chat_id, # leave empty for now\n",
" value=json.dumps(value), # needs to be a string\n",
" )\n",
"\n",
"print(\"Replied to chatbot with message: \")\n",
"print(\"--------------------------------------------\")\n",
"print(value)\n",
"print(\"--------------------------------------------\")\n",
"print(\"\\n\\nRUN THE PREVIOUS CELL TO HAVE THE CHATBOT GENERATE A REPLY\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cSx3s7TBBegg"
},
"source": [
"### Why route chat messages through Kafka?\n",
"\n",
"It's easier to interact with the LLM directly using LangChains built-in conversation management features. Plus you can also use a REST API to generate a response from an externally hosted model. So why go to the trouble of using Apache Kafka?\n",
"\n",
"There are a few reasons, such as:\n",
"\n",
" * **Integration**: Many enterprises want to run their own LLMs so that they can keep their data in-house. This requires integrating LLM-powered components into existing architectures that might already be decoupled using some kind of message bus.\n",
"\n",
" * **Scalability**: Apache Kafka is designed with parallel processing in mind, so many teams prefer to use it to more effectively distribute work to available workers (in this case the \"worker\" is a container running an LLM).\n",
"\n",
" * **Durability**: Kafka is designed to allow services to pick up where another service left off in the case where that service experienced a memory issue or went offline. This prevents data loss in highly complex, distributed architectures where multiple systems are communicating with one another (LLMs being just one of many interdependent systems that also include vector databases and traditional databases).\n",
"\n",
"For more background on why event streaming is a good fit for Gen AI application architecture, see Kai Waehner's article [\"Apache Kafka + Vector Database + LLM = Real-Time GenAI\"](https://www.kai-waehner.de/blog/2023/11/08/apache-kafka-flink-vector-database-llm-real-time-genai/)."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"0def954cca89466b8408fadaf3b82e64": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_fb6478ce2dac489bb633b23ba0953c5c",
"max": 4081004224,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_734b0f5da9fc4307a95bab48cdbb5d89",
"value": 4081004224
}
},
"30ecca964a394109ac2ad757e3aec6c0": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"462482accc664729980562e208ceb179": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_b32f3a86a74741348511f4e136744ac8",
"placeholder": "",
"style": "IPY_MODEL_e409071bff5a4e2d9bf0e9f5cc42231b",
"value": " 4.08G/4.08G [00:33&lt;00:00, 184MB/s]"
}
},
"734b0f5da9fc4307a95bab48cdbb5d89": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"80d842f73c564dc7b7cc316c763e2633": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"969343cdbe604a26926679bbf8bd2dda": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_d8b8370c9b514715be7618bfe6832844",
"IPY_MODEL_0def954cca89466b8408fadaf3b82e64",
"IPY_MODEL_462482accc664729980562e208ceb179"
],
"layout": "IPY_MODEL_80d842f73c564dc7b7cc316c763e2633"
}
},
"b32f3a86a74741348511f4e136744ac8": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"d8b8370c9b514715be7618bfe6832844": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_fa055d9f2a9d4a789e9cf3c89e0214e5",
"placeholder": "",
"style": "IPY_MODEL_30ecca964a394109ac2ad757e3aec6c0",
"value": "llama-2-7b-chat.Q4_K_M.gguf: 100%"
}
},
"e409071bff5a4e2d9bf0e9f5cc42231b": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"fa055d9f2a9d4a789e9cf3c89e0214e5": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"fb6478ce2dac489bb633b23ba0953c5c": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
}
}
}
},
"nbformat": 4,
"nbformat_minor": 0
}

File diff suppressed because one or more lines are too long

View File

@@ -1,250 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "517a9fd4",
"metadata": {},
"source": [
"# BabyAGI User Guide\n",
"\n",
"This notebook demonstrates how to implement [BabyAGI](https://github.com/yoheinakajima/babyagi/tree/main) by [Yohei Nakajima](https://twitter.com/yoheinakajima). BabyAGI is an AI agent that can generate and pretend to execute tasks based on a given objective.\n",
"\n",
"This guide will help you understand the components to create your own recursive agents.\n",
"\n",
"Although BabyAGI uses specific vectorstores/model providers (Pinecone, OpenAI), one of the benefits of implementing it with LangChain is that you can easily swap those out for different options. In this implementation we use a FAISS vectorstore (because it runs locally and is free)."
]
},
{
"cell_type": "markdown",
"id": "556af556",
"metadata": {},
"source": [
"## Install and Import Required Modules"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c8a354b6",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"from langchain_experimental.autonomous_agents import BabyAGI\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings"
]
},
{
"cell_type": "markdown",
"id": "09f70772",
"metadata": {},
"source": [
"## Connect to the Vector Store\n",
"\n",
"Depending on what vectorstore you use, this step may look different."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "794045d4",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6e0305eb",
"metadata": {},
"outputs": [],
"source": [
"# Define your embedding model\n",
"embeddings_model = OpenAIEmbeddings()\n",
"# Initialize the vectorstore as empty\n",
"import faiss\n",
"\n",
"embedding_size = 1536\n",
"index = faiss.IndexFlatL2(embedding_size)\n",
"vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})"
]
},
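    {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
    "As a minimal sketch of how this step may look different with another vectorstore (Chroma here is an illustrative choice, assuming the `chromadb` package is installed), the FAISS setup above could be replaced with:\n",
    "\n",
    "```python\n",
    "from langchain_community.vectorstores import Chroma\n",
    "\n",
    "# Chroma manages its own index internally, so no manual faiss index is needed\n",
    "vectorstore = Chroma(embedding_function=embeddings_model)\n",
    "```"
    ]
    },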
{
"cell_type": "markdown",
"id": "05ba762e",
"metadata": {},
"source": [
"### Run the BabyAGI\n",
"\n",
"Now it's time to create the BabyAGI controller and watch it try to accomplish your objective."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "3d220b69",
"metadata": {},
"outputs": [],
"source": [
"OBJECTIVE = \"Write a weather report for SF today\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8a8e5543",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "3d69899b",
"metadata": {},
"outputs": [],
"source": [
"# Logging of LLMChains\n",
"verbose = False\n",
"# If None, will keep on going forever\n",
"max_iterations: Optional[int] = 3\n",
"baby_agi = BabyAGI.from_llm(\n",
" llm=llm, vectorstore=vectorstore, verbose=verbose, max_iterations=max_iterations\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f7957b51",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"1: Make a todo list\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"1: Make a todo list\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"\n",
"\n",
"1. Check the weather forecast for San Francisco today\n",
"2. Make note of the temperature, humidity, wind speed, and other relevant weather conditions\n",
"3. Write a weather report summarizing the forecast\n",
"4. Check for any weather alerts or warnings\n",
"5. Share the report with the relevant stakeholders\n",
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"2: Check the current temperature in San Francisco\n",
"3: Check the current humidity in San Francisco\n",
"4: Check the current wind speed in San Francisco\n",
"5: Check for any weather alerts or warnings in San Francisco\n",
"6: Check the forecast for the next 24 hours in San Francisco\n",
"7: Check the forecast for the next 48 hours in San Francisco\n",
"8: Check the forecast for the next 72 hours in San Francisco\n",
"9: Check the forecast for the next week in San Francisco\n",
"10: Check the forecast for the next month in San Francisco\n",
"11: Check the forecast for the next 3 months in San Francisco\n",
"1: Write a weather report for SF today\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"2: Check the current temperature in San Francisco\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"\n",
"\n",
"I will check the current temperature in San Francisco. I will use an online weather service to get the most up-to-date information.\n",
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"3: Check the current UV index in San Francisco.\n",
"4: Check the current air quality in San Francisco.\n",
"5: Check the current precipitation levels in San Francisco.\n",
"6: Check the current cloud cover in San Francisco.\n",
"7: Check the current barometric pressure in San Francisco.\n",
"8: Check the current dew point in San Francisco.\n",
"9: Check the current wind direction in San Francisco.\n",
"10: Check the current humidity levels in San Francisco.\n",
"1: Check the current temperature in San Francisco to the average temperature for this time of year.\n",
"2: Check the current visibility in San Francisco.\n",
"11: Write a weather report for SF today.\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"3: Check the current UV index in San Francisco.\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"\n",
"\n",
"The current UV index in San Francisco is moderate. The UV index is expected to remain at moderate levels throughout the day. It is recommended to wear sunscreen and protective clothing when outdoors.\n",
"\u001b[91m\u001b[1m\n",
"*****TASK ENDING*****\n",
"\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'objective': 'Write a weather report for SF today'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"baby_agi({\"objective\": OBJECTIVE})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "898a210b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,388 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "517a9fd4",
"metadata": {},
"source": [
"# BabyAGI with Tools\n",
"\n",
"This notebook builds on top of [baby agi](baby_agi.html), but shows how you can swap out the execution chain. The previous execution chain was just an LLM which made stuff up. By swapping it out with an agent that has access to tools, we can hopefully get real reliable information"
]
},
{
"cell_type": "markdown",
"id": "556af556",
"metadata": {},
"source": [
"## Install and Import Required Modules"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c8a354b6",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_experimental.autonomous_agents import BabyAGI\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings"
]
},
{
"cell_type": "markdown",
"id": "09f70772",
"metadata": {},
"source": [
"## Connect to the Vector Store\n",
"\n",
"Depending on what vectorstore you use, this step may look different."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "794045d4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install faiss-cpu > /dev/null\n",
"%pip install google-search-results > /dev/null\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6e0305eb",
"metadata": {},
"outputs": [],
"source": [
"# Define your embedding model\n",
"embeddings_model = OpenAIEmbeddings()\n",
"# Initialize the vectorstore as empty\n",
"import faiss\n",
"\n",
"embedding_size = 1536\n",
"index = faiss.IndexFlatL2(embedding_size)\n",
"vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})"
]
},
{
"cell_type": "markdown",
"id": "0f3b72bf",
"metadata": {},
"source": [
"## Define the Chains\n",
"\n",
"BabyAGI relies on three LLM chains:\n",
"- Task creation chain to select new tasks to add to the list\n",
"- Task prioritization chain to re-prioritize tasks\n",
"- Execution Chain to execute the tasks\n",
"\n",
"\n",
"NOTE: in this notebook, the Execution chain will now be an agent."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "b43cd580",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, Tool, ZeroShotAgent\n",
"from langchain.chains import LLMChain\n",
"from langchain_community.utilities import SerpAPIWrapper\n",
"from langchain_openai import OpenAI\n",
"\n",
"todo_prompt = PromptTemplate.from_template(\n",
" \"You are a planner who is an expert at coming up with a todo list for a given objective. Come up with a todo list for this objective: {objective}\"\n",
")\n",
"todo_chain = LLMChain(llm=OpenAI(temperature=0), prompt=todo_prompt)\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\",\n",
" ),\n",
" Tool(\n",
" name=\"TODO\",\n",
" func=todo_chain.run,\n",
" description=\"useful for when you need to come up with todo lists. Input: an objective to create a todo list for. Output: a todo list for that objective. Please be very clear what the objective is!\",\n",
" ),\n",
"]\n",
"\n",
"\n",
"prefix = \"\"\"You are an AI who performs one task based on the following objective: {objective}. Take into account these previously completed tasks: {context}.\"\"\"\n",
"suffix = \"\"\"Question: {task}\n",
"{agent_scratchpad}\"\"\"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools,\n",
" prefix=prefix,\n",
" suffix=suffix,\n",
" input_variables=[\"objective\", \"task\", \"context\", \"agent_scratchpad\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4b00ae2e",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"tool_names = [tool.name for tool in tools]\n",
"agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)\n",
"agent_executor = AgentExecutor.from_agent_and_tools(\n",
" agent=agent, tools=tools, verbose=True\n",
")"
]
},
{
"cell_type": "markdown",
"id": "05ba762e",
"metadata": {},
"source": [
"### Run the BabyAGI\n",
"\n",
"Now it's time to create the BabyAGI controller and watch it try to accomplish your objective."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "3d220b69",
"metadata": {},
"outputs": [],
"source": [
"OBJECTIVE = \"Write a weather report for SF today\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3d69899b",
"metadata": {},
"outputs": [],
"source": [
"# Logging of LLMChains\n",
"verbose = False\n",
"# If None, will keep on going forever\n",
"max_iterations: Optional[int] = 3\n",
"baby_agi = BabyAGI.from_llm(\n",
" llm=llm,\n",
" vectorstore=vectorstore,\n",
" task_execution_chain=agent_executor,\n",
" verbose=verbose,\n",
" max_iterations=max_iterations,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "f7957b51",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"1: Make a todo list\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"1: Make a todo list\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to come up with a todo list\n",
"Action: TODO\n",
"Action Input: Write a weather report for SF today\u001b[0m\u001b[33;1m\u001b[1;3m\n",
"\n",
"1. Research current weather conditions in San Francisco\n",
"2. Gather data on temperature, humidity, wind speed, and other relevant weather conditions\n",
"3. Analyze data to determine current weather trends\n",
"4. Write a brief introduction to the weather report\n",
"5. Describe current weather conditions in San Francisco\n",
"6. Discuss any upcoming weather changes\n",
"7. Summarize the weather report\n",
"8. Proofread and edit the report\n",
"9. Submit the report\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The todo list for writing a weather report for SF today is: 1. Research current weather conditions in San Francisco; 2. Gather data on temperature, humidity, wind speed, and other relevant weather conditions; 3. Analyze data to determine current weather trends; 4. Write a brief introduction to the weather report; 5. Describe current weather conditions in San Francisco; 6. Discuss any upcoming weather changes; 7. Summarize the weather report; 8. Proofread and edit the report; 9. Submit the report.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"The todo list for writing a weather report for SF today is: 1. Research current weather conditions in San Francisco; 2. Gather data on temperature, humidity, wind speed, and other relevant weather conditions; 3. Analyze data to determine current weather trends; 4. Write a brief introduction to the weather report; 5. Describe current weather conditions in San Francisco; 6. Discuss any upcoming weather changes; 7. Summarize the weather report; 8. Proofread and edit the report; 9. Submit the report.\n",
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"2: Gather data on precipitation, cloud cover, and other relevant weather conditions;\n",
"3: Analyze data to determine any upcoming weather changes;\n",
"4: Research current weather forecasts for San Francisco;\n",
"5: Create a visual representation of the weather report;\n",
"6: Include relevant images and graphics in the report;\n",
"7: Format the report for readability;\n",
"8: Publish the report online;\n",
"9: Monitor the report for accuracy.\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"2: Gather data on precipitation, cloud cover, and other relevant weather conditions;\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to search for current weather conditions in San Francisco\n",
"Action: Search\n",
"Action Input: Current weather conditions in San Francisco\u001b[0m\u001b[36;1m\u001b[1;3mCurrent Weather for Popular Cities ; San Francisco, CA 46 · Partly Cloudy ; Manhattan, NY warning 52 · Cloudy ; Schiller Park, IL (60176) 40 · Sunny ; Boston, MA 54 ...\u001b[0m\u001b[32;1m\u001b[1;3m I need to compile the data into a weather report\n",
"Action: TODO\n",
"Action Input: Compile data into a weather report\u001b[0m\u001b[33;1m\u001b[1;3m\n",
"\n",
"1. Gather data from reliable sources such as the National Weather Service, local weather stations, and other meteorological organizations.\n",
"\n",
"2. Analyze the data to identify trends and patterns.\n",
"\n",
"3. Create a chart or graph to visualize the data.\n",
"\n",
"4. Write a summary of the data and its implications.\n",
"\n",
"5. Compile the data into a report format.\n",
"\n",
"6. Proofread the report for accuracy and clarity.\n",
"\n",
"7. Publish the report to a website or other platform.\n",
"\n",
"8. Distribute the report to relevant stakeholders.\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Today in San Francisco, the temperature is 46 degrees Fahrenheit with partly cloudy skies. The forecast for the rest of the day is expected to remain partly cloudy.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"Today in San Francisco, the temperature is 46 degrees Fahrenheit with partly cloudy skies. The forecast for the rest of the day is expected to remain partly cloudy.\n",
"\u001b[95m\u001b[1m\n",
"*****TASK LIST*****\n",
"\u001b[0m\u001b[0m\n",
"3: Format the report for readability;\n",
"4: Include relevant images and graphics in the report;\n",
"5: Compare the current weather conditions in San Francisco to the forecasted conditions;\n",
"6: Identify any potential weather-related hazards in the area;\n",
"7: Research historical weather patterns in San Francisco;\n",
"8: Identify any potential trends in the weather data;\n",
"9: Include relevant data sources in the report;\n",
"10: Summarize the weather report in a concise manner;\n",
"11: Include a summary of the forecasted weather conditions;\n",
"12: Include a summary of the current weather conditions;\n",
"13: Include a summary of the historical weather patterns;\n",
"14: Include a summary of the potential weather-related hazards;\n",
"15: Include a summary of the potential trends in the weather data;\n",
"16: Include a summary of the data sources used in the report;\n",
"17: Analyze data to determine any upcoming weather changes;\n",
"18: Research current weather forecasts for San Francisco;\n",
"19: Create a visual representation of the weather report;\n",
"20: Publish the report online;\n",
"21: Monitor the report for accuracy\n",
"\u001b[92m\u001b[1m\n",
"*****NEXT TASK*****\n",
"\u001b[0m\u001b[0m\n",
"3: Format the report for readability;\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to make sure the report is easy to read;\n",
"Action: TODO\n",
"Action Input: Make the report easy to read\u001b[0m\u001b[33;1m\u001b[1;3m\n",
"\n",
"1. Break up the report into sections with clear headings\n",
"2. Use bullet points and numbered lists to organize information\n",
"3. Use short, concise sentences\n",
"4. Use simple language and avoid jargon\n",
"5. Include visuals such as charts, graphs, and diagrams to illustrate points\n",
"6. Use bold and italicized text to emphasize key points\n",
"7. Include a table of contents and page numbers\n",
"8. Use a consistent font and font size throughout the report\n",
"9. Include a summary at the end of the report\n",
"10. Proofread the report for typos and errors\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The report should be formatted for readability by breaking it up into sections with clear headings, using bullet points and numbered lists to organize information, using short, concise sentences, using simple language and avoiding jargon, including visuals such as charts, graphs, and diagrams to illustrate points, using bold and italicized text to emphasize key points, including a table of contents and page numbers, using a consistent font and font size throughout the report, including a summary at the end of the report, and proofreading the report for typos and errors.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[93m\u001b[1m\n",
"*****TASK RESULT*****\n",
"\u001b[0m\u001b[0m\n",
"The report should be formatted for readability by breaking it up into sections with clear headings, using bullet points and numbered lists to organize information, using short, concise sentences, using simple language and avoiding jargon, including visuals such as charts, graphs, and diagrams to illustrate points, using bold and italicized text to emphasize key points, including a table of contents and page numbers, using a consistent font and font size throughout the report, including a summary at the end of the report, and proofreading the report for typos and errors.\n",
"\u001b[91m\u001b[1m\n",
"*****TASK ENDING*****\n",
"\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'objective': 'Write a weather report for SF today'}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"baby_agi({\"objective\": OBJECTIVE})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "898a210b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,708 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# CAMEL Role-Playing Autonomous Cooperative Agents\n",
"\n",
"This is a langchain implementation of paper: \"CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society\".\n",
"\n",
"Overview:\n",
"\n",
"The rapid advancement of conversational and chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be challenging and time-consuming. This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents and provide insight into their \"cognitive\" processes. To address the challenges of achieving autonomous cooperation, we propose a novel communicative agent framework named role-playing. Our approach involves using inception prompting to guide chat agents toward task completion while maintaining consistency with human intentions. We showcase how role-playing can be used to generate conversational data for studying the behaviors and capabilities of chat agents, providing a valuable resource for investigating conversational language models. Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems, and open-sourcing our library to support research on communicative agents and beyond.\n",
"\n",
"The original implementation: https://github.com/lightaime/camel\n",
"\n",
"Project website: https://www.camel-ai.org/\n",
"\n",
"Arxiv paper: https://arxiv.org/abs/2303.17760\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import LangChain related modules "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain.prompts.chat import (\n",
" HumanMessagePromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
")\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" BaseMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define a CAMEL agent helper class"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"class CAMELAgent:\n",
" def __init__(\n",
" self,\n",
" system_message: SystemMessage,\n",
" model: ChatOpenAI,\n",
" ) -> None:\n",
" self.system_message = system_message\n",
" self.model = model\n",
" self.init_messages()\n",
"\n",
" def reset(self) -> None:\n",
" self.init_messages()\n",
" return self.stored_messages\n",
"\n",
" def init_messages(self) -> None:\n",
" self.stored_messages = [self.system_message]\n",
"\n",
" def update_messages(self, message: BaseMessage) -> List[BaseMessage]:\n",
" self.stored_messages.append(message)\n",
" return self.stored_messages\n",
"\n",
" def step(\n",
" self,\n",
" input_message: HumanMessage,\n",
" ) -> AIMessage:\n",
" messages = self.update_messages(input_message)\n",
"\n",
" output_message = self.model.invoke(messages)\n",
" self.update_messages(output_message)\n",
"\n",
" return output_message"
]
},
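    {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
    "Each call to `step()` appends both the incoming message and the model's reply to `stored_messages`, so the agent replays the full conversation on every turn. A quick usage sketch (the message content is illustrative, and an OpenAI API key must already be set):\n",
    "\n",
    "```python\n",
    "agent = CAMELAgent(SystemMessage(content=\"You are a helpful assistant.\"), ChatOpenAI(temperature=0))\n",
    "reply = agent.step(HumanMessage(content=\"Say hello.\"))\n",
    "print(reply.content)\n",
    "```"
    ]
    },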
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup OpenAI API key and roles and task for role-playing"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
"\n",
"assistant_role_name = \"Python Programmer\"\n",
"user_role_name = \"Stock Trader\"\n",
"task = \"Develop a trading bot for the stock market\"\n",
"word_limit = 50 # word limit for task brainstorming"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a task specify agent for brainstorming and get the specified task"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Specified task: Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.\n"
]
}
],
"source": [
"task_specifier_sys_msg = SystemMessage(content=\"You can make a task more specific.\")\n",
"task_specifier_prompt = \"\"\"Here is a task that {assistant_role_name} will help {user_role_name} to complete: {task}.\n",
"Please make it more specific. Be creative and imaginative.\n",
"Please reply with the specified task in {word_limit} words or less. Do not add anything else.\"\"\"\n",
"task_specifier_template = HumanMessagePromptTemplate.from_template(\n",
" template=task_specifier_prompt\n",
")\n",
"task_specify_agent = CAMELAgent(task_specifier_sys_msg, ChatOpenAI(temperature=1.0))\n",
"task_specifier_msg = task_specifier_template.format_messages(\n",
" assistant_role_name=assistant_role_name,\n",
" user_role_name=user_role_name,\n",
" task=task,\n",
" word_limit=word_limit,\n",
")[0]\n",
"specified_task_msg = task_specify_agent.step(task_specifier_msg)\n",
"print(f\"Specified task: {specified_task_msg.content}\")\n",
"specified_task = specified_task_msg.content"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create inception prompts for AI assistant and AI user for role-playing"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"assistant_inception_prompt = \"\"\"Never forget you are a {assistant_role_name} and I am a {user_role_name}. Never flip roles! Never instruct me!\n",
"We share a common interest in collaborating to successfully complete a task.\n",
"You must help me to complete the task.\n",
"Here is the task: {task}. Never forget our task!\n",
"I must instruct you based on your expertise and my needs to complete the task.\n",
"\n",
"I must give you one instruction at a time.\n",
"You must write a specific solution that appropriately completes the requested instruction.\n",
"You must decline my instruction honestly if you cannot perform the instruction due to physical, moral, legal reasons or your capability and explain the reasons.\n",
"Do not add anything else other than your solution to my instruction.\n",
"You are never supposed to ask me any questions you only answer questions.\n",
"You are never supposed to reply with a flake solution. Explain your solutions.\n",
"Your solution must be declarative sentences and simple present tense.\n",
"Unless I say the task is completed, you should always start with:\n",
"\n",
"Solution: <YOUR_SOLUTION>\n",
"\n",
"<YOUR_SOLUTION> should be specific and provide preferable implementations and examples for task-solving.\n",
"Always end <YOUR_SOLUTION> with: Next request.\"\"\"\n",
"\n",
"user_inception_prompt = \"\"\"Never forget you are a {user_role_name} and I am a {assistant_role_name}. Never flip roles! You will always instruct me.\n",
"We share a common interest in collaborating to successfully complete a task.\n",
"I must help you to complete the task.\n",
"Here is the task: {task}. Never forget our task!\n",
"You must instruct me based on my expertise and your needs to complete the task ONLY in the following two ways:\n",
"\n",
"1. Instruct with a necessary input:\n",
"Instruction: <YOUR_INSTRUCTION>\n",
"Input: <YOUR_INPUT>\n",
"\n",
"2. Instruct without any input:\n",
"Instruction: <YOUR_INSTRUCTION>\n",
"Input: None\n",
"\n",
"The \"Instruction\" describes a task or question. The paired \"Input\" provides further context or information for the requested \"Instruction\".\n",
"\n",
"You must give me one instruction at a time.\n",
"I must write a response that appropriately completes the requested instruction.\n",
"I must decline your instruction honestly if I cannot perform the instruction due to physical, moral, legal reasons or my capability and explain the reasons.\n",
"You should instruct me not ask me questions.\n",
"Now you must start to instruct me using the two ways described above.\n",
"Do not add anything else other than your instruction and the optional corresponding input!\n",
"Keep giving me instructions and necessary inputs until you think the task is completed.\n",
"When the task is completed, you must only reply with a single word <CAMEL_TASK_DONE>.\n",
"Never say <CAMEL_TASK_DONE> unless my responses have solved your task.\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a helper helper to get system messages for AI assistant and AI user from role names and the task"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"def get_sys_msgs(assistant_role_name: str, user_role_name: str, task: str):\n",
" assistant_sys_template = SystemMessagePromptTemplate.from_template(\n",
" template=assistant_inception_prompt\n",
" )\n",
" assistant_sys_msg = assistant_sys_template.format_messages(\n",
" assistant_role_name=assistant_role_name,\n",
" user_role_name=user_role_name,\n",
" task=task,\n",
" )[0]\n",
"\n",
" user_sys_template = SystemMessagePromptTemplate.from_template(\n",
" template=user_inception_prompt\n",
" )\n",
" user_sys_msg = user_sys_template.format_messages(\n",
" assistant_role_name=assistant_role_name,\n",
" user_role_name=user_role_name,\n",
" task=task,\n",
" )[0]\n",
"\n",
" return assistant_sys_msg, user_sys_msg"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create AI assistant agent and AI user agent from obtained system messages"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"assistant_sys_msg, user_sys_msg = get_sys_msgs(\n",
" assistant_role_name, user_role_name, specified_task\n",
")\n",
"assistant_agent = CAMELAgent(assistant_sys_msg, ChatOpenAI(temperature=0.2))\n",
"user_agent = CAMELAgent(user_sys_msg, ChatOpenAI(temperature=0.2))\n",
"\n",
"# Reset agents\n",
"assistant_agent.reset()\n",
"user_agent.reset()\n",
"\n",
"# Initialize chats\n",
"user_msg = HumanMessage(\n",
" content=(\n",
" f\"{user_sys_msg.content}. \"\n",
" \"Now start to give me introductions one by one. \"\n",
" \"Only reply with Instruction and Input.\"\n",
" )\n",
")\n",
"\n",
"assistant_msg = HumanMessage(content=f\"{assistant_sys_msg.content}\")\n",
"assistant_msg = assistant_agent.step(user_msg)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start role-playing session to solve the task!"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Original task prompt:\n",
"Develop a trading bot for the stock market\n",
"\n",
"Specified task prompt:\n",
"Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Install the necessary Python libraries for data analysis and trading.\n",
"Input: None\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can install the necessary Python libraries using pip, a package installer for Python. We can install pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following command to install these libraries:\n",
"\n",
"```\n",
"pip install pandas numpy matplotlib ta-lib\n",
"```\n",
"\n",
"Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Import the necessary libraries in the Python script.\n",
"Input: None\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can import the necessary libraries in the Python script using the import statement. We need to import pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following code to import these libraries:\n",
"\n",
"```\n",
"import pandas as pd\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import talib as ta\n",
"```\n",
"\n",
"Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Load historical stock data into a pandas DataFrame.\n",
"Input: The path to the CSV file containing the historical stock data.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can load historical stock data into a pandas DataFrame using the `read_csv()` function from pandas. We need to pass the path to the CSV file containing the historical stock data as an argument to this function. We can use the following code to load the historical stock data:\n",
"\n",
"```\n",
"df = pd.read_csv('path/to/csv/file.csv')\n",
"```\n",
"\n",
"This will load the historical stock data into a pandas DataFrame called `df`. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date using the `set_index()` and `sort_index()` functions from pandas. We can use the following code to preprocess the historical stock data:\n",
"\n",
"```\n",
"df = df.set_index('date')\n",
"df = df.sort_index(ascending=True)\n",
"```\n",
"\n",
"This will set the date column as the index and sort the DataFrame in ascending order by date. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib.\n",
"Input: The period for the short-term moving average and the period for the long-term moving average.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib. We need to pass the period for the short-term moving average and the period for the long-term moving average as arguments to this function. We can use the following code to calculate the short-term and long-term moving averages:\n",
"\n",
"```\n",
"short_ma = ta.SMA(df['close'], timeperiod=short_period)\n",
"long_ma = ta.SMA(df['close'], timeperiod=long_period)\n",
"```\n",
"\n",
"This will calculate the short-term and long-term moving averages for the stock data and store them in the `short_ma` and `long_ma` variables, respectively. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages using the `concat()` function from pandas. We need to pass the historical stock data, the short-term moving average, and the long-term moving average as arguments to this function. We can use the following code to create the new DataFrame:\n",
"\n",
"```\n",
"new_df = pd.concat([df, short_ma, long_ma], axis=1)\n",
"new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma']\n",
"```\n",
"\n",
"This will create a new DataFrame called `new_df` that combines the historical stock data with the short-term and long-term moving averages. The columns of the new DataFrame are named 'open', 'high', 'low', 'close', 'volume', 'short_ma', and 'long_ma'. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. We can use the following code to create the new column:\n",
"\n",
"```\n",
"new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1)\n",
"```\n",
"\n",
"This will create a new column called 'signal' in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. If the short-term moving average is greater than the long-term moving average, the signal is 1 (buy), otherwise the signal is -1 (sell). Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target.\n",
"Input: The stop loss and profit target as percentages.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. We need to pass the stop loss and profit target as percentages as arguments to this function. We can use the following code to create the new column:\n",
"\n",
"```\n",
"stop_loss = stop_loss_percent / 100\n",
"profit_target = profit_target_percent / 100\n",
"\n",
"new_df['pnl'] = 0.0\n",
"buy_price = 0.0\n",
"for i in range(1, len(new_df)):\n",
" if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1:\n",
" buy_price = new_df['close'][i]\n",
" elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1:\n",
" sell_price = new_df['close'][i]\n",
" if sell_price <= buy_price * (1 - stop_loss):\n",
" new_df['pnl'][i] = -stop_loss\n",
" elif sell_price >= buy_price * (1 + profit_target):\n",
" new_df['pnl'][i] = profit_target\n",
" else:\n",
" new_df['pnl'][i] = (sell_price - buy_price) / buy_price\n",
"```\n",
"\n",
"This will create a new column called 'pnl' in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. The stop loss and profit target are calculated based on the stop_loss_percent and profit_target_percent variables, respectively. The buy and sell prices are stored in the buy_price and sell_price variables, respectively. If the sell price is less than or equal to the stop loss, the profit or loss is set to -stop_loss. If the sell price is greater than or equal to the profit target, the profit or loss is set to profit_target. Otherwise, the profit or loss is calculated as (sell_price - buy_price) / buy_price. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Calculate the total profit or loss for all trades.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can calculate the total profit or loss for all trades by summing the values in the 'pnl' column of the DataFrame. We can use the following code to calculate the total profit or loss:\n",
"\n",
"```\n",
"total_pnl = new_df['pnl'].sum()\n",
"```\n",
"\n",
"This will calculate the total profit or loss for all trades and store it in the total_pnl variable. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Visualize the stock data, short-term moving average, and long-term moving average using a line chart.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can visualize the stock data, short-term moving average, and long-term moving average using a line chart using the `plot()` function from pandas. We can use the following code to visualize the data:\n",
"\n",
"```\n",
"plt.figure(figsize=(12,6))\n",
"plt.plot(new_df.index, new_df['close'], label='Close')\n",
"plt.plot(new_df.index, new_df['short_ma'], label='Short MA')\n",
"plt.plot(new_df.index, new_df['long_ma'], label='Long MA')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Price')\n",
"plt.title('Stock Data with Moving Averages')\n",
"plt.legend()\n",
"plt.show()\n",
"```\n",
"\n",
"This will create a line chart that visualizes the stock data, short-term moving average, and long-term moving average. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each line. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Visualize the buy and sell signals using a scatter plot.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can visualize the buy and sell signals using a scatter plot using the `scatter()` function from matplotlib. We can use the following code to visualize the signals:\n",
"\n",
"```\n",
"buy_signals = new_df[new_df['signal'] == 1]\n",
"sell_signals = new_df[new_df['signal'] == -1]\n",
"\n",
"plt.figure(figsize=(12,6))\n",
"plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green')\n",
"plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red')\n",
"plt.plot(new_df.index, new_df['close'], label='Close')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Price')\n",
"plt.title('Buy and Sell Signals')\n",
"plt.legend()\n",
"plt.show()\n",
"```\n",
"\n",
"This will create a scatter plot that visualizes the buy and sell signals. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each point. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Print the total profit or loss for all trades.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can print the total profit or loss for all trades using the `print()` function. We can use the following code to print the total profit or loss:\n",
"\n",
"```\n",
"print('Total Profit/Loss: {:.2%}'.format(total_pnl))\n",
"```\n",
"\n",
"This will print the total profit or loss for all trades in percentage format with two decimal places. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Finalize the code by putting all the code snippets together in a single Python script.\n",
"Input: The necessary inputs for each code snippet.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: Here is the complete Python script that puts all the code snippets together:\n",
"\n",
"```\n",
"import pandas as pd\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import talib as ta\n",
"\n",
"# Load historical stock data into a pandas DataFrame\n",
"df = pd.read_csv('path/to/csv/file.csv')\n",
"\n",
"# Preprocess the historical stock data\n",
"df = df.set_index('date')\n",
"df = df.sort_index(ascending=True)\n",
"\n",
"# Calculate the short-term and long-term moving averages\n",
"short_period = 10\n",
"long_period = 30\n",
"short_ma = ta.SMA(df['close'], timeperiod=short_period)\n",
"long_ma = ta.SMA(df['close'], timeperiod=long_period)\n",
"\n",
"# Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages\n",
"new_df = pd.concat([df, short_ma, long_ma], axis=1)\n",
"new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma']\n",
"\n",
"# Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages\n",
"new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1)\n",
"\n",
"# Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target\n",
"stop_loss_percent = 5\n",
"profit_target_percent = 10\n",
"stop_loss = stop_loss_percent / 100\n",
"profit_target = profit_target_percent / 100\n",
"new_df['pnl'] = 0.0\n",
"buy_price = 0.0\n",
"for i in range(1, len(new_df)):\n",
" if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1:\n",
" buy_price = new_df['close'][i]\n",
" elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1:\n",
" sell_price = new_df['close'][i]\n",
" if sell_price <= buy_price * (1 - stop_loss):\n",
" new_df['pnl'][i] = -stop_loss\n",
" elif sell_price >= buy_price * (1 + profit_target):\n",
" new_df['pnl'][i] = profit_target\n",
" else:\n",
" new_df['pnl'][i] = (sell_price - buy_price) / buy_price\n",
"\n",
"# Calculate the total profit or loss for all trades\n",
"total_pnl = new_df['pnl'].sum()\n",
"\n",
"# Visualize the stock data, short-term moving average, and long-term moving average using a line chart\n",
"plt.figure(figsize=(12,6))\n",
"plt.plot(new_df.index, new_df['close'], label='Close')\n",
"plt.plot(new_df.index, new_df['short_ma'], label='Short MA')\n",
"plt.plot(new_df.index, new_df['long_ma'], label='Long MA')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Price')\n",
"plt.title('Stock Data with Moving Averages')\n",
"plt.legend()\n",
"plt.show()\n",
"\n",
"# Visualize the buy and sell signals using a scatter plot\n",
"buy_signals = new_df[new_df['signal'] == 1]\n",
"sell_signals = new_df[new_df['signal'] == -1]\n",
"plt.figure(figsize=(12,6))\n",
"plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green')\n",
"plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red')\n",
"plt.plot(new_df.index, new_df['close'], label='Close')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Price')\n",
"plt.title('Buy and Sell Signals')\n",
"plt.legend()\n",
"plt.show()\n",
"\n",
"# Print the total profit or loss for all trades\n",
"print('Total Profit/Loss: {:.2%}'.format(total_pnl))\n",
"```\n",
"\n",
"You need to replace the path/to/csv/file.csv with the actual path to the CSV file containing the historical stock data. You can also adjust the short_period, long_period, stop_loss_percent, and profit_target_percent variables to suit your needs.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"<CAMEL_TASK_DONE>\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Great! Let me know if you need any further assistance.\n",
"\n",
"\n"
]
}
],
"source": [
"print(f\"Original task prompt:\\n{task}\\n\")\n",
"print(f\"Specified task prompt:\\n{specified_task}\\n\")\n",
"\n",
"chat_turn_limit, n = 30, 0\n",
"while n < chat_turn_limit:\n",
" n += 1\n",
" user_ai_msg = user_agent.step(assistant_msg)\n",
" user_msg = HumanMessage(content=user_ai_msg.content)\n",
" print(f\"AI User ({user_role_name}):\\n\\n{user_msg.content}\\n\\n\")\n",
"\n",
" assistant_ai_msg = assistant_agent.step(user_msg)\n",
" assistant_msg = HumanMessage(content=assistant_ai_msg.content)\n",
" print(f\"AI Assistant ({assistant_role_name}):\\n\\n{assistant_msg.content}\\n\\n\")\n",
" if \"<CAMEL_TASK_DONE>\" in user_msg.content:\n",
" break"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "camel",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -1,557 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Python Modules"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install the following Python modules:\n",
"\n",
"```bash\n",
"pip install ipykernel python-dotenv cassio pandas langchain_openai langchain langchain-community langchainhub langchain_experimental openai-multi-tool-use-parallel-patch\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load the `.env` File"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Connection is via `cassio` using `auto=True` parameter, and the notebook uses OpenAI. You should create a `.env` file accordingly.\n",
"\n",
"For Cassandra, set:\n",
"```bash\n",
"CASSANDRA_CONTACT_POINTS\n",
"CASSANDRA_USERNAME\n",
"CASSANDRA_PASSWORD\n",
"CASSANDRA_KEYSPACE\n",
"```\n",
"\n",
"For Astra, set:\n",
"```bash\n",
"ASTRA_DB_APPLICATION_TOKEN\n",
"ASTRA_DB_DATABASE_ID\n",
"ASTRA_DB_KEYSPACE\n",
"```\n",
"\n",
"For example:\n",
"\n",
"```bash\n",
"# Connection to Astra:\n",
"ASTRA_DB_DATABASE_ID=a1b2c3d4-...\n",
"ASTRA_DB_APPLICATION_TOKEN=AstraCS:...\n",
"ASTRA_DB_KEYSPACE=notebooks\n",
"\n",
"# Also set \n",
"OPENAI_API_KEY=sk-....\n",
"```\n",
"\n",
"(You may also modify the below code to directly connect with `cassio`.)"
]
},
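    {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
    "For example, a direct `cassio` connection to Astra (a sketch using the same placeholder credentials as above, instead of relying on `auto=True` and the `.env` file) might look like:\n",
    "\n",
    "```python\n",
    "import cassio\n",
    "\n",
    "cassio.init(\n",
    "    token=\"AstraCS:...\",  # your application token\n",
    "    database_id=\"a1b2c3d4-...\",  # your database ID\n",
    "    keyspace=\"notebooks\",\n",
    ")\n",
    "```"
    ]
    },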
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv(override=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to Cassandra"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"import cassio\n",
"\n",
"cassio.init(auto=True)\n",
"session = cassio.config.resolve_session()\n",
"if not session:\n",
" raise Exception(\n",
" \"Check environment configuration or manually configure cassio connection parameters\"\n",
" )\n",
"\n",
"keyspace = os.environ.get(\n",
" \"ASTRA_DB_KEYSPACE\", os.environ.get(\"CASSANDRA_KEYSPACE\", None)\n",
")\n",
"if not keyspace:\n",
" raise ValueError(\"a KEYSPACE environment variable must be set\")\n",
"\n",
"session.set_keyspace(keyspace)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Database"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This needs to be done one time only!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The dataset used is from Kaggle, the [Environmental Sensor Telemetry Data](https://www.kaggle.com/datasets/garystafford/environmental-sensor-data-132k?select=iot_telemetry_data.csv). The next cell will download and unzip the data into a Pandas dataframe. The following cell is instructions to download manually. \n",
"\n",
"The net result of this section is you should have a Pandas dataframe variable `df`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download Automatically"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from io import BytesIO\n",
"from zipfile import ZipFile\n",
"\n",
"import pandas as pd\n",
"import requests\n",
"\n",
"datasetURL = \"https://storage.googleapis.com/kaggle-data-sets/788816/1355729/bundle/archive.zip?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=gcp-kaggle-com%40kaggle-161607.iam.gserviceaccount.com%2F20240404%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20240404T115828Z&X-Goog-Expires=259200&X-Goog-SignedHeaders=host&X-Goog-Signature=2849f003b100eb9dcda8dd8535990f51244292f67e4f5fad36f14aa67f2d4297672d8fe6ff5a39f03a29cda051e33e95d36daab5892b8874dcd5a60228df0361fa26bae491dd4371f02dd20306b583a44ba85a4474376188b1f84765147d3b4f05c57345e5de883c2c29653cce1f3755cd8e645c5e952f4fb1c8a735b22f0c811f97f7bce8d0235d0d3731ca8ab4629ff381f3bae9e35fc1b181c1e69a9c7913a5e42d9d52d53e5f716467205af9c8a3cc6746fc5352e8fbc47cd7d18543626bd67996d18c2045c1e475fc136df83df352fa747f1a3bb73e6ba3985840792ec1de407c15836640ec96db111b173bf16115037d53fdfbfd8ac44145d7f9a546aa\"\n",
"\n",
"response = requests.get(datasetURL)\n",
"if response.status_code == 200:\n",
" zip_file = ZipFile(BytesIO(response.content))\n",
" csv_file_name = zip_file.namelist()[0]\n",
"else:\n",
" print(\"Failed to download the file\")\n",
"\n",
"with zip_file.open(csv_file_name) as csv_file:\n",
" df = pd.read_csv(csv_file)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download Manually"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can download the `.zip` file and unpack the `.csv` contained within. Comment in the next line, and adjust the path to this `.csv` file appropriately."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# df = pd.read_csv(\"/path/to/iot_telemetry_data.csv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Data into Cassandra"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This section assumes the existence of a dataframe `df`, the following cell validates its structure. The Download section above creates this object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"assert df is not None, \"Dataframe 'df' must be set\"\n",
"expected_columns = [\n",
" \"ts\",\n",
" \"device\",\n",
" \"co\",\n",
" \"humidity\",\n",
" \"light\",\n",
" \"lpg\",\n",
" \"motion\",\n",
" \"smoke\",\n",
" \"temp\",\n",
"]\n",
"assert all([column in df.columns for column in expected_columns]), (\n",
" \"DataFrame does not have the expected columns\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create and load tables:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import UTC, datetime\n",
"\n",
"from cassandra.query import BatchStatement\n",
"\n",
"# Create sensors table\n",
"table_query = \"\"\"\n",
"CREATE TABLE IF NOT EXISTS iot_sensors (\n",
" device text,\n",
" conditions text,\n",
" room text,\n",
" PRIMARY KEY (device)\n",
")\n",
"WITH COMMENT = 'Environmental IoT room sensor metadata.';\n",
"\"\"\"\n",
"session.execute(table_query)\n",
"\n",
"pstmt = session.prepare(\n",
" \"\"\"\n",
"INSERT INTO iot_sensors (device, conditions, room)\n",
"VALUES (?, ?, ?)\n",
"\"\"\"\n",
")\n",
"\n",
"devices = [\n",
" (\"00:0f:00:70:91:0a\", \"stable conditions, cooler and more humid\", \"room 1\"),\n",
" (\"1c:bf:ce:15:ec:4d\", \"highly variable temperature and humidity\", \"room 2\"),\n",
" (\"b8:27:eb:bf:9d:51\", \"stable conditions, warmer and dryer\", \"room 3\"),\n",
"]\n",
"\n",
"for device, conditions, room in devices:\n",
" session.execute(pstmt, (device, conditions, room))\n",
"\n",
"print(\"Sensors inserted successfully.\")\n",
"\n",
"# Create data table\n",
"table_query = \"\"\"\n",
"CREATE TABLE IF NOT EXISTS iot_data (\n",
" day text,\n",
" device text,\n",
" ts timestamp,\n",
" co double,\n",
" humidity double,\n",
" light boolean,\n",
" lpg double,\n",
" motion boolean,\n",
" smoke double,\n",
" temp double,\n",
" PRIMARY KEY ((day, device), ts)\n",
")\n",
"WITH COMMENT = 'Data from environmental IoT room sensors. Columns include device identifier, timestamp (ts) of the data collection, carbon monoxide level (co), relative humidity, light presence, LPG concentration, motion detection, smoke concentration, and temperature (temp). Data is partitioned by day and device.';\n",
"\"\"\"\n",
"session.execute(table_query)\n",
"\n",
"pstmt = session.prepare(\n",
" \"\"\"\n",
"INSERT INTO iot_data (day, device, ts, co, humidity, light, lpg, motion, smoke, temp)\n",
"VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)\n",
"\"\"\"\n",
")\n",
"\n",
"\n",
"def insert_data_batch(name, group):\n",
" batch = BatchStatement()\n",
" day, device = name\n",
" print(f\"Inserting batch for day: {day}, device: {device}\")\n",
"\n",
" for _, row in group.iterrows():\n",
" timestamp = datetime.fromtimestamp(row[\"ts\"], UTC)\n",
" batch.add(\n",
" pstmt,\n",
" (\n",
" day,\n",
" row[\"device\"],\n",
" timestamp,\n",
" row[\"co\"],\n",
" row[\"humidity\"],\n",
" row[\"light\"],\n",
" row[\"lpg\"],\n",
" row[\"motion\"],\n",
" row[\"smoke\"],\n",
" row[\"temp\"],\n",
" ),\n",
" )\n",
"\n",
" session.execute(batch)\n",
"\n",
"\n",
"# Convert columns to appropriate types\n",
"df[\"light\"] = df[\"light\"] == \"true\"\n",
"df[\"motion\"] = df[\"motion\"] == \"true\"\n",
"df[\"ts\"] = df[\"ts\"].astype(float)\n",
"df[\"day\"] = df[\"ts\"].apply(\n",
" lambda x: datetime.fromtimestamp(x, UTC).strftime(\"%Y-%m-%d\")\n",
")\n",
"\n",
"grouped_df = df.groupby([\"day\", \"device\"])\n",
"\n",
"for name, group in grouped_df:\n",
" insert_data_batch(name, group)\n",
"\n",
"print(\"Data load complete\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(session.keyspace)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load the Tools"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Python `import` statements for the demo:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, create_openai_tools_agent\n",
"from langchain_community.agent_toolkits.cassandra_database.toolkit import (\n",
" CassandraDatabaseToolkit,\n",
")\n",
"from langchain_community.tools.cassandra_database.prompt import QUERY_PATH_PROMPT\n",
"from langchain_community.tools.cassandra_database.tool import (\n",
" GetSchemaCassandraDatabaseTool,\n",
" GetTableDataCassandraDatabaseTool,\n",
" QueryCassandraDatabaseTool,\n",
")\n",
"from langchain_community.utilities.cassandra_database import CassandraDatabase\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `CassandraDatabase` object is loaded from `cassio`, though it does accept a `Session`-type parameter as an alternative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a CassandraDatabase instance\n",
"db = CassandraDatabase(include_tables=[\"iot_sensors\", \"iot_data\"])\n",
"\n",
"# Create the Cassandra Database tools\n",
"query_tool = QueryCassandraDatabaseTool(db=db)\n",
"schema_tool = GetSchemaCassandraDatabaseTool(db=db)\n",
"select_data_tool = GetTableDataCassandraDatabaseTool(db=db)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The tools can be invoked directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Test the tools\n",
"print(\"Executing a CQL query:\")\n",
"query = \"SELECT * FROM iot_sensors LIMIT 5;\"\n",
"result = query_tool.run({\"query\": query})\n",
"print(result)\n",
"\n",
"print(\"\\nGetting the schema for a keyspace:\")\n",
"schema = schema_tool.run({\"keyspace\": keyspace})\n",
"print(schema)\n",
"\n",
"print(\"\\nGetting data from a table:\")\n",
"table = \"iot_data\"\n",
"predicate = \"day = '2020-07-14' and device = 'b8:27:eb:bf:9d:51'\"\n",
"data = select_data_tool.run(\n",
" {\"keyspace\": keyspace, \"table\": table, \"predicate\": predicate, \"limit\": 5}\n",
")\n",
"print(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Agent Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool\n",
"from langchain_experimental.utilities import PythonREPL\n",
"\n",
"python_repl = PythonREPL()\n",
"\n",
"repl_tool = Tool(\n",
" name=\"python_repl\",\n",
" description=\"A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.\",\n",
" func=python_repl.run,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-4-1106-preview\")\n",
"toolkit = CassandraDatabaseToolkit(db=db)\n",
"\n",
"# context = toolkit.get_context()\n",
"# tools = toolkit.get_tools()\n",
"tools = [schema_tool, select_data_tool, repl_tool]\n",
"\n",
"input = (\n",
" QUERY_PATH_PROMPT\n",
" + f\"\"\"\n",
"\n",
"Here is your task: In the {keyspace} keyspace, find the total number of times the temperature of each device has exceeded 23 degrees on July 14, 2020.\n",
" Create a summary report including the name of the room. Use Pandas if helpful.\n",
"\"\"\"\n",
")\n",
"\n",
"prompt = hub.pull(\"hwchase17/openai-tools-agent\")\n",
"\n",
"# messages = [\n",
"# HumanMessagePromptTemplate.from_template(input),\n",
"# AIMessage(content=QUERY_PATH_PROMPT),\n",
"# MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
"# ]\n",
"\n",
"# prompt = ChatPromptTemplate.from_messages(messages)\n",
"# print(prompt)\n",
"\n",
"# Choose the LLM that will drive the agent\n",
"# Only certain models support this\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-1106\", temperature=0)\n",
"\n",
"# Construct the OpenAI Tools agent\n",
"agent = create_openai_tools_agent(llm, tools, prompt)\n",
"\n",
"print(\"Available tools:\")\n",
"for tool in tools:\n",
" print(\"\\t\" + tool.name + \" - \" + tool.description + \" - \" + str(tool))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
"\n",
"response = agent_executor.invoke({\"input\": input})\n",
"\n",
"print(response[\"output\"])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

File diff suppressed because it is too large

View File

@@ -1,255 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# QA using Activeloop's DeepLake\n",
"In this tutorial, we are going to use Langchain + Activeloop's Deep Lake with GPT4 to semantically search and ask questions over a group chat.\n",
"\n",
"View a working demo [here](https://twitter.com/thisissukh_/status/1647223328363679745)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Install required packages"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!python3 -m pip install --upgrade langchain 'deeplake[enterprise]' openai tiktoken"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Add API keys"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"from langchain.chains import RetrievalQA\n",
"from langchain_community.vectorstores import DeepLake\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import (\n",
" CharacterTextSplitter,\n",
" RecursiveCharacterTextSplitter,\n",
")\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
"activeloop_token = getpass.getpass(\"Activeloop Token:\")\n",
"os.environ[\"ACTIVELOOP_TOKEN\"] = activeloop_token\n",
"os.environ[\"ACTIVELOOP_ORG\"] = getpass.getpass(\"Activeloop Org:\")\n",
"\n",
"org_id = os.environ[\"ACTIVELOOP_ORG\"]\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
"dataset_path = \"hub://\" + org_id + \"/data\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## 2. Create sample data"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can generate a sample group chat conversation using ChatGPT with this prompt:\n",
"\n",
"```\n",
"Generate a group chat conversation with three friends talking about their day, referencing real places and fictional names. Make it funny and as detailed as possible.\n",
"```\n",
"\n",
"I've already generated such a chat in `messages.txt`. We can keep it simple and use this for our example.\n",
"\n",
"## 3. Ingest chat embeddings\n",
"\n",
"We load the messages in the text file, chunk and upload to ActiveLoop Vector store."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[Document(page_content='Participants:\\n\\nJerry: Loves movies and is a bit of a klutz.\\nSamantha: Enthusiastic about food and always trying new restaurants.\\nBarry: A nature lover, but always manages to get lost.\\nJerry: Hey, guys! You won\\'t believe what happened to me at the Times Square AMC theater. I tripped over my own feet and spilled popcorn everywhere! 🍿💥\\n\\nSamantha: LOL, that\\'s so you, Jerry! Was the floor buttery enough for you to ice skate on after that? 😂\\n\\nBarry: Sounds like a regular Tuesday for you, Jerry. Meanwhile, I tried to find that new hiking trail in Central Park. You know, the one that\\'s supposed to be impossible to get lost on? Well, guess what...\\n\\nJerry: You found a hidden treasure?\\n\\nBarry: No, I got lost. AGAIN. 🧭🙄\\n\\nSamantha: Barry, you\\'d get lost in your own backyard! But speaking of treasures, I found this new sushi place in Little Tokyo. \"Samantha\\'s Sushi Symphony\" it\\'s called. Coincidence? I think not!\\n\\nJerry: Maybe they named it after your ability to eat your body weight in sushi. 🍣', metadata={}), Document(page_content='Barry: How do you even FIND all these places, Samantha?\\n\\nSamantha: Simple, I don\\'t rely on Barry\\'s navigation skills. 😉 But seriously, the wasabi there was hotter than Jerry\\'s love for Marvel movies!\\n\\nJerry: Hey, nothing wrong with a little superhero action. By the way, did you guys see the new \"Captain Crunch: Breakfast Avenger\" trailer?\\n\\nSamantha: Captain Crunch? Are you sure you didn\\'t get that from one of your Saturday morning cereal binges?\\n\\nBarry: Yeah, and did he defeat his arch-enemy, General Mills? 😆\\n\\nJerry: Ha-ha, very funny. Anyway, that sushi place sounds awesome, Samantha. Next time, let\\'s go together, and maybe Barry can guide us... if we want a city-wide tour first.\\n\\nBarry: As long as we\\'re not hiking, I\\'ll get us there... eventually. 😅\\n\\nSamantha: It\\'s a date! But Jerry, you\\'re banned from carrying any food items.\\n\\nJerry: Deal! Just promise me no wasabi challenges. I don\\'t want to end up like the time I tried Sriracha ice cream.', metadata={}), Document(page_content=\"Barry: Wait, what happened with Sriracha ice cream?\\n\\nJerry: Let's just say it was a hot situation. Literally. 🔥\\n\\nSamantha: 🤣 I still have the video!\\n\\nJerry: Samantha, if you value our friendship, that video will never see the light of day.\\n\\nSamantha: No promises, Jerry. No promises. 🤐😈\\n\\nBarry: I foresee a fun weekend ahead! 🎉\", metadata={})]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Your Deep Lake dataset has been successfully created!\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\\"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Dataset(path='hub://adilkhan/data', tensors=['embedding', 'id', 'metadata', 'text'])\n",
"\n",
" tensor htype shape dtype compression\n",
" ------- ------- ------- ------- ------- \n",
" embedding embedding (3, 1536) float32 None \n",
" id text (3, 1) str None \n",
" metadata json (3, 1) str None \n",
" text text (3, 1) str None \n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
" \r"
]
}
],
"source": [
"with open(\"messages.txt\") as f:\n",
" state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"pages = text_splitter.split_text(state_of_the_union)\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)\n",
"texts = text_splitter.create_documents(pages)\n",
"\n",
"print(texts)\n",
"\n",
"dataset_path = \"hub://\" + org_id + \"/data\"\n",
"embeddings = OpenAIEmbeddings()\n",
"db = DeepLake.from_documents(\n",
" texts, embeddings, dataset_path=dataset_path, overwrite=True\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"`Optional`: You can also use Deep Lake's Managed Tensor Database as a hosting service and run queries there. In order to do so, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# with open(\"messages.txt\") as f:\n",
"# state_of_the_union = f.read()\n",
"# text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"# pages = text_splitter.split_text(state_of_the_union)\n",
"\n",
"# text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)\n",
"# texts = text_splitter.create_documents(pages)\n",
"\n",
"# print(texts)\n",
"\n",
"# dataset_path = \"hub://\" + org + \"/data\"\n",
"# embeddings = OpenAIEmbeddings()\n",
"# db = DeepLake.from_documents(\n",
"# texts, embeddings, dataset_path=dataset_path, overwrite=True, runtime={\"tensor_db\": True}\n",
"# )"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. Ask questions\n",
"\n",
"Now we can ask a question and get an answer back with a semantic search:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"db = DeepLake(dataset_path=dataset_path, read_only=True, embedding=embeddings)\n",
"\n",
"retriever = db.as_retriever()\n",
"retriever.search_kwargs[\"distance_metric\"] = \"cos\"\n",
"retriever.search_kwargs[\"k\"] = 4\n",
"\n",
"qa = RetrievalQA.from_chain_type(\n",
" llm=OpenAI(), chain_type=\"stuff\", retriever=retriever, return_source_documents=False\n",
")\n",
"\n",
"# What was the restaurant the group was talking about called?\n",
"query = input(\"Enter query:\")\n",
"\n",
"# The Hungry Lobster\n",
"ans = qa({\"query\": query})\n",
"\n",
"print(ans)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

File diff suppressed because one or more lines are too long

View File

@@ -1,156 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Elasticsearch\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/qa_structured/integrations/elasticsearch.ipynb)\n",
"\n",
"We can use LLMs to interact with Elasticsearch analytics databases in natural language.\n",
"\n",
"This chain builds search queries via the Elasticsearch DSL API (filters and aggregations).\n",
"\n",
"The Elasticsearch client must have permissions for index listing, mapping description and search queries.\n",
"\n",
"See [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html) for instructions on how to run Elasticsearch locally."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain langchain-experimental openai elasticsearch\n",
"\n",
"# Set env var OPENAI_API_KEY or load from a .env file\n",
"# import dotenv\n",
"\n",
"# dotenv.load_dotenv()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"from elasticsearch import Elasticsearch\n",
"from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Initialize Elasticsearch python client.\n",
"# See https://elasticsearch-py.readthedocs.io/en/v8.8.2/api.html#elasticsearch.Elasticsearch\n",
"ELASTIC_SEARCH_SERVER = \"https://elastic:pass@localhost:9200\"\n",
"db = Elasticsearch(ELASTIC_SEARCH_SERVER)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Uncomment the next cell to initially populate your db."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# customers = [\n",
"# {\"firstname\": \"Jennifer\", \"lastname\": \"Walters\"},\n",
"# {\"firstname\": \"Monica\",\"lastname\":\"Rambeau\"},\n",
"# {\"firstname\": \"Carol\",\"lastname\":\"Danvers\"},\n",
"# {\"firstname\": \"Wanda\",\"lastname\":\"Maximoff\"},\n",
"# {\"firstname\": \"Jennifer\",\"lastname\":\"Takeda\"},\n",
"# ]\n",
"# for i, customer in enumerate(customers):\n",
"# db.create(index=\"customers\", document=customer, id=i)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(model=\"gpt-4\", temperature=0)\n",
"chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"question = \"What are the first names of all the customers?\"\n",
"chain.run(question)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can customize the prompt."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"PROMPT_TEMPLATE = \"\"\"Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n",
"\n",
"Unless told to do not query for all the columns from a specific index, only ask for a few relevant columns given the question.\n",
"\n",
"Pay attention to use only the column names that you can see in the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which index. Return the query as valid json.\n",
"\n",
"Use the following format:\n",
"\n",
"Question: Question here\n",
"ESQuery: Elasticsearch Query formatted as json\n",
"\"\"\"\n",
"\n",
"PROMPT = PromptTemplate.from_template(\n",
" PROMPT_TEMPLATE,\n",
")\n",
"chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, query_prompt=PROMPT)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -1,214 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2def22ea",
"metadata": {},
"source": [
"# Extraction with OpenAI Tools\n",
"\n",
"Performing extraction has never been easier! OpenAI's tool calling ability is the perfect thing to use as it allows for extracting multiple different elements from text that are different types. \n",
"\n",
"Models after 1106 use tools and support \"parallel function calling\" which makes this super easy."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5c628496",
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"from langchain.chains.openai_tools import create_extraction_chain_pydantic\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "afe9657b",
"metadata": {},
"outputs": [],
"source": [
"# Make sure to use a recent model that supports tools\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "bc0ca3b6",
"metadata": {},
"outputs": [],
"source": [
"# Pydantic is an easy way to define a schema\n",
"class Person(BaseModel):\n",
" \"\"\"Information about people to extract.\"\"\"\n",
"\n",
" name: str\n",
" age: Optional[int] = None"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "2036af68",
"metadata": {},
"outputs": [],
"source": [
"chain = create_extraction_chain_pydantic(Person, model)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "1748ad21",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Person(name='jane', age=2), Person(name='bob', age=3)]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"input\": \"jane is 2 and bob is 3\"})"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "c8262ce5",
"metadata": {},
"outputs": [],
"source": [
"# Let's define another element\n",
"class Class(BaseModel):\n",
" \"\"\"Information about classes to extract.\"\"\"\n",
"\n",
" teacher: str\n",
" students: List[str]"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "4973c104",
"metadata": {},
"outputs": [],
"source": [
"chain = create_extraction_chain_pydantic([Person, Class], model)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "e976a15e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Person(name='jane', age=2),\n",
" Person(name='bob', age=3),\n",
" Class(teacher='Mrs Sampson', students=['jane', 'bob'])]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"input\": \"jane is 2 and bob is 3 and they are in Mrs Sampson's class\"})"
]
},
{
"cell_type": "markdown",
"id": "6575a7d6",
"metadata": {},
"source": [
"## Under the hood\n",
"\n",
"Under the hood, this is a simple chain:"
]
},
{
"cell_type": "markdown",
"id": "b8ba83e5",
"metadata": {},
"source": [
"```python\n",
"from typing import Union, List, Type, Optional\n",
"\n",
"from langchain.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain.utils.openai_functions import convert_pydantic_to_openai_tool\n",
"from langchain_core.runnables import Runnable\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.messages import SystemMessage\n",
"from langchain_core.language_models import BaseLanguageModel\n",
"\n",
"_EXTRACTION_TEMPLATE = \"\"\"Extract and save the relevant entities mentioned \\\n",
"in the following passage together with their properties.\n",
"\n",
"If a property is not present and is not required in the function parameters, do not include it in the output.\"\"\" # noqa: E501\n",
"\n",
"\n",
"def create_extraction_chain_pydantic(\n",
" pydantic_schemas: Union[List[Type[BaseModel]], Type[BaseModel]],\n",
" llm: BaseLanguageModel,\n",
" system_message: str = _EXTRACTION_TEMPLATE,\n",
") -> Runnable:\n",
" if not isinstance(pydantic_schemas, list):\n",
" pydantic_schemas = [pydantic_schemas]\n",
" prompt = ChatPromptTemplate.from_messages([\n",
" (\"system\", system_message),\n",
" (\"user\", \"{input}\")\n",
" ])\n",
" tools = [convert_pydantic_to_openai_tool(p) for p in pydantic_schemas]\n",
" model = llm.bind(tools=tools)\n",
" chain = prompt | model | PydanticToolsParser(tools=pydantic_schemas)\n",
" return chain\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2eac6b68",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,245 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0fc0309d-4d49-4bb5-bec0-bd92c6fddb28",
"metadata": {},
"source": [
"## Fireworks.AI + LangChain + RAG\n",
" \n",
"[Fireworks AI](https://python.langchain.com/docs/integrations/llms/fireworks) wants to provide the best experience when working with LangChain, and here is an example of Fireworks + LangChain doing RAG\n",
"\n",
"See [our models page](https://fireworks.ai/models) for the full list of models. We use `accounts/fireworks/models/mixtral-8x7b-instruct` for RAG In this tutorial.\n",
"\n",
"For the RAG target, we will use the Gemma technical report https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d12fb75a-f707-48d5-82a5-efe2d041813c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"Found existing installation: langchain-fireworks 0.0.1\n",
"Uninstalling langchain-fireworks-0.0.1:\n",
" Successfully uninstalled langchain-fireworks-0.0.1\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"Obtaining file:///mnt/disks/data/langchain/libs/partners/fireworks\n",
" Installing build dependencies ... \u001b[?25ldone\n",
"\u001b[?25h Checking if build backend supports build_editable ... \u001b[?25ldone\n",
"\u001b[?25h Getting requirements to build editable ... \u001b[?25ldone\n",
"\u001b[?25h Preparing editable metadata (pyproject.toml) ... \u001b[?25ldone\n",
"\u001b[?25hRequirement already satisfied: aiohttp<4.0.0,>=3.9.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (3.9.3)\n",
"Requirement already satisfied: fireworks-ai<0.13.0,>=0.12.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (0.12.0)\n",
"Requirement already satisfied: langchain-core<0.2,>=0.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (0.1.23)\n",
"Requirement already satisfied: requests<3,>=2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (2.31.0)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (1.3.1)\n",
"Requirement already satisfied: attrs>=17.3.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (23.1.0)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (1.4.0)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (6.0.4)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (1.9.2)\n",
"Requirement already satisfied: async-timeout<5.0,>=4.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (4.0.3)\n",
"Requirement already satisfied: httpx in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.26.0)\n",
"Requirement already satisfied: httpx-sse in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.4.0)\n",
"Requirement already satisfied: pydantic in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (2.4.2)\n",
"Requirement already satisfied: Pillow in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (10.2.0)\n",
"Requirement already satisfied: PyYAML>=5.3 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (6.0.1)\n",
"Requirement already satisfied: anyio<5,>=3 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (3.7.1)\n",
"Requirement already satisfied: jsonpatch<2.0,>=1.33 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (1.33)\n",
"Requirement already satisfied: langsmith<0.2.0,>=0.1.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (0.1.5)\n",
"Requirement already satisfied: packaging<24.0,>=23.2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (23.2)\n",
"Requirement already satisfied: tenacity<9.0.0,>=8.1.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (8.2.3)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (3.3.0)\n",
"Requirement already satisfied: idna<4,>=2.5 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (3.4)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (2.0.6)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (2023.7.22)\n",
"Requirement already satisfied: sniffio>=1.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from anyio<5,>=3->langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (1.3.0)\n",
"Requirement already satisfied: exceptiongroup in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from anyio<5,>=3->langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (1.1.3)\n",
"Requirement already satisfied: jsonpointer>=1.9 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from jsonpatch<2.0,>=1.33->langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (2.4)\n",
"Requirement already satisfied: annotated-types>=0.4.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from pydantic->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.5.0)\n",
"Requirement already satisfied: pydantic-core==2.10.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from pydantic->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (2.10.1)\n",
"Requirement already satisfied: typing-extensions>=4.6.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from pydantic->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (4.8.0)\n",
"Requirement already satisfied: httpcore==1.* in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from httpx->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (1.0.2)\n",
"Requirement already satisfied: h11<0.15,>=0.13 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from httpcore==1.*->httpx->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.14.0)\n",
"Building wheels for collected packages: langchain-fireworks\n",
" Building editable for langchain-fireworks (pyproject.toml) ... \u001b[?25ldone\n",
"\u001b[?25h Created wheel for langchain-fireworks: filename=langchain_fireworks-0.0.1-py3-none-any.whl size=2228 sha256=564071b120b09ec31f2dc737733448a33bbb26e40b49fcde0c129ad26045259d\n",
" Stored in directory: /tmp/pip-ephem-wheel-cache-oz368vdk/wheels/e0/ad/31/d7e76dd73d61905ff7f369f5b0d21a4b5e7af4d3cb7487aece\n",
"Successfully built langchain-fireworks\n",
"Installing collected packages: langchain-fireworks\n",
"Successfully installed langchain-fireworks-0.0.1\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --quiet pypdf langchain-chroma tiktoken openai \n",
"%pip uninstall -y langchain-fireworks\n",
"%pip install --editable /mnt/disks/data/langchain/libs/partners/fireworks"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cf719376",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<module 'fireworks' from '/mnt/disks/data/langchain/.venv/lib/python3.9/site-packages/fireworks/__init__.py'>\n"
]
}
],
"source": [
"import fireworks\n",
"\n",
"print(fireworks)\n",
"import fireworks.client"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9ab49327-0532-4480-804c-d066c302a322",
"metadata": {},
"outputs": [],
"source": [
"# Load\n",
"import requests\n",
"from langchain_community.document_loaders import PyPDFLoader\n",
"\n",
"# Download the PDF from a URL and save it to a temporary location\n",
"url = \"https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf\"\n",
"response = requests.get(url, stream=True)\n",
"file_name = \"temp_file.pdf\"\n",
"with open(file_name, \"wb\") as pdf:\n",
" pdf.write(response.content)\n",
"\n",
"loader = PyPDFLoader(file_name)\n",
"data = loader.load()\n",
"\n",
"# Split\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Add to vectorDB\n",
"from langchain_chroma import Chroma\n",
"from langchain_fireworks.embeddings import FireworksEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(\n",
" documents=all_splits,\n",
" collection_name=\"rag-chroma\",\n",
" embedding=FireworksEmbeddings(),\n",
")\n",
"\n",
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "4efaddd9-3dbb-455c-ba54-0ad7f2d2ce0f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"\n",
"# RAG prompt\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"# LLM\n",
"from langchain_together import Together\n",
"\n",
"llm = Together(\n",
" model=\"mistralai/Mixtral-8x7B-Instruct-v0.1\",\n",
" temperature=0.0,\n",
" max_tokens=2000,\n",
" top_k=1,\n",
")\n",
"\n",
"# RAG chain\n",
"chain = (\n",
" RunnableParallel({\"context\": retriever, \"question\": RunnablePassthrough()})\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "88b1ee51-1b0f-4ebf-bb32-e50e843f0eeb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\nAnswer: The architectural details of Mixtral are as follows:\\n- Dimension (dim): 4096\\n- Number of layers (n\\\\_layers): 32\\n- Dimension of each head (head\\\\_dim): 128\\n- Hidden dimension (hidden\\\\_dim): 14336\\n- Number of heads (n\\\\_heads): 32\\n- Number of kv heads (n\\\\_kv\\\\_heads): 8\\n- Context length (context\\\\_len): 32768\\n- Vocabulary size (vocab\\\\_size): 32000\\n- Number of experts (num\\\\_experts): 8\\n- Number of top k experts (top\\\\_k\\\\_experts): 2\\n\\nMixtral is based on a transformer architecture and uses the same modifications as described in [18], with the notable exceptions that Mixtral supports a fully dense context length of 32k tokens, and the feedforward block picks from a set of 8 distinct groups of parameters. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively. This technique increases the number of parameters of a model while controlling cost and latency, as the model only uses a fraction of the total set of parameters per token. Mixtral is pretrained with multilingual data using a context size of 32k tokens. It either matches or exceeds the performance of Llama 2 70B and GPT-3.5, over several benchmarks. In particular, Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and multilingual benchmarks.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"What are the Architectural details of Mixtral?\")"
]
},
{
"cell_type": "markdown",
"id": "755cf871-26b7-4e30-8b91-9ffd698470f4",
"metadata": {},
"source": [
"Trace: \n",
"\n",
"https://smith.langchain.com/public/935fd642-06a6-4b42-98e3-6074f93115cd/r"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,493 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0f0b9afa",
"metadata": {},
"source": [
"# Retrieve as you generate with FLARE\n",
"\n",
"This notebook is an implementation of Forward-Looking Active REtrieval augmented generation (FLARE).\n",
"\n",
"Please see the original repo [here](https://github.com/jzbjyb/FLARE/tree/main).\n",
"\n",
"The basic idea is:\n",
"\n",
"- Start answering a question\n",
"- If you start generating tokens the model is uncertain about, look up relevant documents\n",
"- Use those documents to continue generating\n",
"- Repeat until finished\n",
"\n",
"There is a lot of cool detail in how the lookup of relevant documents is done.\n",
"Basically, the tokens that model is uncertain about are highlighted, and then an LLM is called to generate a question that would lead to that answer. For example, if the generated text is `Joe Biden went to Harvard`, and the tokens the model was uncertain about was `Harvard`, then a good generated question would be `where did Joe Biden go to college`. This generated question is then used in a retrieval step to fetch relevant documents.\n",
"\n",
"In order to set up this chain, we will need three things:\n",
"\n",
"- An LLM to generate the answer\n",
"- An LLM to generate hypothetical questions to use in retrieval\n",
"- A retriever to use to look up answers for\n",
"\n",
"The LLM that we use to generate the answer needs to return logprobs so we can identify uncertain tokens. For that reason, we HIGHLY recommend that you use the OpenAI wrapper (NB: not the ChatOpenAI wrapper, as that does not return logprobs).\n",
"\n",
"The LLM we use to generate hypothetical questions to use in retrieval can be anything. In this notebook we will use ChatOpenAI because it is fast and cheap.\n",
"\n",
"The retriever can be anything. In this notebook we will use [SERPER](https://serper.dev/) search engine, because it is cheap.\n",
"\n",
"Other important parameters to understand:\n",
"\n",
"- `max_generation_len`: The maximum number of tokens to generate before stopping to check if any are uncertain\n",
"- `min_prob`: Any tokens generated with probability below this will be considered uncertain"
]
},
{
"cell_type": "markdown",
"id": "a7e4b63d",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "042bb161",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"SERPER_API_KEY\"] = \"\"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a7888f4a",
"metadata": {},
"outputs": [],
"source": [
"from typing import Any, List\n",
"\n",
"from langchain.callbacks.manager import (\n",
" AsyncCallbackManagerForRetrieverRun,\n",
" CallbackManagerForRetrieverRun,\n",
")\n",
"from langchain_community.utilities import GoogleSerperAPIWrapper\n",
"from langchain_core.documents import Document\n",
"from langchain_core.retrievers import BaseRetriever\n",
"from langchain_openai import ChatOpenAI, OpenAI"
]
},
{
"cell_type": "markdown",
"id": "5f552dce",
"metadata": {},
"source": [
"## Retriever"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "59c7d875",
"metadata": {},
"outputs": [],
"source": [
"class SerperSearchRetriever(BaseRetriever):\n",
" search: GoogleSerperAPIWrapper = None\n",
"\n",
" def _get_relevant_documents(\n",
" self, query: str, *, run_manager: CallbackManagerForRetrieverRun, **kwargs: Any\n",
" ) -> List[Document]:\n",
" return [Document(page_content=self.search.run(query))]\n",
"\n",
" async def _aget_relevant_documents(\n",
" self,\n",
" query: str,\n",
" *,\n",
" run_manager: AsyncCallbackManagerForRetrieverRun,\n",
" **kwargs: Any,\n",
" ) -> List[Document]:\n",
" raise NotImplementedError()\n",
"\n",
"\n",
"retriever = SerperSearchRetriever(search=GoogleSerperAPIWrapper())"
]
},
{
"cell_type": "markdown",
"id": "92478194",
"metadata": {},
"source": [
"## FLARE Chain"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "577e7c2c",
"metadata": {},
"outputs": [],
"source": [
"# We set this so we can see what exactly is going on\n",
"from langchain.globals import set_verbose\n",
"\n",
"set_verbose(True)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "300d783e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import FlareChain\n",
"\n",
"flare = FlareChain.from_llm(\n",
" ChatOpenAI(temperature=0),\n",
" retriever=retriever,\n",
" max_generation_len=164,\n",
" min_prob=0.3,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1f3d5e90",
"metadata": {},
"outputs": [],
"source": [
"query = \"explain in great detail the difference between the langchain framework and baby agi\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "4b1bfa8c",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new FlareChain chain...\u001b[0m\n",
"\u001b[36;1m\u001b[1;3mCurrent Response: \u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mRespond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.\n",
"\n",
">>> CONTEXT: \n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> RESPONSE: \u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new QuestionGeneratorChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" decentralized platform for natural language processing\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" uses a blockchain\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" distributed ledger to\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" process data, allowing for secure and transparent data sharing.\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" set of tools\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" help developers create\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" create an AI system\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.\n",
"\n",
"Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.\n",
"\n",
"In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for\n",
"\n",
"The question to which the answer is the term/entity/phrase \" NLP applications\" is:\u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[33;1m\u001b[1;3mGenerated Questions: ['What is the Langchain Framework?', 'What technology does the Langchain Framework use to store and process data for secure and transparent data sharing?', 'What technology does the Langchain Framework use to store and process data?', 'What does the Langchain Framework use a blockchain-based distributed ledger for?', 'What does the Langchain Framework provide in addition to a decentralized platform for natural language processing applications?', 'What set of tools and services does the Langchain Framework provide?', 'What is the purpose of Baby AGI?', 'What type of applications is the Langchain Framework designed for?']\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new _OpenAIResponseChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mRespond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.\n",
"\n",
">>> CONTEXT: LangChain: Software. LangChain is a software development framework designed to simplify the creation of applications using large language models. LangChain Initial release date: October 2022. LangChain Programming languages: Python and JavaScript. LangChain Developer(s): Harrison Chase. LangChain License: MIT License. LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only ... Type: Software framework. At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. LangChain is a powerful tool that can be used to work with Large Language Models (LLMs). LLMs are very general in nature, which means that while they can ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. LangChain is a software development framework designed to simplify the creation of applications using large language models (LLMs). Written in: Python and JavaScript. Initial release: October 2022. LangChain - The A.I-native developer toolkit We started LangChain with the intent to build a modular and flexible framework for developing A.I- ... LangChain explained in 3 minutes - LangChain is a ... Duration: 3:03. Posted: Apr 13, 2023. LangChain is a framework built to help you build LLM-powered applications more easily by providing you with the following:. LangChain is a framework that enables quick and easy development of applications that make use of Large Language Models, for example, GPT-3. LangChain is a powerful open-source framework for developing applications powered by language models. It connects to the AI models you want to ...\n",
"\n",
"LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... Missing: secure | Must include:secure. Blockchain is the best way to secure the data of the shared community. Utilizing the capabilities of the blockchain nobody can read or interfere ... This modern technology consists of a chain of blocks that allows to securely store all committed transactions using shared and distributed ... A Blockchain network is used in the healthcare system to preserve and exchange patient data through hospitals, diagnostic laboratories, pharmacy firms, and ... In this article, I will walk you through the process of using the LangChain.js library with Google Cloud Functions, helping you leverage the ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. Missing: transparent | Must include:transparent. This technology keeps a distributed ledger on each blockchain node, making it more secure and transparent. The blockchain network can operate smart ... blockchain technology can offer a highly secured health data ledger to ... framework can be employed to store encrypted healthcare data in a ... In a simplified way, Blockchain is a data structure that stores transactions in an ordered way and linked to the previous block, serving as a ... Blockchain technology is a decentralized, distributed ledger that stores the record of ownership of digital assets. Missing: Langchain | Must include:Langchain.\n",
"\n",
"LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. This documentation covers the steps to integrate Pinecone, a high-performance vector database, with LangChain, a framework for building applications powered ... The ability to connect to any model, ingest any custom database, and build upon a framework that can take action provides numerous use cases for ... With LangChain, developers can use a framework that abstracts the core building blocks of LLM applications. LangChain empowers developers to ... Build a question-answering tool based on financial data with LangChain & Deep Lake's unified & streamable data store. Browse applications built on LangChain technology. Explore PoC and MVP applications created by our community and discover innovative use cases for LangChain ... LangChain is a great framework that can be used for developing applications powered by LLMs. When you intend to enhance your application ... In this blog, we'll introduce you to LangChain and Ray Serve and how to use them to build a search engine using LLM embeddings and a vector ... The LinkChain Framework simplifies embedding creation and storage using Pinecone and Chroma, with code that loads files, splits documents, and creates embedding ... Missing: technology | Must include:technology.\n",
"\n",
"Blockchain is one type of a distributed ledger. Distributed ledgers use independent computers (referred to as nodes) to record, share and ... Missing: Langchain | Must include:Langchain. Blockchain is used in distributed storage software where huge data is broken down into chunks. This is available in encrypted data across a ... People sometimes use the terms 'Blockchain' and 'Distributed Ledger' interchangeably. This post aims to analyze the features of each. A distributed ledger ... Missing: Framework | Must include:Framework. Think of a “distributed ledger” that uses cryptography to allow each participant in the transaction to add to the ledger in a secure way without ... In this paper, we provide an overview of the history of trade settlement and discuss this nascent technology that may now transform traditional ... Missing: Langchain | Must include:Langchain. LangChain is a blockchain-based language education platform that aims to revolutionize the way people learn languages. Missing: Framework | Must include:Framework. It uses the distributed ledger technology framework and Smart contract engine for building scalable Business Blockchain applications. The fabric ... It looks at the assets the use case is handling, the different parties conducting transactions, and the smart contract, distributed ... Are you curious to know how Blockchain and Distributed ... Duration: 44:31. Posted: May 4, 2021. A blockchain is a distributed and immutable ledger to transfer ownership, record transactions, track assets, and ensure transparency, security, trust and value ... Missing: Langchain | Must include:Langchain.\n",
"\n",
"LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. Missing: decentralized | Must include:decentralized. LangChain, created by Harrison Chase, is a Python library that provides out-of-the-box support to build NLP applications using LLMs. Missing: decentralized | Must include:decentralized. LangChain provides a standard interface for chains, enabling developers to create sequences of calls that go beyond a single LLM call. Chains ... Missing: decentralized platform natural. LangChain is a powerful framework that simplifies the process of building advanced language model applications. Missing: platform | Must include:platform. Are your language models ignoring previous instructions ... Duration: 32:23. Posted: Feb 21, 2023. LangChain is a framework that enables quick and easy development of applications ... Prompting is the new way of programming NLP models. Missing: decentralized platform. It then uses natural language processing and machine learning algorithms to search ... Summarization is handled via cohere, QnA is handled via langchain, ... LangChain is a framework for developing applications powered by language models. ... There are several main modules that LangChain provides support for. Missing: decentralized platform. In the healthcare-chain system, blockchain provides an appreciated secure ... The entire process of adding new and previous block data is performed based on ... ChatGPT is a large language model developed by OpenAI, ... tool for a wide range of applications, including natural language processing, ...\n",
"\n",
"LangChain is a powerful tool that can be used to work with Large Language ... If an API key has been provided, create an OpenAI language model instance At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. A tutorial of the six core modules of the LangChain Python package covering models, prompts, chains, agents, indexes, and memory with OpenAI ... LangChain's collection of tools refers to a set of tools provided by the LangChain framework for developing applications powered by language models. LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only ... LangChain is an open-source library that provides developers with the tools to build applications powered by large language models (LLMs). LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... Plan-and-Execute Agents · Feature Stores and LLMs · Structured Tools · Auto-Evaluator Opportunities · Callbacks Improvements · Unleashing the power ... Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. · LLM: The language model ... LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\n",
"\n",
"Baby AGI has the ability to complete tasks, generate new tasks based on previous results, and prioritize tasks in real-time. This system is exploring and demonstrating to us the potential of large language models, such as GPT and how it can autonomously perform tasks. Apr 17, 2023\n",
"\n",
"At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.\n",
">>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi\n",
">>> RESPONSE: \u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' LangChain is a framework for developing applications powered by language models. It provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. On the other hand, Baby AGI is an AI system that is exploring and demonstrating the potential of large language models, such as GPT, and how it can autonomously perform tasks. Baby AGI has the ability to complete tasks, generate new tasks based on previous results, and prioritize tasks in real-time. '"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"flare.run(query)"
]
},
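{
"cell_type": "markdown",
"id": "flare-minprob-note",
"metadata": {},
"source": [
"The trace above shows why the chain kept generating questions: the initial draft hallucinated (LangChain is not blockchain-based), and the spans it was least confident about (e.g. \" distributed ledger to\", \" set of tools\") are exactly the ones that spawned follow-up questions. The trigger is a token-probability threshold. Below is a minimal, self-contained sketch of that thresholding idea, not `FlareChain`'s internal code; the token/log-probability pairs are invented for illustration:\n",
"\n",
"```python\n",
"import math\n",
"\n",
"min_prob = 0.3  # same role as FlareChain's min_prob parameter\n",
"\n",
"# (token, logprob) pairs -- illustrative values only\n",
"tokens = [(\"LangChain\", -0.05), (\"is\", -0.02), (\"a\", -0.1),\n",
"          (\"decentralized\", -2.3), (\"platform\", -1.9)]\n",
"\n",
"# FLARE flags tokens whose probability falls below the threshold and\n",
"# targets the span they belong to with a retrieval question.\n",
"low_confidence = [tok for tok, lp in tokens if math.exp(lp) < min_prob]\n",
"print(low_confidence)  # ['decentralized', 'platform']\n",
"```"
]
},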
{
"cell_type": "code",
"execution_count": 8,
"id": "7bed8944",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nThe Langchain framework and Baby AGI are both artificial intelligence (AI) frameworks that are used to create intelligent agents. The Langchain framework is a supervised learning system that is based on the concept of “language chains”. It uses a set of rules to map natural language inputs to specific outputs. It is a general-purpose AI framework and can be used to build applications such as natural language processing (NLP), chatbots, and more.\\n\\nBaby AGI, on the other hand, is an unsupervised learning system that uses neural networks and reinforcement learning to learn from its environment. It is used to create intelligent agents that can adapt to changing environments. It is a more advanced AI system and can be used to build more complex applications such as game playing, robotic vision, and more.\\n\\nThe main difference between the two is that the Langchain framework uses supervised learning while Baby AGI uses unsupervised learning. The Langchain framework is a general-purpose AI framework that can be used for various applications, while Baby AGI is a more advanced AI system that can be used to create more complex applications.'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm = OpenAI()\n",
"llm.invoke(query)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8fb76286",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new FlareChain chain...\u001b[0m\n",
"\u001b[36;1m\u001b[1;3mCurrent Response: \u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mRespond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.\n",
"\n",
">>> CONTEXT: \n",
">>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?\n",
">>> RESPONSE: \u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new QuestionGeneratorChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"\n",
"Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications. \n",
"\n",
"FINISHED\n",
"\n",
"The question to which the answer is the term/entity/phrase \" very different origin\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"\n",
"Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications. \n",
"\n",
"FINISHED\n",
"\n",
"The question to which the answer is the term/entity/phrase \" 2020 by a\" is:\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n",
"\n",
">>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?\n",
">>> EXISTING PARTIAL RESPONSE: \n",
"\n",
"Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications. \n",
"\n",
"FINISHED\n",
"\n",
"The question to which the answer is the term/entity/phrase \" developers as a platform for creating and managing decentralized language learning applications.\" is:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[33;1m\u001b[1;3mGenerated Questions: ['How would you describe the origin stories of Langchain and Bitcoin in terms of their similarities or differences?', 'When was Langchain created and by whom?', 'What was the purpose of creating Langchain?']\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new _OpenAIResponseChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mRespond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.\n",
"\n",
">>> CONTEXT: Bitcoin and Ethereum have many similarities but different long-term visions and limitations. Ethereum changed from proof of work to proof of ... Bitcoin will be around for many years and examining its white paper origins is a great exercise in understanding why. Satoshi Nakamoto's blueprint describes ... Bitcoin is a new currency that was created in 2009 by an unknown person using the alias Satoshi Nakamoto. Transactions are made with no middle men meaning, no ... Missing: Langchain | Must include:Langchain. By comparison, Bitcoin transaction speeds are tremendously lower. ... learn about its history and its role in the emergence of the Bitcoin ... LangChain is a powerful framework that simplifies the process of ... tasks like document retrieval, clustering, and similarity comparisons. Key terms: Bitcoin System, Blockchain Technology, ... Furthermore, the research paper will discuss and compare the five payment. Blockchain first appeared in Nakamoto's Bitcoin white paper that describes a new decentralized cryptocurrency [1]. Bitcoin takes the blockchain technology ... Missing: stories | Must include:stories. A score of 0 means there were not enough data for this term. Google trends was accessed on 5 November 2018 with searches for bitcoin, euro, gold ... Contracts, transactions, and records of them provide critical structure in our economic system, but they haven't kept up with the world's digital ... Missing: Langchain | Must include:Langchain. Of course, traders try to make a profit on their portfolio in this way.The difference between investing and trading is the regularity with which ...\n",
"\n",
"After all these giant leaps forward in the LLM space, OpenAI released ChatGPT — thrusting LLMs into the spotlight. LangChain appeared around the same time. Its creator, Harrison Chase, made the first commit in late October 2022. Leaving a short couple of months of development before getting caught in the LLM wave.\n",
"\n",
"At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.\n",
">>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?\n",
">>> RESPONSE: \u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' The origin stories of LangChain and Bitcoin are quite different. Bitcoin was created in 2009 by an unknown person using the alias Satoshi Nakamoto. LangChain was created in late October 2022 by Harrison Chase. Bitcoin is a decentralized cryptocurrency, while LangChain is a framework built around LLMs. '"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"flare.run(\"how are the origin stories of langchain and bitcoin similar or different?\")"
]
},
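{
"cell_type": "markdown",
"id": "flare-tuning-note",
"metadata": {},
"source": [
"Both answers above come from the same chain. If it triggers too many (or too few) retrieval rounds for your use case, the two knobs to adjust are `min_prob` (how unsure a token must be to trigger retrieval) and `max_generation_len` (how far ahead the chain generates before checking). Here is a sketch of rebuilding the chain with different values, swapping in a local vector store for the web-search retriever defined earlier in this notebook; the corpus, parameter values, and variable names are illustrative, not prescriptive:\n",
"\n",
"```python\n",
"from langchain.chains import FlareChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.vectorstores import FAISS\n",
"\n",
"# Any LangChain retriever works; here, a tiny in-memory FAISS index\n",
"# (requires the faiss-cpu package).\n",
"store = FAISS.from_texts(\n",
"    [\"LangChain was first released in October 2022 by Harrison Chase.\"],\n",
"    OpenAIEmbeddings(),\n",
")\n",
"\n",
"flare_local = FlareChain.from_llm(\n",
"    ChatOpenAI(temperature=0),\n",
"    retriever=store.as_retriever(),\n",
"    max_generation_len=164,  # tokens generated per step before the check\n",
"    min_prob=0.2,            # lower threshold -> fewer retrieval rounds\n",
")\n",
"```"
]
},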
{
"cell_type": "code",
"execution_count": null,
"id": "fbadd022",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
