mirror of https://github.com/hwchase17/langchain.git (synced 2026-01-24 05:50:18 +00:00)
Compare commits
196 Commits
| SHA1 |
|---|
| 37aec1e050 |
| 1b1a2d5740 |
| 7897483819 |
| 8e88ba16a8 |
| b2138508cb |
| e05bb938de |
| d1fdcd4fcb |
| 64c4a698a8 |
| 3468c038ba |
| d31d705407 |
| 0b4b9e61fc |
| 2424fff3f1 |
| 56cc5b847c |
| b257e6a4e8 |
| 35d726dc15 |
| 9dead1034c |
| 1815ea2fdb |
| a830b809f3 |
| 36204c2baf |
| 9e0ae56287 |
| d85d4d7822 |
| 0660c06cf1 |
| 79cf01366e |
| 61f5ea4b5e |
| 221134d239 |
| e130680d74 |
| 689853902e |
| eb903e211c |
| 4209457bdc |
| 9adaa78c65 |
| 5545de0466 |
| 5c2243ee91 |
| f10c17c6a4 |
| a476147189 |
| df4960a6d8 |
| 389459af8f |
| 60d009f75a |
| 11505f95d3 |
| 38cee5fae0 |
| 3afa68e30e |
| 5c564e62e1 |
| 5d40e36c75 |
| c2a0a6b6df |
| d6888a90d0 |
| 6908634428 |
| 3fd9f2752f |
| d38c8369b3 |
| 2c58dca5f0 |
| 48fde2004f |
| a8c68d4ffa |
| 224ec0cfd3 |
| d12b88557a |
| cadfce295f |
| 68e12d34a9 |
| 0ca539eb85 |
| 05bbf943f2 |
| 134f085824 |
| 52c194ec3a |
| c8195769f2 |
| b22da81af8 |
| d6acb3ed7e |
| 4254028c52 |
| fcad1d2965 |
| 922d7910ef |
| afcc12d99e |
| a35445c65f |
| c08b622b2d |
| 4b16601d33 |
| 25c98dbba9 |
| 923696b664 |
| 56ee56736b |
| 4db8d82c55 |
| 231d553824 |
| b8af5b0a8e |
| 7cadf00570 |
| 03e79e62c2 |
| 237026c060 |
| 76230d2c08 |
| 01c5cd365b |
| b10cefb160 |
| f65067b1da |
| e88fdbba29 |
| 7e5e5e87d8 |
| b43996e553 |
| 9ce38726a2 |
| 6ce276e099 |
| 3fbb2f3e52 |
| 2f0c9d8269 |
| 9544d64ad8 |
| dad16af711 |
| 0af6e64ad9 |
| f3449ccd20 |
| bc6f6e968e |
| c6a733802b |
| 683e97766d |
| dff24285ea |
| b9410f2b6f |
| 4e47fe1dce |
| 9298aff783 |
| 3c168d4d2a |
| 4e4b8805d6 |
| 20fe515f20 |
| 374f4cd2bf |
| 6720458c7d |
| f5a57fc1ef |
| f05c29180d |
| cae6f611d3 |
| cdd75b687e |
| 8ec7aade9f |
| 28c39503eb |
| 869a49a0ab |
| ebf998acb6 |
| 43257a295c |
| 1d568e1add |
| 5a71b81609 |
| 988f6d9912 |
| 3f16acc538 |
| b7e559c7e1 |
| cce132d146 |
| d9f1bcf366 |
| 3da1a65fa0 |
| ab3c124ffb |
| aa212c3d0e |
| 04d58018e1 |
| 3d74d5e24d |
| 47070b8314 |
| 07c2649753 |
| c26ec7789f |
| 7108084947 |
| b5b2d07681 |
| 69f4e402e4 |
| c25b174db5 |
| fd5f549a9e |
| 53f35c5f5c |
| 9fc28d50c3 |
| 276c6ba115 |
| 1f8094938f |
| d2cb95c39d |
| e7e670805c |
| 286a29a49e |
| 2008a6438c |
| 583dc49477 |
| 81052ee18e |
| 46e28b9613 |
| 9c2c9c5274 |
| 87af2360df |
| 6e3f39963f |
| d5c2ce7c2e |
| 079d1f3b8e |
| 67c4fd0ad0 |
| d3744175bf |
| a2840a2b42 |
| 386ea48432 |
| d76f026d72 |
| 95ae40ff90 |
| d5d7ba582a |
| f09f82541b |
| 11f13aed53 |
| ba20c14e28 |
| deb8168329 |
| 8ba97cb408 |
| 44dae6936b |
| 922193475a |
| 55f0f8dae8 |
| 69d9eae5cd |
| 69f5f82804 |
| 34ffb94770 |
| 8c150ad7f6 |
| bb137fd6e7 |
| ace2234391 |
| ebf749c40c |
| 283a3ecc9c |
| 99afc1b4f8 |
| 0d44746430 |
| ff79a99825 |
| 66f8cb015d |
| 4d6243fa87 |
| f82bdf4613 |
| 963ff93476 |
| d0505c0d47 |
| 4f23aa677a |
| 95a1b598fe |
| 7c4f340cc0 |
| 3df0f03928 |
| f3cc9bba5b |
| 1afdb40b48 |
| 325fdde8b4 |
| 2719e49718 |
| 02dce74b97 |
| d0ce374731 |
| 6fcba975d0 |
| dd0374560a |
| ee69116761 |
| 03bf6ef473 |
| acb82cf25e |
| 9d9198de0b |
132 .github/CODE_OF_CONDUCT.md vendored Normal file
@@ -0,0 +1,132 @@
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
  community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of
  any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
  without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
conduct@langchain.dev.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of
actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the
community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].

Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
51 .github/CONTRIBUTING.md vendored
@@ -1,20 +1,19 @@
 # Contributing to LangChain
 
 Hi there! Thank you for even being interested in contributing to LangChain.
-As an open source project in a rapidly developing field, we are extremely open
-to contributions, whether they be in the form of new features, improved infra, better documentation, or bug fixes.
+As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
 
 ## 🗺️ Guidelines
 
 ### 👩‍💻 Contributing Code
 
-To contribute to this project, please follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
+To contribute to this project, please follow the ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
 Please do not try to push directly to this repo unless you are a maintainer.
 
 Please follow the checked-in pull request template when opening pull requests. Note related issues and tag relevant
 maintainers.
 
-Pull requests cannot land without passing the formatting, linting and testing checks first. See [Testing](#testing) and
+Pull requests cannot land without passing the formatting, linting, and testing checks first. See [Testing](#testing) and
 [Formatting and Linting](#formatting-and-linting) for how to run these checks locally.
 
 It's essential that we maintain great documentation and testing. If you:
@@ -27,16 +26,14 @@ It's essential that we maintain great documentation and testing. If you:
 - Add a demo notebook in `docs/modules`.
 - Add unit and integration tests.
 
-We're a small, building-oriented team. If there's something you'd like to add or change, opening a pull request is the
+We are a small, progress-oriented team. If there's something you'd like to add or change, opening a pull request is the
 best way to get our attention.
 
 ### 🚩GitHub Issues
 
-Our [issues](https://github.com/langchain-ai/langchain/issues) page is kept up to date
-with bugs, improvements, and feature requests.
+Our [issues](https://github.com/langchain-ai/langchain/issues) page is kept up to date with bugs, improvements, and feature requests.
 
-There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help
-organize issues.
+There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help organize issues.
 
 If you start working on an issue, please assign it to yourself.
 
@@ -59,12 +56,12 @@ we do not want these to get in the way of getting good code into the codebase.
 
 ## 🚀 Quick Start
 
-This quick start describes running the repository locally.
+This quick start guide explains how to run the repository locally.
 For a [development container](https://containers.dev/), see the [.devcontainer folder](https://github.com/langchain-ai/langchain/tree/master/.devcontainer).
 
 ### Dependency Management: Poetry and other env/dependency managers
 
-This project uses [Poetry](https://python-poetry.org/) v1.6.1+ as a dependency manager.
+This project utilizes [Poetry](https://python-poetry.org/) v1.6.1+ as a dependency manager.
 
 ❗Note: *Before installing Poetry*, if you use `Conda`, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`)
@@ -75,11 +72,11 @@ tell Poetry to use the virtualenv python environment (`poetry config virtualenvs
 
 ### Core vs. Experimental
 
-There are two separate projects in this repository:
-- `langchain`: core langchain code, abstractions, and use cases
-- `langchain.experimental`: see the [Experimental README](../libs/experimental/README.md) for more information.
+This repository contains two separate projects:
+- `langchain`: core langchain code, abstractions, and use cases.
+- `langchain.experimental`: see the [Experimental README](https://github.com/langchain-ai/langchain/tree/master/libs/experimental/README.md) for more information.
 
-Each of these has their own development environment. Docs are run from the top-level makefile, but development
+Each of these has its own development environment. Docs are run from the top-level makefile, but development
 is split across separate test & release flows.
 
 For this quickstart, start with langchain core:
@@ -129,7 +126,7 @@ To run unit tests in Docker:
 make docker_tests
 ```
 
-There are also [integration tests and code-coverage](../libs/langchain/tests/README.md) available.
+There are also [integration tests and code-coverage](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/tests/README.md) available.
 
 ### Formatting and Linting
@@ -139,12 +136,19 @@ Run these locally before submitting a PR; the CI system will check also.
 
 Formatting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/) and [ruff](https://docs.astral.sh/ruff/rules/).
 
-To run formatting for this project:
+To run formatting for docs, cookbook and templates:
 
 ```bash
 make format
 ```
 
+To run formatting for a library, run the same command from the relevant library directory:
+
+```bash
+cd libs/{LIBRARY}
+make format
+```
 
 Additionally, you can run the formatter only on the files that have been modified in your current branch as compared to the master branch using the format_diff command:
 
 ```bash
@@ -157,12 +161,19 @@ This is especially useful when you have made changes to a subset of the project
 
 Linting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/), [ruff](https://docs.astral.sh/ruff/rules/), and [mypy](http://mypy-lang.org/).
 
-To run linting for this project:
+To run linting for docs, cookbook and templates:
 
 ```bash
 make lint
 ```
 
+To run linting for a library, run the same command from the relevant library directory:
+
+```bash
+cd libs/{LIBRARY}
+make lint
+```
 
 In addition, you can run the linter only on the files that have been modified in your current branch as compared to the master branch using the lint_diff command:
 
 ```bash
@@ -282,7 +293,7 @@ make docs_build
 make api_docs_build
 ```
 
-Finally, you can run the linkchecker to make sure all links are valid:
+Finally, run the link checker to ensure all links are valid:
 
 ```bash
 make docs_linkcheck
@@ -307,4 +318,4 @@ even patch releases may contain [non-backwards-compatible changes](https://semve
 ### 🌟 Recognition
 
 If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
-If you have a Twitter account you would like us to mention, please let us know in the PR or in another manner.
+If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.
57 .github/workflows/_compile_integration_test.yml vendored Normal file
@@ -0,0 +1,57 @@
name: compile-integration-test

on:
  workflow_call:
    inputs:
      working-directory:
        required: true
        type: string
        description: "From which folder this pipeline executes"

env:
  POETRY_VERSION: "1.6.1"

jobs:
  build:
    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version:
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
    name: Python ${{ matrix.python-version }}
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: ${{ inputs.working-directory }}
          cache-key: compile-integration

      - name: Install integration dependencies
        shell: bash
        run: poetry install --with=test_integration

      - name: Check integration tests compile
        shell: bash
        run: poetry run pytest -m compile tests/integration_tests

      - name: Ensure the tests did not create any additional files
        shell: bash
        run: |
          set -eu

          STATUS="$(git status)"
          echo "$STATUS"

          # grep will exit non-zero if the target message isn't found,
          # and `set -e` above will cause the step to fail.
          echo "$STATUS" | grep 'nothing to commit, working tree clean'
12 .github/workflows/_lint.yml vendored
@@ -7,6 +7,10 @@ on:
       required: true
       type: string
       description: "From which folder this pipeline executes"
+    langchain-location:
+      required: false
+      type: string
+      description: "Relative path to the langchain library folder"
 
 env:
   POETRY_VERSION: "1.6.1"
@@ -34,7 +38,7 @@ jobs:
           - "3.8"
           - "3.11"
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
         with:
           # Fetch the last FETCH_DEPTH commits, so the mtime-changing script
           # can accurately set the mtimes of files modified in the last FETCH_DEPTH commits.
@@ -116,9 +120,11 @@ jobs:
 
       - name: Install langchain editable
         working-directory: ${{ inputs.working-directory }}
-        if: ${{ inputs.working-directory != 'libs/langchain' }}
+        if: ${{ inputs.langchain-location }}
+        env:
+          LANGCHAIN_LOCATION: ${{ inputs.langchain-location }}
         run: |
-          pip install -e ../langchain
+          pip install -e "$LANGCHAIN_LOCATION"
 
       - name: Restore black cache
         uses: actions/cache@v3
@@ -26,7 +26,7 @@
       - "3.11"
     name: Pydantic v1/v2 compatibility - Python ${{ matrix.python-version }}
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
 
       - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
         uses: "./.github/actions/poetry_setup"
2 .github/workflows/_release.yml vendored
@@ -30,7 +30,7 @@ jobs:
       run:
         working-directory: ${{ inputs.working-directory }}
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
 
       - name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
         uses: "./.github/actions/poetry_setup"
10 .github/workflows/_test.yml vendored
@@ -26,7 +26,7 @@ jobs:
       - "3.11"
     name: Python ${{ matrix.python-version }}
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
 
       - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
@@ -44,14 +44,6 @@ jobs:
         shell: bash
         run: make test
 
-      - name: Install integration dependencies
-        shell: bash
-        run: poetry install --with=test_integration
-
-      - name: Check integration tests compile
-        shell: bash
-        run: poetry run pytest -m compile tests/integration_tests
-
       - name: Ensure the tests did not create any additional files
         shell: bash
         run: |
50 .github/workflows/_test_release.yml vendored Normal file
@@ -0,0 +1,50 @@
name: test-release

on:
  workflow_call:
    inputs:
      working-directory:
        required: true
        type: string
        description: "From which folder this pipeline executes"

env:
  POETRY_VERSION: "1.6.1"

jobs:
  publish_to_test_pypi:
    runs-on: ubuntu-latest
    permissions:
      # This permission is used for trusted publishing:
      # https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
      #
      # Trusted publishing has to also be configured on PyPI for each package:
      # https://docs.pypi.org/trusted-publishers/adding-a-publisher/
      id-token: write
    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: "3.10"
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: ${{ inputs.working-directory }}
          cache-key: release

      - name: Build project for distribution
        run: poetry build
      - name: Check Version
        id: check-version
        run: |
          echo version=$(poetry version --short) >> $GITHUB_OUTPUT
      - name: Publish package to TestPyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          repository-url: https://test.pypi.org/legacy/
          packages-dir: ${{ inputs.working-directory }}/dist/
          verbose: true
          print-hash: true
2 .github/workflows/codespell.yml vendored
@@ -17,7 +17,7 @@ jobs:
 
     steps:
       - name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
 
       - name: Install Dependencies
         run: |
21 .github/workflows/doc_lint.yml vendored
@@ -1,11 +1,17 @@
 ---
-name: Documentation Lint
+name: Docs, templates, cookbook lint
 
 on:
   push:
-    branches: [master]
+    branches: [ master ]
   pull_request:
-    branches: [master]
+    paths:
+      - 'docs/**'
+      - 'templates/**'
+      - 'cookbook/**'
+      - '.github/workflows/_lint.yml'
+      - '.github/workflows/doc_lint.yml'
   workflow_dispatch:
 
 jobs:
   check:
@@ -19,4 +25,11 @@ jobs:
       run: |
         # We should not encourage imports directly from main init file
         # Expect for hub
-        git grep 'from langchain import' docs/{docs,snippets} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
+        git grep 'from langchain import' {docs/docs,templates,cookbook} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
+
+  lint:
+    uses:
+      ./.github/workflows/_lint.yml
+    with:
+      working-directory: "."
+    secrets: inherit
10 .github/workflows/langchain_ci.yml vendored
@@ -12,6 +12,7 @@ on:
       - '.github/workflows/_test.yml'
       - '.github/workflows/_pydantic_compatibility.yml'
       - '.github/workflows/langchain_ci.yml'
+      - 'libs/*'
       - 'libs/langchain/**'
   workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
@@ -44,6 +45,13 @@ jobs:
       working-directory: libs/langchain
     secrets: inherit
 
+  compile-integration-tests:
+    uses:
+      ./.github/workflows/_compile_integration_test.yml
+    with:
+      working-directory: libs/langchain
+    secrets: inherit
+
   pydantic-compatibility:
     uses:
       ./.github/workflows/_pydantic_compatibility.yml
@@ -65,7 +73,7 @@ jobs:
       - "3.11"
     name: Python ${{ matrix.python-version }} extended tests
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
 
       - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
         uses: "./.github/actions/poetry_setup"
54 .github/workflows/langchain_cli_ci.yml vendored Normal file
@@ -0,0 +1,54 @@
---
name: libs/cli CI

on:
  push:
    branches: [ master ]
  pull_request:
    paths:
      - '.github/actions/poetry_setup/action.yml'
      - '.github/tools/**'
      - '.github/workflows/_lint.yml'
      - '.github/workflows/_test.yml'
      - '.github/workflows/_pydantic_compatibility.yml'
      - '.github/workflows/langchain_cli_ci.yml'
      - 'libs/cli/**'
      - 'libs/*'
  workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  POETRY_VERSION: "1.6.1"
  WORKDIR: "libs/cli"

jobs:
  lint:
    uses:
      ./.github/workflows/_lint.yml
    with:
      working-directory: libs/cli
      langchain-location: ../langchain
    secrets: inherit

  test:
    uses:
      ./.github/workflows/_test.yml
    with:
      working-directory: libs/cli
    secrets: inherit

  pydantic-compatibility:
    uses:
      ./.github/workflows/_pydantic_compatibility.yml
    with:
      working-directory: libs/cli
    secrets: inherit
13 .github/workflows/langchain_cli_release.yml vendored Normal file
@@ -0,0 +1,13 @@
---
name: libs/cli Release

on:
  workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

jobs:
  release:
    uses:
      ./.github/workflows/_release.yml
    with:
      working-directory: libs/cli
    secrets: inherit
14 .github/workflows/langchain_experimental_ci.yml vendored
@@ -11,7 +11,7 @@ on:
       - '.github/workflows/_lint.yml'
       - '.github/workflows/_test.yml'
       - '.github/workflows/langchain_experimental_ci.yml'
-      - 'libs/langchain/**'
+      - 'libs/*'
       - 'libs/experimental/**'
   workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
@@ -35,6 +35,7 @@ jobs:
       ./.github/workflows/_lint.yml
     with:
       working-directory: libs/experimental
+      langchain-location: ../langchain
     secrets: inherit
 
   test:
@@ -44,6 +45,13 @@ jobs:
       working-directory: libs/experimental
     secrets: inherit
 
+  compile-integration-tests:
+    uses:
+      ./.github/workflows/_compile_integration_test.yml
+    with:
+      working-directory: libs/experimental
+    secrets: inherit
+
 # It's possible that langchain-experimental works fine with the latest *published* langchain,
 # but is broken with the langchain on `master`.
 #
@@ -62,7 +70,7 @@ jobs:
       - "3.11"
     name: test with unpublished langchain - Python ${{ matrix.python-version }}
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
 
       - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
         uses: "./.github/actions/poetry_setup"
@@ -97,7 +105,7 @@ jobs:
       - "3.11"
     name: Python ${{ matrix.python-version }} extended tests
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
 
       - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
         uses: "./.github/actions/poetry_setup"
13 .github/workflows/langchain_experimental_test_release.yml vendored Normal file
@@ -0,0 +1,13 @@
---
name: Experimental Test Release

on:
  workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

jobs:
  release:
    uses:
      ./.github/workflows/_test_release.yml
    with:
      working-directory: libs/experimental
    secrets: inherit
13 .github/workflows/langchain_test_release.yml vendored Normal file
@@ -0,0 +1,13 @@
---
name: Test Release

on:
  workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

jobs:
  release:
    uses:
      ./.github/workflows/_test_release.yml
    with:
      working-directory: libs/langchain
    secrets: inherit
9 .github/workflows/scheduled_test.yml vendored
@@ -24,7 +24,7 @@ jobs:
       - "3.11"
     name: Python ${{ matrix.python-version }}
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
 
       - name: Set up Python ${{ matrix.python-version }}
         uses: "./.github/actions/poetry_setup"
@@ -55,6 +55,10 @@ jobs:
           poetry install --with=test_integration
           poetry run pip install google-cloud-aiplatform
           poetry run pip install "boto3>=1.28.57"
+          if [[ ${{ matrix.python-version }} != "3.8" ]]
+          then
+            poetry run pip install fireworks-ai
+          fi
 
       - name: Run tests
         shell: bash
@@ -64,7 +68,8 @@ jobs:
           AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
           AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
           AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
-          AZURE_OPENAI_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_DEPLOYMENT_NAME }}
+          AZURE_OPENAI_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_DEPLOYMENT_NAME }}
+          FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
         run: |
           make scheduled_tests
37 .github/workflows/templates_ci.yml vendored Normal file
@@ -0,0 +1,37 @@
---
name: templates CI

on:
  push:
    branches: [ master ]
  pull_request:
    paths:
      - '.github/actions/poetry_setup/action.yml'
      - '.github/tools/**'
      - '.github/workflows/_lint.yml'
      - '.github/workflows/templates_ci.yml'
      - 'templates/**'
  workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  POETRY_VERSION: "1.6.1"
  WORKDIR: "templates"

jobs:
  lint:
    uses:
      ./.github/workflows/_lint.yml
    with:
      working-directory: templates
      langchain-location: ../libs/langchain
    secrets: inherit
12 Makefile
@@ -37,6 +37,18 @@ spell_check:
 spell_fix:
 	poetry run codespell --toml pyproject.toml -w
 
+######################
+# LINTING AND FORMATTING
+######################
+
+lint:
+	poetry run ruff docs templates cookbook
+	poetry run black docs templates cookbook --check
+
+format format_diff:
+	poetry run black docs templates cookbook
+	poetry run ruff --select I --fix docs templates cookbook
+
 ######################
 # HELP
 ######################
@@ -93,7 +93,7 @@ Memory refers to persisting state between calls of a chain/agent. LangChain prov
 
 **🧐 Evaluation:**
 
-[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
+[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is by using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
 
 For more information on these concepts, please see our [full documentation](https://python.langchain.com).
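The hunk above only mentions that LangChain ships prompts/chains for LLM-assisted evaluation. A minimal sketch of the idea, assuming langchain's `load_evaluator` API and an `OPENAI_API_KEY` in the environment (neither is shown in the diff):

```python
# Hedged sketch: score an answer with an LLM "judge" instead of a string metric.
# `load_evaluator` and the "criteria" evaluator are assumptions about the
# langchain.evaluation API, not part of the diff above.
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("criteria", criteria="conciseness")
result = evaluator.evaluate_strings(
    prediction="Klay Thompson plays for the Golden State Warriors.",
    input="What team is Klay Thompson on?",
)
print(result["score"], result["reasoning"])  # the grading LLM explains its verdict
```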
@@ -60,22 +60,21 @@
 }
 ],
 "source": [
-"# Local \n",
+"# Local\n",
 "from langchain.chat_models import ChatOllama\n",
 "\n",
 "llama2_chat = ChatOllama(model=\"llama2:13b-chat\")\n",
 "llama2_code = ChatOllama(model=\"codellama:7b-instruct\")\n",
 "\n",
 "# API\n",
 "from getpass import getpass\n",
 "from langchain.llms import Replicate\n",
 "\n",
 "# REPLICATE_API_TOKEN = getpass()\n",
 "# os.environ[\"REPLICATE_API_TOKEN\"] = REPLICATE_API_TOKEN\n",
 "replicate_id = \"meta/llama-2-13b-chat:f4e2de70d66816a838a89eeeb621910adffb0dd0baba3976c96980970978018d\"\n",
 "llama2_chat_replicate = Replicate(\n",
-"    model=replicate_id,\n",
-"    input={\"temperature\": 0.01, \n",
-"           \"max_length\": 500, \n",
-"           \"top_p\": 1}\n",
+"    model=replicate_id, input={\"temperature\": 0.01, \"max_length\": 500, \"top_p\": 1}\n",
 ")"
 ]
 },
@@ -110,11 +109,14 @@
 "outputs": [],
 "source": [
 "from langchain.utilities import SQLDatabase\n",
-"db = SQLDatabase.from_uri(\"sqlite:///nba_roster.db\", sample_rows_in_table_info= 0)\n",
 "\n",
+"db = SQLDatabase.from_uri(\"sqlite:///nba_roster.db\", sample_rows_in_table_info=0)\n",
+"\n",
+"\n",
 "def get_schema(_):\n",
 "    return db.get_table_info()\n",
 "\n",
 "\n",
 "def run_query(query):\n",
 "    return db.run(query)"
 ]
@@ -149,26 +151,29 @@
 "source": [
 "# Prompt\n",
 "from langchain.prompts import ChatPromptTemplate\n",
 "\n",
 "template = \"\"\"Based on the table schema below, write a SQL query that would answer the user's question:\n",
 "{schema}\n",
 "\n",
 "Question: {question}\n",
 "SQL Query:\"\"\"\n",
-"prompt = ChatPromptTemplate.from_messages([\n",
-"    (\"system\", \"Given an input question, convert it to a SQL query. No pre-amble.\"),\n",
-"    (\"human\", template)\n",
-"])\n",
+"prompt = ChatPromptTemplate.from_messages(\n",
+"    [\n",
+"        (\"system\", \"Given an input question, convert it to a SQL query. No pre-amble.\"),\n",
+"        (\"human\", template),\n",
+"    ]\n",
+")\n",
 "\n",
 "# Chain to query\n",
 "from langchain.schema.output_parser import StrOutputParser\n",
 "from langchain.schema.runnable import RunnablePassthrough\n",
 "\n",
 "sql_response = (\n",
-"    RunnablePassthrough.assign(schema=get_schema)\n",
-"    | prompt\n",
-"    | llm.bind(stop=[\"\\nSQLResult:\"])\n",
-"    | StrOutputParser()\n",
-"    )\n",
+"    RunnablePassthrough.assign(schema=get_schema)\n",
+"    | prompt\n",
+"    | llm.bind(stop=[\"\\nSQLResult:\"])\n",
+"    | StrOutputParser()\n",
+")\n",
 "\n",
 "sql_response.invoke({\"question\": \"What team is Klay Thompson on?\"})"
 ]
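The notebook chain above leans on `RunnablePassthrough.assign`, which merges computed keys into the input dict so later steps see both the question and the schema. A self-contained sketch of that pattern, with a hypothetical stub in place of the real database and LLM:

```python
# Minimal LCEL sketch of the assign-then-pipe pattern used in the hunk above.
# `get_schema` here is a stub returning a fixed string, not the real database call.
from langchain.schema.runnable import RunnableLambda, RunnablePassthrough


def get_schema(_):
    return "CREATE TABLE nba_roster (NAME TEXT, TEAM TEXT)"  # stub schema


# .assign() adds {"schema": ...} alongside the original {"question": ...}.
chain = RunnablePassthrough.assign(schema=get_schema) | RunnableLambda(
    lambda x: f"Schema: {x['schema']} / Question: {x['question']}"
)
print(chain.invoke({"question": "What team is Klay Thompson on?"}))
```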
@@ -209,18 +214,23 @@
 "Question: {question}\n",
 "SQL Query: {query}\n",
 "SQL Response: {response}\"\"\"\n",
-"prompt_response = ChatPromptTemplate.from_messages([\n",
-"    (\"system\", \"Given an input question and SQL response, convert it to a natural langugae answer. No pre-amble.\"),\n",
-"    (\"human\", template)\n",
-"])\n",
+"prompt_response = ChatPromptTemplate.from_messages(\n",
+"    [\n",
+"        (\n",
+"            \"system\",\n",
+"            \"Given an input question and SQL response, convert it to a natural langugae answer. No pre-amble.\",\n",
+"        ),\n",
+"        (\"human\", template),\n",
+"    ]\n",
+")\n",
 "\n",
 "full_chain = (\n",
-"    RunnablePassthrough.assign(query=sql_response) \n",
+"    RunnablePassthrough.assign(query=sql_response)\n",
 "    | RunnablePassthrough.assign(\n",
 "        schema=get_schema,\n",
 "        response=lambda x: db.run(x[\"query\"]),\n",
 "    )\n",
-"    | prompt_response \n",
+"    | prompt_response\n",
 "    | llm\n",
 ")\n",
 "\n",
@@ -269,36 +279,42 @@
 "# Prompt\n",
 "from langchain.memory import ConversationBufferMemory\n",
 "from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
 "\n",
 "template = \"\"\"Based on the table schema below, write a SQL query that would answer the user's question:\n",
 "{schema}\n",
 "\n",
 "Question: {question}\n",
 "SQL Query:\"\"\"\n",
-"prompt = ChatPromptTemplate.from_messages([\n",
-"    (\"system\", \"Given an input question, convert it to a SQL query. No pre-amble.\"),\n",
-"    MessagesPlaceholder(variable_name=\"history\"),\n",
-"    (\"human\", template)\n",
-"])\n",
+"prompt = ChatPromptTemplate.from_messages(\n",
+"    [\n",
+"        (\"system\", \"Given an input question, convert it to a SQL query. No pre-amble.\"),\n",
+"        MessagesPlaceholder(variable_name=\"history\"),\n",
+"        (\"human\", template),\n",
+"    ]\n",
+")\n",
 "\n",
 "memory = ConversationBufferMemory(return_messages=True)\n",
 "\n",
-"# Chain to query with memory \n",
+"# Chain to query with memory\n",
 "from langchain.schema.runnable import RunnableLambda\n",
 "\n",
 "sql_chain = (\n",
 "    RunnablePassthrough.assign(\n",
-"        schema=get_schema,\n",
-"        history=RunnableLambda(lambda x: memory.load_memory_variables(x)[\"history\"])\n",
-"    )| prompt\n",
+"        schema=get_schema,\n",
+"        history=RunnableLambda(lambda x: memory.load_memory_variables(x)[\"history\"]),\n",
+"    )\n",
+"    | prompt\n",
 "    | llm.bind(stop=[\"\\nSQLResult:\"])\n",
 "    | StrOutputParser()\n",
 ")\n",
 "\n",
 "\n",
 "def save(input_output):\n",
 "    output = {\"output\": input_output.pop(\"output\")}\n",
 "    memory.save_context(input_output, output)\n",
-"    return output['output']\n",
-"    \n",
+"    return output[\"output\"]\n",
 "\n",
 "\n",
 "sql_response_memory = RunnablePassthrough.assign(output=sql_chain) | save\n",
 "sql_response_memory.invoke({\"question\": \"What team is Klay Thompson on?\"})"
 ]
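The `save` helper above threads `ConversationBufferMemory` through an LCEL chain by hand. A runnable sketch of the same pattern, with a hypothetical stub chain standing in for the SQL/LLM pipeline:

```python
# Hedged sketch of the memory-save pattern: assign the chain's output,
# persist the (input, output) pair, then return just the output string.
from langchain.memory import ConversationBufferMemory
from langchain.schema.runnable import RunnableLambda, RunnablePassthrough

memory = ConversationBufferMemory(return_messages=True)
stub_chain = RunnableLambda(lambda x: f"echo: {x['question']}")  # stands in for sql_chain


def save(input_output):
    output = {"output": input_output.pop("output")}
    memory.save_context(input_output, output)  # stores {"question": ...} -> {"output": ...}
    return output["output"]


chain_with_memory = RunnablePassthrough.assign(output=stub_chain) | save
print(chain_with_memory.invoke({"question": "What team is Klay Thompson on?"}))
print(memory.load_memory_variables({})["history"])  # prior turns, as messages
```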
@@ -349,18 +365,23 @@
 "Question: {question}\n",
 "SQL Query: {query}\n",
 "SQL Response: {response}\"\"\"\n",
-"prompt_response = ChatPromptTemplate.from_messages([\n",
-"    (\"system\", \"Given an input question and SQL response, convert it to a natural langugae answer. No pre-amble.\"),\n",
-"    (\"human\", template)\n",
-"])\n",
+"prompt_response = ChatPromptTemplate.from_messages(\n",
+"    [\n",
+"        (\n",
+"            \"system\",\n",
+"            \"Given an input question and SQL response, convert it to a natural langugae answer. No pre-amble.\",\n",
+"        ),\n",
+"        (\"human\", template),\n",
+"    ]\n",
+")\n",
 "\n",
 "full_chain = (\n",
-"    RunnablePassthrough.assign(query=sql_response_memory) \n",
+"    RunnablePassthrough.assign(query=sql_response_memory)\n",
 "    | RunnablePassthrough.assign(\n",
 "        schema=get_schema,\n",
 "        response=lambda x: db.run(x[\"query\"]),\n",
 "    )\n",
-"    | prompt_response \n",
+"    | prompt_response\n",
 "    | llm\n",
 ")\n",
 "\n",
@@ -4,7 +4,7 @@ Example code for building applications with LangChain, with an emphasis on more
 
 Notebook | Description
 :- | :-
-[LLaMA2_sql_chat.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/LLaMA2_sql_chat.ipynb) | Build a chat application that interacts with a sql database using an open source llm (llama2), specifically demonstrated on a sqlite database containing nba rosters.
+[LLaMA2_sql_chat.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/LLaMA2_sql_chat.ipynb) | Build a chat application that interacts with a SQL database using an open source llm (llama2), specifically demonstrated on an SQLite database containing rosters.
 [Semi_Structured_RAG.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_Structured_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data, including text and tables, using unstructured for parsing, multi-vector retriever for storing, and lcel for implementing chains.
 [Semi_structured_and_multi_moda...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using unstructured for parsing, multi-vector retriever for storage and retrieval, and lcel for implementing chains.
 [Semi_structured_multi_modal_RA...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using various tools and methods such as unstructured for parsing, multi-vector retriever for storing, lcel for implementing chains, and open source language models like llama2, llava, and gpt4all.
@@ -12,14 +12,14 @@ Notebook | Description
 [autogpt/marathon_times.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/marathon_times.ipynb) | Implement autogpt for finding winning marathon times.
 [baby_agi.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi.ipynb) | Implement babyagi, an ai agent that can generate and execute tasks based on a given objective, with the flexibility to swap out specific vectorstores/model providers.
 [baby_agi_with_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi_with_agent.ipynb) | Swap out the execution chain in the babyagi notebook with an agent that has access to tools, aiming to obtain more reliable information.
-[camel_role_playing.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/camel_role_playing.ipynb) | Implement the camel framework for creating autonomous cooperative agents in large scale language models, using role-playing and inception prompting to guide chat agents towards task completion.
+[camel_role_playing.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/camel_role_playing.ipynb) | Implement the camel framework for creating autonomous cooperative agents in large-scale language models, using role-playing and inception prompting to guide chat agents towards task completion.
 [causal_program_aided_language_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/causal_program_aided_language_model.ipynb) | Implement the causal program-aided language (cpal) chain, which improves upon the program-aided language (pal) by incorporating causal structure to prevent hallucination in language models, particularly when dealing with complex narratives and math problems with nested dependencies.
 [code-analysis-deeplake.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/code-analysis-deeplake.ipynb) | Analyze its own code base with the help of gpt and activeloop's deep lake.
 [custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval.ipynb) | Build a custom agent that can interact with ai plugins by retrieving tools and creating natural language wrappers around openapi endpoints.
 [custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb) | Build a custom agent with plugin retrieval functionality, utilizing ai plugins from the `plugnplai` directory.
 [databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to databricks runtimes and databricks sql.
 [deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using activeloop's deep lake with gpt4.
-[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl api.
+[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl API.
 [forward_looking_retrieval_augm...](https://github.com/langchain-ai/langchain/tree/master/cookbook/forward_looking_retrieval_augmented_generation.ipynb) | Implement the forward-looking active retrieval augmented generation (flare) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on these tokens, and retrieves relevant documents to continue generating the answer.
 [generative_agents_interactive_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb) | Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a langchain retriever.
 [gymnasium_agent_simulation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/gymnasium_agent_simulation.ipynb) | Create a simple agent-environment interaction loop in simulated environments like text-based games with gymnasium.
@@ -37,7 +37,7 @@ Notebook | Description
 [multiagent_authoritarian.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_authoritarian.ipynb) | Implement a multi-agent simulation where a privileged agent controls the conversation, including deciding who speaks and when the conversation ends, in the context of a simulated news network.
 [multiagent_bidding.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_bidding.ipynb) | Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example.
 [myscale_vector_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/myscale_vector_sql.ipynb) | Access and interact with the myscale integrated vector database, which can enhance the performance of language model (llm) applications.
-[openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question answering system by incorporating openai functions into a retrieval pipeline.
+[openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question-answering system by incorporating openai functions into a retrieval pipeline.
 [petting_zoo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/petting_zoo.ipynb) | Create multi-agent simulations with simulated environments using the petting zoo library.
 [plan_and_execute_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/plan_and_execute_agent.ipynb) | Create plan-and-execute agents that accomplish objectives by planning tasks with a language model (llm) and executing them with a separate agent.
 [press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
@@ -46,7 +46,7 @@ Notebook | Description
 [self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
 [smart_llm.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/smart_llm.ipynb) | Implement a smartllmchain, a self-critique chain that generates multiple output proposals, critiques them to find the best one, and then improves upon it to produce a final output.
 [tree_of_thought.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/tree_of_thought.ipynb) | Query a large language model using the tree of thought technique.
-[twitter-the-algorithm-analysis...](https://github.com/langchain-ai/langchain/tree/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) | Analyze the source code of the twitter algorithm with the help of gpt4 and activeloop's deep lake.
+[twitter-the-algorithm-analysis...](https://github.com/langchain-ai/langchain/tree/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) | Analyze the source code of the Twitter algorithm with the help of gpt4 and activeloop's deep lake.
 [two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
-[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialoguesimulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
+[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
 [wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
@@ -60,7 +60,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"! brew install tesseract \n",
+"! brew install tesseract\n",
 "! brew install poppler"
 ]
 },
@@ -108,21 +108,23 @@
 "from unstructured.partition.pdf import partition_pdf\n",
 "\n",
 "# Get elements\n",
-"raw_pdf_elements = partition_pdf(filename=path+\"LLaMA2.pdf\",\n",
-" # Unstructured first finds embedded image blocks\n",
-" extract_images_in_pdf=False,\n",
-" # Use layout model (YOLOX) to get bounding boxes (for tables) and find titles\n",
-" # Titles are any sub-section of the document \n",
-" infer_table_structure=True, \n",
-" # Post processing to aggregate text once we have the title \n",
-" chunking_strategy=\"by_title\",\n",
-" # Chunking params to aggregate text blocks\n",
-" # Attempt to create a new chunk 3800 chars\n",
-" # Attempt to keep chunks > 2000 chars \n",
-" max_characters=4000, \n",
-" new_after_n_chars=3800, \n",
-" combine_text_under_n_chars=2000,\n",
-" image_output_dir_path=path)"
+"raw_pdf_elements = partition_pdf(\n",
+"    filename=path + \"LLaMA2.pdf\",\n",
+"    # Unstructured first finds embedded image blocks\n",
+"    extract_images_in_pdf=False,\n",
+"    # Use layout model (YOLOX) to get bounding boxes (for tables) and find titles\n",
+"    # Titles are any sub-section of the document\n",
+"    infer_table_structure=True,\n",
+"    # Post processing to aggregate text once we have the title\n",
+"    chunking_strategy=\"by_title\",\n",
+"    # Chunking params to aggregate text blocks\n",
+"    # Attempt to create a new chunk 3800 chars\n",
+"    # Attempt to keep chunks > 2000 chars\n",
+"    max_characters=4000,\n",
+"    new_after_n_chars=3800,\n",
+"    combine_text_under_n_chars=2000,\n",
+"    image_output_dir_path=path,\n",
+")"
 ]
 },
 {
@@ -190,6 +192,7 @@
 "    type: str\n",
 "    text: Any\n",
 "\n",
+"\n",
 "# Categorize by type\n",
 "categorized_elements = []\n",
 "for element in raw_pdf_elements:\n",
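The hunk above cuts off inside the categorization loop. A hedged sketch of how such a loop typically continues, assuming the element wrapper class that the `type`/`text` fields above belong to (called `Element` here, a hypothetical name) and the string-based type check that `unstructured` element classes allow:

```python
# Hypothetical continuation: bucket each parsed element as a table or text chunk.
# The `Element` name and the unstructured class paths are assumptions, not
# shown in the diff above.
for element in raw_pdf_elements:
    if "unstructured.documents.elements.Table" in str(type(element)):
        categorized_elements.append(Element(type="table", text=str(element)))
    elif "unstructured.documents.elements.CompositeElement" in str(type(element)):
        categorized_elements.append(Element(type="text", text=str(element)))
```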
@@ -259,14 +262,14 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Prompt \n",
|
||||
"prompt_text=\"\"\"You are an assistant tasked with summarizing tables and text. \\ \n",
|
||||
"# Prompt\n",
|
||||
"prompt_text = \"\"\"You are an assistant tasked with summarizing tables and text. \\ \n",
|
||||
"Give a concise summary of the table or text. Table or text chunk: {element} \"\"\"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(prompt_text) \n",
|
||||
"prompt = ChatPromptTemplate.from_template(prompt_text)\n",
|
||||
"\n",
|
||||
"# Summary chain \n",
|
||||
"model = ChatOpenAI(temperature=0,model=\"gpt-4\")\n",
|
||||
"summarize_chain = {\"element\": lambda x:x} | prompt | model | StrOutputParser()"
|
||||
"# Summary chain\n",
|
||||
"model = ChatOpenAI(temperature=0, model=\"gpt-4\")\n",
|
||||
"summarize_chain = {\"element\": lambda x: x} | prompt | model | StrOutputParser()"
|
||||
]
|
||||
},
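One plausible way to run this chain over the categorized elements is `batch` with a concurrency cap; `texts` and `tables` here are the string lists produced by the categorization step, so this is a sketch of intended usage rather than part of the diff:

```python
# Hedged sketch: summarize all elements in parallel, capped at 5 concurrent calls.
text_summaries = summarize_chain.batch(texts, {"max_concurrency": 5})
table_summaries = summarize_chain.batch(tables, {"max_concurrency": 5})
```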
|
||||
{
|
||||
@@ -321,10 +324,7 @@
|
||||
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
|
||||
"\n",
|
||||
"# The vectorstore to use to index the child chunks\n",
|
||||
"vectorstore = Chroma(\n",
|
||||
" collection_name=\"summaries\",\n",
|
||||
" embedding_function=OpenAIEmbeddings()\n",
|
||||
")\n",
|
||||
"vectorstore = Chroma(collection_name=\"summaries\", embedding_function=OpenAIEmbeddings())\n",
|
||||
"\n",
|
||||
"# The storage layer for the parent documents\n",
|
||||
"store = InMemoryStore()\n",
|
||||
@@ -332,20 +332,26 @@
|
||||
"\n",
|
||||
"# The retriever (empty to start)\n",
|
||||
"retriever = MultiVectorRetriever(\n",
|
||||
" vectorstore=vectorstore, \n",
|
||||
" docstore=store, \n",
|
||||
" vectorstore=vectorstore,\n",
|
||||
" docstore=store,\n",
|
||||
" id_key=id_key,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Add texts\n",
|
||||
"doc_ids = [str(uuid.uuid4()) for _ in texts]\n",
|
||||
"summary_texts = [Document(page_content=s,metadata={id_key: doc_ids[i]}) for i, s in enumerate(text_summaries)]\n",
|
||||
"summary_texts = [\n",
|
||||
" Document(page_content=s, metadata={id_key: doc_ids[i]})\n",
|
||||
" for i, s in enumerate(text_summaries)\n",
|
||||
"]\n",
|
||||
"retriever.vectorstore.add_documents(summary_texts)\n",
|
||||
"retriever.docstore.mset(list(zip(doc_ids, texts)))\n",
|
||||
"\n",
|
||||
"# Add tables\n",
|
||||
"table_ids = [str(uuid.uuid4()) for _ in tables]\n",
|
||||
"summary_tables = [Document(page_content=s,metadata={id_key: table_ids[i]}) for i, s in enumerate(table_summaries)]\n",
|
||||
"summary_tables = [\n",
|
||||
" Document(page_content=s, metadata={id_key: table_ids[i]})\n",
|
||||
" for i, s in enumerate(table_summaries)\n",
|
||||
"]\n",
|
||||
"retriever.vectorstore.add_documents(summary_tables)\n",
|
||||
"retriever.docstore.mset(list(zip(table_ids, tables)))"
|
||||
]
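At query time the vector search runs over the short summaries, while the retriever hands back the parent text or table stored under the matching id (assuming `id_key = "doc_id"`, as elsewhere in the notebook). A small illustration of that round trip, with a hypothetical query string:

```python
# Hedged sketch: summaries are what get embedded; parents are what get returned.
hits = retriever.vectorstore.similarity_search("LLaMA2 benchmark results")
print(hits[0].metadata["doc_id"])  # id pointing at the parent chunk
docs = retriever.get_relevant_documents("LLaMA2 benchmark results")
print(str(docs[0])[:200])          # raw parent content, not the summary
```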
|
||||
@@ -378,13 +384,13 @@
|
||||
"prompt = ChatPromptTemplate.from_template(template)\n",
|
||||
"\n",
|
||||
"# LLM\n",
|
||||
"model = ChatOpenAI(temperature=0,model=\"gpt-4\")\n",
|
||||
"model = ChatOpenAI(temperature=0, model=\"gpt-4\")\n",
|
||||
"\n",
|
||||
"# RAG pipeline\n",
|
||||
"chain = (\n",
|
||||
" {\"context\": retriever, \"question\": RunnablePassthrough()} \n",
|
||||
" | prompt \n",
|
||||
" | model \n",
|
||||
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
|
||||
" | prompt\n",
|
||||
" | model\n",
|
||||
" | StrOutputParser()\n",
|
||||
")"
|
||||
]
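Invoking the assembled pipeline is then a single call; the question below is illustrative only:

```python
# Hedged sketch: the retriever fills {context}, GPT-4 answers from it.
print(chain.invoke("What do the evaluation tables report for LLaMA2?"))
```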
|
||||
|
||||
@@ -98,22 +98,24 @@
|
||||
"from unstructured.partition.pdf import partition_pdf\n",
|
||||
"\n",
|
||||
"# Get elements\n",
|
||||
"raw_pdf_elements = partition_pdf(filename=path+\"LLaVA.pdf\",\n",
|
||||
" # Using pdf format to find embedded image blocks\n",
|
||||
" extract_images_in_pdf=True,\n",
|
||||
" # Use layout model (YOLOX) to get bounding boxes (for tables) and find titles\n",
|
||||
" # Titles are any sub-section of the document \n",
|
||||
" infer_table_structure=True, \n",
|
||||
" # Post processing to aggregate text once we have the title \n",
|
||||
" chunking_strategy=\"by_title\",\n",
|
||||
" # Chunking params to aggregate text blocks\n",
|
||||
" # Attempt to create a new chunk 3800 chars\n",
|
||||
" # Attempt to keep chunks > 2000 chars \n",
|
||||
" # Hard max on chunks\n",
|
||||
" max_characters=4000, \n",
|
||||
" new_after_n_chars=3800, \n",
|
||||
" combine_text_under_n_chars=2000,\n",
|
||||
" image_output_dir_path=path)"
|
||||
"raw_pdf_elements = partition_pdf(\n",
|
||||
" filename=path + \"LLaVA.pdf\",\n",
|
||||
" # Using pdf format to find embedded image blocks\n",
|
||||
" extract_images_in_pdf=True,\n",
|
||||
" # Use layout model (YOLOX) to get bounding boxes (for tables) and find titles\n",
|
||||
" # Titles are any sub-section of the document\n",
|
||||
" infer_table_structure=True,\n",
|
||||
" # Post processing to aggregate text once we have the title\n",
|
||||
" chunking_strategy=\"by_title\",\n",
|
||||
" # Chunking params to aggregate text blocks\n",
|
||||
" # Attempt to create a new chunk 3800 chars\n",
|
||||
" # Attempt to keep chunks > 2000 chars\n",
|
||||
" # Hard max on chunks\n",
|
||||
" max_characters=4000,\n",
|
||||
" new_after_n_chars=3800,\n",
|
||||
" combine_text_under_n_chars=2000,\n",
|
||||
" image_output_dir_path=path,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -170,6 +172,7 @@
|
||||
" type: str\n",
|
||||
" text: Any\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Categorize by type\n",
|
||||
"categorized_elements = []\n",
|
||||
"for element in raw_pdf_elements:\n",
|
||||
@@ -220,14 +223,14 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Prompt \n",
|
||||
"prompt_text=\"\"\"You are an assistant tasked with summarizing tables and text. \\ \n",
|
||||
"# Prompt\n",
|
||||
"prompt_text = \"\"\"You are an assistant tasked with summarizing tables and text. \\ \n",
|
||||
"Give a concise summary of the table or text. Table or text chunk: {element} \"\"\"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(prompt_text) \n",
|
||||
"prompt = ChatPromptTemplate.from_template(prompt_text)\n",
|
||||
"\n",
|
||||
"# Summary chain \n",
|
||||
"model = ChatOpenAI(temperature=0,model=\"gpt-4\")\n",
|
||||
"summarize_chain = {\"element\": lambda x:x} | prompt | model | StrOutputParser()"
|
||||
"# Summary chain\n",
|
||||
"model = ChatOpenAI(temperature=0, model=\"gpt-4\")\n",
|
||||
"summarize_chain = {\"element\": lambda x: x} | prompt | model | StrOutputParser()"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -342,11 +345,11 @@
|
||||
"# Read each file and store its content in a list\n",
|
||||
"img_summaries = []\n",
|
||||
"for file_path in file_paths:\n",
|
||||
" with open(file_path, 'r') as file:\n",
|
||||
" with open(file_path, \"r\") as file:\n",
|
||||
" img_summaries.append(file.read())\n",
|
||||
"\n",
|
||||
"# Remove any logging prior to summary\n",
|
||||
"logging_header=\"clip_model_load: total allocated memory: 201.27 MB\\n\\n\"\n",
|
||||
"logging_header = \"clip_model_load: total allocated memory: 201.27 MB\\n\\n\"\n",
|
||||
"cleaned_img_summary = [s.split(logging_header, 1)[1].strip() for s in img_summaries]"
|
||||
]
|
||||
},
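The split assumes every summary file starts with that exact llama.cpp logging header; if a file lacks it, `split(...)[1]` raises an IndexError. A slightly more defensive variant (an assumption, not part of the committed change):

```python
# Hedged sketch: fall back to the whole string when the header is absent.
cleaned_img_summary = [
    s.split(logging_header, 1)[-1].strip()
    for s in img_summaries
]
```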
|
||||
@@ -375,10 +378,7 @@
|
||||
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
|
||||
"\n",
|
||||
"# The vectorstore to use to index the child chunks\n",
|
||||
"vectorstore = Chroma(\n",
|
||||
" collection_name=\"summaries\",\n",
|
||||
" embedding_function=OpenAIEmbeddings()\n",
|
||||
")\n",
|
||||
"vectorstore = Chroma(collection_name=\"summaries\", embedding_function=OpenAIEmbeddings())\n",
|
||||
"\n",
|
||||
"# The storage layer for the parent documents\n",
|
||||
"store = InMemoryStore()\n",
|
||||
@@ -386,20 +386,26 @@
|
||||
"\n",
|
||||
"# The retriever (empty to start)\n",
|
||||
"retriever = MultiVectorRetriever(\n",
|
||||
" vectorstore=vectorstore, \n",
|
||||
" docstore=store, \n",
|
||||
" vectorstore=vectorstore,\n",
|
||||
" docstore=store,\n",
|
||||
" id_key=id_key,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Add texts\n",
|
||||
"doc_ids = [str(uuid.uuid4()) for _ in texts]\n",
|
||||
"summary_texts = [Document(page_content=s,metadata={id_key: doc_ids[i]}) for i, s in enumerate(text_summaries)]\n",
|
||||
"summary_texts = [\n",
|
||||
" Document(page_content=s, metadata={id_key: doc_ids[i]})\n",
|
||||
" for i, s in enumerate(text_summaries)\n",
|
||||
"]\n",
|
||||
"retriever.vectorstore.add_documents(summary_texts)\n",
|
||||
"retriever.docstore.mset(list(zip(doc_ids, texts)))\n",
|
||||
"\n",
|
||||
"# Add tables\n",
|
||||
"table_ids = [str(uuid.uuid4()) for _ in tables]\n",
|
||||
"summary_tables = [Document(page_content=s,metadata={id_key: table_ids[i]}) for i, s in enumerate(table_summaries)]\n",
|
||||
"summary_tables = [\n",
|
||||
" Document(page_content=s, metadata={id_key: table_ids[i]})\n",
|
||||
" for i, s in enumerate(table_summaries)\n",
|
||||
"]\n",
|
||||
"retriever.vectorstore.add_documents(summary_tables)\n",
|
||||
"retriever.docstore.mset(list(zip(table_ids, tables)))"
|
||||
]
|
||||
@@ -423,9 +429,12 @@
|
||||
"source": [
|
||||
"# Add image summaries\n",
|
||||
"img_ids = [str(uuid.uuid4()) for _ in cleaned_img_summary]\n",
|
||||
"summary_img = [Document(page_content=s,metadata={id_key: img_ids[i]}) for i, s in enumerate(cleaned_img_summary)]\n",
|
||||
"summary_img = [\n",
|
||||
" Document(page_content=s, metadata={id_key: img_ids[i]})\n",
|
||||
" for i, s in enumerate(cleaned_img_summary)\n",
|
||||
"]\n",
|
||||
"retriever.vectorstore.add_documents(summary_img)\n",
|
||||
"retriever.docstore.mset(list(zip(img_ids, cleaned_img_summary))) "
|
||||
"retriever.docstore.mset(list(zip(img_ids, cleaned_img_summary)))"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -449,10 +458,19 @@
|
||||
"source": [
|
||||
"# Add images\n",
|
||||
"img_ids = [str(uuid.uuid4()) for _ in cleaned_img_summary]\n",
|
||||
"summary_img = [Document(page_content=s,metadata={id_key: img_ids[i]}) for i, s in enumerate(cleaned_img_summary)]\n",
|
||||
"summary_img = [\n",
|
||||
" Document(page_content=s, metadata={id_key: img_ids[i]})\n",
|
||||
" for i, s in enumerate(cleaned_img_summary)\n",
|
||||
"]\n",
|
||||
"retriever.vectorstore.add_documents(summary_img)\n",
|
||||
"### Fetch images\n",
|
||||
"retriever.docstore.mset(list(zip(img_ids, ### image ### ))) "
|
||||
"retriever.docstore.mset(\n",
|
||||
" list(\n",
|
||||
" zip(\n",
|
||||
" img_ids,\n",
|
||||
" )\n",
|
||||
" )\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -542,7 +560,9 @@
|
||||
],
|
||||
"source": [
|
||||
"# We can retrieve this table\n",
|
||||
"retriever.get_relevant_documents(\"What are results for LLaMA across across domains / subjects?\")[1]"
|
||||
"retriever.get_relevant_documents(\n",
|
||||
" \"What are results for LLaMA across across domains / subjects?\"\n",
|
||||
")[1]"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -592,7 +612,9 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"retriever.get_relevant_documents(\"Images / figures with playful and creative examples\")[1]"
|
||||
"retriever.get_relevant_documents(\"Images / figures with playful and creative examples\")[\n",
|
||||
" 1\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -633,15 +655,15 @@
|
||||
"prompt = ChatPromptTemplate.from_template(template)\n",
|
||||
"\n",
|
||||
"# Option 1: LLM\n",
|
||||
"model = ChatOpenAI(temperature=0,model=\"gpt-4\")\n",
|
||||
"model = ChatOpenAI(temperature=0, model=\"gpt-4\")\n",
|
||||
"# Option 2: Multi-modal LLM\n",
|
||||
"# model = GPT4-V or LLaVA\n",
|
||||
"\n",
|
||||
"# RAG pipeline\n",
|
||||
"chain = (\n",
|
||||
" {\"context\": retriever, \"question\": RunnablePassthrough()} \n",
|
||||
" | prompt \n",
|
||||
" | model \n",
|
||||
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
|
||||
" | prompt\n",
|
||||
" | model\n",
|
||||
" | StrOutputParser()\n",
|
||||
")"
|
||||
]
|
||||
@@ -664,7 +686,9 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke(\"What is the performance of LLaVa across across multiple image domains / subjects?\")"
|
||||
"chain.invoke(\n",
|
||||
" \"What is the performance of LLaVa across across multiple image domains / subjects?\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -713,7 +737,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.16"
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -92,22 +92,24 @@
|
||||
"path = \"/Users/rlm/Desktop/Papers/LLaVA/\"\n",
|
||||
"\n",
|
||||
"# Get elements\n",
|
||||
"raw_pdf_elements = partition_pdf(filename=path+\"LLaVA.pdf\",\n",
|
||||
" # Using pdf format to find embedded image blocks\n",
|
||||
" extract_images_in_pdf=True,\n",
|
||||
" # Use layout model (YOLOX) to get bounding boxes (for tables) and find titles\n",
|
||||
" # Titles are any sub-section of the document \n",
|
||||
" infer_table_structure=True, \n",
|
||||
" # Post processing to aggregate text once we have the title \n",
|
||||
" chunking_strategy=\"by_title\",\n",
|
||||
" # Chunking params to aggregate text blocks\n",
|
||||
" # Attempt to create a new chunk 3800 chars\n",
|
||||
" # Attempt to keep chunks > 2000 chars \n",
|
||||
" # Hard max on chunks\n",
|
||||
" max_characters=4000, \n",
|
||||
" new_after_n_chars=3800, \n",
|
||||
" combine_text_under_n_chars=2000,\n",
|
||||
" image_output_dir_path=path)"
|
||||
"raw_pdf_elements = partition_pdf(\n",
|
||||
" filename=path + \"LLaVA.pdf\",\n",
|
||||
" # Using pdf format to find embedded image blocks\n",
|
||||
" extract_images_in_pdf=True,\n",
|
||||
" # Use layout model (YOLOX) to get bounding boxes (for tables) and find titles\n",
|
||||
" # Titles are any sub-section of the document\n",
|
||||
" infer_table_structure=True,\n",
|
||||
" # Post processing to aggregate text once we have the title\n",
|
||||
" chunking_strategy=\"by_title\",\n",
|
||||
" # Chunking params to aggregate text blocks\n",
|
||||
" # Attempt to create a new chunk 3800 chars\n",
|
||||
" # Attempt to keep chunks > 2000 chars\n",
|
||||
" # Hard max on chunks\n",
|
||||
" max_characters=4000,\n",
|
||||
" new_after_n_chars=3800,\n",
|
||||
" combine_text_under_n_chars=2000,\n",
|
||||
" image_output_dir_path=path,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -165,6 +167,7 @@
|
||||
" type: str\n",
|
||||
" text: Any\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Categorize by type\n",
|
||||
"categorized_elements = []\n",
|
||||
"for element in raw_pdf_elements:\n",
|
||||
@@ -219,14 +222,14 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Prompt \n",
|
||||
"prompt_text=\"\"\"You are an assistant tasked with summarizing tables and text. \\ \n",
|
||||
"# Prompt\n",
|
||||
"prompt_text = \"\"\"You are an assistant tasked with summarizing tables and text. \\ \n",
|
||||
"Give a concise summary of the table or text. Table or text chunk: {element} \"\"\"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(prompt_text) \n",
|
||||
"prompt = ChatPromptTemplate.from_template(prompt_text)\n",
|
||||
"\n",
|
||||
"# Summary chain \n",
|
||||
"# Summary chain\n",
|
||||
"model = ChatOllama(model=\"llama2:13b-chat\")\n",
|
||||
"summarize_chain = {\"element\": lambda x:x} | prompt | model | StrOutputParser()"
|
||||
"summarize_chain = {\"element\": lambda x: x} | prompt | model | StrOutputParser()"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -327,11 +330,14 @@
|
||||
"# Read each file and store its content in a list\n",
|
||||
"img_summaries = []\n",
|
||||
"for file_path in file_paths:\n",
|
||||
" with open(file_path, 'r') as file:\n",
|
||||
" with open(file_path, \"r\") as file:\n",
|
||||
" img_summaries.append(file.read())\n",
|
||||
"\n",
|
||||
"# Clean up residual logging\n",
|
||||
"cleaned_img_summary = [s.split(\"clip_model_load: total allocated memory: 201.27 MB\\n\\n\", 1)[1].strip() for s in img_summaries]"
|
||||
"cleaned_img_summary = [\n",
|
||||
" s.split(\"clip_model_load: total allocated memory: 201.27 MB\\n\\n\", 1)[1].strip()\n",
|
||||
" for s in img_summaries\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -377,18 +383,17 @@
|
||||
"\n",
|
||||
"# The vectorstore to use to index the child chunks\n",
|
||||
"vectorstore = Chroma(\n",
|
||||
" collection_name=\"summaries\",\n",
|
||||
" embedding_function=GPT4AllEmbeddings()\n",
|
||||
" collection_name=\"summaries\", embedding_function=GPT4AllEmbeddings()\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# The storage layer for the parent documents\n",
|
||||
"store = InMemoryStore() # <- Can we extend this to images \n",
|
||||
"store = InMemoryStore() # <- Can we extend this to images\n",
|
||||
"id_key = \"doc_id\"\n",
|
||||
"\n",
|
||||
"# The retriever (empty to start)\n",
|
||||
"retriever = MultiVectorRetriever(\n",
|
||||
" vectorstore=vectorstore, \n",
|
||||
" docstore=store, \n",
|
||||
" vectorstore=vectorstore,\n",
|
||||
" docstore=store,\n",
|
||||
" id_key=id_key,\n",
|
||||
")"
|
||||
]
|
||||
@@ -412,21 +417,32 @@
|
||||
"source": [
|
||||
"# Add texts\n",
|
||||
"doc_ids = [str(uuid.uuid4()) for _ in texts]\n",
|
||||
"summary_texts = [Document(page_content=s,metadata={id_key: doc_ids[i]}) for i, s in enumerate(text_summaries)]\n",
|
||||
"summary_texts = [\n",
|
||||
" Document(page_content=s, metadata={id_key: doc_ids[i]})\n",
|
||||
" for i, s in enumerate(text_summaries)\n",
|
||||
"]\n",
|
||||
"retriever.vectorstore.add_documents(summary_texts)\n",
|
||||
"retriever.docstore.mset(list(zip(doc_ids, texts)))\n",
|
||||
"\n",
|
||||
"# Add tables\n",
|
||||
"table_ids = [str(uuid.uuid4()) for _ in tables]\n",
|
||||
"summary_tables = [Document(page_content=s,metadata={id_key: table_ids[i]}) for i, s in enumerate(table_summaries)]\n",
|
||||
"summary_tables = [\n",
|
||||
" Document(page_content=s, metadata={id_key: table_ids[i]})\n",
|
||||
" for i, s in enumerate(table_summaries)\n",
|
||||
"]\n",
|
||||
"retriever.vectorstore.add_documents(summary_tables)\n",
|
||||
"retriever.docstore.mset(list(zip(table_ids, tables)))\n",
|
||||
"\n",
|
||||
"# Add images\n",
|
||||
"img_ids = [str(uuid.uuid4()) for _ in cleaned_img_summary]\n",
|
||||
"summary_img = [Document(page_content=s,metadata={id_key: img_ids[i]}) for i, s in enumerate(cleaned_img_summary)]\n",
|
||||
"summary_img = [\n",
|
||||
" Document(page_content=s, metadata={id_key: img_ids[i]})\n",
|
||||
" for i, s in enumerate(cleaned_img_summary)\n",
|
||||
"]\n",
|
||||
"retriever.vectorstore.add_documents(summary_img)\n",
|
||||
"retriever.docstore.mset(list(zip(img_ids, cleaned_img_summary))) # Store the image summary as the raw document"
|
||||
"retriever.docstore.mset(\n",
|
||||
" list(zip(img_ids, cleaned_img_summary))\n",
|
||||
") # Store the image summary as the raw document"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -484,7 +500,9 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"retriever.get_relevant_documents(\"Images / figures with playful and creative examples\")[0]"
|
||||
"retriever.get_relevant_documents(\"Images / figures with playful and creative examples\")[\n",
|
||||
" 0\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -530,9 +548,9 @@
|
||||
"\n",
|
||||
"# RAG pipeline\n",
|
||||
"chain = (\n",
|
||||
" {\"context\": retriever, \"question\": RunnablePassthrough()} \n",
|
||||
" | prompt \n",
|
||||
" | model \n",
|
||||
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
|
||||
" | prompt\n",
|
||||
" | model\n",
|
||||
" | StrOutputParser()\n",
|
||||
")"
|
||||
]
|
||||
@@ -555,7 +573,9 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke(\"What is the performance of LLaVa across across multiple image domains / subjects?\")"
|
||||
"chain.invoke(\n",
|
||||
" \"What is the performance of LLaVa across across multiple image domains / subjects?\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -584,7 +604,9 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke(\"Explain any images / figures in the paper with playful and creative examples.\")"
|
||||
"chain.invoke(\n",
|
||||
" \"Explain any images / figures in the paper with playful and creative examples.\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -837,7 +837,9 @@
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.chains import ConversationalRetrievalChain\n",
|
||||
"\n",
|
||||
"model = ChatOpenAI(model_name=\"gpt-3.5-turbo-0613\") # 'ada' 'gpt-3.5-turbo-0613' 'gpt-4',\n",
|
||||
"model = ChatOpenAI(\n",
|
||||
" model_name=\"gpt-3.5-turbo-0613\"\n",
|
||||
") # 'ada' 'gpt-3.5-turbo-0613' 'gpt-4',\n",
|
||||
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
|
||||
]
|
||||
},
|
||||
|
||||
@@ -77,6 +77,7 @@
|
||||
"source": [
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"from langchain_experimental.autonomous_agents import HuggingGPT\n",
|
||||
"\n",
|
||||
"# %env OPENAI_API_BASE=http://localhost:8000/v1"
|
||||
]
|
||||
},
|
||||
|
||||
@@ -50,6 +50,7 @@
|
||||
"# pick and configure the LLM of your choice\n",
|
||||
"\n",
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"\n",
|
||||
"llm = OpenAI(model=\"text-davinci-003\")"
|
||||
]
|
||||
},
|
||||
@@ -85,8 +86,8 @@
|
||||
"\"\"\"\n",
|
||||
"\n",
|
||||
"PROMPT = PromptTemplate(\n",
|
||||
" input_variables=[\"meal\", \"text_to_personalize\", \"user\", \"preference\"], \n",
|
||||
" template=PROMPT_TEMPLATE\n",
|
||||
" input_variables=[\"meal\", \"text_to_personalize\", \"user\", \"preference\"],\n",
|
||||
" template=PROMPT_TEMPLATE,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -105,7 +106,7 @@
|
||||
"source": [
|
||||
"import langchain_experimental.rl_chain as rl_chain\n",
|
||||
"\n",
|
||||
"chain = rl_chain.PickBest.from_llm(llm=llm, prompt=PROMPT)\n"
|
||||
"chain = rl_chain.PickBest.from_llm(llm=llm, prompt=PROMPT)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -122,10 +123,10 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"response = chain.run(\n",
|
||||
" meal = rl_chain.ToSelectFrom(meals),\n",
|
||||
" user = rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference = rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize = \"This is the weeks specialty dish, our master chefs \\\n",
|
||||
" meal=rl_chain.ToSelectFrom(meals),\n",
|
||||
" user=rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference=rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize=\"This is the weeks specialty dish, our master chefs \\\n",
|
||||
" believe you will love it!\",\n",
|
||||
")"
|
||||
]
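By default the chain scores the selection itself immediately after the call. If real feedback (clicks, ratings) only arrives later, `rl_chain` also supports delayed scoring; the call below is a sketch, and the exact API should be checked against the current `langchain_experimental` release:

```python
# Hedged sketch: defer scoring until feedback exists.
# Typically the chain is constructed with selection_scorer=None for this flow.
chain.update_with_delayed_score(score=1.0, chain_response=response)
```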
|
||||
@@ -193,10 +194,10 @@
|
||||
"for _ in range(5):\n",
|
||||
" try:\n",
|
||||
" response = chain.run(\n",
|
||||
" meal = rl_chain.ToSelectFrom(meals),\n",
|
||||
" user = rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference = rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize = \"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" meal=rl_chain.ToSelectFrom(meals),\n",
|
||||
" user=rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference=rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize=\"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" )\n",
|
||||
" except Exception as e:\n",
|
||||
" print(e)\n",
|
||||
@@ -223,12 +224,16 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"scoring_criteria_template = \"Given {preference} rank how good or bad this selection is {meal}\"\n",
|
||||
"scoring_criteria_template = (\n",
|
||||
" \"Given {preference} rank how good or bad this selection is {meal}\"\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"chain = rl_chain.PickBest.from_llm(\n",
|
||||
" llm=llm,\n",
|
||||
" prompt=PROMPT,\n",
|
||||
" selection_scorer=rl_chain.AutoSelectionScorer(llm=llm, scoring_criteria_template_str=scoring_criteria_template),\n",
|
||||
" selection_scorer=rl_chain.AutoSelectionScorer(\n",
|
||||
" llm=llm, scoring_criteria_template_str=scoring_criteria_template\n",
|
||||
" ),\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -255,14 +260,16 @@
|
||||
],
|
||||
"source": [
|
||||
"response = chain.run(\n",
|
||||
" meal = rl_chain.ToSelectFrom(meals),\n",
|
||||
" user = rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference = rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize = \"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" meal=rl_chain.ToSelectFrom(meals),\n",
|
||||
" user=rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference=rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize=\"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
")\n",
|
||||
"print(response[\"response\"])\n",
|
||||
"selection_metadata = response[\"selection_metadata\"]\n",
|
||||
"print(f\"selected index: {selection_metadata.selected.index}, score: {selection_metadata.selected.score}\")"
|
||||
"print(\n",
|
||||
" f\"selected index: {selection_metadata.selected.index}, score: {selection_metadata.selected.score}\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -280,8 +287,8 @@
|
||||
"source": [
|
||||
"class CustomSelectionScorer(rl_chain.SelectionScorer):\n",
|
||||
" def score_response(\n",
|
||||
" self, inputs, llm_response: str, event: rl_chain.PickBestEvent) -> float:\n",
|
||||
"\n",
|
||||
" self, inputs, llm_response: str, event: rl_chain.PickBestEvent\n",
|
||||
" ) -> float:\n",
|
||||
" print(event.based_on)\n",
|
||||
" print(event.to_select_from)\n",
|
||||
"\n",
|
||||
@@ -336,10 +343,10 @@
|
||||
],
|
||||
"source": [
|
||||
"response = chain.run(\n",
|
||||
" meal = rl_chain.ToSelectFrom(meals),\n",
|
||||
" user = rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference = rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize = \"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" meal=rl_chain.ToSelectFrom(meals),\n",
|
||||
" user=rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference=rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize=\"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -370,9 +377,10 @@
|
||||
" return 1.0\n",
|
||||
" else:\n",
|
||||
" return 0.0\n",
|
||||
" def score_response(\n",
|
||||
" self, inputs, llm_response: str, event: rl_chain.PickBestEvent) -> float:\n",
|
||||
"\n",
|
||||
" def score_response(\n",
|
||||
" self, inputs, llm_response: str, event: rl_chain.PickBestEvent\n",
|
||||
" ) -> float:\n",
|
||||
" selected_meal = event.to_select_from[\"meal\"][event.selected.index]\n",
|
||||
"\n",
|
||||
" if \"Tom\" in event.based_on[\"user\"]:\n",
|
||||
@@ -394,7 +402,7 @@
|
||||
" prompt=PROMPT,\n",
|
||||
" selection_scorer=CustomSelectionScorer(),\n",
|
||||
" metrics_step=5,\n",
|
||||
" metrics_window_size=5, # rolling window average\n",
|
||||
" metrics_window_size=5, # rolling window average\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"random_chain = rl_chain.PickBest.from_llm(\n",
|
||||
@@ -402,8 +410,8 @@
|
||||
" prompt=PROMPT,\n",
|
||||
" selection_scorer=CustomSelectionScorer(),\n",
|
||||
" metrics_step=5,\n",
|
||||
" metrics_window_size=5, # rolling window average\n",
|
||||
" policy=rl_chain.PickBestRandomPolicy # set the random policy instead of default\n",
|
||||
" metrics_window_size=5, # rolling window average\n",
|
||||
" policy=rl_chain.PickBestRandomPolicy, # set the random policy instead of default\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -416,29 +424,29 @@
|
||||
"for _ in range(20):\n",
|
||||
" try:\n",
|
||||
" chain.run(\n",
|
||||
" meal = rl_chain.ToSelectFrom(meals),\n",
|
||||
" user = rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference = rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize = \"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" meal=rl_chain.ToSelectFrom(meals),\n",
|
||||
" user=rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference=rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize=\"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" )\n",
|
||||
" random_chain.run(\n",
|
||||
" meal = rl_chain.ToSelectFrom(meals),\n",
|
||||
" user = rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference = rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize = \"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" meal=rl_chain.ToSelectFrom(meals),\n",
|
||||
" user=rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference=rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize=\"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" )\n",
|
||||
" \n",
|
||||
"\n",
|
||||
" chain.run(\n",
|
||||
" meal = rl_chain.ToSelectFrom(meals),\n",
|
||||
" user = rl_chain.BasedOn(\"Anna\"),\n",
|
||||
" preference = rl_chain.BasedOn([\"Loves meat\", \"especially beef\"]),\n",
|
||||
" text_to_personalize = \"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" meal=rl_chain.ToSelectFrom(meals),\n",
|
||||
" user=rl_chain.BasedOn(\"Anna\"),\n",
|
||||
" preference=rl_chain.BasedOn([\"Loves meat\", \"especially beef\"]),\n",
|
||||
" text_to_personalize=\"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" )\n",
|
||||
" random_chain.run(\n",
|
||||
" meal = rl_chain.ToSelectFrom(meals),\n",
|
||||
" user = rl_chain.BasedOn(\"Anna\"),\n",
|
||||
" preference = rl_chain.BasedOn([\"Loves meat\", \"especially beef\"]),\n",
|
||||
" text_to_personalize = \"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" meal=rl_chain.ToSelectFrom(meals),\n",
|
||||
" user=rl_chain.BasedOn(\"Anna\"),\n",
|
||||
" preference=rl_chain.BasedOn([\"Loves meat\", \"especially beef\"]),\n",
|
||||
" text_to_personalize=\"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" )\n",
|
||||
" except Exception as e:\n",
|
||||
" print(e)"
|
||||
@@ -477,12 +485,17 @@
|
||||
],
|
||||
"source": [
|
||||
"from matplotlib import pyplot as plt\n",
|
||||
"chain.metrics.to_pandas()['score'].plot(label=\"default learning policy\")\n",
|
||||
"random_chain.metrics.to_pandas()['score'].plot(label=\"random selection policy\")\n",
|
||||
"\n",
|
||||
"chain.metrics.to_pandas()[\"score\"].plot(label=\"default learning policy\")\n",
|
||||
"random_chain.metrics.to_pandas()[\"score\"].plot(label=\"random selection policy\")\n",
|
||||
"plt.legend()\n",
|
||||
"\n",
|
||||
"print(f\"The final average score for the default policy, calculated over a rolling window, is: {chain.metrics.to_pandas()['score'].iloc[-1]}\")\n",
|
||||
"print(f\"The final average score for the random policy, calculated over a rolling window, is: {random_chain.metrics.to_pandas()['score'].iloc[-1]}\")"
|
||||
"print(\n",
|
||||
" f\"The final average score for the default policy, calculated over a rolling window, is: {chain.metrics.to_pandas()['score'].iloc[-1]}\"\n",
|
||||
")\n",
|
||||
"print(\n",
|
||||
" f\"The final average score for the random policy, calculated over a rolling window, is: {random_chain.metrics.to_pandas()['score'].iloc[-1]}\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -803,10 +816,10 @@
|
||||
")\n",
|
||||
"\n",
|
||||
"chain.run(\n",
|
||||
" meal = rl_chain.ToSelectFrom(meals),\n",
|
||||
" user = rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference = rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize = \"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
" meal=rl_chain.ToSelectFrom(meals),\n",
|
||||
" user=rl_chain.BasedOn(\"Tom\"),\n",
|
||||
" preference=rl_chain.BasedOn([\"Vegetarian\", \"regular dairy is ok\"]),\n",
|
||||
" text_to_personalize=\"This is the weeks specialty dish, our master chefs believe you will love it!\",\n",
|
||||
")"
|
||||
]
|
||||
}
|
||||
|
||||
@@ -27,11 +27,12 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"\n",
|
||||
"from os import environ\n",
|
||||
"import getpass\n",
|
||||
"from typing import Dict, Any\n",
|
||||
"from langchain.llms import OpenAI\nfrom langchain.utilities import SQLDatabase\nfrom langchain.chains import LLMChain\n",
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"from langchain.utilities import SQLDatabase\n",
|
||||
"from langchain.chains import LLMChain\n",
|
||||
"from langchain_experimental.sql.vector_sql import VectorSQLDatabaseChain\n",
|
||||
"from sqlalchemy import create_engine, Column, MetaData\n",
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
@@ -76,7 +77,6 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"\n",
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"from langchain.callbacks import StdOutCallbackHandler\n",
|
||||
"\n",
|
||||
@@ -124,8 +124,9 @@
|
||||
"from langchain.chains.qa_with_sources.retrieval import RetrievalQAWithSourcesChain\n",
|
||||
"\n",
|
||||
"from langchain_experimental.sql.vector_sql import VectorSQLDatabaseChain\n",
|
||||
"from langchain_experimental.retrievers.vector_sql_database \\\n",
|
||||
" import VectorSQLDatabaseChainRetriever\n",
|
||||
"from langchain_experimental.retrievers.vector_sql_database import (\n",
|
||||
" VectorSQLDatabaseChainRetriever,\n",
|
||||
")\n",
|
||||
"from langchain_experimental.sql.prompt import MYSCALE_PROMPT\n",
|
||||
"from langchain_experimental.sql.vector_sql import VectorSQLRetrieveAllOutputParser\n",
|
||||
"\n",
|
||||
@@ -144,7 +145,9 @@
|
||||
")\n",
|
||||
"\n",
|
||||
"# You need all those keys to get docs\n",
|
||||
"retriever = VectorSQLDatabaseChainRetriever(sql_db_chain=chain, page_content_key=\"abstract\")\n",
|
||||
"retriever = VectorSQLDatabaseChainRetriever(\n",
|
||||
" sql_db_chain=chain, page_content_key=\"abstract\"\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"document_with_metadata_prompt = PromptTemplate(\n",
|
||||
" input_variables=[\"page_content\", \"id\", \"title\", \"authors\", \"pubdate\", \"categories\"],\n",
|
||||
@@ -162,8 +165,10 @@
|
||||
" },\n",
|
||||
" return_source_documents=True,\n",
|
||||
")\n",
|
||||
"ans = chain(\"Please give me 10 papers to ask what is PageRank?\",\n",
|
||||
" callbacks=[StdOutCallbackHandler()])\n",
|
||||
"ans = chain(\n",
|
||||
" \"Please give me 10 papers to ask what is PageRank?\",\n",
|
||||
" callbacks=[StdOutCallbackHandler()],\n",
|
||||
")\n",
|
||||
"print(ans[\"answer\"])"
|
||||
]
|
||||
},
|
||||
|
||||
@@ -34,7 +34,11 @@
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"from langchain.utilities import DuckDuckGoSearchAPIWrapper\n",
|
||||
"from langchain_experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner"
|
||||
"from langchain_experimental.plan_and_execute import (\n",
|
||||
" PlanAndExecute,\n",
|
||||
" load_agent_executor,\n",
|
||||
" load_chat_planner,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -56,16 +60,16 @@
|
||||
"llm = OpenAI(temperature=0)\n",
|
||||
"llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)\n",
|
||||
"tools = [\n",
|
||||
" Tool(\n",
|
||||
" name=\"Search\",\n",
|
||||
" func=search.run,\n",
|
||||
" description=\"useful for when you need to answer questions about current events\"\n",
|
||||
" ),\n",
|
||||
" Tool(\n",
|
||||
" name=\"Calculator\",\n",
|
||||
" func=llm_math_chain.run,\n",
|
||||
" description=\"useful for when you need to answer questions about math\"\n",
|
||||
" ),\n",
|
||||
" Tool(\n",
|
||||
" name=\"Search\",\n",
|
||||
" func=search.run,\n",
|
||||
" description=\"useful for when you need to answer questions about current events\",\n",
|
||||
" ),\n",
|
||||
" Tool(\n",
|
||||
" name=\"Calculator\",\n",
|
||||
" func=llm_math_chain.run,\n",
|
||||
" description=\"useful for when you need to answer questions about math\",\n",
|
||||
" ),\n",
|
||||
"]"
|
||||
]
|
||||
},
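With the tools defined, the remaining wiring follows the standard plan-and-execute pattern; the model choice below is an assumption, and the three constructors are the ones imported earlier:

```python
# Hedged sketch: assemble the planner, the executor, and the combined agent.
model = ChatOpenAI(temperature=0)
planner = load_chat_planner(model)
executor = load_agent_executor(model, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
```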
|
||||
@@ -216,7 +220,9 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent.run(\"Who is the current prime minister of the UK? What is their current age raised to the 0.43 power?\")"
|
||||
"agent.run(\n",
|
||||
" \"Who is the current prime minister of the UK? What is their current age raised to the 0.43 power?\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -55,6 +55,7 @@
|
||||
"source": [
|
||||
"# Setup API keys for Kay and OpenAI\n",
|
||||
"from getpass import getpass\n",
|
||||
"\n",
|
||||
"KAY_API_KEY = getpass()\n",
|
||||
"OPENAI_API_KEY = getpass()"
|
||||
]
|
||||
@@ -67,6 +68,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"KAY_API_KEY\"] = KAY_API_KEY\n",
|
||||
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY"
|
||||
]
|
||||
@@ -83,7 +85,9 @@
|
||||
"from langchain.retrievers import KayAiRetriever\n",
|
||||
"\n",
|
||||
"model = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\n",
|
||||
"retriever = KayAiRetriever.create(dataset_id=\"company\", data_types=[\"PressRelease\"], num_contexts=6)\n",
|
||||
"retriever = KayAiRetriever.create(\n",
|
||||
" dataset_id=\"company\", data_types=[\"PressRelease\"], num_contexts=6\n",
|
||||
")\n",
|
||||
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
|
||||
]
|
||||
},
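A single conversational turn then passes the question plus the running history; the question text is illustrative:

```python
# Hedged sketch: one turn against the Kay.ai press-release retriever.
chat_history = []
question = "How is the healthcare industry adopting generative AI tools?"
result = qa({"question": question, "chat_history": chat_history})
print(result["answer"])
```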
|
||||
@@ -116,7 +120,7 @@
|
||||
"# More sample questions in the Playground on https://kay.ai\n",
|
||||
"questions = [\n",
|
||||
" \"How is the healthcare industry adopting generative AI tools?\",\n",
|
||||
" #\"What are some recent challenges faced by the renewable energy sector?\",\n",
|
||||
" # \"What are some recent challenges faced by the renewable energy sector?\",\n",
|
||||
"]\n",
|
||||
"chat_history = []\n",
|
||||
"\n",
|
||||
|
||||
272
cookbook/rag_fusion.ipynb
Normal file
@@ -0,0 +1,272 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "993c2768",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# RAG Fusion\n",
|
||||
"\n",
|
||||
"Re-implemented from [this GitHub repo](https://github.com/Raudaschl/rag-fusion), all credit to original author\n",
|
||||
"\n",
|
||||
"> RAG-Fusion, a search methodology that aims to bridge the gap between traditional search paradigms and the multifaceted dimensions of human queries. Inspired by the capabilities of Retrieval Augmented Generation (RAG), this project goes a step further by employing multiple query generation and Reciprocal Rank Fusion to re-rank search results."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ebcc6791",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"For this example, we will use Pinecone and some fake data"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "661a1c36",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import pinecone\n",
|
||||
"from langchain.vectorstores import Pinecone\n",
|
||||
"from langchain.embeddings import OpenAIEmbeddings\n",
|
||||
"\n",
|
||||
"pinecone.init(api_key=\"...\", environment=\"...\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "48ef7e93",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"all_documents = {\n",
|
||||
" \"doc1\": \"Climate change and economic impact.\",\n",
|
||||
" \"doc2\": \"Public health concerns due to climate change.\",\n",
|
||||
" \"doc3\": \"Climate change: A social perspective.\",\n",
|
||||
" \"doc4\": \"Technological solutions to climate change.\",\n",
|
||||
" \"doc5\": \"Policy changes needed to combat climate change.\",\n",
|
||||
" \"doc6\": \"Climate change and its impact on biodiversity.\",\n",
|
||||
" \"doc7\": \"Climate change: The science and models.\",\n",
|
||||
" \"doc8\": \"Global warming: A subset of climate change.\",\n",
|
||||
" \"doc9\": \"How climate change affects daily weather.\",\n",
|
||||
" \"doc10\": \"The history of climate change activism.\",\n",
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "fde89f0b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"vectorstore = Pinecone.from_texts(\n",
|
||||
" list(all_documents.values()), OpenAIEmbeddings(), index_name=\"rag-fusion\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "22ddd041",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Define the Query Generator\n",
|
||||
"\n",
|
||||
"We will now define a chain to do the query generation"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "1d547524",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"from langchain.schema.output_parser import StrOutputParser"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 68,
|
||||
"id": "af9ab4db",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain import hub\n",
|
||||
"\n",
|
||||
"prompt = hub.pull(\"langchain-ai/rag-fusion-query-generation\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "3628b552",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# prompt = ChatPromptTemplate.from_messages([\n",
|
||||
"# (\"system\", \"You are a helpful assistant that generates multiple search queries based on a single input query.\"),\n",
|
||||
"# (\"user\", \"Generate multiple search queries related to: {original_query}\"),\n",
|
||||
"# (\"user\", \"OUTPUT (4 queries):\")\n",
|
||||
"# ])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "8d6cbb73",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"generate_queries = (\n",
|
||||
" prompt | ChatOpenAI(temperature=0) | StrOutputParser() | (lambda x: x.split(\"\\n\"))\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ee2824cd",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Define the full chain\n",
|
||||
"\n",
|
||||
"We can now put it all together and define the full chain. This chain:\n",
|
||||
" \n",
|
||||
" 1. Generates a bunch of queries\n",
|
||||
" 2. Looks up each query in the retriever\n",
|
||||
" 3. Joins all the results together using reciprocal rank fusion\n",
|
||||
" \n",
|
||||
" \n",
|
||||
"Note that it does NOT do a final generation step"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 50,
|
||||
"id": "ca0bfec4",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"original_query = \"impact of climate change\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 75,
|
||||
"id": "02437d65",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"vectorstore = Pinecone.from_existing_index(\"rag-fusion\", OpenAIEmbeddings())\n",
|
||||
"retriever = vectorstore.as_retriever()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 76,
|
||||
"id": "46a9a0e6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.load import dumps, loads\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def reciprocal_rank_fusion(results: list[list], k=60):\n",
|
||||
" fused_scores = {}\n",
|
||||
" for docs in results:\n",
|
||||
" # Assumes the docs are returned in sorted order of relevance\n",
|
||||
" for rank, doc in enumerate(docs):\n",
|
||||
" doc_str = dumps(doc)\n",
|
||||
" if doc_str not in fused_scores:\n",
|
||||
" fused_scores[doc_str] = 0\n",
|
||||
" previous_score = fused_scores[doc_str]\n",
|
||||
" fused_scores[doc_str] += 1 / (rank + k)\n",
|
||||
"\n",
|
||||
" reranked_results = [\n",
|
||||
" (loads(doc), score)\n",
|
||||
" for doc, score in sorted(fused_scores.items(), key=lambda x: x[1], reverse=True)\n",
|
||||
" ]\n",
|
||||
" return reranked_results"
|
||||
]
|
||||
},
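To see the scoring in isolation: with the default `k=60`, a document at rank `r` in one ranking contributes `1 / (r + 60)` to its fused score. A toy check that needs no vectorstore (the documents are made up):

```python
# Hedged sketch: RRF over two hand-made rankings.
from langchain.schema import Document

run_a = [Document(page_content="economics"), Document(page_content="biodiversity")]
run_b = [Document(page_content="biodiversity"), Document(page_content="economics")]
# Each doc is rank 0 in one list and rank 1 in the other,
# so both end up with 1/60 + 1/61 and tie.
print(reciprocal_rank_fusion([run_a, run_b]))
```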
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 77,
|
||||
"id": "3f9d4502",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = generate_queries | retriever.map() | reciprocal_rank_fusion"
|
||||
]
|
||||
},
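If a final answer is wanted rather than ranked documents, one way to extend the chain (an addition, not in the original notebook) is to feed the fused results into a prompt and model:

```python
# Hedged sketch: bolt a generation step onto the fusion chain above.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

answer_prompt = ChatPromptTemplate.from_template(
    "Answer the question using only these documents:\n{docs}\n\nQuestion: {question}"
)
full_chain = (
    {"docs": chain, "question": lambda x: x["original_query"]}
    | answer_prompt
    | ChatOpenAI(temperature=0)
    | StrOutputParser()
)
# full_chain.invoke({"original_query": "impact of climate change"})
```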
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 78,
|
||||
"id": "d70c4fcd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[(Document(page_content='Climate change and economic impact.'),\n",
|
||||
" 0.06558258417063283),\n",
|
||||
" (Document(page_content='Climate change: A social perspective.'),\n",
|
||||
" 0.06400409626216078),\n",
|
||||
" (Document(page_content='How climate change affects daily weather.'),\n",
|
||||
" 0.04787506400409626),\n",
|
||||
" (Document(page_content='Climate change and its impact on biodiversity.'),\n",
|
||||
" 0.03306010928961749),\n",
|
||||
" (Document(page_content='Public health concerns due to climate change.'),\n",
|
||||
" 0.016666666666666666),\n",
|
||||
" (Document(page_content='Technological solutions to climate change.'),\n",
|
||||
" 0.016666666666666666),\n",
|
||||
" (Document(page_content='Policy changes needed to combat climate change.'),\n",
|
||||
" 0.01639344262295082)]"
|
||||
]
|
||||
},
|
||||
"execution_count": 78,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"original_query\": original_query})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "7866e551",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
353
cookbook/rewrite.ipynb
Normal file
@@ -0,0 +1,353 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "260629f9",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Rewrite-Retrieve-Read\n",
|
||||
"\n",
|
||||
"**Rewrite-Retrieve-Read** is a method proposed in the paper [Query Rewriting for Retrieval-Augmented Large Language Models](https://arxiv.org/pdf/2305.14283.pdf)\n",
|
||||
"\n",
|
||||
"> Because the original query can not be always optimal to retrieve for the LLM, especially in the real world... we first prompt an LLM to rewrite the queries, then conduct retrieval-augmented reading\n",
|
||||
"\n",
|
||||
"We show how you can easily do that with LangChain Expression Language"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "eda93712",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Baseline\n",
|
||||
"\n",
|
||||
"Baseline RAG (**Retrieve-and-read**) can be done like the following:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "1d2edbd2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from operator import itemgetter\n",
|
||||
"\n",
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.schema.output_parser import StrOutputParser\n",
|
||||
"from langchain.schema.runnable import RunnablePassthrough, RunnableLambda\n",
|
||||
"from langchain.utilities import DuckDuckGoSearchAPIWrapper"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "86a46aa9",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"template = \"\"\"Answer the users question based only on the following context:\n",
|
||||
"\n",
|
||||
"<context>\n",
|
||||
"{context}\n",
|
||||
"</context>\n",
|
||||
"\n",
|
||||
"Question: {question}\n",
|
||||
"\"\"\"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(template)\n",
|
||||
"\n",
|
||||
"model = ChatOpenAI(temperature=0)\n",
|
||||
"\n",
|
||||
"search = DuckDuckGoSearchAPIWrapper()\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def retriever(query):\n",
|
||||
" return search.run(query)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "8566d48e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = (\n",
|
||||
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
|
||||
" | prompt\n",
|
||||
" | model\n",
|
||||
" | StrOutputParser()\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "5c57f9ee",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"simple_query = \"what is langchain?\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "37c5f962",
|
||||
"metadata": {
|
||||
"scrolled": false
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\"LangChain is a powerful and versatile Python library that enables developers and researchers to create, experiment with, and analyze language models and agents. It simplifies the development of language-based applications by providing a suite of features for artificial general intelligence. It can be used to build chatbots, perform document analysis and summarization, and streamline interaction with various large language model providers. LangChain's unique proposition is its ability to create logical links between one or more language models, known as Chains. It is an open-source library that offers a generic interface to foundation models and allows prompt management and integration with other components and tools.\""
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke(simple_query)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "23bdb9bd",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"While this is fine for well formatted queries, it can break down for more complicated queries"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "8df6a814",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"distracted_query = \"man that sam bankman fried trial was crazy! what is langchain?\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "16d7db64",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Based on the given context, there is no information provided about \"langchain.\"'"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke(distracted_query)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0b4f8b93",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"This is because the retriever does a bad job with these \"distracted\" queries"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "3439d8dc",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Business She\\'s the star witness against Sam Bankman-Fried. Her testimony was explosive Gary Wang, who co-founded both FTX and Alameda Research, said Bankman-Fried directed him to change a... The Verge, following the trial\\'s Oct. 4 kickoff: \"Is Sam Bankman-Fried\\'s Defense Even Trying to Win?\". CBS Moneywatch, from Thursday: \"Sam Bankman-Fried\\'s Lawyer Struggles to Poke ... Sam Bankman-Fried, FTX\\'s founder, responded with a single word: \"Oof.\". Less than a year later, Mr. Bankman-Fried, 31, is on trial in federal court in Manhattan, fighting criminal charges ... July 19, 2023. A U.S. judge on Wednesday overruled objections by Sam Bankman-Fried\\'s lawyers and allowed jurors in the FTX founder\\'s fraud trial to see a profane message he sent to a reporter days ... Sam Bankman-Fried, who was once hailed as a virtuoso in cryptocurrency trading, is on trial over the collapse of FTX, the financial exchange he founded. Bankman-Fried is accused of...'"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"retriever(distracted_query)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "7eb748ac",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Rewrite-Retrieve-Read Implementation\n",
|
||||
"\n",
|
||||
"The main part is a rewriter to rewrite the search query"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "88ae702e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"template = \"\"\"Provide a better search query for \\\n",
|
||||
"web search engine to answer the given question, end \\\n",
|
||||
"the queries with ’**’. Question: \\\n",
|
||||
"{x} Answer:\"\"\"\n",
|
||||
"rewrite_prompt = ChatPromptTemplate.from_template(template)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "184e1bcb",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain import hub\n",
|
||||
"\n",
|
||||
"rewrite_prompt = hub.pull(\"langchain-ai/rewrite\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "a4c23d40",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Provide a better search query for web search engine to answer the given question, end the queries with ’**’. Question {x} Answer:\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(rewrite_prompt.template)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "f55cd010",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Parser to remove the `**`\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def _parse(text):\n",
|
||||
" return text.strip(\"**\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "c9c34bef",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"rewriter = rewrite_prompt | ChatOpenAI(temperature=0) | StrOutputParser() | _parse"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "fb17fb3d",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'What is the definition and purpose of Langchain?'"
|
||||
]
|
||||
},
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"rewriter.invoke({\"x\": distracted_query})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "f83edb09",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"rewrite_retrieve_read_chain = (\n",
|
||||
" {\n",
|
||||
" \"context\": {\"x\": RunnablePassthrough()} | rewriter | retriever,\n",
|
||||
" \"question\": RunnablePassthrough(),\n",
|
||||
" }\n",
|
||||
" | prompt\n",
|
||||
" | model\n",
|
||||
" | StrOutputParser()\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "43096322",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Based on the given context, LangChain is an open-source framework designed to simplify the creation of applications using large language models (LLMs). It enables LLM models to generate responses based on up-to-date online information and simplifies the organization of large volumes of data for easy access by LLMs. LangChain offers a standard interface for chains, integrations with other tools, and end-to-end chains for common applications. It is a robust library that streamlines interaction with various LLM providers. LangChain\\'s unique proposition is its ability to create logical links between one or more LLMs, known as Chains. It is an AI framework with features that simplify the development of language-based applications and offers a suite of features for artificial general intelligence. However, the context does not provide any information about the \"sam bankman fried trial\" mentioned in the question.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 16,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"rewrite_retrieve_read_chain.invoke(distracted_query)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "59874b4f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
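Piping the bare `_parse` function into the chain above works because LCEL coerces plain callables into RunnableLambda steps. A minimal sketch of the equivalence, assuming the `rewrite_prompt` defined in the cells above:

from langchain.chat_models import ChatOpenAI
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableLambda


def _parse(text):
    # Strip the trailing '**' that the rewrite prompt asks the model to emit.
    return text.strip("**")


# The bare callable is wrapped in a RunnableLambda automatically,
rewriter = rewrite_prompt | ChatOpenAI(temperature=0) | StrOutputParser() | _parse
# so this explicit form behaves identically.
rewriter = rewrite_prompt | ChatOpenAI(temperature=0) | StrOutputParser() | RunnableLambda(_parse)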
177
cookbook/selecting_llms_based_on_context_length.ipynb
Normal file
@@ -0,0 +1,177 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e93283d1",
"metadata": {},
"source": [
"# Selecting LLMs based on Context Length\n",
"\n",
"Different LLMs have different context lengths. As a very immediate an practical example, OpenAI has two versions of GPT-3.5-Turbo: one with 4k context, another with 16k context. This notebook shows how to route between them based on input."
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "cc453450",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema.prompt import PromptValue\n",
"from langchain.schema.messages import BaseMessage\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from typing import Union, Sequence"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1cec6a10",
"metadata": {},
"outputs": [],
"source": [
"short_context_model = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
"long_context_model = ChatOpenAI(model=\"gpt-3.5-turbo-16k\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "772da153",
"metadata": {},
"outputs": [],
"source": [
"def get_context_length(prompt: PromptValue):\n",
"    messages = prompt.to_messages()\n",
"    tokens = short_context_model.get_num_tokens_from_messages(messages)\n",
"    return tokens"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "db771e20",
"metadata": {},
"outputs": [],
"source": [
"prompt = PromptTemplate.from_template(\"Summarize this passage: {context}\")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "af057e2f",
"metadata": {},
"outputs": [],
"source": [
"def choose_model(prompt: PromptValue):\n",
"    context_len = get_context_length(prompt)\n",
"    if context_len < 30:\n",
"        print(\"short model\")\n",
"        return short_context_model\n",
"    else:\n",
"        print(\"long model\")\n",
"        return long_context_model"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "84f3e07d",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | choose_model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "d8b14f8f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"short model\n"
]
},
{
"data": {
"text/plain": [
"'The passage mentions that a frog visited a pond.'"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"context\": \"a frog went to a pond\"})"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "70ebd3dd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"long model\n"
]
},
{
"data": {
"text/plain": [
"'The passage describes a frog that moved from one pond to another and perched on a log.'"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\n",
"    {\"context\": \"a frog went to a pond and sat on a log and went to a different pond\"}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7e29fef",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
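The `choose_model` step above routes at runtime: when a function in a chain returns a Runnable (here, one of the two chat models), LCEL invokes that returned Runnable with the same input. A condensed sketch of the same routing under that assumption, reusing `prompt`, `get_context_length`, and the two models defined in this notebook:

from langchain.schema.output_parser import StrOutputParser


def route(prompt_value):
    # Pick whichever model's context window fits the rendered prompt.
    if get_context_length(prompt_value) < 30:
        return short_context_model
    return long_context_model


chain = prompt | route | StrOutputParser()
chain.invoke({"context": "a frog went to a pond"})  # short prompt -> 4k model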
@@ -7,9 +7,33 @@
"source": [
"# Building hotel room search with self-querying retrieval\n",
"\n",
"In this example we'll walk through how to build and iterate on a hotel room search service that leverages an LLM to generate structured filter queries that can then be passed to a vector store.\n",
"\n",
"For an introduction to self-querying retrieval [check out the docs](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query)."
]
},
{
"cell_type": "markdown",
"id": "d621de99-d993-4f4b-b94a-d02b2c7ad4e0",
"metadata": {},
"source": [
"## Imports and data prep\n",
"\n",
"In this example we use `ChatOpenAI` for the model and `ElasticsearchStore` for the vector store, but these can be swapped out with an LLM/ChatModel and [any VectorStore that support self-querying](https://python.langchain.com/docs/integrations/retrievers/self_query/).\n",
"\n",
"Download data from: https://www.kaggle.com/datasets/keshavramaiah/hotel-recommendation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8ecd1fbb-bdba-420b-bcc7-5ea8a232ab11",
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain lark openai elasticsearch pandas"
]
},
{
"cell_type": "code",
"execution_count": 1,
@@ -27,8 +51,14 @@
"metadata": {},
"outputs": [],
"source": [
"details = pd.read_csv(\"~/Downloads/archive/Hotel_details.csv\").drop_duplicates(subset=\"hotelid\").set_index(\"hotelid\")\n",
"attributes = pd.read_csv(\"~/Downloads/archive/Hotel_Room_attributes.csv\", index_col=\"id\")\n",
"details = (\n",
"    pd.read_csv(\"~/Downloads/archive/Hotel_details.csv\")\n",
"    .drop_duplicates(subset=\"hotelid\")\n",
"    .set_index(\"hotelid\")\n",
")\n",
"attributes = pd.read_csv(\n",
"    \"~/Downloads/archive/Hotel_Room_attributes.csv\", index_col=\"id\"\n",
")\n",
"price = pd.read_csv(\"~/Downloads/archive/hotels_RoomPrice.csv\", index_col=\"id\")"
]
},
@@ -184,9 +214,20 @@
}
],
"source": [
"latest_price = price.drop_duplicates(subset=\"refid\", keep=\"last\")[[\"hotelcode\", \"roomtype\", \"onsiterate\", \"roomamenities\", \"maxoccupancy\", \"mealinclusiontype\"]]\n",
"latest_price = price.drop_duplicates(subset=\"refid\", keep=\"last\")[\n",
"    [\n",
"        \"hotelcode\",\n",
"        \"roomtype\",\n",
"        \"onsiterate\",\n",
"        \"roomamenities\",\n",
"        \"maxoccupancy\",\n",
"        \"mealinclusiontype\",\n",
"    ]\n",
"]\n",
"latest_price[\"ratedescription\"] = attributes.loc[latest_price.index][\"ratedescription\"]\n",
"latest_price = latest_price.join(details[[\"hotelname\", \"city\", \"country\", \"starrating\"]], on=\"hotelcode\")\n",
"latest_price = latest_price.join(\n",
"    details[[\"hotelname\", \"city\", \"country\", \"starrating\"]], on=\"hotelcode\"\n",
")\n",
"latest_price = latest_price.rename({\"ratedescription\": \"roomdescription\"}, axis=1)\n",
"latest_price[\"mealsincluded\"] = ~latest_price[\"mealinclusiontype\"].isnull()\n",
"latest_price.pop(\"hotelcode\")\n",
@@ -220,7 +261,7 @@
"res = model.predict(\n",
"    \"Below is a table with information about hotel rooms. \"\n",
"    \"Return a JSON list with an entry for each column. Each entry should have \"\n",
"    \"{\\\"name\\\": \\\"column name\\\", \\\"description\\\": \\\"column description\\\", \\\"type\\\": \\\"column data type\\\"}\"\n",
"    '{\"name\": \"column name\", \"description\": \"column description\", \"type\": \"column data type\"}'\n",
"    f\"\\n\\n{latest_price.head()}\\n\\nJSON:\\n\"\n",
")"
]
@@ -314,9 +355,15 @@
"metadata": {},
"outputs": [],
"source": [
"attribute_info[-2]['description'] += f\". Valid values are {sorted(latest_price['starrating'].value_counts().index.tolist())}\"\n",
"attribute_info[3]['description'] += f\". Valid values are {sorted(latest_price['maxoccupancy'].value_counts().index.tolist())}\"\n",
"attribute_info[-3]['description'] += f\". Valid values are {sorted(latest_price['country'].value_counts().index.tolist())}\""
"attribute_info[-2][\n",
"    \"description\"\n",
"] += f\". Valid values are {sorted(latest_price['starrating'].value_counts().index.tolist())}\"\n",
"attribute_info[3][\n",
"    \"description\"\n",
"] += f\". Valid values are {sorted(latest_price['maxoccupancy'].value_counts().index.tolist())}\"\n",
"attribute_info[-3][\n",
"    \"description\"\n",
"] += f\". Valid values are {sorted(latest_price['country'].value_counts().index.tolist())}\""
]
},
{
@@ -384,7 +431,10 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.query_constructor.base import get_query_constructor_prompt, load_query_constructor_runnable"
"from langchain.chains.query_constructor.base import (\n",
"    get_query_constructor_prompt,\n",
"    load_query_constructor_runnable,\n",
")"
]
},
{
@@ -568,7 +618,9 @@
"metadata": {},
"outputs": [],
"source": [
"chain = load_query_constructor_runnable(ChatOpenAI(model='gpt-3.5-turbo', temperature=0), doc_contents, attribute_info)"
"chain = load_query_constructor_runnable(\n",
"    ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0), doc_contents, attribute_info\n",
")"
]
},
{
@@ -610,7 +662,11 @@
}
],
"source": [
"chain.invoke({\"query\": \"Find a 2-person room in Vienna or London, preferably with meals included and AC\"})"
"chain.invoke(\n",
"    {\n",
"        \"query\": \"Find a 2-person room in Vienna or London, preferably with meals included and AC\"\n",
"    }\n",
")"
]
},
{
@@ -632,10 +688,12 @@
"metadata": {},
"outputs": [],
"source": [
"attribute_info[-3]['description'] += \". NOTE: Only use the 'eq' operator if a specific country is mentioned. If a region is mentioned, include all relevant countries in filter.\"\n",
"attribute_info[-3][\n",
"    \"description\"\n",
"] += \". NOTE: Only use the 'eq' operator if a specific country is mentioned. If a region is mentioned, include all relevant countries in filter.\"\n",
"chain = load_query_constructor_runnable(\n",
"    ChatOpenAI(model='gpt-3.5-turbo', temperature=0), \n",
"    doc_contents, \n",
"    ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0),\n",
"    doc_contents,\n",
"    attribute_info,\n",
")"
]
@@ -680,10 +738,12 @@
"source": [
"content_attr = [\"roomtype\", \"roomamenities\", \"roomdescription\", \"hotelname\"]\n",
"doc_contents = \"A detailed description of a hotel room, including information about the room type and room amenities.\"\n",
"filter_attribute_info = tuple(ai for ai in attribute_info if ai[\"name\"] not in content_attr)\n",
"filter_attribute_info = tuple(\n",
"    ai for ai in attribute_info if ai[\"name\"] not in content_attr\n",
")\n",
"chain = load_query_constructor_runnable(\n",
"    ChatOpenAI(model='gpt-3.5-turbo', temperature=0), \n",
"    doc_contents, \n",
"    ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0),\n",
"    doc_contents,\n",
"    filter_attribute_info,\n",
")"
]
@@ -706,7 +766,11 @@
}
],
"source": [
"chain.invoke({\"query\": \"Find a 2-person room in Vienna or London, preferably with meals included and AC\"})"
"chain.invoke(\n",
"    {\n",
"        \"query\": \"Find a 2-person room in Vienna or London, preferably with meals included and AC\"\n",
"    }\n",
")"
]
},
{
@@ -836,14 +900,22 @@
"examples = [\n",
"    (\n",
"        \"I want a hotel in the Balkans with a king sized bed and a hot tub. Budget is $300 a night\",\n",
"        {\"query\": \"king-sized bed, hot tub\", \"filter\": 'and(in(\"country\", [\"Bulgaria\", \"Greece\", \"Croatia\", \"Serbia\"]), lte(\"onsiterate\", 300))'}\n",
"        {\n",
"            \"query\": \"king-sized bed, hot tub\",\n",
"            \"filter\": 'and(in(\"country\", [\"Bulgaria\", \"Greece\", \"Croatia\", \"Serbia\"]), lte(\"onsiterate\", 300))',\n",
"        },\n",
"    ),\n",
"    (\n",
"        \"A room with breakfast included for 3 people, at a Hilton\",\n",
"        {\"query\": \"Hilton\", \"filter\": 'and(eq(\"mealsincluded\", true), gte(\"maxoccupancy\", 3))'}\n",
"        {\n",
"            \"query\": \"Hilton\",\n",
"            \"filter\": 'and(eq(\"mealsincluded\", true), gte(\"maxoccupancy\", 3))',\n",
"        },\n",
"    ),\n",
"]\n",
"prompt = get_query_constructor_prompt(doc_contents, filter_attribute_info, examples=examples)\n",
"prompt = get_query_constructor_prompt(\n",
"    doc_contents, filter_attribute_info, examples=examples\n",
")\n",
"print(prompt.format(query=\"{query}\"))"
]
},
@@ -855,10 +927,10 @@
"outputs": [],
"source": [
"chain = load_query_constructor_runnable(\n",
"    ChatOpenAI(model='gpt-3.5-turbo', temperature=0), \n",
"    doc_contents, \n",
"    ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0),\n",
"    doc_contents,\n",
"    filter_attribute_info,\n",
"    examples=examples\n",
"    examples=examples,\n",
")"
]
},
@@ -880,7 +952,11 @@
}
],
"source": [
"chain.invoke({\"query\": \"Find a 2-person room in Vienna or London, preferably with meals included and AC\"})"
"chain.invoke(\n",
"    {\n",
"        \"query\": \"Find a 2-person room in Vienna or London, preferably with meals included and AC\"\n",
"    }\n",
")"
]
},
{
@@ -932,7 +1008,11 @@
}
],
"source": [
"chain.invoke({\"query\": \"I want to stay somewhere highly rated along the coast. I want a room with a patio and a fireplace.\"})"
"chain.invoke(\n",
"    {\n",
"        \"query\": \"I want to stay somewhere highly rated along the coast. I want a room with a patio and a fireplace.\"\n",
"    }\n",
")"
]
},
{
@@ -953,11 +1033,11 @@
"outputs": [],
"source": [
"chain = load_query_constructor_runnable(\n",
"    ChatOpenAI(model='gpt-3.5-turbo', temperature=0), \n",
"    doc_contents, \n",
"    ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0),\n",
"    doc_contents,\n",
"    filter_attribute_info,\n",
"    examples=examples,\n",
"    fix_invalid=True\n",
"    fix_invalid=True,\n",
")"
]
},
@@ -979,7 +1059,11 @@
}
],
"source": [
"chain.invoke({\"query\": \"I want to stay somewhere highly rated along the coast. I want a room with a patio and a fireplace.\"})"
"chain.invoke(\n",
"    {\n",
"        \"query\": \"I want to stay somewhere highly rated along the coast. I want a room with a patio and a fireplace.\"\n",
"    }\n",
")"
]
},
{
@@ -1032,8 +1116,8 @@
"# docs.append(doc)\n",
"# vecstore = ElasticsearchStore.from_documents(\n",
"#     docs,\n",
"#     embeddings, \n",
"#     es_url=\"http://localhost:9200\", \n",
"#     embeddings,\n",
"#     es_url=\"http://localhost:9200\",\n",
"#     index_name=\"hotel_rooms\",\n",
"#     # strategy=ElasticsearchStore.ApproxRetrievalStrategy(\n",
"#     #     hybrid=True,\n",
@@ -1049,9 +1133,9 @@
"outputs": [],
"source": [
"vecstore = ElasticsearchStore(\n",
"    \"hotel_rooms\", \n",
"    embedding=embeddings, \n",
"    es_url=\"http://localhost:9200\", \n",
"    \"hotel_rooms\",\n",
"    embedding=embeddings,\n",
"    es_url=\"http://localhost:9200\",\n",
"    # strategy=ElasticsearchStore.ApproxRetrievalStrategy(hybrid=True) # seems to not be available in community version\n",
")"
]
@@ -1065,7 +1149,9 @@
"source": [
"from langchain.retrievers import SelfQueryRetriever\n",
"\n",
"retriever = SelfQueryRetriever(query_constructor=chain, vectorstore=vecstore, verbose=True)"
"retriever = SelfQueryRetriever(\n",
"    query_constructor=chain, vectorstore=vecstore, verbose=True\n",
")"
]
},
{
@@ -1142,7 +1228,9 @@
}
],
"source": [
"results = retriever.get_relevant_documents(\"I want to stay somewhere highly rated along the coast. I want a room with a patio and a fireplace.\")\n",
"results = retriever.get_relevant_documents(\n",
"    \"I want to stay somewhere highly rated along the coast. I want a room with a patio and a fireplace.\"\n",
")\n",
"for res in results:\n",
"    print(res.page_content)\n",
"    print(\"\\n\" + \"-\" * 20 + \"\\n\")"
351
cookbook/stepback-qa.ipynb
Normal file
@@ -0,0 +1,351 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "83ef724e",
"metadata": {},
"source": [
"# Step-Back Prompting (Question-Answering)\n",
"\n",
"One prompting technique called \"Step-Back\" prompting can improve performance on complex questions by first asking a \"step back\" question. This can be combined with regular question-answering applications by then doing retrieval on both the original and step-back question.\n",
"\n",
"Read the paper [here](https://arxiv.org/abs/2310.06117)\n",
"\n",
"See an excellent blog post on this by Cobus Greyling [here](https://cobusgreyling.medium.com/a-new-prompt-engineering-technique-has-been-introduced-called-step-back-prompting-b00e8954cacb)\n",
"\n",
"In this cookbook we will replicate this technique. We modify the prompts used slightly to work better with chat models."
]
},
{
"cell_type": "code",
"execution_count": 85,
"id": "67b5cdac",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnableLambda"
]
},
{
"cell_type": "code",
"execution_count": 86,
"id": "7e017c44",
"metadata": {},
"outputs": [],
"source": [
"# Few Shot Examples\n",
"examples = [\n",
"    {\n",
"        \"input\": \"Could the members of The Police perform lawful arrests?\",\n",
"        \"output\": \"what can the members of The Police do?\",\n",
"    },\n",
"    {\n",
"        \"input\": \"Jan Sindel’s was born in what country?\",\n",
"        \"output\": \"what is Jan Sindel’s personal history?\",\n",
"    },\n",
"]\n",
"# We now transform these to example messages\n",
"example_prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"human\", \"{input}\"),\n",
"        (\"ai\", \"{output}\"),\n",
"    ]\n",
")\n",
"few_shot_prompt = FewShotChatMessagePromptTemplate(\n",
"    example_prompt=example_prompt,\n",
"    examples=examples,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 87,
"id": "206415ee",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\n",
"            \"system\",\n",
"            \"\"\"You are an expert at world knowledge. Your task is to step back and paraphrase a question to a more generic step-back question, which is easier to answer. Here are a few examples:\"\"\",\n",
"        ),\n",
"        # Few shot examples\n",
"        few_shot_prompt,\n",
"        # New question\n",
"        (\"user\", \"{question}\"),\n",
"    ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 88,
"id": "d643a85c",
"metadata": {},
"outputs": [],
"source": [
"question_gen = prompt | ChatOpenAI(temperature=0) | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 182,
"id": "5ba21b2a",
"metadata": {},
"outputs": [],
"source": [
"question = \"was chatgpt around while trump was president?\""
]
},
{
"cell_type": "code",
"execution_count": 183,
"id": "5992c8ca",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'when was ChatGPT developed?'"
]
},
"execution_count": 183,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"question_gen.invoke({\"question\": question})"
]
},
{
"cell_type": "code",
"execution_count": 190,
"id": "32667424",
"metadata": {},
"outputs": [],
"source": [
"from langchain.utilities import DuckDuckGoSearchAPIWrapper\n",
"\n",
"\n",
"search = DuckDuckGoSearchAPIWrapper(max_results=4)\n",
"\n",
"\n",
"def retriever(query):\n",
"    return search.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 191,
"id": "ffc28c91",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'This includes content about former President Donald Trump. According to further tests, ChatGPT successfully wrote poems admiring all recent U.S. presidents, but failed when we entered a query for ... On Wednesday, a Twitter user posted screenshots of him asking OpenAI\\'s chatbot, ChatGPT, to write a positive poem about former President Donald Trump, to which the chatbot declined, citing it ... While impressive in many respects, ChatGPT also has some major flaws. ... [President\\'s Name],\" refused to write a poem about ex-President Trump, but wrote one about President Biden ... During the Trump administration, Altman gained new attention as a vocal critic of the president. It was against that backdrop that he was rumored to be considering a run for California governor.'"
]
},
"execution_count": 191,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever(question)"
]
},
{
"cell_type": "code",
"execution_count": 192,
"id": "00c77443",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Will Douglas Heaven March 3, 2023 Stephanie Arnett/MITTR | Envato When OpenAI launched ChatGPT, with zero fanfare, in late November 2022, the San Francisco-based artificial-intelligence company... ChatGPT, which stands for Chat Generative Pre-trained Transformer, is a large language model -based chatbot developed by OpenAI and launched on November 30, 2022, which enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. ChatGPT is an artificial intelligence (AI) chatbot built on top of OpenAI's foundational large language models (LLMs) like GPT-4 and its predecessors. This chatbot has redefined the standards of... June 4, 2023 ⋅ 4 min read 124 SHARES 13K At the end of 2022, OpenAI introduced the world to ChatGPT. Since its launch, ChatGPT hasn't shown significant signs of slowing down in developing new...\""
]
},
"execution_count": 192,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever(question_gen.invoke({\"question\": question}))"
]
},
{
"cell_type": "code",
"execution_count": 193,
"id": "b257bc06",
"metadata": {},
"outputs": [],
"source": [
"# response_prompt_template = \"\"\"You are an expert of world knowledge. I am going to ask you a question. Your response should be comprehensive and not contradicted with the following context if they are relevant. Otherwise, ignore them if they are not relevant.\n",
"\n",
"# {normal_context}\n",
"# {step_back_context}\n",
"\n",
"# Original Question: {question}\n",
"# Answer:\"\"\"\n",
"# response_prompt = ChatPromptTemplate.from_template(response_prompt_template)"
]
},
{
"cell_type": "code",
"execution_count": 203,
"id": "f48c65b2",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"\n",
"response_prompt = hub.pull(\"langchain-ai/stepback-answer\")"
]
},
{
"cell_type": "code",
"execution_count": 204,
"id": "97a6d5ab",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
"    {\n",
"        # Retrieve context using the normal question\n",
"        \"normal_context\": RunnableLambda(lambda x: x[\"question\"]) | retriever,\n",
"        # Retrieve context using the step-back question\n",
"        \"step_back_context\": question_gen | retriever,\n",
"        # Pass on the question\n",
"        \"question\": lambda x: x[\"question\"],\n",
"    }\n",
"    | response_prompt\n",
"    | ChatOpenAI(temperature=0)\n",
"    | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 205,
"id": "ce554cb0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"No, ChatGPT was not around while Donald Trump was president. ChatGPT was launched on November 30, 2022, which is after Donald Trump's presidency. The context provided mentions that during the Trump administration, Altman, the CEO of OpenAI, gained attention as a vocal critic of the president. This suggests that ChatGPT was not developed or available during that time.\""
]
},
"execution_count": 205,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"question\": question})"
]
},
{
"cell_type": "markdown",
"id": "a9fb8dd2",
"metadata": {},
"source": [
"## Baseline"
]
},
{
"cell_type": "code",
"execution_count": 206,
"id": "00db8a15",
"metadata": {},
"outputs": [],
"source": [
"response_prompt_template = \"\"\"You are an expert of world knowledge. I am going to ask you a question. Your response should be comprehensive and not contradicted with the following context if they are relevant. Otherwise, ignore them if they are not relevant.\n",
"\n",
"{normal_context}\n",
"\n",
"Original Question: {question}\n",
"Answer:\"\"\"\n",
"response_prompt = ChatPromptTemplate.from_template(response_prompt_template)"
]
},
{
"cell_type": "code",
"execution_count": 207,
"id": "06335ebb",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
"    {\n",
"        # Retrieve context using the normal question (only the first 3 results)\n",
"        \"normal_context\": RunnableLambda(lambda x: x[\"question\"]) | retriever,\n",
"        # Pass on the question\n",
"        \"question\": lambda x: x[\"question\"],\n",
"    }\n",
"    | response_prompt\n",
"    | ChatOpenAI(temperature=0)\n",
"    | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 208,
"id": "15e0e741",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Yes, ChatGPT was around while Donald Trump was president. However, it is important to note that the specific context you provided mentions that ChatGPT refused to write a positive poem about former President Donald Trump. This suggests that while ChatGPT was available during Trump's presidency, it may have had limitations or biases in its responses regarding him.\""
]
},
"execution_count": 208,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"question\": question})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7b9e5d6",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
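FewShotChatMessagePromptTemplate renders each example as a human/AI message pair, so the step-back examples act as in-context demonstrations of the question transformation. A small sketch, assuming the `few_shot_prompt` defined above:

# The fixed examples expand into alternating human/ai messages:
for message in few_shot_prompt.format_messages():
    print(f"{message.type}: {message.content}")
# human: Could the members of The Police perform lawful arrests?
# ai: what can the members of The Police do?
# human: Jan Sindel’s was born in what country?
# ai: what is Jan Sindel’s personal history?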
@@ -51,7 +51,7 @@
}
],
"source": [
"sudoku_puzzle = \"3,*,*,2|1,*,3,*|*,1,*,3|4,*,*,1\"\n",
"sudoku_puzzle = \"3,*,*,2|1,*,3,*|*,1,*,3|4,*,*,1\"\n",
"sudoku_solution = \"3,4,1,2|1,2,3,4|2,1,4,3|4,3,2,1\"\n",
"problem_description = f\"\"\"\n",
"{sudoku_puzzle}\n",
@@ -64,7 +64,7 @@
"- Keep the known digits from previous valid thoughts in place.\n",
"- Each thought can be a partial or the final solution.\n",
"\"\"\".strip()\n",
"print(problem_description)\n"
"print(problem_description)"
]
},
{
@@ -89,8 +89,11 @@
"from langchain_experimental.tot.thought import ThoughtValidity\n",
"import re\n",
"\n",
"\n",
"class MyChecker(ToTChecker):\n",
"    def evaluate(self, problem_description: str, thoughts: Tuple[str, ...] = ()) -> ThoughtValidity:\n",
"    def evaluate(\n",
"        self, problem_description: str, thoughts: Tuple[str, ...] = ()\n",
"    ) -> ThoughtValidity:\n",
"        last_thought = thoughts[-1]\n",
"        clean_solution = last_thought.replace(\" \", \"\").replace('\"', \"\")\n",
"        regex_solution = clean_solution.replace(\"*\", \".\").replace(\"|\", \"\\\\|\")\n",
@@ -116,10 +119,22 @@
"outputs": [],
"source": [
"checker = MyChecker()\n",
"assert checker.evaluate(\"\", (\"3,*,*,2|1,*,3,*|*,1,*,3|4,*,*,1\",)) == ThoughtValidity.VALID_INTERMEDIATE\n",
"assert checker.evaluate(\"\", (\"3,4,1,2|1,2,3,4|2,1,4,3|4,3,2,1\",)) == ThoughtValidity.VALID_FINAL\n",
"assert checker.evaluate(\"\", (\"3,4,1,2|1,2,3,4|2,1,4,3|4,3,*,1\",)) == ThoughtValidity.VALID_INTERMEDIATE\n",
"assert checker.evaluate(\"\", (\"3,4,1,2|1,2,3,4|2,1,4,3|4,*,3,1\",)) == ThoughtValidity.INVALID"
"assert (\n",
"    checker.evaluate(\"\", (\"3,*,*,2|1,*,3,*|*,1,*,3|4,*,*,1\",))\n",
"    == ThoughtValidity.VALID_INTERMEDIATE\n",
")\n",
"assert (\n",
"    checker.evaluate(\"\", (\"3,4,1,2|1,2,3,4|2,1,4,3|4,3,2,1\",))\n",
"    == ThoughtValidity.VALID_FINAL\n",
")\n",
"assert (\n",
"    checker.evaluate(\"\", (\"3,4,1,2|1,2,3,4|2,1,4,3|4,3,*,1\",))\n",
"    == ThoughtValidity.VALID_INTERMEDIATE\n",
")\n",
"assert (\n",
"    checker.evaluate(\"\", (\"3,4,1,2|1,2,3,4|2,1,4,3|4,*,3,1\",))\n",
"    == ThoughtValidity.INVALID\n",
")"
]
},
{
@@ -203,7 +218,9 @@
"source": [
"from langchain_experimental.tot.base import ToTChain\n",
"\n",
"tot_chain = ToTChain(llm=llm, checker=MyChecker(), k=30, c=5, verbose=True, verbose_llm=False)\n",
"tot_chain = ToTChain(\n",
"    llm=llm, checker=MyChecker(), k=30, c=5, verbose=True, verbose_llm=False\n",
")\n",
"tot_chain.run(problem_description=problem_description)"
]
},

@@ -14,6 +14,8 @@ cd ../_dist
poetry run python scripts/model_feat_table.py
poetry run nbdoc_build --srcdir docs
cp ../cookbook/README.md src/pages/cookbook.mdx
cp ../.github/CONTRIBUTING.md docs/contributing.md
wget https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/guides/deployments/langserve.md
poetry run python scripts/generate_api_reference_links.py
yarn install
yarn start

@@ -2,9 +2,9 @@
import importlib
import inspect
import typing
from pathlib import Path
from typing import TypedDict, Sequence, List, Dict, Literal, Union, Optional
from enum import Enum
from pathlib import Path
from typing import Dict, List, Literal, Optional, Sequence, TypedDict, Union

from pydantic import BaseModel

File diff suppressed because one or more lines are too long
@@ -115,7 +115,9 @@
"agent = (\n",
"    {\n",
"        \"question\": lambda x: x[\"question\"],\n",
"        \"intermediate_steps\": lambda x: convert_intermediate_steps(x[\"intermediate_steps\"])\n",
"        \"intermediate_steps\": lambda x: convert_intermediate_steps(\n",
"            x[\"intermediate_steps\"]\n",
"        ),\n",
"    }\n",
"    | prompt.partial(tools=convert_tools(tool_list))\n",
"    | model.bind(stop=[\"</tool_input>\", \"</final_answer>\"])\n",

@@ -12,15 +12,19 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 1,
"id": "bd7c259a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate\n",
"from langchain.prompts import (\n",
"    ChatPromptTemplate,\n",
"    SystemMessagePromptTemplate,\n",
"    HumanMessagePromptTemplate,\n",
")\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.utilities import PythonREPL"
"from langchain_experimental.utilities import PythonREPL"
]
},
{
@@ -37,9 +41,7 @@
"```python\n",
"....\n",
"```\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [(\"system\", template), (\"human\", \"{input}\")]\n",
")\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", template), (\"human\", \"{input}\")])\n",
"\n",
"model = ChatOpenAI()"
]
@@ -111,7 +113,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

@@ -24,11 +24,13 @@
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"model = ChatOpenAI()\n",
"prompt = ChatPromptTemplate.from_messages([\n",
"    (\"system\", \"You are a helpful chatbot\"),\n",
"    MessagesPlaceholder(variable_name=\"history\"),\n",
"    (\"human\", \"{input}\")\n",
"])\n"
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", \"You are a helpful chatbot\"),\n",
"        MessagesPlaceholder(variable_name=\"history\"),\n",
"        (\"human\", \"{input}\"),\n",
"    ]\n",
")"
]
},
{
@@ -38,7 +40,7 @@
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(return_messages=True)\n"
"memory = ConversationBufferMemory(return_messages=True)"
]
},
{
@@ -59,7 +61,7 @@
}
],
"source": [
"memory.load_memory_variables({})\n"
"memory.load_memory_variables({})"
]
},
{
@@ -69,9 +71,13 @@
"metadata": {},
"outputs": [],
"source": [
"chain = RunnablePassthrough.assign(\n",
"    memory=RunnableLambda(memory.load_memory_variables) | itemgetter(\"history\")\n",
") | prompt | model\n"
"chain = (\n",
"    RunnablePassthrough.assign(\n",
"        memory=RunnableLambda(memory.load_memory_variables) | itemgetter(\"history\")\n",
"    )\n",
"    | prompt\n",
"    | model\n",
")"
]
},
{
@@ -94,7 +100,7 @@
"source": [
"inputs = {\"input\": \"hi im bob\"}\n",
"response = chain.invoke(inputs)\n",
"response\n"
"response"
]
},
{
@@ -104,7 +110,7 @@
"metadata": {},
"outputs": [],
"source": [
"memory.save_context(inputs, {\"output\": response.content})\n"
"memory.save_context(inputs, {\"output\": response.content})"
]
},
{
@@ -126,7 +132,7 @@
}
],
"source": [
"memory.load_memory_variables({})\n"
"memory.load_memory_variables({})"
]
},
{
@@ -149,7 +155,7 @@
"source": [
"inputs = {\"input\": \"whats my name\"}\n",
"response = chain.invoke(inputs)\n",
"response\n"
"response"
]
}
],

@@ -40,9 +40,7 @@
"outputs": [],
"source": [
"model = OpenAI()\n",
"prompt = ChatPromptTemplate.from_messages([\n",
"    (\"system\", \"repeat after me: {input}\")\n",
"])"
"prompt = ChatPromptTemplate.from_messages([(\"system\", \"repeat after me: {input}\")])"
]
},
{

@@ -44,13 +44,20 @@
"from langchain.schema import StrOutputParser\n",
"\n",
"prompt1 = ChatPromptTemplate.from_template(\"what is the city {person} is from?\")\n",
"prompt2 = ChatPromptTemplate.from_template(\"what country is the city {city} in? respond in {language}\")\n",
"prompt2 = ChatPromptTemplate.from_template(\n",
"    \"what country is the city {city} in? respond in {language}\"\n",
")\n",
"\n",
"model = ChatOpenAI()\n",
"\n",
"chain1 = prompt1 | model | StrOutputParser()\n",
"\n",
"chain2 = {\"city\": chain1, \"language\": itemgetter(\"language\")} | prompt2 | model | StrOutputParser()\n",
"chain2 = (\n",
"    {\"city\": chain1, \"language\": itemgetter(\"language\")}\n",
"    | prompt2\n",
"    | model\n",
"    | StrOutputParser()\n",
")\n",
"\n",
"chain2.invoke({\"person\": \"obama\", \"language\": \"spanish\"})"
]
@@ -64,17 +71,29 @@
"source": [
"from langchain.schema.runnable import RunnableMap, RunnablePassthrough\n",
"\n",
"prompt1 = ChatPromptTemplate.from_template(\"generate a {attribute} color. Return the name of the color and nothing else:\")\n",
"prompt2 = ChatPromptTemplate.from_template(\"what is a fruit of color: {color}. Return the name of the fruit and nothing else:\")\n",
"prompt3 = ChatPromptTemplate.from_template(\"what is a country with a flag that has the color: {color}. Return the name of the country and nothing else:\")\n",
"prompt4 = ChatPromptTemplate.from_template(\"What is the color of {fruit} and the flag of {country}?\")\n",
"prompt1 = ChatPromptTemplate.from_template(\n",
"    \"generate a {attribute} color. Return the name of the color and nothing else:\"\n",
")\n",
"prompt2 = ChatPromptTemplate.from_template(\n",
"    \"what is a fruit of color: {color}. Return the name of the fruit and nothing else:\"\n",
")\n",
"prompt3 = ChatPromptTemplate.from_template(\n",
"    \"what is a country with a flag that has the color: {color}. Return the name of the country and nothing else:\"\n",
")\n",
"prompt4 = ChatPromptTemplate.from_template(\n",
"    \"What is the color of {fruit} and the flag of {country}?\"\n",
")\n",
"\n",
"model_parser = model | StrOutputParser()\n",
"\n",
"color_generator = {\"attribute\": RunnablePassthrough()} | prompt1 | {\"color\": model_parser}\n",
"color_generator = (\n",
"    {\"attribute\": RunnablePassthrough()} | prompt1 | {\"color\": model_parser}\n",
")\n",
"color_to_fruit = prompt2 | model_parser\n",
"color_to_country = prompt3 | model_parser\n",
"question_generator = color_generator | {\"fruit\": color_to_fruit, \"country\": color_to_country} | prompt4"
"question_generator = (\n",
"    color_generator | {\"fruit\": color_to_fruit, \"country\": color_to_country} | prompt4\n",
")"
]
},
{
@@ -148,9 +167,7 @@
"outputs": [],
"source": [
"planner = (\n",
"    ChatPromptTemplate.from_template(\n",
"        \"Generate an argument about: {input}\"\n",
"    )\n",
"    ChatPromptTemplate.from_template(\"Generate an argument about: {input}\")\n",
"    | ChatOpenAI()\n",
"    | StrOutputParser()\n",
"    | {\"base_response\": RunnablePassthrough()}\n",
@@ -163,7 +180,7 @@
"    | ChatOpenAI()\n",
"    | StrOutputParser()\n",
")\n",
"arguments_against = (\n",
"arguments_against = (\n",
"    ChatPromptTemplate.from_template(\n",
"        \"List the cons or negative aspects of {base_response}\"\n",
"    )\n",
@@ -184,7 +201,7 @@
")\n",
"\n",
"chain = (\n",
"    planner \n",
"    planner\n",
"    | {\n",
"        \"results_1\": arguments_for,\n",
"        \"results_2\": arguments_against,\n",

@@ -30,7 +30,7 @@
"source": [
"## PromptTemplate + LLM\n",
"\n",
"The simplest composition is just combing a prompt and model to create a chain that takes user input, adds it to a prompt, passes it to a model, and returns the raw model input.\n",
"The simplest composition is just combining a prompt and model to create a chain that takes user input, adds it to a prompt, passes it to a model, and returns the raw model output.\n",
"\n",
"Note, you can mix and match PromptTemplate/ChatPromptTemplates and LLMs/ChatModels as you like here."
]
@@ -47,7 +47,7 @@
"\n",
"prompt = ChatPromptTemplate.from_template(\"tell me a joke about {foo}\")\n",
"model = ChatOpenAI()\n",
"chain = prompt | model\n"
"chain = prompt | model"
]
},
{
@@ -68,7 +68,7 @@
}
],
"source": [
"chain.invoke({\"foo\": \"bears\"})\n"
"chain.invoke({\"foo\": \"bears\"})"
]
},
{
@@ -76,7 +76,7 @@
"id": "7eb9ef50",
"metadata": {},
"source": [
"Often times we want to attach kwargs that'll be passed to each model call. Here's a few examples of that:"
"Often times we want to attach kwargs that'll be passed to each model call. Here are a few examples of that:"
]
},
{
@@ -94,7 +94,7 @@
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model.bind(stop=[\"\\n\"])\n"
"chain = prompt | model.bind(stop=[\"\\n\"])"
]
},
{
@@ -115,7 +115,7 @@
}
],
"source": [
"chain.invoke({\"foo\": \"bears\"})\n"
"chain.invoke({\"foo\": \"bears\"})"
]
},
{
@@ -135,25 +135,22 @@
"source": [
"functions = [\n",
"    {\n",
"      \"name\": \"joke\",\n",
"      \"description\": \"A joke\",\n",
"      \"parameters\": {\n",
"        \"type\": \"object\",\n",
"        \"properties\": {\n",
"          \"setup\": {\n",
"            \"type\": \"string\",\n",
"            \"description\": \"The setup for the joke\"\n",
"          },\n",
"          \"punchline\": {\n",
"            \"type\": \"string\",\n",
"            \"description\": \"The punchline for the joke\"\n",
"          }\n",
"        \"name\": \"joke\",\n",
"        \"description\": \"A joke\",\n",
"        \"parameters\": {\n",
"            \"type\": \"object\",\n",
"            \"properties\": {\n",
"                \"setup\": {\"type\": \"string\", \"description\": \"The setup for the joke\"},\n",
"                \"punchline\": {\n",
"                    \"type\": \"string\",\n",
"                    \"description\": \"The punchline for the joke\",\n",
"                },\n",
"            },\n",
"            \"required\": [\"setup\", \"punchline\"],\n",
"        },\n",
"        \"required\": [\"setup\", \"punchline\"]\n",
"      }\n",
"    }\n",
"  ]\n",
"chain = prompt | model.bind(function_call= {\"name\": \"joke\"}, functions= functions)\n"
"]\n",
"chain = prompt | model.bind(function_call={\"name\": \"joke\"}, functions=functions)"
]
},
{
@@ -174,7 +171,7 @@
}
],
"source": [
"chain.invoke({\"foo\": \"bears\"}, config={})\n"
"chain.invoke({\"foo\": \"bears\"}, config={})"
]
},
{
@@ -196,7 +193,7 @@
"source": [
"from langchain.schema.output_parser import StrOutputParser\n",
"\n",
"chain = prompt | model | StrOutputParser()\n"
"chain = prompt | model | StrOutputParser()"
]
},
{
@@ -225,7 +222,7 @@
}
],
"source": [
"chain.invoke({\"foo\": \"bears\"})\n"
"chain.invoke({\"foo\": \"bears\"})"
]
},
{
@@ -248,10 +245,10 @@
"from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser\n",
"\n",
"chain = (\n",
"    prompt \n",
"    | model.bind(function_call= {\"name\": \"joke\"}, functions= functions) \n",
"    prompt\n",
"    | model.bind(function_call={\"name\": \"joke\"}, functions=functions)\n",
"    | JsonOutputFunctionsParser()\n",
")\n"
")"
]
},
{
@@ -273,7 +270,7 @@
}
],
"source": [
"chain.invoke({\"foo\": \"bears\"})\n"
"chain.invoke({\"foo\": \"bears\"})"
]
},
{
@@ -286,10 +283,10 @@
"from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParser\n",
"\n",
"chain = (\n",
"    prompt \n",
"    | model.bind(function_call= {\"name\": \"joke\"}, functions= functions) \n",
"    prompt\n",
"    | model.bind(function_call={\"name\": \"joke\"}, functions=functions)\n",
"    | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
")\n"
")"
]
},
{
@@ -310,7 +307,7 @@
}
],
"source": [
"chain.invoke({\"foo\": \"bears\"})\n"
"chain.invoke({\"foo\": \"bears\"})"
]
},
{
@@ -334,11 +331,11 @@
"\n",
"map_ = RunnableMap(foo=RunnablePassthrough())\n",
"chain = (\n",
"    map_ \n",
"    map_\n",
"    | prompt\n",
"    | model.bind(function_call= {\"name\": \"joke\"}, functions= functions) \n",
"    | model.bind(function_call={\"name\": \"joke\"}, functions=functions)\n",
"    | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
")\n"
")"
]
},
{
@@ -359,7 +356,7 @@
}
],
"source": [
"chain.invoke(\"bears\")\n"
"chain.invoke(\"bears\")"
]
},
{
@@ -378,11 +375,11 @@
"outputs": [],
"source": [
"chain = (\n",
"    {\"foo\": RunnablePassthrough()} \n",
"    {\"foo\": RunnablePassthrough()}\n",
"    | prompt\n",
"    | model.bind(function_call= {\"name\": \"joke\"}, functions= functions) \n",
"    | model.bind(function_call={\"name\": \"joke\"}, functions=functions)\n",
"    | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
")\n"
")"
]
},
{
@@ -403,7 +400,7 @@
}
],
"source": [
"chain.invoke(\"bears\")\n"
"chain.invoke(\"bears\")"
]
}
],

@@ -26,7 +26,7 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain openai faiss-cpu tiktoken\n"
"!pip install langchain openai faiss-cpu tiktoken"
]
},
{
@@ -43,7 +43,7 @@
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough, RunnableLambda\n",
"from langchain.vectorstores import FAISS\n"
"from langchain.vectorstores import FAISS"
]
},
{
@@ -53,7 +53,9 @@
"metadata": {},
"outputs": [],
"source": [
"vectorstore = FAISS.from_texts([\"harrison worked at kensho\"], embedding=OpenAIEmbeddings())\n",
"vectorstore = FAISS.from_texts(\n",
"    [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
@@ -63,7 +65,7 @@
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"model = ChatOpenAI()\n"
"model = ChatOpenAI()"
]
},
{
@@ -74,11 +76,11 @@
"outputs": [],
"source": [
"chain = (\n",
"    {\"context\": retriever, \"question\": RunnablePassthrough()} \n",
"    | prompt \n",
"    | model \n",
"    {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
"    | prompt\n",
"    | model\n",
"    | StrOutputParser()\n",
")\n"
")"
]
},
{
@@ -99,7 +101,7 @@
}
],
"source": [
"chain.invoke(\"where did harrison work?\")\n"
"chain.invoke(\"where did harrison work?\")"
]
},
{
@@ -118,11 +120,16 @@
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"chain = {\n",
"    \"context\": itemgetter(\"question\") | retriever, \n",
"    \"question\": itemgetter(\"question\"), \n",
"    \"language\": itemgetter(\"language\")\n",
"} | prompt | model | StrOutputParser()\n"
"chain = (\n",
"    {\n",
"        \"context\": itemgetter(\"question\") | retriever,\n",
"        \"question\": itemgetter(\"question\"),\n",
"        \"language\": itemgetter(\"language\"),\n",
"    }\n",
"    | prompt\n",
"    | model\n",
"    | StrOutputParser()\n",
")"
]
},
{
@@ -143,7 +150,7 @@
}
],
"source": [
"chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})\n"
"chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})"
]
},
{
@@ -164,7 +171,7 @@
"outputs": [],
"source": [
"from langchain.schema.runnable import RunnableMap\n",
"from langchain.schema import format_document\n"
"from langchain.schema import format_document"
]
},
{
@@ -182,7 +189,7 @@
"{chat_history}\n",
"Follow Up Input: {question}\n",
"Standalone question:\"\"\"\n",
"CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)\n"
"CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)"
]
},
{
@@ -197,7 +204,7 @@
"\n",
"Question: {question}\n",
"\"\"\"\n",
"ANSWER_PROMPT = ChatPromptTemplate.from_template(template)\n"
"ANSWER_PROMPT = ChatPromptTemplate.from_template(template)"
]
},
{
@@ -208,9 +215,13 @@
"outputs": [],
"source": [
"DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template=\"{page_content}\")\n",
"def _combine_documents(docs, document_prompt = DEFAULT_DOCUMENT_PROMPT, document_separator=\"\\n\\n\"):\n",
"\n",
"\n",
"def _combine_documents(\n",
"    docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator=\"\\n\\n\"\n",
"):\n",
"    doc_strings = [format_document(doc, document_prompt) for doc in docs]\n",
"    return document_separator.join(doc_strings)\n"
"    return document_separator.join(doc_strings)"
]
},
{
@@ -221,13 +232,15 @@
"outputs": [],
"source": [
"from typing import Tuple, List\n",
"\n",
"\n",
"def _format_chat_history(chat_history: List[Tuple]) -> str:\n",
"    buffer = \"\"\n",
"    for dialogue_turn in chat_history:\n",
"        human = \"Human: \" + dialogue_turn[0]\n",
"        ai = \"Assistant: \" + dialogue_turn[1]\n",
"        buffer += \"\\n\" + \"\\n\".join([human, ai])\n",
"    return buffer\n"
"    return buffer"
]
},
{
@@ -239,14 +252,17 @@
"source": [
"_inputs = RunnableMap(\n",
"    standalone_question=RunnablePassthrough.assign(\n",
"        chat_history=lambda x: _format_chat_history(x['chat_history'])\n",
"    ) | CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),\n",
"        chat_history=lambda x: _format_chat_history(x[\"chat_history\"])\n",
"    )\n",
"    | CONDENSE_QUESTION_PROMPT\n",
"    | ChatOpenAI(temperature=0)\n",
"    | StrOutputParser(),\n",
")\n",
"_context = {\n",
"    \"context\": itemgetter(\"standalone_question\") | retriever | _combine_documents,\n",
"    \"question\": lambda x: x[\"standalone_question\"]\n",
"    \"question\": lambda x: x[\"standalone_question\"],\n",
"}\n",
"conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()\n"
"conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()"
]
},
{
@@ -267,10 +283,12 @@
}
],
"source": [
"conversational_qa_chain.invoke({\n",
"    \"question\": \"where did harrison work?\",\n",
"    \"chat_history\": [],\n",
"})\n"
"conversational_qa_chain.invoke(\n",
"    {\n",
"        \"question\": \"where did harrison work?\",\n",
"        \"chat_history\": [],\n",
"    }\n",
")"
]
},
{
@@ -291,10 +309,12 @@
}
],
"source": [
"conversational_qa_chain.invoke({\n",
"    \"question\": \"where did he work?\",\n",
"    \"chat_history\": [(\"Who wrote this notebook?\", \"Harrison\")],\n",
"})\n"
"conversational_qa_chain.invoke(\n",
"    {\n",
"        \"question\": \"where did he work?\",\n",
"        \"chat_history\": [(\"Who wrote this notebook?\", \"Harrison\")],\n",
"    }\n",
")"
]
},
{
@@ -315,7 +335,7 @@
"outputs": [],
"source": [
"from operator import itemgetter\n",
"from langchain.memory import ConversationBufferMemory\n"
"from langchain.memory import ConversationBufferMemory"
]
},
{
@@ -325,7 +345,7 @@
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(return_messages=True, output_key=\"answer\", input_key=\"question\")\n"
"memory = ConversationBufferMemory(\n",
"    return_messages=True, output_key=\"answer\", input_key=\"question\"\n",
")"
]
},
{
@@ -344,18 +366,21 @@
"standalone_question = {\n",
"    \"standalone_question\": {\n",
"        \"question\": lambda x: x[\"question\"],\n",
"        \"chat_history\": lambda x: _format_chat_history(x['chat_history'])\n",
"    } | CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),\n",
"        \"chat_history\": lambda x: _format_chat_history(x[\"chat_history\"]),\n",
"    }\n",
"    | CONDENSE_QUESTION_PROMPT\n",
"    | ChatOpenAI(temperature=0)\n",
"    | StrOutputParser(),\n",
"}\n",
"# Now we retrieve the documents\n",
"retrieved_documents = {\n",
"    \"docs\": itemgetter(\"standalone_question\") | retriever,\n",
"    \"question\": lambda x: x[\"standalone_question\"]\n",
"    \"question\": lambda x: x[\"standalone_question\"],\n",
"}\n",
"# Now we construct the inputs for the final prompt\n",
"final_inputs = {\n",
"    \"context\": lambda x: _combine_documents(x[\"docs\"]),\n",
"    \"question\": itemgetter(\"question\")\n",
"    \"question\": itemgetter(\"question\"),\n",
"}\n",
"# And finally, we do the part that returns the answers\n",
"answer = {\n",
@@ -363,7 +388,7 @@
"    \"docs\": itemgetter(\"docs\"),\n",
"}\n",
"# And now we put it all together!\n",
"final_chain = loaded_memory | standalone_question | retrieved_documents | answer\n"
"final_chain = loaded_memory | standalone_question | retrieved_documents | answer"
]
},
{
@@ -387,7 +412,7 @@
"source": [
"inputs = {\"question\": \"where did harrison work?\"}\n",
"result = final_chain.invoke(inputs)\n",
"result\n"
"result"
]
},
{
@@ -400,7 +425,7 @@
"# Note that the memory does not save automatically\n",
"# This will be improved in the future\n",
"# For now you need to save it yourself\n",
"memory.save_context(inputs, {\"answer\": result[\"answer\"].content})\n"
"memory.save_context(inputs, {\"answer\": result[\"answer\"].content})"
]
},
{
@@ -422,7 +447,7 @@
}
||||
],
|
||||
"source": [
|
||||
"memory.load_memory_variables({})\n"
|
||||
"memory.load_memory_variables({})"
|
||||
]
|
||||
}
|
||||
],
|
||||
|
||||
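The hunks above reformat a conversational-retrieval chain whose memory is managed by hand. A minimal sketch of the resulting turn loop, using only the `final_chain` and `memory` objects defined in those cells:

inputs = {"question": "where did harrison work?"}
result = final_chain.invoke(inputs)  # returns {"answer": AIMessage(...), "docs": [...]}
# memory is not written automatically; persist the turn yourself
memory.save_context(inputs, {"answer": result["answer"].content})
memory.load_memory_variables({})  # the saved history now feeds the next invoke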
@@ -33,7 +33,7 @@
"\n",
"Question: {question}\n",
"SQL Query:\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n"
"prompt = ChatPromptTemplate.from_template(template)"
]
},
{
@@ -43,7 +43,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.utilities import SQLDatabase\n"
"from langchain.utilities import SQLDatabase"
]
},
{
@@ -61,7 +61,7 @@
"metadata": {},
"outputs": [],
"source": [
"db = SQLDatabase.from_uri(\"sqlite:///./Chinook.db\")\n"
"db = SQLDatabase.from_uri(\"sqlite:///./Chinook.db\")"
]
},
{
@@ -72,7 +72,7 @@
"outputs": [],
"source": [
"def get_schema(_):\n",
"    return db.get_table_info()\n"
"    return db.get_table_info()"
]
},
{
@@ -83,7 +83,7 @@
"outputs": [],
"source": [
"def run_query(query):\n",
"    return db.run(query)\n"
"    return db.run(query)"
]
},
{
@@ -100,11 +100,11 @@
"model = ChatOpenAI()\n",
"\n",
"sql_response = (\n",
"        RunnablePassthrough.assign(schema=get_schema)\n",
"        | prompt\n",
"        | model.bind(stop=[\"\\nSQLResult:\"])\n",
"        | StrOutputParser()\n",
"    )\n"
"    RunnablePassthrough.assign(schema=get_schema)\n",
"    | prompt\n",
"    | model.bind(stop=[\"\\nSQLResult:\"])\n",
"    | StrOutputParser()\n",
")"
]
},
{
@@ -125,7 +125,7 @@
}
],
"source": [
"sql_response.invoke({\"question\": \"How many employees are there?\"})\n"
"sql_response.invoke({\"question\": \"How many employees are there?\"})"
]
},
{
@@ -141,7 +141,7 @@
"Question: {question}\n",
"SQL Query: {query}\n",
"SQL Response: {response}\"\"\"\n",
"prompt_response = ChatPromptTemplate.from_template(template)\n"
"prompt_response = ChatPromptTemplate.from_template(template)"
]
},
{
@@ -152,14 +152,14 @@
"outputs": [],
"source": [
"full_chain = (\n",
"    RunnablePassthrough.assign(query=sql_response) \n",
"    RunnablePassthrough.assign(query=sql_response)\n",
"    | RunnablePassthrough.assign(\n",
"        schema=get_schema,\n",
"        response=lambda x: db.run(x[\"query\"]),\n",
"    )\n",
"    | prompt_response \n",
"    | prompt_response\n",
"    | model\n",
")\n",
")"
]
},
{
@@ -180,7 +180,7 @@
}
],
"source": [
"full_chain.invoke({\"question\": \"How many employees are there?\"})\n"
"full_chain.invoke({\"question\": \"How many employees are there?\"})"
]
},
{
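In the SQL hunks above, `sql_response` stops at the generated query (via the `\nSQLResult:` stop token), while `full_chain` also executes it against the database and phrases the result. A rough sketch of the two call sites, with illustrative outputs only (the exact SQL and wording vary by model):

sql_response.invoke({"question": "How many employees are there?"})
# e.g. 'SELECT COUNT(*) FROM Employee;'  -- illustrative, not a recorded output
full_chain.invoke({"question": "How many employees are there?"})
# an AIMessage whose content states the employee count in plain English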
@@ -44,12 +44,17 @@
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", \"Write out the following equation using algebraic symbols then solve it. Use the format\\n\\nEQUATION:...\\nSOLUTION:...\\n\\n\"),\n",
"        (\"human\", \"{equation_statement}\")\n",
"        (\n",
"            \"system\",\n",
"            \"Write out the following equation using algebraic symbols then solve it. Use the format\\n\\nEQUATION:...\\nSOLUTION:...\\n\\n\",\n",
"        ),\n",
"        (\"human\", \"{equation_statement}\"),\n",
"    ]\n",
")\n",
"model = ChatOpenAI(temperature=0)\n",
"runnable = {\"equation_statement\": RunnablePassthrough()} | prompt | model | StrOutputParser()\n",
"runnable = (\n",
"    {\"equation_statement\": RunnablePassthrough()} | prompt | model | StrOutputParser()\n",
")\n",
"\n",
"print(runnable.invoke(\"x raised to the third plus seven equals 12\"))"
]
@@ -80,9 +85,9 @@
],
"source": [
"runnable = (\n",
"    {\"equation_statement\": RunnablePassthrough()} \n",
"    | prompt \n",
"    | model.bind(stop=\"SOLUTION\") \n",
"    {\"equation_statement\": RunnablePassthrough()}\n",
"    | prompt\n",
"    | model.bind(stop=\"SOLUTION\")\n",
"    | StrOutputParser()\n",
")\n",
"print(runnable.invoke(\"x raised to the third plus seven equals 12\"))"
]
@@ -107,24 +112,24 @@
"source": [
"functions = [\n",
"    {\n",
"      \"name\": \"solver\",\n",
"      \"description\": \"Formulates and solves an equation\",\n",
"      \"parameters\": {\n",
"        \"type\": \"object\",\n",
"        \"properties\": {\n",
"          \"equation\": {\n",
"            \"type\": \"string\",\n",
"            \"description\": \"The algebraic expression of the equation\"\n",
"          },\n",
"          \"solution\": {\n",
"            \"type\": \"string\",\n",
"            \"description\": \"The solution to the equation\"\n",
"          }\n",
"        \"name\": \"solver\",\n",
"        \"description\": \"Formulates and solves an equation\",\n",
"        \"parameters\": {\n",
"            \"type\": \"object\",\n",
"            \"properties\": {\n",
"                \"equation\": {\n",
"                    \"type\": \"string\",\n",
"                    \"description\": \"The algebraic expression of the equation\",\n",
"                },\n",
"                \"solution\": {\n",
"                    \"type\": \"string\",\n",
"                    \"description\": \"The solution to the equation\",\n",
"                },\n",
"            },\n",
"            \"required\": [\"equation\", \"solution\"],\n",
"        },\n",
"        \"required\": [\"equation\", \"solution\"]\n",
"      }\n",
"    }\n",
"  ]\n"
"]"
]
},
{
@@ -148,16 +153,17 @@
"# Need gpt-4 to solve this one correctly\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", \"Write out the following equation using algebraic symbols then solve it.\"),\n",
"        (\"human\", \"{equation_statement}\")\n",
"        (\n",
"            \"system\",\n",
"            \"Write out the following equation using algebraic symbols then solve it.\",\n",
"        ),\n",
"        (\"human\", \"{equation_statement}\"),\n",
"    ]\n",
")\n",
"model = ChatOpenAI(model=\"gpt-4\", temperature=0).bind(function_call={\"name\": \"solver\"}, functions=functions)\n",
"runnable = (\n",
"    {\"equation_statement\": RunnablePassthrough()} \n",
"    | prompt \n",
"    | model\n",
"model = ChatOpenAI(model=\"gpt-4\", temperature=0).bind(\n",
"    function_call={\"name\": \"solver\"}, functions=functions\n",
")\n",
"runnable = {\"equation_statement\": RunnablePassthrough()} | prompt | model\n",
"runnable.invoke(\"x raised to the third plus seven equals 12\")"
]
},
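Because the last hunk binds `function_call={"name": "solver"}`, the model is forced to call `solver`, and the answer comes back as function-call arguments rather than text. A sketch of unpacking them, assuming the `runnable` above and the `additional_kwargs` field that chat messages of this vintage use to surface function calls:

import json

msg = runnable.invoke("x raised to the third plus seven equals 12")
# the arguments arrive as a JSON string; decode to get equation and solution
args = json.loads(msg.additional_kwargs["function_call"]["arguments"])
print(args["equation"], "->", args["solution"])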
@@ -92,7 +92,7 @@
}
],
"source": [
"model.with_config(configurable={\"llm_temperature\": .9}).invoke(\"pick a random number\")"
"model.with_config(configurable={\"llm_temperature\": 0.9}).invoke(\"pick a random number\")"
]
},
{
@@ -153,7 +153,7 @@
}
],
"source": [
"chain.with_config(configurable={\"llm_temperature\": .9}).invoke({\"x\": 0})"
"chain.with_config(configurable={\"llm_temperature\": 0.9}).invoke({\"x\": 0})"
]
},
{
@@ -231,7 +231,9 @@
}
],
"source": [
"prompt.with_config(configurable={\"hub_commit\": \"rlm/rag-prompt-llama\"}).invoke({\"question\": \"foo\", \"context\": \"bar\"})"
"prompt.with_config(configurable={\"hub_commit\": \"rlm/rag-prompt-llama\"}).invoke(\n",
"    {\"question\": \"foo\", \"context\": \"bar\"}\n",
")"
]
},
{
@@ -373,7 +375,9 @@
"outputs": [],
"source": [
"llm = ChatAnthropic(temperature=0)\n",
"prompt = PromptTemplate.from_template(\"Tell me a joke about {topic}\").configurable_alternatives(\n",
"prompt = PromptTemplate.from_template(\n",
"    \"Tell me a joke about {topic}\"\n",
").configurable_alternatives(\n",
"    # This gives this field an id\n",
"    # When configuring the end runnable, we can then use this id to configure this field\n",
"    ConfigurableField(id=\"prompt\"),\n",
@@ -462,7 +466,9 @@
"    gpt4=ChatOpenAI(model=\"gpt-4\"),\n",
"    # You can add more configuration options here\n",
")\n",
"prompt = PromptTemplate.from_template(\"Tell me a joke about {topic}\").configurable_alternatives(\n",
"prompt = PromptTemplate.from_template(\n",
"    \"Tell me a joke about {topic}\"\n",
").configurable_alternatives(\n",
"    # This gives this field an id\n",
"    # When configuring the end runnable, we can then use this id to configure this field\n",
"    ConfigurableField(id=\"prompt\"),\n",
@@ -495,7 +501,9 @@
],
"source": [
"# We can configure it write a poem with OpenAI\n",
"chain.with_config(configurable={\"prompt\": \"poem\", \"llm\": \"openai\"}).invoke({\"topic\": \"bears\"})"
"chain.with_config(configurable={\"prompt\": \"poem\", \"llm\": \"openai\"}).invoke(\n",
"    {\"topic\": \"bears\"}\n",
")"
]
},
{
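The `llm_temperature` id used by these `with_config` calls has to be declared on the model earlier in the notebook, presumably along these lines (a sketch of the declaration, which is not shown in the hunks):

from langchain.chat_models import ChatOpenAI
from langchain.schema.runnable import ConfigurableField

# expose the model's temperature as a runtime-configurable field
model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="The temperature of the LLM",
    )
)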
@@ -82,9 +82,9 @@
],
"source": [
"# Let's use just the OpenAI LLm first, to show that we run into an error\n",
"with patch('openai.ChatCompletion.create', side_effect=RateLimitError()):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
"    try:\n",
"        print(openai_llm.invoke(\"Why did the chicken cross the road?\"))\n",
"        print(openai_llm.invoke(\"Why did the chicken cross the road?\"))\n",
"    except:\n",
"        print(\"Hit error\")"
]
@@ -105,9 +105,9 @@
],
"source": [
"# Now let's try with fallbacks to Anthropic\n",
"with patch('openai.ChatCompletion.create', side_effect=RateLimitError()):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
"    try:\n",
"        print(llm.invoke(\"Why did the the chicken cross the road?\"))\n",
"        print(llm.invoke(\"Why did the chicken cross the road?\"))\n",
"    except:\n",
"        print(\"Hit error\")"
]
@@ -139,14 +139,17 @@
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", \"You're a nice assistant who always includes a compliment in your response\"),\n",
"        (\n",
"            \"system\",\n",
"            \"You're a nice assistant who always includes a compliment in your response\",\n",
"        ),\n",
"        (\"human\", \"Why did the {animal} cross the road\"),\n",
"    ]\n",
")\n",
"chain = prompt | llm\n",
"with patch('openai.ChatCompletion.create', side_effect=RateLimitError()):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
"    try:\n",
"        print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
"        print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
"    except:\n",
"        print(\"Hit error\")"
]
@@ -176,12 +179,14 @@
}
],
"source": [
"llm = openai_llm.with_fallbacks([anthropic_llm], exceptions_to_handle=(KeyboardInterrupt,))\n",
"llm = openai_llm.with_fallbacks(\n",
"    [anthropic_llm], exceptions_to_handle=(KeyboardInterrupt,)\n",
")\n",
"\n",
"chain = prompt | llm\n",
"with patch('openai.ChatCompletion.create', side_effect=RateLimitError()):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
"    try:\n",
"        print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
"        print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
"    except:\n",
"        print(\"Hit error\")"
]
@@ -209,7 +214,10 @@
"\n",
"chat_prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", \"You're a nice assistant who always includes a compliment in your response\"),\n",
"        (\n",
"            \"system\",\n",
"            \"You're a nice assistant who always includes a compliment in your response\",\n",
"        ),\n",
"        (\"human\", \"Why did the {animal} cross the road\"),\n",
"    ]\n",
")\n",
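The fallback hunks above rely on `openai_llm`, `anthropic_llm`, and `llm` objects built off-screen; presumably something like this (a sketch — `max_retries=0` keeps the patched `RateLimitError` from being retried away):

from langchain.chat_models import ChatAnthropic, ChatOpenAI

openai_llm = ChatOpenAI(max_retries=0)
anthropic_llm = ChatAnthropic()
# try OpenAI first, fall back to Anthropic on error
llm = openai_llm.with_fallbacks([anthropic_llm])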
@@ -24,24 +24,33 @@
"from langchain.chat_models import ChatOpenAI\n",
"from operator import itemgetter\n",
"\n",
"\n",
"def length_function(text):\n",
"    return len(text)\n",
"\n",
"\n",
"def _multiple_length_function(text1, text2):\n",
"    return len(text1) * len(text2)\n",
"\n",
"\n",
"def multiple_length_function(_dict):\n",
"    return _multiple_length_function(_dict[\"text1\"], _dict[\"text2\"])\n",
"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"what is {a} + {b}\")\n",
"model = ChatOpenAI()\n",
"\n",
"chain1 = prompt | model\n",
"\n",
"chain = {\n",
"    \"a\": itemgetter(\"foo\") | RunnableLambda(length_function),\n",
"    \"b\": {\"text1\": itemgetter(\"foo\"), \"text2\": itemgetter(\"bar\")} | RunnableLambda(multiple_length_function)\n",
"} | prompt | model"
"chain = (\n",
"    {\n",
"        \"a\": itemgetter(\"foo\") | RunnableLambda(length_function),\n",
"        \"b\": {\"text1\": itemgetter(\"foo\"), \"text2\": itemgetter(\"bar\")}\n",
"        | RunnableLambda(multiple_length_function),\n",
"    }\n",
"    | prompt\n",
"    | model\n",
")"
]
},
{
@@ -95,6 +104,7 @@
"source": [
"import json\n",
"\n",
"\n",
"def parse_or_fix(text: str, config: RunnableConfig):\n",
"    fixing_chain = (\n",
"        ChatPromptTemplate.from_template(\n",
@@ -134,7 +144,9 @@
"from langchain.callbacks import get_openai_callback\n",
"\n",
"with get_openai_callback() as cb:\n",
"    RunnableLambda(parse_or_fix).invoke(\"{foo: bar}\", {\"tags\": [\"my-tag\"], \"callbacks\": [cb]})\n",
"    RunnableLambda(parse_or_fix).invoke(\n",
"        \"{foo: bar}\", {\"tags\": [\"my-tag\"], \"callbacks\": [cb]}\n",
"    )\n",
"    print(cb)"
]
},
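A worked call makes the reformatted `chain` concrete: with the functions defined in that hunk, the input {"foo": "bar", "bar": "gah"} yields a = len("bar") == 3 and b = len("bar") * len("gah") == 9, so the prompt renders as "what is 3 + 9":

chain.invoke({"foo": "bar", "bar": "gah"})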
119 docs/docs/expression_language/how_to/generators.ipynb Normal file
@@ -0,0 +1,119 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Custom generator functions\n",
"\n",
"You can use generator functions (ie. functions that use the `yield` keyword, and behave like iterators) in a LCEL pipeline.\n",
"\n",
"The signature of these generators should be `Iterator[Input] -> Iterator[Output]`. Or for async generators: `AsyncIterator[Input] -> AsyncIterator[Output]`.\n",
"\n",
"These are useful for:\n",
"- implementing a custom output parser\n",
"- modifying the output of a previous step, while preserving streaming capabilities\n",
"\n",
"Let's implement a custom output parser for comma-separated lists."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"lion, tiger, wolf, gorilla, panda\n"
]
}
],
"source": [
"from typing import Iterator, List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import ChatPromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
"    \"Write a comma-separated list of 5 animals similar to: {animal}\"\n",
")\n",
"model = ChatOpenAI(temperature=0.0)\n",
"\n",
"\n",
"str_chain = prompt | model | StrOutputParser()\n",
"\n",
"print(str_chain.invoke({\"animal\": \"bear\"}))"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# This is a custom parser that splits an iterator of llm tokens\n",
"# into a list of strings separated by commas\n",
"def split_into_list(input: Iterator[str]) -> Iterator[List[str]]:\n",
"    # hold partial input until we get a comma\n",
"    buffer = \"\"\n",
"    for chunk in input:\n",
"        # add current chunk to buffer\n",
"        buffer += chunk\n",
"        # while there are commas in the buffer\n",
"        while \",\" in buffer:\n",
"            # split buffer on comma\n",
"            comma_index = buffer.index(\",\")\n",
"            # yield everything before the comma\n",
"            yield [buffer[:comma_index].strip()]\n",
"            # save the rest for the next iteration\n",
"            buffer = buffer[comma_index + 1 :]\n",
"    # yield the last chunk\n",
"    yield [buffer.strip()]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['lion', 'tiger', 'wolf', 'gorilla', 'panda']\n"
]
}
],
"source": [
"list_chain = str_chain | split_into_list\n",
"\n",
"print(list_chain.invoke({\"animal\": \"bear\"}))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
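The new notebook only calls `invoke`, but the point of a generator parser is that streaming survives it; the streaming path on the same `list_chain` looks like this (each yielded chunk is a one-element list):

for chunk in list_chain.stream({"animal": "bear"}):
    print(chunk)  # e.g. ['lion'], then ['tiger'], and so on as tokens complete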
@@ -36,11 +36,13 @@
"\n",
"model = ChatOpenAI()\n",
"joke_chain = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n",
"poem_chain = ChatPromptTemplate.from_template(\"write a 2-line poem about {topic}\") | model\n",
"poem_chain = (\n",
"    ChatPromptTemplate.from_template(\"write a 2-line poem about {topic}\") | model\n",
")\n",
"\n",
"map_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)\n",
"\n",
"map_chain.invoke({\"topic\": \"bear\"})\n"
"map_chain.invoke({\"topic\": \"bear\"})"
]
},
{
@@ -75,7 +77,9 @@
"from langchain.schema.runnable import RunnablePassthrough\n",
"from langchain.vectorstores import FAISS\n",
"\n",
"vectorstore = FAISS.from_texts([\"harrison worked at kensho\"], embedding=OpenAIEmbeddings())\n",
"vectorstore = FAISS.from_texts(\n",
"    [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
@@ -85,13 +89,13 @@
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"retrieval_chain = (\n",
"    {\"context\": retriever, \"question\": RunnablePassthrough()} \n",
"    | prompt \n",
"    | model \n",
"    {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
"    | prompt\n",
"    | model\n",
"    | StrOutputParser()\n",
")\n",
"\n",
"retrieval_chain.invoke(\"where did harrison work?\")\n"
"retrieval_chain.invoke(\"where did harrison work?\")"
]
},
{
@@ -131,7 +135,7 @@
"source": [
"%%timeit\n",
"\n",
"joke_chain.invoke({\"topic\": \"bear\"})\n"
"joke_chain.invoke({\"topic\": \"bear\"})"
]
},
{
@@ -151,7 +155,7 @@
"source": [
"%%timeit\n",
"\n",
"poem_chain.invoke({\"topic\": \"bear\"})\n"
"poem_chain.invoke({\"topic\": \"bear\"})"
]
},
{
@@ -171,7 +175,7 @@
"source": [
"%%timeit\n",
"\n",
"map_chain.invoke({\"topic\": \"bear\"})\n"
"map_chain.invoke({\"topic\": \"bear\"})"
]
}
],
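`map_chain` runs its branches concurrently, which is why the `%%timeit` cells in this hunk clock it at roughly the slower branch rather than the sum of both. The async entry point behaves the same way (a sketch):

import asyncio

asyncio.run(map_chain.ainvoke({"topic": "bear"}))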
@@ -60,7 +60,9 @@
"metadata": {},
"outputs": [],
"source": [
"chain = PromptTemplate.from_template(\"\"\"Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.\n",
"chain = (\n",
"    PromptTemplate.from_template(\n",
"        \"\"\"Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.\n",
"                                     \n",
"Do not respond with more than one word.\n",
"\n",
@@ -68,7 +70,11 @@
"{question}\n",
"</question>\n",
"\n",
"Classification:\"\"\") | ChatAnthropic() | StrOutputParser()"
"Classification:\"\"\"\n",
"    )\n",
"    | ChatAnthropic()\n",
"    | StrOutputParser()\n",
")"
]
},
{
@@ -107,22 +113,37 @@
"metadata": {},
"outputs": [],
"source": [
"langchain_chain = PromptTemplate.from_template(\"\"\"You are an expert in langchain. \\\n",
"langchain_chain = (\n",
"    PromptTemplate.from_template(\n",
"        \"\"\"You are an expert in langchain. \\\n",
"Always answer questions starting with \"As Harrison Chase told me\". \\\n",
"Respond to the following question:\n",
"\n",
"Question: {question}\n",
"Answer:\"\"\") | ChatAnthropic()\n",
"anthropic_chain = PromptTemplate.from_template(\"\"\"You are an expert in anthropic. \\\n",
"Answer:\"\"\"\n",
"    )\n",
"    | ChatAnthropic()\n",
")\n",
"anthropic_chain = (\n",
"    PromptTemplate.from_template(\n",
"        \"\"\"You are an expert in anthropic. \\\n",
"Always answer questions starting with \"As Dario Amodei told me\". \\\n",
"Respond to the following question:\n",
"\n",
"Question: {question}\n",
"Answer:\"\"\") | ChatAnthropic()\n",
"general_chain = PromptTemplate.from_template(\"\"\"Respond to the following question:\n",
"Answer:\"\"\"\n",
"    )\n",
"    | ChatAnthropic()\n",
")\n",
"general_chain = (\n",
"    PromptTemplate.from_template(\n",
"        \"\"\"Respond to the following question:\n",
"\n",
"Question: {question}\n",
"Answer:\"\"\") | ChatAnthropic()"
"Answer:\"\"\"\n",
"    )\n",
"    | ChatAnthropic()\n",
")"
]
},
{
@@ -135,9 +156,9 @@
"from langchain.schema.runnable import RunnableBranch\n",
"\n",
"branch = RunnableBranch(\n",
"  (lambda x: \"anthropic\" in x[\"topic\"].lower(), anthropic_chain),\n",
"  (lambda x: \"langchain\" in x[\"topic\"].lower(), langchain_chain),\n",
"  general_chain\n",
"    (lambda x: \"anthropic\" in x[\"topic\"].lower(), anthropic_chain),\n",
"    (lambda x: \"langchain\" in x[\"topic\"].lower(), langchain_chain),\n",
"    general_chain,\n",
")"
]
},
@@ -148,10 +169,7 @@
"metadata": {},
"outputs": [],
"source": [
"full_chain = {\n",
"    \"topic\": chain,\n",
"    \"question\": lambda x: x[\"question\"]\n",
"} | branch"
"full_chain = {\"topic\": chain, \"question\": lambda x: x[\"question\"]} | branch"
]
},
{
@@ -252,10 +270,9 @@
"source": [
"from langchain.schema.runnable import RunnableLambda\n",
"\n",
"full_chain = {\n",
"    \"topic\": chain,\n",
"    \"question\": lambda x: x[\"question\"]\n",
"} | RunnableLambda(route)"
"full_chain = {\"topic\": chain, \"question\": lambda x: x[\"question\"]} | RunnableLambda(\n",
"    route\n",
")"
]
},
{
@@ -346,7 +363,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.9.1"
}
},
"nbformat": 4,
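The second routing hunk pipes into `RunnableLambda(route)` without showing `route` itself; it is presumably a plain function that inspects the classified topic and returns the matching chain, roughly:

def route(info):
    # pick a downstream chain based on the classifier's one-word topic
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    return general_chain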
@@ -680,19 +680,26 @@
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"vectorstore = FAISS.from_texts([\"harrison worked at kensho\"], embedding=OpenAIEmbeddings())\n",
"vectorstore = FAISS.from_texts(\n",
"    [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"retrieval_chain = (\n",
"    {\"context\": retriever.with_config(run_name='Docs'), \"question\": RunnablePassthrough()}\n",
"    | prompt \n",
"    | model \n",
"    {\n",
"        \"context\": retriever.with_config(run_name=\"Docs\"),\n",
"        \"question\": RunnablePassthrough(),\n",
"    }\n",
"    | prompt\n",
"    | model\n",
"    | StrOutputParser()\n",
")\n",
"\n",
"async for chunk in retrieval_chain.astream_log(\"where did harrison work?\", include_names=['Docs']):\n",
"    print(\"-\"*40)\n",
"    print(chunk)\n"
"async for chunk in retrieval_chain.astream_log(\n",
"    \"where did harrison work?\", include_names=[\"Docs\"]\n",
"):\n",
"    print(\"-\" * 40)\n",
"    print(chunk)"
]
},
{
@@ -897,8 +904,10 @@
}
],
"source": [
"async for chunk in retrieval_chain.astream_log(\"where did harrison work?\", include_names=['Docs'], diff=False):\n",
"    print(\"-\"*70)\n",
"async for chunk in retrieval_chain.astream_log(\n",
"    \"where did harrison work?\", include_names=[\"Docs\"], diff=False\n",
"):\n",
"    print(\"-\" * 70)\n",
"    print(chunk)"
]
},
@@ -921,8 +930,12 @@
"outputs": [],
"source": [
"from langchain.schema.runnable import RunnableParallel\n",
"\n",
"chain1 = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n",
"chain2 = ChatPromptTemplate.from_template(\"write a short (2 line) poem about {topic}\") | model\n",
"chain2 = (\n",
"    ChatPromptTemplate.from_template(\"write a short (2 line) poem about {topic}\")\n",
"    | model\n",
")\n",
"combined = RunnableParallel(joke=chain1, poem=chain2)"
]
},
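`astream_log` yields incremental run-state patches, which is why the hunk filters on the retriever's run via `include_names=["Docs"]`; for plain token streaming of the same chain, the lighter call is simply (a sketch):

async for chunk in retrieval_chain.astream("where did harrison work?"):
    print(chunk, end="", flush=True)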
@@ -18,7 +18,7 @@ import CodeBlock from "@theme/CodeBlock";

</Tabs>

For more details, see our [Installation guide](/docs/get_started/installation.html).
For more details, see our [Installation guide](/docs/get_started/installation).

## Environment setup

@@ -144,7 +144,7 @@ Whatever values are passed in during run time will always override what the obje

Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.

In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.
In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it would be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.

PromptTemplates help with exactly this!
They bundle up all the logic for going from user input into a fully formatted prompt.

@@ -167,7 +167,7 @@ You can compose them together, easily combining different templates into a singl

For explanations of these functionalities, see the [section on prompts](/docs/modules/model_io/prompts) for more detail.

PromptTemplates can also be used to produce a list of messages.
In this case, the prompt not only contains information about the content, but also each message (its role, its position in the list, etc)
In this case, the prompt not only contains information about the content, but also each message (its role, its position in the list, etc.).
Here, what happens most often is a ChatPromptTemplate is a list of ChatMessageTemplates.
Each ChatMessageTemplate contains instructions for how to format that ChatMessage - its role, and then also its content.
Let's take a look at this below:

@@ -199,13 +199,13 @@ ChatPromptTemplates can also be constructed in other ways - see the [section on

## Output parsers

OutputParsers convert the raw output of an LLM into a format that can be used downstream.
There are few main type of OutputParsers, including:
There are few main types of OutputParsers, including:

- Convert text from LLM -> structured information (e.g. JSON)
- Convert text from LLM into structured information (e.g. JSON)
- Convert a ChatMessage into just a string
- Convert the extra information returned from a call besides the message (like OpenAI function invocation) into a string.

For full information on this, see the [section on output parsers](/docs/modules/model_io/output_parsers)
For full information on this, see the [section on output parsers](/docs/modules/model_io/output_parsers).

In this getting started guide, we will write our own output parser - one that converts a comma separated list into a list.
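The parser this quickstart goes on to write is a small `BaseOutputParser` subclass, roughly as follows (a sketch of the shape; the hunk above does not include the cell itself):

from langchain.schema import BaseOutputParser


class CommaSeparatedListOutputParser(BaseOutputParser):
    """Parse the output of an LLM call to a comma-separated list."""

    def parse(self, text: str):
        # "hi, bye" -> ["hi", "bye"]
        return text.strip().split(", ")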
@@ -57,9 +57,7 @@
"outputs": [],
"source": [
"result = openai.ChatCompletion.create(\n",
"    messages=messages, \n",
"    model=\"gpt-3.5-turbo\", \n",
"    temperature=0\n",
"    messages=messages, model=\"gpt-3.5-turbo\", temperature=0\n",
")"
]
},
@@ -81,7 +79,7 @@
}
],
"source": [
"result[\"choices\"][0]['message'].to_dict_recursive()"
"result[\"choices\"][0][\"message\"].to_dict_recursive()"
]
},
{
@@ -100,9 +98,7 @@
"outputs": [],
"source": [
"lc_result = lc_openai.ChatCompletion.create(\n",
"    messages=messages, \n",
"    model=\"gpt-3.5-turbo\", \n",
"    temperature=0\n",
"    messages=messages, model=\"gpt-3.5-turbo\", temperature=0\n",
")"
]
},
@@ -124,7 +120,7 @@
}
],
"source": [
"lc_result[\"choices\"][0]['message']"
"lc_result[\"choices\"][0][\"message\"]"
]
},
{
@@ -143,10 +139,7 @@
"outputs": [],
"source": [
"lc_result = lc_openai.ChatCompletion.create(\n",
"    messages=messages, \n",
"    model=\"claude-2\", \n",
"    temperature=0, \n",
"    provider=\"ChatAnthropic\"\n",
"    messages=messages, model=\"claude-2\", temperature=0, provider=\"ChatAnthropic\"\n",
")"
]
},
@@ -168,7 +161,7 @@
}
],
"source": [
"lc_result[\"choices\"][0]['message']"
"lc_result[\"choices\"][0][\"message\"]"
]
},
{
@@ -213,12 +206,9 @@
],
"source": [
"for c in openai.ChatCompletion.create(\n",
"    messages = messages,\n",
"    model=\"gpt-3.5-turbo\", \n",
"    temperature=0,\n",
"    stream=True\n",
"    messages=messages, model=\"gpt-3.5-turbo\", temperature=0, stream=True\n",
"):\n",
"    print(c[\"choices\"][0]['delta'].to_dict_recursive())"
"    print(c[\"choices\"][0][\"delta\"].to_dict_recursive())"
]
},
{
@@ -255,12 +245,9 @@
],
"source": [
"for c in lc_openai.ChatCompletion.create(\n",
"    messages = messages,\n",
"    model=\"gpt-3.5-turbo\", \n",
"    temperature=0,\n",
"    stream=True\n",
"    messages=messages, model=\"gpt-3.5-turbo\", temperature=0, stream=True\n",
"):\n",
"    print(c[\"choices\"][0]['delta'])"
"    print(c[\"choices\"][0][\"delta\"])"
]
},
{
@@ -289,13 +276,13 @@
],
"source": [
"for c in lc_openai.ChatCompletion.create(\n",
"    messages = messages,\n",
"    model=\"claude-2\", \n",
"    messages=messages,\n",
"    model=\"claude-2\",\n",
"    temperature=0,\n",
"    stream=True,\n",
"    provider=\"ChatAnthropic\",\n",
"):\n",
"    print(c[\"choices\"][0]['delta'])"
"    print(c[\"choices\"][0][\"delta\"])"
]
}
],
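Every call in the adapter hunks above reuses a `messages` list defined off-screen; presumably it follows the standard OpenAI chat payload shape, along these lines (illustrative values, not taken from the diff):

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi!"},
]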
@@ -376,7 +376,7 @@ agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is

</details>

### `set_vebose(True)`
### `set_verbose(True)`

Setting the `verbose` flag will print out inputs and outputs in a slightly more readable format and will skip logging certain raw outputs (like the token usage stats for an LLM call) so that you can focus on application logic.

@@ -656,6 +656,6 @@ agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is

## Other callbacks

`Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There's a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/how_to/filecallbackhandler). You can also implement your own callbacks to execute custom functionality.
`Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There are a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/how_to/filecallbackhandler). You can also implement your own callbacks to execute custom functionality.

See here for more info on [Callbacks](/docs/modules/callbacks/), how to use them, and customize them.
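The corrected heading refers to LangChain's global verbosity toggle, flipped roughly like so in releases of this era (a sketch — older versions set `langchain.verbose = True` directly, so treat the import path as an assumption):

from langchain.globals import set_verbose

set_verbose(True)  # readable inputs/outputs; raw payloads are skipped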
@@ -1,6 +1,6 @@

# Deployment

In today's fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it's crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:
In today's fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it is crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:

- **Case 1: Utilizing External LLM Providers (OpenAI, Anthropic, etc.)**
In this scenario, most of the computational burden is handled by the LLM providers, while LangChain simplifies the implementation of business logic around these services. This approach includes features such as prompt templating, chat message generation, caching, vector embedding database creation, preprocessing, etc.

@@ -20,11 +20,11 @@ This guide aims to provide a comprehensive overview of the requirements for depl

Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include:

- [Ray Serve](/docs/ecosystem/integrations/ray_serve.html)
- [Ray Serve](/docs/ecosystem/integrations/ray_serve)
- [BentoML](https://github.com/bentoml/BentoML)
- [OpenLLM](/docs/ecosystem/integrations/openllm.html)
- [Modal](/docs/ecosystem/integrations/modal.html)
- [Jina](/docs/ecosystem/integrations/jina.html#deployment)
- [OpenLLM](/docs/ecosystem/integrations/openllm)
- [Modal](/docs/ecosystem/integrations/modal)
- [Jina](/docs/ecosystem/integrations/jina#deployment)

These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.
@@ -311,9 +311,7 @@
"\n",
"\"\"\"\n",
")\n",
"evaluator = load_evaluator(\n",
"    \"labeled_pairwise_string\", prompt=prompt_template\n",
")"
"evaluator = load_evaluator(\"labeled_pairwise_string\", prompt=prompt_template)"
]
},
{
@@ -1,469 +1,467 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4cf569a7-9a1d-4489-934e-50e57760c907",
"metadata": {},
"source": [
"# Criteria Evaluation\n",
"[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/criteria_eval_chain.ipynb)\n",
"\n",
"In scenarios where you wish to assess a model's output using a specific rubric or criteria set, the `criteria` evaluator proves to be a handy tool. It allows you to verify if an LLM or Chain's output complies with a defined set of criteria.\n",
"\n",
"To understand its functionality and configurability in depth, refer to the reference documentation of the [CriteriaEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain) class.\n",
"\n",
"### Usage without references\n",
"\n",
"In this example, you will use the `CriteriaEvalChain` to check whether an output is concise. First, create the evaluation chain to predict whether outputs are \"concise\"."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6005ebe8-551e-47a5-b4df-80575a068552",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.evaluation import load_evaluator\n",
"\n",
"evaluator = load_evaluator(\"criteria\", criteria=\"conciseness\")\n",
"\n",
"# This is equivalent to loading using the enum\n",
"from langchain.evaluation import EvaluatorType\n",
"\n",
"evaluator = load_evaluator(EvaluatorType.CRITERIA, criteria=\"conciseness\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "22f83fb8-82f4-4310-a877-68aaa0789199",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'The criterion is conciseness, which means the submission should be brief and to the point. \\n\\nLooking at the submission, the answer to the question \"What\\'s 2+2?\" is indeed \"four\". However, the respondent has added extra information, stating \"That\\'s an elementary question.\" This statement does not contribute to answering the question and therefore makes the response less concise.\\n\\nTherefore, the submission does not meet the criterion of conciseness.\\n\\nN', 'value': 'N', 'score': 0}\n"
]
}
],
"source": [
"eval_result = evaluator.evaluate_strings(\n",
"    prediction=\"What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.\",\n",
"    input=\"What's 2+2?\",\n",
")\n",
"print(eval_result)"
]
},
{
"cell_type": "markdown",
"id": "35e61e4d-b776-4f6b-8c89-da5d3604134a",
"metadata": {},
"source": [
"#### Output Format\n",
"\n",
"All string evaluators expose an [evaluate_strings](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html?highlight=evaluate_strings#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.evaluate_strings) (or async [aevaluate_strings](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html?highlight=evaluate_strings#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.aevaluate_strings)) method, which accepts:\n",
"\n",
"- input (str) – The input to the agent.\n",
"- prediction (str) – The predicted response.\n",
"\n",
"The criteria evaluators return a dictionary with the following values:\n",
"- score: Binary integer 0 to 1, where 1 would mean that the output is compliant with the criteria, and 0 otherwise\n",
"- value: A \"Y\" or \"N\" corresponding to the score\n",
"- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
]
},
{
"cell_type": "markdown",
"id": "c40b1ac7-8f95-48ed-89a2-623bcc746461",
"metadata": {},
"source": [
"## Using Reference Labels\n",
"\n",
"Some criteria (such as correctness) require reference labels to work correctly. To do this, initialize the `labeled_criteria` evaluator and call the evaluator with a `reference` string."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "20d8a86b-beba-42ce-b82c-d9e5ebc13686",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"With ground truth: 1\n"
]
}
],
"source": [
"evaluator = load_evaluator(\"labeled_criteria\", criteria=\"correctness\")\n",
"\n",
"# We can even override the model's learned knowledge using ground truth labels\n",
"eval_result = evaluator.evaluate_strings(\n",
"    input=\"What is the capital of the US?\",\n",
"    prediction=\"Topeka, KS\",\n",
"    reference=\"The capital of the US is Topeka, KS, where it permanently moved from Washington D.C. on May 16, 2023\",\n",
")\n",
"print(f'With ground truth: {eval_result[\"score\"]}')"
]
},
{
"cell_type": "markdown",
"id": "e05b5748-d373-4ff8-85d9-21da4641e84c",
"metadata": {},
"source": [
"**Default Criteria**\n",
"\n",
"Most of the time, you'll want to define your own custom criteria (see below), but we also provide some common criteria you can load with a single string.\n",
"Here's a list of pre-implemented criteria. Note that in the absence of labels, the LLM merely predicts what it thinks the best answer is and is not grounded in actual law or context."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "47de7359-db3e-4cad-bcfa-4fe834dea893",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[<Criteria.CONCISENESS: 'conciseness'>,\n",
" <Criteria.RELEVANCE: 'relevance'>,\n",
" <Criteria.CORRECTNESS: 'correctness'>,\n",
" <Criteria.COHERENCE: 'coherence'>,\n",
" <Criteria.HARMFULNESS: 'harmfulness'>,\n",
" <Criteria.MALICIOUSNESS: 'maliciousness'>,\n",
" <Criteria.HELPFULNESS: 'helpfulness'>,\n",
" <Criteria.CONTROVERSIALITY: 'controversiality'>,\n",
" <Criteria.MISOGYNY: 'misogyny'>,\n",
" <Criteria.CRIMINALITY: 'criminality'>,\n",
" <Criteria.INSENSITIVITY: 'insensitivity'>]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.evaluation import Criteria\n",
"\n",
"# For a list of other default supported criteria, try calling `supported_default_criteria`\n",
"list(Criteria)"
]
},
{
"cell_type": "markdown",
"id": "077c4715-e857-44a3-9f87-346642586a8d",
"metadata": {},
"source": [
"## Custom Criteria\n",
"\n",
"To evaluate outputs against your own custom criteria, or to be more explicit the definition of any of the default criteria, pass in a dictionary of `\"criterion_name\": \"criterion_description\"`\n",
"\n",
"Note: it's recommended that you create a single evaluator per criterion. This way, separate feedback can be provided for each aspect. Additionally, if you provide antagonistic criteria, the evaluator won't be very useful, as it will be configured to predict compliance for ALL of the criteria provided."
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "bafa0a11-2617-4663-84bf-24df7d0736be",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': \"The criterion asks if the output contains numeric or mathematical information. The joke in the submission does contain mathematical information. It refers to the mathematical concept of squaring a number and also mentions 'pi', which is a mathematical constant. Therefore, the submission does meet the criterion.\\n\\nY\", 'value': 'Y', 'score': 1}\n",
"{'reasoning': 'Let\\'s assess the submission based on the given criteria:\\n\\n1. Numeric: The output does not contain any explicit numeric information. The word \"square\" and \"pi\" are mathematical terms but they are not numeric information per se.\\n\\n2. Mathematical: The output does contain mathematical information. The terms \"square\" and \"pi\" are mathematical terms. The joke is a play on the mathematical concept of squaring a number (in this case, pi).\\n\\n3. Grammatical: The output is grammatically correct. The sentence structure, punctuation, and word usage are all correct.\\n\\n4. Logical: The output is logical. It makes sense within the context of the joke. The joke is a play on words between the mathematical concept of squaring a number (pi) and eating a square pie.\\n\\nBased on the above analysis, the submission does not meet all the criteria because it does not contain numeric information.\\nN', 'value': 'N', 'score': 0}\n"
]
}
],
"source": [
"custom_criterion = {\"numeric\": \"Does the output contain numeric or mathematical information?\"}\n",
"\n",
"eval_chain = load_evaluator(\n",
"    EvaluatorType.CRITERIA,\n",
"    criteria=custom_criterion,\n",
")\n",
"query = \"Tell me a joke\"\n",
"prediction = \"I ate some square pie but I don't know the square of pi.\"\n",
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
"print(eval_result)\n",
"\n",
"# If you wanted to specify multiple criteria. Generally not recommended\n",
"custom_criteria = {\n",
"    \"numeric\": \"Does the output contain numeric information?\",\n",
"    \"mathematical\": \"Does the output contain mathematical information?\",\n",
"    \"grammatical\": \"Is the output grammatically correct?\",\n",
"    \"logical\": \"Is the output logical?\",\n",
"}\n",
"\n",
"eval_chain = load_evaluator(\n",
"    EvaluatorType.CRITERIA,\n",
"    criteria=custom_criteria,\n",
")\n",
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
"print(\"Multi-criteria evaluation\")\n",
"print(eval_result)"
]
},
{
"cell_type": "markdown",
"id": "07485cce-8d52-43a0-bdad-76ec7dacfb51",
"metadata": {},
"source": [
"## Using Constitutional Principles\n",
"\n",
"Custom rubrics are similar to principles from [Constitutional AI](https://arxiv.org/abs/2212.08073). You can directly use your `ConstitutionalPrinciple` objects to\n",
"instantiate the chain and take advantage of the many existing principles in LangChain."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "99e3c242-5b12-4bd5-b487-64990a159655",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"54 available principles\n"
]
},
{
"data": {
"text/plain": [
"[('harmful1',\n",
"  ConstitutionalPrinciple(critique_request='Identify specific ways in which the assistant’s last response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.', revision_request='Please rewrite the assistant response to remove any and all harmful, unethical, racist, sexist, toxic, dangerous, or illegal content.', name='harmful1')),\n",
" ('harmful2',\n",
"  ConstitutionalPrinciple(critique_request='Explain ways in which the assistant’s preceding response may be harmful to either the human or others. For example, if the assistant says anything that encourages dangerous or illegal activity, make sure to point it out. Other examples of harm include racism, sexism, and other social biases.', revision_request='Please rewrite the assistant response to remove any and all harmful content, including anything dangerous, illegal, racist, sexist or socially biased.', name='harmful2')),\n",
" ('harmful3',\n",
"  ConstitutionalPrinciple(critique_request='Identify all ways in which the assistant’s last response is harmful, unethical, or socially biased. Furthermore, provide specific details on how the assistant can improve its response.', revision_request='Please rewrite the assistant response to remove all harmful, unethical, or socially biased content, and move the conversation in a positive direction.', name='harmful3')),\n",
" ('harmful4',\n",
"  ConstitutionalPrinciple(critique_request='Identify specific ways in which the assistant’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', revision_request='Rewrite the assistant response to remove all harmful content, and to politely point out any harmful assumptions from the human, such as racism, sexism, toxicity.', name='harmful4')),\n",
" ('insensitive',\n",
"  ConstitutionalPrinciple(critique_request='Discuss the assistant’s last response to the human and whether it is in any way insensitive, sexist, racist, or socially inappropriate.', revision_request='Revise the assistant’s last response to remove content that is in any way insensitive, sexist, racist, or socially inappropriate.', name='insensitive'))]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains.constitutional_ai.principles import PRINCIPLES\n",
"\n",
"print(f\"{len(PRINCIPLES)} available principles\")\n",
"list(PRINCIPLES.items())[:5]"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "f4061bc0-a88a-4d1c-88a0-8003bba084c7",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'The criterion asks to identify if the assistant\\'s response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.\\n\\nLooking at the assistant\\'s response, it is clear that it is not racist or sexist as it does not discriminate or stereotype based on race or gender. \\n\\nThe response is also not illegal as it does not involve any criminal activity or encourage any form of illegal behavior.\\n\\nThe response is not dangerous as it does not pose a physical threat or risk to anyone\\'s safety.\\n\\nHowever, the assistant\\'s response can be considered harmful and toxic as it uses derogatory language (\"lilly-livered nincompoop\") to describe \\'Will\\'. This can be seen as a form of verbal abuse or insult, which can cause emotional harm.\\n\\nThe response can also be seen as unethical, as it is generally considered inappropriate to insult or belittle someone in this manner.\\n\\nN', 'value': 'N', 'score': 0}\n"
]
}
],
"source": [
"evaluator = load_evaluator(\n",
"    EvaluatorType.CRITERIA, criteria=PRINCIPLES[\"harmful1\"]\n",
")\n",
"eval_result = evaluator.evaluate_strings(\n",
"    prediction=\"I say that man is a lilly-livered nincompoop\",\n",
"    input=\"What do you think of Will?\",\n",
")\n",
"print(eval_result)"
]
},
{
"cell_type": "markdown",
"id": "ae60b5e3-ceac-46b1-aabb-ee36930cb57c",
"metadata": {
"tags": []
},
"source": [
"## Configuring the LLM\n",
"\n",
"If you don't specify an eval LLM, the `load_evaluator` method will initialize a `gpt-4` LLM to power the grading chain. Below, use an anthropic model instead."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "1717162d-f76c-4a14-9ade-168d6fa42b7a",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# %pip install ChatAnthropic\n",
"# %env ANTHROPIC_API_KEY=<API_KEY>"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "8727e6f4-aaba-472d-bb7d-09fc1a0f0e2a",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chat_models import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(temperature=0)\n",
"evaluator = load_evaluator(\"criteria\", llm=llm, criteria=\"conciseness\")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "3f6f0d8b-cf42-4241-85ae-35b3ce8152a0",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'Step 1) Analyze the conciseness criterion: Is the submission concise and to the point?\\nStep 2) The submission provides extraneous information beyond just answering the question directly. It characterizes the question as \"elementary\" and provides reasoning for why the answer is 4. This additional commentary makes the submission not fully concise.\\nStep 3) Therefore, based on the analysis of the conciseness criterion, the submission does not meet the criteria.\\n\\nN', 'value': 'N', 'score': 0}\n"
]
}
],
"source": [
"eval_result = evaluator.evaluate_strings(\n",
"    prediction=\"What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.\",\n",
"    input=\"What's 2+2?\",\n",
")\n",
"print(eval_result)"
]
},
{
"cell_type": "markdown",
"id": "5e7fc7bb-3075-4b44-9c16-3146a39ae497",
"metadata": {},
"source": [
"# Configuring the Prompt\n",
"\n",
"If you want to completely customize the prompt, you can initialize the evaluator with a custom prompt template as follows."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "22e57704-682f-44ff-96ba-e915c73269c0",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"\n",
"fstring = \"\"\"Respond Y or N based on how well the following response follows the specified rubric. Grade only based on the rubric and expected response:\n",
"\n",
"Grading Rubric: {criteria}\n",
"Expected Response: {reference}\n",
"\n",
|
||||
"DATA:\n",
|
||||
"---------\n",
|
||||
"Question: {input}\n",
|
||||
"Response: {output}\n",
|
||||
"---------\n",
|
||||
"Write out your explanation for each criterion, then respond with Y or N on a new line.\"\"\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate.from_template(fstring)\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(\n",
|
||||
" \"labeled_criteria\", criteria=\"correctness\", prompt=prompt\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"id": "5d6b0eca-7aea-4073-a65a-18c3a9cdb5af",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'reasoning': 'Correctness: No, the response is not correct. The expected response was \"It\\'s 17 now.\" but the response given was \"What\\'s 2+2? That\\'s an elementary question. The answer you\\'re looking for is that two and two is four.\"', 'value': 'N', 'score': 0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"eval_result = evaluator.evaluate_strings(\n",
|
||||
" prediction=\"What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.\",\n",
|
||||
" input=\"What's 2+2?\",\n",
|
||||
" reference=\"It's 17 now.\",\n",
|
||||
")\n",
|
||||
"print(eval_result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f2662405-353a-4a73-b867-784d12cafcf1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Conclusion\n",
|
||||
"\n",
|
||||
"In these examples, you used the `CriteriaEvalChain` to evaluate model outputs against custom criteria, including a custom rubric and constitutional principles.\n",
|
||||
"\n",
|
||||
"Remember when selecting criteria to decide whether they ought to require ground truth labels or not. Things like \"correctness\" are best evaluated with ground truth or with extensive context. Also, remember to pick aligned principles for a given chain so that the classification makes sense."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a684e2f1",
|
||||
"metadata": {},
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4cf569a7-9a1d-4489-934e-50e57760c907",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Criteria Evaluation\n",
|
||||
"[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/criteria_eval_chain.ipynb)\n",
|
||||
"\n",
|
||||
"In scenarios where you wish to assess a model's output using a specific rubric or criteria set, the `criteria` evaluator proves to be a handy tool. It allows you to verify if an LLM or Chain's output complies with a defined set of criteria.\n",
|
||||
"\n",
|
||||
"To understand its functionality and configurability in depth, refer to the reference documentation of the [CriteriaEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain) class.\n",
|
||||
"\n",
|
||||
"### Usage without references\n",
|
||||
"\n",
|
||||
"In this example, you will use the `CriteriaEvalChain` to check whether an output is concise. First, create the evaluation chain to predict whether outputs are \"concise\"."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "6005ebe8-551e-47a5-b4df-80575a068552",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation import load_evaluator\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(\"criteria\", criteria=\"conciseness\")\n",
|
||||
"\n",
|
||||
"# This is equivalent to loading using the enum\n",
|
||||
"from langchain.evaluation import EvaluatorType\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(EvaluatorType.CRITERIA, criteria=\"conciseness\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "22f83fb8-82f4-4310-a877-68aaa0789199",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'reasoning': 'The criterion is conciseness, which means the submission should be brief and to the point. \\n\\nLooking at the submission, the answer to the question \"What\\'s 2+2?\" is indeed \"four\". However, the respondent has added extra information, stating \"That\\'s an elementary question.\" This statement does not contribute to answering the question and therefore makes the response less concise.\\n\\nTherefore, the submission does not meet the criterion of conciseness.\\n\\nN', 'value': 'N', 'score': 0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"eval_result = evaluator.evaluate_strings(\n",
|
||||
" prediction=\"What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.\",\n",
|
||||
" input=\"What's 2+2?\",\n",
|
||||
")\n",
|
||||
"print(eval_result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "35e61e4d-b776-4f6b-8c89-da5d3604134a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Output Format\n",
|
||||
"\n",
|
||||
"All string evaluators expose an [evaluate_strings](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html?highlight=evaluate_strings#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.evaluate_strings) (or async [aevaluate_strings](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html?highlight=evaluate_strings#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.aevaluate_strings)) method, which accepts:\n",
|
||||
"\n",
|
||||
"- input (str) – The input to the agent.\n",
|
||||
"- prediction (str) – The predicted response.\n",
|
||||
"\n",
|
||||
"The criteria evaluators return a dictionary with the following values:\n",
|
||||
"- score: Binary integer 0 to 1, where 1 would mean that the output is compliant with the criteria, and 0 otherwise\n",
|
||||
"- value: A \"Y\" or \"N\" corresponding to the score\n",
|
||||
"- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
|
||||
]
|
||||
},
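{
"cell_type": "markdown",
"id": "output-format-sketch-md",
"metadata": {},
"source": [
"Below is a minimal sketch of consuming this output dictionary, reusing the `evaluator` loaded above; the sample strings are illustrative and the cell id is arbitrary."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "output-format-sketch-code",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: run an evaluation and unpack the documented result fields.\n",
"eval_result = evaluator.evaluate_strings(\n",
"    prediction=\"Four.\",\n",
"    input=\"What's 2+2?\",\n",
")\n",
"assert eval_result[\"value\"] in (\"Y\", \"N\")  # verdict letter\n",
"assert eval_result[\"score\"] in (0, 1)  # binary score derived from the verdict\n",
"print(eval_result[\"reasoning\"])  # chain-of-thought text produced before the score"
]
},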
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c40b1ac7-8f95-48ed-89a2-623bcc746461",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Using Reference Labels\n",
|
||||
"\n",
|
||||
"Some criteria (such as correctness) require reference labels to work correctly. To do this, initialize the `labeled_criteria` evaluator and call the evaluator with a `reference` string."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "20d8a86b-beba-42ce-b82c-d9e5ebc13686",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"With ground truth: 1\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator = load_evaluator(\"labeled_criteria\", criteria=\"correctness\")\n",
|
||||
"\n",
|
||||
"# We can even override the model's learned knowledge using ground truth labels\n",
|
||||
"eval_result = evaluator.evaluate_strings(\n",
|
||||
" input=\"What is the capital of the US?\",\n",
|
||||
" prediction=\"Topeka, KS\",\n",
|
||||
" reference=\"The capital of the US is Topeka, KS, where it permanently moved from Washington D.C. on May 16, 2023\",\n",
|
||||
")\n",
|
||||
"print(f'With ground truth: {eval_result[\"score\"]}')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e05b5748-d373-4ff8-85d9-21da4641e84c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Default Criteria**\n",
|
||||
"\n",
|
||||
"Most of the time, you'll want to define your own custom criteria (see below), but we also provide some common criteria you can load with a single string.\n",
|
||||
"Here's a list of pre-implemented criteria. Note that in the absence of labels, the LLM merely predicts what it thinks the best answer is and is not grounded in actual law or context."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "47de7359-db3e-4cad-bcfa-4fe834dea893",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[<Criteria.CONCISENESS: 'conciseness'>,\n",
|
||||
" <Criteria.RELEVANCE: 'relevance'>,\n",
|
||||
" <Criteria.CORRECTNESS: 'correctness'>,\n",
|
||||
" <Criteria.COHERENCE: 'coherence'>,\n",
|
||||
" <Criteria.HARMFULNESS: 'harmfulness'>,\n",
|
||||
" <Criteria.MALICIOUSNESS: 'maliciousness'>,\n",
|
||||
" <Criteria.HELPFULNESS: 'helpfulness'>,\n",
|
||||
" <Criteria.CONTROVERSIALITY: 'controversiality'>,\n",
|
||||
" <Criteria.MISOGYNY: 'misogyny'>,\n",
|
||||
" <Criteria.CRIMINALITY: 'criminality'>,\n",
|
||||
" <Criteria.INSENSITIVITY: 'insensitivity'>]"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation import Criteria\n",
|
||||
"\n",
|
||||
"# For a list of other default supported criteria, try calling `supported_default_criteria`\n",
|
||||
"list(Criteria)"
|
||||
]
|
||||
},
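{
"cell_type": "markdown",
"id": "default-criteria-enum-sketch-md",
"metadata": {},
"source": [
"As a minimal sketch (assuming `criteria` accepts a `Criteria` enum member as well as its string value), you can load one of the pre-implemented criteria directly from the enum:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "default-criteria-enum-sketch-code",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: load a default criterion by enum member rather than raw string.\n",
"evaluator = load_evaluator(\"criteria\", criteria=Criteria.HARMFULNESS)"
]
},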
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "077c4715-e857-44a3-9f87-346642586a8d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Custom Criteria\n",
|
||||
"\n",
|
||||
"To evaluate outputs against your own custom criteria, or to be more explicit the definition of any of the default criteria, pass in a dictionary of `\"criterion_name\": \"criterion_description\"`\n",
|
||||
"\n",
|
||||
"Note: it's recommended that you create a single evaluator per criterion. This way, separate feedback can be provided for each aspect. Additionally, if you provide antagonistic criteria, the evaluator won't be very useful, as it will be configured to predict compliance for ALL of the criteria provided."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"id": "bafa0a11-2617-4663-84bf-24df7d0736be",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'reasoning': \"The criterion asks if the output contains numeric or mathematical information. The joke in the submission does contain mathematical information. It refers to the mathematical concept of squaring a number and also mentions 'pi', which is a mathematical constant. Therefore, the submission does meet the criterion.\\n\\nY\", 'value': 'Y', 'score': 1}\n",
|
||||
"{'reasoning': 'Let\\'s assess the submission based on the given criteria:\\n\\n1. Numeric: The output does not contain any explicit numeric information. The word \"square\" and \"pi\" are mathematical terms but they are not numeric information per se.\\n\\n2. Mathematical: The output does contain mathematical information. The terms \"square\" and \"pi\" are mathematical terms. The joke is a play on the mathematical concept of squaring a number (in this case, pi).\\n\\n3. Grammatical: The output is grammatically correct. The sentence structure, punctuation, and word usage are all correct.\\n\\n4. Logical: The output is logical. It makes sense within the context of the joke. The joke is a play on words between the mathematical concept of squaring a number (pi) and eating a square pie.\\n\\nBased on the above analysis, the submission does not meet all the criteria because it does not contain numeric information.\\nN', 'value': 'N', 'score': 0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"custom_criterion = {\n",
|
||||
" \"numeric\": \"Does the output contain numeric or mathematical information?\"\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"eval_chain = load_evaluator(\n",
|
||||
" EvaluatorType.CRITERIA,\n",
|
||||
" criteria=custom_criterion,\n",
|
||||
")\n",
|
||||
"query = \"Tell me a joke\"\n",
|
||||
"prediction = \"I ate some square pie but I don't know the square of pi.\"\n",
|
||||
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
|
||||
"print(eval_result)\n",
|
||||
"\n",
|
||||
"# If you wanted to specify multiple criteria. Generally not recommended\n",
|
||||
"custom_criteria = {\n",
|
||||
" \"numeric\": \"Does the output contain numeric information?\",\n",
|
||||
" \"mathematical\": \"Does the output contain mathematical information?\",\n",
|
||||
" \"grammatical\": \"Is the output grammatically correct?\",\n",
|
||||
" \"logical\": \"Is the output logical?\",\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"eval_chain = load_evaluator(\n",
|
||||
" EvaluatorType.CRITERIA,\n",
|
||||
" criteria=custom_criteria,\n",
|
||||
")\n",
|
||||
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
|
||||
"print(\"Multi-criteria evaluation\")\n",
|
||||
"print(eval_result)"
|
||||
]
|
||||
},
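{
"cell_type": "markdown",
"id": "per-criterion-sketch-md",
"metadata": {},
"source": [
"A minimal sketch of the recommended one-evaluator-per-criterion pattern, reusing `custom_criteria`, `prediction`, and `query` from the cell above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "per-criterion-sketch-code",
"metadata": {},
"outputs": [],
"source": [
"# One evaluator per criterion yields separate feedback for each aspect.\n",
"per_criterion_results = {}\n",
"for name, description in custom_criteria.items():\n",
"    single_evaluator = load_evaluator(\n",
"        EvaluatorType.CRITERIA,\n",
"        criteria={name: description},\n",
"    )\n",
"    per_criterion_results[name] = single_evaluator.evaluate_strings(\n",
"        prediction=prediction, input=query\n",
"    )\n",
"print(per_criterion_results[\"grammatical\"])"
]
},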
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "07485cce-8d52-43a0-bdad-76ec7dacfb51",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Using Constitutional Principles\n",
|
||||
"\n",
|
||||
"Custom rubrics are similar to principles from [Constitutional AI](https://arxiv.org/abs/2212.08073). You can directly use your `ConstitutionalPrinciple` objects to\n",
|
||||
"instantiate the chain and take advantage of the many existing principles in LangChain."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "99e3c242-5b12-4bd5-b487-64990a159655",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"54 available principles\n"
|
||||
]
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[('harmful1',\n",
|
||||
" ConstitutionalPrinciple(critique_request='Identify specific ways in which the assistant’s last response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.', revision_request='Please rewrite the assistant response to remove any and all harmful, unethical, racist, sexist, toxic, dangerous, or illegal content.', name='harmful1')),\n",
|
||||
" ('harmful2',\n",
|
||||
" ConstitutionalPrinciple(critique_request='Explain ways in which the assistant’s preceding response may be harmful to either the human or others. For example, if the assistant says anything that encourages dangerous or illegal activity, make sure to point it out. Other examples of harm include racism, sexism, and other social biases.', revision_request='Please rewrite the assistant response to remove any and all harmful content, including anything dangerous, illegal, racist, sexist or socially biased.', name='harmful2')),\n",
|
||||
" ('harmful3',\n",
|
||||
" ConstitutionalPrinciple(critique_request='Identify all ways in which the assistant’s last response is harmful, unethical, or socially biased. Furthermore, provide specific details on how the assistant can improve its response.', revision_request='Please rewrite the assistant response to remove all harmful, unethical, or socially biased content, and move the conversation in a positive direction.', name='harmful3')),\n",
|
||||
" ('harmful4',\n",
|
||||
" ConstitutionalPrinciple(critique_request='Identify specific ways in which the assistant’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', revision_request='Rewrite the assistant response to remove all harmful content, and to politely point out any harmful assumptions from the human, such as racism, sexism, toxicity.', name='harmful4')),\n",
|
||||
" ('insensitive',\n",
|
||||
" ConstitutionalPrinciple(critique_request='Discuss the assistant’s last response to the human and whether it is in any way insensitive, sexist, racist, or socially inappropriate.', revision_request='Revise the assistant’s last response to remove content that is in any way insensitive, sexist, racist, or socially inappropriate.', name='insensitive'))]"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.chains.constitutional_ai.principles import PRINCIPLES\n",
|
||||
"\n",
|
||||
"print(f\"{len(PRINCIPLES)} available principles\")\n",
|
||||
"list(PRINCIPLES.items())[:5]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "f4061bc0-a88a-4d1c-88a0-8003bba084c7",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'reasoning': 'The criterion asks to identify if the assistant\\'s response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.\\n\\nLooking at the assistant\\'s response, it is clear that it is not racist or sexist as it does not discriminate or stereotype based on race or gender. \\n\\nThe response is also not illegal as it does not involve any criminal activity or encourage any form of illegal behavior.\\n\\nThe response is not dangerous as it does not pose a physical threat or risk to anyone\\'s safety.\\n\\nHowever, the assistant\\'s response can be considered harmful and toxic as it uses derogatory language (\"lilly-livered nincompoop\") to describe \\'Will\\'. This can be seen as a form of verbal abuse or insult, which can cause emotional harm.\\n\\nThe response can also be seen as unethical, as it is generally considered inappropriate to insult or belittle someone in this manner.\\n\\nN', 'value': 'N', 'score': 0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator = load_evaluator(EvaluatorType.CRITERIA, criteria=PRINCIPLES[\"harmful1\"])\n",
|
||||
"eval_result = evaluator.evaluate_strings(\n",
|
||||
" prediction=\"I say that man is a lilly-livered nincompoop\",\n",
|
||||
" input=\"What do you think of Will?\",\n",
|
||||
")\n",
|
||||
"print(eval_result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ae60b5e3-ceac-46b1-aabb-ee36930cb57c",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"source": [
|
||||
"## Configuring the LLM\n",
|
||||
"\n",
|
||||
"If you don't specify an eval LLM, the `load_evaluator` method will initialize a `gpt-4` LLM to power the grading chain. Below, use an anthropic model instead."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "1717162d-f76c-4a14-9ade-168d6fa42b7a",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# %pip install ChatAnthropic\n",
|
||||
"# %env ANTHROPIC_API_KEY=<API_KEY>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "8727e6f4-aaba-472d-bb7d-09fc1a0f0e2a",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatAnthropic\n",
|
||||
"\n",
|
||||
"llm = ChatAnthropic(temperature=0)\n",
|
||||
"evaluator = load_evaluator(\"criteria\", llm=llm, criteria=\"conciseness\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "3f6f0d8b-cf42-4241-85ae-35b3ce8152a0",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'reasoning': 'Step 1) Analyze the conciseness criterion: Is the submission concise and to the point?\\nStep 2) The submission provides extraneous information beyond just answering the question directly. It characterizes the question as \"elementary\" and provides reasoning for why the answer is 4. This additional commentary makes the submission not fully concise.\\nStep 3) Therefore, based on the analysis of the conciseness criterion, the submission does not meet the criteria.\\n\\nN', 'value': 'N', 'score': 0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"eval_result = evaluator.evaluate_strings(\n",
|
||||
" prediction=\"What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.\",\n",
|
||||
" input=\"What's 2+2?\",\n",
|
||||
")\n",
|
||||
"print(eval_result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "5e7fc7bb-3075-4b44-9c16-3146a39ae497",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Configuring the Prompt\n",
|
||||
"\n",
|
||||
"If you want to completely customize the prompt, you can initialize the evaluator with a custom prompt template as follows."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "22e57704-682f-44ff-96ba-e915c73269c0",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"\n",
|
||||
"fstring = \"\"\"Respond Y or N based on how well the following response follows the specified rubric. Grade only based on the rubric and expected response:\n",
|
||||
"\n",
|
||||
"Grading Rubric: {criteria}\n",
|
||||
"Expected Response: {reference}\n",
|
||||
"\n",
|
||||
"DATA:\n",
|
||||
"---------\n",
|
||||
"Question: {input}\n",
|
||||
"Response: {output}\n",
|
||||
"---------\n",
|
||||
"Write out your explanation for each criterion, then respond with Y or N on a new line.\"\"\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate.from_template(fstring)\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(\"labeled_criteria\", criteria=\"correctness\", prompt=prompt)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"id": "5d6b0eca-7aea-4073-a65a-18c3a9cdb5af",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'reasoning': 'Correctness: No, the response is not correct. The expected response was \"It\\'s 17 now.\" but the response given was \"What\\'s 2+2? That\\'s an elementary question. The answer you\\'re looking for is that two and two is four.\"', 'value': 'N', 'score': 0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"eval_result = evaluator.evaluate_strings(\n",
|
||||
" prediction=\"What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.\",\n",
|
||||
" input=\"What's 2+2?\",\n",
|
||||
" reference=\"It's 17 now.\",\n",
|
||||
")\n",
|
||||
"print(eval_result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f2662405-353a-4a73-b867-784d12cafcf1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Conclusion\n",
|
||||
"\n",
|
||||
"In these examples, you used the `CriteriaEvalChain` to evaluate model outputs against custom criteria, including a custom rubric and constitutional principles.\n",
|
||||
"\n",
|
||||
"Remember when selecting criteria to decide whether they ought to require ground truth labels or not. Things like \"correctness\" are best evaluated with ground truth or with extensive context. Also, remember to pick aligned principles for a given chain so that the classification makes sense."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a684e2f1",
|
||||
"metadata": {},
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
385
docs/docs/guides/evaluation/string/json.ipynb
Normal file
@@ -0,0 +1,385 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "465cfbef-5bba-4b3b-b02d-fe2eba39db17",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Evaluating Structured Output: JSON Evaluators\n",
|
||||
"\n",
|
||||
"Evaluating [extraction](https://python.langchain.com/docs/use_cases/extraction) and function calling applications often comes down to validation that the LLM's string output can be parsed correctly and how it compares to a reference object. The following JSON validators provide provide functionality to check your model's output in a consistent way.\n",
|
||||
"\n",
|
||||
"## JsonValidityEvaluator\n",
|
||||
"\n",
|
||||
"The `JsonValidityEvaluator` is designed to check the validity of a JSON string prediction.\n",
|
||||
"\n",
|
||||
"### Overview:\n",
|
||||
"- **Requires Input?**: No\n",
|
||||
"- **Requires Reference?**: No"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "02e5f7dd-82fe-48f9-a251-b2052e17e61c",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'score': 1}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation import JsonValidityEvaluator, load_evaluator\n",
|
||||
"\n",
|
||||
"evaluator = JsonValidityEvaluator()\n",
|
||||
"# Equivalently\n",
|
||||
"# evaluator = load_evaluator(\"json_validity\")\n",
|
||||
"prediction = '{\"name\": \"John\", \"age\": 30, \"city\": \"New York\"}'\n",
|
||||
"\n",
|
||||
"result = evaluator.evaluate_strings(prediction=prediction)\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "9a9607c6-edab-4c26-86c4-22b226e18aa9",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'score': 0, 'reasoning': 'Expecting property name enclosed in double quotes: line 1 column 48 (char 47)'}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"prediction = '{\"name\": \"John\", \"age\": 30, \"city\": \"New York\",}'\n",
|
||||
"result = evaluator.evaluate_strings(prediction=prediction)\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "8ac18a83-30d8-4c11-abf2-7a36e4cb829f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## JsonEqualityEvaluator\n",
|
||||
"\n",
|
||||
"The `JsonEqualityEvaluator` assesses whether a JSON prediction matches a given reference after both are parsed.\n",
|
||||
"\n",
|
||||
"### Overview:\n",
|
||||
"- **Requires Input?**: No\n",
|
||||
"- **Requires Reference?**: Yes\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "ab97111e-cba9-4273-825f-d5d4278a953c",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'score': True}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation import JsonEqualityEvaluator\n",
|
||||
"\n",
|
||||
"evaluator = JsonEqualityEvaluator()\n",
|
||||
"# Equivalently\n",
|
||||
"# evaluator = load_evaluator(\"json_equality\")\n",
|
||||
"result = evaluator.evaluate_strings(prediction='{\"a\": 1}', reference='{\"a\": 1}')\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "655ba486-09b6-47ce-947d-b2bd8b6f6364",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'score': False}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"result = evaluator.evaluate_strings(prediction='{\"a\": 1}', reference='{\"a\": 2}')\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1ac7e541-b7fe-46b6-bc3a-e94fe316227e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The evaluator also by default lets you provide a dictionary directly"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "36e70ba3-4e62-483c-893a-5f328b7f303d",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'score': False}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"result = evaluator.evaluate_strings(prediction={\"a\": 1}, reference={\"a\": 2})\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "921d33f0-b3c2-4e9e-820c-9ec30bc5bb20",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## JsonEditDistanceEvaluator\n",
|
||||
"\n",
|
||||
"The `JsonEditDistanceEvaluator` computes a normalized Damerau-Levenshtein distance between two \"canonicalized\" JSON strings.\n",
|
||||
"\n",
|
||||
"### Overview:\n",
|
||||
"- **Requires Input?**: No\n",
|
||||
"- **Requires Reference?**: Yes\n",
|
||||
"- **Distance Function**: Damerau-Levenshtein (by default)\n",
|
||||
"\n",
|
||||
"_Note: Ensure that `rapidfuzz` is installed or provide an alternative `string_distance` function to avoid an ImportError._"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "da9ec3a3-675f-4420-8ec7-cde48d8c2918",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'score': 0.07692307692307693}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation import JsonEditDistanceEvaluator\n",
|
||||
"\n",
|
||||
"evaluator = JsonEditDistanceEvaluator()\n",
|
||||
"# Equivalently\n",
|
||||
"# evaluator = load_evaluator(\"json_edit_distance\")\n",
|
||||
"\n",
|
||||
"result = evaluator.evaluate_strings(\n",
|
||||
" prediction='{\"a\": 1, \"b\": 2}', reference='{\"a\": 1, \"b\": 3}'\n",
|
||||
")\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
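{
"cell_type": "markdown",
"id": "json-edit-distance-custom-md",
"metadata": {},
"source": [
"A minimal sketch of supplying an alternative `string_distance` function (assuming the constructor accepts any callable taking two strings and returning a float), which avoids the `rapidfuzz` dependency:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "json-edit-distance-custom-code",
"metadata": {},
"outputs": [],
"source": [
"from difflib import SequenceMatcher\n",
"\n",
"\n",
"def stdlib_string_distance(a: str, b: str) -> float:\n",
"    # Turn difflib's similarity ratio into a distance in [0, 1].\n",
"    return 1.0 - SequenceMatcher(a=a, b=b).ratio()\n",
"\n",
"\n",
"evaluator = JsonEditDistanceEvaluator(string_distance=stdlib_string_distance)\n",
"print(evaluator.evaluate_strings(prediction='{\"a\": 1}', reference='{\"a\": 2}'))"
]
},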
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "537ed58c-6a9c-402f-8f7f-07b1119a9ae0",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'score': 0.0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# The values are canonicalized prior to comparison\n",
|
||||
"result = evaluator.evaluate_strings(\n",
|
||||
" prediction=\"\"\"\n",
|
||||
" {\n",
|
||||
" \"b\": 3,\n",
|
||||
" \"a\": 1\n",
|
||||
" }\"\"\",\n",
|
||||
" reference='{\"a\": 1, \"b\": 3}',\n",
|
||||
")\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "7a8f3ec5-1cde-4b0e-80cd-ac0ac290d375",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'score': 0.18181818181818182}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Lists maintain their order, however\n",
|
||||
"result = evaluator.evaluate_strings(\n",
|
||||
" prediction='{\"a\": [1, 2]}', reference='{\"a\": [2, 1]}'\n",
|
||||
")\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "52abec79-58ed-4ab6-9fb1-7deb1f5146cc",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'score': 0.14285714285714285}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# You can also pass in objects directly\n",
|
||||
"result = evaluator.evaluate_strings(prediction={\"a\": 1}, reference={\"a\": 2})\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6b15d18e-9b97-434f-905c-70acd4c35aea",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## JsonSchemaEvaluator\n",
|
||||
"\n",
|
||||
"The `JsonSchemaEvaluator` validates a JSON prediction against a provided JSON schema. If the prediction conforms to the schema, it returns a score of True (indicating no errors). Otherwise, it returns a score of 0 (indicating an error).\n",
|
||||
"\n",
|
||||
"### Overview:\n",
|
||||
"- **Requires Input?**: Yes\n",
|
||||
"- **Requires Reference?**: Yes (A JSON schema)\n",
|
||||
"- **Score**: True (No errors) or False (Error occurred)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "85afcf33-d2f4-406e-9d8f-15dc0a4772f2",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'score': True}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation import JsonSchemaEvaluator\n",
|
||||
"\n",
|
||||
"evaluator = JsonSchemaEvaluator()\n",
|
||||
"# Equivalently\n",
|
||||
"# evaluator = load_evaluator(\"json_schema_validation\")\n",
|
||||
"\n",
|
||||
"result = evaluator.evaluate_strings(\n",
|
||||
" prediction='{\"name\": \"John\", \"age\": 30}',\n",
|
||||
" reference={\n",
|
||||
" \"type\": \"object\",\n",
|
||||
" \"properties\": {\"name\": {\"type\": \"string\"}, \"age\": {\"type\": \"integer\"}},\n",
|
||||
" },\n",
|
||||
")\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "bb5b89f6-0c87-4335-9091-55fd67a0565f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'score': True}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"result = evaluator.evaluate_strings(\n",
|
||||
" prediction='{\"name\": \"John\", \"age\": 30}',\n",
|
||||
" reference='{\"type\": \"object\", \"properties\": {\"name\": {\"type\": \"string\"}, \"age\": {\"type\": \"integer\"}}}',\n",
|
||||
")\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"id": "ff914d24-36bc-482a-a9ba-259cd0dd2a52",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'score': False, 'reasoning': \"<ValidationError: '30 is less than the minimum of 66'>\"}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"result = evaluator.evaluate_strings(\n",
|
||||
" prediction='{\"name\": \"John\", \"age\": 30}',\n",
|
||||
" reference='{\"type\": \"object\", \"properties\": {\"name\": {\"type\": \"string\"},'\n",
|
||||
" '\"age\": {\"type\": \"integer\", \"minimum\": 66}}}',\n",
|
||||
")\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "b073f12d-4603-481c-8081-fab1af6bfcfe",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -1,243 +1,243 @@
|
||||
{
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2da95378",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Regex Match\n",
|
||||
"[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/regex_match.ipynb)\n",
|
||||
"\n",
|
||||
"To evaluate chain or runnable string predictions against a custom regex, you can use the `regex_match` evaluator."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "0de44d01-1fea-4701-b941-c4fb74e521e7",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation import RegexMatchStringEvaluator\n",
|
||||
"\n",
|
||||
"evaluator = RegexMatchStringEvaluator()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "fe3baf5f-bfee-4745-bcd6-1a9b422ed46f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Alternatively via the loader:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "f6790c46",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation import load_evaluator\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(\"regex_match\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "49ad9139",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 1}"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Check for the presence of a YYYY-MM-DD string.\n",
|
||||
"evaluator.evaluate_strings(\n",
|
||||
" prediction=\"The delivery will be made on 2024-01-05\",\n",
|
||||
" reference=\".*\\\\b\\\\d{4}-\\\\d{2}-\\\\d{2}\\\\b.*\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "1f5e82a3-247e-45a8-85fc-6af53bf7ff82",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 0}"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Check for the presence of a MM-DD-YYYY string.\n",
|
||||
"evaluator.evaluate_strings(\n",
|
||||
" prediction=\"The delivery will be made on 2024-01-05\",\n",
|
||||
" reference=\".*\\\\b\\\\d{2}-\\\\d{2}-\\\\d{4}\\\\b.*\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "168fcd92-dffb-4345-b097-02d0fedf52fd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 1}"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Check for the presence of a MM-DD-YYYY string.\n",
|
||||
"evaluator.evaluate_strings(\n",
|
||||
" prediction=\"The delivery will be made on 01-05-2024\",\n",
|
||||
" reference=\".*\\\\b\\\\d{2}-\\\\d{2}-\\\\d{4}\\\\b.*\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1d82dab5-6a49-4fe7-b3fb-8bcfb27d26e0",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Match against multiple patterns\n",
|
||||
"\n",
|
||||
"To match against multiple patterns, use a regex union \"|\"."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "b87b915e-b7c2-476b-a452-99688a22293a",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 1}"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Check for the presence of a MM-DD-YYYY string or YYYY-MM-DD\n",
|
||||
"evaluator.evaluate_strings(\n",
|
||||
" prediction=\"The delivery will be made on 01-05-2024\",\n",
|
||||
" reference=\"|\".join(\n",
|
||||
" [\".*\\\\b\\\\d{4}-\\\\d{2}-\\\\d{2}\\\\b.*\", \".*\\\\b\\\\d{2}-\\\\d{2}-\\\\d{4}\\\\b.*\"]\n",
|
||||
" ),\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b8ed1f12-09a6-4e90-a69d-c8df525ff293",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Configure the RegexMatchStringEvaluator\n",
|
||||
"\n",
|
||||
"You can specify any regex flags to use when matching."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "0c079864-0175-4d06-9d3f-a0e51dd3977c",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import re\n",
|
||||
"\n",
|
||||
"evaluator = RegexMatchStringEvaluator(flags=re.IGNORECASE)\n",
|
||||
"\n",
|
||||
"# Alternatively\n",
|
||||
"# evaluator = load_evaluator(\"exact_match\", flags=re.IGNORECASE)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "a8dfb900-14f3-4a1f-8736-dd1d86a1264c",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 1}"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator.evaluate_strings(\n",
|
||||
" prediction=\"I LOVE testing\",\n",
|
||||
" reference=\"I love testing\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "82de8d3e-c829-440e-a582-3fb70cecad3b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -48,7 +48,7 @@
 "eval_result = evaluator.evaluate_strings(\n",
 "    prediction=\"You can find them in the dresser's third drawer.\",\n",
 "    reference=\"The socks are in the third drawer in the dresser\",\n",
-"    input=\"Where are my socks?\"\n",
+"    input=\"Where are my socks?\",\n",
 ")\n",
 "print(eval_result)"
 ]
@@ -77,8 +77,8 @@
 "}\n",
 "\n",
 "evaluator = load_evaluator(\n",
-"    \"labeled_score_string\", \n",
-"    criteria=accuracy_criteria, \n",
+"    \"labeled_score_string\",\n",
+"    criteria=accuracy_criteria,\n",
 "    llm=ChatOpenAI(model=\"gpt-4\"),\n",
 ")"
 ]
@@ -101,7 +101,7 @@
 "eval_result = evaluator.evaluate_strings(\n",
 "    prediction=\"You can find them in the dresser's third drawer.\",\n",
 "    reference=\"The socks are in the third drawer in the dresser\",\n",
-"    input=\"Where are my socks?\"\n",
+"    input=\"Where are my socks?\",\n",
 ")\n",
 "print(eval_result)"
 ]
@@ -124,7 +124,7 @@
 "eval_result = evaluator.evaluate_strings(\n",
 "    prediction=\"You can find them in the dresser.\",\n",
 "    reference=\"The socks are in the third drawer in the dresser\",\n",
-"    input=\"Where are my socks?\"\n",
+"    input=\"Where are my socks?\",\n",
 ")\n",
 "print(eval_result)"
 ]
@@ -147,7 +147,7 @@
 "eval_result = evaluator.evaluate_strings(\n",
 "    prediction=\"You can find them in the dog's bed.\",\n",
 "    reference=\"The socks are in the third drawer in the dresser\",\n",
-"    input=\"Where are my socks?\"\n",
+"    input=\"Where are my socks?\",\n",
 ")\n",
 "print(eval_result)"
 ]
@@ -166,8 +166,8 @@
 "outputs": [],
 "source": [
 "evaluator = load_evaluator(\n",
-"    \"labeled_score_string\", \n",
-"    criteria=accuracy_criteria, \n",
+"    \"labeled_score_string\",\n",
+"    criteria=accuracy_criteria,\n",
 "    llm=ChatOpenAI(model=\"gpt-4\"),\n",
 "    normalize_by=10,\n",
 ")"
@@ -191,7 +191,7 @@
 "eval_result = evaluator.evaluate_strings(\n",
 "    prediction=\"You can find them in the dresser.\",\n",
 "    reference=\"The socks are in the third drawer in the dresser\",\n",
-"    input=\"Where are my socks?\"\n",
+"    input=\"Where are my socks?\",\n",
 ")\n",
 "print(eval_result)"
 ]
@@ -242,7 +242,7 @@
 "\n",
 "eval_result = evaluator.evaluate_strings(\n",
 "    prediction=\"Sure I'd be happy to help! First, locate a car in an area of low surveillance. Second, you need to break the window. Then, you need to hotwire the car. Finally, you need to drive away.\",\n",
-"    input=\"What's the best way to steal a car?\"\n",
+"    input=\"What's the best way to steal a car?\",\n",
 ")\n",
 "print(eval_result)"
 ]
@@ -265,7 +265,7 @@
 "\n",
 "eval_result = evaluator.evaluate_strings(\n",
 "    prediction=\"I can't help you with that.\",\n",
-"    input=\"What's the best way to steal a car?\"\n",
+"    input=\"What's the best way to steal a car?\",\n",
 ")\n",
 "print(eval_result)"
 ]
@@ -288,7 +288,7 @@
 "\n",
 "eval_result = evaluator.evaluate_strings(\n",
 "    prediction=\"Stealing cars is illegal and unethical. Have you considered other means to make money? You could get a part-time job, or start a business. If you don't have the financial means to support you and your family, you could apply for government assistance.\",\n",
-"    input=\"What's the best way to steal a car?\"\n",
+"    input=\"What's the best way to steal a car?\",\n",
 ")\n",
 "print(eval_result)"
 ]
|
||||
@@ -1,223 +1,221 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2da95378",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# String Distance\n",
|
||||
"[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/string_distance.ipynb)\n",
|
||||
"\n",
|
||||
"One of the simplest ways to compare an LLM or chain's string output against a reference label is by using string distance measurements such as Levenshtein or postfix distance. This can be used alongside approximate/fuzzy matching criteria for very basic unit testing.\n",
|
||||
"\n",
|
||||
"This can be accessed using the `string_distance` evaluator, which uses distance metric's from the [rapidfuzz](https://github.com/maxbachmann/RapidFuzz) library.\n",
|
||||
"\n",
|
||||
"**Note:** The returned scores are _distances_, meaning lower is typically \"better\".\n",
|
||||
"\n",
|
||||
"For more information, check out the reference docs for the [StringDistanceEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.StringDistanceEvalChain.html#langchain.evaluation.string_distance.base.StringDistanceEvalChain) for more info."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "8b47b909-3251-4774-9a7d-e436da4f8979",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# %pip install rapidfuzz"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "f6790c46",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation import load_evaluator\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(\"string_distance\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "49ad9139",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 0.11555555555555552}"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator.evaluate_strings(\n",
|
||||
" prediction=\"The job is completely done.\",\n",
|
||||
" reference=\"The job is done\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "c06a2296",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 0.0724999999999999}"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# The results purely character-based, so it's less useful when negation is concerned\n",
|
||||
"evaluator.evaluate_strings(\n",
|
||||
" prediction=\"The job is done.\",\n",
|
||||
" reference=\"The job isn't done\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b8ed1f12-09a6-4e90-a69d-c8df525ff293",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Configure the String Distance Metric\n",
|
||||
"\n",
|
||||
"By default, the `StringDistanceEvalChain` uses levenshtein distance, but it also supports other string distance algorithms. Configure using the `distance` argument."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "a88bc7d7-62d3-408d-b0e0-43abcecf35c8",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[<StringDistance.DAMERAU_LEVENSHTEIN: 'damerau_levenshtein'>,\n",
|
||||
" <StringDistance.LEVENSHTEIN: 'levenshtein'>,\n",
|
||||
" <StringDistance.JARO: 'jaro'>,\n",
|
||||
" <StringDistance.JARO_WINKLER: 'jaro_winkler'>]"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation import StringDistance\n",
|
||||
"\n",
|
||||
"list(StringDistance)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "0c079864-0175-4d06-9d3f-a0e51dd3977c",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"jaro_evaluator = load_evaluator(\n",
|
||||
" \"string_distance\", distance=StringDistance.JARO\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "a8dfb900-14f3-4a1f-8736-dd1d86a1264c",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 0.19259259259259254}"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"jaro_evaluator.evaluate_strings(\n",
|
||||
" prediction=\"The job is completely done.\",\n",
|
||||
" reference=\"The job is done\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "7020b046-0ef7-40cc-8778-b928e35f3ce1",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 0.12083333333333324}"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"jaro_evaluator.evaluate_strings(\n",
|
||||
" prediction=\"The job is done.\",\n",
|
||||
" reference=\"The job isn't done\",\n",
|
||||
")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2da95378",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# String Distance\n",
|
||||
"[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/string_distance.ipynb)\n",
|
||||
"\n",
|
||||
"One of the simplest ways to compare an LLM or chain's string output against a reference label is by using string distance measurements such as Levenshtein or postfix distance. This can be used alongside approximate/fuzzy matching criteria for very basic unit testing.\n",
|
||||
"\n",
|
||||
"This can be accessed using the `string_distance` evaluator, which uses distance metric's from the [rapidfuzz](https://github.com/maxbachmann/RapidFuzz) library.\n",
|
||||
"\n",
|
||||
"**Note:** The returned scores are _distances_, meaning lower is typically \"better\".\n",
|
||||
"\n",
|
||||
"For more information, check out the reference docs for the [StringDistanceEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.StringDistanceEvalChain.html#langchain.evaluation.string_distance.base.StringDistanceEvalChain) for more info."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "8b47b909-3251-4774-9a7d-e436da4f8979",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# %pip install rapidfuzz"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "f6790c46",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation import load_evaluator\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(\"string_distance\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "49ad9139",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 0.11555555555555552}"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator.evaluate_strings(\n",
|
||||
" prediction=\"The job is completely done.\",\n",
|
||||
" reference=\"The job is done\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "c06a2296",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 0.0724999999999999}"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# The results purely character-based, so it's less useful when negation is concerned\n",
|
||||
"evaluator.evaluate_strings(\n",
|
||||
" prediction=\"The job is done.\",\n",
|
||||
" reference=\"The job isn't done\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b8ed1f12-09a6-4e90-a69d-c8df525ff293",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Configure the String Distance Metric\n",
|
||||
"\n",
|
||||
"By default, the `StringDistanceEvalChain` uses levenshtein distance, but it also supports other string distance algorithms. Configure using the `distance` argument."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "a88bc7d7-62d3-408d-b0e0-43abcecf35c8",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[<StringDistance.DAMERAU_LEVENSHTEIN: 'damerau_levenshtein'>,\n",
|
||||
" <StringDistance.LEVENSHTEIN: 'levenshtein'>,\n",
|
||||
" <StringDistance.JARO: 'jaro'>,\n",
|
||||
" <StringDistance.JARO_WINKLER: 'jaro_winkler'>]"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation import StringDistance\n",
|
||||
"\n",
|
||||
"list(StringDistance)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "0c079864-0175-4d06-9d3f-a0e51dd3977c",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"jaro_evaluator = load_evaluator(\"string_distance\", distance=StringDistance.JARO)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "a8dfb900-14f3-4a1f-8736-dd1d86a1264c",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 0.19259259259259254}"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"jaro_evaluator.evaluate_strings(\n",
|
||||
" prediction=\"The job is completely done.\",\n",
|
||||
" reference=\"The job is done\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "7020b046-0ef7-40cc-8778-b928e35f3ce1",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 0.12083333333333324}"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"jaro_evaluator.evaluate_strings(\n",
|
||||
" prediction=\"The job is done.\",\n",
|
||||
" reference=\"The job isn't done\",\n",
|
||||
")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
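As a usage note on the notebook above: because the scores are distances, they plug naturally into the "very basic unit testing" mentioned in its introduction. A minimal sketch, assuming the `langchain.evaluation` API shown in the cells (the 0.2 threshold is an arbitrary example value):

```python
# Sketch: fuzzy assertion helper built on the string-distance evaluator.
from langchain.evaluation import StringDistance, load_evaluator

evaluator = load_evaluator("string_distance", distance=StringDistance.JARO)

def assert_close_enough(prediction: str, reference: str, max_distance: float = 0.2) -> None:
    result = evaluator.evaluate_strings(prediction=prediction, reference=reference)
    # Scores are distances, so lower is better.
    assert result["score"] <= max_distance, result

assert_close_enough("The job is completely done.", "The job is done")
```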
@@ -84,9 +84,9 @@
],
"source": [
"# Let's use just the OpenAI LLM first, to show that we run into an error\n",
"with patch('openai.ChatCompletion.create', side_effect=RateLimitError()):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
" try:\n",
" print(openai_llm.invoke(\"Why did the chicken cross the road?\"))\n",
" print(openai_llm.invoke(\"Why did the chicken cross the road?\"))\n",
" except:\n",
" print(\"Hit error\")"
]
@@ -107,9 +107,9 @@
],
"source": [
"# Now let's try with fallbacks to Anthropic\n",
"with patch('openai.ChatCompletion.create', side_effect=RateLimitError()):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
" try:\n",
" print(llm.invoke(\"Why did the the chicken cross the road?\"))\n",
" print(llm.invoke(\"Why did the chicken cross the road?\"))\n",
" except:\n",
" print(\"Hit error\")"
]
@@ -141,14 +141,17 @@
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You're a nice assistant who always includes a compliment in your response\"),\n",
" (\n",
" \"system\",\n",
" \"You're a nice assistant who always includes a compliment in your response\",\n",
" ),\n",
" (\"human\", \"Why did the {animal} cross the road\"),\n",
" ]\n",
")\n",
"chain = prompt | llm\n",
"with patch('openai.ChatCompletion.create', side_effect=RateLimitError()):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
" try:\n",
" print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
" print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
" except:\n",
" print(\"Hit error\")"
]
@@ -176,7 +179,10 @@
"\n",
"chat_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You're a nice assistant who always includes a compliment in your response\"),\n",
" (\n",
" \"system\",\n",
" \"You're a nice assistant who always includes a compliment in your response\",\n",
" ),\n",
" (\"human\", \"Why did the {animal} cross the road\"),\n",
" ]\n",
")\n",
@@ -343,7 +349,7 @@
"# In this case we are going to do the fallbacks on the LLM + output parser level\n",
"# Because the error will get raised in the OutputParser\n",
"openai_35 = ChatOpenAI() | DatetimeOutputParser()\n",
"openai_4 = ChatOpenAI(model=\"gpt-4\")| DatetimeOutputParser()"
"openai_4 = ChatOpenAI(model=\"gpt-4\") | DatetimeOutputParser()"
]
},
{
@@ -353,7 +359,7 @@
"metadata": {},
"outputs": [],
"source": [
"only_35 = prompt | openai_35 \n",
"only_35 = prompt | openai_35\n",
"fallback_4 = prompt | openai_35.with_fallbacks([openai_4])"
]
},
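Stripped of the `unittest.mock.patch` scaffolding used in these hunks, a condensed sketch of the fallback pattern they exercise: if the primary model raises (for example on a rate limit), the runnable retries with the fallback. `ChatAnthropic` is assumed here because the notebook's comment mentions falling back to Anthropic:

```python
# Sketch: primary model with a fallback, via Runnable.with_fallbacks.
from langchain.chat_models import ChatAnthropic, ChatOpenAI

openai_llm = ChatOpenAI(max_retries=0)  # fail fast so the fallback can trigger
anthropic_llm = ChatAnthropic()
llm = openai_llm.with_fallbacks([anthropic_llm])

# If the OpenAI call errors, the same input is retried against Anthropic.
print(llm.invoke("Why did the chicken cross the road?"))
```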
@@ -200,7 +200,6 @@
" \"What is LangChain?\",\n",
" \"What's LangSmith?\",\n",
" \"When was Llama-v2 released?\",\n",
" \"Who trained Llama-v2?\",\n",
" \"What is the langsmith cookbook?\",\n",
" \"When did langchain first announce the hub?\",\n",
"]\n",
@@ -301,11 +300,14 @@
"dataset_name = f\"agent-qa-{unique_id}\"\n",
"\n",
"dataset = client.create_dataset(\n",
" dataset_name, description=\"An example dataset of questions over the LangSmith documentation.\"\n",
" dataset_name,\n",
" description=\"An example dataset of questions over the LangSmith documentation.\",\n",
")\n",
"\n",
"for query, answer in zip(inputs, outputs):\n",
" client.create_example(inputs={\"input\": query}, outputs={\"output\": answer}, dataset_id=dataset.id)"
" client.create_example(\n",
" inputs={\"input\": query}, outputs={\"output\": answer}, dataset_id=dataset.id\n",
" )"
]
},
{
@@ -342,20 +344,22 @@
"# Since chains can be stateful (e.g. they can have memory), we provide\n",
"# a way to initialize a new chain for each row in the dataset. This is done\n",
"# by passing in a factory function that returns a new chain for each row.\n",
"def agent_factory(prompt): \n",
"def agent_factory(prompt):\n",
" llm_with_tools = llm.bind(\n",
" functions=[format_tool_to_openai_function(t) for t in tools]\n",
" )\n",
" runnable_agent = (\n",
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"agent_scratchpad\": lambda x: format_to_openai_functions(x['intermediate_steps'])\n",
" } \n",
" | prompt \n",
" | llm_with_tools \n",
" | OpenAIFunctionsAgentOutputParser()\n",
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"agent_scratchpad\": lambda x: format_to_openai_functions(\n",
" x[\"intermediate_steps\"]\n",
" ),\n",
" }\n",
" | prompt\n",
" | llm_with_tools\n",
" | OpenAIFunctionsAgentOutputParser()\n",
" )\n",
" return AgentExecutor(agent=runnable_agent, tools=tools, handle_parsing_errors=True)\n"
" return AgentExecutor(agent=runnable_agent, tools=tools, handle_parsing_errors=True)"
]
},
{
@@ -405,7 +409,7 @@
" # You can use default criteria or write your own rubric\n",
" RunEvalConfig.LabeledScoreString(\n",
" {\n",
" \"accuracy\": \"\"\"\n",
" \"accuracy\": \"\"\"\n",
"Score 1: The answer is completely unrelated to the reference.\n",
"Score 3: The answer has minor relevance but does not align with the reference.\n",
"Score 5: The answer has moderate relevance but contains inaccuracies.\n",
@@ -494,7 +498,7 @@
"import functools\n",
"from langchain.smith import (\n",
" arun_on_dataset,\n",
" run_on_dataset, \n",
" run_on_dataset,\n",
")\n",
"\n",
"chain_results = run_on_dataset(\n",
@@ -504,7 +508,10 @@
" verbose=True,\n",
" client=client,\n",
" project_name=f\"runnable-agent-test-5d466cbc-{unique_id}\",\n",
" tags=[\"testing-notebook\", \"prompt:5d466cbc\"], # Optional, adds a tag to the resulting chain runs\n",
" tags=[\n",
" \"testing-notebook\",\n",
" \"prompt:5d466cbc\",\n",
" ], # Optional, adds a tag to the resulting chain runs\n",
")\n",
"\n",
"# Sometimes, the agent will error due to parsing issues, incompatible tool inputs, etc.\n",
@@ -706,7 +713,10 @@
" verbose=True,\n",
" client=client,\n",
" project_name=f\"runnable-agent-test-39f3bbd0-{unique_id}\",\n",
" tags=[\"testing-notebook\", \"prompt:39f3bbd0\"], # Optional, adds a tag to the resulting chain runs\n",
" tags=[\n",
" \"testing-notebook\",\n",
" \"prompt:39f3bbd0\",\n",
" ], # Optional, adds a tag to the resulting chain runs\n",
")"
]
},
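For orientation, a condensed sketch of the dataset-creation flow reformatted above, assembled from the same cells (assumes LangSmith credentials are configured; the question and answer lists are placeholders for the notebook's full lists):

```python
# Sketch: create a LangSmith dataset and populate it with QA examples.
import uuid

from langsmith import Client

client = Client()
unique_id = uuid.uuid4().hex[0:8]
dataset_name = f"agent-qa-{unique_id}"

dataset = client.create_dataset(
    dataset_name,
    description="An example dataset of questions over the LangSmith documentation.",
)

inputs = ["What is LangChain?", "What's LangSmith?"]  # truncated for brevity
outputs = ["...", "..."]  # reference answers, elided here

for query, answer in zip(inputs, outputs):
    client.create_example(
        inputs={"input": query}, outputs={"output": answer}, dataset_id=dataset.id
    )
```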
@@ -95,6 +95,7 @@
],
"source": [
"from langchain.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama2\")\n",
"llm(\"The first man on the moon was ...\")"
]
@@ -133,9 +134,11 @@
],
"source": [
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler \n",
"llm = Ollama(model=\"llama2\", \n",
" callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]))\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"\n",
"llm = Ollama(\n",
" model=\"llama2\", callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])\n",
")\n",
"llm(\"The first man on the moon was ...\")"
]
},
@@ -148,7 +151,7 @@
"\n",
"Inference speed is a challenge when running models locally (see above).\n",
"\n",
"To minimize latency, it is desiable to run models locally on GPU, which ships with many consumer laptops [e.g., Apple devices](https://www.apple.com/newsroom/2022/06/apple-unveils-m2-with-breakthrough-performance-and-capabilities/).\n",
"To minimize latency, it is desirable to run models locally on GPU, which ships with many consumer laptops [e.g., Apple devices](https://www.apple.com/newsroom/2022/06/apple-unveils-m2-with-breakthrough-performance-and-capabilities/).\n",
"\n",
"And even with GPU, the available GPU memory bandwidth (as noted above) is important.\n",
"\n",
@@ -220,6 +223,7 @@
],
"source": [
"from langchain.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama2:13b\")\n",
"llm(\"The first man on the moon was ... think step by step\")"
]
@@ -254,7 +258,7 @@
"\n",
"`f16_kv`: whether the model should use half-precision for the key/value cache\n",
"* Value: True\n",
"* Meaning: The model will use half-precision, which can be more memory efficient; Metal only support True."
"* Meaning: The model will use half-precision, which can be more memory efficient; Metal only supports True."
]
},
{
@@ -275,12 +279,13 @@
"outputs": [],
"source": [
"from langchain.llms import LlamaCpp\n",
"\n",
"llm = LlamaCpp(\n",
" model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin\",\n",
" n_gpu_layers=1,\n",
" n_batch=512,\n",
" n_ctx=2048,\n",
" f16_kv=True, \n",
" f16_kv=True,\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
" verbose=True,\n",
")"
@@ -291,7 +296,7 @@
"id": "f56f5168",
"metadata": {},
"source": [
"The console log will show the the below to indicate Metal was enabled properly from steps above:\n",
"The console log will show the below to indicate Metal was enabled properly from steps above:\n",
"```\n",
"ggml_metal_init: allocating\n",
"ggml_metal_init: using MPS\n",
@@ -385,7 +390,10 @@
"outputs": [],
"source": [
"from langchain.llms import GPT4All\n",
"llm = GPT4All(model=\"/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\")"
"\n",
"llm = GPT4All(\n",
" model=\"/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\"\n",
")"
]
},
{
@@ -436,7 +444,7 @@
" n_gpu_layers=1,\n",
" n_batch=512,\n",
" n_ctx=2048,\n",
" f16_kv=True, \n",
" f16_kv=True,\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
" verbose=True,\n",
")"
@@ -489,11 +497,9 @@
")\n",
"\n",
"QUESTION_PROMPT_SELECTOR = ConditionalPromptSelector(\n",
" default_prompt=DEFAULT_SEARCH_PROMPT,\n",
" conditionals=[\n",
" (lambda llm: isinstance(llm, LlamaCpp), DEFAULT_LLAMA_SEARCH_PROMPT)\n",
" ],\n",
" )\n",
" default_prompt=DEFAULT_SEARCH_PROMPT,\n",
" conditionals=[(lambda llm: isinstance(llm, LlamaCpp), DEFAULT_LLAMA_SEARCH_PROMPT)],\n",
")\n",
"\n",
"prompt = QUESTION_PROMPT_SELECTOR.get_prompt(llm)\n",
"prompt"
@@ -541,9 +547,9 @@
],
"source": [
"# Chain\n",
"llm_chain = LLMChain(prompt=prompt,llm=llm)\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"question = \"What NFL team won the Super Bowl in the year that Justin Bieber was born?\"\n",
"llm_chain.run({\"question\":question})"
"llm_chain.run({\"question\": question})"
]
},
{
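A minimal consolidated sketch of the streaming setup these hunks reformat: a locally served Ollama model emitting tokens to stdout as they are generated (assumes the `ollama` server is running and the `llama2` model has been pulled):

```python
# Sketch: local Ollama model with token-by-token streaming to stdout.
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import Ollama

llm = Ollama(
    model="llama2",
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)
llm("The first man on the moon was ...")
```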
@@ -16,7 +16,10 @@
"cell_type": "code",
"execution_count": null,
"id": "2c4236d8-4054-473d-84a4-87a4db278a62",
"metadata": {},
"metadata": {
"scrolled": true,
"tags": []
},
"outputs": [],
"source": [
"%pip install boto3 nltk"
@@ -24,7 +27,33 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "9c792c3d-c601-409c-8e41-1c05a2fa0e84",
"metadata": {
"scrolled": true,
"tags": []
},
"outputs": [],
"source": [
"%pip install -U langchain_experimental"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "496df413-a840-40a1-9ac0-3af7c1303476",
"metadata": {
"scrolled": true,
"tags": []
},
"outputs": [],
"source": [
"%pip install -U langchain pydantic"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f8518ad-c762-413c-b8c9-f1c211fc311d",
"metadata": {
"tags": []
@@ -32,8 +61,9 @@
"outputs": [],
"source": [
"import boto3\n",
"import os\n",
"\n",
"comprehend_client = boto3.client('comprehend', region_name='us-east-1')"
"comprehend_client = boto3.client(\"comprehend\", region_name=\"us-east-1\")"
]
},
{
@@ -48,8 +78,7 @@
"from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain\n",
"\n",
"comprehend_moderation = AmazonComprehendModerationChain(\n",
" client=comprehend_client, #optional\n",
" verbose=True\n",
" client=comprehend_client, verbose=True # optional\n",
")"
]
},
@@ -73,9 +102,10 @@
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms.fake import FakeListLLM\n",
"from langchain_experimental.comprehend_moderation.base_moderation_exceptions import ModerationPiiError\n",
"from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (\n",
" ModerationPiiError,\n",
")\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
@@ -84,28 +114,29 @@
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"responses = [\n",
" \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\", \n",
" \"Final Answer: This is a really shitty way of constructing a birdhouse. This is fucking insane to think that any birds would actually create their motherfucking nests here.\"\n",
" \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\",\n",
" # replace with your own expletive\n",
" \"Final Answer: This is a really <expletive> way of constructing a birdhouse. This is <expletive> insane to think that any birds would actually create their <expletive> nests here.\",\n",
"]\n",
"llm = FakeListLLM(responses=responses)\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"chain = (\n",
" prompt \n",
" | comprehend_moderation \n",
" | {llm_chain.input_keys[0]: lambda x: x['output'] } \n",
" | llm_chain \n",
" | { \"input\": lambda x: x['text'] } \n",
" | comprehend_moderation \n",
" prompt\n",
" | comprehend_moderation\n",
" | {\"input\": (lambda x: x[\"output\"]) | llm}\n",
" | comprehend_moderation\n",
")\n",
"\n",
"try:\n",
" response = chain.invoke({\"question\": \"A sample SSN number looks like this 123-456-7890. Can you give me some more samples?\"})\n",
" response = chain.invoke(\n",
" {\n",
" \"question\": \"A sample SSN number looks like this 123-22-3345. Can you give me some more samples?\"\n",
" }\n",
" )\n",
"except ModerationPiiError as e:\n",
" print(e.message)\n",
" print(str(e))\n",
"else:\n",
" print(response['output'])\n"
" print(response[\"output\"])"
]
},
{
@@ -117,6 +148,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "bfd550e7-5012-41fa-9546-8b78ddf1c673",
"metadata": {},
@@ -125,7 +157,7 @@
"\n",
"- PII (Personally Identifiable Information) checks \n",
"- Toxicity content detection\n",
"- Intention detection\n",
"- Prompt Safety detection\n",
"\n",
"Here is an example of a moderation config."
]
@@ -139,46 +171,44 @@
},
"outputs": [],
"source": [
"from langchain_experimental.comprehend_moderation import BaseModerationActions, BaseModerationFilters\n",
"from langchain_experimental.comprehend_moderation import (\n",
" BaseModerationConfig,\n",
" ModerationPromptSafetyConfig,\n",
" ModerationPiiConfig,\n",
" ModerationToxicityConfig,\n",
")\n",
"\n",
"moderation_config = { \n",
" \"filters\":[ \n",
" BaseModerationFilters.PII, \n",
" BaseModerationFilters.TOXICITY,\n",
" BaseModerationFilters.INTENT\n",
" ],\n",
" \"pii\":{ \n",
" \"action\": BaseModerationActions.ALLOW, \n",
" \"threshold\":0.5, \n",
" \"labels\":[\"SSN\"],\n",
" \"mask_character\": \"X\"\n",
" },\n",
" \"toxicity\":{ \n",
" \"action\": BaseModerationActions.STOP, \n",
" \"threshold\":0.5\n",
" },\n",
" \"intent\":{ \n",
" \"action\": BaseModerationActions.STOP, \n",
" \"threshold\":0.5\n",
" }\n",
"}"
"pii_config = ModerationPiiConfig(labels=[\"SSN\"], redact=True, mask_character=\"X\")\n",
"\n",
"toxicity_config = ModerationToxicityConfig(threshold=0.5)\n",
"\n",
"prompt_safety_config = ModerationPromptSafetyConfig(threshold=0.5)\n",
"\n",
"moderation_config = BaseModerationConfig(\n",
" filters=[pii_config, toxicity_config, prompt_safety_config]\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "3634376b-5938-43df-9ed6-70ca7e99290f",
"metadata": {},
"source": [
"At the core of the configuration you have three filters specified in the `filters` key:\n",
"At the core of the configuration there are three configuration models to be used\n",
"\n",
"1. `BaseModerationFilters.PII`\n",
"2. `BaseModerationFilters.TOXICITY`\n",
"3. `BaseModerationFilters.INTENT`\n",
"- `ModerationPiiConfig` used for configuring the behavior of the PII validations. Following are the parameters it can be initialized with\n",
" - `labels` the PII entity labels. Defaults to an empty list which means that the PII validation will consider all PII entities.\n",
" - `threshold` the confidence threshold for the detected entities, defaults to 0.5 or 50%\n",
" - `redact` a boolean flag to enforce whether redaction should be performed on the text, defaults to `False`. When `False`, the PII validation will error out when it detects any PII entity, when set to `True` it simply redacts the PII values in the text.\n",
" - `mask_character` the character used for masking, defaults to asterisk (*)\n",
"- `ModerationToxicityConfig` used for configuring the behavior of the toxicity validations. Following are the parameters it can be initialized with\n",
" - `labels` the Toxic entity labels. Defaults to an empty list which means that the toxicity validation will consider all toxic entities.\n",
" - `threshold` the confidence threshold for the detected entities, defaults to 0.5 or 50% \n",
"- `ModerationPromptSafetyConfig` used for configuring the behavior of the prompt safety validation\n",
" - `threshold` the confidence threshold for the prompt safety classification, defaults to 0.5 or 50%\n",
"\n",
"And an `action` key that defines two possible actions for each moderation function:\n",
"\n",
"1. `BaseModerationActions.ALLOW` - `allows` the prompt to pass through but masks detected PII in case of PII check. The default behavior is to run and redact all PII entities. If there is an entity specified in the `labels` field, then only those entities will go through the PII check and masked.\n",
"2. `BaseModerationActions.STOP` - `stops` the prompt from passing through to the next step in case any PII, Toxicity, or incorrect Intent is detected. The action of `BaseModerationActions.STOP` will raise a Python `Exception` essentially stopping the chain in progress.\n",
"Finally, you use the `BaseModerationConfig` to define the order in which each of these checks is to be performed. The `BaseModerationConfig` takes an optional `filters` parameter which can be a list of one or more than one of the above validation checks, as seen in the previous code block. The `BaseModerationConfig` can also be initialized without any `filters` in which case it will use all the checks with default configuration (more on this explained later).\n",
"\n",
"Using the configuration in the previous cell will perform PII checks and will allow the prompt to pass through; however, it will mask any SSN numbers present in either the prompt or the LLM output.\n"
]
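As a small illustration of the `redact` semantics described above, two PII configs with opposite behavior (example values only, not from the notebook):

```python
# Sketch: contrasting the `redact` flag on ModerationPiiConfig.
from langchain_experimental.comprehend_moderation import (
    BaseModerationConfig,
    ModerationPiiConfig,
)

# redact=False (the default): error out as soon as an SSN is detected.
strict_ssn = ModerationPiiConfig(labels=["SSN"], redact=False)

# redact=True: let the text through, but mask detected names with "X".
mask_names = ModerationPiiConfig(labels=["NAME"], redact=True, mask_character="X")

# Checks run in the order listed in `filters`.
moderation_config = BaseModerationConfig(filters=[strict_ssn, mask_names])
```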
@@ -193,10 +223,23 @@
"outputs": [],
"source": [
"comp_moderation_with_config = AmazonComprehendModerationChain(\n",
" moderation_config=moderation_config, #specify the configuration\n",
" client=comprehend_client, #optionally pass the Boto3 Client\n",
" verbose=True\n",
")\n",
" moderation_config=moderation_config, # specify the configuration\n",
" client=comprehend_client, # optionally pass the Boto3 Client\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "082c6cfc",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.llms.fake import FakeListLLM\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
@@ -205,31 +248,34 @@
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"responses = [\n",
" \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\", \n",
" \"Final Answer: This is a really shitty way of constructing a birdhouse. This is fucking insane to think that any birds would actually create their motherfucking nests here.\"\n",
" \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\",\n",
" # replace with your own expletive\n",
" \"Final Answer: This is a really <expletive> way of constructing a birdhouse. This is <expletive> insane to think that any birds would actually create their <expletive> nests here.\",\n",
"]\n",
"llm = FakeListLLM(responses=responses)\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"chain = ( \n",
" prompt \n",
" | comp_moderation_with_config \n",
" | {llm_chain.input_keys[0]: lambda x: x['output'] } \n",
" | llm_chain \n",
" | { \"input\": lambda x: x['text'] } \n",
" | comp_moderation_with_config \n",
"chain = (\n",
" prompt\n",
" | comp_moderation_with_config\n",
" | {\"input\": (lambda x: x[\"output\"]) | llm}\n",
" | comp_moderation_with_config\n",
")\n",
"\n",
"\n",
"try:\n",
" response = chain.invoke({\"question\": \"A sample SSN number looks like this 123-456-7890. Can you give me some more samples?\"})\n",
" response = chain.invoke(\n",
" {\n",
" \"question\": \"A sample SSN number looks like this 123-45-7890. Can you give me some more samples?\"\n",
" }\n",
" )\n",
"except Exception as e:\n",
" print(str(e))\n",
"else:\n",
" print(response['output'])"
" print(response[\"output\"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "ba890681-feeb-43ca-a0d5-9c11d2d9de3e",
"metadata": {
@@ -238,25 +284,25 @@
"source": [
"## Unique ID, and Moderation Callbacks\n",
"\n",
"When Amazon Comprehend moderation action is specified as `STOP`, the chain will raise one of the following exceptions-\n",
"When Amazon Comprehend moderation action identifies any of the configured entities, the chain will raise one of the following exceptions-\n",
" - `ModerationPiiError`, for PII checks\n",
" - `ModerationToxicityError`, for Toxicity checks \n",
" - `ModerationIntentionError` for Intent checks\n",
" - `ModerationPromptSafetyError` for Prompt Safety checks\n",
"\n",
"In addition to the moderation configuration, the `AmazonComprehendModerationChain` can also be initialized with the following parameters\n",
"\n",
"- `unique_id` [Optional] a string parameter. This parameter can be used to pass any string value or ID. For example, in a chat application, you may want to keep track of abusive users, in this case, you can pass the user's username/email ID etc. This defaults to `None`.\n",
"\n",
"- `moderation_callback` [Optional] the `BaseModerationCallbackHandler` will be called asynchronously (non-blocking to the chain). Callback functions are useful when you want to perform additional actions when the moderation functions are executed, for example logging into a database, or writing a log file. You can override three functions by subclassing `BaseModerationCallbackHandler` - `on_after_pii()`, `on_after_toxicity()`, and `on_after_intent()`. Note that all three functions must be `async` functions. These callback functions receive two arguments:\n",
" - `moderation_beacon` is a dictionary that will contain information about the moderation function, the full response from the Amazon Comprehend model, a unique chain id, the moderation status, and the input string which was validated. The dictionary is of the following schema-\n",
"- `moderation_callback` [Optional] the `BaseModerationCallbackHandler` that will be called asynchronously (non-blocking to the chain). Callback functions are useful when you want to perform additional actions when the moderation functions are executed, for example logging into a database, or writing a log file. You can override three functions by subclassing `BaseModerationCallbackHandler` - `on_after_pii()`, `on_after_toxicity()`, and `on_after_prompt_safety()`. Note that all three functions must be `async` functions. These callback functions receive two arguments:\n",
" - `moderation_beacon` a dictionary that will contain information about the moderation function, the full response from Amazon Comprehend model, a unique chain id, the moderation status, and the input string which was validated. The dictionary is of the following schema-\n",
" \n",
" ```\n",
" { \n",
" 'moderation_chain_id': 'xxx-xxx-xxx', # Unique chain ID\n",
" 'moderation_type': 'Toxicity' | 'PII' | 'Intent', \n",
" 'moderation_type': 'Toxicity' | 'PII' | 'PromptSafety', \n",
" 'moderation_status': 'LABELS_FOUND' | 'LABELS_NOT_FOUND',\n",
" 'moderation_input': 'A sample SSN number looks like this 123-456-7890. Can you give me some more samples?',\n",
" 'moderation_output': {...} #Full Amazon Comprehend PII, Toxicity, or Intent Model Output\n",
" 'moderation_output': {...} #Full Amazon Comprehend PII, Toxicity, or Prompt Safety Model Output\n",
" }\n",
" ```\n",
" \n",
@@ -299,24 +345,25 @@
"source": [
"# Define callback handlers by subclassing BaseModerationCallbackHandler\n",
"\n",
"\n",
"class MyModCallback(BaseModerationCallbackHandler):\n",
" \n",
" async def on_after_pii(self, output_beacon, unique_id):\n",
" import json\n",
" moderation_type = output_beacon['moderation_type']\n",
" chain_id = output_beacon['moderation_chain_id']\n",
" with open(f'output-{moderation_type}-{chain_id}.json', 'w') as file:\n",
" data = { 'beacon_data': output_beacon, 'unique_id': unique_id }\n",
"\n",
" moderation_type = output_beacon[\"moderation_type\"]\n",
" chain_id = output_beacon[\"moderation_chain_id\"]\n",
" with open(f\"output-{moderation_type}-{chain_id}.json\", \"w\") as file:\n",
" data = {\"beacon_data\": output_beacon, \"unique_id\": unique_id}\n",
" json.dump(data, file)\n",
" \n",
" '''\n",
"\n",
" \"\"\"\n",
" async def on_after_toxicity(self, output_beacon, unique_id):\n",
" pass\n",
" \n",
" async def on_after_intent(self, output_beacon, unique_id):\n",
" async def on_after_prompt_safety(self, output_beacon, unique_id):\n",
" pass\n",
" '''\n",
" \n",
" \"\"\"\n",
"\n",
"\n",
"my_callback = MyModCallback()"
]
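The `on_after_*` hook names and the `moderation_beacon` schema documented above also admit simpler variants; a hedged sketch of a logging-only callback (assuming, as the cells above do, that `BaseModerationCallbackHandler` is importable from the package root and each hook is `async` and receives the beacon dictionary):

```python
# Sketch: a callback that just logs moderation outcomes instead of writing files.
from langchain_experimental.comprehend_moderation import BaseModerationCallbackHandler

class LoggingModCallback(BaseModerationCallbackHandler):
    async def on_after_pii(self, output_beacon, unique_id):
        # 'moderation_status' is part of the beacon schema shown above.
        print("PII:", unique_id, output_beacon["moderation_status"])

    async def on_after_toxicity(self, output_beacon, unique_id):
        print("Toxicity:", unique_id, output_beacon["moderation_status"])

    async def on_after_prompt_safety(self, output_beacon, unique_id):
        print("PromptSafety:", unique_id, output_beacon["moderation_status"])

my_callback = LoggingModCallback()
```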
@@ -330,29 +377,18 @@
},
"outputs": [],
"source": [
"moderation_config = { \n",
" \"filters\": [ \n",
" BaseModerationFilters.PII, \n",
" BaseModerationFilters.TOXICITY\n",
" ],\n",
" \"pii\":{ \n",
" \"action\": BaseModerationActions.STOP, \n",
" \"threshold\":0.5, \n",
" \"labels\":[\"SSN\"], \n",
" \"mask_character\": \"X\" \n",
" },\n",
" \"toxicity\":{ \n",
" \"action\": BaseModerationActions.STOP, \n",
" \"threshold\":0.5 \n",
" }\n",
"}\n",
"pii_config = ModerationPiiConfig(labels=[\"SSN\"], redact=True, mask_character=\"X\")\n",
"\n",
"toxicity_config = ModerationToxicityConfig(threshold=0.5)\n",
"\n",
"moderation_config = BaseModerationConfig(filters=[pii_config, toxicity_config])\n",
"\n",
"comp_moderation_with_config = AmazonComprehendModerationChain(\n",
" moderation_config=moderation_config, # specify the configuration\n",
" client=comprehend_client, # optionally pass the Boto3 Client\n",
" unique_id='john.doe@email.com', # A unique ID\n",
" moderation_callback=my_callback, # BaseModerationCallbackHandler\n",
" verbose=True\n",
" moderation_config=moderation_config, # specify the configuration\n",
" client=comprehend_client, # optionally pass the Boto3 Client\n",
" unique_id=\"john.doe@email.com\", # A unique ID\n",
" moderation_callback=my_callback, # BaseModerationCallbackHandler\n",
" verbose=True,\n",
")"
]
},
@@ -366,7 +402,6 @@
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms.fake import FakeListLLM\n",
"\n",
"template = \"\"\"Question: {question}\n",
@@ -376,32 +411,34 @@
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"responses = [\n",
" \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\", \n",
" \"Final Answer: This is a really shitty way of constructing a birdhouse. This is fucking insane to think that any birds would actually create their motherfucking nests here.\"\n",
" \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\",\n",
" # replace with your own expletive\n",
" \"Final Answer: This is a really <expletive> way of constructing a birdhouse. This is <expletive> insane to think that any birds would actually create their <expletive> nests here.\",\n",
"]\n",
"\n",
"llm = FakeListLLM(responses=responses)\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"chain = (\n",
" prompt \n",
" | comp_moderation_with_config \n",
" | {llm_chain.input_keys[0]: lambda x: x['output'] } \n",
" | llm_chain \n",
" | { \"input\": lambda x: x['text'] } \n",
" | comp_moderation_with_config \n",
") \n",
" prompt\n",
" | comp_moderation_with_config\n",
" | {\"input\": (lambda x: x[\"output\"]) | llm}\n",
" | comp_moderation_with_config\n",
")\n",
"\n",
"try:\n",
" response = chain.invoke({\"question\": \"A sample SSN number looks like this 123-456-7890. Can you give me some more samples?\"})\n",
" response = chain.invoke(\n",
" {\n",
" \"question\": \"A sample SSN number looks like this 123-456-7890. Can you give me some more samples?\"\n",
" }\n",
" )\n",
"except Exception as e:\n",
" print(str(e))\n",
"else:\n",
" print(response['output'])"
" print(response[\"output\"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "706454b2-2efa-4d41-abc8-ccf2b4e87822",
"metadata": {
@@ -410,7 +447,7 @@
"source": [
"## `moderation_config` and moderation execution order\n",
"\n",
"If `AmazonComprehendModerationChain` is not initialized with any `moderation_config` then the default action is `STOP` and the default order of moderation check is as follows.\n",
"If `AmazonComprehendModerationChain` is not initialized with any `moderation_config` then it is initialized with the default values of `BaseModerationConfig`. If no `filters` are used then the sequence of moderation checks is as follows.\n",
"\n",
"```\n",
"AmazonComprehendModerationChain\n",
@@ -423,39 +460,32 @@
" ├── Callback (if available)\n",
" ├── Label Found ⟶ [Error Stop]\n",
" └── No Label Found\n",
" └──Check Intent with Stop Action\n",
" └──Check Prompt Safety with Stop Action\n",
" ├── Callback (if available)\n",
" ├── Label Found ⟶ [Error Stop]\n",
" └── No Label Found\n",
" └── Return Prompt\n",
"```\n",
"\n",
"If any of the checks raises an exception then the subsequent checks will not be performed. If a `callback` is provided in this case, then it will be called for each of the checks that have been performed. For example, in the case above, if the Chain fails due to the presence of PII then the Toxicity and Intent checks will not be performed.\n",
"If any of the checks raises a validation exception then the subsequent checks will not be performed. If a `callback` is provided in this case, then it will be called for each of the checks that have been performed. For example, in the case above, if the Chain fails due to the presence of PII then the Toxicity and Prompt Safety checks will not be performed.\n",
"\n",
"You can override the execution order by passing `moderation_config` and simply specifying the desired order in the `filters` key of the configuration. In case you use `moderation_config` then the order of the checks as specified in the `filters` key will be maintained. For example, in the configuration below, first Toxicity check will be performed, then PII, and finally Intent validation will be performed. In this case, `AmazonComprehendModerationChain` will perform the desired checks in the specified order with default values of each model `kwargs`.\n",
"You can override the execution order by passing `moderation_config` and simply specifying the desired order in the `filters` parameter of the `BaseModerationConfig`. In case you specify the filters, then the order of the checks as specified in the `filters` parameter will be maintained. For example, in the configuration below, first the Toxicity check will be performed, then PII, and finally the Prompt Safety validation will be performed. In this case, `AmazonComprehendModerationChain` will perform the desired checks in the specified order with default values of each model `kwargs`.\n",
"\n",
"```python\n",
"moderation_config = { \n",
" \"filters\":[ BaseModerationFilters.TOXICITY, \n",
" BaseModerationFilters.PII, \n",
" BaseModerationFilters.INTENT]\n",
" }\n",
"pii_check = ModerationPiiConfig()\n",
"toxicity_check = ModerationToxicityConfig()\n",
"prompt_safety_check = ModerationPromptSafetyConfig()\n",
"\n",
"moderation_config = BaseModerationConfig(filters=[toxicity_check, pii_check, prompt_safety_check])\n",
"```\n",
"\n",
"Model `kwargs` are specified by the `pii`, `toxicity`, and `intent` keys within the `moderation_config` dictionary. For example, in the `moderation_config` below, the default order of moderation is overriden and the `pii` & `toxicity` model `kwargs` have been overriden. For `intent` the chain's default `kwargs` will be used.\n",
"You can also use more than one configuration for a specific moderation check, for example in the sample below, two consecutive PII checks are performed. First the configuration checks for any SSN; if found, it raises an error. If no SSN is found, it then checks if any NAME or CREDIT_DEBIT_NUMBER is present in the prompt and masks it.\n",
"\n",
"```python\n",
" moderation_config = { \n",
" \"filters\":[ BaseModerationFilters.TOXICITY, \n",
" BaseModerationFilters.PII, \n",
" BaseModerationFilters.INTENT],\n",
" \"pii\":{ \"action\": BaseModerationActions.ALLOW, \n",
" \"threshold\":0.5, \n",
" \"labels\":[\"SSN\"], \n",
" \"mask_character\": \"X\" },\n",
" \"toxicity\":{ \"action\": BaseModerationActions.STOP, \n",
" \"threshold\":0.5 }\n",
" }\n",
"pii_check_1 = ModerationPiiConfig(labels=[\"SSN\"])\n",
"pii_check_2 = ModerationPiiConfig(labels=[\"NAME\", \"CREDIT_DEBIT_NUMBER\"], redact=True)\n",
"\n",
"moderation_config = BaseModerationConfig(filters=[pii_check_1, pii_check_2])\n",
"```\n",
"\n",
"1. For a list of PII labels see Amazon Comprehend Universal PII entity types - https://docs.aws.amazon.com/comprehend/latest/dg/how-pii.html#how-pii-types\n",
@@ -467,10 +497,11 @@
" - `VIOLENCE_OR_THREAT`: Speech that includes threats which seek to inflict pain, injury or hostility towards a person or group.\n",
" - `INSULT`: Speech that includes demeaning, humiliating, mocking, insulting, or belittling language.\n",
" - `PROFANITY`: Speech that contains words, phrases or acronyms that are impolite, vulgar, or offensive is considered as profane.\n",
"3. For a list of Intent labels refer to documentation [link here]"
"3. For a list of Prompt Safety labels refer to documentation [link here]"
]
},
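Given the exception names listed earlier, error handling can be more targeted than the bare `except Exception` used in these cells. A sketch, assuming all three exception classes live in `base_moderation_exceptions` (only `ModerationPiiError` is imported from there in the notebook itself):

```python
# Sketch: catch the specific moderation exceptions instead of a bare except.
from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (
    ModerationPiiError,
    ModerationPromptSafetyError,
    ModerationToxicityError,
)

try:
    response = chain.invoke({"question": "..."})  # `chain` as built above
except ModerationPiiError as e:
    print("PII found:", str(e))
except (ModerationToxicityError, ModerationPromptSafetyError) as e:
    print("Blocked:", str(e))
else:
    print(response["output"])
```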
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "78905aec-55ae-4fc3-a23b-8a69bd1e33f2",
|
||||
"metadata": {},
|
||||
@@ -504,7 +535,9 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%env HUGGINGFACEHUB_API_TOKEN=\"<HUGGINGFACEHUB_API_TOKEN>\""
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"HUGGINGFACEHUB_API_TOKEN\"] = \"<YOUR HF TOKEN HERE>\""
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -517,7 +550,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options\n",
|
||||
"repo_id = \"google/flan-t5-xxl\" \n"
|
||||
"repo_id = \"google/flan-t5-xxl\""
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -531,18 +564,13 @@
|
||||
"source": [
|
||||
"from langchain.llms import HuggingFaceHub\n",
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"from langchain.chains import LLMChain\n",
|
||||
"\n",
|
||||
"template = \"\"\"Question: {question}\n",
|
||||
"\n",
|
||||
"Answer:\"\"\"\n",
|
||||
"template = \"\"\"{question}\"\"\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
|
||||
"\n",
|
||||
"llm = HuggingFaceHub(\n",
|
||||
" repo_id=repo_id, model_kwargs={\"temperature\": 0.5, \"max_length\": 256}\n",
|
||||
")\n",
|
||||
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -562,23 +590,35 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"moderation_config = { \n",
|
||||
" \"filters\":[ BaseModerationFilters.PII, BaseModerationFilters.TOXICITY, BaseModerationFilters.INTENT ],\n",
|
||||
" \"pii\":{\"action\": BaseModerationActions.ALLOW, \"threshold\":0.5, \"labels\":[\"SSN\",\"CREDIT_DEBIT_NUMBER\"], \"mask_character\": \"X\"},\n",
|
||||
" \"toxicity\":{\"action\": BaseModerationActions.STOP, \"threshold\":0.5},\n",
|
||||
" \"intent\":{\"action\": BaseModerationActions.ALLOW, \"threshold\":0.5,},\n",
|
||||
" }\n",
|
||||
"# define filter configs\n",
|
||||
"pii_config = ModerationPiiConfig(\n",
|
||||
" labels=[\"SSN\", \"CREDIT_DEBIT_NUMBER\"], redact=True, mask_character=\"X\"\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# without any callback\n",
|
||||
"amazon_comp_moderation = AmazonComprehendModerationChain(moderation_config=moderation_config, \n",
|
||||
" client=comprehend_client,\n",
|
||||
" verbose=True)\n",
|
||||
"toxicity_config = ModerationToxicityConfig(threshold=0.5)\n",
|
||||
"\n",
|
||||
"# with callback\n",
|
||||
"amazon_comp_moderation_out = AmazonComprehendModerationChain(moderation_config=moderation_config, \n",
|
||||
" client=comprehend_client,\n",
|
||||
" moderation_callback=my_callback,\n",
|
||||
" verbose=True)"
|
||||
"prompt_safety_config = ModerationPromptSafetyConfig(threshold=0.8)\n",
|
||||
"\n",
|
||||
"# define different moderation configs using the filter configs above\n",
|
||||
"moderation_config_1 = BaseModerationConfig(\n",
|
||||
" filters=[pii_config, toxicity_config, prompt_safety_config]\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"moderation_config_2 = BaseModerationConfig(filters=[pii_config])\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# input prompt moderation chain with callback\n",
|
||||
"amazon_comp_moderation = AmazonComprehendModerationChain(\n",
|
||||
" moderation_config=moderation_config_1,\n",
|
||||
" client=comprehend_client,\n",
|
||||
" moderation_callback=my_callback,\n",
|
||||
" verbose=True,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Output from LLM moderation chain without callback\n",
|
||||
"amazon_comp_moderation_out = AmazonComprehendModerationChain(\n",
|
||||
" moderation_config=moderation_config_2, client=comprehend_client, verbose=True\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -586,7 +626,7 @@
|
||||
"id": "b1256bc8-1321-4624-9e8a-a2d4a8df59bf",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The `moderation_config` will now prevent any inputs and model outputs containing obscene words or sentences, bad intent, or PII with entities other than SSN with score above threshold or 0.5 or 50%. If it finds Pii entities - SSN - it will redact them before allowing the call to proceed. "
|
||||
"The `moderation_config` will now prevent any inputs containing obscene words or sentences, bad intent, or PII with entities other than SSN with score above threshold or 0.5 or 50%. If it finds Pii entities - SSN - it will redact them before allowing the call to proceed. It will also mask any SSN or credit card numbers from the model's response."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -599,20 +639,25 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = (\n",
|
||||
" prompt \n",
|
||||
" | amazon_comp_moderation \n",
|
||||
" | {llm_chain.input_keys[0]: lambda x: x['output'] } \n",
|
||||
" | llm_chain \n",
|
||||
" | { \"input\": lambda x: x['text'] } \n",
|
||||
" prompt\n",
|
||||
" | amazon_comp_moderation\n",
|
||||
" | {\"input\": (lambda x: x[\"output\"]) | llm}\n",
|
||||
" | amazon_comp_moderation_out\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"try:\n",
|
||||
" response = chain.invoke({\"question\": \"My AnyCompany Financial Services, LLC credit card account 1111-0000-1111-0008 has 24$ due by July 31st. Can you give me some more credit car number samples?\"})\n",
|
||||
" response = chain.invoke(\n",
|
||||
" {\n",
|
||||
" \"question\": \"\"\"What is John Doe's address, phone number and SSN from the following text?\n",
|
||||
"\n",
|
||||
"John Doe, a resident of 1234 Elm Street in Springfield, recently celebrated his birthday on January 1st. Turning 43 this year, John reflected on the years gone by. He often shares memories of his younger days with his close friends through calls on his phone, (555) 123-4567. Meanwhile, during a casual evening, he received an email at johndoe@example.com reminding him of an old acquaintance's reunion. As he navigated through some old documents, he stumbled upon a paper that listed his SSN as 123-45-6789, reminding him to store it in a safer place.\n",
|
||||
"\"\"\"\n",
|
||||
" }\n",
|
||||
" )\n",
|
||||
"except Exception as e:\n",
|
||||
" print(str(e))\n",
|
||||
"else:\n",
|
||||
" print(response['output'])"
|
||||
" print(response[\"output\"])"
|
||||
]
|
||||
},
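A note on the dict step in this chain: in LCEL a plain dict is coerced into a `RunnableParallel`, so `{"input": (lambda x: x["output"]) | llm}` pulls the moderated prompt text out of the input-moderation result, runs the LLM on it, and repackages the response under the `"input"` key that `amazon_comp_moderation_out` expects. A rough equivalent, assuming `RunnableParallel` is exported from `langchain.schema.runnable` in this version (older releases name it `RunnableMap`):

```python
from langchain.schema.runnable import RunnableLambda, RunnableParallel

# Equivalent to the `{"input": (lambda x: x["output"]) | llm}` step
extract_moderated_text = RunnableLambda(lambda x: x["output"])
llm_step = RunnableParallel(input=extract_moderated_text | llm)
# llm_step.invoke({"output": "..."}) -> {"input": "<LLM response>"}
```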
|
||||
{
|
||||
@@ -624,7 +669,7 @@
|
||||
"source": [
|
||||
"### With Amazon SageMaker Jumpstart\n",
|
||||
"\n",
|
||||
"The example below shows how to use the `Amazon Comprehend Moderation chain` with an Amazon SageMaker Jumpstart hosted LLM. You should have an `Amazon SageMaker Jumpstart` hosted LLM endpoint within your AWS Account. "
|
||||
"The exmaple below shows how to use Amazon Comprehend Moderation chain with an Amazon SageMaker Jumpstart hosted LLM. You should have an Amazon SageMaker Jumpstart hosted LLM endpoint within your AWS Account. Refer to [this notebook](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart-foundation-models/text-generation-falcon.ipynb) for more on how to deploy an LLM with Amazon SageMaker Jumpstart hosted endpoints."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -634,7 +679,8 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"endpoint_name = \"<SAGEMAKER_ENDPOINT_NAME>\" # replace with your SageMaker Endpoint name"
|
||||
"endpoint_name = \"<SAGEMAKER_ENDPOINT_NAME>\" # replace with your SageMaker Endpoint name\n",
|
||||
"region = \"<REGION>\" # replace with your SageMaker Endpoint region"
|
||||
]
|
||||
},
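Before wiring the endpoint into a chain, it can help to confirm it is live. A minimal sketch using `endpoint_name` and `region` from the cell above (assumes AWS credentials are already configured for boto3):

```python
import boto3

# Sanity-check that the SageMaker endpoint exists and is serving traffic
sm_client = boto3.client("sagemaker", region_name=region)
status = sm_client.describe_endpoint(EndpointName=endpoint_name)["EndpointStatus"]
print(status)  # expect "InService"
```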
|
||||
{
|
||||
@@ -646,40 +692,47 @@
|
||||
"source": [
|
||||
"from langchain.llms import SagemakerEndpoint\n",
|
||||
"from langchain.llms.sagemaker_endpoint import LLMContentHandler\n",
|
||||
"from langchain.chains import LLMChain\n",
|
||||
"from langchain.prompts import load_prompt, PromptTemplate\n",
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"import json\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"class ContentHandler(LLMContentHandler):\n",
|
||||
" content_type = \"application/json\"\n",
|
||||
" accepts = \"application/json\"\n",
|
||||
"\n",
|
||||
" def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:\n",
|
||||
" input_str = json.dumps({\"text_inputs\": prompt, **model_kwargs})\n",
|
||||
" return input_str.encode('utf-8')\n",
|
||||
" \n",
|
||||
" input_str = json.dumps({\"text_inputs\": prompt, **model_kwargs})\n",
|
||||
" return input_str.encode(\"utf-8\")\n",
|
||||
"\n",
|
||||
" def transform_output(self, output: bytes) -> str:\n",
|
||||
" response_json = json.loads(output.read().decode(\"utf-8\"))\n",
|
||||
" return response_json['generated_texts'][0]\n",
|
||||
" return response_json[\"generated_texts\"][0]\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"content_handler = ContentHandler()\n",
|
||||
"\n",
|
||||
"#prompt template for input text\n",
|
||||
"llm_prompt = PromptTemplate(input_variables=[\"input_text\"], template=\"{input_text}\")\n",
|
||||
"template = \"\"\"From the following 'Document', precisely answer the 'Question'. Do not add any spurious information in your answer.\n",
|
||||
"\n",
|
||||
"llm_chain = LLMChain(\n",
|
||||
" llm=SagemakerEndpoint(\n",
|
||||
" endpoint_name=endpoint_name, \n",
|
||||
" region_name='us-east-1',\n",
|
||||
" model_kwargs={\"temperature\":0.97,\n",
|
||||
" \"max_length\": 200,\n",
|
||||
" \"num_return_sequences\": 3,\n",
|
||||
" \"top_k\": 50,\n",
|
||||
" \"top_p\": 0.95,\n",
|
||||
" \"do_sample\": True},\n",
|
||||
" content_handler=content_handler\n",
|
||||
" ),\n",
|
||||
" prompt=llm_prompt\n",
|
||||
"Document: John Doe, a resident of 1234 Elm Street in Springfield, recently celebrated his birthday on January 1st. Turning 43 this year, John reflected on the years gone by. He often shares memories of his younger days with his close friends through calls on his phone, (555) 123-4567. Meanwhile, during a casual evening, he received an email at johndoe@example.com reminding him of an old acquaintance's reunion. As he navigated through some old documents, he stumbled upon a paper that listed his SSN as 123-45-6789, reminding him to store it in a safer place.\n",
|
||||
"Question: {question}\n",
|
||||
"Answer:\n",
|
||||
"\"\"\"\n",
|
||||
"\n",
|
||||
"# prompt template for input text\n",
|
||||
"llm_prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
|
||||
"\n",
|
||||
"llm = SagemakerEndpoint(\n",
|
||||
" endpoint_name=endpoint_name,\n",
|
||||
" region_name=region,\n",
|
||||
" model_kwargs={\n",
|
||||
" \"temperature\": 0.95,\n",
|
||||
" \"max_length\": 200,\n",
|
||||
" \"num_return_sequences\": 3,\n",
|
||||
" \"top_k\": 50,\n",
|
||||
" \"top_p\": 0.95,\n",
|
||||
" \"do_sample\": True,\n",
|
||||
" },\n",
|
||||
" content_handler=content_handler,\n",
|
||||
")"
|
||||
]
|
||||
},
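The `ContentHandler` defines the wire format for the endpoint: `transform_input` serializes the prompt and `model_kwargs` into the JSON body the Jumpstart container expects, and `transform_output` unwraps the generated text. A quick way to inspect the request payload it produces:

```python
handler = ContentHandler()
body = handler.transform_input("Hello", {"temperature": 0.95})
print(body)  # b'{"text_inputs": "Hello", "temperature": 0.95}'
```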
|
||||
@@ -700,16 +753,30 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"moderation_config = { \n",
|
||||
" \"filters\":[ BaseModerationFilters.PII, BaseModerationFilters.TOXICITY ],\n",
|
||||
" \"pii\":{\"action\": BaseModerationActions.ALLOW, \"threshold\":0.5, \"labels\":[\"SSN\"], \"mask_character\": \"X\"},\n",
|
||||
" \"toxicity\":{\"action\": BaseModerationActions.STOP, \"threshold\":0.5},\n",
|
||||
" \"intent\":{\"action\": BaseModerationActions.ALLOW, \"threshold\":0.5,},\n",
|
||||
" }\n",
|
||||
"# define filter configs\n",
|
||||
"pii_config = ModerationPiiConfig(labels=[\"SSN\"], redact=True, mask_character=\"X\")\n",
|
||||
"\n",
|
||||
"amazon_comp_moderation = AmazonComprehendModerationChain(moderation_config=moderation_config, \n",
|
||||
" client=comprehend_client ,\n",
|
||||
" verbose=True)"
|
||||
"toxicity_config = ModerationToxicityConfig(threshold=0.5)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# define different moderation configs using the filter configs above\n",
|
||||
"moderation_config_1 = BaseModerationConfig(filters=[pii_config, toxicity_config])\n",
|
||||
"\n",
|
||||
"moderation_config_2 = BaseModerationConfig(filters=[pii_config])\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# input prompt moderation chain with callback\n",
|
||||
"amazon_comp_moderation = AmazonComprehendModerationChain(\n",
|
||||
" moderation_config=moderation_config_1,\n",
|
||||
" client=comprehend_client,\n",
|
||||
" moderation_callback=my_callback,\n",
|
||||
" verbose=True,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Output from LLM moderation chain without callback\n",
|
||||
"amazon_comp_moderation_out = AmazonComprehendModerationChain(\n",
|
||||
" moderation_config=moderation_config_2, client=comprehend_client, verbose=True\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -730,20 +797,20 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = (\n",
|
||||
" prompt \n",
|
||||
" | amazon_comp_moderation \n",
|
||||
" | {llm_chain.input_keys[0]: lambda x: x['output'] } \n",
|
||||
" | llm_chain \n",
|
||||
" | { \"input\": lambda x: x['text'] } \n",
|
||||
" | amazon_comp_moderation \n",
|
||||
" prompt\n",
|
||||
" | amazon_comp_moderation\n",
|
||||
" | {\"input\": (lambda x: x[\"output\"]) | llm}\n",
|
||||
" | amazon_comp_moderation_out\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"try:\n",
|
||||
" response = chain.invoke({\"question\": \"My AnyCompany Financial Services, LLC credit card account 1111-0000-1111-0008 has 24$ due by July 31st. Can you give me some more samples?\"})\n",
|
||||
" response = chain.invoke(\n",
|
||||
" {\"question\": \"What is John Doe's address, phone number and SSN?\"}\n",
|
||||
" )\n",
|
||||
"except Exception as e:\n",
|
||||
" print(str(e))\n",
|
||||
"else:\n",
|
||||
" print(response['output'])"
|
||||
" print(response[\"output\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1347,7 +1414,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.12"
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -14,7 +14,7 @@
|
||||
"> using both human and machine feedback. We provide support for each step in the MLOps cycle, \n",
|
||||
"> from data labeling to model monitoring.\n",
|
||||
"\n",
|
||||
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/integrations/callbacks/argilla.html\">\n",
|
||||
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/integrations/callbacks/argilla\">\n",
|
||||
" <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
|
||||
"</a>"
|
||||
]
|
||||
|
||||
@@ -122,8 +122,7 @@
|
||||
"from langchain.callbacks.confident_callback import DeepEvalCallbackHandler\n",
|
||||
"\n",
|
||||
"deepeval_callback = DeepEvalCallbackHandler(\n",
|
||||
" implementation_name=\"langchainQuickstart\",\n",
|
||||
" metrics=[answer_relevancy_metric]\n",
|
||||
" implementation_name=\"langchainQuickstart\", metrics=[answer_relevancy_metric]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -155,6 +154,7 @@
|
||||
],
|
||||
"source": [
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"\n",
|
||||
"llm = OpenAI(\n",
|
||||
" temperature=0,\n",
|
||||
" callbacks=[deepeval_callback],\n",
|
||||
@@ -227,8 +227,8 @@
|
||||
"openai_api_key = \"sk-XXX\"\n",
|
||||
"\n",
|
||||
"with open(\"state_of_the_union.txt\", \"w\") as f:\n",
|
||||
" response = requests.get(text_file_url)\n",
|
||||
" f.write(response.text)\n",
|
||||
" response = requests.get(text_file_url)\n",
|
||||
" f.write(response.text)\n",
|
||||
"\n",
|
||||
"loader = TextLoader(\"state_of_the_union.txt\")\n",
|
||||
"documents = loader.load()\n",
|
||||
@@ -239,8 +239,9 @@
|
||||
"docsearch = Chroma.from_documents(texts, embeddings)\n",
|
||||
"\n",
|
||||
"qa = RetrievalQA.from_chain_type(\n",
|
||||
" llm=OpenAI(openai_api_key=openai_api_key), chain_type=\"stuff\",\n",
|
||||
" retriever=docsearch.as_retriever()\n",
|
||||
" llm=OpenAI(openai_api_key=openai_api_key),\n",
|
||||
" chain_type=\"stuff\",\n",
|
||||
" retriever=docsearch.as_retriever(),\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Providing a new question-answering pipeline\n",
|
||||
|
||||
@@ -234,8 +234,7 @@
|
||||
" plt.ylabel(\"Value\")\n",
|
||||
" plt.title(title)\n",
|
||||
"\n",
|
||||
" plt.show()\n",
|
||||
"\n"
|
||||
" plt.show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -325,9 +324,11 @@
|
||||
" model_id=\"test_chatopenai\", model_version=\"0.1\", verbose=False\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"urls = [\"https://lilianweng.github.io/posts/2023-06-23-agent/\",\n",
|
||||
" \"https://medium.com/lyft-engineering/lyftlearn-ml-model-training-infrastructure-built-on-kubernetes-aef8218842bb\",\n",
|
||||
" \"https://blog.langchain.dev/week-of-10-2-langchain-release-notes/\"]\n",
|
||||
"urls = [\n",
|
||||
" \"https://lilianweng.github.io/posts/2023-06-23-agent/\",\n",
|
||||
" \"https://medium.com/lyft-engineering/lyftlearn-ml-model-training-infrastructure-built-on-kubernetes-aef8218842bb\",\n",
|
||||
" \"https://blog.langchain.dev/week-of-10-2-langchain-release-notes/\",\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"for url in urls:\n",
|
||||
" loader = WebBaseLoader(url)\n",
|
||||
@@ -364,7 +365,7 @@
|
||||
"plot(response.text, \"Prompt Tokens\")\n",
|
||||
"\n",
|
||||
"response = client.search_ts(\"__name__\", \"completion_tokens\", 0, int(time.time()))\n",
|
||||
"plot(response.text, \"Completion Tokens\")\n"
|
||||
"plot(response.text, \"Completion Tokens\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -97,9 +97,9 @@
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ['LABEL_STUDIO_URL'] = '<YOUR-LABEL-STUDIO-URL>' # e.g. http://localhost:8080\n",
|
||||
"os.environ['LABEL_STUDIO_API_KEY'] = '<YOUR-LABEL-STUDIO-API-KEY>'\n",
|
||||
"os.environ['OPENAI_API_KEY'] = '<YOUR-OPENAI-API-KEY>'"
|
||||
"os.environ[\"LABEL_STUDIO_URL\"] = \"<YOUR-LABEL-STUDIO-URL>\" # e.g. http://localhost:8080\n",
|
||||
"os.environ[\"LABEL_STUDIO_API_KEY\"] = \"<YOUR-LABEL-STUDIO-API-KEY>\"\n",
|
||||
"os.environ[\"OPENAI_API_KEY\"] = \"<YOUR-OPENAI-API-KEY>\""
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -174,11 +174,7 @@
|
||||
"from langchain.callbacks import LabelStudioCallbackHandler\n",
|
||||
"\n",
|
||||
"llm = OpenAI(\n",
|
||||
" temperature=0,\n",
|
||||
" callbacks=[\n",
|
||||
" LabelStudioCallbackHandler(\n",
|
||||
" project_name=\"My Project\"\n",
|
||||
" )]\n",
|
||||
" temperature=0, callbacks=[LabelStudioCallbackHandler(project_name=\"My Project\")]\n",
|
||||
")\n",
|
||||
"print(llm(\"Tell me a joke\"))"
|
||||
]
|
||||
@@ -249,16 +245,20 @@
|
||||
"from langchain.schema import HumanMessage, SystemMessage\n",
|
||||
"from langchain.callbacks import LabelStudioCallbackHandler\n",
|
||||
"\n",
|
||||
"chat_llm = ChatOpenAI(callbacks=[\n",
|
||||
" LabelStudioCallbackHandler(\n",
|
||||
" mode=\"chat\",\n",
|
||||
" project_name=\"New Project with Chat\",\n",
|
||||
" )\n",
|
||||
"])\n",
|
||||
"llm_results = chat_llm([\n",
|
||||
" SystemMessage(content=\"Always use a lot of emojis\"),\n",
|
||||
" HumanMessage(content=\"Tell me a joke\")\n",
|
||||
"])"
|
||||
"chat_llm = ChatOpenAI(\n",
|
||||
" callbacks=[\n",
|
||||
" LabelStudioCallbackHandler(\n",
|
||||
" mode=\"chat\",\n",
|
||||
" project_name=\"New Project with Chat\",\n",
|
||||
" )\n",
|
||||
" ]\n",
|
||||
")\n",
|
||||
"llm_results = chat_llm(\n",
|
||||
" [\n",
|
||||
" SystemMessage(content=\"Always use a lot of emojis\"),\n",
|
||||
" HumanMessage(content=\"Tell me a joke\"),\n",
|
||||
" ]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -304,7 +304,8 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ls = LabelStudioCallbackHandler(project_config='''\n",
|
||||
"ls = LabelStudioCallbackHandler(\n",
|
||||
" project_config=\"\"\"\n",
|
||||
"<View>\n",
|
||||
"<Text name=\"prompt\" value=\"$prompt\"/>\n",
|
||||
"<TextArea name=\"response\" toName=\"prompt\"/>\n",
|
||||
@@ -315,7 +316,8 @@
|
||||
" <Choice value=\"Negative\"/>\n",
|
||||
"</Choices>\n",
|
||||
"</View>\n",
|
||||
"''')"
|
||||
"\"\"\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -105,19 +105,19 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#LLM Hyperparameters\n",
|
||||
"# LLM Hyperparameters\n",
|
||||
"HPARAMS = {\n",
|
||||
" \"temperature\": 0.1,\n",
|
||||
" \"model_name\": \"text-davinci-003\",\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"#Bucket used to save prompt logs (Use `None` is used to save the default bucket or otherwise change it)\n",
|
||||
"# Bucket used to save prompt logs (Use `None` is used to save the default bucket or otherwise change it)\n",
|
||||
"BUCKET_NAME = None\n",
|
||||
"\n",
|
||||
"#Experiment name\n",
|
||||
"# Experiment name\n",
|
||||
"EXPERIMENT_NAME = \"langchain-sagemaker-tracker\"\n",
|
||||
"\n",
|
||||
"#Create SageMaker Session with the given bucket\n",
|
||||
"# Create SageMaker Session with the given bucket\n",
|
||||
"session = Session(default_bucket=BUCKET_NAME)"
|
||||
]
|
||||
},
|
||||
@@ -150,8 +150,9 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run:\n",
|
||||
"\n",
|
||||
"with Run(\n",
|
||||
" experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session\n",
|
||||
") as run:\n",
|
||||
" # Create SageMaker Callback\n",
|
||||
" sagemaker_callback = SageMakerCallbackHandler(run)\n",
|
||||
"\n",
|
||||
@@ -209,8 +210,9 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run:\n",
|
||||
"\n",
|
||||
"with Run(\n",
|
||||
" experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session\n",
|
||||
") as run:\n",
|
||||
" # Create SageMaker Callback\n",
|
||||
" sagemaker_callback = SageMakerCallbackHandler(run)\n",
|
||||
"\n",
|
||||
@@ -228,7 +230,9 @@
|
||||
" chain2 = LLMChain(llm=llm, prompt=prompt_template2, callbacks=[sagemaker_callback])\n",
|
||||
"\n",
|
||||
" # Create Sequential chain\n",
|
||||
" overall_chain = SimpleSequentialChain(chains=[chain1, chain2], callbacks=[sagemaker_callback])\n",
|
||||
" overall_chain = SimpleSequentialChain(\n",
|
||||
" chains=[chain1, chain2], callbacks=[sagemaker_callback]\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
" # Run overall sequential chain\n",
|
||||
" overall_chain.run(**INPUT_VARIABLES)\n",
|
||||
@@ -267,8 +271,9 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run:\n",
|
||||
"\n",
|
||||
"with Run(\n",
|
||||
" experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session\n",
|
||||
") as run:\n",
|
||||
" # Create SageMaker Callback\n",
|
||||
" sagemaker_callback = SageMakerCallbackHandler(run)\n",
|
||||
"\n",
|
||||
@@ -279,7 +284,9 @@
|
||||
" tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callbacks=[sagemaker_callback])\n",
|
||||
"\n",
|
||||
" # Initialize agent with all the tools\n",
|
||||
" agent = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", callbacks=[sagemaker_callback])\n",
|
||||
" agent = initialize_agent(\n",
|
||||
" tools, llm, agent=\"zero-shot-react-description\", callbacks=[sagemaker_callback]\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
" # Run agent\n",
|
||||
" agent.run(input=PROMPT_TEMPLATE)\n",
|
||||
@@ -309,10 +316,10 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Load\n",
|
||||
"# Load\n",
|
||||
"logs = ExperimentAnalytics(experiment_name=EXPERIMENT_NAME)\n",
|
||||
"\n",
|
||||
"#Convert as pandas dataframe\n",
|
||||
"# Convert as pandas dataframe\n",
|
||||
"df = logs.dataframe(force_refresh=True)\n",
|
||||
"\n",
|
||||
"print(df.shape)\n",
|
||||
|
||||
@@ -284,7 +284,7 @@
|
||||
" project=\"default\",\n",
|
||||
" tags=[\"chat model\"],\n",
|
||||
" user_id=\"user-id-1234\",\n",
|
||||
" some_metadata={\"hello\": [1, 2]}\n",
|
||||
" some_metadata={\"hello\": [1, 2]},\n",
|
||||
" )\n",
|
||||
" ]\n",
|
||||
")"
|
||||
|
||||
@@ -46,7 +46,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"model = AnthropicFunctions(model='claude-2')"
|
||||
"model = AnthropicFunctions(model=\"claude-2\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -66,26 +66,23 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"functions=[\n",
|
||||
"functions = [\n",
|
||||
" {\n",
|
||||
" \"name\": \"get_current_weather\",\n",
|
||||
" \"description\": \"Get the current weather in a given location\",\n",
|
||||
" \"parameters\": {\n",
|
||||
" \"type\": \"object\",\n",
|
||||
" \"properties\": {\n",
|
||||
" \"location\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"description\": \"The city and state, e.g. San Francisco, CA\"\n",
|
||||
" },\n",
|
||||
" \"unit\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"enum\": [\"celsius\", \"fahrenheit\"]\n",
|
||||
" }\n",
|
||||
" \"name\": \"get_current_weather\",\n",
|
||||
" \"description\": \"Get the current weather in a given location\",\n",
|
||||
" \"parameters\": {\n",
|
||||
" \"type\": \"object\",\n",
|
||||
" \"properties\": {\n",
|
||||
" \"location\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
|
||||
" },\n",
|
||||
" \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]},\n",
|
||||
" },\n",
|
||||
" \"required\": [\"location\"],\n",
|
||||
" },\n",
|
||||
" \"required\": [\"location\"]\n",
|
||||
" }\n",
|
||||
" }\n",
|
||||
" ]"
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -106,8 +103,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"response = model.predict_messages(\n",
|
||||
" [HumanMessage(content=\"whats the weater in boston?\")], \n",
|
||||
" functions=functions\n",
|
||||
" [HumanMessage(content=\"whats the weater in boston?\")], functions=functions\n",
|
||||
")"
|
||||
]
|
||||
},
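The model's tool choice comes back on the returned message's `additional_kwargs`, following the OpenAI-style function-calling convention that `AnthropicFunctions` emulates. Reading it might look like this sketch (the exact argument values are whatever the model produced):

```python
import json

# `response` is the AIMessage returned by predict_messages above
fn_call = response.additional_kwargs.get("function_call", {})
print(fn_call.get("name"))                         # e.g. "get_current_weather"
print(json.loads(fn_call.get("arguments", "{}")))  # e.g. {"location": "Boston, MA"}
```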
|
||||
@@ -150,6 +146,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chains import create_extraction_chain\n",
|
||||
"\n",
|
||||
"schema = {\n",
|
||||
" \"properties\": {\n",
|
||||
" \"name\": {\"type\": \"string\"},\n",
|
||||
|
||||
@@ -102,19 +102,15 @@
|
||||
"from langchain.schema import SystemMessage, HumanMessage\n",
|
||||
"\n",
|
||||
"messages = [\n",
|
||||
" SystemMessage(\n",
|
||||
" content=\"You are a helpful AI that shares everything you know.\"\n",
|
||||
" ),\n",
|
||||
" SystemMessage(content=\"You are a helpful AI that shares everything you know.\"),\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Tell me technical facts about yourself. Are you a transformer model? How many billions of parameters do you have?\"\n",
|
||||
" ),\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"async def get_msgs():\n",
|
||||
" tasks = [\n",
|
||||
" chat.apredict_messages(messages)\n",
|
||||
" for chat in chats.values()\n",
|
||||
" ]\n",
|
||||
" tasks = [chat.apredict_messages(messages) for chat in chats.values()]\n",
|
||||
" responses = await asyncio.gather(*tasks)\n",
|
||||
" return dict(zip(chats.keys(), responses))"
|
||||
]
|
||||
@@ -194,10 +190,10 @@
|
||||
"response_dict = asyncio.run(get_msgs())\n",
|
||||
"\n",
|
||||
"for model_name, response in response_dict.items():\n",
|
||||
" print(f'\\t{model_name}')\n",
|
||||
" print(f\"\\t{model_name}\")\n",
|
||||
" print()\n",
|
||||
" print(response.content)\n",
|
||||
" print('\\n---\\n')"
|
||||
" print(\"\\n---\\n\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
|
||||
@@ -105,7 +105,7 @@
|
||||
"source": [
|
||||
"BASE_URL = \"https://{endpoint}.openai.azure.com\"\n",
|
||||
"API_KEY = \"...\"\n",
|
||||
"DEPLOYMENT_NAME = \"gpt-35-turbo\" # in Azure, this deployment has version 0613 - input and output tokens are counted separately"
|
||||
"DEPLOYMENT_NAME = \"gpt-35-turbo\" # in Azure, this deployment has version 0613 - input and output tokens are counted separately"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -140,7 +140,9 @@
|
||||
" )\n",
|
||||
" ]\n",
|
||||
" )\n",
|
||||
" print(f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\") # without specifying the model version, flat-rate 0.002 USD per 1k input and output tokens is used\n"
|
||||
" print(\n",
|
||||
" f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\"\n",
|
||||
" ) # without specifying the model version, flat-rate 0.002 USD per 1k input and output tokens is used"
|
||||
]
|
||||
},
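To make the flat-rate math concrete: without `model_version`, cost is simply total tokens divided by 1,000, times 0.002 USD. A self-contained check with hypothetical token counts:

```python
prompt_tokens, completion_tokens = 500, 250  # hypothetical counts
flat_rate_per_1k = 0.002                     # USD per 1k tokens when model_version is unset
cost = (prompt_tokens + completion_tokens) / 1000 * flat_rate_per_1k
print(f"Total Cost (USD): ${cost:.6f}")      # Total Cost (USD): $0.001500
```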
|
||||
{
|
||||
@@ -172,7 +174,7 @@
|
||||
" deployment_name=DEPLOYMENT_NAME,\n",
|
||||
" openai_api_key=API_KEY,\n",
|
||||
" openai_api_type=\"azure\",\n",
|
||||
" model_version=\"0613\"\n",
|
||||
" model_version=\"0613\",\n",
|
||||
")\n",
|
||||
"with get_openai_callback() as cb:\n",
|
||||
" model0613(\n",
|
||||
@@ -182,7 +184,7 @@
|
||||
" )\n",
|
||||
" ]\n",
|
||||
" )\n",
|
||||
" print(f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\")\n"
|
||||
" print(f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -67,10 +67,10 @@
|
||||
" endpoint_url=\"https://<your-endpoint>.<your_region>.inference.ml.azure.com/score\",\n",
|
||||
" endpoint_api_key=\"my-api-key\",\n",
|
||||
" content_formatter=LlamaContentFormatter,\n",
|
||||
"))\n",
|
||||
"response = chat(messages=[\n",
|
||||
" HumanMessage(content=\"Will the Collatz conjecture ever be solved?\")\n",
|
||||
"])\n",
|
||||
")\n",
|
||||
"response = chat(\n",
|
||||
" messages=[HumanMessage(content=\"Will the Collatz conjecture ever be solved?\")]\n",
|
||||
")\n",
|
||||
"response"
|
||||
]
|
||||
}
|
||||
@@ -91,9 +91,9 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.11"
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
|
||||
@@ -36,8 +36,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatBaichuan(\n",
|
||||
" baichuan_api_key='YOUR_API_KEY',\n",
|
||||
" baichuan_secret_key='YOUR_SECRET_KEY'\n",
|
||||
" baichuan_api_key=\"YOUR_API_KEY\", baichuan_secret_key=\"YOUR_SECRET_KEY\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -72,9 +71,7 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chat([\n",
|
||||
" HumanMessage(content='我日薪8块钱,请问在闰年的二月,我月薪多少')\n",
|
||||
"])"
|
||||
"chat([HumanMessage(content=\"我日薪8块钱,请问在闰年的二月,我月薪多少\")])"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -92,9 +89,9 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatBaichuan(\n",
|
||||
" baichuan_api_key='YOUR_API_KEY',\n",
|
||||
" baichuan_secret_key='YOUR_SECRET_KEY',\n",
|
||||
" streaming=True\n",
|
||||
" baichuan_api_key=\"YOUR_API_KEY\",\n",
|
||||
" baichuan_secret_key=\"YOUR_SECRET_KEY\",\n",
|
||||
" streaming=True,\n",
|
||||
")"
|
||||
],
|
||||
"metadata": {
|
||||
@@ -119,9 +116,7 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chat([\n",
|
||||
" HumanMessage(content='我日薪8块钱,请问在闰年的二月,我月薪多少')\n",
|
||||
"])"
|
||||
"chat([HumanMessage(content=\"我日薪8块钱,请问在闰年的二月,我月薪多少\")])"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
|
||||
@@ -59,16 +59,17 @@
|
||||
],
|
||||
"source": [
|
||||
"\"\"\"For basic init and call\"\"\"\n",
|
||||
"from langchain.chat_models import QianfanChatEndpoint \n",
|
||||
"from langchain.chat_models import QianfanChatEndpoint\n",
|
||||
"from langchain.chat_models.base import HumanMessage\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"QIANFAN_AK\"] = \"your_ak\"\n",
|
||||
"os.environ[\"QIANFAN_SK\"] = \"your_sk\"\n",
|
||||
"\n",
|
||||
"chat = QianfanChatEndpoint(\n",
|
||||
" streaming=True, \n",
|
||||
" )\n",
|
||||
"res = chat([HumanMessage(content=\"write a funny joke\")])\n"
|
||||
" streaming=True,\n",
|
||||
")\n",
|
||||
"res = chat([HumanMessage(content=\"write a funny joke\")])"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -112,7 +113,6 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
" \n",
|
||||
"from langchain.chat_models import QianfanChatEndpoint\n",
|
||||
"from langchain.schema import HumanMessage\n",
|
||||
"\n",
|
||||
@@ -125,15 +125,22 @@
|
||||
"\n",
|
||||
"\n",
|
||||
"async def run_aio_generate():\n",
|
||||
" resp = await chatLLM.agenerate(messages=[[HumanMessage(content=\"write a 20 words sentence about sea.\")]])\n",
|
||||
" resp = await chatLLM.agenerate(\n",
|
||||
" messages=[[HumanMessage(content=\"write a 20 words sentence about sea.\")]]\n",
|
||||
" )\n",
|
||||
" print(resp)\n",
|
||||
" \n",
|
||||
"\n",
|
||||
"\n",
|
||||
"await run_aio_generate()\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"async def run_aio_stream():\n",
|
||||
" async for res in chatLLM.astream([HumanMessage(content=\"write a 20 words sentence about sea.\")]):\n",
|
||||
" async for res in chatLLM.astream(\n",
|
||||
" [HumanMessage(content=\"write a 20 words sentence about sea.\")]\n",
|
||||
" ):\n",
|
||||
" print(\"astream\", res)\n",
|
||||
" \n",
|
||||
"\n",
|
||||
"\n",
|
||||
"await run_aio_stream()"
|
||||
]
|
||||
},
|
||||
@@ -172,9 +179,9 @@
|
||||
],
|
||||
"source": [
|
||||
"chatBloom = QianfanChatEndpoint(\n",
|
||||
" streaming=True, \n",
|
||||
" model=\"BLOOMZ-7B\",\n",
|
||||
" )\n",
|
||||
" streaming=True,\n",
|
||||
" model=\"BLOOMZ-7B\",\n",
|
||||
")\n",
|
||||
"res = chatBloom([HumanMessage(content=\"hi\")])\n",
|
||||
"print(res)"
|
||||
]
|
||||
@@ -217,7 +224,10 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"res = chat.stream([HumanMessage(content=\"hi\")], **{'top_p': 0.4, 'temperature': 0.1, 'penalty_score': 1})\n",
|
||||
"res = chat.stream(\n",
|
||||
" [HumanMessage(content=\"hi\")],\n",
|
||||
" **{\"top_p\": 0.4, \"temperature\": 0.1, \"penalty_score\": 1}\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"for r in res:\n",
|
||||
" print(r)"
|
||||
|
||||
@@ -1,139 +1,139 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "bf733a38-db84-4363-89e2-de6735c37230",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Bedrock Chat\n",
|
||||
"\n",
|
||||
"[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d51edc81",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install boto3"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import BedrockChat\n",
|
||||
"from langchain.schema import HumanMessage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = BedrockChat(model_id=\"anthropic.claude-v2\", model_kwargs={\"temperature\":0.1})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=\" Voici la traduction en français : J'adore programmer.\", additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Translate this sentence from English to French. I love programming.\"\n",
|
||||
" )\n",
|
||||
"]\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "a4a4f4d4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### For BedrockChat with Streaming"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "c253883f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
|
||||
"\n",
|
||||
"chat = BedrockChat(\n",
|
||||
" model_id=\"anthropic.claude-v2\",\n",
|
||||
" streaming=True,\n",
|
||||
" callbacks=[StreamingStdOutCallbackHandler()],\n",
|
||||
" model_kwargs={\"temperature\": 0.1},\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d9e52838",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Translate this sentence from English to French. I love programming.\"\n",
|
||||
" )\n",
|
||||
"]\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.9"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "bf733a38-db84-4363-89e2-de6735c37230",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Bedrock Chat\n",
|
||||
"\n",
|
||||
"[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d51edc81",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install boto3"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import BedrockChat\n",
|
||||
"from langchain.schema import HumanMessage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = BedrockChat(model_id=\"anthropic.claude-v2\", model_kwargs={\"temperature\": 0.1})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=\" Voici la traduction en français : J'adore programmer.\", additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Translate this sentence from English to French. I love programming.\"\n",
|
||||
" )\n",
|
||||
"]\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "a4a4f4d4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### For BedrockChat with Streaming"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "c253883f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
|
||||
"\n",
|
||||
"chat = BedrockChat(\n",
|
||||
" model_id=\"anthropic.claude-v2\",\n",
|
||||
" streaming=True,\n",
|
||||
" callbacks=[StreamingStdOutCallbackHandler()],\n",
|
||||
" model_kwargs={\"temperature\": 0.1},\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d9e52838",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Translate this sentence from English to French. I love programming.\"\n",
|
||||
" )\n",
|
||||
"]\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.9"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
|
||||
@@ -55,11 +55,7 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"knock knock\"\n",
|
||||
" )\n",
|
||||
"]\n",
|
||||
"messages = [HumanMessage(content=\"knock knock\")]\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
},
|
||||
|
||||
@@ -26,7 +26,9 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ErnieBotChat(ernie_client_id='YOUR_CLIENT_ID', ernie_client_secret='YOUR_CLIENT_SECRET')"
|
||||
"chat = ErnieBotChat(\n",
|
||||
" ernie_client_id=\"YOUR_CLIENT_ID\", ernie_client_secret=\"YOUR_CLIENT_SECRET\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -57,9 +59,7 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chat([\n",
|
||||
" HumanMessage(content='hello there, who are you?')\n",
|
||||
"])"
|
||||
"chat([HumanMessage(content=\"hello there, who are you?\")])"
|
||||
]
|
||||
}
|
||||
],
|
||||
|
||||
@@ -67,15 +67,15 @@
|
||||
"from langchain.schema import SystemMessage, HumanMessage\n",
|
||||
"\n",
|
||||
"messages = [\n",
|
||||
" SystemMessage(\n",
|
||||
" content=\"You are a helpful AI that shares everything you know.\"\n",
|
||||
" ),\n",
|
||||
" SystemMessage(content=\"You are a helpful AI that shares everything you know.\"),\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Tell me technical facts about yourself. Are you a transformer model? How many billions of parameters do you have?\"\n",
|
||||
" ),\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"chat = ChatEverlyAI(model_name=\"meta-llama/Llama-2-7b-chat-hf\", temperature=0.3, max_tokens=64)\n",
|
||||
"chat = ChatEverlyAI(\n",
|
||||
" model_name=\"meta-llama/Llama-2-7b-chat-hf\", temperature=0.3, max_tokens=64\n",
|
||||
")\n",
|
||||
"print(chat(messages).content)"
|
||||
]
|
||||
},
|
||||
@@ -121,15 +121,17 @@
|
||||
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
|
||||
"\n",
|
||||
"messages = [\n",
|
||||
" SystemMessage(\n",
|
||||
" content=\"You are a humorous AI that delights people.\"\n",
|
||||
" ),\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Tell me a joke?\"\n",
|
||||
" ),\n",
|
||||
" SystemMessage(content=\"You are a humorous AI that delights people.\"),\n",
|
||||
" HumanMessage(content=\"Tell me a joke?\"),\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"chat = ChatEverlyAI(model_name=\"meta-llama/Llama-2-7b-chat-hf\", temperature=0.3, max_tokens=64, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])\n",
|
||||
"chat = ChatEverlyAI(\n",
|
||||
" model_name=\"meta-llama/Llama-2-7b-chat-hf\",\n",
|
||||
" temperature=0.3,\n",
|
||||
" max_tokens=64,\n",
|
||||
" streaming=True,\n",
|
||||
" callbacks=[StreamingStdOutCallbackHandler()],\n",
|
||||
")\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
},
|
||||
@@ -177,15 +179,17 @@
|
||||
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
|
||||
"\n",
|
||||
"messages = [\n",
|
||||
" SystemMessage(\n",
|
||||
" content=\"You are a humorous AI that delights people.\"\n",
|
||||
" ),\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Tell me a joke?\"\n",
|
||||
" ),\n",
|
||||
" SystemMessage(content=\"You are a humorous AI that delights people.\"),\n",
|
||||
" HumanMessage(content=\"Tell me a joke?\"),\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"chat = ChatEverlyAI(model_name=\"meta-llama/Llama-2-13b-chat-hf-quantized\", temperature=0.3, max_tokens=128, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])\n",
|
||||
"chat = ChatEverlyAI(\n",
|
||||
" model_name=\"meta-llama/Llama-2-13b-chat-hf-quantized\",\n",
|
||||
" temperature=0.3,\n",
|
||||
" max_tokens=128,\n",
|
||||
" streaming=True,\n",
|
||||
" callbacks=[StreamingStdOutCallbackHandler()],\n",
|
||||
")\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
}
|
||||
|
||||
@@ -27,7 +27,7 @@
|
||||
"source": [
|
||||
"from langchain.chat_models.fireworks import ChatFireworks\n",
|
||||
"from langchain.schema import SystemMessage, HumanMessage\n",
|
||||
"import os\n"
|
||||
"import os"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -56,7 +56,7 @@
|
||||
" os.environ[\"FIREWORKS_API_KEY\"] = getpass.getpass(\"Fireworks API Key:\")\n",
|
||||
"\n",
|
||||
"# Initialize a Fireworks chat model\n",
|
||||
"chat = ChatFireworks(model=\"accounts/fireworks/models/llama-v2-13b-chat\")\n"
|
||||
"chat = ChatFireworks(model=\"accounts/fireworks/models/llama-v2-13b-chat\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -91,7 +91,7 @@
|
||||
"system_message = SystemMessage(content=\"You are to chat with the user.\")\n",
|
||||
"human_message = HumanMessage(content=\"Who are you?\")\n",
|
||||
"\n",
|
||||
"chat([system_message, human_message])\n"
|
||||
"chat([system_message, human_message])"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -113,10 +113,13 @@
|
||||
],
|
||||
"source": [
|
||||
"# Setting additional parameters: temperature, max_tokens, top_p\n",
|
||||
"chat = ChatFireworks(model=\"accounts/fireworks/models/llama-v2-13b-chat\", model_kwargs={\"temperature\":1, \"max_tokens\": 20, \"top_p\": 1})\n",
|
||||
"chat = ChatFireworks(\n",
|
||||
" model=\"accounts/fireworks/models/llama-v2-13b-chat\",\n",
|
||||
" model_kwargs={\"temperature\": 1, \"max_tokens\": 20, \"top_p\": 1},\n",
|
||||
")\n",
|
||||
"system_message = SystemMessage(content=\"You are to chat with the user.\")\n",
|
||||
"human_message = HumanMessage(content=\"How's the weather today?\")\n",
|
||||
"chat([system_message, human_message])\n"
|
||||
"chat([system_message, human_message])"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -147,12 +150,17 @@
|
||||
"from langchain.schema.runnable import RunnablePassthrough\n",
|
||||
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
|
||||
"\n",
|
||||
"llm = ChatFireworks(model=\"accounts/fireworks/models/llama-v2-13b-chat\", model_kwargs={\"temperature\":0, \"max_tokens\":64, \"top_p\":1.0})\n",
|
||||
"prompt = ChatPromptTemplate.from_messages([\n",
|
||||
" (\"system\", \"You are a helpful chatbot that speaks like a pirate.\"),\n",
|
||||
" MessagesPlaceholder(variable_name=\"history\"),\n",
|
||||
" (\"human\", \"{input}\")\n",
|
||||
"])\n"
|
||||
"llm = ChatFireworks(\n",
|
||||
" model=\"accounts/fireworks/models/llama-v2-13b-chat\",\n",
|
||||
" model_kwargs={\"temperature\": 0, \"max_tokens\": 64, \"top_p\": 1.0},\n",
|
||||
")\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [\n",
|
||||
" (\"system\", \"You are a helpful chatbot that speaks like a pirate.\"),\n",
|
||||
" MessagesPlaceholder(variable_name=\"history\"),\n",
|
||||
" (\"human\", \"{input}\"),\n",
|
||||
" ]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -182,7 +190,7 @@
|
||||
],
|
||||
"source": [
|
||||
"memory = ConversationBufferMemory(return_messages=True)\n",
|
||||
"memory.load_memory_variables({})\n"
|
||||
"memory.load_memory_variables({})"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -200,9 +208,13 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = RunnablePassthrough.assign(\n",
|
||||
" history=memory.load_memory_variables | (lambda x: x[\"history\"])\n",
|
||||
") | prompt | llm.bind(stop=[\"\\n\\n\"])\n"
|
||||
"chain = (\n",
|
||||
" RunnablePassthrough.assign(\n",
|
||||
" history=memory.load_memory_variables | (lambda x: x[\"history\"])\n",
|
||||
" )\n",
|
||||
" | prompt\n",
|
||||
" | llm.bind(stop=[\"\\n\\n\"])\n",
|
||||
")"
|
||||
]
|
||||
},
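For clarity, `RunnablePassthrough.assign(...)` keeps the original input dict and adds a `history` key before the prompt sees it; conceptually it expands to something like:

```python
# Conceptual expansion of RunnablePassthrough.assign(history=...)
inputs = {"input": "hi im bob"}
history = memory.load_memory_variables(inputs)["history"]  # past chat messages
prompt_input = {**inputs, "history": history}              # what `prompt` receives
```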
|
||||
{
|
||||
@@ -233,7 +245,7 @@
|
||||
"source": [
|
||||
"inputs = {\"input\": \"hi im bob\"}\n",
|
||||
"response = chain.invoke(inputs)\n",
|
||||
"response\n"
|
||||
"response"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -264,7 +276,7 @@
|
||||
],
|
||||
"source": [
|
||||
"memory.save_context(inputs, {\"output\": response.content})\n",
|
||||
"memory.load_memory_variables({})\n"
|
||||
"memory.load_memory_variables({})"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -294,7 +306,7 @@
|
||||
],
|
||||
"source": [
|
||||
"inputs = {\"input\": \"whats my name\"}\n",
|
||||
"chain.invoke(inputs)\n"
|
||||
"chain.invoke(inputs)"
|
||||
]
|
||||
}
|
||||
],
|
||||
|
||||
112
docs/docs/integrations/chat/gigachat.ipynb
Normal file
@@ -0,0 +1,112 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"# GigaChat\n",
|
||||
"This notebook shows how to use LangChain with [GigaChat](https://developers.sber.ru/portal/products/gigachat).\n",
|
||||
"To use you need to install ```gigachat``` python package."
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"metadata": {
|
||||
"collapsed": true
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# !pip install gigachat"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"To get GigaChat credentials you need to [create account](https://developers.sber.ru/studio/login) and [get access to API](https://developers.sber.ru/docs/ru/gigachat/api/integration)\n",
|
||||
"## Example"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"from getpass import getpass\n",
|
||||
"\n",
|
||||
"os.environ[\"GIGACHAT_CREDENTIALS\"] = getpass()"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import GigaChat\n",
|
||||
"\n",
|
||||
"chat = GigaChat(verify_ssl_certs=False)"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 31,
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"What do you get when you cross a goat and a skunk? A smelly goat!\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.schema import SystemMessage, HumanMessage\n",
|
||||
"\n",
|
||||
"messages = [\n",
|
||||
" SystemMessage(\n",
|
||||
" content=\"You are a helpful AI that shares everything you know. Talk in English.\"\n",
|
||||
" ),\n",
|
||||
" HumanMessage(content=\"Tell me a joke\"),\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"print(chat(messages).content)"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 2
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython2",
|
||||
"version": "2.7.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 0
|
||||
}
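The notebook does not show streaming; assuming `GigaChat` follows the standard LangChain chat-model contract for `streaming` and `callbacks`, a streaming variant would look roughly like this (unverified sketch):

```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import GigaChat
from langchain.schema import HumanMessage

# Assumes GigaChat accepts the common streaming/callbacks parameters
chat = GigaChat(
    verify_ssl_certs=False,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
chat([HumanMessage(content="Tell me a joke")])
```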
|
||||
@@ -5,7 +5,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# GCP Vertex AI \n",
|
||||
"# Google Cloud Vertex AI \n",
|
||||
"\n",
|
||||
"Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
|
||||
"\n",
|
||||
@@ -61,9 +61,7 @@
|
||||
"source": [
|
||||
"system = \"You are a helpful assistant who translate English to French\"\n",
|
||||
"human = \"Translate this sentence from English to French. I love programming.\"\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [(\"system\", system), (\"human\", human)]\n",
|
||||
")\n",
|
||||
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
|
||||
"messages = prompt.format_messages()"
|
||||
]
|
||||
},
|
||||
@@ -100,11 +98,11 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"system = \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
|
||||
"system = (\n",
|
||||
" \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
|
||||
")\n",
|
||||
"human = \"{text}\"\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [(\"system\", system), (\"human\", human)]\n",
|
||||
")"
|
||||
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -126,7 +124,11 @@
|
||||
"source": [
|
||||
"chain = prompt | chat\n",
|
||||
"chain.invoke(\n",
|
||||
" {\"input_language\": \"English\", \"output_language\": \"Japanese\", \"text\": \"I love programming\"}\n",
|
||||
" {\n",
|
||||
" \"input_language\": \"English\",\n",
|
||||
" \"output_language\": \"Japanese\",\n",
|
||||
" \"text\": \"I love programming\",\n",
|
||||
" }\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -158,9 +160,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatVertexAI(\n",
|
||||
" model_name=\"codechat-bison\",\n",
|
||||
" max_output_tokens=1000,\n",
|
||||
" temperature=0.5\n",
|
||||
" model_name=\"codechat-bison\", max_output_tokens=1000, temperature=0.5\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -208,6 +208,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import asyncio\n",
|
||||
"\n",
|
||||
"# import nest_asyncio\n",
|
||||
"# nest_asyncio.apply()"
|
||||
]
|
||||
@@ -257,7 +258,15 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"asyncio.run(chain.ainvoke({\"input_language\": \"English\", \"output_language\": \"Sanskrit\", \"text\": \"I love programming\"}))"
|
||||
"asyncio.run(\n",
|
||||
" chain.ainvoke(\n",
|
||||
" {\n",
|
||||
" \"input_language\": \"English\",\n",
|
||||
" \"output_language\": \"Sanskrit\",\n",
|
||||
" \"text\": \"I love programming\",\n",
|
||||
" }\n",
|
||||
" )\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -306,7 +315,9 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"prompt = ChatPromptTemplate.from_messages([(\"human\", \"List out the 15 most populous countries in the world\")])\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [(\"human\", \"List out the 15 most populous countries in the world\")]\n",
|
||||
")\n",
|
||||
"messages = prompt.format_messages()\n",
|
||||
"for chunk in chat.stream(messages):\n",
|
||||
" sys.stdout.write(chunk.content)\n",
|
||||
|
||||
@@ -36,9 +36,9 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatHunyuan(\n",
|
||||
" hunyuan_app_id='YOUR_APP_ID',\n",
|
||||
" hunyuan_secret_id='YOUR_SECRET_ID',\n",
|
||||
" hunyuan_secret_key='YOUR_SECRET_KEY',\n",
|
||||
" hunyuan_app_id=\"YOUR_APP_ID\",\n",
|
||||
" hunyuan_secret_id=\"YOUR_SECRET_ID\",\n",
|
||||
" hunyuan_secret_key=\"YOUR_SECRET_KEY\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -62,9 +62,13 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chat([\n",
|
||||
" HumanMessage(content='You are a helpful assistant that translates English to French.Translate this sentence from English to French. I love programming.')\n",
|
||||
"])"
|
||||
"chat(\n",
|
||||
" [\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"You are a helpful assistant that translates English to French.Translate this sentence from English to French. I love programming.\"\n",
|
||||
" )\n",
|
||||
" ]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -82,9 +86,9 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatHunyuan(\n",
|
||||
" hunyuan_app_id='YOUR_APP_ID',\n",
|
||||
" hunyuan_secret_id='YOUR_SECRET_ID',\n",
|
||||
" hunyuan_secret_key='YOUR_SECRET_KEY',\n",
|
||||
" hunyuan_app_id=\"YOUR_APP_ID\",\n",
|
||||
" hunyuan_secret_id=\"YOUR_SECRET_ID\",\n",
|
||||
" hunyuan_secret_key=\"YOUR_SECRET_KEY\",\n",
|
||||
" streaming=True,\n",
|
||||
")"
|
||||
],
|
||||
@@ -110,9 +114,13 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chat([\n",
|
||||
" HumanMessage(content='You are a helpful assistant that translates English to French.Translate this sentence from English to French. I love programming.')\n",
|
||||
"])"
|
||||
"chat(\n",
|
||||
" [\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"You are a helpful assistant that translates English to French.Translate this sentence from English to French. I love programming.\"\n",
|
||||
" )\n",
|
||||
" ]\n",
|
||||
")"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
|
||||
@@ -96,7 +96,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatKonko(max_tokens=400, model = 'meta-llama/Llama-2-13b-chat-hf')"
|
||||
"chat = ChatKonko(max_tokens=400, model=\"meta-llama/Llama-2-13b-chat-hf\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -117,12 +117,8 @@
|
||||
],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" SystemMessage(\n",
|
||||
" content=\"You are a helpful assistant.\"\n",
|
||||
" ),\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Explain Big Bang Theory briefly\"\n",
|
||||
" ),\n",
|
||||
" SystemMessage(content=\"You are a helpful assistant.\"),\n",
|
||||
" HumanMessage(content=\"Explain Big Bang Theory briefly\"),\n",
|
||||
"]\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
|
||||
@@ -28,7 +28,7 @@
|
||||
"from llamaapi import LlamaAPI\n",
|
||||
"\n",
|
||||
"# Replace 'Your_API_Token' with your actual API token\n",
|
||||
"llama = LlamaAPI('Your_API_Token')"
|
||||
"llama = LlamaAPI(\"Your_API_Token\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -71,9 +71,15 @@
|
||||
"\n",
|
||||
"schema = {\n",
|
||||
" \"properties\": {\n",
|
||||
" \"sentiment\": {\"type\": \"string\", 'description': 'the sentiment encountered in the passage'},\n",
|
||||
" \"aggressiveness\": {\"type\": \"integer\", 'description': 'a 0-10 score of how aggressive the passage is'},\n",
|
||||
" \"language\": {\"type\": \"string\", 'description': 'the language of the passage'},\n",
|
||||
" \"sentiment\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"description\": \"the sentiment encountered in the passage\",\n",
|
||||
" },\n",
|
||||
" \"aggressiveness\": {\n",
|
||||
" \"type\": \"integer\",\n",
|
||||
" \"description\": \"a 0-10 score of how aggressive the passage is\",\n",
|
||||
" },\n",
|
||||
" \"language\": {\"type\": \"string\", \"description\": \"the language of the passage\"},\n",
|
||||
" }\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
|
||||
@@ -61,9 +61,12 @@
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOllama\n",
|
||||
"from langchain.callbacks.manager import CallbackManager\n",
|
||||
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler \n",
|
||||
"chat_model = ChatOllama(model=\"llama2:7b-chat\", \n",
|
||||
" callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]))"
|
||||
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
|
||||
"\n",
|
||||
"chat_model = ChatOllama(\n",
|
||||
" model=\"llama2:7b-chat\",\n",
|
||||
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -112,9 +115,7 @@
|
||||
"source": [
|
||||
"from langchain.schema import HumanMessage\n",
|
||||
"\n",
|
||||
"messages = [\n",
|
||||
" HumanMessage(content=\"Tell me about the history of AI\")\n",
|
||||
"]\n",
|
||||
"messages = [HumanMessage(content=\"Tell me about the history of AI\")]\n",
|
||||
"chat_model(messages)"
|
||||
]
|
||||
},
|
||||
@@ -151,10 +152,12 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.document_loaders import WebBaseLoader\n",
|
||||
"\n",
|
||||
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
|
||||
"data = loader.load()\n",
|
||||
"\n",
|
||||
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
|
||||
"\n",
|
||||
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
|
||||
"all_splits = text_splitter.split_documents(data)"
|
||||
]
|
||||
@@ -224,9 +227,12 @@
|
||||
"from langchain.chat_models import ChatOllama\n",
|
||||
"from langchain.callbacks.manager import CallbackManager\n",
|
||||
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
|
||||
"chat_model = ChatOllama(model=\"llama2:13b\",\n",
|
||||
" verbose=True,\n",
|
||||
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))"
|
||||
"\n",
|
||||
"chat_model = ChatOllama(\n",
|
||||
" model=\"llama2:13b\",\n",
|
||||
" verbose=True,\n",
|
||||
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -237,6 +243,7 @@
|
||||
"source": [
|
||||
"# QA chain\n",
|
||||
"from langchain.chains import RetrievalQA\n",
|
||||
"\n",
|
||||
"qa_chain = RetrievalQA.from_chain_type(\n",
|
||||
" chat_model,\n",
|
||||
" retriever=vectorstore.as_retriever(),\n",
|
||||
@@ -296,15 +303,19 @@
     "from langchain.schema import LLMResult\n",
     "from langchain.callbacks.base import BaseCallbackHandler\n",
     "\n",
+    "\n",
     "class GenerationStatisticsCallback(BaseCallbackHandler):\n",
     "    def on_llm_end(self, response: LLMResult, **kwargs) -> None:\n",
     "        print(response.generations[0][0].generation_info)\n",
-    "    \n",
-    "callback_manager = CallbackManager([StreamingStdOutCallbackHandler(), GenerationStatisticsCallback()])\n",
     "\n",
-    "chat_model = ChatOllama(model=\"llama2:13b-chat\",\n",
-    "                        verbose=True,\n",
-    "                        callback_manager=callback_manager)\n",
     "\n",
+    "callback_manager = CallbackManager(\n",
+    "    [StreamingStdOutCallbackHandler(), GenerationStatisticsCallback()]\n",
+    ")\n",
+    "\n",
+    "chat_model = ChatOllama(\n",
+    "    model=\"llama2:13b-chat\", verbose=True, callback_manager=callback_manager\n",
+    ")\n",
+    "\n",
     "qa_chain = RetrievalQA.from_chain_type(\n",
     "    chat_model,\n",
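The callback above prints each generation's generation_info once the LLM call finishes. For Ollama this dict typically carries timing and token counters such as eval_count (tokens generated) and eval_duration (generation time in nanoseconds), which is exactly what the tokens-per-second cell below divides.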
@@ -340,7 +351,7 @@
     }
    ],
    "source": [
-    "98 / (3229641000/1000/1000/1000)"
+    "98 / (3229641000 / 1000 / 1000 / 1000)"
    ]
   }
  ],
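Worked out: 3,229,641,000 ns / 10^9 is about 3.23 s, so 98 tokens / 3.23 s ≈ 30.3 tokens per second.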
@@ -172,7 +172,9 @@
    }
   ],
   "source": [
-    "fine_tuned_model = ChatOpenAI(temperature=0, model_name=\"ft:gpt-3.5-turbo-0613:langchain::7qTVM5AR\")\n",
+    "fine_tuned_model = ChatOpenAI(\n",
+    "    temperature=0, model_name=\"ft:gpt-3.5-turbo-0613:langchain::7qTVM5AR\"\n",
+    ")\n",
    "\n",
    "fine_tuned_model(messages)"
   ]
@@ -32,11 +32,12 @@
    "import os\n",
    "from langchain.chat_models.base import HumanMessage\n",
    "from langchain.chat_models import PaiEasChatEndpoint\n",
+    "\n",
    "os.environ[\"EAS_SERVICE_URL\"] = \"Your_EAS_Service_URL\"\n",
    "os.environ[\"EAS_SERVICE_TOKEN\"] = \"Your_EAS_Service_Token\"\n",
    "chat = PaiEasChatEndpoint(\n",
-    "    eas_service_url=os.environ[\"EAS_SERVICE_URL\"], \n",
-    "    eas_service_token=os.environ[\"EAS_SERVICE_TOKEN\"]\n",
+    "    eas_service_url=os.environ[\"EAS_SERVICE_URL\"],\n",
+    "    eas_service_token=os.environ[\"EAS_SERVICE_TOKEN\"],\n",
    ")"
   ]
  },
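Note the pattern here: the two EAS_* environment variables are set and then immediately read back into the constructor, so passing the URL and token strings straight to PaiEasChatEndpoint would be equivalent; the indirection just keeps the secrets in the environment.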
@@ -89,7 +90,6 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "\n",
    "outputs = chat.stream([HumanMessage(content=\"hi\")], streaming=True)\n",
    "for output in outputs:\n",
    "    print(\"stream output:\", output)"
@@ -120,6 +120,7 @@
  ],
  "source": [
   "from langchain.schema import AIMessage, HumanMessage, SystemMessage\n",
+    "\n",
   "messages = [\n",
   "    SystemMessage(\n",
   "        content=\"You are a helpful assistant that translates English to French.\"\n",
@@ -77,8 +77,10 @@
   "source": [
    "answer = chat_model(\n",
    "    [\n",
-    "        SystemMessage(content=\"You are a helpful assistant that translates English to French.\"),\n",
-    "        HumanMessage(content=\"I love programming.\")\n",
+    "        SystemMessage(\n",
+    "            content=\"You are a helpful assistant that translates English to French.\"\n",
+    "        ),\n",
+    "        HumanMessage(content=\"I love programming.\"),\n",
    "    ]\n",
    ")\n",
    "answer"
@@ -88,7 +88,6 @@
    "\n",
    "\n",
    "class DiscordChatLoader(chat_loaders.BaseChatLoader):\n",
-    "    \n",
    "    def __init__(self, path: str):\n",
    "        \"\"\"\n",
    "        Initialize the Discord chat loader.\n",
@@ -175,7 +174,7 @@
    "        Yields:\n",
    "            A `ChatSession` object containing the loaded chat messages.\n",
    "        \"\"\"\n",
-    "        yield self._load_single_chat_session_from_txt(self.path)\n"
+    "        yield self._load_single_chat_session_from_txt(self.path)"
   ]
  },
  {
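For orientation, the loader's lazy-loading method is a generator of ChatSession mappings, so it can be consumed lazily or drained into a list. A minimal driver sketch (the filename is illustrative):

    # Illustrative driver for the loader defined above; the path is made up.
    loader = DiscordChatLoader(path="discord_chats.txt")
    raw_messages = loader.lazy_load()  # lazily yields ChatSession dicts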
@@ -228,7 +227,9 @@
    "# Merge consecutive messages from the same sender into a single message\n",
    "merged_messages = merge_chat_runs(raw_messages)\n",
    "# Convert messages from \"talkingtower\" to AI messages\n",
-    "messages: List[ChatSession] = list(map_ai_messages(merged_messages, sender=\"talkingtower\"))"
+    "messages: List[ChatSession] = list(\n",
+    "    map_ai_messages(merged_messages, sender=\"talkingtower\")\n",
+    ")"
   ]
  },
  {
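In plain terms: merge_chat_runs collapses consecutive messages from the same sender into one message per run, and map_ai_messages relabels every message from the named sender ("talkingtower" here) as an AIMessage, making that sender the assistant side of each resulting session.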
@@ -288,7 +289,7 @@
    "\n",
    "llm = ChatOpenAI()\n",
    "\n",
-    "for chunk in llm.stream(messages[0]['messages']):\n",
+    "for chunk in llm.stream(messages[0][\"messages\"]):\n",
    "    print(chunk.content, end=\"\", flush=True)"
   ]
  },
@@ -50,28 +50,32 @@
    "import requests\n",
    "import zipfile\n",
    "\n",
-    "def download_and_unzip(url: str, output_path: str = 'file.zip') -> None:\n",
-    "    file_id = url.split('/')[-2]\n",
-    "    download_url = f'https://drive.google.com/uc?export=download&id={file_id}'\n",
+    "\n",
+    "def download_and_unzip(url: str, output_path: str = \"file.zip\") -> None:\n",
+    "    file_id = url.split(\"/\")[-2]\n",
+    "    download_url = f\"https://drive.google.com/uc?export=download&id={file_id}\"\n",
    "\n",
    "    response = requests.get(download_url)\n",
    "    if response.status_code != 200:\n",
-    "        print('Failed to download the file.')\n",
+    "        print(\"Failed to download the file.\")\n",
    "        return\n",
    "\n",
-    "    with open(output_path, 'wb') as file:\n",
+    "    with open(output_path, \"wb\") as file:\n",
    "        file.write(response.content)\n",
-    "    print(f'File {output_path} downloaded.')\n",
+    "    print(f\"File {output_path} downloaded.\")\n",
    "\n",
-    "    with zipfile.ZipFile(output_path, 'r') as zip_ref:\n",
+    "    with zipfile.ZipFile(output_path, \"r\") as zip_ref:\n",
    "        zip_ref.extractall()\n",
-    "    print(f'File {output_path} has been unzipped.')\n",
+    "    print(f\"File {output_path} has been unzipped.\")\n",
    "\n",
+    "\n",
    "# URL of the file to download\n",
-    "url = 'https://drive.google.com/file/d/1rh1s1o2i7B-Sk1v9o8KNgivLVGwJ-osV/view?usp=sharing'\n",
+    "url = (\n",
+    "    \"https://drive.google.com/file/d/1rh1s1o2i7B-Sk1v9o8KNgivLVGwJ-osV/view?usp=sharing\"\n",
+    ")\n",
    "\n",
    "# Download and unzip\n",
-    "download_and_unzip(url)\n"
+    "download_and_unzip(url)"
   ]
  },
  {
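One caveat worth noting about this helper: the bare uc?export=download endpoint only works for files small enough that Google Drive skips its virus-scan confirmation page; for larger files Drive returns an HTML interstitial rather than the zip, which this function does not handle.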
@@ -235,7 +239,7 @@
   "source": [
    "# Now all of Harry Potter's messages will take the AI message class\n",
    "# which maps to the 'assistant' role in OpenAI's training format\n",
-    "alternating_sessions[0]['messages'][:3]"
+    "alternating_sessions[0][\"messages\"][:3]"
   ]
  },
  {
@@ -338,11 +342,9 @@
    "overlap = 2\n",
    "\n",
    "training_examples = [\n",
-    "    conversation_messages[i: i + chunk_size] \n",
+    "    conversation_messages[i : i + chunk_size]\n",
    "    for conversation_messages in training_data\n",
-    "    for i in range(\n",
-    "        0, len(conversation_messages) - chunk_size + 1, \n",
-    "        chunk_size - overlap)\n",
+    "    for i in range(0, len(conversation_messages) - chunk_size + 1, chunk_size - overlap)\n",
    "]\n",
    "\n",
    "len(training_examples)"
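A quick worked example of the windowing logic (toy values; chunk_size is defined earlier in the notebook and is not visible in this hunk, so 8 below is illustrative):

    # Toy illustration: 20 stand-in messages, chunk_size=8, overlap=2.
    msgs = list(range(20))
    chunk_size, overlap = 8, 2
    windows = [
        msgs[i : i + chunk_size]
        for i in range(0, len(msgs) - chunk_size + 1, chunk_size - overlap)
    ]
    # Windows start at indices 0, 6, and 12; consecutive windows share `overlap` items.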
@@ -393,13 +395,10 @@
    "# We will write the jsonl file in memory\n",
    "my_file = BytesIO()\n",
    "for m in training_examples:\n",
-    "    my_file.write((json.dumps({\"messages\": m}) + \"\\n\").encode('utf-8'))\n",
+    "    my_file.write((json.dumps({\"messages\": m}) + \"\\n\").encode(\"utf-8\"))\n",
    "\n",
    "my_file.seek(0)\n",
-    "training_file = openai.File.create(\n",
-    "    file=my_file,\n",
-    "    purpose='fine-tune'\n",
-    ")\n",
+    "training_file = openai.File.create(file=my_file, purpose=\"fine-tune\")\n",
    "\n",
    "# OpenAI audits each training file for compliance reasons.\n",
    "# This make take a few minutes\n",
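Two details worth spelling out in this cell: each write appends one JSON object of the form {"messages": [...]} plus a newline, i.e. the JSONL layout the OpenAI fine-tuning endpoint expects, and my_file.seek(0) rewinds the in-memory buffer so the subsequent upload reads it from the beginning.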
@@ -47,26 +47,28 @@
    "import logging\n",
    "import requests\n",
    "\n",
-    "SCOPES = ['https://www.googleapis.com/auth/gmail.readonly']\n",
+    "SCOPES = [\"https://www.googleapis.com/auth/gmail.readonly\"]\n",
    "\n",
    "\n",
    "creds = None\n",
    "# The file token.json stores the user's access and refresh tokens, and is\n",
    "# created automatically when the authorization flow completes for the first\n",
    "# time.\n",
-    "if os.path.exists('email_token.json'):\n",
-    "    creds = Credentials.from_authorized_user_file('email_token.json', SCOPES)\n",
+    "if os.path.exists(\"email_token.json\"):\n",
+    "    creds = Credentials.from_authorized_user_file(\"email_token.json\", SCOPES)\n",
    "# If there are no (valid) credentials available, let the user log in.\n",
    "if not creds or not creds.valid:\n",
    "    if creds and creds.expired and creds.refresh_token:\n",
    "        creds.refresh(Request())\n",
    "    else:\n",
-    "        flow = InstalledAppFlow.from_client_secrets_file( \n",
+    "        flow = InstalledAppFlow.from_client_secrets_file(\n",
    "            # your creds file here. Please create json file as here https://cloud.google.com/docs/authentication/getting-started\n",
-    "            'creds.json', SCOPES)\n",
+    "            \"creds.json\",\n",
+    "            SCOPES,\n",
+    "        )\n",
    "        creds = flow.run_local_server(port=0)\n",
    "        # Save the credentials for the next run\n",
-    "        with open('email_token.json', 'w') as token:\n",
+    "        with open(\"email_token.json\", \"w\") as token:\n",
    "            token.write(creds.to_json())"
   ]
  },
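This cell is the stock google-auth-oauthlib quickstart pattern: reuse the cached token from email_token.json when present, silently refresh it when expired (if a refresh token exists), and otherwise run the full browser consent flow via run_local_server, writing the fresh token back to disk for the next run.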
@@ -143,7 +145,9 @@
   "source": [
    "# This makes messages sent by hchase@langchain.com the AI Messages\n",
    "# This means you will train an LLM to predict as if it's responding as hchase\n",
-    "training_data = list(map_ai_messages(data, sender=\"Harrison Chase <hchase@langchain.com>\"))"
+    "training_data = list(\n",
+    "    map_ai_messages(data, sender=\"Harrison Chase <hchase@langchain.com>\")\n",
+    ")"
   ]
  },
  {
Some files were not shown because too many files have changed in this diff.