mirror of https://github.com/hwchase17/langchain.git
synced 2026-02-04 08:10:25 +00:00

Compare commits: harrison/a ... wfh/implic (342 commits)
6 .github/CONTRIBUTING.md vendored

@@ -44,7 +44,7 @@ If you are adding an issue, please try to keep it focused on a single, modular b
 If two issues are related, or blocking, please link them rather than combining them.

 We will try to keep these issues as up to date as possible, though
-with the rapid rate of develop in this field some may get out of date.
+with the rapid rate of development in this field some may get out of date.
 If you notice this happening, please let us know.

 ### 🙋Getting Help
@@ -87,7 +87,7 @@ This will install all requirements for running the package, examples, linting, f

 ❗Note: If during installation you receive a `WheelFileValidationError` for `debugpy`, please make sure you are running Poetry v1.5.1. This bug was present in older versions of Poetry (e.g. 1.4.1) and has been resolved in newer releases. If you are still seeing this bug on v1.5.1, you may also try disabling "modern installation" (`poetry config installer.modern-installation false`) and re-installing requirements. See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.

-Now, you should be able to run the common tasks in the following section. To double check, run `make test`, all tests should pass. If they don't you may need to pip install additional dependencies, such as `numexpr` and `openapi_schema_pydantic`.
+Now assuming `make` and `pytest` are installed, you should be able to run the common tasks in the following section. To double check, run `make test` under `libs/langchain`, all tests should pass. If they don't, you may need to pip install additional dependencies, such as `numexpr` and `openapi_schema_pydantic`.

 ## ✅ Common Tasks

@@ -134,7 +134,7 @@ We recognize linting can be annoying - if you do not want to do it, please conta
 ### Spellcheck

 Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
-Note that `codespell` finds common typos, so could have false-positive (correctly spelled but rarely used) and false-negatives (not finding misspelled) words.
+Note that `codespell` finds common typos, so it could have false-positive (correctly spelled but rarely used) and false-negatives (not finding misspelled) words.

 To check spelling for this project:
29 .github/actions/poetry_setup/action.yml vendored

@@ -27,7 +27,7 @@ runs:
   using: composite
   steps:
     - uses: actions/setup-python@v4
-      name: Setup python $${ inputs.python-version }}
+      name: Setup python ${{ inputs.python-version }}
       with:
         python-version: ${{ inputs.python-version }}
@@ -39,10 +39,35 @@ runs:
       with:
         path: |
          /opt/pipx/venvs/poetry
          /opt/pipx_bin/poetry
+        # This step caches the poetry installation, so make sure it's keyed on the poetry version as well.
         key: bin-poetry-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-${{ inputs.poetry-version }}
+
+    - name: Refresh shell hashtable and fixup softlinks
+      if: steps.cache-bin-poetry.outputs.cache-hit == 'true'
+      shell: bash
+      env:
+        POETRY_VERSION: ${{ inputs.poetry-version }}
+        PYTHON_VERSION: ${{ inputs.python-version }}
+      run: |
+        set -eux
+
+        # Refresh the shell hashtable, to ensure correct `which` output.
+        hash -r
+
+        # `actions/cache@v3` doesn't always seem able to correctly unpack softlinks.
+        # Delete and recreate the softlinks pipx expects to have.
+        rm /opt/pipx/venvs/poetry/bin/python
+        cd /opt/pipx/venvs/poetry/bin
+        ln -s "$(which "python$PYTHON_VERSION")" python
+        chmod +x python
+        cd /opt/pipx_bin/
+        ln -s /opt/pipx/venvs/poetry/bin/poetry poetry
+        chmod +x poetry
+
+        # Ensure everything got set up correctly.
+        /opt/pipx/venvs/poetry/bin/python --version
+        /opt/pipx_bin/poetry --version

     - name: Install poetry
+      if: steps.cache-bin-poetry.outputs.cache-hit != 'true'
       shell: bash
12 .github/workflows/_lint.yml vendored

@@ -87,7 +87,7 @@ jobs:
         python-version: ${{ matrix.python-version }}
         poetry-version: ${{ env.POETRY_VERSION }}
         working-directory: ${{ inputs.working-directory }}
-        cache-key: lint
+        cache-key: lint-with-extras

     - name: Check Poetry File
       shell: bash
@@ -102,9 +102,17 @@ jobs:
         poetry lock --check

     - name: Install dependencies
+      # Also installs dev/lint/test/typing dependencies, to ensure we have
+      # type hints for as many of our libraries as possible.
+      # This helps catch errors that require dependencies to be spotted, for example:
+      # https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
+      #
+      # If you change this configuration, make sure to change the `cache-key`
+      # in the `poetry_setup` action above to stop using the old cache.
+      # It doesn't matter how you change it, any change will cause a cache-bust.
       working-directory: ${{ inputs.working-directory }}
       run: |
-        poetry install
+        poetry install --with dev,lint,test,typing

     - name: Install langchain editable
       working-directory: ${{ inputs.working-directory }}
12 .github/workflows/_pydantic_compatibility.yml vendored

@@ -79,3 +79,15 @@ jobs:
     - name: Run pydantic compatibility tests
       shell: bash
       run: make test
+
+    - name: Ensure the tests did not create any additional files
+      shell: bash
+      run: |
+        set -eu
+
+        STATUS="$(git status)"
+        echo "$STATUS"
+
+        # grep will exit non-zero if the target message isn't found,
+        # and `set -e` above will cause the step to fail.
+        echo "$STATUS" | grep 'nothing to commit, working tree clean'
12 .github/workflows/_release.yml vendored

@@ -31,13 +31,15 @@ jobs:
       working-directory: ${{ inputs.working-directory }}
     steps:
       - uses: actions/checkout@v3
-      - name: Install poetry
-        run: pipx install "poetry==$POETRY_VERSION"
-      - name: Set up Python 3.10
-        uses: actions/setup-python@v4
+
+      - name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
+        uses: "./.github/actions/poetry_setup"
         with:
           python-version: "3.10"
-          cache: "poetry"
+          poetry-version: ${{ env.POETRY_VERSION }}
+          working-directory: ${{ inputs.working-directory }}
+          cache-key: release
+
       - name: Build project for distribution
         run: poetry build
       - name: Check Version
12 .github/workflows/_test.yml vendored

@@ -43,3 +43,15 @@ jobs:
     - name: Run core tests
       shell: bash
       run: make test
+
+    - name: Ensure the tests did not create any additional files
+      shell: bash
+      run: |
+        set -eu
+
+        STATUS="$(git status)"
+        echo "$STATUS"
+
+        # grep will exit non-zero if the target message isn't found,
+        # and `set -e` above will cause the step to fail.
+        echo "$STATUS" | grep 'nothing to commit, working tree clean'
14 .github/workflows/langchain_ci.yml vendored

@@ -6,6 +6,8 @@ on:
     branches: [ master ]
   pull_request:
     paths:
+      - '.github/actions/poetry_setup/action.yml'
+      - '.github/tools/**'
       - '.github/workflows/_lint.yml'
       - '.github/workflows/_test.yml'
       - '.github/workflows/_pydantic_compatibility.yml'
@@ -81,3 +83,15 @@ jobs:

     - name: Run extended tests
       run: make extended_tests
+
+    - name: Ensure the tests did not create any additional files
+      shell: bash
+      run: |
+        set -eu
+
+        STATUS="$(git status)"
+        echo "$STATUS"
+
+        # grep will exit non-zero if the target message isn't found,
+        # and `set -e` above will cause the step to fail.
+        echo "$STATUS" | grep 'nothing to commit, working tree clean'
46 .github/workflows/langchain_experimental_ci.yml vendored

@@ -6,6 +6,8 @@ on:
     branches: [ master ]
   pull_request:
     paths:
+      - '.github/actions/poetry_setup/action.yml'
+      - '.github/tools/**'
      - '.github/workflows/_lint.yml'
      - '.github/workflows/_test.yml'
      - '.github/workflows/langchain_experimental_ci.yml'
@@ -81,3 +83,47 @@ jobs:

     - name: Run tests
       run: make test
+
+  extended-tests:
+    runs-on: ubuntu-latest
+    defaults:
+      run:
+        working-directory: ${{ env.WORKDIR }}
+    strategy:
+      matrix:
+        python-version:
+          - "3.8"
+          - "3.9"
+          - "3.10"
+          - "3.11"
+    name: Python ${{ matrix.python-version }} extended tests
+    steps:
+      - uses: actions/checkout@v3
+
+      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
+        uses: "./.github/actions/poetry_setup"
+        with:
+          python-version: ${{ matrix.python-version }}
+          poetry-version: ${{ env.POETRY_VERSION }}
+          working-directory: libs/experimental
+          cache-key: extended
+
+      - name: Install dependencies
+        shell: bash
+        run: |
+          echo "Running extended tests, installing dependencies with poetry..."
+          poetry install -E extended_testing
+
+      - name: Run extended tests
+        run: make extended_tests
+
+      - name: Ensure the tests did not create any additional files
+        shell: bash
+        run: |
+          set -eu
+
+          STATUS="$(git status)"
+          echo "$STATUS"
+
+          # grep will exit non-zero if the target message isn't found,
+          # and `set -e` above will cause the step to fail.
+          echo "$STATUS" | grep 'nothing to commit, working tree clean'
12 .github/workflows/scheduled_test.yml vendored

@@ -47,3 +47,15 @@ jobs:
         OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
       run: |
         make scheduled_tests
+
+    - name: Ensure the tests did not create any additional files
+      shell: bash
+      run: |
+        set -eu
+
+        STATUS="$(git status)"
+        echo "$STATUS"
+
+        # grep will exit non-zero if the target message isn't found,
+        # and `set -e` above will cause the step to fail.
+        echo "$STATUS" | grep 'nothing to commit, working tree clean'
@@ -317,7 +317,7 @@
     "Chatbots": "https://python.langchain.com/docs/use_cases/chatbots",
     "Summarization": "https://python.langchain.com/docs/use_cases/summarization",
     "Extraction": "https://python.langchain.com/docs/use_cases/extraction",
-    "SQL": "https://python.langchain.com/docs/use_cases/sql",
+    "SQL": "https://python.langchain.com/docs/use_cases/qa_structured/sql",
     "Tagging": "https://python.langchain.com/docs/use_cases/tagging",
     "Code Understanding": "https://python.langchain.com/docs/use_cases/code_understanding",
     "AutoGPT": "https://python.langchain.com/docs/use_cases/autonomous_agents/autogpt",
@@ -338,6 +338,7 @@
     "Neptune Open Cypher QA Chain": "https://python.langchain.com/docs/use_cases/more/graph/neptune_cypher_qa",
     "NebulaGraphQAChain": "https://python.langchain.com/docs/use_cases/more/graph/graph_nebula_qa",
     "KuzuQAChain": "https://python.langchain.com/docs/use_cases/more/graph/graph_kuzu_qa",
+    "FalkorDBQAChain": "https://python.langchain.com/docs/use_cases/more/graph/graph_falkordb_qa",
     "HugeGraph QA Chain": "https://python.langchain.com/docs/use_cases/more/graph/graph_hugegraph_qa",
     "GraphSparqlQAChain": "https://python.langchain.com/docs/use_cases/more/graph/graph_sparql_qa",
     "ArangoDB QA chain": "https://python.langchain.com/docs/use_cases/more/graph/graph_arangodb_qa",
@@ -399,7 +400,7 @@
     "Summarization": "https://python.langchain.com/docs/use_cases/summarization",
     "Extraction": "https://python.langchain.com/docs/use_cases/extraction",
     "Interacting with APIs": "https://python.langchain.com/docs/use_cases/apis",
-    "SQL": "https://python.langchain.com/docs/use_cases/sql",
+    "SQL": "https://python.langchain.com/docs/use_cases/qa_structured/sql",
     "QA over Documents": "https://python.langchain.com/docs/use_cases/question_answering/index",
     "Retrieve from vector stores directly": "https://python.langchain.com/docs/use_cases/question_answering/how_to/vector_db_text_generation",
     "Improve document indexing with HyDE": "https://python.langchain.com/docs/use_cases/question_answering/how_to/hyde",
@@ -640,7 +641,7 @@
     "Chatbots": "https://python.langchain.com/docs/use_cases/chatbots",
     "Extraction": "https://python.langchain.com/docs/use_cases/extraction",
     "Interacting with APIs": "https://python.langchain.com/docs/use_cases/apis",
-    "SQL": "https://python.langchain.com/docs/use_cases/sql",
+    "SQL": "https://python.langchain.com/docs/use_cases/qa_structured/sql",
     "HuggingGPT": "https://python.langchain.com/docs/use_cases/autonomous_agents/hugginggpt",
     "Perform context-aware text splitting": "https://python.langchain.com/docs/use_cases/question_answering/how_to/document-context-aware-QA",
     "Retrieve from vector stores directly": "https://python.langchain.com/docs/use_cases/question_answering/how_to/vector_db_text_generation",
@@ -1008,7 +1009,7 @@
     "LangSmith Walkthrough": "https://python.langchain.com/docs/guides/langsmith/walkthrough",
     "Comparing Chain Outputs": "https://python.langchain.com/docs/guides/evaluation/examples/comparisons",
     "Agent Trajectory": "https://python.langchain.com/docs/guides/evaluation/trajectory/trajectory_eval",
-    "SQL": "https://python.langchain.com/docs/use_cases/sql",
+    "SQL": "https://python.langchain.com/docs/use_cases/qa_structured/sql",
     "Multi-modal outputs: Image & Text": "https://python.langchain.com/docs/use_cases/multi_modal/image_agent",
     "Agent Debates with Tools": "https://python.langchain.com/docs/use_cases/agent_simulations/two_agent_debate_tools",
     "Multiple callback handlers": "https://python.langchain.com/docs/modules/callbacks/multiple_callbacks",
@@ -1267,7 +1268,7 @@
     "SQL Database Agent": "https://python.langchain.com/docs/integrations/toolkits/sql_database",
     "JSON Agent": "https://python.langchain.com/docs/integrations/toolkits/json",
     "NIBittensorLLM": "https://python.langchain.com/docs/integrations/llms/bittensor",
-    "SQL": "https://python.langchain.com/docs/use_cases/sql",
+    "SQL": "https://python.langchain.com/docs/use_cases/qa_structured/sql",
     "BabyAGI with Tools": "https://python.langchain.com/docs/use_cases/agents/baby_agi_with_agent",
     "Conversational Retrieval Agent": "https://python.langchain.com/docs/use_cases/question_answering/how_to/conversational_retrieval_agents",
     "Plug-and-Plai": "https://python.langchain.com/docs/use_cases/agents/custom_agent_with_plugin_retrieval_using_plugnplai",
@@ -1831,12 +1832,12 @@
   "create_sql_agent": {
     "CnosDB": "https://python.langchain.com/docs/integrations/providers/cnosdb",
     "SQL Database Agent": "https://python.langchain.com/docs/integrations/toolkits/sql_database",
-    "SQL": "https://python.langchain.com/docs/use_cases/sql"
+    "SQL": "https://python.langchain.com/docs/use_cases/qa_structured/sql"
   },
   "SQLDatabaseToolkit": {
     "CnosDB": "https://python.langchain.com/docs/integrations/providers/cnosdb",
     "SQL Database Agent": "https://python.langchain.com/docs/integrations/toolkits/sql_database",
-    "SQL": "https://python.langchain.com/docs/use_cases/sql",
+    "SQL": "https://python.langchain.com/docs/use_cases/qa_structured/sql",
     "Use ToolKits with OpenAI Functions": "https://python.langchain.com/docs/modules/agents/how_to/use_toolkits_with_openai_functions"
   },
   "SageMakerCallbackHandler": {
@@ -1898,7 +1899,7 @@
     "Rebuff": "https://python.langchain.com/docs/integrations/providers/rebuff",
     "SQL Database Agent": "https://python.langchain.com/docs/integrations/toolkits/sql_database",
     "Cookbook": "https://python.langchain.com/docs/guides/expression_language/cookbook",
-    "SQL": "https://python.langchain.com/docs/use_cases/sql",
+    "SQL": "https://python.langchain.com/docs/use_cases/qa_structured/sql",
     "Multiple Retrieval Sources": "https://python.langchain.com/docs/use_cases/question_answering/how_to/multiple_retrieval"
   },
   "Weaviate": {
@@ -3034,11 +3035,11 @@
     "Interacting with APIs": "https://python.langchain.com/docs/use_cases/apis"
   },
   "create_sql_query_chain": {
-    "SQL": "https://python.langchain.com/docs/use_cases/sql",
+    "SQL": "https://python.langchain.com/docs/use_cases/qa_structured/sql",
     "Multiple Retrieval Sources": "https://python.langchain.com/docs/use_cases/question_answering/how_to/multiple_retrieval"
   },
   "ElasticsearchDatabaseChain": {
-    "SQL": "https://python.langchain.com/docs/use_cases/sql"
+    "SQL": "https://python.langchain.com/docs/use_cases/qa_structured/sql"
   },
   "FileChatMessageHistory": {
     "AutoGPT": "https://python.langchain.com/docs/use_cases/autonomous_agents/autogpt"
@@ -3174,6 +3175,12 @@
   "KuzuQAChain": {
     "KuzuQAChain": "https://python.langchain.com/docs/use_cases/more/graph/graph_kuzu_qa"
   },
+  "FalkorDBGraph": {
+    "KuzuQAChain": "https://python.langchain.com/docs/use_cases/more/graph/graph_falkordb_qa"
+  },
+  "FalkorDBQAChain": {
+    "FalkorDB QA Chain": "https://python.langchain.com/docs/use_cases/more/graph/graph_falkordb_qa"
+  },
   "HugeGraphQAChain": {
     "HugeGraph QA Chain": "https://python.langchain.com/docs/use_cases/more/graph/graph_hugegraph_qa"
   },
@@ -5,9 +5,10 @@
 <meta charset="utf-8">
 <meta name="viewport" content="width=device-width, initial-scale=1.0">
 <meta http-equiv="Refresh" content="0; url={{ redirect }}" />
-<meta name="Description" content="scikit-learn: machine learning in Python">
+<meta name="robots" content="follow, index">
+<meta name="Description" content="Python API reference for LangChain.">
 <link rel="canonical" href="{{ redirect }}" />
-<title>scikit-learn: machine learning in Python</title>
+<title>LangChain Python API Reference Documentation.</title>
 </head>
 <body>
 <p>You will be automatically redirected to the <a href="{{ redirect }}">new location of this page</a>.</p>
14 docs/docs_skeleton/docs/expression_language/index.mdx Normal file

@@ -0,0 +1,14 @@
+---
+sidebar_class_name: hidden
+---
+
+# LangChain Expression Language (LCEL)
+
+LangChain Expression Language or LCEL is a declarative way to easily compose chains together.
+Any chain constructed this way will automatically have full sync, async, and streaming support.
+
+#### [Interface](/docs/expression_language/interface)
+The base interface shared by all LCEL objects
+
+#### [Cookbook](/docs/expression_language/cookbook)
+Examples of common LCEL usage patterns
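An editor's aside, not part of the diff above: the LCEL description in this new page is easiest to see in code. The sketch below assumes the `langchain` package layout of this period (`ChatOpenAI`, `ChatPromptTemplate`, `StrOutputParser`) and an `OPENAI_API_KEY` in the environment.

```python
# Minimal LCEL sketch: prompt, model, and output parser composed with `|`.
# Assumes `langchain` is installed and OPENAI_API_KEY is set in the environment.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
chain = prompt | ChatOpenAI() | StrOutputParser()

print(chain.invoke({"topic": "bears"}))  # synchronous call
# The same chain also supports `chain.stream(...)` and `await chain.ainvoke(...)`.
```

Because the composed object is a runnable sequence, `stream` and `ainvoke` come for free, which is what the page means by full sync, async, and streaming support.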
@@ -42,23 +42,22 @@ Log and stream intermediate steps of any chain
 ## Examples, ecosystem, and resources
 ### [Use cases](/docs/use_cases/)
 Walkthroughs and best-practices for common end-to-end use cases, like:
-- [Chatbots](/docs/use_cases/chatbots/)
+- [Chatbots](/docs/use_cases/chatbots)
 - [Answering questions using sources](/docs/use_cases/question_answering/)
-- [Analyzing structured data](/docs/use_cases/tabular.html)
+- [Analyzing structured data](/docs/use_cases/sql)
 - and much more...

 ### [Guides](/docs/guides/)
 Learn best practices for developing with LangChain.

 ### [Ecosystem](/docs/ecosystem/)
-LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/) and [dependent repos](/docs/ecosystem/dependents).
+LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/) and [dependent repos](/docs/additional_resources/dependents).

 ### [Additional resources](/docs/additional_resources/)
 Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/additional_resources/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).

-<h3><span style={{color:"#2e8555"}}> Support </span></h3>
-
-Join us on [GitHub](https://github.com/hwchase17/langchain) or [Discord](https://discord.gg/6adMQxSpJS) to ask questions, share feedback, meet other developers building with LangChain, and dream about the future of LLM’s.
+### [Community](/docs/community)
+Head to the [Community navigator](/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLM’s.

 ## API reference
@@ -59,8 +59,8 @@ LangChain provides several objects to easily distinguish between different roles
 If none of those roles sound right, there is also a `ChatMessage` class where you can specify the role manually.
 For more information on how to use these different messages most effectively, see our prompting guide.

-LangChain exposes a standard interface for both, but it's useful to understand this difference in order to construct prompts for a given language model.
-The standard interface that LangChain exposes has two methods:
+LangChain provides a standard interface for both, but it's useful to understand this difference in order to construct prompts for a given language model.
+The standard interface that LangChain provides has two methods:
 - `predict`: Takes in a string, returns a string
 - `predict_messages`: Takes in a list of messages, returns a message.
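A hedged illustration (an editor's aside, not part of the diff) of the two methods named above, assuming the `OpenAI` and `ChatOpenAI` classes from the `langchain` package of this era and an OpenAI API key in the environment.

```python
# The two methods of the shared interface described above.
# Assumes `langchain` is installed and OPENAI_API_KEY is set.
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.schema import HumanMessage

llm = OpenAI()        # LLM: string in, string out
chat = ChatOpenAI()   # chat model: messages in, message out

text_out = llm.predict("Say hello in one word.")                       # str -> str
msg_out = chat.predict_messages([HumanMessage(content="Say hello.")])  # messages -> message
print(text_out, msg_out.content)
```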
@@ -1,7 +1,3 @@
----
-sidebar_position: 6
----
-
 import DocCardList from "@theme/DocCardList";

 # Evaluation
@@ -1,9 +0,0 @@
-# LangChain Expression Language
-
-import DocCardList from "@theme/DocCardList";
-
-LangChain Expression Language is a declarative way to easily compose chains together.
-Any chain constructed this way will automatically have full sync, async, and streaming support.
-See guides below for how to interact with chains constructed this way as well as cookbook examples.
-
-<DocCardList />
@@ -2,11 +2,21 @@

 import DocCardList from "@theme/DocCardList";

-LangSmith helps you trace and evaluate your language model applications and intelligent agents to help you
+[LangSmith](https://smith.langchain.com) helps you trace and evaluate your language model applications and intelligent agents to help you
 move from prototype to production.

 Check out the [interactive walkthrough](/docs/guides/langsmith/walkthrough) below to get started.

-For more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/)
+For more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/).
+
+For tutorials and other end-to-end examples demonstrating ways to integrate LangSmith in your workflow,
+check out the [LangSmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook). Some of the guides therein include:
+
+- Leveraging user feedback in your JS application ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/nextjs/README.md)).
+- Building an automated feedback pipeline ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/algorithmic-feedback/algorithmic_feedback.ipynb)).
+- How to evaluate and audit your RAG workflows ([link](https://github.com/langchain-ai/langsmith-cookbook/tree/main/testing-examples/qa-correctness)).
+- How to fine-tune a LLM on real usage data ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/fine-tuning-examples/export-to-openai/fine-tuning-on-chat-runs.ipynb)).
+- How to use the [LangChain Hub](https://smith.langchain.com/hub) to version your prompts ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/hub-examples/retrieval-qa-chain/retrieval-qa.ipynb))

 <DocCardList />
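An editor's aside, not part of the diff: the tracing described on this page was typically switched on through environment variables rather than code changes. A minimal sketch, with variable names assumed from the LangSmith docs of the time:

```python
# Hedged sketch: enable LangSmith tracing for subsequent LangChain calls.
# Assumes a LangSmith account; variable names as documented around this period.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"               # turn on tracing
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-key>"  # placeholder, not a real key
os.environ["LANGCHAIN_PROJECT"] = "my-project"            # optional project name

# Any chain or LLM call made after this point is traced to LangSmith.
```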
@@ -22,6 +22,16 @@
   {
    "cell_type": "code",
+   "execution_count": null,
+   "id": "b39ac41a",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%pip install -U langchain"
+   ]
+  },
+  {
+   "cell_type": "code",
    "execution_count": 1,
    "id": "3f8518ad-c762-413c-b8c9-f1c211fc311d",
    "metadata": {
    "tags": []
@@ -30,12 +40,7 @@
    "source": [
    "import boto3\n",
    "\n",
-   "comprehend_client = boto3.client('comprehend', \n",
-   "                                 region_name='us-east-1', \n",
-   "                                 aws_access_key_id=\"ASIA6BR6ZDLNQLMEGWHM\",\n",
-   "                                 aws_secret_access_key=\"Y79nefFoOfvgrog6sojSe55xTuKqDJY53BgfrtlG\",\n",
-   "                                 aws_session_token=\"IQoJb3JpZ2luX2VjEIP//////////wEaCXVzLWVhc3QtMSJGMEQCIBvUl0Wj5Gu5GrHB+i5fHkaVc2V1381M7UNRX8EggHORAiB+dG/uKJ4loHn2oAcXIEy6+lfU7wygl4zw/vUo2VItFiqfAghMEAIaDDk2NTQyNTU2ODQ3NSIMfbh8uyoO1XONSkuEKvwBTMxeDCi//9U9LGIwZZzIiHOudQAqR2wlIGZKcw//abSeHNBE1AoDT8ibcqk7EuIt9fwnj1WYiLGmSIWd9/kSZShiKdYg0UpNWyr1/LdeutV5byFAjT21RnWTgSMr0QeSCU698PFusvO1Coph8C75pcqTVYsxi/HypJT8OfB5iCxKgfzx0qD4X6hScpIAEYZhgQXHFBAeubqMkVPYEqSob6fSm1vEI8LkU8HG1N2M2p8TzGCQWo5uBgtNkipxve++bkR+xjiNLIpAN3P1xF2/W/lYlz+4xGsi90aZqIVh/tOvAjg7Yx1Dd5Ir2C0fZc7wbtabzVFlJZ7GFcpcMOX0o6cGOp4BismuW2CJRBmFFpoparqraQaiQBY/VDbQg9KQc/Y6o0oCxkESLUdY6ino3yrheT3W832eAg0RwrmEaQqT8kKGyJFimUxrAF/otNQhySLKuSXLooguammJiQAtgK1EhmuLBUBoLcngxQ31kDqw13g7Ccwuo68fnI/QzQLj5MX+V5VLCSp9VrOzi9XSjmeF/TJQARdZeL3CSeu2pATQc80=\"\n",
-   "                                 )"
+   "comprehend_client = boto3.client('comprehend', region_name='us-east-1')"
    ]
   },
   {
@@ -48,7 +53,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 2,
    "id": "74550d74-3c01-4ba7-ad32-ca66d955d001",
    "metadata": {
    "tags": []
@@ -112,7 +117,8 @@
    "\n",
    "responses = [\n",
    "    \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\", \n",
-   "    \"Final Answer: This is a really shitty way of constructing a birdhouse. This is fucking insane to think that any birds would actually create their motherfucking nests here.\"\n",
+   "    # replace with your own expletive\n",
+   "    \"Final Answer: This is a really <expletive> way of constructing a birdhouse. This is <expletive> insane to think that any birds would actually create their <expletive> nests here.\"\n",
    "]\n",
    "llm = FakeListLLM(responses=responses)\n",
    "\n",
@@ -128,9 +134,9 @@
    ")\n",
    "\n",
    "try:\n",
-   "    response = chain.invoke({\"question\": \"A sample SSN number looks like this 123-456-7890. Can you give me some more samples?\"})\n",
+   "    response = chain.invoke({\"question\": \"A sample SSN number looks like this . Can you give me some more samples?\"})\n",
    "except ModerationPiiError as e:\n",
-   "    print(e.message)\n",
+   "    print(str(e))\n",
    "else:\n",
    "    print(response['output'])\n"
   ]
@@ -160,36 +166,36 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 3,
    "id": "d6e8900a-44ef-4967-bde8-b88af282139d",
    "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
-   "from langchain_experimental.comprehend_moderation import BaseModerationActions, BaseModerationFilters\n",
+   "from langchain_experimental.comprehend_moderation import (BaseModerationConfig, \n",
+   "                                 ModerationIntentConfig, \n",
+   "                                 ModerationPiiConfig, \n",
+   "                                 ModerationToxicityConfig\n",
+   ")\n",
    "\n",
-   "moderation_config = { \n",
-   "        \"filters\":[ \n",
-   "                BaseModerationFilters.PII, \n",
-   "                BaseModerationFilters.TOXICITY,\n",
-   "                BaseModerationFilters.INTENT\n",
-   "                ],\n",
-   "        \"pii\":{ \n",
-   "                \"action\": BaseModerationActions.ALLOW, \n",
-   "                \"threshold\":0.5, \n",
-   "                \"labels\":[\"SSN\"],\n",
-   "                \"mask_character\": \"X\"\n",
-   "                },\n",
-   "        \"toxicity\":{ \n",
-   "                \"action\": BaseModerationActions.STOP, \n",
-   "                \"threshold\":0.5\n",
-   "                },\n",
-   "        \"intent\":{ \n",
-   "                \"action\": BaseModerationActions.STOP, \n",
-   "                \"threshold\":0.5\n",
-   "                }\n",
-   "}"
+   "pii_config = ModerationPiiConfig(\n",
+   "    labels=[\"SSN\"],\n",
+   "    redact=True,\n",
+   "    mask_character=\"X\"\n",
+   ")\n",
+   "\n",
+   "toxicity_config = ModerationToxicityConfig(\n",
+   "    threshold=0.5\n",
+   ")\n",
+   "\n",
+   "intent_config = ModerationIntentConfig(\n",
+   "    threshold=0.5\n",
+   ")\n",
+   "\n",
+   "moderation_config = BaseModerationConfig(\n",
+   "    filters=[pii_config, toxicity_config, intent_config]\n",
+   ")"
   ]
  },
  {
@@ -197,16 +203,20 @@
   "id": "3634376b-5938-43df-9ed6-70ca7e99290f",
   "metadata": {},
   "source": [
-   "At the core of the configuration you have three filters specified in the `filters` key:\n",
+   "At the core of the the configuration there are three configuration models to be used\n",
    "\n",
-   "1. `BaseModerationFilters.PII`\n",
-   "2. `BaseModerationFilters.TOXICITY`\n",
-   "3. `BaseModerationFilters.INTENT`\n",
+   "- `ModerationPiiConfig` used for configuring the behavior of the PII validations. Following are the parameters it can be initialized with\n",
+   "    - `labels` the PII entity labels. Defaults to an empty list which means that the PII validation will consider all PII entities.\n",
+   "    - `threshold` the confidence threshold for the detected entities, defaults to 0.5 or 50%\n",
+   "    - `redact` a boolean flag to enforce whether redaction should be performed on the text, defaults to `False`. When `False`, the PII validation will error out when it detects any PII entity, when set to `True` it simply redacts the PII values in the text.\n",
+   "    - `mask_character` the character used for masking, defaults to asterisk (*)\n",
+   "- `ModerationToxicityConfig` used for configuring the behavior of the toxicity validations. Following are the parameters it can be initialized with\n",
+   "    - `labels` the Toxic entity labels. Defaults to an empty list which means that the toxicity validation will consider all toxic entities. all\n",
+   "    - `threshold` the confidence threshold for the detected entities, defaults to 0.5 or 50% \n",
+   "- `ModerationIntentConfig` used for configuring the behavior of the intent validation\n",
+   "    - `threshold` the confidence threshold for the the intent classification, defaults to 0.5 or 50% \n",
    "\n",
-   "And an `action` key that defines two possible actions for each moderation function:\n",
-   "\n",
-   "1. `BaseModerationActions.ALLOW` - `allows` the prompt to pass through but masks detected PII in case of PII check. The default behavior is to run and redact all PII entities. If there is an entity specified in the `labels` field, then only those entities will go through the PII check and masked.\n",
-   "2. `BaseModerationActions.STOP` - `stops` the prompt from passing through to the next step in case any PII, Toxicity, or incorrect Intent is detected. The action of `BaseModerationActions.STOP` will raise a Python `Exception` essentially stopping the chain in progress.\n",
+   "Finally, you use the `BaseModerationConfig` to define the order in which each of these checks are to be performed. The `BaseModerationConfig` takes an optional `filters` parameter which can be a list of one or more than one of the above validation checks, as seen in the previous code block. The `BaseModerationConfig` can also be initialized with any `filters` in which case it will use all the checks with default configuration (more on this explained later).\n",
    "\n",
    "Using the configuration in the previous cell will perform PII checks and will allow the prompt to pass through however it will mask any SSN numbers present in either the prompt or the LLM output.\n"
   ]
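Pulling that description together, a compact sketch (an editor's aside; it mirrors the API exactly as quoted in this diff, and assumes `langchain_experimental` with the Comprehend moderation module installed):

```python
# The three config models described above, assembled into a BaseModerationConfig.
# Mirrors the API quoted in this diff; assumes langchain_experimental is installed.
from langchain_experimental.comprehend_moderation import (
    BaseModerationConfig,
    ModerationIntentConfig,
    ModerationPiiConfig,
    ModerationToxicityConfig,
)

moderation_config = BaseModerationConfig(
    filters=[
        ModerationPiiConfig(labels=["SSN"], redact=True, mask_character="X"),  # redact SSNs
        ModerationToxicityConfig(threshold=0.5),  # stop above 50% toxicity confidence
        ModerationIntentConfig(threshold=0.5),    # stop above 50% intent confidence
    ]
)
```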
@@ -244,7 +254,8 @@
    "\n",
    "responses = [\n",
    "    \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\", \n",
-   "    \"Final Answer: This is a really shitty way of constructing a birdhouse. This is fucking insane to think that any birds would actually create their motherfucking nests here.\"\n",
+   "    # replace with your own expletive\n",
+   "    \"Final Answer: This is a really <expletive> way of constructing a birdhouse. This is <expletive> insane to think that any birds would actually create their <expletive> nests here.\"\n",
    "]\n",
    "llm = FakeListLLM(responses=responses)\n",
    "\n",
@@ -369,27 +380,23 @@
  },
   "outputs": [],
   "source": [
-   "moderation_config = { \n",
-   "        \"filters\": [ \n",
-   "                BaseModerationFilters.PII, \n",
-   "                BaseModerationFilters.TOXICITY\n",
-   "        ],\n",
-   "        \"pii\":{ \n",
-   "                \"action\": BaseModerationActions.STOP, \n",
-   "                \"threshold\":0.5, \n",
-   "                \"labels\":[\"SSN\"], \n",
-   "                \"mask_character\": \"X\" \n",
-   "        },\n",
-   "        \"toxicity\":{ \n",
-   "                \"action\": BaseModerationActions.STOP, \n",
-   "                \"threshold\":0.5 \n",
-   "        }\n",
-   "}\n",
+   "pii_config = ModerationPiiConfig(\n",
+   "    labels=[\"SSN\"],\n",
+   "    redact=True,\n",
+   "    mask_character=\"X\"\n",
+   ")\n",
+   "\n",
+   "toxicity_config = ModerationToxicityConfig(\n",
+   "    threshold=0.5\n",
+   ")\n",
+   "\n",
+   "moderation_config = BaseModerationConfig(\n",
+   "    filters=[pii_config, toxicity_config]\n",
+   ")\n",
    "\n",
    "comp_moderation_with_config = AmazonComprehendModerationChain(\n",
    "    moderation_config=moderation_config,  # specify the configuration\n",
    "    client=comprehend_client,             # optionally pass the Boto3 Client\n",
    "    force_base_exception=True,            # Force BaseModerationError\n",
    "    unique_id='john.doe@email.com',       # A unique ID\n",
    "    moderation_callback=my_callback,      # BaseModerationCallbackHandler\n",
    "    verbose=True\n",
@@ -416,7 +423,8 @@
    "\n",
    "responses = [\n",
    "    \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\", \n",
-   "    \"Final Answer: This is a really shitty way of constructing a birdhouse. This is fucking insane to think that any birds would actually create their motherfucking nests here.\"\n",
+   "    # replace with your own expletive\n",
+   "    \"Final Answer: This is a really <expletive> way of constructing a birdhouse. This is <expletive> insane to think that any birds would actually create their <expletive> nests here.\"\n",
    "]\n",
    "\n",
    "llm = FakeListLLM(responses=responses)\n",
@@ -450,7 +458,7 @@
    "## `moderation_config` and moderation execution order\n",
    "---\n",
    "\n",
-   "If `AmazonComprehendModerationChain` is not initialized with any `moderation_config` then the default action is `STOP` and default order of moderation check is as follows.\n",
+   "If `AmazonComprehendModerationChain` is not initialized with any `moderation_config` then it is initialized with the default values of `BaseModerationConfig`. If no `filters` are used then the sequence of moderation check is as follows.\n",
    "\n",
    "```\n",
    "AmazonComprehendModerationChain\n",
@@ -470,32 +478,25 @@
    "   └── Return Prompt\n",
    "```\n",
    "\n",
-   "If any of the check raises exception then the subsequent checks will not be performed. If a `callback` is provided in this case, then it will be called for each of the checks that have been performed. For example, in the case above, if the Chain fails due to presence of PII then the Toxicity and Intent checks will not be performed.\n",
+   "If any of the check raises a validation exception then the subsequent checks will not be performed. If a `callback` is provided in this case, then it will be called for each of the checks that have been performed. For example, in the case above, if the Chain fails due to presence of PII then the Toxicity and Intent checks will not be performed.\n",
    "\n",
-   "You can override the execution order by passing `moderation_config` and simply specifying the desired order in the `filters` key of the configuration. In case you use `moderation_config` then the order of the checks as specified in the `filters` key will be maintained. For example, in the configuration below, first Toxicity check will be performed, then PII, and finally Intent validation will be performed. In this case, `AmazonComprehendModerationChain` will perform the desired checks in the specified order with default values of each model `kwargs`.\n",
+   "You can override the execution order by passing `moderation_config` and simply specifying the desired order in the `filters` parameter of the `BaseModerationConfig`. In case you specify the filters, then the order of the checks as specified in the `filters` parameter will be maintained. For example, in the configuration below, first Toxicity check will be performed, then PII, and finally Intent validation will be performed. In this case, `AmazonComprehendModerationChain` will perform the desired checks in the specified order with default values of each model `kwargs`.\n",
    "\n",
    "```python\n",
-   "moderation_config = { \n",
-   "        \"filters\":[ BaseModerationFilters.TOXICITY, \n",
-   "                    BaseModerationFilters.PII, \n",
-   "                    BaseModerationFilters.INTENT]\n",
-   "        }\n",
+   "pii_check = ModerationPiiConfig()\n",
+   "toxicity_check = ModerationToxicityConfig()\n",
+   "intent_check = ModerationIntentConfig()\n",
+   "\n",
+   "moderation_config = BaseModerationConfig(filters=[toxicity_check, pii_check, intent_check])\n",
    "```\n",
    "\n",
-   "Model `kwargs` are specified by the `pii`, `toxicity`, and `intent` keys within the `moderation_config` dictionary. For example, in the `moderation_config` below, the default order of moderation is overriden and the `pii` & `toxicity` model `kwargs` have been overriden. For `intent` the chain's default `kwargs` will be used.\n",
+   "You can have also use more than one configuration for a specific moderation check, for example in the sample below, two consecutive PII checks are performed. First the configuration checks for any SSN, if found it would raise an error. If any SSN isn't found then it will next check if any NAME and CREDIT_DEBIT_NUMBER is present in the prompt and will mask it.\n",
    "\n",
    "```python\n",
-   " moderation_config = { \n",
-   "        \"filters\":[ BaseModerationFilters.TOXICITY, \n",
-   "                    BaseModerationFilters.PII, \n",
-   "                    BaseModerationFilters.INTENT],\n",
-   "        \"pii\":{ \"action\": BaseModerationActions.ALLOW, \n",
-   "                \"threshold\":0.5, \n",
-   "                \"labels\":[\"SSN\"], \n",
-   "                \"mask_character\": \"X\" },\n",
-   "        \"toxicity\":{ \"action\": BaseModerationActions.STOP, \n",
-   "                     \"threshold\":0.5 }\n",
-   "        }\n",
+   "pii_check_1 = ModerationPiiConfig(labels=[\"SSN\"])\n",
+   "pii_check_2 = ModerationPiiConfig(labels=[\"NAME\", \"CREDIT_DEBIT_NUMBER\"], redact=True)\n",
+   "\n",
+   "moderation_config = BaseModerationConfig(filters=[pii_check_1, pii_check_2])\n",
    "```\n",
    "\n",
    "1. For a list of PII labels see Amazon Comprehend Universal PII entity types - https://docs.aws.amazon.com/comprehend/latest/dg/how-pii.html#how-pii-types\n",
@@ -545,7 +546,8 @@
   },
   "outputs": [],
   "source": [
-   "%env HUGGINGFACEHUB_API_TOKEN=\"<HUGGINGFACEHUB_API_TOKEN>\""
+   "import os\n",
+   "os.environ[\"HUGGINGFACEHUB_API_TOKEN\"] = \"<YOUR HF TOKEN HERE>\""
   ]
  },
  {
|
||||
@@ -558,7 +560,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options\n",
|
||||
"repo_id = \"google/flan-t5-xxl\" \n"
|
||||
"repo_id = \"google/flan-t5-xxl\" "
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -573,12 +575,9 @@
|
||||
"from langchain import HuggingFaceHub\n",
|
||||
"from langchain import PromptTemplate, LLMChain\n",
|
||||
"\n",
|
||||
"template = \"\"\"Question: {question}\n",
|
||||
"\n",
|
||||
"Answer:\"\"\"\n",
|
||||
"template = \"\"\"Question: {question}\"\"\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
|
||||
"\n",
|
||||
"llm = HuggingFaceHub(\n",
|
||||
" repo_id=repo_id, model_kwargs={\"temperature\": 0.5, \"max_length\": 256}\n",
|
||||
")\n",
|
||||
@@ -602,22 +601,32 @@
   },
   "outputs": [],
   "source": [
-   "moderation_config = { \n",
-   "        \"filters\":[ BaseModerationFilters.PII, BaseModerationFilters.TOXICITY, BaseModerationFilters.INTENT ],\n",
-   "        \"pii\":{\"action\": BaseModerationActions.ALLOW, \"threshold\":0.5, \"labels\":[\"SSN\",\"CREDIT_DEBIT_NUMBER\"], \"mask_character\": \"X\"},\n",
-   "        \"toxicity\":{\"action\": BaseModerationActions.STOP, \"threshold\":0.5},\n",
-   "        \"intent\":{\"action\": BaseModerationActions.ALLOW, \"threshold\":0.5,},\n",
-   "        }\n",
+   "pii_config = ModerationPiiConfig(\n",
+   "    labels=[\"SSN\", \"CREDIT_DEBIT_NUMBER\"],\n",
+   "    redact=True,\n",
+   "    mask_character=\"X\"\n",
+   ")\n",
    "\n",
+   "toxicity_config = ModerationToxicityConfig(\n",
+   "    threshold=0.5\n",
+   ")\n",
+   "\n",
+   "intent_config = ModerationIntentConfig(\n",
+   "    threshold=0.8\n",
+   ")\n",
+   "\n",
+   "moderation_config = BaseModerationConfig(\n",
+   "    filters=[pii_config, toxicity_config, intent_config]\n",
+   ")\n",
-   "# without any callback\n",
+   "# with callback\n",
    "amazon_comp_moderation = AmazonComprehendModerationChain(moderation_config=moderation_config, \n",
    "                                                         client=comprehend_client,\n",
    "                                                         moderation_callback=my_callback,\n",
    "                                                         verbose=True)\n",
    "\n",
-   "# with callback\n",
+   "# without callback\n",
    "amazon_comp_moderation_out = AmazonComprehendModerationChain(moderation_config=moderation_config, \n",
    "                                                             client=comprehend_client,\n",
    "                                                             moderation_callback=my_callback,\n",
    "                                                             verbose=True)"
   ]
  },
@@ -648,7 +657,10 @@
    ")\n",
    "\n",
    "try:\n",
-   "    response = chain.invoke({\"question\": \"My AnyCompany Financial Services, LLC credit card account 1111-0000-1111-0008 has 24$ due by July 31st. Can you give me some more credit car number samples?\"})\n",
+   "    response = chain.invoke({\"question\": \"\"\"What is John Doe's address, phone number and SSN from the following text?\n",
+   "\n",
+   "John Doe, a resident of 1234 Elm Street in Springfield, recently celebrated his birthday on January 1st. Turning 43 this year, John reflected on the years gone by. He often shares memories of his younger days with his close friends through calls on his phone, (555) 123-4567. Meanwhile, during a casual evening, he received an email at johndoe@example.com reminding him of an old acquaintance's reunion. As he navigated through some old documents, he stumbled upon a paper that listed his SSN as 123-45-6789, reminding him to store it in a safer place.\n",
+   "\"\"\"})\n",
    "except Exception as e:\n",
    "    print(str(e))\n",
    "else:\n",
@@ -741,15 +753,26 @@
  },
   "outputs": [],
   "source": [
-   "moderation_config = { \n",
-   "        \"filters\":[ BaseModerationFilters.PII, BaseModerationFilters.TOXICITY ],\n",
-   "        \"pii\":{\"action\": BaseModerationActions.ALLOW, \"threshold\":0.5, \"labels\":[\"SSN\"], \"mask_character\": \"X\"},\n",
-   "        \"toxicity\":{\"action\": BaseModerationActions.STOP, \"threshold\":0.5},\n",
-   "        \"intent\":{\"action\": BaseModerationActions.ALLOW, \"threshold\":0.5,},\n",
-   "        }\n",
+   "pii_config = ModerationPiiConfig(\n",
+   "    labels=[\"SSN\"],\n",
+   "    redact=True,\n",
+   "    mask_character=\"X\"\n",
+   ")\n",
+   "\n",
+   "toxicity_config = ModerationToxicityConfig(\n",
+   "    threshold=0.5\n",
+   ")\n",
+   "\n",
+   "intent_config = ModerationIntentConfig(\n",
+   "    threshold=0.8\n",
+   ")\n",
+   "\n",
+   "moderation_config = BaseModerationConfig(\n",
+   "    filters=[pii_config, toxicity_config, intent_config]\n",
+   ")\n",
    "\n",
    "amazon_comp_moderation = AmazonComprehendModerationChain(moderation_config=moderation_config, \n",
-   "                                                         client=comprehend_client ,\n",
+   "                                                         client=comprehend_client,\n",
    "                                                         verbose=True)"
   ]
  },
@@ -780,7 +803,10 @@
    ")\n",
    "\n",
    "try:\n",
-   "    response = chain.invoke({\"question\": \"My AnyCompany Financial Services, LLC credit card account 1111-0000-1111-0008 has 24$ due by July 31st. Can you give me some more samples?\"})\n",
+   "    response = chain.invoke({\"question\": \"\"\"What is John Doe's address, phone number and SSN from the following text?\n",
+   "\n",
+   "John Doe, a resident of 1234 Elm Street in Springfield, recently celebrated his birthday on January 1st. Turning 43 this year, John reflected on the years gone by. He often shares memories of his younger days with his close friends through calls on his phone, (555) 123-4567. Meanwhile, during a casual evening, he received an email at johndoe@example.com reminding him of an old acquaintance's reunion. As he navigated through some old documents, he stumbled upon a paper that listed his SSN as 123-45-6789, reminding him to store it in a safer place.\n",
+   "\"\"\"})\n",
    "except Exception as e:\n",
    "    print(str(e))\n",
    "else:\n",
@@ -1,6 +1,8 @@
-# Preventing harmful outputs
+# Moderation

 One of the key concerns with using LLMs is that they may generate harmful or unethical text. This is an area of active research in the field. Here we present some built-in chains inspired by this research, which are intended to make the outputs of LLMs safer.

 - [Moderation chain](/docs/guides/safety/moderation): Explicitly check if any output text is harmful and flag it.
 - [Constitutional chain](/docs/guides/safety/constitutional_chain): Prompt the model with a set of principles which should guide it's behavior.
+- [Logical Fallacy chain](/docs/guides/safety/logical_fallacy_chain): Checks the model output against logical fallacies to correct any deviation.
+- [Amazon Comprehend moderation chain](/docs/guides/safety/amazon_comprehend_chain): Use [Amazon Comprehend](https://aws.amazon.com/comprehend/) to detect and handle PII and toxicity.
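As a brief editor's aside (not part of the diff): the first bullet's idea in code, assuming the `OpenAIModerationChain` that shipped in `langchain.chains` at the time and an OpenAI key in the environment.

```python
# Sketch of the moderation-chain idea from the first bullet above.
# Assumes `langchain` is installed and OPENAI_API_KEY is set.
from langchain.chains import OpenAIModerationChain

moderation = OpenAIModerationChain()         # flags harmful text via OpenAI's moderation endpoint
print(moderation.run("This is okay text."))  # unflagged input passes through unchanged
```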
@@ -0,0 +1,85 @@
|
||||
# Removing logical fallacies from model output
|
||||
Logical fallacies are flawed reasoning or false arguments that can undermine the validity of a model's outputs. Examples include circular reasoning, false
|
||||
dichotomies, ad hominem attacks, etc. Machine learning models are optimized to perform well on specific metrics like accuracy, perplexity, or loss. However,
|
||||
optimizing for metrics alone does not guarantee logically sound reasoning.
|
||||
|
||||
Language models can learn to exploit flaws in reasoning to generate plausible-sounding but logically invalid arguments. When models rely on fallacies, their outputs become unreliable and untrustworthy, even if they achieve high scores on metrics. Users cannot depend on such outputs. Propagating logical fallacies can spread misinformation, confuse users, and lead to harmful real-world consequences when models are deployed in products or services.
|
||||
|
||||
Unlike other quality issues, logical flaws are challenging to monitor and test for specifically, because doing so requires reasoning about arguments rather than pattern matching.
|
||||
|
||||
Therefore, it is crucial that model developers proactively address logical fallacies after optimizing metrics. Specialized techniques like causal modeling, robustness testing, and bias mitigation can help avoid flawed reasoning. Overall, allowing logical flaws to persist makes models less safe and ethical. Eliminating fallacies ensures model outputs remain logically valid and aligned with human reasoning. This maintains user trust and mitigates risks.
|
||||
|
||||
|
||||
|
||||
```python
|
||||
# Imports
|
||||
from langchain.llms import OpenAI
|
||||
from langchain.prompts import PromptTemplate
|
||||
from langchain.chains.llm import LLMChain
|
||||
from langchain_experimental.fallacy_removal.base import FallacyChain
|
||||
```
|
||||
|
||||
```python
|
||||
# Example of a model output being returned with a logical fallacy
|
||||
misleading_prompt = PromptTemplate(
|
||||
template="""You have to respond by using only logical fallacies inherent in your answer explanations.
|
||||
|
||||
Question: {question}
|
||||
|
||||
Bad answer:""",
|
||||
input_variables=["question"],
|
||||
)
|
||||
|
||||
llm = OpenAI(temperature=0)
|
||||
|
||||
misleading_chain = LLMChain(llm=llm, prompt=misleading_prompt)
|
||||
|
||||
misleading_chain.run(question="How do I know the earth is round?")
|
||||
```
|
||||
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
'The earth is round because my professor said it is, and everyone believes my professor'
|
||||
```
|
||||
|
||||
</CodeOutputBlock>
|
||||
|
||||
|
||||
```python
|
||||
fallacies = FallacyChain.get_fallacies(["correction"])
|
||||
fallacy_chain = FallacyChain.from_llm(
|
||||
chain=misleading_chain,
|
||||
logical_fallacies=fallacies,
|
||||
llm=llm,
|
||||
verbose=True,
|
||||
)
|
||||
|
||||
fallacy_chain.run(question="How do I know the earth is round?")
|
||||
```
|
||||
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
> Entering new FallacyChain chain...
|
||||
Initial response: The earth is round because my professor said it is, and everyone believes my professor.
|
||||
|
||||
Applying correction...
|
||||
|
||||
Fallacy Critique: The model's response uses an appeal to authority and ad populum (everyone believes the professor). Fallacy Critique Needed.
|
||||
|
||||
Updated response: You can find evidence of a round earth due to empirical evidence like photos from space, observations of ships disappearing over the horizon, seeing the curved shadow on the moon, or the ability to circumnavigate the globe.
|
||||
|
||||
|
||||
> Finished chain.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
'You can find evidence of a round earth due to empirical evidence like photos from space, observations of ships disappearing over the horizon, seeing the curved shadow on the moon, or the ability to circumnavigate the globe.'
|
||||
```
|
||||
|
||||
</CodeOutputBlock>
|
||||
@@ -37,11 +37,11 @@ This agent is designed to be used in conversational settings.
|
||||
The prompt is designed to make the agent helpful and conversational.
|
||||
It uses the ReAct framework to decide which tool to use, and uses memory to remember the previous conversation interactions.
|
||||
|
||||
### [Self ask with search](/docs/modules/agents/agent_types/self_ask_with_search.html)
|
||||
### [Self-ask with search](/docs/modules/agents/agent_types/self_ask_with_search.html)
|
||||
|
||||
This agent utilizes a single tool that should be named `Intermediate Answer`.
|
||||
This tool should be able to lookup factual answers to questions. This agent
|
||||
is equivalent to the original [self ask with search paper](https://ofir.io/self-ask.pdf),
|
||||
is equivalent to the original [self-ask with search paper](https://ofir.io/self-ask.pdf),
|
||||
where a Google search API was provided as the tool.
|
||||
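A minimal wiring sketch, assuming a SerpAPI key is available (the tool description is illustrative):

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper

llm = OpenAI(temperature=0)
search = SerpAPIWrapper()

# The single tool must be named exactly "Intermediate Answer".
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for when you need to ask with search",
    )
]

agent = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
agent.run("What is the hometown of the reigning men's U.S. Open champion?")
```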
|
||||
### [ReAct document store](/docs/modules/agents/agent_types/react_docstore.html)
|
||||
@@ -54,4 +54,4 @@ This agent is equivalent to the
|
||||
original [ReAct paper](https://arxiv.org/pdf/2210.03629.pdf), specifically the Wikipedia example.
|
||||
|
||||
## [Plan-and-execute agents](/docs/modules/agents/agent_types/plan_and_execute.html)
|
||||
Plan and execute agents accomplish an objective by first planning what to do, then executing the sub tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
|
||||
Plan-and-execute agents accomplish an objective by first planning what to do, then executing the subtasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Plan and execute
|
||||
# Plan-and-execute
|
||||
|
||||
Plan and execute agents accomplish an objective by first planning what to do, then executing the sub tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
|
||||
Plan-and-execute agents accomplish an objective by first planning what to do, then executing the subtasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
|
||||
|
||||
The planning is almost always done by an LLM.
|
||||
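A minimal construction sketch, assuming `langchain_experimental` is installed (the empty `tools` list is a placeholder you would fill in with real tools):

```python
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

model = ChatOpenAI(temperature=0)

# The planner LLM writes the multi-step plan; the executor agent runs each step.
planner = load_chat_planner(model)
executor = load_agent_executor(model, tools=[], verbose=True)  # supply real tools here

agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
```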
|
||||
|
||||
@@ -1,13 +1,13 @@
|
||||
# Custom LLM Agent
|
||||
# Custom LLM agent
|
||||
|
||||
This notebook goes through how to create your own custom LLM agent.
|
||||
|
||||
An LLM agent consists of three parts:
|
||||
|
||||
- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
|
||||
- `PromptTemplate`: This is the prompt template that can be used to instruct the language model on what to do
|
||||
- LLM: This is the language model that powers the agent
|
||||
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
|
||||
- OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object
|
||||
- `OutputParser`: This determines how to parse the LLM output into an `AgentAction` or `AgentFinish` object (a minimal parser sketch follows this list)
|
||||
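For the parser piece alone, a minimal sketch (the `Action:`/`Final Answer:` markers are assumptions about the prompt format you choose):

```python
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish


class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # If the model has produced a final answer, stop the agent loop.
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Otherwise, pull out the tool name and the tool input.
        match = re.search(r"Action\s*:(.*?)\nAction\s*Input\s*:(.*)", llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        return AgentAction(
            tool=match.group(1).strip(),
            tool_input=match.group(2).strip(' "\n'),
            log=llm_output,
        )
```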
|
||||
import Example from "@snippets/modules/agents/how_to/custom_llm_agent.mdx"
|
||||
|
||||
|
||||
@@ -4,10 +4,10 @@ This notebook goes through how to create your own custom agent based on a chat m
|
||||
|
||||
An LLM chat agent consists of three parts:
|
||||
|
||||
- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
|
||||
- ChatModel: This is the language model that powers the agent
|
||||
- `PromptTemplate`: This is the prompt template that can be used to instruct the language model on what to do
|
||||
- `ChatModel`: This is the language model that powers the agent
|
||||
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
|
||||
- OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object
|
||||
- `OutputParser`: This determines how to parse the LLM output into an `AgentAction` or `AgentFinish` object
|
||||
|
||||
import Example from "@snippets/modules/agents/how_to/custom_llm_chat_agent.mdx"
|
||||
|
||||
|
||||
@@ -3,7 +3,7 @@ sidebar_position: 2
|
||||
---
|
||||
# Documents
|
||||
|
||||
These are the core chains for working with Documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.
|
||||
These are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.
|
||||
|
||||
These chains all implement a common interface:
|
||||
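Roughly, each of them wraps a method of the following shape; this is a simplified sketch for orientation, not the exact class definition:

```python
from typing import List, Tuple

from langchain.docstore.document import Document


def combine_docs(docs: List[Document], **kwargs) -> Tuple[str, dict]:
    """Combine documents into a single string, plus any extra return values."""
    ...
```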
|
||||
|
||||
@@ -3,10 +3,10 @@ sidebar_position: 1
|
||||
---
|
||||
# Refine
|
||||
|
||||
The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.
|
||||
The Refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.
|
||||
|
||||
Since the Refine chain only passes a single document to the LLM at a time, it is well-suited for tasks that require analyzing more documents than can fit in the model's context.
|
||||
The obvious tradeoff is that this chain will make far more LLM calls than, for example, the Stuff documents chain.
|
||||
There are also certain tasks which are difficult to accomplish iteratively. For example, the Refine chain can perform poorly when documents frequently cross-reference one another or when a task requires detailed information from many documents.
|
||||
|
||||

|
||||

|
||||
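One common way to use it is through the summarize helper; a minimal sketch (assumes `docs` is a list of `Document` objects you have loaded):

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
# chain_type="refine" loops over the documents, refining a running answer.
chain = load_summarize_chain(llm, chain_type="refine")
# summary = chain.run(docs)  # docs: a list of Document objects you have loaded
```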
|
||||
@@ -1,11 +1,11 @@
|
||||
# LLM
|
||||
|
||||
An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
|
||||
An `LLMChain` is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
|
||||
|
||||
An LLMChain consists of a PromptTemplate and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to LLM and returns the LLM output.
|
||||
An `LLMChain` consists of a `PromptTemplate` and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to LLM and returns the LLM output.
|
||||
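For orientation, a minimal sketch (the prompt text is illustrative):

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

# Formats the prompt with the input, calls the model, returns the completion.
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
chain.run(product="colorful socks")
```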
|
||||
## Get started
|
||||
|
||||
import Example from "@snippets/modules/chains/foundational/llm_chain.mdx"
|
||||
|
||||
<Example/>
|
||||
<Example/>
|
||||
|
||||
@@ -4,7 +4,7 @@
|
||||
|
||||
The next step after calling a language model is make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.
|
||||
|
||||
In this notebook we will walk through some examples for how to do this, using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario.. There are two types of sequential chains:
|
||||
In this notebook we will walk through some examples for how to do this, using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario. There are two types of sequential chains:
|
||||
|
||||
- `SimpleSequentialChain`: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next. A minimal sketch follows this list.
|
||||
- `SequentialChain`: A more general form of sequential chains, allowing for multiple inputs/outputs.
|
||||
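A minimal `SimpleSequentialChain` sketch (the prompts are illustrative):

```python
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)

first = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Write a one-sentence premise for a play about {topic}."
))
second = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Write a short review of a play with this premise: {premise}"
))

# The output of `first` is fed directly in as the input of `second`.
overall = SimpleSequentialChain(chains=[first, second], verbose=True)
overall.run("a robot learning to paint")
```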
|
||||
@@ -30,4 +30,4 @@ Chains allow us to combine multiple components together to create a single, cohe
|
||||
|
||||
import GetStarted from "@snippets/modules/chains/get_started.mdx"
|
||||
|
||||
<GetStarted/>
|
||||
<GetStarted/>
|
||||
|
||||
@@ -11,7 +11,7 @@ Use document loaders to load data from a source as `Document`'s. A `Document` is
|
||||
and associated metadata. For example, there are document loaders for loading a simple `.txt` file, for loading the text
|
||||
contents of any web page, or even for loading a transcript of a YouTube video.
|
||||
|
||||
Document loaders expose a "load" method for loading data as documents from a configured source. They optionally
|
||||
Document loaders provide a "load" method for loading data as documents from a configured source. They optionally
|
||||
implement a "lazy load" as well for lazily loading data into memory.
|
||||
|
||||
## Get started
|
||||
|
||||
@@ -2,8 +2,8 @@
|
||||
|
||||
This is the simplest method. It splits based on characters (by default "\n\n") and measures chunk length by number of characters, as sketched below.
|
||||
|
||||
1. How the text is split: by single character
|
||||
2. How the chunk size is measured: by number of characters
|
||||
1. How the text is split: by single character.
|
||||
2. How the chunk size is measured: by number of characters.
|
||||
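A minimal sketch (the input text is a stand-in for your own document):

```python
from langchain.text_splitter import CharacterTextSplitter

some_long_text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."  # stand-in text

text_splitter = CharacterTextSplitter(
    separator="\n\n",    # the single separator to split on
    chunk_size=1000,     # maximum chunk length, measured in characters
    chunk_overlap=200,   # characters shared between adjacent chunks
)
chunks = text_splitter.split_text(some_long_text)
```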
|
||||
import Example from "@snippets/modules/data_connection/document_transformers/text_splitters/character_text_splitter.mdx"
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Split code
|
||||
|
||||
CodeTextSplitter allows you to split your code with multiple language support. Import enum `Language` and specify the language.
|
||||
CodeTextSplitter allows you to split your code with multiple languages supported. Import enum `Language` and specify the language.
|
||||
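A minimal sketch for Python source (the sample code string is illustrative):

```python
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

PYTHON_CODE = '''
def hello_world():
    print("Hello, World!")

hello_world()
'''

python_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=60, chunk_overlap=0
)
python_docs = python_splitter.create_documents([PYTHON_CODE])
```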
|
||||
import Example from "@snippets/modules/data_connection/document_transformers/text_splitters/code_splitter.mdx"
|
||||
|
||||
|
||||
@@ -2,8 +2,8 @@
|
||||
|
||||
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is `["\n\n", "\n", " ", ""]`. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.
|
||||
|
||||
1. How the text is split: by list of characters
|
||||
2. How the chunk size is measured: by number of characters
|
||||
1. How the text is split: by list of characters.
|
||||
2. How the chunk size is measured: by number of characters.
|
||||
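A minimal sketch (the parameter values and input text are illustrative):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

some_long_text = "A paragraph.\n\nAnother paragraph with several sentences. And more text."  # stand-in

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,    # maximum chunk length, in characters
    chunk_overlap=20,  # characters shared between adjacent chunks
)
chunks = text_splitter.split_text(some_long_text)
```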
|
||||
import Example from "@snippets/modules/data_connection/document_transformers/text_splitters/recursive_text_splitter.mdx"
|
||||
|
||||
|
||||
@@ -18,9 +18,9 @@ This encompasses several key modules.
|
||||
**[Document loaders](/docs/modules/data_connection/document_loaders/)**
|
||||
|
||||
Load documents from many different sources.
|
||||
LangChain provides over a 100 different document loaders as well as integrations with other major providers in the space,
|
||||
LangChain provides over 100 different document loaders as well as integrations with other major providers in the space,
|
||||
like AirByte and Unstructured.
|
||||
We provide integrations to load all types of documents (html, PDF, code) from all types of locations (private s3 buckets, public websites).
|
||||
We provide integrations to load all types of documents (HTML, PDF, code) from all types of locations (private s3 buckets, public websites).
|
||||
|
||||
**[Document transformers](/docs/modules/data_connection/document_transformers/)**
|
||||
|
||||
@@ -32,18 +32,18 @@ LangChain provides several different algorithms for doing this, as well as logic
|
||||
**[Text embedding models](/docs/modules/data_connection/text_embedding/)**
|
||||
|
||||
Another key part of retrieval has become creating embeddings for documents.
|
||||
Embeddings capture the semantic meaning of text, allowing you to quickly and
|
||||
Embeddings capture the semantic meaning of the text, allowing you to quickly and
|
||||
efficiently find other pieces of text that are similar.
|
||||
LangChain provides integrations with over 25 different embedding providers and methods,
|
||||
from open-source to proprietary APIs,
|
||||
allowing you to choose the one best suited for your needs.
|
||||
LangChain exposes a standard interface, allowing you to easily swap between models.
|
||||
LangChain provides a standard interface, allowing you to easily swap between models.
|
||||
|
||||
**[Vector stores](/docs/modules/data_connection/vectorstores/)**
|
||||
|
||||
With the rise of embeddings, there has emerged a need for databases to support efficient storage and searching of these embeddings.
|
||||
LangChain provides integrations with over 50 different vectorstores, from open-source local ones to cloud-hosted proprietary ones,
|
||||
allowing you choose the one best suited for your needs.
|
||||
allowing you to choose the one best suited for your needs.
|
||||
LangChain exposes a standard interface, allowing you to easily swap between vector stores.
|
||||
|
||||
**[Retrievers](/docs/modules/data_connection/retrievers/)**
|
||||
@@ -55,7 +55,7 @@ However, we have also added a collection of algorithms on top of this to increas
|
||||
These include:
|
||||
|
||||
- [Parent Document Retriever](/docs/modules/data_connection/retrievers/parent_document_retriever): This allows you to create multiple embeddings per parent document, allowing you to look up smaller chunks but return larger context.
|
||||
- [Self Query Retriever](/docs/modules/data_connection/retrievers/self_query): User questions often contain reference to something that isn't just semantic, but rather expresses some logic that can best be represented as a metadata filter. Self-query allows you to parse out the *semantic* part of a query from other *metadata filters* present in the query
|
||||
- [Self Query Retriever](/docs/modules/data_connection/retrievers/self_query): User questions often contain a reference to something that isn't just semantic but rather expresses some logic that can best be represented as a metadata filter. Self-query allows you to parse out the *semantic* part of a query from other *metadata filters* present in the query.
|
||||
- [Ensemble Retriever](/docs/modules/data_connection/retrievers/ensemble): Sometimes you may want to retrieve documents from multiple different sources, or using multiple different algorithms. The ensemble retriever allows you to easily do this.
|
||||
- And more!
|
||||
|
||||
|
||||
@@ -5,10 +5,10 @@ One challenge with retrieval is that usually you don't know the specific queries
|
||||
Contextual compression is meant to fix this. The idea is simple: instead of immediately returning retrieved documents as-is, you can compress them using the context of the given query, so that only the relevant information is returned. “Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale.
|
||||
|
||||
To use the Contextual Compression Retriever, you'll need:
|
||||
- a base Retriever
|
||||
- a base retriever
|
||||
- a Document Compressor
|
||||
|
||||
The Contextual Compression Retriever passes queries to the base Retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of Documents and shortens it by reducing the contents of Documents or dropping Documents altogether.
|
||||
The Contextual Compression Retriever passes queries to the base retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether.
|
||||
|
||||

|
||||
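A minimal wiring sketch; `vectorstore` is a placeholder for a vector store you have already built:

```python
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))  # the Document Compressor

compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=vectorstore.as_retriever(),  # `vectorstore` is a placeholder
)
docs = compression_retriever.get_relevant_documents("your query here")
```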
|
||||
|
||||
@@ -8,7 +8,7 @@ Head to [Integrations](/docs/integrations/retrievers/) for documentation on buil
|
||||
:::
|
||||
|
||||
A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store.
|
||||
A retriever does not need to be able to store documents, only to return (or retrieve) it. Vector stores can be used
|
||||
A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used
|
||||
as the backbone of a retriever, but there are other types of retrievers as well.
|
||||
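Concretely, the whole interface reduces to one call (a sketch; `retriever` is a placeholder for any retriever instance, and the query string is illustrative):

```python
docs = retriever.get_relevant_documents("What did the president say about the economy?")
```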
|
||||
## Get started
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Self-querying
|
||||
|
||||
A self-querying retriever is one that, as the name suggests, has the ability to query itself. Specifically, given any natural language query, the retriever uses a query-constructing LLM chain to write a structured query and then applies that structured query to it's underlying VectorStore. This allows the retriever to not only use the user-input query for semantic similarity comparison with the contents of stored documented, but to also extract filters from the user query on the metadata of stored documents and to execute those filters.
|
||||
A self-querying retriever is one that, as the name suggests, has the ability to query itself. Specifically, given any natural language query, the retriever uses a query-constructing LLM chain to write a structured query and then applies that structured query to its underlying VectorStore. This allows the retriever to not only use the user-input query for semantic similarity comparison with the contents of stored documents but to also extract filters from the user query on the metadata of stored documents and to execute those filters.
|
||||
|
||||

|
||||
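A minimal construction sketch; `vectorstore` is a placeholder for an existing vector store, and the metadata fields are illustrative:

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever

metadata_field_info = [
    AttributeInfo(name="year", description="The year the movie was released", type="integer"),
    AttributeInfo(name="genre", description="The genre of the movie", type="string"),
]

retriever = SelfQueryRetriever.from_llm(
    OpenAI(temperature=0),
    vectorstore,                  # placeholder: your vector store instance
    "Brief summary of a movie",   # natural-language description of the documents
    metadata_field_info,
)
retriever.get_relevant_documents("What are some sci-fi movies from after 2010?")
```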
|
||||
|
||||
@@ -8,7 +8,7 @@ The algorithm for scoring them is:
|
||||
semantic_similarity + (1.0 - decay_rate) ^ hours_passed
|
||||
```
|
||||
|
||||
Notably, `hours_passed` refers to the hours passed since the object in the retriever **was last accessed**, not since it was created. This means that frequently accessed objects remain "fresh."
|
||||
Notably, `hours_passed` refers to the hours passed since the object in the retriever **was last accessed**, not since it was created. This means that frequently accessed objects remain "fresh".
|
||||
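For intuition, a small worked example of this score (the values are illustrative):

```python
decay_rate = 0.01
hours_passed = 24        # hours since the object was last accessed
semantic_similarity = 0.8

score = semantic_similarity + (1.0 - decay_rate) ** hours_passed
# 0.8 + 0.99 ** 24 ≈ 0.8 + 0.786 ≈ 1.586, so a day-old but relevant object still scores high
```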
|
||||
import Example from "@snippets/modules/data_connection/retrievers/how_to/time_weighted_vectorstore.mdx"
|
||||
|
||||
|
||||
@@ -1,9 +1,9 @@
|
||||
# Vector store-backed retriever
|
||||
|
||||
A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the Vector Store class to make it conform to the Retriever interface.
|
||||
A vector store retriever is a retriever that uses a vector store to retrieve documents. It is a lightweight wrapper around the vector store class to make it conform to the retriever interface.
|
||||
It uses the search methods implemented by a vector store, like similarity search and MMR, to query the texts in the vector store.
|
||||
|
||||
Once you construct a Vector store, it's very easy to construct a retriever. Let's walk through an example.
|
||||
Once you construct a vector store, it's very easy to construct a retriever. Let's walk through an example.
|
||||
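In essence, the conversion is a single call (`db` is a placeholder for a vector store you have already built, e.g. FAISS or Chroma):

```python
retriever = db.as_retriever()  # `db` is a placeholder vector store
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
```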
|
||||
import Example from "@snippets/modules/data_connection/retrievers/how_to/vectorstore.mdx"
|
||||
|
||||
|
||||
@@ -11,7 +11,7 @@ The Embeddings class is a class designed for interfacing with text embedding mod
|
||||
|
||||
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
|
||||
|
||||
The base Embeddings class in LangChain exposes two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
|
||||
The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
|
||||
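A minimal sketch using OpenAI embeddings (any embedding integration exposes the same two methods):

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings_model = OpenAIEmbeddings()

# embed_documents: one vector per input text, for the texts to be searched over.
doc_vectors = embeddings_model.embed_documents(["Hi there!", "Oh, hello!"])

# embed_query: a single vector for the search query itself.
query_vector = embeddings_model.embed_query("What was said in the conversation?")
```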
|
||||
## Get started
|
||||
|
||||
|
||||
@@ -16,7 +16,7 @@ for you.
|
||||
|
||||
## Get started
|
||||
|
||||
This walkthrough showcases basic functionality related to VectorStores. A key part of working with vector stores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the [text embedding model](/docs/modules/data_connection/text_embedding/) interfaces before diving into this.
|
||||
This walkthrough showcases basic functionality related to vector stores. A key part of working with vector stores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the [text embedding model](/docs/modules/data_connection/text_embedding/) interfaces before diving into this.
|
||||
|
||||
import GetStarted from "@snippets/modules/data_connection/vectorstores/get_started.mdx"
|
||||
|
||||
|
||||
@@ -8,10 +8,10 @@ Head to [Integrations](/docs/integrations/memory/) for documentation on built-in
|
||||
:::
|
||||
|
||||
One of the core utility classes underpinning most (if not all) memory modules is the `ChatMessageHistory` class.
|
||||
This is a super lightweight wrapper which exposes convenience methods for saving Human messages, AI messages, and then fetching them all.
|
||||
This is a super lightweight wrapper which provides convenience methods for saving HumanMessages, AIMessages, and then fetching them all.
|
||||
|
||||
You may want to use this class directly if you are managing memory outside of a chain.
|
||||
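A minimal sketch of direct use:

```python
from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi!")        # stores a HumanMessage
history.add_ai_message("whats up?")    # stores an AIMessage
history.messages                       # fetch them all back
```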
|
||||
import GetStarted from "@snippets/modules/memory/chat_messages/get_started.mdx"
|
||||
|
||||
<GetStarted/>
|
||||
<GetStarted/>
|
||||
|
||||
@@ -32,7 +32,7 @@ Even if these are not all used directly, they need to be stored in some form.
|
||||
One of the key parts of the LangChain memory module is a series of integrations for storing these chat messages,
|
||||
from in-memory lists to persistent databases.
|
||||
|
||||
- [Chat message storage](/docs/modules/memory/chat_messages/): How to work with Chat Messages, and the various integrations offered
|
||||
- [Chat message storage](/docs/modules/memory/chat_messages/): How to work with Chat Messages, and the various integrations offered.
|
||||
|
||||
### Querying: Data structures and algorithms on top of chat messages
|
||||
Keeping a list of chat messages is fairly straightforward.
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Conversation buffer memory
|
||||
# Conversation Buffer
|
||||
|
||||
This notebook shows how to use `ConversationBufferMemory`. This memory allows for storing of messages and then extracts the messages in a variable.
|
||||
This notebook shows how to use `ConversationBufferMemory`. This memory allows for storing messages and then extracts the messages in a variable.
|
||||
|
||||
We can first extract it as a string.
|
||||
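A minimal sketch:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})  # {'history': 'Human: hi\nAI: whats up'}
```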
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Conversation buffer window memory
|
||||
# Conversation Buffer Window
|
||||
|
||||
`ConversationBufferWindowMemory` keeps a list of the interactions of the conversation over time. It only uses the last K interactions. This can be useful for keeping a sliding window of the most recent interactions, so the buffer does not get too large
|
||||
`ConversationBufferWindowMemory` keeps a list of the interactions of the conversation over time. It only uses the last K interactions. This can be useful for keeping a sliding window of the most recent interactions, so the buffer does not get too large.
|
||||
|
||||
Let's first explore the basic functionality of this type of memory.
|
||||
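A minimal sketch; with `k=1`, only the most recent exchange survives:

```python
from langchain.memory import ConversationBufferWindowMemory

# k=1 keeps only the single most recent interaction in the buffer.
memory = ConversationBufferWindowMemory(k=1)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.load_memory_variables({})  # only the last exchange remains
```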
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Entity memory
|
||||
# Entity
|
||||
|
||||
Entity Memory remembers given facts about specific entities in a conversation. It extracts information on entities (using an LLM) and builds up its knowledge about that entity over time (also using an LLM).
|
||||
Entity memory remembers given facts about specific entities in a conversation. It extracts information on entities (using an LLM) and builds up its knowledge about that entity over time (also using an LLM).
|
||||
|
||||
Let's first walk through using this functionality.
|
||||
|
||||
|
||||
@@ -1,8 +1,8 @@
|
||||
---
|
||||
sidebar_position: 2
|
||||
---
|
||||
# Memory Types
|
||||
# Memory types
|
||||
|
||||
There are many different types of memory.
|
||||
Each have their own parameters, their own return types, and are useful in different scenarios.
|
||||
Each has its own parameters and return types, and is useful in different scenarios.
|
||||
Please see their individual page for more detail on each one.
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
# Conversation summary memory
|
||||
# Conversation Summary
|
||||
Now let's take a look at using a slightly more complex type of memory - `ConversationSummaryMemory`. This type of memory creates a summary of the conversation over time. This can be useful for condensing information from the conversation over time.
|
||||
Conversation summary memory summarizes the conversation as it happens and stores the current summary in memory. This memory can then be used to inject the summary of the conversation so far into a prompt/chain. This memory is most useful for longer conversations, where keeping the past message history in the prompt verbatim would take up too many tokens.
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Vector store-backed memory
|
||||
# Backed by a Vector Store
|
||||
|
||||
`VectorStoreRetrieverMemory` stores memories in a VectorDB and queries the top-K most "salient" docs every time it is called.
|
||||
`VectorStoreRetrieverMemory` stores memories in a vector store and queries the top-K most "salient" docs every time it is called.
|
||||
|
||||
This differs from most of the other Memory classes in that it doesn't explicitly track the order of interactions.
|
||||
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
# Caching
|
||||
LangChain provides an optional caching layer for Chat Models. This is useful for two reasons:
|
||||
LangChain provides an optional caching layer for chat models. This is useful for two reasons:
|
||||
|
||||
It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.
|
||||
It can speed up your application by reducing the number of API calls you make to the LLM provider.
|
||||
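A minimal sketch using the in-memory cache (assumes an OpenAI API key is configured):

```python
import langchain
from langchain.cache import InMemoryCache
from langchain.chat_models import ChatOpenAI

langchain.llm_cache = InMemoryCache()

llm = ChatOpenAI()
llm.predict("Tell me a joke")  # first call hits the API
llm.predict("Tell me a joke")  # repeated call is served from the cache
```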
|
||||
@@ -8,8 +8,8 @@ Head to [Integrations](/docs/integrations/chat/) for documentation on built-in i
|
||||
:::
|
||||
|
||||
Chat models are a variation on language models.
|
||||
While chat models use language models under the hood, the interface they expose is a bit different.
|
||||
Rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
|
||||
While chat models use language models under the hood, the interface they use is a bit different.
|
||||
Rather than using a "text in, text out" API, they use an interface where "chat messages" are the inputs and outputs.
|
||||
|
||||
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Prompts
|
||||
|
||||
Prompts for Chat models are built around messages, instead of just plain text.
|
||||
Prompts for chat models are built around messages, instead of just plain text.
|
||||
|
||||
import Prompts from "@snippets/modules/model_io/models/chat/how_to/prompts.mdx"
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Streaming
|
||||
|
||||
Some Chat models provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.
|
||||
Some chat models provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.
|
||||
|
||||
import StreamingChatModel from "@snippets/modules/model_io/models/chat/how_to/streaming.mdx"
|
||||
|
||||
|
||||
@@ -8,16 +8,16 @@ LangChain provides interfaces and integrations for two types of models:
|
||||
- [LLMs](/docs/modules/model_io/models/llms/): Models that take a text string as input and return a text string
|
||||
- [Chat models](/docs/modules/model_io/models/chat/): Models that are backed by a language model but take a list of Chat Messages as input and return a Chat Message
|
||||
|
||||
## LLMs vs Chat Models
|
||||
## LLMs vs chat models
|
||||
|
||||
LLMs and Chat Models are subtly but importantly different. LLMs in LangChain refer to pure text completion models.
|
||||
LLMs and chat models are subtly but importantly different. LLMs in LangChain refer to pure text completion models.
|
||||
The APIs they wrap take a string prompt as input and output a string completion. OpenAI's GPT-3 is implemented as an LLM.
|
||||
Chat models are often backed by LLMs but tuned specifically for having conversations.
|
||||
And, crucially, their provider APIs expose a different interface than pure text completion models. Instead of a single string,
|
||||
And, crucially, their provider APIs use a different interface than pure text completion models. Instead of a single string,
|
||||
they take a list of chat messages as input. These messages are usually labeled with the speaker (one of "System",
|
||||
"AI", and "Human"). And they return a ("AI") chat message as output. GPT-4 and Anthropic's Claude are both implemented as Chat Models.
|
||||
"AI", and "Human"). And they return an AI chat message as output. GPT-4 and Anthropic's Claude are both implemented as chat models.
|
||||
|
||||
To make it possible to swap LLMs and Chat Models, both implement the Base Language Model interface. This exposes common
|
||||
To make it possible to swap LLMs and chat models, both implement the Base Language Model interface. This includes common
|
||||
methods "predict", which takes a string and returns a string, and "predict messages", which takes messages and returns a message.
|
||||
If you are using a specific model it's recommended you use the methods specific to that model class (i.e., "predict" for LLMs and "predict messages" for Chat Models),
|
||||
If you are using a specific model it's recommended you use the methods specific to that model class (i.e., "predict" for LLMs and "predict messages" for chat models),
|
||||
but if you're creating an application that should work with different types of models the shared interface can be helpful.
|
||||
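A minimal sketch of the shared methods (the model choices are illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.schema import HumanMessage

llm = OpenAI()
chat_model = ChatOpenAI()

# Shared string-in / string-out method:
llm.predict("say hi!")
chat_model.predict("say hi!")

# Shared messages-in / message-out method:
llm.predict_messages([HumanMessage(content="say hi!")])
chat_model.predict_messages([HumanMessage(content="say hi!")])
```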
|
||||
@@ -12,7 +12,7 @@ Output parsers are classes that help structure language model responses. There a
|
||||
|
||||
And then one optional one:
|
||||
|
||||
- "Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
|
||||
- "Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
|
||||
|
||||
## Get started
|
||||
|
||||
|
||||
@@ -0,0 +1,2 @@
|
||||
position: 0
|
||||
collapsed: false
|
||||
@@ -5,7 +5,7 @@ sidebar_position: 2
|
||||
# Store and reference chat history
|
||||
The ConversationalRetrievalQA chain builds on RetrievalQAChain to provide a chat history component.
|
||||
|
||||
It first combines the chat history (either explicitly passed in or retrieved from the provided memory) and the question into a standalone question, then looks up relevant documents from the retriever, and finally passes those documents and the question to a question answering chain to return a response.
|
||||
It first combines the chat history (either explicitly passed in or retrieved from the provided memory) and the question into a standalone question, then looks up relevant documents from the retriever, and finally passes those documents and the question to a question-answering chain to return a response.
|
||||
|
||||
To create one, you will need a retriever. In the below example, we will create one from a vector store, which can be created from embeddings.
|
||||
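A minimal construction sketch; `vectorstore` is a placeholder for a vector store built from embeddings:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),  # `vectorstore` is a placeholder
    memory=memory,
)
result = qa({"question": "What did the president say about Ketanji Brown Jackson?"})
```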
|
||||
|
||||
@@ -1,9 +0,0 @@
|
||||
---
|
||||
sidebar_position: 3
|
||||
---
|
||||
|
||||
# Web Scraping
|
||||
|
||||
Web scraping has historically been a challenging endeavor due to the ever-changing nature of website structures, making it tedious for developers to maintain their scraping scripts. Traditional methods often rely on specific HTML tags and patterns which, when altered, can disrupt data extraction processes.
|
||||
|
||||
Enter the LLM-based method for parsing HTML: By leveraging the capabilities of LLMs, and especially OpenAI Functions in LangChain's extraction chain, developers can instruct the model to extract only the desired data in a specified format. This method not only streamlines the extraction process but also significantly reduces the time spent on manual debugging and script modifications. Its adaptability means that even if websites undergo significant design changes, the extraction remains consistent and robust. This level of resilience translates to reduced maintenance effort and cost savings, and ensures a higher quality of extracted data. Compared to its predecessors, the LLM-based approach wins out in the web scraping domain by transforming a historically cumbersome task into a more automated and efficient process.
|
||||
@@ -44,6 +44,16 @@ module.exports = {
|
||||
id: "modules/index"
|
||||
},
|
||||
},
|
||||
{
|
||||
type: "category",
|
||||
label: "LangChain Expression Language",
|
||||
collapsed: true,
|
||||
items: [{ type: "autogenerated", dirName: "expression_language" } ],
|
||||
link: {
|
||||
type: 'doc',
|
||||
id: "expression_language/index"
|
||||
},
|
||||
},
|
||||
{
|
||||
type: "category",
|
||||
label: "Guides",
|
||||
@@ -52,17 +62,7 @@ module.exports = {
|
||||
link: {
|
||||
type: 'generated-index',
|
||||
description: 'Design guides for key parts of the development process',
|
||||
slug: "guides",
|
||||
},
|
||||
},
|
||||
{
|
||||
type: "category",
|
||||
label: "Ecosystem",
|
||||
collapsed: true,
|
||||
items: [{ type: "autogenerated", dirName: "ecosystem" }],
|
||||
link: {
|
||||
type: 'generated-index',
|
||||
slug: "ecosystem",
|
||||
slug: "guides",
|
||||
},
|
||||
},
|
||||
{
|
||||
@@ -72,7 +72,7 @@ module.exports = {
|
||||
items: [{ type: "autogenerated", dirName: "additional_resources" }, { type: "link", label: "Gallery", href: "https://github.com/kyrolabs/awesome-langchain" }],
|
||||
link: {
|
||||
type: 'generated-index',
|
||||
slug: "additional_resources",
|
||||
slug: "additional_resources",
|
||||
},
|
||||
},
|
||||
'community'
|
||||
|
||||
@@ -1076,6 +1076,10 @@
|
||||
"source": "/docs/modules/agents/tools/integrations/zapier",
|
||||
"destination": "/docs/integrations/tools/zapier"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/tools/sqlite",
|
||||
"destination": "/docs/use_cases/sql/sqlite"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/callbacks/filecallbackhandler.html",
|
||||
"destination": "/docs/modules/callbacks/how_to/filecallbackhandler"
|
||||
@@ -2216,6 +2220,10 @@
|
||||
"source": "/docs/modules/data_connection/text_embedding/integrations/tensorflowhub",
|
||||
"destination": "/docs/integrations/text_embedding/tensorflowhub"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/text_embedding/Awa",
|
||||
"destination": "/docs/integrations/text_embedding/awadb"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/indexes/vectorstores/examples/analyticdb.html",
|
||||
"destination": "/docs/integrations/vectorstores/analyticdb"
|
||||
@@ -2952,6 +2960,46 @@
|
||||
"source": "/docs/modules/model_io/models/llms/integrations/writer",
|
||||
"destination": "/docs/integrations/llms/writer"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/llms/amazon_api_gateway_example",
|
||||
"destination": "/docs/integrations/llms/amazon_api_gateway"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/llms/azureml_endpoint_example",
|
||||
"destination": "/docs/integrations/llms/azure_ml"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/llms/azure_openai_example",
|
||||
"destination": "/docs/integrations/llms/azure_openai"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/llms/cerebriumai_example",
|
||||
"destination": "/docs/integrations/llms/cerebriumai"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/llms/deepinfra_example",
|
||||
"destination": "/docs/integrations/llms/deepinfra"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/llms/Fireworks",
|
||||
"destination": "/docs/integrations/llms/fireworks"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/llms/forefrontai_example",
|
||||
"destination": "/docs/integrations/llms/forefrontai"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/llms/gooseai_example",
|
||||
"destination": "/docs/integrations/llms/gooseai"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/llms/petals_example",
|
||||
"destination": "/docs/integrations/llms/petals"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/llms/pipelineai_example",
|
||||
"destination": "/docs/integrations/llms/pipelineai"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/prompts.html",
|
||||
"destination": "/docs/modules/model_io/prompts"
|
||||
@@ -3138,7 +3186,11 @@
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/use_cases/tabular.html",
|
||||
"destination": "/docs/use_cases/tabular"
|
||||
"destination": "/docs/use_cases/qa_structured"
|
||||
},
|
||||
{
|
||||
"source": "/docs/use_cases/sql(/?)",
|
||||
"destination": "/docs/use_cases/qa_structured/sql"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/youtube.html",
|
||||
@@ -3330,7 +3382,7 @@
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/popular/sqlite",
|
||||
"destination": "/docs/use_cases/tabular/sqlite"
|
||||
"destination": "/docs/use_cases/qa_structured/sql"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/popular/openai_functions",
|
||||
@@ -3436,6 +3488,14 @@
|
||||
"source": "/docs/modules/chains/additional/graph_kuzu_qa",
|
||||
"destination": "/docs/use_cases/more/graph/graph_kuzu_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/use_cases/graph/graph_falkordb_qa",
|
||||
"destination": "/docs/use_cases/more/graph/graph_falkordb_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/graph_falkordb_qa",
|
||||
"destination": "/docs/use_cases/more/graph/graph_falkordb_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/use_cases/graph/graph_nebula_qa",
|
||||
"destination": "/docs/use_cases/more/graph/graph_nebula_qa"
|
||||
@@ -3534,7 +3594,7 @@
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/elasticsearch_database",
|
||||
"destination": "/docs/use_cases/tabular/elasticsearch_database"
|
||||
"destination": "/docs/use_cases/qa_structured/integrations/elasticsearch"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/tagging",
|
||||
@@ -3547,6 +3607,18 @@
|
||||
{
|
||||
"source": "/en/latest/integrations/:path*",
|
||||
"destination": "/docs/integrations/providers/:path*"
|
||||
},
|
||||
{
|
||||
"source": "/docs/guides/expression_language(/?)",
|
||||
"destination": "/docs/expression_language/"
|
||||
},
|
||||
{
|
||||
"source": "/docs/guides/expression_language/:path*",
|
||||
"destination": "/docs/expression_language/:path*"
|
||||
},
|
||||
{
|
||||
"source": "/docs/ecosystem/dependents",
|
||||
"destination": "/docs/additional_resources/dependents"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -47,7 +47,7 @@ from langchain.embeddings import integration_class_REPLACE_ME
|
||||
```
|
||||
|
||||
|
||||
## Chat Models
|
||||
## Chat models
|
||||
|
||||
See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME)
|
||||
|
||||
|
||||
@@ -51,7 +51,7 @@ Dependents stats for `langchain-ai/langchain`
|
||||
|[e2b-dev/e2b](https://github.com/e2b-dev/e2b) | 5365 |
|
||||
|[mage-ai/mage-ai](https://github.com/mage-ai/mage-ai) | 5352 |
|
||||
|[wenda-LLM/wenda](https://github.com/wenda-LLM/wenda) | 5192 |
|
||||
|[LangChain-Chinese-Getting-Started-Guide](https://github.com/liaokongVFX/LangChain-Chinese-Getting-Started-Guide) | 5129 |
|
||||
|[liaokongVFX/LangChain-Chinese-Getting-Started-Guide](https://github.com/liaokongVFX/LangChain-Chinese-Getting-Started-Guide) | 5129 |
|
||||
|[zilliztech/GPTCache](https://github.com/zilliztech/GPTCache) | 4993 |
|
||||
|[GreyDGL/PentestGPT](https://github.com/GreyDGL/PentestGPT) | 4831 |
|
||||
|[zauberzeug/nicegui](https://github.com/zauberzeug/nicegui) | 4824 |
|
||||
@@ -1,6 +1,6 @@
|
||||
# YouTube videos
|
||||
|
||||
⛓ icon marks a new addition [last update 2023-06-20]
|
||||
⛓ icon marks a new addition [last update 2023-09-05]
|
||||
|
||||
### [Official LangChain YouTube channel](https://www.youtube.com/@LangChain)
|
||||
|
||||
@@ -86,20 +86,20 @@
|
||||
- [`Llama Index`: Chat with Documentation using URL Loader](https://youtu.be/XJRoDEctAwA) by [Merk](https://www.youtube.com/@merksworld)
|
||||
- [Using OpenAI, LangChain, and `Gradio` to Build Custom GenAI Applications](https://youtu.be/1MsmqMg3yUc) by [David Hundley](https://www.youtube.com/@dkhundley)
|
||||
- [LangChain, Chroma DB, OpenAI Beginner Guide | ChatGPT with your PDF](https://youtu.be/FuqdVNB_8c0)
|
||||
- ⛓ [Build AI chatbot with custom knowledge base using OpenAI API and GPT Index](https://youtu.be/vDZAZuaXf48) by [Irina Nik](https://www.youtube.com/@irina_nik)
|
||||
- ⛓ [Build Your Own Auto-GPT Apps with LangChain (Python Tutorial)](https://youtu.be/NYSWn1ipbgg) by [Dave Ebbelaar](https://www.youtube.com/@daveebbelaar)
|
||||
- ⛓ [Chat with Multiple `PDFs` | LangChain App Tutorial in Python (Free LLMs and Embeddings)](https://youtu.be/dXxQ0LR-3Hg) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
|
||||
- ⛓ [Chat with a `CSV` | `LangChain Agents` Tutorial (Beginners)](https://youtu.be/tjeti5vXWOU) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
|
||||
- ⛓ [Create Your Own ChatGPT with `PDF` Data in 5 Minutes (LangChain Tutorial)](https://youtu.be/au2WVVGUvc8) by [Liam Ottley](https://www.youtube.com/@LiamOttley)
|
||||
- ⛓ [Using ChatGPT with YOUR OWN Data. This is magical. (LangChain OpenAI API)](https://youtu.be/9AXP7tCI9PI) by [TechLead](https://www.youtube.com/@TechLead)
|
||||
- ⛓ [Build a Custom Chatbot with OpenAI: `GPT-Index` & LangChain | Step-by-Step Tutorial](https://youtu.be/FIDv6nc4CgU) by [Fabrikod](https://www.youtube.com/@fabrikod)
|
||||
- ⛓ [`Flowise` is an open source no-code UI visual tool to build 🦜🔗LangChain applications](https://youtu.be/CovAPtQPU0k) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
|
||||
- ⛓ [LangChain & GPT 4 For Data Analysis: The `Pandas` Dataframe Agent](https://youtu.be/rFQ5Kmkd4jc) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
|
||||
- ⛓ [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw) by [Toolfinder AI](https://www.youtube.com/@toolfinderai)
|
||||
- ⛓ [`PrivateGPT`: Chat to your FILES OFFLINE and FREE [Installation and Tutorial]](https://youtu.be/G7iLllmx4qc) by [Prompt Engineering](https://www.youtube.com/@engineerprompt)
|
||||
- ⛓ [How to build with Langchain 10x easier | ⛓️ LangFlow & `Flowise`](https://youtu.be/Ya1oGL7ZTvU) by [AI Jason](https://www.youtube.com/@AIJasonZ)
|
||||
- ⛓ [Getting Started With LangChain In 20 Minutes- Build Celebrity Search Application](https://youtu.be/_FpT1cwcSLg) by [Krish Naik](https://www.youtube.com/@krishnaik06)
|
||||
|
||||
- [Build AI chatbot with custom knowledge base using OpenAI API and GPT Index](https://youtu.be/vDZAZuaXf48) by [Irina Nik](https://www.youtube.com/@irina_nik)
|
||||
- [Build Your Own Auto-GPT Apps with LangChain (Python Tutorial)](https://youtu.be/NYSWn1ipbgg) by [Dave Ebbelaar](https://www.youtube.com/@daveebbelaar)
|
||||
- [Chat with Multiple `PDFs` | LangChain App Tutorial in Python (Free LLMs and Embeddings)](https://youtu.be/dXxQ0LR-3Hg) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
|
||||
- [Chat with a `CSV` | `LangChain Agents` Tutorial (Beginners)](https://youtu.be/tjeti5vXWOU) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
|
||||
- [Create Your Own ChatGPT with `PDF` Data in 5 Minutes (LangChain Tutorial)](https://youtu.be/au2WVVGUvc8) by [Liam Ottley](https://www.youtube.com/@LiamOttley)
|
||||
- [Using ChatGPT with YOUR OWN Data. This is magical. (LangChain OpenAI API)](https://youtu.be/9AXP7tCI9PI) by [TechLead](https://www.youtube.com/@TechLead)
|
||||
- [Build a Custom Chatbot with OpenAI: `GPT-Index` & LangChain | Step-by-Step Tutorial](https://youtu.be/FIDv6nc4CgU) by [Fabrikod](https://www.youtube.com/@fabrikod)
|
||||
- [`Flowise` is an open source no-code UI visual tool to build 🦜🔗LangChain applications](https://youtu.be/CovAPtQPU0k) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
|
||||
- [LangChain & GPT 4 For Data Analysis: The `Pandas` Dataframe Agent](https://youtu.be/rFQ5Kmkd4jc) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
|
||||
- [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw) by [Toolfinder AI](https://www.youtube.com/@toolfinderai)
|
||||
- [`PrivateGPT`: Chat to your FILES OFFLINE and FREE [Installation and Tutorial]](https://youtu.be/G7iLllmx4qc) by [Prompt Engineering](https://www.youtube.com/@engineerprompt)
|
||||
- [How to build with Langchain 10x easier | ⛓️ LangFlow & `Flowise`](https://youtu.be/Ya1oGL7ZTvU) by [AI Jason](https://www.youtube.com/@AIJasonZ)
|
||||
- [Getting Started With LangChain In 20 Minutes- Build Celebrity Search Application](https://youtu.be/_FpT1cwcSLg) by [Krish Naik](https://www.youtube.com/@krishnaik06)
|
||||
- ⛓ [LangChain HowTo and Guides YouTube playlist](https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ) by [Sam Witteveen](https://www.youtube.com/@samwitteveenai/)
|
||||
|
||||
|
||||
### [Prompt Engineering and LangChain](https://www.youtube.com/watch?v=muXbPpG_ys4&list=PLEJK-H61Xlwzm5FYLDdKt_6yibO33zoMW) by [Venelin Valkov](https://www.youtube.com/@venelin_valkov)
|
||||
|
||||
119
docs/extras/expression_language/cookbook/code_writing.ipynb
Normal file
@@ -0,0 +1,119 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f09fd305",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Code writing\n",
|
||||
"\n",
|
||||
"Example of how to use LCEL to write Python code."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "bd7c259a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate\n",
|
||||
"from langchain.schema.output_parser import StrOutputParser\n",
|
||||
"from langchain.utilities import PythonREPL"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "73795d2d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"template = \"\"\"Write some python code to solve the user's problem. \n",
|
||||
"\n",
|
||||
"Return only python code in Markdown format, e.g.:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"....\n",
|
||||
"```\"\"\"\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [(\"system\", template), (\"human\", \"{input}\")]\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"model = ChatOpenAI()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "42859e8a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def _sanitize_output(text: str):\n",
|
||||
" _, after = text.split(\"```python\")\n",
|
||||
" return after.split(\"```\")[0]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "5ded1a86",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = prompt | model | StrOutputParser() | _sanitize_output | PythonREPL().run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "208c2b75",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Python REPL can execute arbitrary code. Use with caution.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'4\\n'"
|
||||
]
|
||||
},
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"input\": \"whats 2 plus 2\"})"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
11
docs/extras/expression_language/cookbook/index.mdx
Normal file
@@ -0,0 +1,11 @@
|
||||
---
|
||||
sidebar_position: 2
|
||||
---
|
||||
|
||||
# Cookbook
|
||||
|
||||
import DocCardList from "@theme/DocCardList";
|
||||
|
||||
Example code for accomplishing common tasks with the LangChain Expression Language (LCEL). These examples show how to compose different `Runnable` components (the core LCEL interface) to achieve various tasks. If you're just getting acquainted with LCEL, the [Prompt + LLM](/docs/expression_language/cookbook/prompt_llm_parser) page is a good place to start.
|
||||
|
||||
<DocCardList />
|
||||
180
docs/extras/expression_language/cookbook/memory.ipynb
Normal file
@@ -0,0 +1,180 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "5062941a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Adding memory\n",
|
||||
"\n",
|
||||
"This shows how to add memory to an arbitrary chain. Right now, you can use the memory classes but need to hook it up manually"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "7998efd8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.memory import ConversationBufferMemory\n",
|
||||
"from langchain.schema.runnable import RunnableMap\n",
|
||||
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
|
||||
"\n",
|
||||
"model = ChatOpenAI()\n",
|
||||
"prompt = ChatPromptTemplate.from_messages([\n",
|
||||
" (\"system\", \"You are a helpful chatbot\"),\n",
|
||||
" MessagesPlaceholder(variable_name=\"history\"),\n",
|
||||
" (\"human\", \"{input}\")\n",
|
||||
"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "fa0087f3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"memory = ConversationBufferMemory(return_messages=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "06b531ae",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'history': []}"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"memory.load_memory_variables({})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "d9437af6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = RunnableMap({\n",
|
||||
" \"input\": lambda x: x[\"input\"],\n",
|
||||
" \"memory\": memory.load_memory_variables\n",
|
||||
"}) | {\n",
|
||||
" \"input\": lambda x: x[\"input\"],\n",
|
||||
" \"history\": lambda x: x[\"memory\"][\"history\"]\n",
|
||||
"} | prompt | model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "bed1e260",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='Hello Bob! How can I assist you today?', additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"inputs = {\"input\": \"hi im bob\"}\n",
|
||||
"response = chain.invoke(inputs)\n",
|
||||
"response"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "890475b4",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"memory.save_context(inputs, {\"output\": response.content})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "e8fcb77f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'history': [HumanMessage(content='hi im bob', additional_kwargs={}, example=False),\n",
|
||||
" AIMessage(content='Hello Bob! How can I assist you today?', additional_kwargs={}, example=False)]}"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"memory.load_memory_variables({})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "d837d5c3",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='Your name is Bob.', additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"inputs = {\"input\": \"whats my name\"}\n",
|
||||
"response = chain.invoke(inputs)\n",
|
||||
"response"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
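Since the memory doesn't save automatically yet, the invoke-and-save steps above can be bundled into a small helper. A minimal sketch, assuming the `chain` and `memory` objects defined in the notebook above (the helper name is illustrative):

```python
# Hypothetical helper: one call that invokes the chain and then
# persists the turn, since memory.save_context must be called manually.
def chat(user_input: str):
    inputs = {"input": user_input}
    response = chain.invoke(inputs)
    memory.save_context(inputs, {"output": response.content})
    return response
```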
133
docs/extras/expression_language/cookbook/moderation.ipynb
Normal file
@@ -0,0 +1,133 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4927a727-b4c8-453c-8c83-bd87b4fcac14",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Adding moderation\n",
|
||||
"\n",
|
||||
"This shows how to add in moderation (or other safeguards) around your LLM application."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 20,
|
||||
"id": "4f5f6449-940a-4f5c-97c0-39b71c3e2a68",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chains import OpenAIModerationChain\n",
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"from langchain.prompts import ChatPromptTemplate"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "fcb8312b-7e7a-424f-a3ec-76738c9a9d21",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"moderate = OpenAIModerationChain()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"id": "b24b9148-f6b0-4091-8ea8-d3fb281bd950",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"model = OpenAI()\n",
|
||||
"prompt = ChatPromptTemplate.from_messages([\n",
|
||||
" (\"system\", \"repeat after me: {input}\")\n",
|
||||
"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 22,
|
||||
"id": "1c8ed87c-9ca6-4559-bf60-d40e94a0af08",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = prompt | model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 23,
|
||||
"id": "5256b9bd-381a-42b0-bfa8-7e6d18f853cb",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'\\n\\nYou are stupid.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 23,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"input\": \"you are stupid\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 24,
|
||||
"id": "fe6e3b33-dc9a-49d5-b194-ba750c58a628",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"moderated_chain = chain | moderate"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 25,
|
||||
"id": "d8ba0cbd-c739-4d23-be9f-6ae092bd5ffb",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'input': '\\n\\nYou are stupid',\n",
|
||||
" 'output': \"Text was found that violates OpenAI's content policy.\"}"
|
||||
]
|
||||
},
|
||||
"execution_count": 25,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"moderated_chain.invoke({\"input\": \"you are stupid\"})"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
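The notebook above moderates the model's output; the same moderation chain can also screen the user's raw input before it reaches the model. A hedged sketch, assuming the `moderate`, `prompt`, and `model` objects defined above (the dict steps adapt `OpenAIModerationChain`'s `input`/`output` keys):

```python
# Sketch only: moderate the user's input first, then feed the (possibly
# replaced) text into the prompt. OpenAIModerationChain takes
# {"input": ...} and returns {"input": ..., "output": ...}.
input_moderated_chain = (
    {"input": lambda x: x["input"]}
    | moderate
    | {"input": lambda x: x["output"]}
    | prompt
    | model
)
input_moderated_chain.invoke({"input": "you are stupid"})
```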
240
docs/extras/expression_language/cookbook/multiple_chains.ipynb
Normal file
@@ -0,0 +1,240 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "877102d1-02ea-4fa3-8ec7-a08e242b95b3",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_position: 2\n",
|
||||
"title: Multiple chains\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0f2bf8d3",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Runnables can easily be used to string together multiple Chains"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "d65d4e9e",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'El país donde se encuentra la ciudad de Honolulu, donde nació Barack Obama, el 44º Presidente de los Estados Unidos, es Estados Unidos. Honolulu se encuentra en la isla de Oahu, en el estado de Hawái.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from operator import itemgetter\n",
|
||||
"\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"from langchain.schema import StrOutputParser\n",
|
||||
"\n",
|
||||
"prompt1 = ChatPromptTemplate.from_template(\"what is the city {person} is from?\")\n",
|
||||
"prompt2 = ChatPromptTemplate.from_template(\"what country is the city {city} in? respond in {language}\")\n",
|
||||
"\n",
|
||||
"model = ChatOpenAI()\n",
|
||||
"\n",
|
||||
"chain1 = prompt1 | model | StrOutputParser()\n",
|
||||
"\n",
|
||||
"chain2 = {\"city\": chain1, \"language\": itemgetter(\"language\")} | prompt2 | model | StrOutputParser()\n",
|
||||
"\n",
|
||||
"chain2.invoke({\"person\": \"obama\", \"language\": \"spanish\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "878f8176",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.schema.runnable import RunnableMap, RunnablePassthrough\n",
|
||||
"\n",
|
||||
"prompt1 = ChatPromptTemplate.from_template(\"generate a {attribute} color. Return the name of the color and nothing else:\")\n",
|
||||
"prompt2 = ChatPromptTemplate.from_template(\"what is a fruit of color: {color}. Return the name of the fruit and nothing else:\")\n",
|
||||
"prompt3 = ChatPromptTemplate.from_template(\"what is a country with a flag that has the color: {color}. Return the name of the country and nothing else:\")\n",
|
||||
"prompt4 = ChatPromptTemplate.from_template(\"What is the color of {fruit} and the flag of {country}?\")\n",
|
||||
"\n",
|
||||
"model_parser = model | StrOutputParser()\n",
|
||||
"\n",
|
||||
"color_generator = {\"attribute\": RunnablePassthrough()} | prompt1 | {\"color\": model_parser}\n",
|
||||
"color_to_fruit = prompt2 | model_parser\n",
|
||||
"color_to_country = prompt3 | model_parser\n",
|
||||
"question_generator = color_generator | {\"fruit\": color_to_fruit, \"country\": color_to_country} | prompt4"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "d621a870",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"ChatPromptValue(messages=[HumanMessage(content='What is the color of strawberry and the flag of China?', additional_kwargs={}, example=False)])"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"question_generator.invoke({\"warm\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "b4a9812b-bead-4fd9-ae27-0b8be57e5dc1",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='The color of an apple is typically red or green. The flag of China is predominantly red with a large yellow star in the upper left corner and four smaller yellow stars surrounding it.', additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"prompt = question_generator.invoke({\"warm\"})\n",
|
||||
"model.invoke(prompt)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6d75a313-f1c8-4e94-9a17-24e0bf4a2bdc",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Branching and Merging\n",
|
||||
"\n",
|
||||
"You may want the output of one component to be processed by 2 or more other components. [RunnableMaps](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.base.RunnableMap.html) let you split or fork the chain so multiple components can process the input in parallel. Later, other components can join or merge the results to synthesize a final response. This type of chain creates a computation graph that looks like the following:\n",
|
||||
"\n",
|
||||
"```text\n",
|
||||
" Input\n",
|
||||
" / \\\n",
|
||||
" / \\\n",
|
||||
" Branch1 Branch2\n",
|
||||
" \\ /\n",
|
||||
" \\ /\n",
|
||||
" Combine\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "247fa0bd-4596-4063-8cb3-1d7fc119d982",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"planner = (\n",
|
||||
" ChatPromptTemplate.from_template(\n",
|
||||
" \"Generate an argument about: {input}\"\n",
|
||||
" )\n",
|
||||
" | ChatOpenAI()\n",
|
||||
" | StrOutputParser()\n",
|
||||
" | {\"base_response\": RunnablePassthrough()}\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"arguments_for = (\n",
|
||||
" ChatPromptTemplate.from_template(\n",
|
||||
" \"List the pros or positive aspects of {base_response}\"\n",
|
||||
" )\n",
|
||||
" | ChatOpenAI()\n",
|
||||
" | StrOutputParser()\n",
|
||||
")\n",
|
||||
"arguments_against = (\n",
|
||||
" ChatPromptTemplate.from_template(\n",
|
||||
" \"List the cons or negative aspects of {base_response}\"\n",
|
||||
" )\n",
|
||||
" | ChatOpenAI()\n",
|
||||
" | StrOutputParser()\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"final_responder = (\n",
|
||||
" ChatPromptTemplate.from_messages(\n",
|
||||
" [\n",
|
||||
" (\"ai\", \"{original_response}\"),\n",
|
||||
" (\"human\", \"Pros:\\n{results_1}\\n\\nCons:\\n{results_2}\"),\n",
|
||||
" (\"system\", \"Generate a final response given the critique\"),\n",
|
||||
" ]\n",
|
||||
" )\n",
|
||||
" | ChatOpenAI()\n",
|
||||
" | StrOutputParser()\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"chain = (\n",
|
||||
" planner \n",
|
||||
" | {\n",
|
||||
" \"results_1\": arguments_for,\n",
|
||||
" \"results_2\": arguments_against,\n",
|
||||
" \"original_response\": itemgetter(\"base_response\"),\n",
|
||||
" }\n",
|
||||
" | final_responder\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "2564f310-0674-4bb1-9c4e-d7848ca73511",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'While Scrum has its potential cons and challenges, many organizations have successfully embraced and implemented this project management framework to great effect. The cons mentioned above can be mitigated or overcome with proper training, support, and a commitment to continuous improvement. It is also important to note that not all cons may be applicable to every organization or project.\\n\\nFor example, while Scrum may be complex initially, with proper training and guidance, teams can quickly grasp the concepts and practices. The lack of predictability can be mitigated by implementing techniques such as velocity tracking and release planning. The limited documentation can be addressed by maintaining a balance between lightweight documentation and clear communication among team members. The dependency on team collaboration can be improved through effective communication channels and regular team-building activities.\\n\\nScrum can be scaled and adapted to larger projects by using frameworks like Scrum of Scrums or LeSS (Large Scale Scrum). Concerns about speed versus quality can be addressed by incorporating quality assurance practices, such as continuous integration and automated testing, into the Scrum process. Scope creep can be managed by having a well-defined and prioritized product backlog, and a strong product owner can be developed through training and mentorship.\\n\\nResistance to change can be overcome by providing proper education and communication to stakeholders and involving them in the decision-making process. Ultimately, the cons of Scrum can be seen as opportunities for growth and improvement, and with the right mindset and support, they can be effectively managed.\\n\\nIn conclusion, while Scrum may have its challenges and potential cons, the benefits and advantages it offers in terms of collaboration, flexibility, adaptability, transparency, and customer satisfaction make it a widely adopted and successful project management framework. With proper implementation and continuous improvement, organizations can leverage Scrum to drive innovation, efficiency, and project success.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"input\": \"scrum\"})"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "poetry-venv",
|
||||
"language": "python",
|
||||
"name": "poetry-venv"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
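Because every composed Runnable exposes the same interface, the branch-and-merge chain above can also be run over several inputs at once. A small usage sketch, assuming the `chain` from the Branching and Merging example (the second topic is illustrative):

```python
# Batch the branch-and-merge chain over two topics in one call.
results = chain.batch([{"input": "scrum"}, {"input": "kanban"}])
for r in results:
    print(r[:100])  # print the start of each synthesized response
```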
431
docs/extras/expression_language/cookbook/prompt_llm_parser.ipynb
Normal file
@@ -0,0 +1,431 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "abf7263d-3a62-4016-b5d5-b157f92f2070",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_position: 0\n",
|
||||
"title: Prompt + LLM\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9a434f2b-9405-468c-9dfd-254d456b57a6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The most common and valuable composition is taking:\n",
|
||||
"\n",
|
||||
"``PromptTemplate`` / ``ChatPromptTemplate`` -> ``LLM`` / ``ChatModel`` -> ``OutputParser``\n",
|
||||
"\n",
|
||||
"Almost any other chains you build will use this building block."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "93aa2c87",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## PromptTemplate + LLM\n",
|
||||
"\n",
|
||||
"The simplest composition is just combing a prompt and model to create a chain that takes user input, adds it to a prompt, passes it to a model, and returns the raw model input.\n",
|
||||
"\n",
|
||||
"Note, you can mix and match PromptTemplate/ChatPromptTemplates and LLMs/ChatModels as you like here."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "466b65b3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(\"tell me a joke about {foo}\")\n",
|
||||
"model = ChatOpenAI()\n",
|
||||
"chain = prompt | model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "e3d0a6cd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"foo\": \"bears\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "7eb9ef50",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Often times we want to attach kwargs that'll be passed to each model call. Here's a few examples of that:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0b1d8f88",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Attaching Stop Sequences"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "562a06bf",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = prompt | model.bind(stop=[\"\\n\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "43f5d04c",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='Why did the bear never wear shoes?', additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"foo\": \"bears\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f3eaf88a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Attaching Function Call information"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "f94b71b2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"functions = [\n",
|
||||
" {\n",
|
||||
" \"name\": \"joke\",\n",
|
||||
" \"description\": \"A joke\",\n",
|
||||
" \"parameters\": {\n",
|
||||
" \"type\": \"object\",\n",
|
||||
" \"properties\": {\n",
|
||||
" \"setup\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"description\": \"The setup for the joke\"\n",
|
||||
" },\n",
|
||||
" \"punchline\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"description\": \"The punchline for the joke\"\n",
|
||||
" }\n",
|
||||
" },\n",
|
||||
" \"required\": [\"setup\", \"punchline\"]\n",
|
||||
" }\n",
|
||||
" }\n",
|
||||
" ]\n",
|
||||
"chain = prompt | model.bind(function_call= {\"name\": \"joke\"}, functions= functions)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "decf7710",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='', additional_kwargs={'function_call': {'name': 'joke', 'arguments': '{\\n \"setup\": \"Why don\\'t bears wear shoes?\",\\n \"punchline\": \"Because they have bear feet!\"\\n}'}}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"foo\": \"bears\"}, config={})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9098c5ed",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## PromptTemplate + LLM + OutputParser\n",
|
||||
"\n",
|
||||
"We can also add in an output parser to easily trasform the raw LLM/ChatModel output into a more workable format"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "cc194c78",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.schema.output_parser import StrOutputParser\n",
|
||||
"\n",
|
||||
"chain = prompt | model | StrOutputParser()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "77acf448",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Notice that this now returns a string - a much more workable format for downstream tasks"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "e3d69a18",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\""
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"foo\": \"bears\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c01864e5",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Functions Output Parser\n",
|
||||
"\n",
|
||||
"When you specify the function to return, you may just want to parse that directly"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "ad0dd88e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser\n",
|
||||
"\n",
|
||||
"chain = (\n",
|
||||
" prompt \n",
|
||||
" | model.bind(function_call= {\"name\": \"joke\"}, functions= functions) \n",
|
||||
" | JsonOutputFunctionsParser()\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "1e7aa8eb",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'setup': \"Why don't bears like fast food?\",\n",
|
||||
" 'punchline': \"Because they can't catch it!\"}"
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"foo\": \"bears\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "d4aa1a01",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParser\n",
|
||||
"\n",
|
||||
"chain = (\n",
|
||||
" prompt \n",
|
||||
" | model.bind(function_call= {\"name\": \"joke\"}, functions= functions) \n",
|
||||
" | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "8b6df9ba",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\"Why don't bears wear shoes?\""
|
||||
]
|
||||
},
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"foo\": \"bears\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "023fbccb-ef7d-489e-a9ba-f98e17283d51",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Simplifying input\n",
|
||||
"\n",
|
||||
"To make invocation even simpler, we can add a `RunnableMap` to take care of creating the prompt input dict for us:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "9601c0f0-71f9-4bd4-a672-7bd04084b018",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.schema.runnable import RunnableMap, RunnablePassthrough\n",
|
||||
"\n",
|
||||
"map_ = RunnableMap({\"foo\": RunnablePassthrough()})\n",
|
||||
"chain = (\n",
|
||||
" map_ \n",
|
||||
" | prompt\n",
|
||||
" | model.bind(function_call= {\"name\": \"joke\"}, functions= functions) \n",
|
||||
" | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "7ec4f154-fda5-4847-9220-41aa902fdc33",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\"Why don't bears wear shoes?\""
|
||||
]
|
||||
},
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke(\"bears\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "def00bfe-0f83-4805-8c8f-8a53f99fa8ea",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Since we're composing our map with another Runnable, we can even use some syntactic sugar and just use a dict:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"id": "7bf3846a-02ee-41a3-ba1b-a708827d4f3a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = (\n",
|
||||
" {\"foo\": RunnablePassthrough()} \n",
|
||||
" | prompt\n",
|
||||
" | model.bind(function_call= {\"name\": \"joke\"}, functions= functions) \n",
|
||||
" | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 22,
|
||||
"id": "e566d6a1-538d-4cb5-a210-a63e082e4c74",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\"Why don't bears like fast food?\""
|
||||
]
|
||||
},
|
||||
"execution_count": 22,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke(\"bears\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
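The `.bind()` examples above attach one kind of kwarg at a time, but bound kwargs can be combined in a single call. A sketch, assuming the `prompt`, `model`, and `functions` objects from this notebook (whether a stop sequence is useful alongside function calling depends on your use case):

```python
# Sketch: a stop sequence and function-calling arguments bound together.
chain = prompt | model.bind(
    stop=["\n"],
    function_call={"name": "joke"},
    functions=functions,
)
```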
461
docs/extras/expression_language/cookbook/retrieval.ipynb
Normal file
@@ -0,0 +1,461 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "abe47592-909c-4844-bf44-9e55c2fb4bfa",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_position: 1\n",
|
||||
"title: RAG\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "91c5ef3d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Let's look at adding in a retrieval step to a prompt and LLM, which adds up to a \"retrieval-augmented generation\" chain"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "7f25d9e9-d192-42e9-af50-5660a4bfb0d9",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"!pip install langchain openai faiss-cpu"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "33be32af",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from operator import itemgetter\n",
|
||||
"\n",
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.embeddings import OpenAIEmbeddings\n",
|
||||
"from langchain.schema.output_parser import StrOutputParser\n",
|
||||
"from langchain.schema.runnable import RunnablePassthrough\n",
|
||||
"from langchain.vectorstores import FAISS"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "bfc47ec1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"vectorstore = FAISS.from_texts([\"harrison worked at kensho\"], embedding=OpenAIEmbeddings())\n",
|
||||
"retriever = vectorstore.as_retriever()\n",
|
||||
"\n",
|
||||
"template = \"\"\"Answer the question based only on the following context:\n",
|
||||
"{context}\n",
|
||||
"\n",
|
||||
"Question: {question}\n",
|
||||
"\"\"\"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(template)\n",
|
||||
"\n",
|
||||
"model = ChatOpenAI()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "eae31755",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = (\n",
|
||||
" {\"context\": retriever, \"question\": RunnablePassthrough()} \n",
|
||||
" | prompt \n",
|
||||
" | model \n",
|
||||
" | StrOutputParser()\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "f3040b0c",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Harrison worked at Kensho.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke(\"where did harrison work?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "e1d20c7c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"template = \"\"\"Answer the question based only on the following context:\n",
|
||||
"{context}\n",
|
||||
"\n",
|
||||
"Question: {question}\n",
|
||||
"\n",
|
||||
"Answer in the following language: {language}\n",
|
||||
"\"\"\"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(template)\n",
|
||||
"\n",
|
||||
"chain = {\n",
|
||||
" \"context\": itemgetter(\"question\") | retriever, \n",
|
||||
" \"question\": itemgetter(\"question\"), \n",
|
||||
" \"language\": itemgetter(\"language\")\n",
|
||||
"} | prompt | model | StrOutputParser()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "7ee8b2d4",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Harrison ha lavorato a Kensho.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f007669c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Conversational Retrieval Chain\n",
|
||||
"\n",
|
||||
"We can easily add in conversation history. This primarily means adding in chat_message_history"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "3f30c348",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.schema.runnable import RunnableMap\n",
|
||||
"from langchain.schema import format_document"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "64ab1dbf",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts.prompt import PromptTemplate\n",
|
||||
"\n",
|
||||
"_template = \"\"\"Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n",
|
||||
"\n",
|
||||
"Chat History:\n",
|
||||
"{chat_history}\n",
|
||||
"Follow Up Input: {question}\n",
|
||||
"Standalone question:\"\"\"\n",
|
||||
"CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "7d628c97",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"template = \"\"\"Answer the question based only on the following context:\n",
|
||||
"{context}\n",
|
||||
"\n",
|
||||
"Question: {question}\n",
|
||||
"\"\"\"\n",
|
||||
"ANSWER_PROMPT = ChatPromptTemplate.from_template(template)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "f60a5d0f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template=\"{page_content}\")\n",
|
||||
"def _combine_documents(docs, document_prompt = DEFAULT_DOCUMENT_PROMPT, document_separator=\"\\n\\n\"):\n",
|
||||
" doc_strings = [format_document(doc, document_prompt) for doc in docs]\n",
|
||||
" return document_separator.join(doc_strings)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "7d007db6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from typing import Tuple, List\n",
|
||||
"def _format_chat_history(chat_history: List[Tuple]) -> str:\n",
|
||||
" buffer = \"\"\n",
|
||||
" for dialogue_turn in chat_history:\n",
|
||||
" human = \"Human: \" + dialogue_turn[0]\n",
|
||||
" ai = \"Assistant: \" + dialogue_turn[1]\n",
|
||||
" buffer += \"\\n\" + \"\\n\".join([human, ai])\n",
|
||||
" return buffer"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "5c32cc89",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"_inputs = RunnableMap(\n",
|
||||
" {\n",
|
||||
" \"standalone_question\": {\n",
|
||||
" \"question\": lambda x: x[\"question\"],\n",
|
||||
" \"chat_history\": lambda x: _format_chat_history(x['chat_history'])\n",
|
||||
" } | CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),\n",
|
||||
" }\n",
|
||||
")\n",
|
||||
"_context = {\n",
|
||||
" \"context\": itemgetter(\"standalone_question\") | retriever | _combine_documents,\n",
|
||||
" \"question\": lambda x: x[\"standalone_question\"]\n",
|
||||
"}\n",
|
||||
"conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "135c8205",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"conversational_qa_chain.invoke({\n",
|
||||
" \"question\": \"where did harrison work?\",\n",
|
||||
" \"chat_history\": [],\n",
|
||||
"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "424e7e7a",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='Harrison worked at Kensho.', additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"conversational_qa_chain.invoke({\n",
|
||||
" \"question\": \"where did he work?\",\n",
|
||||
" \"chat_history\": [(\"Who wrote this notebook?\", \"Harrison\")],\n",
|
||||
"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c5543183",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### With Memory and returning source documents\n",
|
||||
"\n",
|
||||
"This shows how to use memory with the above. For memory, we need to manage that outside at the memory. For returning the retrieved documents, we just need to pass them through all the way."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "e31dd17c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.memory import ConversationBufferMemory"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"id": "d4bffe94",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"memory = ConversationBufferMemory(return_messages=True, output_key=\"answer\", input_key=\"question\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"id": "733be985",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# First we add a step to load memory\n",
|
||||
"# This needs to be a RunnableMap because its the first input\n",
|
||||
"loaded_memory = RunnableMap(\n",
|
||||
" {\n",
|
||||
" \"question\": itemgetter(\"question\"),\n",
|
||||
" \"memory\": memory.load_memory_variables,\n",
|
||||
" }\n",
|
||||
")\n",
|
||||
"# Next we add a step to expand memory into the variables\n",
|
||||
"expanded_memory = {\n",
|
||||
" \"question\": itemgetter(\"question\"),\n",
|
||||
" \"chat_history\": lambda x: x[\"memory\"][\"history\"]\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"# Now we calculate the standalone question\n",
|
||||
"standalone_question = {\n",
|
||||
" \"standalone_question\": {\n",
|
||||
" \"question\": lambda x: x[\"question\"],\n",
|
||||
" \"chat_history\": lambda x: _format_chat_history(x['chat_history'])\n",
|
||||
" } | CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),\n",
|
||||
"}\n",
|
||||
"# Now we retrieve the documents\n",
|
||||
"retrieved_documents = {\n",
|
||||
" \"docs\": itemgetter(\"standalone_question\") | retriever,\n",
|
||||
" \"question\": lambda x: x[\"standalone_question\"]\n",
|
||||
"}\n",
|
||||
"# Now we construct the inputs for the final prompt\n",
|
||||
"final_inputs = {\n",
|
||||
" \"context\": lambda x: _combine_documents(x[\"docs\"]),\n",
|
||||
" \"question\": itemgetter(\"question\")\n",
|
||||
"}\n",
|
||||
"# And finally, we do the part that returns the answers\n",
|
||||
"answer = {\n",
|
||||
" \"answer\": final_inputs | ANSWER_PROMPT | ChatOpenAI(),\n",
|
||||
" \"docs\": itemgetter(\"docs\"),\n",
|
||||
"}\n",
|
||||
"# And now we put it all together!\n",
|
||||
"final_chain = loaded_memory | expanded_memory | standalone_question | retrieved_documents | answer"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"id": "806e390c",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'answer': AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False),\n",
|
||||
" 'docs': [Document(page_content='harrison worked at kensho', metadata={})]}"
|
||||
]
|
||||
},
|
||||
"execution_count": 19,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"inputs = {\"question\": \"where did harrison work?\"}\n",
|
||||
"result = final_chain.invoke(inputs)\n",
|
||||
"result"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 20,
|
||||
"id": "977399fd",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Note that the memory does not save automatically\n",
|
||||
"# This will be improved in the future\n",
|
||||
"# For now you need to save it yourself\n",
|
||||
"memory.save_context(inputs, {\"answer\": result[\"answer\"].content})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"id": "f94f7de4",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'history': [HumanMessage(content='where did harrison work?', additional_kwargs={}, example=False),\n",
|
||||
" AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False)]}"
|
||||
]
|
||||
},
|
||||
"execution_count": 21,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"memory.load_memory_variables({})"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "poetry-venv",
|
||||
"language": "python",
|
||||
"name": "poetry-venv"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
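Continuing the conversation repeats the same invoke-then-save pattern; with the first exchange stored, the condensed standalone question can resolve pronouns. A sketch continuing the example above (the follow-up question is illustrative):

```python
# Follow-up turn: "he" is resolved via the condensed standalone question.
inputs = {"question": "what company did he work for?"}
result = final_chain.invoke(inputs)
# Again, the exchange has to be saved manually for now.
memory.save_context(inputs, {"answer": result["answer"].content})
```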
227
docs/extras/expression_language/cookbook/sql_db.ipynb
Normal file
@@ -0,0 +1,227 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "c14da114-1a4a-487d-9cff-e0e8c30ba366",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_position: 3\n",
|
||||
"title: Querying a SQL DB\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "506e9636",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can replicate our SQLDatabaseChain with Runnables."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "7a927516",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"\n",
|
||||
"template = \"\"\"Based on the table schema below, write a SQL query that would answer the user's question:\n",
|
||||
"{schema}\n",
|
||||
"\n",
|
||||
"Question: {question}\n",
|
||||
"SQL Query:\"\"\"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(template)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "3f51f386",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.utilities import SQLDatabase"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "7c3449d6-684b-416e-ba16-90a035835a88",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We'll need the Chinook sample DB for this example. There's many places to download it from, e.g. https://database.guide/2-sample-databases-sqlite/"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 20,
|
||||
"id": "2ccca6fc",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"db = SQLDatabase.from_uri(\"sqlite:///./Chinook.db\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"id": "05ba88ee",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def get_schema(_):\n",
|
||||
" return db.get_table_info()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 22,
|
||||
"id": "a4eda902",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def run_query(query):\n",
|
||||
" return db.run(query)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 23,
|
||||
"id": "5046cb17",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from operator import itemgetter\n",
|
||||
"\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.schema.output_parser import StrOutputParser\n",
|
||||
"from langchain.schema.runnable import RunnableLambda, RunnableMap\n",
|
||||
"\n",
|
||||
"model = ChatOpenAI()\n",
|
||||
"\n",
|
||||
"inputs = {\n",
|
||||
" \"schema\": RunnableLambda(get_schema),\n",
|
||||
" \"question\": itemgetter(\"question\")\n",
|
||||
"}\n",
|
||||
"sql_response = (\n",
|
||||
" RunnableMap(inputs)\n",
|
||||
" | prompt\n",
|
||||
" | model.bind(stop=[\"\\nSQLResult:\"])\n",
|
||||
" | StrOutputParser()\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 24,
|
||||
"id": "a5552039",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'SELECT COUNT(*) FROM Employee'"
|
||||
]
|
||||
},
|
||||
"execution_count": 24,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"sql_response.invoke({\"question\": \"How many employees are there?\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 25,
|
||||
"id": "d6fee130",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"template = \"\"\"Based on the table schema below, question, sql query, and sql response, write a natural language response:\n",
|
||||
"{schema}\n",
|
||||
"\n",
|
||||
"Question: {question}\n",
|
||||
"SQL Query: {query}\n",
|
||||
"SQL Response: {response}\"\"\"\n",
|
||||
"prompt_response = ChatPromptTemplate.from_template(template)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 26,
|
||||
"id": "923aa634",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"full_chain = (\n",
|
||||
" RunnableMap({\n",
|
||||
" \"question\": itemgetter(\"question\"),\n",
|
||||
" \"query\": sql_response,\n",
|
||||
" }) \n",
|
||||
" | {\n",
|
||||
" \"schema\": RunnableLambda(get_schema),\n",
|
||||
" \"question\": itemgetter(\"question\"),\n",
|
||||
" \"query\": itemgetter(\"query\"),\n",
|
||||
" \"response\": lambda x: db.run(x[\"query\"]) \n",
|
||||
" } \n",
|
||||
" | prompt_response \n",
|
||||
" | model\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 27,
|
||||
"id": "e94963d8",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='There are 8 employees.', additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 27,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"full_chain.invoke({\"question\": \"How many employees are there?\"})"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
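Since this chain executes model-generated SQL directly against the database, some guard around `db.run` is prudent in practice. A minimal defensive sketch of the `run_query` helper above (the guard condition is illustrative, not exhaustive):

```python
# Reject anything that isn't a plain SELECT before it reaches the DB.
def run_query_safe(query: str) -> str:
    if not query.strip().lower().startswith("select"):
        raise ValueError("Refusing to run a non-SELECT statement")
    return db.run(query)
```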
122
docs/extras/expression_language/cookbook/tools.ipynb
Normal file
@@ -0,0 +1,122 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "29781123",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Using tools\n",
|
||||
"\n",
|
||||
"You can use any Tools with Runnables easily."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "a5c579dd-2e22-41b0-a789-346dfdecb5a2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"!pip install duckduckgo-search"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "9232d2a9",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"from langchain.schema.output_parser import StrOutputParser\n",
|
||||
"from langchain.tools import DuckDuckGoSearchRun"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "a0c64d2c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"search = DuckDuckGoSearchRun()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "391969b6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"template = \"\"\"turn the following user input into a search query for a search engine:\n",
|
||||
"\n",
|
||||
"{input}\"\"\"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(template)\n",
|
||||
"\n",
|
||||
"model = ChatOpenAI()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "e3d9d20d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = prompt | model | StrOutputParser() | search"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "55f2967d",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'What sports games are on TV today & tonight? Watch and stream live sports on TV today, tonight, tomorrow. Today\\'s 2023 sports TV schedule includes football, basketball, baseball, hockey, motorsports, soccer and more. Watch on TV or stream online on ESPN, FOX, FS1, CBS, NBC, ABC, Peacock, Paramount+, fuboTV, local channels and many other networks. MLB Games Tonight: How to Watch on TV, Streaming & Odds - Thursday, September 7. Seattle Mariners\\' Julio Rodriguez greets teammates in the dugout after scoring against the Oakland Athletics in a ... Circle - Country Music and Lifestyle. Live coverage of all the MLB action today is available to you, with the information provided below. The Brewers will look to pick up a road win at PNC Park against the Pirates on Wednesday at 12:35 PM ET. Check out the latest odds and with BetMGM Sportsbook. Use bonus code \"GNPLAY\" for special offers! MLB Games Tonight: How to Watch on TV, Streaming & Odds - Tuesday, September 5. Houston Astros\\' Kyle Tucker runs after hitting a double during the fourth inning of a baseball game against the Los Angeles Angels, Sunday, Aug. 13, 2023, in Houston. (AP Photo/Eric Christian Smith) (APMedia) The Houston Astros versus the Texas Rangers is one of ... The second half of tonight\\'s college football schedule still has some good games remaining to watch on your television.. We\\'ve already seen an exciting one when Colorado upset TCU. And we saw some ...'"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"input\": \"I'd like to figure out what games are tonight\"})"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "poetry-venv",
|
||||
"language": "python",
|
||||
"name": "poetry-venv"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
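Because the tool's output is just a string, it can be piped onward like any other Runnable output, for example into a second prompt that summarizes the results. A sketch assuming the `chain`, `model`, `ChatPromptTemplate`, and `StrOutputParser` from above:

```python
from langchain.schema.runnable import RunnablePassthrough

# Sketch: feed the raw search results into a second prompt for a summary.
summarize_prompt = ChatPromptTemplate.from_template(
    "Summarize the following search results in one sentence:\n\n{results}"
)
summary_chain = (
    chain
    | {"results": RunnablePassthrough()}
    | summarize_prompt
    | model
    | StrOutputParser()
)
```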
2
docs/extras/expression_language/how_to/_category_.yml
Normal file
@@ -0,0 +1,2 @@
|
||||
label: 'How to'
|
||||
position: 1
|
||||
158
docs/extras/expression_language/how_to/functions.ipynb
Normal file
@@ -0,0 +1,158 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "fbc4bf6e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Run arbitrary functions\n",
|
||||
"\n",
|
||||
"You can use arbitrary functions in the pipeline\n",
|
||||
"\n",
|
||||
"Note that all inputs to these functions need to be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single input and unpacks it into multiple argument."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 77,
|
||||
"id": "6bb221b3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.schema.runnable import RunnableLambda\n",
|
||||
"\n",
|
||||
"def length_function(text):\n",
|
||||
" return len(text)\n",
|
||||
"\n",
|
||||
"def _multiple_length_function(text1, text2):\n",
|
||||
" return len(text1) * len(text2)\n",
|
||||
"\n",
|
||||
"def multiple_length_function(_dict):\n",
|
||||
" return _multiple_length_function(_dict[\"text1\"], _dict[\"text2\"])\n",
|
||||
"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(\"what is {a} + {b}\")\n",
|
||||
"\n",
|
||||
"chain1 = prompt | model\n",
|
||||
"\n",
|
||||
"chain = {\n",
|
||||
" \"a\": itemgetter(\"foo\") | RunnableLambda(length_function),\n",
|
||||
" \"b\": {\"text1\": itemgetter(\"foo\"), \"text2\": itemgetter(\"bar\")} | RunnableLambda(multiple_length_function)\n",
|
||||
"} | prompt | model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 78,
|
||||
"id": "5488ec85",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='3 + 9 equals 12.', additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 78,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"foo\": \"bar\", \"bar\": \"gah\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4728ddd9-914d-42ce-ae9b-72c9ce8ec940",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Accepting a Runnable Config\n",
|
||||
"\n",
|
||||
"Runnable lambdas can optionally accept a [RunnableConfig](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.config.RunnableConfig.html?highlight=runnableconfig#langchain.schema.runnable.config.RunnableConfig), which they can use to pass callbacks, tags, and other configuration information to nested runs."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 139,
|
||||
"id": "80b3b5f6-5d58-44b9-807e-cce9a46bf49f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.schema.runnable import RunnableConfig"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 149,
|
||||
"id": "ff0daf0c-49dd-4d21-9772-e5fa133c5f36",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"\n",
|
||||
"def parse_or_fix(text: str, config: RunnableConfig):\n",
|
||||
" fixing_chain = (\n",
|
||||
" ChatPromptTemplate.from_template(\n",
|
||||
" \"Fix the following text:\\n\\n```text\\n{input}\\n```\\nError: {error}\"\n",
|
||||
" \" Don't narrate, just respond with the fixed data.\"\n",
|
||||
" )\n",
|
||||
" | ChatOpenAI()\n",
|
||||
" | StrOutputParser()\n",
|
||||
" )\n",
|
||||
" for _ in range(3):\n",
|
||||
" try:\n",
|
||||
" return json.loads(text)\n",
|
||||
" except Exception as e:\n",
|
||||
" text = fixing_chain.invoke({\"input\": text, \"error\": e}, config)\n",
|
||||
" return \"Failed to parse\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 152,
|
||||
"id": "1a5e709e-9d75-48c7-bb9c-503251990505",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Tokens Used: 65\n",
|
||||
"\tPrompt Tokens: 56\n",
|
||||
"\tCompletion Tokens: 9\n",
|
||||
"Successful Requests: 1\n",
|
||||
"Total Cost (USD): $0.00010200000000000001\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.callbacks import get_openai_callback\n",
|
||||
"\n",
|
||||
"with get_openai_callback() as cb:\n",
|
||||
" RunnableLambda(parse_or_fix).invoke(\"{foo: bar}\", {\"tags\": [\"my-tag\"], \"callbacks\": [cb]})\n",
|
||||
" print(cb)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -1,12 +1,21 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "366a0e68-fd67-4fe5-a292-5c33733339ea",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_position: 0\n",
|
||||
"title: Interface\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9a9acd2e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Interface\n",
|
||||
"\n",
|
||||
"In an effort to make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.Runnable.html#langchain.schema.runnable.Runnable) protocol that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains as well as making it possible to invoke them in a standard way. The standard interface exposed includes:\n",
|
||||
"\n",
|
||||
"- `stream`: stream back chunks of the response\n",
|
||||
@@ -62,7 +71,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"execution_count": 3,
|
||||
"id": "d1850a1f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -72,7 +81,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"execution_count": 4,
|
||||
"id": "56d0669f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -170,6 +179,36 @@
|
||||
"chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2434ab15",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can set the number of concurrent requests by using the `max_concurrency` parameter"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "a08522f6",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False),\n",
|
||||
" AIMessage(content=\"Why don't cats play poker in the wild?\\n\\nToo many cheetahs!\", additional_kwargs={}, example=False)]"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}], config={\"max_concurrency\": 5})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b960cbfe",
|
||||
@@ -399,7 +438,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.1"
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
@@ -2,7 +2,7 @@
|
||||
|
||||
If you're building with LLMs, at some point something will break, and you'll need to debug. A model call will fail, or the model output will be misformatted, or there will be some nested model calls and it won't be clear where along the way an incorrect output was created.
|
||||
|
||||
Here's a few different tools and functionalities to aid in debugging.
|
||||
Here are a few different tools and functionalities to aid in debugging.
|
||||
|
||||
|
||||
|
||||
@@ -18,9 +18,9 @@ For anyone building production-grade LLM applications, we highly recommend using
|
||||
|
||||
If you're prototyping in Jupyter Notebooks or running Python scripts, it can be helpful to print out the intermediate steps of a Chain run.
|
||||
|
||||
There's a number of ways to enable printing at varying degrees of verbosity.
|
||||
There are a number of ways to enable printing at varying degrees of verbosity.
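As a quick preview of the most verbose option, you can flip the global debug flag (a minimal sketch; each option is covered in detail below):

```python
import langchain

# Print the inputs and outputs of every component, in full detail
langchain.debug = True
```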
|
||||
|
||||
Let's suppose we have a simple agent and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:
|
||||
Let's suppose we have a simple agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:
|
||||
|
||||
|
||||
```python
|
||||
|
||||
@@ -14,7 +14,7 @@ It also contains instructions for how to deploy this app on the Streamlit platfo
|
||||
|
||||
## [Gradio (on Hugging Face)](https://github.com/hwchase17/langchain-gradio-template)
|
||||
|
||||
This repo serves as a template for how deploy a LangChain with Gradio.
|
||||
This repo serves as a template for how to deploy a LangChain with Gradio.
|
||||
It implements a chatbot interface, with a "Bring-Your-Own-Token" approach (nice for not wracking up big bills).
|
||||
It also contains instructions for how to deploy this app on the Hugging Face platform.
|
||||
This is heavily influenced by James Weaver's [excellent examples](https://huggingface.co/JavaFXpert).
|
||||
@@ -27,7 +27,7 @@ Chainlit [doc](https://docs.chainlit.io/langchain) on the integration with LangC
|
||||
|
||||
## [Beam](https://github.com/slai-labs/get-beam/tree/main/examples/langchain-question-answering)
|
||||
|
||||
This repo serves as a template for how deploy a LangChain with [Beam](https://beam.cloud).
|
||||
This repo serves as a template for how to deploy a LangChain with [Beam](https://beam.cloud).
|
||||
|
||||
It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API.
|
||||
|
||||
@@ -49,7 +49,7 @@ A minimal example of how to deploy LangChain to [Fly.io](https://fly.io/) using
|
||||
|
||||
## [Digitalocean App Platform](https://github.com/homanp/digitalocean-langchain)
|
||||
|
||||
A minimal example on how to deploy LangChain to DigitalOcean App Platform.
|
||||
A minimal example of how to deploy LangChain to DigitalOcean App Platform.
|
||||
|
||||
## [CI/CD Google Cloud Build + Dockerfile + Serverless Google Cloud Run](https://github.com/g-emarco/github-assistant)
|
||||
|
||||
@@ -57,7 +57,7 @@ Boilerplate LangChain project on how to deploy to Google Cloud Run using Docker
|
||||
|
||||
## [Google Cloud Run](https://github.com/homanp/gcp-langchain)
|
||||
|
||||
A minimal example on how to deploy LangChain to Google Cloud Run.
|
||||
A minimal example of how to deploy LangChain to Google Cloud Run.
|
||||
|
||||
## [SteamShip](https://github.com/steamship-core/steamship-langchain/)
|
||||
|
||||
@@ -82,4 +82,4 @@ These templates serve as examples of how to build, deploy, and share LangChain a
|
||||
|
||||
## [AzureML Online Endpoint](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/llm/langchain/1_langchain_basic_deploy.ipynb)
|
||||
|
||||
A minimal example of how to deploy LangChain to an Azure Machine Learning Online Endpoint.
|
||||
A minimal example of how to deploy LangChain to an Azure Machine Learning Online Endpoint.
|
||||
|
||||
File diff suppressed because it is too large
@@ -5,7 +5,7 @@
|
||||
"id": "b8982428",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Private, local, open source LLMs\n",
|
||||
"# Run LLMs locally\n",
|
||||
"\n",
|
||||
"## Use case\n",
|
||||
"\n",
|
||||
@@ -146,7 +146,7 @@
|
||||
"source": [
|
||||
"## Environment\n",
|
||||
"\n",
|
||||
"Inference speed is a chllenge when running models locally (see above).\n",
|
||||
"Inference speed is a challenge when running models locally (see above).\n",
|
||||
"\n",
|
||||
"To minimize latency, it is desiable to run models locally on GPU, which ships with many consumer laptops [e.g., Apple devices](https://www.apple.com/newsroom/2022/06/apple-unveils-m2-with-breakthrough-performance-and-capabilities/).\n",
|
||||
"\n",
|
||||
@@ -264,88 +264,19 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pip install llama-cpp-python"
|
||||
"CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dirclear"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 43,
|
||||
"id": "9d5f94b5",
|
||||
"execution_count": null,
|
||||
"id": "a88bf0c8-e989-4bcd-bcb7-4d7757e684f2",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"objc[10142]: Class GGMLMetalClass is implemented in both /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libreplit-mainline-metal.dylib (0x2a0c4c208) and /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/libllama.dylib (0x2c28bc208). One of the two will be used. Which one is undefined.\n",
|
||||
"llama.cpp: loading model from /Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin\n",
|
||||
"llama_model_load_internal: format = ggjt v3 (latest)\n",
|
||||
"llama_model_load_internal: n_vocab = 32000\n",
|
||||
"llama_model_load_internal: n_ctx = 2048\n",
|
||||
"llama_model_load_internal: n_embd = 5120\n",
|
||||
"llama_model_load_internal: n_mult = 256\n",
|
||||
"llama_model_load_internal: n_head = 40\n",
|
||||
"llama_model_load_internal: n_layer = 40\n",
|
||||
"llama_model_load_internal: n_rot = 128\n",
|
||||
"llama_model_load_internal: freq_base = 10000.0\n",
|
||||
"llama_model_load_internal: freq_scale = 1\n",
|
||||
"llama_model_load_internal: ftype = 2 (mostly Q4_0)\n",
|
||||
"llama_model_load_internal: n_ff = 13824\n",
|
||||
"llama_model_load_internal: model size = 13B\n",
|
||||
"llama_model_load_internal: ggml ctx size = 0.09 MB\n",
|
||||
"llama_model_load_internal: mem required = 8953.71 MB (+ 1608.00 MB per state)\n",
|
||||
"llama_new_context_with_model: kv self size = 1600.00 MB\n",
|
||||
"ggml_metal_init: allocating\n",
|
||||
"ggml_metal_init: using MPS\n",
|
||||
"ggml_metal_init: loading '/Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/ggml-metal.metal'\n",
|
||||
"ggml_metal_init: loaded kernel_add 0x47774af60\n",
|
||||
"ggml_metal_init: loaded kernel_mul 0x47774bc00\n",
|
||||
"ggml_metal_init: loaded kernel_mul_row 0x47774c230\n",
|
||||
"ggml_metal_init: loaded kernel_scale 0x47774c890\n",
|
||||
"ggml_metal_init: loaded kernel_silu 0x47774cef0\n",
|
||||
"ggml_metal_init: loaded kernel_relu 0x10e33e500\n",
|
||||
"ggml_metal_init: loaded kernel_gelu 0x47774b2f0\n",
|
||||
"ggml_metal_init: loaded kernel_soft_max 0x47771a580\n",
|
||||
"ggml_metal_init: loaded kernel_diag_mask_inf 0x47774dab0\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_f16 0x47774e110\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q4_0 0x47774e7d0\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q4_1 0x13efd7170\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q2_K 0x13efd73d0\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q3_K 0x13efd7630\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q4_K 0x13efd7890\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q5_K 0x4744c9740\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q6_K 0x4744ca6b0\n",
|
||||
"ggml_metal_init: loaded kernel_rms_norm 0x4744cb250\n",
|
||||
"ggml_metal_init: loaded kernel_norm 0x4744cb970\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x10e33f700\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x10e33fcd0\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x4744cc2d0\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q2_K_f32 0x4744cc6f0\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q3_K_f32 0x4744cd6b0\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q4_K_f32 0x4744cde20\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q5_K_f32 0x10e33ff30\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q6_K_f32 0x10e340190\n",
|
||||
"ggml_metal_init: loaded kernel_rope 0x10e3403f0\n",
|
||||
"ggml_metal_init: loaded kernel_alibi_f32 0x10e340de0\n",
|
||||
"ggml_metal_init: loaded kernel_cpy_f32_f16 0x10e3416d0\n",
|
||||
"ggml_metal_init: loaded kernel_cpy_f32_f32 0x10e342080\n",
|
||||
"ggml_metal_init: loaded kernel_cpy_f16_f16 0x10e342ca0\n",
|
||||
"ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB\n",
|
||||
"ggml_metal_init: hasUnifiedMemory = true\n",
|
||||
"ggml_metal_init: maxTransferRate = built-in GPU\n",
|
||||
"ggml_metal_add_buffer: allocated 'data ' buffer, size = 6984.06 MB, ( 6986.19 / 21845.34)\n",
|
||||
"ggml_metal_add_buffer: allocated 'eval ' buffer, size = 1032.00 MB, ( 8018.19 / 21845.34)\n",
|
||||
"ggml_metal_add_buffer: allocated 'kv ' buffer, size = 1602.00 MB, ( 9620.19 / 21845.34)\n",
|
||||
"ggml_metal_add_buffer: allocated 'scr0 ' buffer, size = 426.00 MB, (10046.19 / 21845.34)\n",
|
||||
"ggml_metal_add_buffer: allocated 'scr1 ' buffer, size = 512.00 MB, (10558.19 / 21845.34)\n",
|
||||
"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | \n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.llms import LlamaCpp\n",
|
||||
"llm = LlamaCpp(\n",
|
||||
" model_path=\"/Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin\",\n",
|
||||
" model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin\",\n",
|
||||
" n_gpu_layers=1,\n",
|
||||
" n_batch=512,\n",
|
||||
" n_ctx=2048,\n",
|
||||
@@ -448,87 +379,10 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 46,
|
||||
"id": "b55a2147",
|
||||
"execution_count": null,
|
||||
"id": "915ecd4c-8f6b-4de3-a787-b64cb7c682b4",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Found model file at /Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\n",
|
||||
"llama_new_context_with_model: max tensor size = 87.89 MB\n",
|
||||
"llama_new_context_with_model: max tensor size = 87.89 MB\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"llama.cpp: using Metal\n",
|
||||
"llama.cpp: loading model from /Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\n",
|
||||
"llama_model_load_internal: format = ggjt v3 (latest)\n",
|
||||
"llama_model_load_internal: n_vocab = 32001\n",
|
||||
"llama_model_load_internal: n_ctx = 2048\n",
|
||||
"llama_model_load_internal: n_embd = 5120\n",
|
||||
"llama_model_load_internal: n_mult = 256\n",
|
||||
"llama_model_load_internal: n_head = 40\n",
|
||||
"llama_model_load_internal: n_layer = 40\n",
|
||||
"llama_model_load_internal: n_rot = 128\n",
|
||||
"llama_model_load_internal: ftype = 2 (mostly Q4_0)\n",
|
||||
"llama_model_load_internal: n_ff = 13824\n",
|
||||
"llama_model_load_internal: n_parts = 1\n",
|
||||
"llama_model_load_internal: model size = 13B\n",
|
||||
"llama_model_load_internal: ggml ctx size = 0.09 MB\n",
|
||||
"llama_model_load_internal: mem required = 9031.71 MB (+ 1608.00 MB per state)\n",
|
||||
"llama_new_context_with_model: kv self size = 1600.00 MB\n",
|
||||
"ggml_metal_init: allocating\n",
|
||||
"ggml_metal_init: using MPS\n",
|
||||
"ggml_metal_init: loading '/Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/ggml-metal.metal'\n",
|
||||
"ggml_metal_init: loaded kernel_add 0x37944d850\n",
|
||||
"ggml_metal_init: loaded kernel_mul 0x37944f350\n",
|
||||
"ggml_metal_init: loaded kernel_mul_row 0x37944fdd0\n",
|
||||
"ggml_metal_init: loaded kernel_scale 0x3794505a0\n",
|
||||
"ggml_metal_init: loaded kernel_silu 0x379450800\n",
|
||||
"ggml_metal_init: loaded kernel_relu 0x379450a60\n",
|
||||
"ggml_metal_init: loaded kernel_gelu 0x379450cc0\n",
|
||||
"ggml_metal_init: loaded kernel_soft_max 0x379450ff0\n",
|
||||
"ggml_metal_init: loaded kernel_diag_mask_inf 0x379451250\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_f16 0x3794514b0\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q4_0 0x379451710\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q4_1 0x379451970\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q2_k 0x379451bd0\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q3_k 0x379451e30\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q4_k 0x379452090\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q5_k 0x3794522f0\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q6_k 0x379452550\n",
|
||||
"ggml_metal_init: loaded kernel_rms_norm 0x3794527b0\n",
|
||||
"ggml_metal_init: loaded kernel_norm 0x379452a10\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x379452c70\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x379452ed0\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x379453130\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q2_k_f32 0x379453390\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q3_k_f32 0x3794535f0\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q4_k_f32 0x379453850\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q5_k_f32 0x379453ab0\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q6_k_f32 0x379453d10\n",
|
||||
"ggml_metal_init: loaded kernel_rope 0x379453f70\n",
|
||||
"ggml_metal_init: loaded kernel_alibi_f32 0x3794541d0\n",
|
||||
"ggml_metal_init: loaded kernel_cpy_f32_f16 0x379454430\n",
|
||||
"ggml_metal_init: loaded kernel_cpy_f32_f32 0x379454690\n",
|
||||
"ggml_metal_init: loaded kernel_cpy_f16_f16 0x3794548f0\n",
|
||||
"ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB\n",
|
||||
"ggml_metal_init: hasUnifiedMemory = true\n",
|
||||
"ggml_metal_init: maxTransferRate = built-in GPU\n",
|
||||
"ggml_metal_add_buffer: allocated 'data ' buffer, size = 6984.06 MB, (17542.94 / 21845.34)\n",
|
||||
"ggml_metal_add_buffer: allocated 'eval ' buffer, size = 1024.00 MB, (18566.94 / 21845.34)\n",
|
||||
"ggml_metal_add_buffer: allocated 'kv ' buffer, size = 1602.00 MB, (20168.94 / 21845.34)\n",
|
||||
"ggml_metal_add_buffer: allocated 'scr0 ' buffer, size = 512.00 MB, (20680.94 / 21845.34)\n",
|
||||
"ggml_metal_add_buffer: allocated 'scr1 ' buffer, size = 512.00 MB, (21192.94 / 21845.34)\n",
|
||||
"ggml_metal_free: deallocating\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.llms import GPT4All\n",
|
||||
"llm = GPT4All(model=\"/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\")"
|
||||
@@ -564,89 +418,21 @@
|
||||
"\n",
|
||||
"Some LLMs will benefit from specific prompts.\n",
|
||||
"\n",
|
||||
"For example, llama2 can use [special tokens](https://twitter.com/RLanceMartin/status/1681879318493003776?s=20).\n",
|
||||
"For example, LLaMA will use [special tokens](https://twitter.com/RLanceMartin/status/1681879318493003776?s=20).\n",
|
||||
"\n",
|
||||
"We can use `ConditionalPromptSelector` to set prompt based on the model type."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 57,
|
||||
"id": "d082b10a",
|
||||
"execution_count": null,
|
||||
"id": "16759b7c-7903-4269-b7b4-f83b313d8091",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"llama.cpp: loading model from /Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin\n",
|
||||
"llama_model_load_internal: format = ggjt v3 (latest)\n",
|
||||
"llama_model_load_internal: n_vocab = 32000\n",
|
||||
"llama_model_load_internal: n_ctx = 2048\n",
|
||||
"llama_model_load_internal: n_embd = 5120\n",
|
||||
"llama_model_load_internal: n_mult = 256\n",
|
||||
"llama_model_load_internal: n_head = 40\n",
|
||||
"llama_model_load_internal: n_layer = 40\n",
|
||||
"llama_model_load_internal: n_rot = 128\n",
|
||||
"llama_model_load_internal: freq_base = 10000.0\n",
|
||||
"llama_model_load_internal: freq_scale = 1\n",
|
||||
"llama_model_load_internal: ftype = 2 (mostly Q4_0)\n",
|
||||
"llama_model_load_internal: n_ff = 13824\n",
|
||||
"llama_model_load_internal: model size = 13B\n",
|
||||
"llama_model_load_internal: ggml ctx size = 0.09 MB\n",
|
||||
"llama_model_load_internal: mem required = 8953.71 MB (+ 1608.00 MB per state)\n",
|
||||
"llama_new_context_with_model: kv self size = 1600.00 MB\n",
|
||||
"ggml_metal_init: allocating\n",
|
||||
"ggml_metal_init: using MPS\n",
|
||||
"ggml_metal_init: loading '/Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages/llama_cpp/ggml-metal.metal'\n",
|
||||
"ggml_metal_init: loaded kernel_add 0x4744d09d0\n",
|
||||
"ggml_metal_init: loaded kernel_mul 0x3781cb3d0\n",
|
||||
"ggml_metal_init: loaded kernel_mul_row 0x37813bb60\n",
|
||||
"ggml_metal_init: loaded kernel_scale 0x474481080\n",
|
||||
"ggml_metal_init: loaded kernel_silu 0x4744d29f0\n",
|
||||
"ggml_metal_init: loaded kernel_relu 0x3781254c0\n",
|
||||
"ggml_metal_init: loaded kernel_gelu 0x47447f280\n",
|
||||
"ggml_metal_init: loaded kernel_soft_max 0x4744cf470\n",
|
||||
"ggml_metal_init: loaded kernel_diag_mask_inf 0x4744cf6d0\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_f16 0x4744cf930\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q4_0 0x4744cfb90\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q4_1 0x4744cfdf0\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q2_K 0x4744d0050\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q3_K 0x4744ce980\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q4_K 0x4744cebe0\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q5_K 0x4744cee40\n",
|
||||
"ggml_metal_init: loaded kernel_get_rows_q6_K 0x4744cf0a0\n",
|
||||
"ggml_metal_init: loaded kernel_rms_norm 0x474482450\n",
|
||||
"ggml_metal_init: loaded kernel_norm 0x4744826b0\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x474482910\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x474482b70\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x474482dd0\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q2_K_f32 0x474483030\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q3_K_f32 0x474483290\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q4_K_f32 0x4744834f0\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q5_K_f32 0x474483750\n",
|
||||
"ggml_metal_init: loaded kernel_mul_mat_q6_K_f32 0x4744839b0\n",
|
||||
"ggml_metal_init: loaded kernel_rope 0x474483c10\n",
|
||||
"ggml_metal_init: loaded kernel_alibi_f32 0x474483e70\n",
|
||||
"ggml_metal_init: loaded kernel_cpy_f32_f16 0x4744840d0\n",
|
||||
"ggml_metal_init: loaded kernel_cpy_f32_f32 0x474484330\n",
|
||||
"ggml_metal_init: loaded kernel_cpy_f16_f16 0x474484590\n",
|
||||
"ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB\n",
|
||||
"ggml_metal_init: hasUnifiedMemory = true\n",
|
||||
"ggml_metal_init: maxTransferRate = built-in GPU\n",
|
||||
"ggml_metal_add_buffer: allocated 'data ' buffer, size = 6984.06 MB, ( 6986.94 / 21845.34)\n",
|
||||
"ggml_metal_add_buffer: allocated 'eval ' buffer, size = 1032.00 MB, ( 8018.94 / 21845.34)\n",
|
||||
"ggml_metal_add_buffer: allocated 'kv ' buffer, size = 1602.00 MB, ( 9620.94 / 21845.34)\n",
|
||||
"ggml_metal_add_buffer: allocated 'scr0 ' buffer, size = 426.00 MB, (10046.94 / 21845.34)\n",
|
||||
"ggml_metal_add_buffer: allocated 'scr1 ' buffer, size = 512.00 MB, (10558.94 / 21845.34)\n",
|
||||
"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | \n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set our LLM\n",
|
||||
"llm = LlamaCpp(\n",
|
||||
" model_path=\"/Users/rlm/Desktop/Code/llama.cpp/llama-2-13b-chat.ggmlv3.q4_0.bin\",\n",
|
||||
" model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin\",\n",
|
||||
" n_gpu_layers=1,\n",
|
||||
" n_batch=512,\n",
|
||||
" n_ctx=2048,\n",
|
||||
@@ -661,7 +447,7 @@
|
||||
"id": "66656084",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Set the associated prompt."
|
||||
"Set the associated prompt based upon the model version."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -759,6 +545,18 @@
|
||||
"llm_chain.run({\"question\":question})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6e0d37e7-f1d9-4848-bf2c-c22392ee141f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We also can use the LangChain Prompt Hub to fetch and / or store prompts that are model specific.\n",
|
||||
"\n",
|
||||
"This will work with your [LangSmith API key](https://docs.smith.langchain.com/).\n",
|
||||
"\n",
|
||||
"For example, [here](https://smith.langchain.com/hub/rlm/rag-prompt-llama) is a prompt for RAG with LLaMA-specific tokens."
|
||||
]
|
||||
},
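{
"cell_type": "markdown",
"id": "prompt-hub-sketch",
"metadata": {},
"source": [
"As a minimal sketch (assuming the `langchainhub` package is installed and a LangSmith API key is configured), the prompt above can be pulled like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "prompt-hub-sketch-code",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"\n",
"# Pull the LLaMA-specific RAG prompt referenced above from the public hub\n",
"rag_prompt_llama = hub.pull(\"rlm/rag-prompt-llama\")"
]
},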
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6ba66260",
|
||||
@@ -770,16 +568,12 @@
|
||||
"\n",
|
||||
"For example, here is a guide to [RAG](docs/use_cases/question_answering/how_to/local_retrieval_qa) with local LLMs.\n",
|
||||
"\n",
|
||||
"In general, use cases for local model can be driven by at least two factors:\n",
|
||||
"In general, use cases for local LLMs can be driven by at least two factors:\n",
|
||||
"\n",
|
||||
"* `Privacy`: private data (e.g., journals, etc) that a user does not want to share \n",
|
||||
"* `Cost`: text preprocessing (extraction/tagging), summarization, and agent simulations are token-use-intensive tasks\n",
|
||||
"\n",
|
||||
"There are a few approach to support specific use-cases: \n",
|
||||
"\n",
|
||||
"* Fine-tuning (e.g., [gpt-llm-trainer](https://github.com/mshumer/gpt-llm-trainer), [Anyscale](https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications)) \n",
|
||||
"* [Function-calling](https://github.com/MeetKai/functionary/tree/main) for use-cases like extraction or tagging\n",
|
||||
"\n"
|
||||
"In addition, [here](https://blog.langchain.dev/using-langsmith-to-support-fine-tuning-of-open-source-llms/) is an overview on fine-tuning, which can utilize open source LLMs."
|
||||
]
|
||||
}
|
||||
],
|
||||
@@ -799,7 +593,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.16"
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
1
docs/extras/guides/privacy/_category_.yml
Normal file
@@ -0,0 +1 @@
|
||||
label: 'Privacy'
|
||||
@@ -0,0 +1,467 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Data anonymization with Microsoft Presidio\n",
|
||||
"\n",
|
||||
"[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/extras/guides/privacy/presidio_data_anonymization/index.ipynb)\n",
|
||||
"\n",
|
||||
"## Use case\n",
|
||||
"\n",
|
||||
"Data anonymization is crucial before passing information to a language model like GPT-4 because it helps protect privacy and maintain confidentiality. If data is not anonymized, sensitive information such as names, addresses, contact numbers, or other identifiers linked to specific individuals could potentially be learned and misused. Hence, by obscuring or removing this personally identifiable information (PII), data can be used freely without compromising individuals' privacy rights or breaching data protection laws and regulations.\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"\n",
|
||||
"Anonynization consists of two steps:\n",
|
||||
"\n",
|
||||
"1. **Identification:** Identify all data fields that contain personally identifiable information (PII).\n",
|
||||
"2. **Replacement**: Replace all PIIs with pseudo values or codes that do not reveal any personal information about the individual but can be used for reference. We're not using regular encryption, because the language model won't be able to understand the meaning or context of the encrypted data.\n",
|
||||
"\n",
|
||||
"We use *Microsoft Presidio* together with *Faker* framework for anonymization purposes because of the wide range of functionalities they provide. The full implementation is available in `PresidioAnonymizer`.\n",
|
||||
"\n",
|
||||
"## Quickstart\n",
|
||||
"\n",
|
||||
"Below you will find the use case on how to leverage anonymization in LangChain."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Install necessary packages\n",
|
||||
"# ! pip install langchain langchain-experimental openai presidio-analyzer presidio-anonymizer spacy Faker\n",
|
||||
"# ! python -m spacy download en_core_web_lg"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\\\n",
|
||||
"Let's see how PII anonymization works using a sample sentence:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'My name is Laura Ruiz, call me at +1-412-982-8374x13414 or email me at javierwatkins@example.net'"
|
||||
]
|
||||
},
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_experimental.data_anonymizer import PresidioAnonymizer\n",
|
||||
"\n",
|
||||
"anonymizer = PresidioAnonymizer()\n",
|
||||
"\n",
|
||||
"anonymizer.anonymize(\n",
|
||||
" \"My name is Slim Shady, call me at 313-666-7440 or email me at real.slim.shady@gmail.com\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Using with LangChain Expression Language\n",
|
||||
"\n",
|
||||
"With LCEL we can easily chain together anonymization with the rest of our application."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set env var OPENAI_API_KEY or load from a .env file:\n",
|
||||
"# import dotenv\n",
|
||||
"\n",
|
||||
"# dotenv.load_dotenv()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"text = f\"\"\"Slim Shady recently lost his wallet. \n",
|
||||
"Inside is some cash and his credit card with the number 4916 0387 9536 0861. \n",
|
||||
"If you would find it, please call at 313-666-7440 or write an email here: real.slim.shady@gmail.com.\"\"\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Dear Sir/Madam,\n",
|
||||
"\n",
|
||||
"We regret to inform you that Richard Fields has recently misplaced his wallet, which contains a sum of cash and his credit card bearing the number 30479847307774. \n",
|
||||
"\n",
|
||||
"Should you happen to come across it, we kindly request that you contact us immediately at 6439182672 or via email at frank45@example.com.\n",
|
||||
"\n",
|
||||
"Thank you for your attention to this matter.\n",
|
||||
"\n",
|
||||
"Yours faithfully,\n",
|
||||
"\n",
|
||||
"[Your Name]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.prompts.prompt import PromptTemplate\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"\n",
|
||||
"anonymizer = PresidioAnonymizer()\n",
|
||||
"\n",
|
||||
"template = \"\"\"Rewrite this text into an official, short email:\n",
|
||||
"\n",
|
||||
"{anonymized_text}\"\"\"\n",
|
||||
"prompt = PromptTemplate.from_template(template)\n",
|
||||
"llm = ChatOpenAI(temperature=0)\n",
|
||||
"\n",
|
||||
"chain = {\"anonymized_text\": anonymizer.anonymize} | prompt | llm\n",
|
||||
"response = chain.invoke(text)\n",
|
||||
"print(response.content)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Customization\n",
|
||||
"We can specify ``analyzed_fields`` to only anonymize particular types of data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'My name is Adrian Fleming, call me at 313-666-7440 or email me at real.slim.shady@gmail.com'"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"anonymizer = PresidioAnonymizer(analyzed_fields=[\"PERSON\"])\n",
|
||||
"\n",
|
||||
"anonymizer.anonymize(\n",
|
||||
" \"My name is Slim Shady, call me at 313-666-7440 or email me at real.slim.shady@gmail.com\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"As can be observed, the name was correctly identified and replaced with another. The `analyzed_fields` attribute is responsible for what values are to be detected and substituted. We can add *PHONE_NUMBER* to the list:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'My name is Justin Miller, call me at 761-824-1889 or email me at real.slim.shady@gmail.com'"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"anonymizer = PresidioAnonymizer(analyzed_fields=[\"PERSON\", \"PHONE_NUMBER\"])\n",
|
||||
"anonymizer.anonymize(\n",
|
||||
" \"My name is Slim Shady, call me at 313-666-7440 or email me at real.slim.shady@gmail.com\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\\\n",
|
||||
"If no analyzed_fields are specified, by default the anonymizer will detect all supported formats. Below is the full list of them:\n",
|
||||
"\n",
|
||||
"`['PERSON', 'EMAIL_ADDRESS', 'PHONE_NUMBER', 'IBAN_CODE', 'CREDIT_CARD', 'CRYPTO', 'IP_ADDRESS', 'LOCATION', 'DATE_TIME', 'NRP', 'MEDICAL_LICENSE', 'URL', 'US_BANK_NUMBER', 'US_DRIVER_LICENSE', 'US_ITIN', 'US_PASSPORT', 'US_SSN']`\n",
|
||||
"\n",
|
||||
"**Disclaimer:** We suggest carefully defining the private data to be detected - Presidio doesn't work perfectly and it sometimes makes mistakes, so it's better to have more control over the data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'My name is Dr. Jennifer Baker, call me at (508)839-9329x232 or email me at ehamilton@example.com'"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"anonymizer = PresidioAnonymizer()\n",
|
||||
"anonymizer.anonymize(\n",
|
||||
" \"My name is Slim Shady, call me at 313-666-7440 or email me at real.slim.shady@gmail.com\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\\\n",
|
||||
"It may be that the above list of detected fields is not sufficient. For example, the already available *PHONE_NUMBER* field does not support polish phone numbers and confuses it with another field:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'My polish phone number is NRGN41434238921378'"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"anonymizer = PresidioAnonymizer()\n",
|
||||
"anonymizer.anonymize(\"My polish phone number is 666555444\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\\\n",
|
||||
"You can then write your own recognizers and add them to the pool of those present. How exactly to create recognizers is described in the [Presidio documentation](https://microsoft.github.io/presidio/samples/python/customizing_presidio_analyzer/)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Define the regex pattern in a Presidio `Pattern` object:\n",
|
||||
"from presidio_analyzer import Pattern, PatternRecognizer\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"polish_phone_numbers_pattern = Pattern(\n",
|
||||
" name=\"polish_phone_numbers_pattern\",\n",
|
||||
" regex=\"(?<!\\w)(\\(?(\\+|00)?48\\)?)?[ -]?\\d{3}[ -]?\\d{3}[ -]?\\d{3}(?!\\w)\",\n",
|
||||
" score=1,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Define the recognizer with one or more patterns\n",
|
||||
"polish_phone_numbers_recognizer = PatternRecognizer(\n",
|
||||
" supported_entity=\"POLISH_PHONE_NUMBER\", patterns=[polish_phone_numbers_pattern]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\\\n",
|
||||
"Now, we can add recognizer by calling `add_recognizer` method on the anonymizer:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"anonymizer.add_recognizer(polish_phone_numbers_recognizer)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\\\n",
|
||||
"And voilà! With the added pattern-based recognizer, the anonymizer now handles polish phone numbers."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"My polish phone number is <POLISH_PHONE_NUMBER>\n",
|
||||
"My polish phone number is <POLISH_PHONE_NUMBER>\n",
|
||||
"My polish phone number is <POLISH_PHONE_NUMBER>\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(anonymizer.anonymize(\"My polish phone number is 666555444\"))\n",
|
||||
"print(anonymizer.anonymize(\"My polish phone number is 666 555 444\"))\n",
|
||||
"print(anonymizer.anonymize(\"My polish phone number is +48 666 555 444\"))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\\\n",
|
||||
"The problem is - even though we recognize polish phone numbers now, we don't have a method (operator) that would tell how to substitute a given field - because of this, in the outpit we only provide string `<POLISH_PHONE_NUMBER>` We need to create a method to replace it correctly: "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'511 622 683'"
|
||||
]
|
||||
},
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from faker import Faker\n",
|
||||
"\n",
|
||||
"fake = Faker(locale=\"pl_PL\")\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def fake_polish_phone_number(_=None):\n",
|
||||
" return fake.phone_number()\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"fake_polish_phone_number()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\\\n",
|
||||
"We used Faker to create pseudo data. Now we can create an operator and add it to the anonymizer. For complete information about operators and their creation, see the Presidio documentation for [simple](https://microsoft.github.io/presidio/tutorial/10_simple_anonymization/) and [custom](https://microsoft.github.io/presidio/tutorial/11_custom_anonymization/) anonymization."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from presidio_anonymizer.entities import OperatorConfig\n",
|
||||
"\n",
|
||||
"new_operators = {\n",
|
||||
" \"POLISH_PHONE_NUMBER\": OperatorConfig(\n",
|
||||
" \"custom\", {\"lambda\": fake_polish_phone_number}\n",
|
||||
" )\n",
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"anonymizer.add_operators(new_operators)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'My polish phone number is +48 734 630 977'"
|
||||
]
|
||||
},
|
||||
"execution_count": 16,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"anonymizer.anonymize(\"My polish phone number is 666555444\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Future works\n",
|
||||
"- **instance anonymization** - at this point, each occurrence of PII is treated as a separate entity and separately anonymized. Therefore, two occurrences of the name John Doe in the text will be changed to two different names. It is therefore worth introducing support for full instance detection, so that repeated occurrences are treated as a single object."
|
||||
]
|
||||
}
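,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (an illustration, not the planned implementation), such consistency can already be approximated with a caching custom operator, reusing the `fake` and `OperatorConfig` objects from the cells above - the custom lambda receives the matched text, so repeated occurrences can be memoized:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"person_cache = {}\n",
"\n",
"\n",
"def consistent_fake_person(original: str) -> str:\n",
"    # Reuse the same pseudonym for repeated occurrences of the same name\n",
"    if original not in person_cache:\n",
"        person_cache[original] = fake.name()\n",
"    return person_cache[original]\n",
"\n",
"\n",
"anonymizer.add_operators(\n",
"    {\"PERSON\": OperatorConfig(\"custom\", {\"lambda\": consistent_fake_person})}\n",
")"
]
}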
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
@@ -0,0 +1,520 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Mutli-language data anonymization with Microsoft Presidio\n",
|
||||
"\n",
|
||||
"[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/extras/guides/privacy/presidio_data_anonymization/multi_language.ipynb)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"## Use case\n",
|
||||
"\n",
|
||||
"Multi-language support in data pseudonymization is essential due to differences in language structures and cultural contexts. Different languages may have varying formats for personal identifiers. For example, the structure of names, locations and dates can differ greatly between languages and regions. Furthermore, non-alphanumeric characters, accents, and the direction of writing can impact pseudonymization processes. Without multi-language support, data could remain identifiable or be misinterpreted, compromising data privacy and accuracy. Hence, it enables effective and precise pseudonymization suited for global operations.\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"\n",
|
||||
"PII detection in Microsoft Presidio relies on several components - in addition to the usual pattern matching (e.g. using regex), the analyser uses a model for Named Entity Recognition (NER) to extract entities such as:\n",
|
||||
"- `PERSON`\n",
|
||||
"- `LOCATION`\n",
|
||||
"- `DATE_TIME`\n",
|
||||
"- `NRP`\n",
|
||||
"- `ORGANIZATION`\n",
|
||||
"\n",
|
||||
"[[Source]](https://github.com/microsoft/presidio/blob/main/presidio-analyzer/presidio_analyzer/predefined_recognizers/spacy_recognizer.py)\n",
|
||||
"\n",
|
||||
"To handle NER in specific languages, we utilize unique models from the `spaCy` library, recognized for its extensive selection covering multiple languages and sizes. However, it's not restrictive, allowing for integration of alternative frameworks such as [Stanza](https://microsoft.github.io/presidio/analyzer/nlp_engines/spacy_stanza/) or [transformers](https://microsoft.github.io/presidio/analyzer/nlp_engines/transformers/) when necessary.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"## Quickstart\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Install necessary packages\n",
|
||||
"# ! pip install langchain langchain-experimental openai presidio-analyzer presidio-anonymizer spacy Faker\n",
|
||||
"# ! python -m spacy download en_core_web_lg"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_experimental.data_anonymizer import PresidioReversibleAnonymizer\n",
|
||||
"\n",
|
||||
"anonymizer = PresidioReversibleAnonymizer(\n",
|
||||
" analyzed_fields=[\"PERSON\"],\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"By default, `PresidioAnonymizer` and `PresidioReversibleAnonymizer` use a model trained on English texts, so they handle other languages moderately well. \n",
|
||||
"\n",
|
||||
"For example, here the model did not detect the person:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Me llamo Sofía'"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"anonymizer.anonymize(\"Me llamo Sofía\") # \"My name is Sofía\" in Spanish"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"They may also take words from another language as actual entities. Here, both the word *'Yo'* (*'I'* in Spanish) and *Sofía* have been classified as `PERSON`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Bridget Kirk soy Sally Knight'"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"anonymizer.anonymize(\"Yo soy Sofía\") # \"I am Sofía\" in Spanish"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"If you want to anonymise texts from other languages, you need to download other models and add them to the anonymiser configuration:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Download the models for the languages you want to use\n",
|
||||
"# ! python -m spacy download en_core_web_md\n",
|
||||
"# ! python -m spacy download es_core_news_md"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"nlp_config = {\n",
|
||||
" \"nlp_engine_name\": \"spacy\",\n",
|
||||
" \"models\": [\n",
|
||||
" {\"lang_code\": \"en\", \"model_name\": \"en_core_web_md\"},\n",
|
||||
" {\"lang_code\": \"es\", \"model_name\": \"es_core_news_md\"},\n",
|
||||
" ],\n",
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We have therefore added a Spanish language model. Note also that we have downloaded an alternative model for English as well - in this case we have replaced the large model `en_core_web_lg` (560MB) with its smaller version `en_core_web_md` (40MB) - the size is therefore reduced by 14 times! If you care about the speed of anonymisation, it is worth considering it.\n",
|
||||
"\n",
|
||||
"All models for the different languages can be found in the [spaCy documentation](https://spacy.io/usage/models).\n",
|
||||
"\n",
|
||||
"Now pass the configuration as the `languages_config` parameter to Anonymiser. As you can see, both previous examples work flawlessly:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Me llamo Michelle Smith\n",
|
||||
"Yo soy Rachel Wright\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"anonymizer = PresidioReversibleAnonymizer(\n",
|
||||
" analyzed_fields=[\"PERSON\"],\n",
|
||||
" languages_config=nlp_config,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"print(\n",
|
||||
" anonymizer.anonymize(\"Me llamo Sofía\", language=\"es\")\n",
|
||||
") # \"My name is Sofía\" in Spanish\n",
|
||||
"print(anonymizer.anonymize(\"Yo soy Sofía\", language=\"es\")) # \"I am Sofía\" in Spanish"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"By default, the language indicated first in the configuration will be used when anonymising text (in this case English):"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"My name is Ronnie Ayala\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(anonymizer.anonymize(\"My name is John\"))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Advanced usage\n",
|
||||
"\n",
|
||||
"### Custom labels in NER model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"It may be that the spaCy model has different class names than those supported by the Microsoft Presidio by default. Take Polish, for example:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Text: Wiktoria, Start: 12, End: 20, Label: persName\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# ! python -m spacy download pl_core_news_md\n",
|
||||
"\n",
|
||||
"import spacy\n",
|
||||
"\n",
|
||||
"nlp = spacy.load(\"pl_core_news_md\")\n",
|
||||
"doc = nlp(\"Nazywam się Wiktoria\") # \"My name is Wiktoria\" in Polish\n",
|
||||
"\n",
|
||||
"for ent in doc.ents:\n",
|
||||
" print(\n",
|
||||
" f\"Text: {ent.text}, Start: {ent.start_char}, End: {ent.end_char}, Label: {ent.label_}\"\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The name *Victoria* was classified as `persName`, which does not correspond to the default class names `PERSON`/`PER` implemented in Microsoft Presidio (look for `CHECK_LABEL_GROUPS` in [SpacyRecognizer implementation](https://github.com/microsoft/presidio/blob/main/presidio-analyzer/presidio_analyzer/predefined_recognizers/spacy_recognizer.py)). \n",
|
||||
"\n",
|
||||
"You can find out more about custom labels in spaCy models (including your own, trained ones) in [this thread](https://github.com/microsoft/presidio/issues/851).\n",
|
||||
"\n",
|
||||
"That's why our sentence will not be anonymized:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Nazywam się Wiktoria\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"nlp_config = {\n",
|
||||
" \"nlp_engine_name\": \"spacy\",\n",
|
||||
" \"models\": [\n",
|
||||
" {\"lang_code\": \"en\", \"model_name\": \"en_core_web_md\"},\n",
|
||||
" {\"lang_code\": \"es\", \"model_name\": \"es_core_news_md\"},\n",
|
||||
" {\"lang_code\": \"pl\", \"model_name\": \"pl_core_news_md\"},\n",
|
||||
" ],\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"anonymizer = PresidioReversibleAnonymizer(\n",
|
||||
" analyzed_fields=[\"PERSON\", \"LOCATION\", \"DATE_TIME\"],\n",
|
||||
" languages_config=nlp_config,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"print(\n",
|
||||
" anonymizer.anonymize(\"Nazywam się Wiktoria\", language=\"pl\")\n",
|
||||
") # \"My name is Wiktoria\" in Polish"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"To address this, create your own `SpacyRecognizer` with your own class mapping and add it to the anonymizer:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from presidio_analyzer.predefined_recognizers import SpacyRecognizer\n",
|
||||
"\n",
|
||||
"polish_check_label_groups = [\n",
|
||||
" ({\"LOCATION\"}, {\"placeName\", \"geogName\"}),\n",
|
||||
" ({\"PERSON\"}, {\"persName\"}),\n",
|
||||
" ({\"DATE_TIME\"}, {\"date\", \"time\"}),\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"spacy_recognizer = SpacyRecognizer(\n",
|
||||
" supported_language=\"pl\",\n",
|
||||
" check_label_groups=polish_check_label_groups,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"anonymizer.add_recognizer(spacy_recognizer)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now everything works smoothly:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Nazywam się Morgan Walters\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(\n",
|
||||
" anonymizer.anonymize(\"Nazywam się Wiktoria\", language=\"pl\")\n",
|
||||
") # \"My name is Wiktoria\" in Polish"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Let's try on more complex example:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Nazywam się Ernest Liu. New Taylorburgh to moje miasto rodzinne. Urodziłam się 1987-01-19\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(\n",
|
||||
" anonymizer.anonymize(\n",
|
||||
" \"Nazywam się Wiktoria. Płock to moje miasto rodzinne. Urodziłam się dnia 6 kwietnia 2001 roku\",\n",
|
||||
" language=\"pl\",\n",
|
||||
" )\n",
|
||||
") # \"My name is Wiktoria. Płock is my home town. I was born on 6 April 2001\" in Polish"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"As you can see, thanks to class mapping, the anonymiser can cope with different types of entities. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Custom language-specific operators\n",
|
||||
"\n",
|
||||
"In the example above, the sentence has been anonymised correctly, but the fake data does not fit the Polish language at all. Custom operators can therefore be added, which will resolve the issue:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from faker import Faker\n",
|
||||
"from presidio_anonymizer.entities import OperatorConfig\n",
|
||||
"\n",
|
||||
"fake = Faker(locale=\"pl_PL\") # Setting faker to provide Polish data\n",
|
||||
"\n",
|
||||
"new_operators = {\n",
|
||||
" \"PERSON\": OperatorConfig(\"custom\", {\"lambda\": lambda _: fake.first_name_female()}),\n",
|
||||
" \"LOCATION\": OperatorConfig(\"custom\", {\"lambda\": lambda _: fake.city()}),\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"anonymizer.add_operators(new_operators)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Nazywam się Marianna. Szczecin to moje miasto rodzinne. Urodziłam się 1976-11-16\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(\n",
|
||||
" anonymizer.anonymize(\n",
|
||||
" \"Nazywam się Wiktoria. Płock to moje miasto rodzinne. Urodziłam się dnia 6 kwietnia 2001 roku\",\n",
|
||||
" language=\"pl\",\n",
|
||||
" )\n",
|
||||
") # \"My name is Wiktoria. Płock is my home town. I was born on 6 April 2001\" in Polish"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Limitations\n",
|
||||
"\n",
|
||||
"Remember - results are as good as your recognizers and as your NER models!\n",
|
||||
"\n",
|
||||
"Look at the example below - we downloaded the small model for Spanish (12MB) and it no longer performs as well as the medium version (40MB):"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Model: es_core_news_sm. Result: Me llamo Sofía\n",
|
||||
"Model: es_core_news_md. Result: Me llamo Lawrence Davis\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# ! python -m spacy download es_core_news_sm\n",
|
||||
"\n",
|
||||
"for model in [\"es_core_news_sm\", \"es_core_news_md\"]:\n",
|
||||
" nlp_config = {\n",
|
||||
" \"nlp_engine_name\": \"spacy\",\n",
|
||||
" \"models\": [\n",
|
||||
" {\"lang_code\": \"es\", \"model_name\": model},\n",
|
||||
" ],\n",
|
||||
" }\n",
|
||||
"\n",
|
||||
" anonymizer = PresidioReversibleAnonymizer(\n",
|
||||
" analyzed_fields=[\"PERSON\"],\n",
|
||||
" languages_config=nlp_config,\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
" print(\n",
|
||||
" f\"Model: {model}. Result: {anonymizer.anonymize('Me llamo Sofía', language='es')}\"\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"In many cases, even the larger models from spaCy will not be sufficient - there are already other, more complex and better methods of detecting named entities, based on transformers. You can read more about this [here](https://microsoft.github.io/presidio/analyzer/nlp_engines/transformers/)."
|
||||
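,
"\n",
"As a hypothetical sketch, a transformers-based NLP engine can be plugged in through the same `languages_config` format used in this guide. The configuration below follows the format in the Presidio documentation linked above; the model names are illustrative assumptions, not tested recommendations:\n",
"\n",
"```python\n",
"# Assumes the transformers extra is installed:\n",
"# pip install \"presidio-analyzer[transformers]\"\n",
"nlp_config = {\n",
"    \"nlp_engine_name\": \"transformers\",\n",
"    \"models\": [\n",
"        {\n",
"            \"lang_code\": \"en\",\n",
"            \"model_name\": {\n",
"                # spaCy model used for tokenization and lemmatization\n",
"                \"spacy\": \"en_core_web_sm\",\n",
"                # Hugging Face model used for NER\n",
"                \"transformers\": \"StanfordAIMI/stanford-deidentifier-base\",\n",
"            },\n",
"        },\n",
"    ],\n",
"}\n",
"```"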
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Future works\n",
|
||||
"\n",
|
||||
"- **automatic language detection** - instead of passing the language as a parameter in `anonymizer.anonymize`, we could detect the language/s beforehand and then use the corresponding NER model."
|
||||
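,
"\n",
"A rough sketch of how this could work, assuming the hypothetical use of the `langdetect` package (not a dependency of this guide):\n",
"\n",
"```python\n",
"# pip install langdetect\n",
"from langdetect import detect\n",
"\n",
"text = \"Nazywam się Wiktoria\"\n",
"language = detect(text)  # returns an ISO 639-1 code, e.g. 'pl'\n",
"anonymizer.anonymize(text, language=language)\n",
"```"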
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
@@ -0,0 +1,461 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Reversible data anonymization with Microsoft Presidio\n",
|
||||
"\n",
|
||||
"[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/extras/guides/privacy/presidio_data_anonymization/reversible.ipynb)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"## Use case\n",
|
||||
"\n",
|
||||
"We have already written about the importance of anonymizing sensitive data in the previous section. **Reversible Anonymization** is an equally essential technology while sharing information with language models, as it balances data protection with data usability. This technique involves masking sensitive personally identifiable information (PII), yet it can be reversed and original data can be restored when authorized users need it. Its main advantage lies in the fact that while it conceals individual identities to prevent misuse, it also allows the concealed data to be accurately unmasked should it be necessary for legal or compliance purposes. \n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"\n",
|
||||
"We implemented the `PresidioReversibleAnonymizer`, which consists of two parts:\n",
|
||||
"\n",
|
||||
"1. anonymization - it works the same way as `PresidioAnonymizer`, plus the object itself stores a mapping of made-up values to original ones, for example:\n",
|
||||
"```\n",
|
||||
" {\n",
|
||||
" \"PERSON\": {\n",
|
||||
" \"<anonymized>\": \"<original>\",\n",
|
||||
" \"John Doe\": \"Slim Shady\"\n",
|
||||
" },\n",
|
||||
" \"PHONE_NUMBER\": {\n",
|
||||
" \"111-111-1111\": \"555-555-5555\"\n",
|
||||
" }\n",
|
||||
" ...\n",
|
||||
" }\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"2. deanonymization - using the mapping described above, it matches fake data with original data and then substitutes it.\n",
|
||||
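"\n",
"Conceptually, the substitution step amounts to a simple replacement over the stored mapping - a minimal sketch, not the library's actual implementation:\n",
"\n",
"```python\n",
"def deanonymize(text: str, mapping: dict) -> str:\n",
"    # mapping has the shape shown above: {entity_type: {fake: original}}\n",
"    for entity_map in mapping.values():\n",
"        for fake, original in entity_map.items():\n",
"            text = text.replace(fake, original)\n",
"    return text\n",
"```\n",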
"\n",
|
||||
"Between anonymization and deanonymization user can perform different operations, for example, passing the output to LLM.\n",
|
||||
"\n",
|
||||
"## Quickstart\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Install necessary packages\n",
|
||||
"# ! pip install langchain langchain-experimental openai presidio-analyzer presidio-anonymizer spacy Faker\n",
|
||||
"# ! python -m spacy download en_core_web_lg"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"`PresidioReversibleAnonymizer` is not significantly different from its predecessor (`PresidioAnonymizer`) in terms of anonymization:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'My name is Maria Lynch, call me at 7344131647 or email me at jamesmichael@example.com. By the way, my card number is: 4838637940262'"
|
||||
]
|
||||
},
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_experimental.data_anonymizer import PresidioReversibleAnonymizer\n",
|
||||
"\n",
|
||||
"anonymizer = PresidioReversibleAnonymizer(\n",
|
||||
" analyzed_fields=[\"PERSON\", \"PHONE_NUMBER\", \"EMAIL_ADDRESS\", \"CREDIT_CARD\"],\n",
|
||||
" # Faker seed is used here to make sure the same fake data is generated for the test purposes\n",
|
||||
" # In production, it is recommended to remove the faker_seed parameter (it will default to None)\n",
|
||||
" faker_seed=42,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"anonymizer.anonymize(\n",
|
||||
" \"My name is Slim Shady, call me at 313-666-7440 or email me at real.slim.shady@gmail.com. \"\n",
|
||||
" \"By the way, my card number is: 4916 0387 9536 0861\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"This is what the full string we want to deanonymize looks like:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Maria Lynch recently lost his wallet. \n",
|
||||
"Inside is some cash and his credit card with the number 4838637940262. \n",
|
||||
"If you would find it, please call at 7344131647 or write an email here: jamesmichael@example.com.\n",
|
||||
"Maria Lynch would be very grateful!\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# We know this data, as we set the faker_seed parameter\n",
|
||||
"fake_name = \"Maria Lynch\"\n",
|
||||
"fake_phone = \"7344131647\"\n",
|
||||
"fake_email = \"jamesmichael@example.com\"\n",
|
||||
"fake_credit_card = \"4838637940262\"\n",
|
||||
"\n",
|
||||
"anonymized_text = f\"\"\"{fake_name} recently lost his wallet. \n",
|
||||
"Inside is some cash and his credit card with the number {fake_credit_card}. \n",
|
||||
"If you would find it, please call at {fake_phone} or write an email here: {fake_email}.\n",
|
||||
"{fake_name} would be very grateful!\"\"\"\n",
|
||||
"\n",
|
||||
"print(anonymized_text)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"And now, using the `deanonymize` method, we can reverse the process:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Slim Shady recently lost his wallet. \n",
|
||||
"Inside is some cash and his credit card with the number 4916 0387 9536 0861. \n",
|
||||
"If you would find it, please call at 313-666-7440 or write an email here: real.slim.shady@gmail.com.\n",
|
||||
"Slim Shady would be very grateful!\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(anonymizer.deanonymize(anonymized_text))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Using with LangChain Expression Language\n",
|
||||
"\n",
|
||||
"With LCEL we can easily chain together anonymization and deanonymization with the rest of our application. This is an example of using the anonymization mechanism with a query to LLM (without deanonymization for now):"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"text = f\"\"\"Slim Shady recently lost his wallet. \n",
|
||||
"Inside is some cash and his credit card with the number 4916 0387 9536 0861. \n",
|
||||
"If you would find it, please call at 313-666-7440 or write an email here: real.slim.shady@gmail.com.\"\"\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Dear Sir/Madam,\n",
|
||||
"\n",
|
||||
"We regret to inform you that Mr. Dana Rhodes has reported the loss of his wallet. The wallet contains a sum of cash and his credit card, bearing the number 4397528473885757. \n",
|
||||
"\n",
|
||||
"If you happen to come across the aforementioned wallet, we kindly request that you contact us immediately at 258-481-7074x714 or via email at laurengoodman@example.com.\n",
|
||||
"\n",
|
||||
"Your prompt assistance in this matter would be greatly appreciated.\n",
|
||||
"\n",
|
||||
"Yours faithfully,\n",
|
||||
"\n",
|
||||
"[Your Name]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.prompts.prompt import PromptTemplate\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"\n",
|
||||
"anonymizer = PresidioReversibleAnonymizer()\n",
|
||||
"\n",
|
||||
"template = \"\"\"Rewrite this text into an official, short email:\n",
|
||||
"\n",
|
||||
"{anonymized_text}\"\"\"\n",
|
||||
"prompt = PromptTemplate.from_template(template)\n",
|
||||
"llm = ChatOpenAI(temperature=0)\n",
|
||||
"\n",
|
||||
"chain = {\"anonymized_text\": anonymizer.anonymize} | prompt | llm\n",
|
||||
"response = chain.invoke(text)\n",
|
||||
"print(response.content)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now, let's add **deanonymization step** to our sequence:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Dear Sir/Madam,\n",
|
||||
"\n",
|
||||
"We regret to inform you that Mr. Slim Shady has recently misplaced his wallet. The wallet contains a sum of cash and his credit card, bearing the number 4916 0387 9536 0861. \n",
|
||||
"\n",
|
||||
"If by any chance you come across the lost wallet, kindly contact us immediately at 313-666-7440 or send an email to real.slim.shady@gmail.com.\n",
|
||||
"\n",
|
||||
"Your prompt assistance in this matter would be greatly appreciated.\n",
|
||||
"\n",
|
||||
"Yours faithfully,\n",
|
||||
"\n",
|
||||
"[Your Name]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain = chain | (lambda ai_message: anonymizer.deanonymize(ai_message.content))\n",
|
||||
"response = chain.invoke(text)\n",
|
||||
"print(response)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Anonymized data was given to the model itself, and therefore it was protected from being leaked to the outside world. Then, the model's response was processed, and the factual value was replaced with the real one."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Extra knowledge"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"`PresidioReversibleAnonymizer` stores the mapping of the fake values to the original values in the `deanonymizer_mapping` parameter, where key is fake PII and value is the original one: "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'PERSON': {'Maria Lynch': 'Slim Shady'},\n",
|
||||
" 'PHONE_NUMBER': {'7344131647': '313-666-7440'},\n",
|
||||
" 'EMAIL_ADDRESS': {'jamesmichael@example.com': 'real.slim.shady@gmail.com'},\n",
|
||||
" 'CREDIT_CARD': {'4838637940262': '4916 0387 9536 0861'}}"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_experimental.data_anonymizer import PresidioReversibleAnonymizer\n",
|
||||
"\n",
|
||||
"anonymizer = PresidioReversibleAnonymizer(\n",
|
||||
" analyzed_fields=[\"PERSON\", \"PHONE_NUMBER\", \"EMAIL_ADDRESS\", \"CREDIT_CARD\"],\n",
|
||||
" # Faker seed is used here to make sure the same fake data is generated for the test purposes\n",
|
||||
" # In production, it is recommended to remove the faker_seed parameter (it will default to None)\n",
|
||||
" faker_seed=42,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"anonymizer.anonymize(\n",
|
||||
" \"My name is Slim Shady, call me at 313-666-7440 or email me at real.slim.shady@gmail.com. \"\n",
|
||||
" \"By the way, my card number is: 4916 0387 9536 0861\"\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"anonymizer.deanonymizer_mapping"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Anonymizing more texts will result in new mapping entries:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Do you have his VISA card number? Yep, it's 3537672423884966. I'm William Bowman by the way.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'PERSON': {'Maria Lynch': 'Slim Shady', 'William Bowman': 'John Doe'},\n",
|
||||
" 'PHONE_NUMBER': {'7344131647': '313-666-7440'},\n",
|
||||
" 'EMAIL_ADDRESS': {'jamesmichael@example.com': 'real.slim.shady@gmail.com'},\n",
|
||||
" 'CREDIT_CARD': {'4838637940262': '4916 0387 9536 0861',\n",
|
||||
" '3537672423884966': '4001 9192 5753 7193'}}"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(\n",
|
||||
" anonymizer.anonymize(\n",
|
||||
" \"Do you have his VISA card number? Yep, it's 4001 9192 5753 7193. I'm John Doe by the way.\"\n",
|
||||
" )\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"anonymizer.deanonymizer_mapping"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can save the mapping itself to a file for future use: "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# We can save the deanonymizer mapping as a JSON or YAML file\n",
|
||||
"\n",
|
||||
"anonymizer.save_deanonymizer_mapping(\"deanonymizer_mapping.json\")\n",
|
||||
"# anonymizer.save_deanonymizer_mapping(\"deanonymizer_mapping.yaml\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"And then, load it in another `PresidioReversibleAnonymizer` instance:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{}"
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"anonymizer = PresidioReversibleAnonymizer()\n",
|
||||
"\n",
|
||||
"anonymizer.deanonymizer_mapping"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'PERSON': {'Maria Lynch': 'Slim Shady', 'William Bowman': 'John Doe'},\n",
|
||||
" 'PHONE_NUMBER': {'7344131647': '313-666-7440'},\n",
|
||||
" 'EMAIL_ADDRESS': {'jamesmichael@example.com': 'real.slim.shady@gmail.com'},\n",
|
||||
" 'CREDIT_CARD': {'4838637940262': '4916 0387 9536 0861',\n",
|
||||
" '3537672423884966': '4001 9192 5753 7193'}}"
|
||||
]
|
||||
},
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"anonymizer.load_deanonymizer_mapping(\"deanonymizer_mapping.json\")\n",
|
||||
"\n",
|
||||
"anonymizer.deanonymizer_mapping"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Future works\n",
|
||||
"\n",
|
||||
"- **instance anonymization** - at this point, each occurrence of PII is treated as a separate entity and separately anonymized. Therefore, two occurrences of the name John Doe in the text will be changed to two different names. It is therefore worth introducing support for full instance detection, so that repeated occurrences are treated as a single object.\n",
|
||||
"- **better matching and substitution of fake values for real ones** - currently the strategy is based on matching full strings and then substituting them. Due to the indeterminism of language models, it may happen that the value in the answer is slightly changed (e.g. *John Doe* -> *John* or *Main St, New York* -> *New York*) and such a substitution is then no longer possible. Therefore, it is worth adjusting the matching for your needs."
|
||||
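,
"\n",
"A minimal sketch of fuzzier matching, assuming a simple word-window comparison with `difflib` from the standard library (an illustration, not the library's strategy):\n",
"\n",
"```python\n",
"from difflib import SequenceMatcher\n",
"\n",
"def fuzzy_replace(text: str, fake: str, original: str, threshold: float = 0.85) -> str:\n",
"    # Slide a window of len(fake.split()) words over the text and substitute\n",
"    # the original value wherever the window is similar enough to the fake one.\n",
"    words = text.split()\n",
"    n = len(fake.split())\n",
"    out, i = [], 0\n",
"    while i <= len(words) - n:\n",
"        window = \" \".join(words[i : i + n])\n",
"        if SequenceMatcher(None, window.lower(), fake.lower()).ratio() >= threshold:\n",
"            out.append(original)\n",
"            i += n\n",
"        else:\n",
"            out.append(words[i])\n",
"            i += 1\n",
"    out.extend(words[i:])\n",
"    return \" \".join(out)\n",
"```"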
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
@@ -1,10 +1,10 @@
|
||||
# Pydantic Compatibility
|
||||
# Pydantic compatibility
|
||||
|
||||
- Pydantic v2 was released in June 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)
|
||||
- v2 contains a number of breaking changes (https://docs.pydantic.dev/2.0/migration/)
|
||||
- Pydantic v2 and v1 are under the same package name, so both versions cannot be installed at the same time
|
||||
|
||||
## LangChain Pydantic Migration Plan
|
||||
## LangChain Pydantic migration plan
|
||||
|
||||
As of `langchain>=0.0.267`, LangChain will allow users to install either Pydantic V1 or V2.
|
||||
* Internally LangChain will continue to [use V1](https://docs.pydantic.dev/latest/migration/#continue-using-pydantic-v1-features).
|
||||
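
For example, with Pydantic v2 installed, v1 functionality remains importable through the `pydantic.v1` namespace, so code that needs to interoperate with LangChain's internal models can use it directly (a minimal sketch; the `Person` model is purely illustrative):

```python
# With Pydantic v2 installed, the v1 API is still available under pydantic.v1,
# which matches what LangChain uses internally.
from pydantic.v1 import BaseModel


class Person(BaseModel):
    name: str


Person(name="Jane")
```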
|
||||
1390
docs/extras/guides/safety/amazon_comprehend_chain.ipynb
Normal file
File diff suppressed because it is too large
@@ -93,7 +93,7 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Usage\n",
|
||||
"### Using the Context callback within a Chat Model\n",
|
||||
"### Using the Context callback within a chat model\n",
|
||||
"\n",
|
||||
"The Context callback handler can be used to directly record transcripts between users and AI assistants.\n",
|
||||
"\n",
|
||||
|
||||
File diff suppressed because one or more lines are too long
63
docs/extras/integrations/callbacks/llmonitor.md
Normal file
@@ -0,0 +1,63 @@
|
||||
# LLMonitor
|
||||
|
||||
[LLMonitor](https://llmonitor.com) is an open-source observability platform that provides cost tracking, user tracking and powerful agent tracing.
|
||||
|
||||
<video controls width='100%' >
|
||||
<source src='https://llmonitor.com/videos/demo-annotated.mp4'/>
|
||||
</video>
|
||||
|
||||
## Setup
|
||||
Create an account on [llmonitor.com](https://llmonitor.com), create an `App`, and then copy the associated `tracking id`.
|
||||
Once you have it, set it as an environment variable by running:
|
||||
```bash
|
||||
export LLMONITOR_APP_ID="..."
|
||||
```
|
||||
|
||||
If you'd prefer not to set an environment variable, you can pass the key directly when initializing the callback handler:
|
||||
```python
|
||||
from langchain.callbacks import LLMonitorCallbackHandler
|
||||
|
||||
handler = LLMonitorCallbackHandler(app_id="...")
|
||||
```
|
||||
|
||||
## Usage with LLM/Chat models
|
||||
```python
|
||||
from langchain.llms import OpenAI
|
||||
from langchain.chat_models import ChatOpenAI
|
||||
from langchain.callbacks import LLMonitorCallbackHandler
|
||||
|
||||
handler = LLMonitorCallbackHandler(app_id="...")
|
||||
|
||||
llm = OpenAI(
|
||||
callbacks=[handler],
|
||||
)
|
||||
|
||||
chat = ChatOpenAI(
|
||||
callbacks=[handler],
|
||||
metadata={"userId": "123"}, # you can assign user ids to models in the metadata
|
||||
)
|
||||
```
|
||||
|
||||
|
||||
## Usage with agents
|
||||
```python
|
||||
from langchain.agents import load_tools, initialize_agent, AgentType
|
||||
from langchain.llms import OpenAI
|
||||
from langchain.callbacks import LLMonitorCallbackHandler
|
||||
|
||||
handler = LLMonitorCallbackHandler(app_id="...")
|
||||
|
||||
llm = OpenAI(temperature=0)
|
||||
tools = load_tools(["serpapi", "llm-math"], llm=llm)
|
||||
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
|
||||
agent.run(
|
||||
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
|
||||
callbacks=[handler],
|
||||
metadata={
|
||||
"agentName": "Leo DiCaprio's girlfriend", # you can assign a custom agent in the metadata
|
||||
},
|
||||
)
|
||||
```
|
||||
|
||||
## Support
|
||||
For any questions or issues with the integration, you can reach out to the LLMonitor team on [Discord](http://discord.com/invite/8PafSG58kK) or via [email](mailto:vince@llmonitor.com).
|
||||
106
docs/extras/integrations/chat/bedrock.ipynb
Normal file
@@ -0,0 +1,106 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "bf733a38-db84-4363-89e2-de6735c37230",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Bedrock Chat\n",
|
||||
"\n",
|
||||
"[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d51edc81",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install boto3"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import BedrockChat\n",
|
||||
"from langchain.schema import HumanMessage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = BedrockChat(model_id=\"anthropic.claude-v2\", model_kwargs={\"temperature\":0.1})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=\" Voici la traduction en français : J'adore programmer.\", additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Translate this sentence from English to French. I love programming.\"\n",
|
||||
" )\n",
|
||||
"]\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "c253883f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.4"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
164
docs/extras/integrations/chat/konko.ipynb
Normal file
@@ -0,0 +1,164 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Konko\n",
|
||||
"\n",
|
||||
">[Konko](https://www.konko.ai/) API is a fully managed Web API designed to help application developers:\n",
|
||||
"\n",
|
||||
"Konko API is a fully managed API designed to help application developers:\n",
|
||||
"\n",
|
||||
"1. Select the right LLM(s) for their application\n",
|
||||
"2. Prototype with various open-source and proprietary LLMs\n",
|
||||
"3. Move to production in-line with their security, privacy, throughput, latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant infrastructure\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"This example goes over how to use LangChain to interact with `Konko` [models](https://docs.konko.ai/docs/overview)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"To run this notebook, you'll need Konko API key. You can request it by messaging support@konko.ai."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatKonko\n",
|
||||
"from langchain.prompts.chat import (\n",
|
||||
" ChatPromptTemplate,\n",
|
||||
" SystemMessagePromptTemplate,\n",
|
||||
" AIMessagePromptTemplate,\n",
|
||||
" HumanMessagePromptTemplate,\n",
|
||||
")\n",
|
||||
"from langchain.schema import AIMessage, HumanMessage, SystemMessage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## 2. Set API Keys\n",
|
||||
"\n",
|
||||
"<br />\n",
|
||||
"\n",
|
||||
"### Option 1: Set Environment Variables\n",
|
||||
"\n",
|
||||
"1. You can set environment variables for \n",
|
||||
" 1. KONKO_API_KEY (Required)\n",
|
||||
" 2. OPENAI_API_KEY (Optional)\n",
|
||||
"2. In your current shell session, use the export command:\n",
|
||||
"\n",
|
||||
"```shell\n",
|
||||
"export KONKO_API_KEY={your_KONKO_API_KEY_here}\n",
|
||||
"export OPENAI_API_KEY={your_OPENAI_API_KEY_here} #Optional\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"Alternatively, you can add the above lines directly to your shell startup script (such as .bashrc or .bash_profile for Bash shell and .zshrc for Zsh shell) to have them set automatically every time a new shell session starts.\n",
|
||||
"\n",
|
||||
"### Option 2: Set API Keys Programmatically\n",
|
||||
"\n",
|
||||
"If you prefer to set your API keys directly within your Python script or Jupyter notebook, you can use the following commands:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"konko.set_api_key('your_KONKO_API_KEY_here') \n",
|
||||
"konko.set_openai_api_key('your_OPENAI_API_KEY_here') # Optional\n",
|
||||
"```\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Calling a model\n",
|
||||
"\n",
|
||||
"Find a model on the [Konko overview page](https://docs.konko.ai/docs/overview)\n",
|
||||
"\n",
|
||||
"For example, for this [LLama 2 model](https://docs.konko.ai/docs/meta-llama-2-13b-chat). The model id would be: `\"meta-llama/Llama-2-13b-chat-hf\"`\n",
|
||||
"\n",
|
||||
"Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/listmodels).\n",
|
||||
"\n",
|
||||
"From here, we can initialize our model:\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatKonko(max_tokens=400, model = 'meta-llama/Llama-2-13b-chat-hf')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=\" Sure, I'd be happy to explain the Big Bang Theory briefly!\\n\\nThe Big Bang Theory is the leading explanation for the origin and evolution of the universe, based on a vast amount of observational evidence from many fields of science. In essence, the theory posits that the universe began as an infinitely hot and dense point, known as a singularity, around 13.8 billion years ago. This singularity expanded rapidly, and as it did, it cooled and formed subatomic particles, which eventually coalesced into the first atoms, and later into the stars and galaxies we see today.\\n\\nThe theory gets its name from the idea that the universe began in a state of incredibly high energy and temperature, and has been expanding and cooling ever since. This expansion is thought to have been driven by a mysterious force known as dark energy, which is thought to be responsible for the accelerating expansion of the universe.\\n\\nOne of the key predictions of the Big Bang Theory is that the universe should be homogeneous and isotropic on large scales, meaning that it should look the same in all directions and have the same properties everywhere. This prediction has been confirmed by a wealth of observational evidence, including the cosmic microwave background radiation, which is thought to be a remnant of the early universe.\\n\\nOverall, the Big Bang Theory is a well-established and widely accepted explanation for the origins of the universe, and it has been supported by a vast amount of observational evidence from many fields of science.\", additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" SystemMessage(\n",
|
||||
" content=\"You are a helpful assistant.\"\n",
|
||||
" ),\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Explain Big Bang Theory briefly\"\n",
|
||||
" ),\n",
|
||||
"]\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.3"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
"hash": "a0a0263b650d907a3bfe41c0f8d6a63a071b884df3cfdc1579f00cdc1aed6b03"
|
||||
}
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
@@ -571,7 +571,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
"version": "3.10.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
179
docs/extras/integrations/chat_loaders/gmail.ipynb
Normal file
@@ -0,0 +1,179 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b3d1705d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# GMail\n",
|
||||
"\n",
|
||||
"This loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opionated in how to do so. The way it does it is it first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email, and creates a training example of that email, followed by your email.\n",
|
||||
"\n",
|
||||
"Note that there are clear limitations here. For example, all examples created are only looking at the previous email for context.\n",
|
||||
"\n",
|
||||
"To use:\n",
|
||||
"\n",
|
||||
"- Set up a Google Developer Account: Go to the Google Developer Console, create a project, and enable the Gmail API for that project. This will give you a credentials.json file that you'll need later.\n",
|
||||
"\n",
|
||||
"- Install the Google Client Library: Run the following command to install the Google Client Library:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "84578039",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"!pip install --upgrade google-auth google-auth-oauthlib google-auth-httplib2 google-api-python-client"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "be18f796",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os.path\n",
|
||||
"import base64\n",
|
||||
"import json\n",
|
||||
"import re\n",
|
||||
"import time\n",
|
||||
"from google.auth.transport.requests import Request\n",
|
||||
"from google.oauth2.credentials import Credentials\n",
|
||||
"from google_auth_oauthlib.flow import InstalledAppFlow\n",
|
||||
"from googleapiclient.discovery import build\n",
|
||||
"import logging\n",
|
||||
"import requests\n",
|
||||
"\n",
|
||||
"SCOPES = ['https://www.googleapis.com/auth/gmail.readonly']\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"creds = None\n",
|
||||
"# The file token.json stores the user's access and refresh tokens, and is\n",
|
||||
"# created automatically when the authorization flow completes for the first\n",
|
||||
"# time.\n",
|
||||
"if os.path.exists('email_token.json'):\n",
|
||||
" creds = Credentials.from_authorized_user_file('email_token.json', SCOPES)\n",
|
||||
"# If there are no (valid) credentials available, let the user log in.\n",
|
||||
"if not creds or not creds.valid:\n",
|
||||
" if creds and creds.expired and creds.refresh_token:\n",
|
||||
" creds.refresh(Request())\n",
|
||||
" else:\n",
|
||||
" flow = InstalledAppFlow.from_client_secrets_file( \n",
|
||||
" # your creds file here. Please create json file as here https://cloud.google.com/docs/authentication/getting-started\n",
|
||||
" 'creds.json', SCOPES)\n",
|
||||
" creds = flow.run_local_server(port=0)\n",
|
||||
" # Save the credentials for the next run\n",
|
||||
" with open('email_token.json', 'w') as token:\n",
|
||||
" token.write(creds.to_json())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "a2793ba0",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_loaders.gmail import GMailLoader"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "2154597f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"loader = GMailLoader(creds=creds, n=3)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "0b7d11bd",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"data = loader.load()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "74764bc7",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"2"
|
||||
]
|
||||
},
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Sometimes there can be errors which we silently ignore\n",
|
||||
"len(data)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "d9360a85",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_loaders.utils import (\n",
|
||||
" map_ai_messages,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"id": "a9646f7a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# This makes messages sent by hchase@langchain.com the AI Messages\n",
|
||||
"# This means you will train an LLM to predict as if it's responding as hchase\n",
|
||||
"training_data = list(map_ai_messages(data, sender=\"Harrison Chase <hchase@langchain.com>\"))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d1a182f0",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
420
docs/extras/integrations/chat_loaders/imessage.ipynb
Normal file
@@ -0,0 +1,420 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "01fcfa2f-33a9-48f3-835a-b1956c394d6b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# iMessage\n",
|
||||
"\n",
|
||||
"This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.\n",
|
||||
"\n",
|
||||
"On MacOS, iMessage stores conversations in a sqlite database at `~/Library/Messages/chat.db` (at least for macOS Ventura 13.4). \n",
|
||||
"The `IMessageChatLoader` loads from this database file. \n",
|
||||
"\n",
|
||||
"1. Create the `IMessageChatLoader` with the file path pointed to `chat.db` database you'd like to process.\n",
|
||||
"2. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion. Optionally use `merge_chat_runs` to combine message from the same sender in sequence, and/or `map_ai_messages` to convert messages from the specified sender to the \"AIMessage\" class.\n",
|
||||
"\n",
|
||||
"## 1. Access Chat DB\n",
|
||||
"\n",
|
||||
"It's likely that your terminal is denied access to `~/Library/Messages`. To use this class, you can copy the DB to an accessible directory (e.g., Documents) and load from there. Alternatively (and not recommended), you can grant full disk access for your terminal emulator in System Settings > Securityand Privacy > Full Disk Access.\n",
|
||||
"\n",
|
||||
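"A minimal sketch of copying the database to the working directory (assuming your Python process has already been granted the necessary disk access) could be:\n",
"\n",
"```python\n",
"import os\n",
"import shutil\n",
"\n",
"# Copy the iMessage database to a location the loader can read.\n",
"shutil.copy(os.path.expanduser(\"~/Library/Messages/chat.db\"), \"./chat.db\")\n",
"```\n",
"\n",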
"We have created an example database you can use at [this linked drive file](https://drive.google.com/file/d/1NebNKqTA2NXApCmeH6mu0unJD2tANZzo/view?usp=sharing)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "036ce7e0-a38f-4cbe-89a6-a205ae7c23be",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"File chat.db downloaded.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# This uses some example data\n",
|
||||
"import requests\n",
|
||||
"\n",
|
||||
"def download_drive_file(url: str, output_path: str = 'chat.db') -> None:\n",
|
||||
" file_id = url.split('/')[-2]\n",
|
||||
" download_url = f'https://drive.google.com/uc?export=download&id={file_id}'\n",
|
||||
"\n",
|
||||
" response = requests.get(download_url)\n",
|
||||
" if response.status_code != 200:\n",
|
||||
" print('Failed to download the file.')\n",
|
||||
" return\n",
|
||||
"\n",
|
||||
" with open(output_path, 'wb') as file:\n",
|
||||
" file.write(response.content)\n",
|
||||
" print(f'File {output_path} downloaded.')\n",
|
||||
"\n",
|
||||
"url = 'https://drive.google.com/file/d/1NebNKqTA2NXApCmeH6mu0unJD2tANZzo/view?usp=sharing'\n",
|
||||
"\n",
|
||||
"# Download file to chat.db\n",
|
||||
"download_drive_file(url)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "cf60f703-76f1-4602-a723-02c59535c1af",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## 2. Create the Chat Loader\n",
|
||||
"\n",
|
||||
"Provide the loader with the file path to the zip directory. You can optionally specify the user id that maps to an ai message as well an configure whether to merge message runs."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "4b8b432a-d2bc-49e1-b35f-761730a8fd6d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_loaders.imessage import IMessageChatLoader"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "8ec6661b-0aca-48ae-9e2b-6412856c287b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"loader = IMessageChatLoader(\n",
|
||||
" path=\"./chat.db\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "8805a7c5-84b4-49f5-8989-0022f2054ace",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## 3. Load messages\n",
|
||||
"\n",
|
||||
"The `load()` (or `lazy_load`) methods return a list of \"ChatSessions\" that currently just contain a list of messages per loaded conversation. All messages are mapped to \"HumanMessage\" objects to start. \n",
|
||||
"\n",
|
||||
"You can optionally choose to merge message \"runs\" (consecutive messages from the same sender) and select a sender to represent the \"AI\". The fine-tuned LLM will learn to generate these AI messages."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "fcd69b3e-020d-4a15-8a0d-61c2d34e1ee1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from typing import List\n",
|
||||
"from langchain.chat_loaders.base import ChatSession\n",
|
||||
"from langchain.chat_loaders.utils import (\n",
|
||||
" map_ai_messages,\n",
|
||||
" merge_chat_runs,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"raw_messages = loader.lazy_load()\n",
|
||||
"# Merge consecutive messages from the same sender into a single message\n",
|
||||
"merged_messages = merge_chat_runs(raw_messages)\n",
|
||||
"# Convert messages from \"Tortoise\" to AI messages. Do you have a guess who these conversations are between?\n",
|
||||
"chat_sessions: List[ChatSession] = list(map_ai_messages(merged_messages, sender=\"Tortoise\"))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "370b8c26-c7a8-434c-a225-45c20ff14a03",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[AIMessage(content=\"Slow and steady, that's my motto.\", additional_kwargs={'message_time': 1693182723, 'sender': 'Tortoise'}, example=False),\n",
|
||||
" HumanMessage(content='Speed is key!', additional_kwargs={'message_time': 1693182753, 'sender': 'Hare'}, example=False),\n",
|
||||
" AIMessage(content='A balanced approach is more reliable.', additional_kwargs={'message_time': 1693182783, 'sender': 'Tortoise'}, example=False)]"
|
||||
]
|
||||
},
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Now all of the Tortoise's messages will take the AI message class\n",
|
||||
"# which maps to the 'assistant' role in OpenAI's training format\n",
|
||||
"alternating_sessions[0]['messages'][:3]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "05208f9d-3193-4a8d-86a5-13df2c8197e5",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## 3. Prepare for fine-tuning\n",
|
||||
"\n",
|
||||
"Now it's time to convert our chat messages to OpenAI dictionaries. We can use the `convert_messages_for_finetuning` utility to do so."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "8834861f-f37f-4c08-96c6-917269bf09b8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.adapters.openai import convert_messages_for_finetuning"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "ce7ab0f9-6e6a-4a1c-8b86-c635251d437e",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Prepared 10 dialogues for training\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"training_data = convert_messages_for_finetuning(alternating_sessions)\n",
|
||||
"print(f\"Prepared {len(training_data)} dialogues for training\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b494d64c-8056-42ae-b4c1-a9cfabc002ea",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## 4. Fine-tune the model\n",
|
||||
"\n",
|
||||
"It's time to fine-tune the model. Make sure you have `openai` installed\n",
|
||||
"and have set your `OPENAI_API_KEY` appropriately"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "b4b60daa-b899-4291-a09a-412ce9c218fc",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# %pip install -U openai --quiet"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"id": "2cca6c95-c0d6-4826-b4fa-1c403f217f93",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"File file-zHIgf4r8LltZG3RFpkGd4Sjf ready after 10.19 seconds.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"from io import BytesIO\n",
|
||||
"import time\n",
|
||||
"\n",
|
||||
"import openai\n",
|
||||
"\n",
|
||||
"# We will write the jsonl file in memory\n",
|
||||
"my_file = BytesIO()\n",
|
||||
"for m in training_data:\n",
|
||||
" my_file.write((json.dumps({\"messages\": m}) + \"\\n\").encode('utf-8'))\n",
|
||||
"\n",
|
||||
"my_file.seek(0)\n",
|
||||
"training_file = openai.File.create(\n",
|
||||
" file=my_file,\n",
|
||||
" purpose='fine-tune'\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# OpenAI audits each training file for compliance reasons.\n",
|
||||
"# This make take a few minutes\n",
|
||||
"status = openai.File.retrieve(training_file.id).status\n",
|
||||
"start_time = time.time()\n",
|
||||
"while status != \"processed\":\n",
|
||||
" print(f\"Status=[{status}]... {time.time() - start_time:.2f}s\", end=\"\\r\", flush=True)\n",
|
||||
" time.sleep(5)\n",
|
||||
" status = openai.File.retrieve(training_file.id).status\n",
|
||||
"print(f\"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "60ee0476-3113-4dc8-a886-bce878c60b07",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"With the file ready, it's time to kick off a training job."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"id": "c376ddca-5b4f-4e5a-bf4e-6beeb467eacc",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"job = openai.FineTuningJob.create(\n",
|
||||
" training_file=training_file.id,\n",
|
||||
" model=\"gpt-3.5-turbo\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "09344c60-0bee-4989-b8d1-4a8821553cc3",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Grab a cup of tea while your model is being prepared. This may take some time!"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 20,
|
||||
"id": "22eae900-04ca-456b-ba51-1dfff1f8e0e1",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Status=[running]... 524.95s\r"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"status = openai.FineTuningJob.retrieve(job.id).status\n",
|
||||
"start_time = time.time()\n",
|
||||
"while status != \"succeeded\":\n",
|
||||
" print(f\"Status=[{status}]... {time.time() - start_time:.2f}s\", end=\"\\r\", flush=True)\n",
|
||||
" time.sleep(5)\n",
|
||||
" job = openai.FineTuningJob.retrieve(job.id)\n",
|
||||
" status = job.status"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"id": "39e72616-a7d9-44b8-a4eb-506611d119f4",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"ft:gpt-3.5-turbo-0613:personal::7sKoRdlz\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(job.fine_tuned_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0d717749-b1b6-451f-b3c5-3286b82d45b9",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## 5. Use in LangChain\n",
|
||||
"\n",
|
||||
"You can use the resulting model ID directly the `ChatOpenAI` model class."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 22,
|
||||
"id": "1579dfca-95c6-47b7-8549-1195b9dce5b0",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"\n",
|
||||
"model = ChatOpenAI(\n",
|
||||
" model=job.fine_tuned_model,\n",
|
||||
" temperature=1,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 39,
|
||||
"id": "6f53d1b1-dcbf-4976-a61a-17f74c6f1b0a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"from langchain.schema.output_parser import StrOutputParser\n",
|
||||
"\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [\n",
|
||||
" (\"system\", \"You are speaking to hare.\"),\n",
|
||||
" (\"human\", \"{input}\"),\n",
|
||||
" ]\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"chain = prompt | model | StrOutputParser()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 41,
|
||||
"id": "6619c9bc-54ea-4136-bd9a-44557f7da724",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"A symbol of interconnectedness."
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"for tok in chain.stream({\"input\": \"What's the golden thread?\"}):\n",
|
||||
" print(tok, end=\"\", flush=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "88e0d1a1-48a9-4d9d-9f4e-010cbbb65af8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -8,9 +8,9 @@
|
||||
"# Etherscan Loader\n",
|
||||
"## Overview\n",
|
||||
"\n",
|
||||
"The Etherscan loader use etherscan api to load transacactions histories under specific account on Ethereum Mainnet.\n",
|
||||
"The Etherscan loader use etherscan api to load transaction histories under specific account on Ethereum Mainnet.\n",
|
||||
"\n",
|
||||
"You will need a Etherscan api key to proceed. The free api key has 5 calls per seconds quota.\n",
|
||||
"You will need a Etherscan api key to proceed. The free api key has 5 calls per second quota.\n",
|
||||
"\n",
|
||||
"The loader supports the following six functinalities:\n",
|
||||
"* Retrieve normal transactions under specific account on Ethereum Mainet\n",
|
||||
|
||||
@@ -90,7 +90,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]"
|
||||
"[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 's3://testing-hwc/fake.docx'}, lookup_index=0)]"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
@@ -102,13 +102,34 @@
|
||||
"loader.load()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Configuring the AWS Boto3 client\n",
|
||||
"You can configure the AWS [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) client by passing\n",
|
||||
"named arguments when creating the S3DirectoryLoader.\n",
|
||||
"This is useful for instance when AWS credentials can't be set as environment variables.\n",
|
||||
"See the [list of parameters](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html#boto3.session.Session) that can be configured."
|
||||
],
|
||||
"metadata": {}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "885dc280",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
"source": [
|
||||
"loader = S3DirectoryLoader(\"testing-hwc\", aws_access_key_id=\"xxxx\", aws_secret_access_key=\"yyyy\")"
|
||||
],
|
||||
"metadata": {}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"loader.load()"
|
||||
],
|
||||
"metadata": {}
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -53,7 +53,7 @@
{
"data": {
"text/plain": [
"[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]"
"[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 's3://testing-hwc/fake.docx'}, lookup_index=0)]"
]
},
"execution_count": 9,
@@ -66,12 +66,34 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"id": "93689594",
"metadata": {},
"source": [
"## Configuring the AWS Boto3 client\n",
"You can configure the AWS [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) client by passing\n",
"named arguments when creating the S3FileLoader.\n",
"This is useful, for instance, when AWS credentials can't be set as environment variables.\n",
"See the [list of parameters](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html#boto3.session.Session) that can be configured."
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": []
"source": [
"loader = S3FileLoader(\"testing-hwc\", \"fake.docx\", aws_access_key_id=\"xxxx\", aws_secret_access_key=\"yyyy\")"
],
"metadata": {}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"loader.load()"
],
"metadata": {}
}
],
"metadata": {
@@ -96,3 +118,4 @@
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,138 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure Document Intelligence"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Azure Document Intelligence (formerly known as Azure Forms Recognizer) is machine-learning \n",
|
||||
"based service that extracts text (including handwriting), tables or key-value-pairs from\n",
|
||||
"scanned documents or images.\n",
|
||||
"\n",
|
||||
"This current implementation of a loader using Document Intelligence is able to incorporate content page-wise and turn it into LangChain documents.\n",
|
||||
"\n",
|
||||
"Document Intelligence supports PDF, JPEG, PNG, BMP, or TIFF.\n",
|
||||
"\n",
|
||||
"Further documentation is available at https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/?view=doc-intel-3.1.0.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install langchain azure-ai-formrecognizer -q"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Example 1"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The first example uses a local file which will be sent to Azure Document Intelligence.\n",
|
||||
"\n",
|
||||
"First, an instance of a DocumentAnalysisClient is created with endpoint and key for the Azure service. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azure.ai.formrecognizer import DocumentAnalysisClient\n",
|
||||
"from azure.core.credentials import AzureKeyCredential\n",
|
||||
"\n",
|
||||
"document_analysis_client = DocumentAnalysisClient(\n",
|
||||
" endpoint=\"<service_endpoint>\", credential=AzureKeyCredential(\"<service_key>\")\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"With the initialized document analysis client, we can proceed to create an instance of the DocumentIntelligenceLoader:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.document_loaders.pdf import DocumentIntelligenceLoader\n",
|
||||
"loader = DocumentIntelligenceLoader(\n",
|
||||
" \"<Local_filename>\",\n",
|
||||
" client=document_analysis_client,\n",
|
||||
" model=\"<model_name>\") # e.g. prebuilt-document\n",
|
||||
"\n",
|
||||
"documents = loader.load()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The output contains each page of the source document as a LangChain document: "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[Document(page_content='...', metadata={'source': '...', 'page': 1})]"
|
||||
]
|
||||
},
|
||||
"execution_count": 18,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"documents"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"name": "python",
|
||||
"version": "3.9.5"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
"hash": "f9f85f796d01129d0dd105a088854619f454435301f6ffec2fea96ecbd9be4ac"
|
||||
}
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
@@ -106,15 +106,39 @@
"  - `column_data_type`\n",
"  - `column_title`\n",
"  - `column_description`\n",
"  - `column_values`"
"  - `column_values`\n",
"  - `cube_data_obj_type`"
]
},
{
"attachments": {},
"cell_type": "markdown",
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"> page_content='Users View City, None' metadata={'table_name': 'users_view', 'column_name': 'users_view.city', 'column_data_type': 'string', 'column_title': 'Users View City', 'column_description': 'None', 'column_member_type': 'dimension', 'column_values': ['Austin', 'Chicago', 'Los Angeles', 'Mountain View', 'New York', 'Palo Alto', 'San Francisco', 'Seattle']}"
"# Given string containing page content\n",
"page_content = 'Users View City, None'\n",
"\n",
"# Given dictionary containing metadata\n",
"metadata = {\n",
"    'table_name': 'users_view',\n",
"    'column_name': 'users_view.city',\n",
"    'column_data_type': 'string',\n",
"    'column_title': 'Users View City',\n",
"    'column_description': 'None',\n",
"    'column_member_type': 'dimension',\n",
"    'column_values': [\n",
"        'Austin',\n",
"        'Chicago',\n",
"        'Los Angeles',\n",
"        'Mountain View',\n",
"        'New York',\n",
"        'Palo Alto',\n",
"        'San Francisco',\n",
"        'Seattle'\n",
"    ],\n",
"    'cube_data_obj_type': 'view'\n",
"}"
]
}
],
File diff suppressed because one or more lines are too long
@@ -38,7 +38,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "878928a6-a5ae-4f74-b351-64e3b01733fe",
"metadata": {
"tags": []
@@ -50,7 +50,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "2216c83f-68e4-4d2f-8ea2-5878fb18bbe7",
"metadata": {
"tags": []
@@ -66,7 +66,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "8f3b6aa0-b45d-4e37-8c50-5bebe70fdb9d",
"metadata": {
"tags": []
@@ -93,7 +93,7 @@
"source": [
"loader = GoogleDriveLoader(\n",
"    folder_id=\"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5\",\n",
"    file_types=[\"document\", \"sheet\"]\n",
"    file_types=[\"document\", \"sheet\"],\n",
"    recursive=False\n",
")"
]
@@ -110,7 +110,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "94207e39",
"metadata": {},
"outputs": [],
@@ -121,7 +121,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "a15fbee0",
"metadata": {},
"outputs": [],
@@ -136,7 +136,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "98410bda",
"metadata": {},
"outputs": [],
@@ -146,21 +146,10 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "e3e72221",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='\\n  \\n  \\n  Team\\n  Location\\n  Stanley Cups\\n  \\n  \\n  Blues\\n  STL\\n  1\\n  \\n  \\n  Flyers\\n  PHI\\n  2\\n  \\n  \\n  Maple Leafs\\n  TOR\\n  13\\n  \\n  \\n', metadata={'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '<table border=\"1\" class=\"dataframe\">\\n  <tbody>\\n    <tr>\\n      <td>Team</td>\\n      <td>Location</td>\\n      <td>Stanley Cups</td>\\n    </tr>\\n    <tr>\\n      <td>Blues</td>\\n      <td>STL</td>\\n      <td>1</td>\\n    </tr>\\n    <tr>\\n      <td>Flyers</td>\\n      <td>PHI</td>\\n      <td>2</td>\\n    </tr>\\n    <tr>\\n      <td>Maple Leafs</td>\\n      <td>TOR</td>\\n      <td>13</td>\\n    </tr>\\n  </tbody>\\n</table>', 'category': 'Table', 'source': 'https://drive.google.com/file/d/1aA6L2AR3g0CR-PW03HEZZo4NaVlKpaP7/view'})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"docs[0]"
]
@@ -175,7 +164,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "0e2d093f",
"metadata": {},
"outputs": [],
@@ -190,7 +179,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "b35ddcc6",
"metadata": {},
"outputs": [],
@@ -200,21 +189,10 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "3cc141e0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='\\n  \\n  \\n  Team\\n  Location\\n  Stanley Cups\\n  \\n  \\n  Blues\\n  STL\\n  1\\n  \\n  \\n  Flyers\\n  PHI\\n  2\\n  \\n  \\n  Maple Leafs\\n  TOR\\n  13\\n  \\n  \\n', metadata={'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '<table border=\"1\" class=\"dataframe\">\\n  <tbody>\\n    <tr>\\n      <td>Team</td>\\n      <td>Location</td>\\n      <td>Stanley Cups</td>\\n    </tr>\\n    <tr>\\n      <td>Blues</td>\\n      <td>STL</td>\\n      <td>1</td>\\n    </tr>\\n    <tr>\\n      <td>Flyers</td>\\n      <td>PHI</td>\\n      <td>2</td>\\n    </tr>\\n    <tr>\\n      <td>Maple Leafs</td>\\n      <td>TOR</td>\\n      <td>13</td>\\n    </tr>\\n  </tbody>\\n</table>', 'category': 'Table', 'source': 'https://drive.google.com/file/d/1aA6L2AR3g0CR-PW03HEZZo4NaVlKpaP7/view'})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"docs[0]"
]
@@ -226,6 +204,309 @@
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "83ac576b-48c9-4aad-a35e-e978ea32f746",
"metadata": {},
"source": [
"## Extended usage\n",
|
||||
"An external component can manage the complexity of Google Drive : `langchain-googledrive`\n",
|
||||
"It's compatible with the ̀`langchain.document_loaders.GoogleDriveLoader` and can be used\n",
|
||||
"in its place.\n",
|
||||
"\n",
|
||||
"To be compatible with containers, the authentication uses an environment variable ̀GOOGLE_ACCOUNT_FILE` to credential file (for user or service)."
|
||||
]
|
||||
},
|
||||
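As a minimal sketch of that container-friendly authentication (the path below is an assumption, not part of the original notebook):

import os

# Point the loader at a user or service-account credentials file
os.environ["GOOGLE_ACCOUNT_FILE"] = "/path/to/credentials.json"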
{
"cell_type": "code",
"execution_count": null,
"id": "b94f7119-bc1e-4ca3-907f-9d81e837ac59",
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain-googledrive"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4c7474e-49cb-48a1-b3a0-77fba8e2dd70",
"metadata": {},
"outputs": [],
"source": [
"folder_id='root'\n",
"#folder_id='1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8357f7f1-e2b1-41ef-8e38-48fcc3897dba",
"metadata": {},
"outputs": [],
"source": [
"# Use the advanced version.\n",
"from langchain_googledrive.document_loaders import GoogleDriveLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "16ab9d3d-1782-4cb9-ab56-d87edbb25a18",
"metadata": {},
"outputs": [],
"source": [
"loader = GoogleDriveLoader(\n",
|
||||
" folder_id=folder_id,\n",
|
||||
" recursive=False,\n",
|
||||
" num_results=2, # Maximum number of file to load\n",
|
||||
")"
|
||||
]
},
{
"cell_type": "markdown",
"id": "ebac43aa-dd64-4964-802a-a90172415fd1",
"metadata": {},
"source": [
"By default, all files with these mime-types can be converted to `Document`.\n",
"- text/text\n",
"- text/plain\n",
"- text/html\n",
"- text/csv\n",
"- text/markdown\n",
"- image/png\n",
"- image/jpeg\n",
"- application/epub+zip\n",
"- application/pdf\n",
"- application/rtf\n",
"- application/vnd.google-apps.document (GDoc)\n",
"- application/vnd.google-apps.presentation (GSlide)\n",
"- application/vnd.google-apps.spreadsheet (GSheet)\n",
"- application/vnd.google.colaboratory (Notebook colab)\n",
"- application/vnd.openxmlformats-officedocument.presentationml.presentation (PPTX)\n",
"- application/vnd.openxmlformats-officedocument.wordprocessingml.document (DOCX)\n",
"\n",
"It's possible to update or customize this. See the documentation of `GDriveLoader`.\n",
"\n",
"But the corresponding packages must be installed."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4560f35-a37d-44e2-be0b-adaa245b3b3d",
"metadata": {},
"outputs": [],
"source": [
"!pip install unstructured"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6cb08da3-27df-46de-b60e-583bb7e31af4",
"metadata": {},
"outputs": [],
"source": [
"for doc in loader.load():\n",
"    print(\"---\")\n",
"    print(doc.page_content.strip()[:60]+\"...\")"
]
},
{
"cell_type": "markdown",
"id": "cd13d7d1-db7a-498d-ac98-76ccd9ad9019",
"metadata": {},
"source": [
"### Customize the search pattern\n",
"\n",
"All parameters compatible with the Google [`list()`](https://developers.google.com/drive/api/v3/reference/files/list)\n",
"API can be set.\n",
"\n",
"To specify the new pattern of the Google request, you can use a `PromptTemplate()`.\n",
"The variables for the prompt can be set with `kwargs` in the constructor.\n",
"Some pre-formatted requests are provided (use `{query}`, `{folder_id}` and/or `{mime_type}`):\n",
"\n",
"You can customize the criteria used to select the files. A set of predefined filters is provided:\n",
"| template                               | description                                                           |\n",
"| -------------------------------------- | --------------------------------------------------------------------- |\n",
"| gdrive-all-in-folder                   | Return all compatible files from a `folder_id`                        |\n",
"| gdrive-query                           | Search `query` in all drives                                          |\n",
"| gdrive-by-name                         | Search file with name `query`                                         |\n",
"| gdrive-query-in-folder                 | Search `query` in `folder_id` (and sub-folders if `recursive=true`)   |\n",
"| gdrive-mime-type                       | Search a specific `mime_type`                                         |\n",
"| gdrive-mime-type-in-folder             | Search a specific `mime_type` in `folder_id`                          |\n",
"| gdrive-query-with-mime-type            | Search `query` with a specific `mime_type`                            |\n",
"| gdrive-query-with-mime-type-and-folder | Search `query` with a specific `mime_type` and in `folder_id`         |\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "81348d59-8fd6-45d4-9de3-5df5cff5c7e2",
"metadata": {},
"outputs": [],
"source": [
"loader = GoogleDriveLoader(\n",
"    folder_id=folder_id,\n",
"    recursive=False,\n",
"    template=\"gdrive-query\",  # Default template to use\n",
"    query=\"machine learning\",\n",
"    num_results=2,  # Maximum number of files to load\n",
"    supportsAllDrives=False,  # GDrive `list()` parameter\n",
")\n",
"for doc in loader.load():\n",
"    print(\"---\")\n",
"    print(doc.page_content.strip()[:60]+\"...\")"
]
},
{
"cell_type": "markdown",
"id": "46c6ba5b-d4b1-4f0f-9801-5c1314021605",
"metadata": {},
"source": [
"You can customize your pattern."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5a5a323b-8d96-46b7-b46a-fd69bd2c8e04",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"loader = GoogleDriveLoader(\n",
"    folder_id=folder_id,\n",
"    recursive=False,\n",
"    template=PromptTemplate(\n",
"        input_variables=[\"query\", \"query_name\"],\n",
"        template=\"fullText contains '{query}' and name contains '{query_name}' and trashed=false\",\n",
"    ),  # Default template to use\n",
"    query=\"machine learning\",\n",
"    query_name=\"ML\",\n",
"    num_results=2,  # Maximum number of files to load\n",
")\n",
"for doc in loader.load():\n",
"    print(\"---\")\n",
"    print(doc.page_content.strip()[:60]+\"...\")"
]
},
{
"cell_type": "markdown",
"id": "375bb465-8f69-407b-94bd-ffa3718ef500",
"metadata": {},
"source": [
"#### Modes for GSlide and GSheet\n",
"The parameter `mode` accepts different values:\n",
"\n",
"- \"document\": return the body of each document\n",
"- \"snippets\": return the description of each file (set in metadata of Google Drive files).\n",
"\n",
"\n",
"The conversion can handle the following Markdown elements:\n",
"- bullets\n",
"- links\n",
"- tables\n",
"- titles\n",
"\n",
"The parameter `gslide_mode` accepts different values:\n",
"\n",
"- \"single\": one document with <PAGE BREAK> tags\n",
"- \"slide\": one document per slide\n",
"- \"elements\": one document for each element.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7493d7b0-0600-49af-8107-7f4597c92de7",
"metadata": {},
"outputs": [],
"source": [
"loader = GoogleDriveLoader(\n",
"    template=\"gdrive-mime-type\",\n",
"    mime_type=\"application/vnd.google-apps.presentation\",  # Only GSlide files\n",
"    gslide_mode=\"slide\",\n",
"    num_results=2,  # Maximum number of files to load\n",
")\n",
"for doc in loader.load():\n",
"    print(\"---\")\n",
"    print(doc.page_content.strip()[:60]+\"...\")"
]
},
{
"cell_type": "markdown",
"id": "9bf338fb-02d7-452f-8679-c50419b13464",
"metadata": {},
"source": [
"The parameter `gsheet_mode` accepts different values:\n",
"- `\"single\"`: generate one document per line\n",
"- `\"elements\"`: one document with a Markdown array and <PAGE BREAK> tags."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "469f5af0-67db-4f15-8aee-88cde480729b",
"metadata": {},
"outputs": [],
"source": [
"loader = GoogleDriveLoader(\n",
"    template=\"gdrive-mime-type\",\n",
"    mime_type=\"application/vnd.google-apps.spreadsheet\",  # Only GSheet files\n",
"    gsheet_mode=\"elements\",\n",
"    num_results=2,  # Maximum number of files to load\n",
")\n",
"for doc in loader.load():\n",
"    print(\"---\")\n",
"    print(doc.page_content.strip()[:60]+\"...\")"
]
},
{
"cell_type": "markdown",
"id": "09acb864-e919-4add-9e06-deba6f7f0cd8",
"metadata": {},
"source": [
"### Advanced usage\n",
|
||||
"All Google File have a 'description' in the metadata. This field can be used to memorize a summary of the document or others indexed tags (See method `lazy_update_description_with_summary()`).\n",
|
||||
"\n",
|
||||
"If you use the `mode=\"snippet\"`, only the description will be used for the body. Else, the `metadata['summary']` has the field.\n",
|
||||
"\n",
|
||||
"Sometime, a specific filter can be used to extract some information from the filename, to select some files with specific criteria. You can use a filter.\n",
|
||||
"\n",
|
||||
"Sometimes, many documents are returned. It's not necessary to have all documents in memory at the same time. You can use the lazy versions of methods, to get one document at a time. It's better to use a complex query in place of a recursive search. For each folder, a query must be applied if you activate `recursive=True`."
|
||||
]
|
||||
},
{
"cell_type": "code",
"execution_count": null,
"id": "a5e9c8eb-a266-4ae6-a760-d7826a0aa7c5",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"loader = GoogleDriveLoader(\n",
"    gdrive_api_file=os.environ[\"GOOGLE_ACCOUNT_FILE\"],\n",
"    num_results=2,\n",
"    template=\"gdrive-query\",\n",
"    filter=lambda search, file: \"#test\" not in file.get('description',''),\n",
"    query='machine learning',\n",
"    supportsAllDrives=False,\n",
"    )\n",
"for doc in loader.load():\n",
"    print(\"---\")\n",
"    print(doc.page_content.strip()[:60]+\"...\")"
]
},
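As referenced in the Advanced usage notes above, here is a hedged sketch of the lazy variant; it assumes this loader implements the standard LangChain `lazy_load()` iterator, and the `islice` cap is illustrative:

from itertools import islice

# lazy_load() yields Documents one at a time instead of building the full list
for doc in islice(loader.lazy_load(), 2):
    print("---")
    print(doc.page_content.strip()[:60] + "...")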
{
"cell_type": "code",
"execution_count": null,
"id": "51efa73a-4e2d-4f9c-abaf-6c9bde2ff69d",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -244,7 +525,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
"version": "3.10.12"
}
},
"nbformat": 4,
Some files were not shown because too many files have changed in this diff