Compare commits

..

2 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Erick Friis | cba2a6ca40 | Merge branch 'master' into erick/testutils | 2024-02-22 16:21:22 -08:00 |
| Erick Friis | e4c6b66455 | testutils | 2024-02-22 14:53:08 -08:00 |
2466 changed files with 150272 additions and 228360 deletions


@@ -3,18 +3,18 @@ body:
- type: markdown
attributes:
value: |
Thanks for your interest in LangChain 🦜️🔗!
Thanks for your interest in 🦜️🔗 LangChain!
Please follow these instructions, fill every question, and do every step. 🙏
We're asking for this because answering questions and solving problems in GitHub takes a lot of time --
this is time that we cannot spend on adding new features, fixing bugs, writing documentation or reviewing pull requests.
this is time that we cannot spend on adding new features, fixing bugs, write documentation or reviewing pull requests.
By asking questions in a structured way (following this) it will be much easier for us to help you.
By asking questions in a structured way (following this) it will be much easier to help you.
There's a high chance that by following this process, you'll find the solution on your own, eliminating the need to submit a question and wait for an answer. 😎
And there's a high chance that you will find the solution along the way and you won't even have to submit it and wait for an answer. 😎
As there are many questions submitted every day, we will **DISCARD** and close the incomplete ones.
As there are too many questions, we will **DISCARD** and close the incomplete ones.
That will allow us (and others) to focus on helping people like you that follow the whole process. 🤓


@@ -35,8 +35,6 @@ body:
required: true
- label: I am sure that this is a bug in LangChain rather than my code.
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- type: textarea
id: reproduction
validations:


@@ -9,7 +9,7 @@ body:
If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation in a [Question in GitHub Discussions](https://github.com/langchain-ai/langchain/discussions/categories/q-a) instead.
You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
or are a regular contributor to LangChain with previous merged pull requests.
or are a regular contributor to LangChain with previous merged merged pull requests.
- type: checkboxes
id: privileged
attributes:


@@ -1,24 +1,17 @@
import json
import sys
import os
from typing import Dict
LANGCHAIN_DIRS = [
LANGCHAIN_DIRS = {
"libs/core",
"libs/text-splitters",
"libs/community",
"libs/langchain",
"libs/experimental",
]
"libs/community",
}
if __name__ == "__main__":
files = sys.argv[1:]
dirs_to_run: Dict[str, set] = {
"lint": set(),
"test": set(),
"extended-test": set(),
}
dirs_to_run = set()
if len(files) == 300:
# max diff length is 300 files - there are likely files missing
@@ -31,56 +24,27 @@ if __name__ == "__main__":
".github/workflows",
".github/tools",
".github/actions",
"libs/core",
".github/scripts/check_diff.py",
)
):
# add all LANGCHAIN_DIRS for infra changes
dirs_to_run["extended-test"].update(LANGCHAIN_DIRS)
dirs_to_run["lint"].add(".")
if any(file.startswith(dir_) for dir_ in LANGCHAIN_DIRS):
# add that dir and all dirs after in LANGCHAIN_DIRS
# for extended testing
found = False
for dir_ in LANGCHAIN_DIRS:
if file.startswith(dir_):
found = True
if found:
dirs_to_run["extended-test"].add(dir_)
elif file.startswith("libs/standard-tests"):
# TODO: update to include all packages that rely on standard-tests (all partner packages)
# note: won't run on external repo partners
dirs_to_run["lint"].add("libs/standard-tests")
dirs_to_run["test"].add("libs/partners/mistralai")
dirs_to_run["test"].add("libs/partners/openai")
elif file.startswith("libs/cli"):
# todo: add cli makefile
pass
elif file.startswith("libs/partners"):
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}") and [
filename
for filename in os.listdir(f"libs/partners/{partner_dir}")
if not filename.startswith(".")
] != ["README.md"]:
dirs_to_run["test"].add(f"libs/partners/{partner_dir}")
# Skip if the directory was deleted or is just a tombstone readme
elif file.startswith("libs/"):
raise ValueError(
f"Unknown lib: {file}. check_diff.py likely needs "
"an update for this new library!"
dirs_to_run.update(LANGCHAIN_DIRS)
elif "libs/community" in file:
dirs_to_run.update(
("libs/community", "libs/langchain", "libs/experimental")
)
elif any(file.startswith(p) for p in ["docs/", "templates/", "cookbook/"]):
dirs_to_run["lint"].add(".")
outputs = {
"dirs-to-lint": list(
dirs_to_run["lint"] | dirs_to_run["test"] | dirs_to_run["extended-test"]
),
"dirs-to-test": list(dirs_to_run["test"] | dirs_to_run["extended-test"]),
"dirs-to-extended-test": list(dirs_to_run["extended-test"]),
}
for key, value in outputs.items():
json_output = json.dumps(value)
print(f"{key}={json_output}") # noqa: T201
elif "libs/partners" in file:
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}"):
dirs_to_run.add(f"libs/partners/{partner_dir}")
# Skip if the directory was deleted
elif "libs/langchain" in file:
dirs_to_run.update(("libs/langchain", "libs/experimental"))
elif "libs/experimental" in file:
dirs_to_run.add("libs/experimental")
elif file.startswith("libs/"):
dirs_to_run.update(LANGCHAIN_DIRS)
else:
pass
json_output = json.dumps(list(dirs_to_run))
print(f"dirs-to-run={json_output}") # noqa: T201


@@ -4,25 +4,17 @@ import tomllib
from packaging.version import parse as parse_version
import re
MIN_VERSION_LIBS = [
"langchain-core",
"langchain-community",
"langchain",
"langchain-text-splitters",
]
MIN_VERSION_LIBS = ["langchain-core", "langchain-community", "langchain"]
def get_min_version(version: str) -> str:
# base regex for x.x.x with cases for rc/post/etc
# valid strings: https://peps.python.org/pep-0440/#public-version-identifiers
vstring = r"\d+(?:\.\d+){0,2}(?:(?:a|b|rc|\.post|\.dev)\d+)?"
# case ^x.x.x
_match = re.match(f"^\\^({vstring})$", version)
_match = re.match(r"^\^(\d+(?:\.\d+){0,2})$", version)
if _match:
return _match.group(1)
# case >=x.x.x,<y.y.y
_match = re.match(f"^>=({vstring}),<({vstring})$", version)
_match = re.match(r"^>=(\d+(?:\.\d+){0,2}),<(\d+(?:\.\d+){0,2})$", version)
if _match:
_min = _match.group(1)
_max = _match.group(2)
@@ -30,7 +22,7 @@ def get_min_version(version: str) -> str:
return _min
# case x.x.x
_match = re.match(f"^({vstring})$", version)
_match = re.match(r"^(\d+(?:\.\d+){0,2})$", version)
if _match:
return _match.group(1)
@@ -55,9 +47,6 @@ def get_min_version_from_toml(toml_path: str):
# Get the version string
version_string = dependencies[lib]
if isinstance(version_string, dict):
version_string = version_string["version"]
# Use parse_version to get the minimum supported version from version_string
min_version = get_min_version(version_string)
@@ -67,13 +56,12 @@ def get_min_version_from_toml(toml_path: str):
return min_versions
if __name__ == "__main__":
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
# Call the function to get the minimum versions
min_versions = get_min_version_from_toml(toml_file)
# Call the function to get the minimum versions
min_versions = get_min_version_from_toml(toml_file)
print(
" ".join([f"{lib}=={version}" for lib, version in min_versions.items()])
) # noqa: T201
print(
" ".join([f"{lib}=={version}" for lib, version in min_versions.items()])
) # noqa: T201
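The `vstring` regex above encodes the PEP 440 public version identifiers the script accepts. Below is a minimal sketch of the three constraint shapes handled (caret, range, exact pin); it mirrors the patterns in the diff but is illustrative rather than the script itself:

```python
import re

# x.y.z with optional pre/post/dev suffixes (PEP 440 public versions)
VSTRING = r"\d+(?:\.\d+){0,2}(?:(?:a|b|rc|\.post|\.dev)\d+)?"

def min_version(constraint: str) -> str:
    # ^x.y.z pins the minimum at x.y.z
    if m := re.match(f"^\\^({VSTRING})$", constraint):
        return m.group(1)
    # >=x.y.z,<a.b.c has its minimum at the lower bound
    if m := re.match(f"^>=({VSTRING}),<({VSTRING})$", constraint):
        return m.group(1)
    # a bare x.y.z is an exact pin
    if m := re.match(f"^({VSTRING})$", constraint):
        return m.group(1)
    raise ValueError(f"unrecognized constraint: {constraint}")

assert min_version("^0.1.22") == "0.1.22"
assert min_version(">=0.0.28,<0.2") == "0.0.28"
assert min_version("0.1.0rc1") == "0.1.0rc1"
```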

.github/workflows/_all_ci.yml (new file, 126 lines)

@@ -0,0 +1,126 @@
---
name: langchain CI
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
workflow_dispatch:
inputs:
working-directory:
required: true
type: choice
default: 'libs/langchain'
options:
- libs/langchain
- libs/core
- libs/experimental
- libs/community
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ inputs.working-directory }}
cancel-in-progress: true
env:
POETRY_VERSION: "1.7.1"
jobs:
lint:
name: "-"
uses: ./.github/workflows/_lint.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit
test:
name: "-"
uses: ./.github/workflows/_test.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit
compile-integration-tests:
name: "-"
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit
dependencies:
name: "-"
uses: ./.github/workflows/_dependencies.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit
extended-tests:
name: "make extended_tests #${{ matrix.python-version }}"
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
defaults:
run:
working-directory: ${{ inputs.working-directory }}
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: extended
- name: Install dependencies
shell: bash
run: |
echo "Running extended tests, installing dependencies with poetry..."
poetry install -E extended_testing --with test
- name: Run extended tests
run: make extended_tests
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
ci_end:
name: "CI Success"
needs: [lint, test, compile-integration-tests, dependencies, extended-tests]
if: ${{ always() }}
runs-on: ubuntu-latest
steps:
- name: "CI Success"
if: ${{ !failure() }}
run: |
echo "Success"
exit 0
- name: "CI Failure"
if: ${{ failure() }}
run: |
echo "Failure"
exit 1


@@ -63,8 +63,6 @@ jobs:
- name: Install the opposite major version of pydantic
# If normal tests use pydantic v1, here we'll use v2, and vice versa.
shell: bash
# airbyte currently doesn't support pydantic v2
if: ${{ !startsWith(inputs.working-directory, 'libs/partners/airbyte') }}
run: |
# Determine the major part of pydantic version
REGULAR_VERSION=$(poetry run python -c "import pydantic; print(pydantic.__version__)" | cut -d. -f1)
@@ -99,8 +97,6 @@ jobs:
fi
echo "Found pydantic version ${CURRENT_VERSION}, as expected"
- name: Run pydantic compatibility tests
# airbyte currently doesn't support pydantic v2
if: ${{ !startsWith(inputs.working-directory, 'libs/partners/airbyte') }}
shell: bash
run: make test
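The step above flips pydantic's major version to test compatibility against the opposite one. A minimal sketch of that flip, assuming pydantic is importable in the test environment:

```python
import pydantic

# Read the installed major version and pick the opposite one to test against.
regular_major = int(pydantic.__version__.split(".")[0])
opposite_major = 1 if regular_major == 2 else 2
print(f"regular: v{regular_major}, testing against: v{opposite_major}")
```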


@@ -70,13 +70,6 @@ jobs:
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
run: |
make integration_tests


@@ -55,7 +55,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
- name: Upload build
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -157,24 +157,6 @@ jobs:
run: make tests
working-directory: ${{ inputs.working-directory }}
- name: Get minimum versions
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
poetry run pip install packaging
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
- name: Run unit tests with minimum dependency versions
if: ${{ steps.min-version.outputs.min-versions != '' }}
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
poetry run pip install $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
@@ -209,15 +191,27 @@ jobs:
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
- name: Get minimum versions
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
poetry run pip install packaging
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
- name: Run unit tests with minimum dependency versions
if: ${{ steps.min-version.outputs.min-versions != '' }}
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
poetry run pip install $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
publish:
needs:
- build
@@ -247,7 +241,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
cache-key: release
- uses: actions/download-artifact@v4
- uses: actions/download-artifact@v3
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -286,7 +280,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
cache-key: release
- uses: actions/download-artifact@v4
- uses: actions/download-artifact@v3
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
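The relocated "Get minimum versions" step above hands its result to the next step through the standard GitHub Actions output file. A minimal sketch of that mechanism (assuming it runs inside an Actions step; the version string here is hypothetical):

```python
import os

# Appending key=value to $GITHUB_OUTPUT publishes it as
# steps.<id>.outputs.<key> for later steps in the same job.
min_versions = "langchain-core==0.1.22 langchain==0.1.0"
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
    f.write(f"min-versions={min_versions}\n")
```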


@@ -1,50 +0,0 @@
name: test_doc_imports
on:
workflow_call:
env:
POETRY_VERSION: "1.7.1"
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.11"
name: "check doc imports #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
cache-key: core
- name: Install dependencies
shell: bash
run: poetry install --with test
- name: Install langchain editable
run: |
poetry run pip install -e libs/core libs/langchain libs/community libs/experimental
- name: Check doc imports
shell: bash
run: |
poetry run python docs/scripts/check_imports.py
- name: Ensure the test did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -48,7 +48,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
- name: Upload build
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
@@ -76,7 +76,7 @@ jobs:
steps:
- uses: actions/checkout@v4
- uses: actions/download-artifact@v4
- uses: actions/download-artifact@v3
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/

.github/workflows/api_doc_build.yml (new file, 52 lines)

@@ -0,0 +1,52 @@
name: API docs build
on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *'
env:
POETRY_VERSION: "1.7.1"
PYTHON_VERSION: "3.10"
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
ref: bagatur/api_docs_build
- name: Set Git config
run: |
git config --local user.email "actions@github.com"
git config --local user.name "Github Actions"
- name: Merge master
run: |
git fetch origin master
git merge origin/master -m "Merge master" --allow-unrelated-histories -X theirs
- name: Set up Python ${{ env.PYTHON_VERSION }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
cache-key: api-docs
- name: Install dependencies
run: |
poetry run python -m pip install --upgrade --no-cache-dir pip setuptools
poetry run python -m pip install --upgrade --no-cache-dir sphinx readthedocs-sphinx-ext
poetry run python -m pip install ./libs/partners/*
poetry run python -m pip install --exists-action=w --no-cache-dir -r docs/api_reference/requirements.txt
- name: Build docs
run: |
poetry run python -m pip install --upgrade --no-cache-dir pip setuptools
poetry run python docs/api_reference/create_api_rst.py
poetry run python -m sphinx -T -E -b html -d _build/doctrees -c docs/api_reference docs/api_reference api_reference_build/html -j auto
# https://github.com/marketplace/actions/add-commit
- uses: EndBug/add-and-commit@v9
with:
message: 'Update API docs build'


@@ -1,24 +0,0 @@
name: Check Broken Links
on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *'
jobs:
check-links:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Use Node.js 18.x
uses: actions/setup-node@v3
with:
node-version: 18.x
cache: "yarn"
cache-dependency-path: ./docs/yarn.lock
- name: Install dependencies
run: yarn install --immutable --mode=skip-build
working-directory: ./docs
- name: Check broken links
run: yarn check-broken-links
working-directory: ./docs


@@ -16,9 +16,6 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
POETRY_VERSION: "1.7.1"
jobs:
build:
runs-on: ubuntu-latest
@@ -33,125 +30,15 @@ jobs:
run: |
python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
outputs:
dirs-to-lint: ${{ steps.set-matrix.outputs.dirs-to-lint }}
dirs-to-test: ${{ steps.set-matrix.outputs.dirs-to-test }}
dirs-to-extended-test: ${{ steps.set-matrix.outputs.dirs-to-extended-test }}
lint:
dirs-to-run: ${{ steps.set-matrix.outputs.dirs-to-run }}
ci:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-lint != '[]' }}
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-lint) }}
uses: ./.github/workflows/_lint.yml
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-run) }}
uses: ./.github/workflows/_all_ci.yml
with:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
test:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
uses: ./.github/workflows/_test.yml
with:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
test_doc_imports:
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
uses: ./.github/workflows/_test_doc_imports.yml
secrets: inherit
compile-integration-tests:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
dependencies:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
uses: ./.github/workflows/_dependencies.yml
with:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
extended-tests:
name: "cd ${{ matrix.working-directory }} / make extended_tests #${{ matrix.python-version }}"
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-extended-test != '[]' }}
strategy:
matrix:
# note different variable for extended test dirs
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-extended-test) }}
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
runs-on: ubuntu-latest
defaults:
run:
working-directory: ${{ matrix.working-directory }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ matrix.working-directory }}
cache-key: extended
- name: Install dependencies
shell: bash
run: |
echo "Running extended tests, installing dependencies with poetry..."
poetry install -E extended_testing --with test
- name: Run extended tests
run: make extended_tests
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
ci_success:
name: "CI Success"
needs: [build, lint, test, compile-integration-tests, dependencies, extended-tests]
if: |
always()
runs-on: ubuntu-latest
env:
JOBS_JSON: ${{ toJSON(needs) }}
RESULTS_JSON: ${{ toJSON(needs.*.result) }}
EXIT_CODE: ${{!contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && '0' || '1'}}
steps:
- name: "CI Success"
run: |
echo $JOBS_JSON
echo $RESULTS_JSON
echo "Exiting with $EXIT_CODE"
exit $EXIT_CODE
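The removed `EXIT_CODE` expression in `ci_success` gates the whole workflow on the needed jobs. The following sketch restates that expression in plain Python, with a hypothetical results list:

```python
# exit 0 only when no needed job failed or was cancelled;
# "skipped" and "success" both count as passing.
results = ["success", "success", "skipped"]
exit_code = 0 if not any(r in ("failure", "cancelled") for r in results) else 1
print(f"Exiting with {exit_code}")
```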


@@ -32,6 +32,6 @@ jobs:
- name: Codespell
uses: codespell-project/actions-codespell@v2
with:
skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
skip: guide_imports.json
ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
exclude_file: libs/community/langchain_community/llms/yuan2.py

.github/workflows/doc_lint.yml (new file, 37 lines)

@@ -0,0 +1,37 @@
---
name: CI / cd .
on:
push:
branches: [ master ]
pull_request:
paths:
- 'docs/**'
- 'templates/**'
- 'cookbook/**'
- '.github/workflows/_lint.yml'
- '.github/workflows/doc_lint.yml'
workflow_dispatch:
jobs:
check:
name: Check for "from langchain import x" imports
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Run import check
run: |
# We should not encourage imports directly from main init file
# Except for hub
git grep 'from langchain import' {docs/docs,templates,cookbook} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
lint:
name: "-"
uses:
./.github/workflows/_lint.yml
with:
working-directory: "."
secrets: inherit

.gitignore (6 lines changed)

@@ -115,11 +115,13 @@ celerybeat.pid
# Environments
.env
.envrc
.venv*
venv*
.venv
.venvs
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject


@@ -1,56 +1,44 @@
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck spell_check spell_fix lint lint_package lint_tests format format_diff
.PHONY: all clean docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck
## help: Show this help info.
help: Makefile
@printf "\n\033[1mUsage: make <TARGETS> ...\033[0m\n\n\033[1mTargets:\033[0m\n\n"
@sed -n 's/^##//p' $< | awk -F':' '{printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort | sed -e 's/^/ /'
## all: Default target, shows help.
# Default target executed when no arguments are given to make.
all: help
## clean: Clean documentation and API documentation artifacts.
clean: docs_clean api_docs_clean
######################
# DOCUMENTATION
######################
## docs_build: Build the documentation.
clean: docs_clean api_docs_clean
docs_build:
docs/.local_build.sh
## docs_clean: Clean the documentation build artifacts.
docs_clean:
@if [ -d _dist ]; then \
rm -r _dist; \
echo "Directory _dist has been cleaned."; \
rm -r _dist; \
echo "Directory _dist has been cleaned."; \
else \
echo "Nothing to clean."; \
echo "Nothing to clean."; \
fi
## docs_linkcheck: Run linkchecker on the documentation.
docs_linkcheck:
poetry run linkchecker _dist/docs/ --ignore-url node_modules
## api_docs_build: Build the API Reference documentation.
api_docs_build:
poetry run python docs/api_reference/create_api_rst.py
cd docs/api_reference && poetry run make html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
find ./docs/api_reference -name '*_api_reference.rst' -delete
rm -f docs/api_reference/api_reference.rst
cd docs/api_reference && poetry run make clean
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:
poetry run linkchecker docs/api_reference/_build/html/index.html
## spell_check: Run codespell on the project.
spell_check:
poetry run codespell --toml pyproject.toml
## spell_fix: Run codespell on the project and fix the errors.
spell_fix:
poetry run codespell --toml pyproject.toml -w
@@ -58,14 +46,29 @@ spell_fix:
# LINTING AND FORMATTING
######################
## lint: Run linting on the project.
lint lint_package lint_tests:
poetry run ruff docs templates cookbook
poetry run ruff format docs templates cookbook --diff
poetry run ruff --select I docs templates cookbook
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
## format: Format the project files.
format format_diff:
poetry run ruff format docs templates cookbook
poetry run ruff --select I --fix docs templates cookbook
######################
# HELP
######################
help:
@echo '===================='
@echo '-- DOCUMENTATION --'
@echo 'clean - run docs_clean and api_docs_clean'
@echo 'docs_build - build the documentation'
@echo 'docs_clean - clean the documentation build artifacts'
@echo 'docs_linkcheck - run linkchecker on the documentation'
@echo 'api_docs_build - build the API Reference documentation'
@echo 'api_docs_clean - clean the API Reference documentation build artifacts'
@echo 'api_docs_linkcheck - run linkchecker on the API Reference documentation'
@echo 'spell_check - run codespell on the project'
@echo 'spell_fix - run codespell on the project and fix the errors'
@echo '-- TEST and LINT tasks are within libs/*/ per-package --'


@@ -34,40 +34,34 @@ conda install langchain -c conda-forge
## 🤔 What is LangChain?
**LangChain** is a framework for developing applications powered by large language models (LLMs).
**LangChain** is a framework for developing applications powered by language models. It enables applications that:
- **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
- **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)
For these applications, LangChain simplifies the entire application lifecycle:
This framework consists of several parts.
- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- **[LangChain Templates](templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangGraph](https://python.langchain.com/docs/langgraph)**: LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
- **Open-source libraries**: Build your applications using LangChain's [modular building blocks](https://python.langchain.com/docs/expression_language/) and [components](https://python.langchain.com/docs/modules/). Integrate with hundreds of [third-party providers](https://python.langchain.com/docs/integrations/platforms/).
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://python.langchain.com/docs/langsmith/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn any chain into a REST API with [LangServe](https://python.langchain.com/docs/langserve).
The LangChain libraries themselves are made up of several different packages.
- **[`langchain-core`](libs/core)**: Base abstractions and LangChain Expression Language.
- **[`langchain-community`](libs/community)**: Third party integrations.
- **[`langchain`](libs/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
### Open-source libraries
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[LangGraph](https://python.langchain.com/docs/langgraph)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
### Productionization:
- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
### Deployment:
- **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as REST APIs.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack.svg "LangChain Architecture Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/img/langchain_stack.png "LangChain Architecture Overview")
## 🧱 What can you build with LangChain?
**❓ Question answering with RAG**
**❓ Retrieval augmented generation**
- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**🧱 Extracting structured output**
**💬 Analyzing structured data**
- [Documentation](https://python.langchain.com/docs/use_cases/extraction/)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
- [Documentation](https://python.langchain.com/docs/use_cases/qa_structured/sql)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain/tree/master/templates/sql-llama2)
**🤖 Chatbots**
@@ -78,51 +72,34 @@ And much more! Head to the [Use cases](https://python.langchain.com/docs/use_cas
## 🚀 How does LangChain help?
The main value props of the LangChain libraries are:
1. **Components**: composable building blocks, tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
## LangChain Expression Language (LCEL)
LCEL is the foundation of many of LangChain's components, and is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
- **[Overview](https://python.langchain.com/docs/expression_language/)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/docs/expression_language/interface)**: The standard interface for LCEL objects
- **[Primitives](https://python.langchain.com/docs/expression_language/primitives)**: More on the primitives LCEL includes
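As a quick illustration of the composition style LCEL enables, here is a minimal sketch; it assumes the `langchain-openai` partner package is installed and `OPENAI_API_KEY` is set in the environment:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# prompt -> model -> parser, composed declaratively with the | operator
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | ChatOpenAI() | StrOutputParser()
print(chain.invoke({"topic": "bears"}))
```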
## Components
Components fall into the following **modules**:
**📃 Model I/O:**
This includes [prompt management](https://python.langchain.com/docs/modules/model_io/prompts/), [prompt optimization](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/), a generic interface for [chat models](https://python.langchain.com/docs/modules/model_io/chat/) and [LLMs](https://python.langchain.com/docs/modules/model_io/llms/), and common utilities for working with [model outputs](https://python.langchain.com/docs/modules/model_io/output_parsers/).
This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.
**📚 Retrieval:**
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/modules/data_connection/document_loaders/) from a variety of sources, [preparing it](https://python.langchain.com/docs/modules/data_connection/document_loaders/), [then retrieving it](https://python.langchain.com/docs/modules/data_connection/retrievers/) for use in the generation step.
Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.
**🤖 Agents:**
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/docs/modules/agents/), a [selection of agents](https://python.langchain.com/docs/modules/agents/agent_types/) to choose from, and examples of end-to-end agents.
Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
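To make the decide/act/observe loop concrete, here is a deliberately framework-free sketch; `llm_decide` and `tools` are hypothetical stand-ins, not LangChain APIs:

```python
def run_agent(task: str, llm_decide, tools: dict, max_steps: int = 5) -> str:
    """Loop: the model picks an action, the action runs, and the
    observation is fed back, until the model decides it is done."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        decision = llm_decide(history)   # e.g. {"action": "search", "input": "..."}
        if decision["action"] == "finish":
            return decision["input"]     # final answer
        observation = tools[decision["action"]](decision["input"])
        history.append(f"Observation: {observation}")
    return "Stopped: step limit reached"
```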
## 📖 Documentation
Please see [here](https://python.langchain.com) for full documentation, which includes:
- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
- [Use case](https://python.langchain.com/docs/use_cases/) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/)
- Overviews of the [interfaces](https://python.langchain.com/docs/expression_language/), [components](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
You can also check out the full [API Reference docs](https://api.python.langchain.com).
## 🌐 Ecosystem
- [🦜🛠️ LangSmith](https://python.langchain.com/docs/langsmith/): Tracing and evaluating your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://python.langchain.com/docs/langgraph): Creating stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
- [🦜🏓 LangServe](https://python.langchain.com/docs/langserve): Deploying LangChain runnables and chains as REST APIs.
- [LangChain Templates](https://python.langchain.com/docs/templates/): Example applications hosted with LangServe.
- Overview of the [interfaces](https://python.langchain.com/docs/expression_language/), [modules](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
- [Use case](https://python.langchain.com/docs/use_cases/qa_structured/sql) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/adapters/openai)
- [LangSmith](https://python.langchain.com/docs/langsmith/), [LangServe](https://python.langchain.com/docs/langserve), and [LangChain Template](https://python.langchain.com/docs/templates/) overviews
- [Reference](https://api.python.langchain.com): full API docs
## 💁 Contributing


@@ -1,61 +1,6 @@
# Security Policy
## Reporting OSS Vulnerabilities
## Reporting a Vulnerability
LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
a bounty program for our open source projects.
Please report security vulnerabilities associated with the LangChain
open source projects by visiting the following link:
[https://huntr.com/bounties/disclose/](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true)
Before reporting a vulnerability, please review:
1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
3) LangChain [security guidelines](https://python.langchain.com/docs/security) to
understand what we consider to be a security vulnerability vs. developer
responsibility.
### In-Scope Targets
The following packages and repositories are eligible for bug bounties:
- langchain-core
- langchain (see exceptions)
- langchain-community (see exceptions)
- langgraph
- langserve
### Out of Scope Targets
All out of scope targets defined by huntr as well as:
- **langchain-experimental**: This repository is for experimental code and is not
eligible for bug bounties; bug reports for it will be marked as interesting or a waste
of time and published with no bounty attached.
- **tools**: Tools in either langchain or langchain-community are not eligible for bug
bounties. This includes the following directories
- langchain/tools
- langchain-community/tools
- Please review our [security guidelines](https://python.langchain.com/docs/security)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
for the security of their tools.
- Code documented with security notices. This will be decided on a case-by-case
basis, but it will likely not be eligible for a bounty, as the code is already
documented with guidelines for developers that should be followed to make their
application secure.
- Any LangSmith related repositories or APIs see below.
## Reporting LangSmith Vulnerabilities
Please report security vulnerabilities associated with LangSmith by email to `security@langchain.dev`.
- LangSmith site: https://smith.langchain.com
- SDK client: https://github.com/langchain-ai/langsmith-sdk
### Other Security Concerns
For any other security concerns, please contact us at `security@langchain.dev`.
Please report security vulnerabilities by email to `security@langchain.dev`.
This email is an alias to a subset of our maintainers, and will ensure the issue is promptly triaged and acted upon as needed.


@@ -1,932 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "BYejgj8Zf-LG",
"tags": []
},
"source": [
"## Getting started with LangChain and Gemma, running locally or in the Cloud"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2IxjMb9-jIJ8"
},
"source": [
"### Installing dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 9436,
"status": "ok",
"timestamp": 1708975187360,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "XZaTsXfcheTF",
"outputId": "eb21d603-d824-46c5-f99f-087fb2f618b1",
"tags": []
},
"outputs": [],
"source": [
"!pip install --upgrade langchain langchain-google-vertexai"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IXmAujvC3Kwp"
},
"source": [
"### Running the model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CI8Elyc5gBQF"
},
"source": [
"Go to the VertexAI Model Garden on Google Cloud [console](https://pantheon.corp.google.com/vertex-ai/publishers/google/model-garden/335), and deploy the desired version of Gemma to VertexAI. It will take a few minutes, and after the endpoint it ready, you need to copy its number."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "gv1j8FrVftsC"
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"project: str = \"PUT_YOUR_PROJECT_ID_HERE\" # @param {type:\"string\"}\n",
"endpoint_id: str = \"PUT_YOUR_ENDPOINT_ID_HERE\" # @param {type:\"string\"}\n",
"location: str = \"PUT_YOUR_ENDPOINT_LOCAtION_HERE\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"executionInfo": {
"elapsed": 3,
"status": "ok",
"timestamp": 1708975440503,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "bhIHsFGYjtFt",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 17:15:10.457149: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 17:15:10.508925: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 17:15:10.508957: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 17:15:10.510289: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 17:15:10.518898: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import (\n",
" GemmaChatVertexAIModelGarden,\n",
" GemmaVertexAIModelGarden,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"executionInfo": {
"elapsed": 351,
"status": "ok",
"timestamp": 1708975440852,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "WJv-UVWwh0lk",
"tags": []
},
"outputs": [],
"source": [
"llm = GemmaVertexAIModelGarden(\n",
" endpoint_id=endpoint_id,\n",
" project=project,\n",
" location=location,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 714,
"status": "ok",
"timestamp": 1708975441564,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "6kM7cEFdiN9h",
"outputId": "fb420c56-5614-4745-cda8-0ee450a3e539",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt:\n",
"What is the meaning of life?\n",
"Output:\n",
" Who am I? Why do I exist? These are questions I have struggled with\n"
]
}
],
"source": [
"output = llm.invoke(\"What is the meaning of life?\")\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zzep9nfmuUcO"
},
"source": [
"We can also use Gemma as a multi-turn chat model:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 964,
"status": "ok",
"timestamp": 1708976298189,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "8tPHoM5XiZOl",
"outputId": "7b8fb652-9aed-47b0-c096-aa1abfc3a2a9",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\n8-years old.<end_of_turn>\\n\\n<start_of'\n",
"content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nPrompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\n8-years old.<end_of_turn>\\n\\n<start_of<end_of_turn>\\n<start_of_turn>user\\nHow much is 3+3?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\nOutput:\\n3-years old.<end_of_turn>\\n\\n<'\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"llm = GemmaChatVertexAIModelGarden(\n",
" endpoint_id=endpoint_id,\n",
" project=project,\n",
" location=location,\n",
")\n",
"\n",
"message1 = HumanMessage(content=\"How much is 2+2?\")\n",
"answer1 = llm.invoke([message1])\n",
"print(answer1)\n",
"\n",
"message2 = HumanMessage(content=\"How much is 3+3?\")\n",
"answer2 = llm.invoke([message1, answer1, message2])\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can post-process response to avoid repetitions:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='Output:\\n<<humming>>: 2+2 = 4.\\n<end'\n",
"content='Output:\\nOutput:\\n<<humming>>: 3+3 = 6.'\n"
]
}
],
"source": [
"answer1 = llm.invoke([message1], parse_response=True)\n",
"print(answer1)\n",
"\n",
"answer2 = llm.invoke([message1, answer1, message2], parse_response=True)\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VEfjqo7fjARR"
},
"source": [
"## Running Gemma locally from Kaggle"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gVW8QDzHu7TA"
},
"source": [
"In order to run Gemma locally, you can download it from Kaggle first. In order to do this, you'll need to login into the Kaggle platform, create a API key and download a `kaggle.json` Read more about Kaggle auth [here](https://www.kaggle.com/docs/api)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "S1EsXQ3XvZkQ"
},
"source": [
"### Installation"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"executionInfo": {
"elapsed": 335,
"status": "ok",
"timestamp": 1708976305471,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "p8SMwpKRvbef",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/conda/lib/python3.10/pty.py:89: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.\n",
" pid, fd = os.forkpty()\n"
]
}
],
"source": [
"!mkdir -p ~/.kaggle && cp kaggle.json ~/.kaggle/kaggle.json"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"executionInfo": {
"elapsed": 7802,
"status": "ok",
"timestamp": 1708976363010,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "Yr679aePv9Fq",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/conda/lib/python3.10/pty.py:89: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.\n",
" pid, fd = os.forkpty()\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"tensorstore 0.1.54 requires ml-dtypes>=0.3.1, but you have ml-dtypes 0.2.0 which is incompatible.\u001b[0m\u001b[31m\n",
"\u001b[0m"
]
}
],
"source": [
"!pip install keras>=3 keras_nlp"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "E9zn8nYpv3QZ"
},
"source": [
"### Usage"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"executionInfo": {
"elapsed": 8536,
"status": "ok",
"timestamp": 1708976601206,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "0LFRmY8TjCkI",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:38:40.797559: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 16:38:40.848444: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 16:38:40.848478: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 16:38:40.849728: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 16:38:40.857936: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import GemmaLocalKaggle"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "v-o7oXVavdMQ"
},
"source": [
"You can specify the keras backend (by default it's `tensorflow`, but you can change it be `jax` or `torch`)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"executionInfo": {
"elapsed": 9,
"status": "ok",
"timestamp": 1708976601206,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "vvTUH8DNj5SF",
"tags": []
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"keras_backend: str = \"jax\" # @param {type:\"string\"}\n",
"model_name: str = \"gemma_2b_en\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"executionInfo": {
"elapsed": 40836,
"status": "ok",
"timestamp": 1708976761257,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "YOmrqxo5kHXK",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:23:14.661164: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1929] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 20549 MB memory: -> device: 0, name: NVIDIA L4, pci bus id: 0000:00:03.0, compute capability: 8.9\n",
"normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
]
}
],
"source": [
"llm = GemmaLocalKaggle(model_name=model_name, keras_backend=keras_backend)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "Zu6yPDUgkQtQ",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"W0000 00:00:1709051129.518076 774855 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"What is the meaning of life?\n",
"\n",
"The question is one of the most important questions in the world.\n",
"\n",
"Its the question that has\n"
]
}
],
"source": [
"output = llm.invoke(\"What is the meaning of life?\", max_tokens=30)\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ChatModel"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MSctpRE4u43N"
},
"source": [
"Same as above, using Gemma locally as a multi-turn chat model. You might need to re-start the notebook and clean your GPU memory in order to avoid OOM errors:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:58:22.331067: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 16:58:22.382948: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 16:58:22.382978: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 16:58:22.384312: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 16:58:22.392767: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import GemmaChatLocalKaggle"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"keras_backend: str = \"jax\" # @param {type:\"string\"}\n",
"model_name: str = \"gemma_2b_en\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:58:29.001922: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1929] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 20549 MB memory: -> device: 0, name: NVIDIA L4, pci bus id: 0000:00:03.0, compute capability: 8.9\n",
"normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
]
}
],
"source": [
"llm = GemmaChatLocalKaggle(model_name=model_name, keras_backend=keras_backend)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"executionInfo": {
"elapsed": 3,
"status": "aborted",
"timestamp": 1708976382957,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "JrJmvZqwwLqj"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:58:49.848412: I external/local_xla/xla/service/service.cc:168] XLA service 0x55adc0cf2c10 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
"2024-02-27 16:58:49.848458: I external/local_xla/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA L4, Compute Capability 8.9\n",
"2024-02-27 16:58:50.116614: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.\n",
"2024-02-27 16:58:54.389324: I external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:454] Loaded cuDNN version 8900\n",
"WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\n",
"I0000 00:00:1709053145.225207 784891 device_compiler.h:186] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.\n",
"W0000 00:00:1709053145.284227 784891 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n Tampoco\\nI'm a model.\"\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"message1 = HumanMessage(content=\"Hi! Who are you?\")\n",
"answer1 = llm.invoke([message1], max_tokens=30)\n",
"print(answer1)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\n<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n Tampoco\\nI'm a model.<end_of_turn>\\n<start_of_turn>user\\nWhat can you help me with?<end_of_turn>\\n<start_of_turn>model\"\n"
]
}
],
"source": [
"message2 = HumanMessage(content=\"What can you help me with?\")\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=60)\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can post-process the response if you want to avoid multi-turn statements:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"I'm a model.\\n Tampoco\\nI'm a model.\"\n",
"content='I can help you with your modeling.\\n Tampoco\\nI can'\n"
]
}
],
"source": [
"answer1 = llm.invoke([message1], max_tokens=30, parse_response=True)\n",
"print(answer1)\n",
"\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=60, parse_response=True)\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EiZnztso7hyF"
},
"source": [
"## Running Gemma locally from HuggingFace"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "qqAqsz5R7nKf",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 17:02:21.832409: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 17:02:21.883625: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 17:02:21.883656: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 17:02:21.884987: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 17:02:21.893340: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import GemmaChatLocalHF, GemmaLocalHF"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "tsyntzI08cOr",
"tags": []
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"hf_access_token: str = \"PUT_YOUR_TOKEN_HERE\" # @param {type:\"string\"}\n",
"model_name: str = \"google/gemma-2b\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "JWrqEkOo8sm9",
"tags": []
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "a0d6de5542254ed1b6d3ba65465e050e",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"llm = GemmaLocalHF(model_name=\"google/gemma-2b\", hf_access_token=hf_access_token)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "VX96Jf4Y84k-",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"What is the meaning of life?\n",
"\n",
"The question is one of the most important questions in the world.\n",
"\n",
"Its the question that has been asked by philosophers, theologians, and scientists for centuries.\n",
"\n",
"And its the question that\n"
]
}
],
"source": [
"output = llm.invoke(\"What is the meaning of life?\", max_tokens=50)\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Same as above, using Gemma locally as a multi-turn chat model. You might need to re-start the notebook and clean your GPU memory in order to avoid OOM errors:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "9x-jmEBg9Mk1"
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c9a0b8e161d74a6faca83b1be96dee27",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"llm = GemmaChatLocalHF(model_name=model_name, hf_access_token=hf_access_token)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "qv_OSaMm9PVy"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n<end_of_turn>\\n<start_of_turn>user\\nWhat do you mean\"\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"message1 = HumanMessage(content=\"Hi! Who are you?\")\n",
"answer1 = llm.invoke([message1], max_tokens=60)\n",
"print(answer1)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\n<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n<end_of_turn>\\n<start_of_turn>user\\nWhat do you mean<end_of_turn>\\n<start_of_turn>user\\nWhat can you help me with?<end_of_turn>\\n<start_of_turn>model\\nI can help you with anything.\\n<\"\n"
]
}
],
"source": [
"message2 = HumanMessage(content=\"What can you help me with?\")\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=140)\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And the same with posprocessing:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"I'm a model.\\n<end_of_turn>\\n\"\n",
"content='I can help you with anything.\\n<end_of_turn>\\n<end_of_turn>\\n'\n"
]
}
],
"source": [
"answer1 = llm.invoke([message1], max_tokens=60, parse_response=True)\n",
"print(answer1)\n",
"\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=120, parse_response=True)\n",
"print(answer2)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"colab": {
"provenance": []
},
"environment": {
"kernel": "python3",
"name": ".m116",
"type": "gcloud",
"uri": "gcr.io/deeplearning-platform-release/:m116"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
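The raw outputs above show why `parse_response=True` matters: without it, the Gemma chat-template tokens leak into `content`. Below is a minimal sketch of that kind of post-processing, assuming only the markup visible in the outputs above; `_strip_gemma_markup` is a hypothetical helper, not part of `langchain_google_vertexai`.

```python
# A minimal sketch of the post-processing parse_response performs, assuming
# the Gemma chat-template markup shown in the raw outputs above.
# _strip_gemma_markup is a hypothetical helper, not a library function.

def _strip_gemma_markup(text: str) -> str:
    """Return only the model's final turn, without template tokens."""
    marker = "<start_of_turn>model"
    if marker in text:
        # Keep everything after the last model-turn marker.
        text = text.split(marker)[-1]
    # Drop any remaining turn delimiters.
    for token in ("<start_of_turn>", "<end_of_turn>"):
        text = text.replace(token, "")
    return text.strip()


raw = (
    "<start_of_turn>user\nHi! Who are you?<end_of_turn>\n"
    "<start_of_turn>model\nI'm a model.<end_of_turn>"
)
print(_strip_gemma_markup(raw))  # -> I'm a model.
```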

View File

@@ -38,9 +38,9 @@
"\n",
"To run locally, we use Ollama.ai. \n",
"\n",
"See [here](/docs/integrations/chat/ollama) for details on installation and setup.\n",
"See [here](https://python.langchain.com/docs/integrations/chat/ollama) for details on installation and setup.\n",
"\n",
"Also, see [here](/docs/guides/development/local_llms) for our full guide on local LLMs.\n",
"Also, see [here](https://python.langchain.com/docs/guides/local_llms) for our full guide on local LLMs.\n",
" \n",
"To use an external API, which is not private, we can use Replicate."
]

View File

@@ -116,7 +116,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_text_splitters import CharacterTextSplitter\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from unstructured.partition.pdf import partition_pdf\n",
"\n",
"\n",

File diff suppressed because one or more lines are too long

View File

@@ -8,7 +8,6 @@ Notebook | Description
[Semi_Structured_RAG.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_Structured_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data, including text and tables, using unstructured for parsing, multi-vector retriever for storing, and lcel for implementing chains.
[Semi_structured_and_multi_moda...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using unstructured for parsing, multi-vector retriever for storage and retrieval, and lcel for implementing chains.
[Semi_structured_multi_modal_RA...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using various tools and methods such as unstructured for parsing, multi-vector retriever for storing, lcel for implementing chains, and open source language models like llama2, llava, and gpt4all.
[amazon_personalize_how_to.ipynb](https://github.com/langchain-ai/langchain/blob/master/cookbook/amazon_personalize_how_to.ipynb) | Retrieving personalized recommendations from Amazon Personalize and use custom agents to build generative AI apps
[analyze_document.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/analyze_document.ipynb) | Analyze a single long document.
[autogpt/autogpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/autogpt.ipynb) | Implement autogpt, a language model, with langchain primitives such as llms, prompttemplates, vectorstores, embeddings, and tools.
[autogpt/marathon_times.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/marathon_times.ipynb) | Implement autogpt for finding winning marathon times.

View File

@@ -191,15 +191,15 @@
"source": [
"## Multi-vector retriever\n",
"\n",
"Use [multi-vector-retriever](/docs/modules/data_connection/retrievers/multi_vector#summary).\n",
"Use [multi-vector-retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector#summary).\n",
"\n",
"Summaries are used to retrieve raw tables and / or raw chunks of text.\n",
"\n",
"### Text and Table summaries\n",
"\n",
"Here, we use Ollama to run LLaMA2 locally. \n",
"Here, we use ollama.ai to run LLaMA2 locally. \n",
"\n",
"See details on installation [here](/docs/guides/development/local_llms)."
"See details on installation [here](https://python.langchain.com/docs/guides/local_llms)."
]
},
{

View File

@@ -68,7 +68,7 @@
"pdf_pages = loader.load()\n",
"\n",
"# Split\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits_pypdf = text_splitter.split_documents(pdf_pages)\n",

View File

@@ -28,9 +28,9 @@
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"llm = OpenAI(temperature=0)"
]

View File

@@ -1,200 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-airbyte"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"GITHUB_TOKEN = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from langchain_airbyte import AirbyteLoader\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"loader = AirbyteLoader(\n",
" source=\"source-github\",\n",
" stream=\"pull_requests\",\n",
" config={\n",
" \"credentials\": {\"personal_access_token\": GITHUB_TOKEN},\n",
" \"repositories\": [\"langchain-ai/langchain\"],\n",
" },\n",
" template=PromptTemplate.from_template(\n",
" \"\"\"# {title}\n",
"by {user[login]}\n",
"\n",
"{body}\"\"\"\n",
" ),\n",
" include_metadata=False,\n",
")\n",
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"# Updated partners/ibm README\n",
"by williamdevena\n",
"\n",
"## PR title\n",
"partners: changed the README file for the IBM Watson AI integration in the libs/partners/ibm folder.\n",
"\n",
"## PR message\n",
"Description: Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\n",
"\n",
"The README includes:\n",
"\n",
"- Brief description\n",
"- Installation\n",
"- Setting-up instructions (API key, project id, ...)\n",
"- Basic usage:\n",
" - Loading the model\n",
" - Direct inference\n",
" - Chain invoking\n",
" - Streaming the model output\n",
" \n",
"Issue: https://github.com/langchain-ai/langchain/issues/17545\n",
"\n",
"Dependencies: None\n",
"\n",
"Twitter handle: None\n"
]
}
],
"source": [
"print(docs[-2].page_content)"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"10283"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(docs)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"enc = tiktoken.get_encoding(\"cl100k_base\")\n",
"\n",
"vectorstore = Chroma.from_documents(\n",
" docs,\n",
" embedding=OpenAIEmbeddings(\n",
" disallowed_special=(enc.special_tokens_set - {\"<|endofprompt|>\"})\n",
" ),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='# Updated partners/ibm README\\nby williamdevena\\n\\n## PR title\\r\\npartners: changed the README file for the IBM Watson AI integration in the libs/partners/ibm folder.\\r\\n\\r\\n## PR message\\r\\nDescription: Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\\r\\n\\r\\nThe README includes:\\r\\n\\r\\n- Brief description\\r\\n- Installation\\r\\n- Setting-up instructions (API key, project id, ...)\\r\\n- Basic usage:\\r\\n - Loading the model\\r\\n - Direct inference\\r\\n - Chain invoking\\r\\n - Streaming the model output\\r\\n \\r\\nIssue: https://github.com/langchain-ai/langchain/issues/17545\\r\\n\\r\\nDependencies: None\\r\\n\\r\\nTwitter handle: None'),\n",
" Document(page_content='# Updated partners/ibm README\\nby williamdevena\\n\\n## PR title\\r\\npartners: changed the README file for the IBM Watson AI integration in the `libs/partners/ibm` folder. \\r\\n\\r\\n\\r\\n\\r\\n## PR message\\r\\n- **Description:** Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\\r\\n\\r\\n The README includes:\\r\\n - Brief description\\r\\n - Installation\\r\\n - Setting-up instructions (API key, project id, ...)\\r\\n - Basic usage:\\r\\n - Loading the model\\r\\n - Direct inference\\r\\n - Chain invoking\\r\\n - Streaming the model output\\r\\n\\r\\n\\r\\n- **Issue:** #17545\\r\\n- **Dependencies:** None\\r\\n- **Twitter handle:** None'),\n",
" Document(page_content='# IBM: added partners package `langchain_ibm`, added llm\\nby MateuszOssGit\\n\\n - **Description:** Added `langchain_ibm` as an langchain partners package of IBM [watsonx.ai](https://www.ibm.com/products/watsonx-ai) LLM provider (`WatsonxLLM`)\\r\\n - **Dependencies:** [ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),\\r\\n - **Tag maintainer:** : \\r\\n\\r\\nPlease make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally. ✅'),\n",
" Document(page_content='# Add WatsonX support\\nby baptistebignaud\\n\\nIt is a connector to use a LLM from WatsonX.\\r\\nIt requires python SDK \"ibm-generative-ai\"\\r\\n\\r\\n(It might not be perfect since it is my first PR on a public repository 😄)')]"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"pull requests related to IBM\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
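A note on the `disallowed_special` argument in the notebook above: scraped PR bodies can contain tiktoken's special tokens verbatim, and tiktoken refuses to encode them by default. A minimal sketch of the failure and the workaround the notebook uses (here with `<|endofprompt|>`, the token the notebook exempts):

```python
# By default tiktoken disallows *all* special tokens, so text that happens
# to contain one literally raises a ValueError. Removing a token from the
# disallowed set makes tiktoken encode it as ordinary text instead.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "a PR body that happens to contain the literal token <|endofprompt|>"

try:
    enc.encode(text)  # default disallowed_special="all" -> ValueError
except ValueError as err:
    print("plain encode failed:", err)

# Mirror the notebook: disallow every special token *except* <|endofprompt|>.
tokens = enc.encode(
    text, disallowed_special=enc.special_tokens_set - {"<|endofprompt|>"}
)
print(len(tokens), "tokens")
```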

File diff suppressed because one or more lines are too long

View File

@@ -14,9 +14,9 @@
"\n",
"This notebook shows you how to use LangChain's standard chat features while passing the chat messages back and forth via Apache Kafka.\n",
"\n",
"This goal is to simulate an architecture where the chat front end and the LLM are running as separate services that need to communicate with one another over an internal network.\n",
"This goal is to simulate an architecture where the chat front end and the LLM are running as separate services that need to communicate with one another over an internal nework.\n",
"\n",
"It's an alternative to typical pattern of requesting a response from the model via a REST API (there's more info on why you would want to do this at the end of the notebook)."
"It's an alternative to typical pattern of requesting a reponse from the model via a REST API (there's more info on why you would want to do this at the end of the notebook)."
]
},
{
@@ -261,7 +261,7 @@
"\n",
"Load Llama 2 and set the conversation buffer to 300 tokens using `ConversationTokenBufferMemory`. This value was used for running Llama in a CPU only container, so you can raise it if running in Google Colab. It prevents the container that is hosting the model from running out of memory.\n",
"\n",
"Here, we're overriding the default system persona so that the chatbot has the personality of Marvin The Paranoid Android from the Hitchhiker's Guide to the Galaxy."
"Here, we're overiding the default system persona so that the chatbot has the personality of Marvin The Paranoid Android from the Hitchhiker's Guide to the Galaxy."
]
},
{
@@ -272,7 +272,7 @@
},
"outputs": [],
"source": [
"# Load the model with the appropriate parameters:\n",
"# Load the model with the apporiate parameters:\n",
"llm = LlamaCpp(\n",
" model_path=model_path,\n",
" max_tokens=250,\n",
@@ -551,7 +551,7 @@
"\n",
" * **Scalability**: Apache Kafka is designed with parallel processing in mind, so many teams prefer to use it to more effectively distribute work to available workers (in this case the \"worker\" is a container running an LLM).\n",
"\n",
" * **Durability**: Kafka is designed to allow services to pick up where another service left off in the case where that service experienced a memory issue or went offline. This prevents data loss in highly complex, distributed architectures where multiple systems are communicating with one another (LLMs being just one of many interdependent systems that also include vector databases and traditional databases).\n",
" * **Durability**: Kafka is designed to allow services to pick up where another service left off in the case where that service experienced a memory issue or went offline. This prevents data loss in highly complex, distribuited architectures where multiple systems are communicating with one another (LLMs being just one of many interdependent systems that also include vector databases and traditional databases).\n",
"\n",
"For more background on why event streaming is a good fit for Gen AI application architecture, see Kai Waehner's article [\"Apache Kafka + Vector Database + LLM = Real-Time GenAI\"](https://www.kai-waehner.de/blog/2023/11/08/apache-kafka-flink-vector-database-llm-real-time-genai/)."
]
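For reference, a minimal sketch of the 300-token conversation buffer this notebook describes: `ConversationTokenBufferMemory` counts tokens with the supplied LLM and drops the oldest turns once the limit is exceeded. The GGUF path below is a placeholder, not a file from the notebook.

```python
# Sketch of the token-bounded chat buffer described above. The model path
# is a placeholder -- any local Llama 2 chat GGUF works with LlamaCpp.
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(model_path="llama-2-7b-chat.Q4_K_M.gguf", max_tokens=250)

# Keep at most ~300 tokens of history to avoid OOM in a CPU-only container.
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=300)
memory.save_context(
    {"input": "Hi Marvin, how are you?"},
    {"output": "Life. Don't talk to me about life."},
)
print(memory.load_memory_variables({}))
```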

View File

@@ -40,9 +40,7 @@
"import nest_asyncio\n",
"import pandas as pd\n",
"from langchain.docstore.document import Document\n",
"from langchain_experimental.agents.agent_toolkits.pandas.base import (\n",
" create_pandas_dataframe_agent,\n",
")\n",
"from langchain_community.agent_toolkits.pandas.base import create_pandas_dataframe_agent\n",
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
@@ -59,7 +57,7 @@
},
"outputs": [],
"source": [
"llm = ChatOpenAI(model=\"gpt-4\", temperature=1.0)"
"llm = ChatOpenAI(model_name=\"gpt-4\", temperature=1.0)"
]
},
{
@@ -229,8 +227,8 @@
" BaseCombineDocumentsChain,\n",
" load_qa_with_sources_chain,\n",
")\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain.tools import BaseTool, DuckDuckGoSearchRun\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"from pydantic import Field\n",
"\n",
"\n",

View File

@@ -24,7 +24,7 @@
"source": [
"1. Prepare data:\n",
" 1. Upload all python project files using the `langchain_community.document_loaders.TextLoader`. We will call these files the **documents**.\n",
" 2. Split all documents to chunks using the `langchain_text_splitters.CharacterTextSplitter`.\n",
" 2. Split all documents to chunks using the `langchain.text_splitter.CharacterTextSplitter`.\n",
" 3. Embed chunks and upload them into the DeepLake using `langchain.embeddings.openai.OpenAIEmbeddings` and `langchain_community.vectorstores.DeepLake`\n",
"2. Question-Answering:\n",
" 1. Build a chain from `langchain.chat_models.ChatOpenAI` and `langchain.chains.ConversationalRetrievalChain`\n",
@@ -621,7 +621,7 @@
}
],
"source": [
"from langchain_text_splitters import CharacterTextSplitter\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(docs)\n",
@@ -933,7 +933,7 @@
"**Answer**: The LangChain class includes various types of retrievers such as:\n",
"\n",
"- ArxivRetriever\n",
"- AzureAISearchRetriever\n",
"- AzureCognitiveSearchRetriever\n",
"- BM25Retriever\n",
"- ChaindeskRetriever\n",
"- ChatGPTPluginRetriever\n",
@@ -993,7 +993,7 @@
{
"data": {
"text/plain": [
"{'question': 'LangChain possesses a variety of retrievers including:\\n\\n1. ArxivRetriever\\n2. AzureAISearchRetriever\\n3. BM25Retriever\\n4. ChaindeskRetriever\\n5. ChatGPTPluginRetriever\\n6. ContextualCompressionRetriever\\n7. DocArrayRetriever\\n8. ElasticSearchBM25Retriever\\n9. EnsembleRetriever\\n10. GoogleVertexAISearchRetriever\\n11. AmazonKendraRetriever\\n12. KNNRetriever\\n13. LlamaIndexGraphRetriever\\n14. LlamaIndexRetriever\\n15. MergerRetriever\\n16. MetalRetriever\\n17. MilvusRetriever\\n18. MultiQueryRetriever\\n19. ParentDocumentRetriever\\n20. PineconeHybridSearchRetriever\\n21. PubMedRetriever\\n22. RePhraseQueryRetriever\\n23. RemoteLangChainRetriever\\n24. SelfQueryRetriever\\n25. SVMRetriever\\n26. TFIDFRetriever\\n27. TimeWeightedVectorStoreRetriever\\n28. VespaRetriever\\n29. WeaviateHybridSearchRetriever\\n30. WebResearchRetriever\\n31. WikipediaRetriever\\n32. ZepRetriever\\n33. ZillizRetriever\\n\\nIt also includes self query translators like:\\n\\n1. ChromaTranslator\\n2. DeepLakeTranslator\\n3. MyScaleTranslator\\n4. PineconeTranslator\\n5. QdrantTranslator\\n6. WeaviateTranslator\\n\\nAnd remote retrievers like:\\n\\n1. RemoteLangChainRetriever'}"
"{'question': 'LangChain possesses a variety of retrievers including:\\n\\n1. ArxivRetriever\\n2. AzureCognitiveSearchRetriever\\n3. BM25Retriever\\n4. ChaindeskRetriever\\n5. ChatGPTPluginRetriever\\n6. ContextualCompressionRetriever\\n7. DocArrayRetriever\\n8. ElasticSearchBM25Retriever\\n9. EnsembleRetriever\\n10. GoogleVertexAISearchRetriever\\n11. AmazonKendraRetriever\\n12. KNNRetriever\\n13. LlamaIndexGraphRetriever\\n14. LlamaIndexRetriever\\n15. MergerRetriever\\n16. MetalRetriever\\n17. MilvusRetriever\\n18. MultiQueryRetriever\\n19. ParentDocumentRetriever\\n20. PineconeHybridSearchRetriever\\n21. PubMedRetriever\\n22. RePhraseQueryRetriever\\n23. RemoteLangChainRetriever\\n24. SelfQueryRetriever\\n25. SVMRetriever\\n26. TFIDFRetriever\\n27. TimeWeightedVectorStoreRetriever\\n28. VespaRetriever\\n29. WeaviateHybridSearchRetriever\\n30. WebResearchRetriever\\n31. WikipediaRetriever\\n32. ZepRetriever\\n33. ZillizRetriever\\n\\nIt also includes self query translators like:\\n\\n1. ChromaTranslator\\n2. DeepLakeTranslator\\n3. MyScaleTranslator\\n4. PineconeTranslator\\n5. QdrantTranslator\\n6. WeaviateTranslator\\n\\nAnd remote retrievers like:\\n\\n1. RemoteLangChainRetriever'}"
]
},
"execution_count": 31,
@@ -1117,7 +1117,7 @@
"The LangChain class includes various types of retrievers such as:\n",
"\n",
"- ArxivRetriever\n",
"- AzureAISearchRetriever\n",
"- AzureCognitiveSearchRetriever\n",
"- BM25Retriever\n",
"- ChaindeskRetriever\n",
"- ChatGPTPluginRetriever\n",

File diff suppressed because it is too large Load Diff

View File

@@ -52,12 +52,12 @@
"import os\n",
"\n",
"from langchain.chains import RetrievalQA\n",
"from langchain_community.vectorstores import DeepLake\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import (\n",
"from langchain.text_splitter import (\n",
" CharacterTextSplitter,\n",
" RecursiveCharacterTextSplitter,\n",
")\n",
"from langchain_community.vectorstores import DeepLake\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
"activeloop_token = getpass.getpass(\"Activeloop Token:\")\n",

View File

@@ -84,7 +84,7 @@
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(model=\"gpt-4\", temperature=0)\n",
"llm = ChatOpenAI(model_name=\"gpt-4\", temperature=0)\n",
"chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, verbose=True)"
]
},

View File

@@ -100,7 +100,7 @@
}
],
"source": [
"agent.invoke(\"whats 2 + 2\")"
"agent.run(\"whats 2 + 2\")"
]
},
{

View File

@@ -1,245 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0fc0309d-4d49-4bb5-bec0-bd92c6fddb28",
"metadata": {},
"source": [
"## Fireworks.AI + LangChain + RAG\n",
" \n",
"[Fireworks AI](https://python.langchain.com/docs/integrations/llms/fireworks) wants to provide the best experience when working with LangChain, and here is an example of Fireworks + LangChain doing RAG\n",
"\n",
"See [our models page](https://fireworks.ai/models) for the full list of models. We use `accounts/fireworks/models/mixtral-8x7b-instruct` for RAG In this tutorial.\n",
"\n",
"For the RAG target, we will use the Gemma technical report https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d12fb75a-f707-48d5-82a5-efe2d041813c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"Found existing installation: langchain-fireworks 0.0.1\n",
"Uninstalling langchain-fireworks-0.0.1:\n",
" Successfully uninstalled langchain-fireworks-0.0.1\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"Obtaining file:///mnt/disks/data/langchain/libs/partners/fireworks\n",
" Installing build dependencies ... \u001b[?25ldone\n",
"\u001b[?25h Checking if build backend supports build_editable ... \u001b[?25ldone\n",
"\u001b[?25h Getting requirements to build editable ... \u001b[?25ldone\n",
"\u001b[?25h Preparing editable metadata (pyproject.toml) ... \u001b[?25ldone\n",
"\u001b[?25hRequirement already satisfied: aiohttp<4.0.0,>=3.9.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (3.9.3)\n",
"Requirement already satisfied: fireworks-ai<0.13.0,>=0.12.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (0.12.0)\n",
"Requirement already satisfied: langchain-core<0.2,>=0.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (0.1.23)\n",
"Requirement already satisfied: requests<3,>=2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (2.31.0)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (1.3.1)\n",
"Requirement already satisfied: attrs>=17.3.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (23.1.0)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (1.4.0)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (6.0.4)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (1.9.2)\n",
"Requirement already satisfied: async-timeout<5.0,>=4.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (4.0.3)\n",
"Requirement already satisfied: httpx in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.26.0)\n",
"Requirement already satisfied: httpx-sse in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.4.0)\n",
"Requirement already satisfied: pydantic in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (2.4.2)\n",
"Requirement already satisfied: Pillow in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (10.2.0)\n",
"Requirement already satisfied: PyYAML>=5.3 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (6.0.1)\n",
"Requirement already satisfied: anyio<5,>=3 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (3.7.1)\n",
"Requirement already satisfied: jsonpatch<2.0,>=1.33 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (1.33)\n",
"Requirement already satisfied: langsmith<0.2.0,>=0.1.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (0.1.5)\n",
"Requirement already satisfied: packaging<24.0,>=23.2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (23.2)\n",
"Requirement already satisfied: tenacity<9.0.0,>=8.1.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (8.2.3)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (3.3.0)\n",
"Requirement already satisfied: idna<4,>=2.5 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (3.4)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (2.0.6)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (2023.7.22)\n",
"Requirement already satisfied: sniffio>=1.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from anyio<5,>=3->langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (1.3.0)\n",
"Requirement already satisfied: exceptiongroup in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from anyio<5,>=3->langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (1.1.3)\n",
"Requirement already satisfied: jsonpointer>=1.9 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from jsonpatch<2.0,>=1.33->langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (2.4)\n",
"Requirement already satisfied: annotated-types>=0.4.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from pydantic->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.5.0)\n",
"Requirement already satisfied: pydantic-core==2.10.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from pydantic->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (2.10.1)\n",
"Requirement already satisfied: typing-extensions>=4.6.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from pydantic->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (4.8.0)\n",
"Requirement already satisfied: httpcore==1.* in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from httpx->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (1.0.2)\n",
"Requirement already satisfied: h11<0.15,>=0.13 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from httpcore==1.*->httpx->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.14.0)\n",
"Building wheels for collected packages: langchain-fireworks\n",
" Building editable for langchain-fireworks (pyproject.toml) ... \u001b[?25ldone\n",
"\u001b[?25h Created wheel for langchain-fireworks: filename=langchain_fireworks-0.0.1-py3-none-any.whl size=2228 sha256=564071b120b09ec31f2dc737733448a33bbb26e40b49fcde0c129ad26045259d\n",
" Stored in directory: /tmp/pip-ephem-wheel-cache-oz368vdk/wheels/e0/ad/31/d7e76dd73d61905ff7f369f5b0d21a4b5e7af4d3cb7487aece\n",
"Successfully built langchain-fireworks\n",
"Installing collected packages: langchain-fireworks\n",
"Successfully installed langchain-fireworks-0.0.1\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --quiet pypdf chromadb tiktoken openai \n",
"%pip uninstall -y langchain-fireworks\n",
"%pip install --editable /mnt/disks/data/langchain/libs/partners/fireworks"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cf719376",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<module 'fireworks' from '/mnt/disks/data/langchain/.venv/lib/python3.9/site-packages/fireworks/__init__.py'>\n"
]
}
],
"source": [
"import fireworks\n",
"\n",
"print(fireworks)\n",
"import fireworks.client"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9ab49327-0532-4480-804c-d066c302a322",
"metadata": {},
"outputs": [],
"source": [
"# Load\n",
"import requests\n",
"from langchain_community.document_loaders import PyPDFLoader\n",
"\n",
"# Download the PDF from a URL and save it to a temporary location\n",
"url = \"https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf\"\n",
"response = requests.get(url, stream=True)\n",
"file_name = \"temp_file.pdf\"\n",
"with open(file_name, \"wb\") as pdf:\n",
" pdf.write(response.content)\n",
"\n",
"loader = PyPDFLoader(file_name)\n",
"data = loader.load()\n",
"\n",
"# Split\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Add to vectorDB\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_fireworks.embeddings import FireworksEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(\n",
" documents=all_splits,\n",
" collection_name=\"rag-chroma\",\n",
" embedding=FireworksEmbeddings(),\n",
")\n",
"\n",
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "4efaddd9-3dbb-455c-ba54-0ad7f2d2ce0f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"\n",
"# RAG prompt\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"# LLM\n",
"from langchain_together import Together\n",
"\n",
"llm = Together(\n",
" model=\"mistralai/Mixtral-8x7B-Instruct-v0.1\",\n",
" temperature=0.0,\n",
" max_tokens=2000,\n",
" top_k=1,\n",
")\n",
"\n",
"# RAG chain\n",
"chain = (\n",
" RunnableParallel({\"context\": retriever, \"question\": RunnablePassthrough()})\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "88b1ee51-1b0f-4ebf-bb32-e50e843f0eeb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\nAnswer: The architectural details of Mixtral are as follows:\\n- Dimension (dim): 4096\\n- Number of layers (n\\\\_layers): 32\\n- Dimension of each head (head\\\\_dim): 128\\n- Hidden dimension (hidden\\\\_dim): 14336\\n- Number of heads (n\\\\_heads): 32\\n- Number of kv heads (n\\\\_kv\\\\_heads): 8\\n- Context length (context\\\\_len): 32768\\n- Vocabulary size (vocab\\\\_size): 32000\\n- Number of experts (num\\\\_experts): 8\\n- Number of top k experts (top\\\\_k\\\\_experts): 2\\n\\nMixtral is based on a transformer architecture and uses the same modifications as described in [18], with the notable exceptions that Mixtral supports a fully dense context length of 32k tokens, and the feedforward block picks from a set of 8 distinct groups of parameters. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively. This technique increases the number of parameters of a model while controlling cost and latency, as the model only uses a fraction of the total set of parameters per token. Mixtral is pretrained with multilingual data using a context size of 32k tokens. It either matches or exceeds the performance of Llama 2 70B and GPT-3.5, over several benchmarks. In particular, Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and multilingual benchmarks.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"What are the Architectural details of Mixtral?\")"
]
},
{
"cell_type": "markdown",
"id": "755cf871-26b7-4e30-8b91-9ffd698470f4",
"metadata": {},
"source": [
"Trace: \n",
"\n",
"https://smith.langchain.com/public/935fd642-06a6-4b42-98e3-6074f93115cd/r"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
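The RAG chain in this notebook is standard LCEL: a `RunnableParallel` gathers context and question, then pipes through a prompt, a model, and a string parser. Here is a self-contained sketch of that wiring with offline stubs in place of the vector store and the hosted model; `fake_retriever` and `fake_llm` are illustrative stand-ins only.

```python
# A self-contained sketch of the LCEL RAG wiring used above, runnable
# offline: stubs replace vectorstore.as_retriever() and the hosted LLM.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnableParallel, RunnablePassthrough

# Stands in for the retriever plus a join-documents step.
fake_retriever = RunnableLambda(
    lambda q: "Mixtral is a sparse mixture-of-experts transformer with 8 experts."
)

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n{context}\n\nQuestion: {question}\n"
)

# Stands in for the hosted model; it just echoes the rendered prompt.
fake_llm = RunnableLambda(lambda prompt_value: prompt_value.to_string())

chain = (
    RunnableParallel({"context": fake_retriever, "question": RunnablePassthrough()})
    | prompt
    | fake_llm
    | StrOutputParser()
)

print(chain.invoke("What are the architectural details of Mixtral?"))
```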

View File

@@ -170,8 +170,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()\n",

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -52,7 +52,7 @@
"\n",
"bash_chain = LLMBashChain.from_llm(llm, verbose=True)\n",
"\n",
"bash_chain.invoke(text)"
"bash_chain.run(text)"
]
},
{
@@ -135,7 +135,7 @@
"\n",
"text = \"Please write a bash script that prints 'Hello World' to the console.\"\n",
"\n",
"bash_chain.invoke(text)"
"bash_chain.run(text)"
]
},
{
@@ -190,7 +190,7 @@
"\n",
"text = \"List the current directory then move up a level.\"\n",
"\n",
"bash_chain.invoke(text)"
"bash_chain.run(text)"
]
},
{
@@ -231,7 +231,7 @@
],
"source": [
"# Run the same command again and see that the state is maintained between calls\n",
"bash_chain.invoke(text)"
"bash_chain.run(text)"
]
}
],

View File

@@ -50,7 +50,7 @@
"\n",
"checker_chain = LLMCheckerChain.from_llm(llm, verbose=True)\n",
"\n",
"checker_chain.invoke(text)"
"checker_chain.run(text)"
]
},
{

View File

@@ -51,7 +51,7 @@
"llm = OpenAI(temperature=0)\n",
"llm_math = LLMMathChain.from_llm(llm, verbose=True)\n",
"\n",
"llm_math.invoke(\"What is 13 raised to the .3432 power?\")"
"llm_math.run(\"What is 13 raised to the .3432 power?\")"
]
},
{

View File

@@ -45,7 +45,7 @@
}
],
"source": [
"llm_symbolic_math.invoke(\"What is the derivative of sin(x)*exp(x) with respect to x?\")"
"llm_symbolic_math.run(\"What is the derivative of sin(x)*exp(x) with respect to x?\")"
]
},
{
@@ -65,7 +65,7 @@
}
],
"source": [
"llm_symbolic_math.invoke(\n",
"llm_symbolic_math.run(\n",
" \"What is the integral of exp(x)*sin(x) + exp(x)*cos(x) with respect to x?\"\n",
")"
]
@@ -94,7 +94,7 @@
}
],
"source": [
"llm_symbolic_math.invoke('Solve the differential equation y\" - y = e^t')"
"llm_symbolic_math.run('Solve the differential equation y\" - y = e^t')"
]
},
{
@@ -114,7 +114,7 @@
}
],
"source": [
"llm_symbolic_math.invoke(\"What are the solutions to this equation y^3 + 1/3y?\")"
"llm_symbolic_math.run(\"What are the solutions to this equation y^3 + 1/3y?\")"
]
},
{
@@ -134,7 +134,7 @@
}
],
"source": [
"llm_symbolic_math.invoke(\"x = y + 5, y = z - 3, z = x * y. Solve for x, y, z\")"
"llm_symbolic_math.run(\"x = y + 5, y = z - 3, z = x * y. Solve for x, y, z\")"
]
}
],
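These hunks toggle between `Chain.run` and `Chain.invoke`, the two calling conventions for legacy chains. A minimal sketch of the difference, assuming `langchain-openai`, `numexpr`, and an `OPENAI_API_KEY` are available:

```python
# run() returns the chain's single output as a bare string, while invoke()
# is the unified Runnable entry point and returns the full output dict.
from langchain.chains import LLMMathChain
from langchain_openai import OpenAI

llm = OpenAI(temperature=0)
llm_math = LLMMathChain.from_llm(llm)

as_string = llm_math.run("What is 13 raised to the .3432 power?")
as_dict = llm_math.invoke("What is 13 raised to the .3432 power?")
print(as_string)          # 'Answer: 2.41...'
print(as_dict["answer"])  # same text, under the 'answer' key
```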

View File

@@ -1,818 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "70b333e6",
"metadata": {},
"source": [
"[![View Article](https://img.shields.io/badge/View%20Article-blue)](https://www.mongodb.com/developer/products/atlas/advanced-rag-langchain-mongodb/)\n"
]
},
{
"cell_type": "markdown",
"id": "d84a72ea",
"metadata": {},
"source": [
"# Adding Semantic Caching and Memory to your RAG Application using MongoDB and LangChain\n",
"\n",
"In this notebook, we will see how to use the new MongoDBCache and MongoDBChatMessageHistory in your RAG application.\n"
]
},
{
"cell_type": "markdown",
"id": "65527202",
"metadata": {},
"source": [
"## Step 1: Install required libraries\n",
"\n",
"- **datasets**: Python library to get access to datasets available on Hugging Face Hub\n",
"\n",
"- **langchain**: Python toolkit for LangChain\n",
"\n",
"- **langchain-mongodb**: Python package to use MongoDB as a vector store, semantic cache, chat history store etc. in LangChain\n",
"\n",
"- **langchain-openai**: Python package to use OpenAI models with LangChain\n",
"\n",
"- **pymongo**: Python toolkit for MongoDB\n",
"\n",
"- **pandas**: Python library for data analysis, exploration, and manipulation"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cbc22fa4",
"metadata": {},
"outputs": [],
"source": [
"! pip install -qU datasets langchain langchain-mongodb langchain-openai pymongo pandas"
]
},
{
"cell_type": "markdown",
"id": "39c41e87",
"metadata": {},
"source": [
"## Step 2: Setup pre-requisites\n",
"\n",
"* Set the MongoDB connection string. Follow the steps [here](https://www.mongodb.com/docs/manual/reference/connection-string/) to get the connection string from the Atlas UI.\n",
"\n",
"* Set the OpenAI API key. Steps to obtain an API key as [here](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "b56412ae",
"metadata": {},
"outputs": [],
"source": [
"import getpass"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "16a20d7a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enter your MongoDB connection string:········\n"
]
}
],
"source": [
"MONGODB_URI = getpass.getpass(\"Enter your MongoDB connection string:\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "978682d4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enter your OpenAI API key:········\n"
]
}
],
"source": [
"OPENAI_API_KEY = getpass.getpass(\"Enter your OpenAI API key:\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "606081c5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"········\n"
]
}
],
"source": [
"# Optional-- If you want to enable Langsmith -- good for debugging\n",
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "f6b8302c",
"metadata": {},
"source": [
"## Step 3: Download the dataset\n",
"\n",
"We will be using MongoDB's [embedded_movies](https://huggingface.co/datasets/MongoDB/embedded_movies) dataset"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1a3433a6",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from datasets import load_dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aee5311b",
"metadata": {},
"outputs": [],
"source": [
"# Ensure you have an HF_TOKEN in your development enviornment:\n",
"# access tokens can be created or copied from the Hugging Face platform (https://huggingface.co/docs/hub/en/security-tokens)\n",
"\n",
"# Load MongoDB's embedded_movies dataset from Hugging Face\n",
"# https://huggingface.co/datasets/MongoDB/airbnb_embeddings\n",
"\n",
"data = load_dataset(\"MongoDB/embedded_movies\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "1d630a26",
"metadata": {},
"outputs": [],
"source": [
"df = pd.DataFrame(data[\"train\"])"
]
},
{
"cell_type": "markdown",
"id": "a1f94f43",
"metadata": {},
"source": [
"## Step 4: Data analysis\n",
"\n",
"Make sure length of the dataset is what we expect, drop Nones etc."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b276df71",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>fullplot</th>\n",
" <th>type</th>\n",
" <th>plot_embedding</th>\n",
" <th>num_mflix_comments</th>\n",
" <th>runtime</th>\n",
" <th>writers</th>\n",
" <th>imdb</th>\n",
" <th>countries</th>\n",
" <th>rated</th>\n",
" <th>plot</th>\n",
" <th>title</th>\n",
" <th>languages</th>\n",
" <th>metacritic</th>\n",
" <th>directors</th>\n",
" <th>awards</th>\n",
" <th>genres</th>\n",
" <th>poster</th>\n",
" <th>cast</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>Young Pauline is left a lot of money when her ...</td>\n",
" <td>movie</td>\n",
" <td>[0.00072939653, -0.026834568, 0.013515796, -0....</td>\n",
" <td>0</td>\n",
" <td>199.0</td>\n",
" <td>[Charles W. Goddard (screenplay), Basil Dickey...</td>\n",
" <td>{'id': 4465, 'rating': 7.6, 'votes': 744}</td>\n",
" <td>[USA]</td>\n",
" <td>None</td>\n",
" <td>Young Pauline is left a lot of money when her ...</td>\n",
" <td>The Perils of Pauline</td>\n",
" <td>[English]</td>\n",
" <td>NaN</td>\n",
" <td>[Louis J. Gasnier, Donald MacKenzie]</td>\n",
" <td>{'nominations': 0, 'text': '1 win.', 'wins': 1}</td>\n",
" <td>[Action]</td>\n",
" <td>https://m.media-amazon.com/images/M/MV5BMzgxOD...</td>\n",
" <td>[Pearl White, Crane Wilbur, Paul Panzer, Edwar...</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" fullplot type \\\n",
"0 Young Pauline is left a lot of money when her ... movie \n",
"\n",
" plot_embedding num_mflix_comments \\\n",
"0 [0.00072939653, -0.026834568, 0.013515796, -0.... 0 \n",
"\n",
" runtime writers \\\n",
"0 199.0 [Charles W. Goddard (screenplay), Basil Dickey... \n",
"\n",
" imdb countries rated \\\n",
"0 {'id': 4465, 'rating': 7.6, 'votes': 744} [USA] None \n",
"\n",
" plot title \\\n",
"0 Young Pauline is left a lot of money when her ... The Perils of Pauline \n",
"\n",
" languages metacritic directors \\\n",
"0 [English] NaN [Louis J. Gasnier, Donald MacKenzie] \n",
"\n",
" awards genres \\\n",
"0 {'nominations': 0, 'text': '1 win.', 'wins': 1} [Action] \n",
"\n",
" poster \\\n",
"0 https://m.media-amazon.com/images/M/MV5BMzgxOD... \n",
"\n",
" cast \n",
"0 [Pearl White, Crane Wilbur, Paul Panzer, Edwar... "
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Previewing the contents of the data\n",
"df.head(1)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "22ab375d",
"metadata": {},
"outputs": [],
"source": [
"# Only keep records where the fullplot field is not null\n",
"df = df[df[\"fullplot\"].notna()]"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "fceed99a",
"metadata": {},
"outputs": [],
"source": [
"# Renaming the embedding field to \"embedding\" -- required by LangChain\n",
"df.rename(columns={\"plot_embedding\": \"embedding\"}, inplace=True)"
]
},
{
"cell_type": "markdown",
"id": "aedec13a",
"metadata": {},
"source": [
"## Step 5: Create a simple RAG chain using MongoDB as the vector store"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "11d292f3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_mongodb import MongoDBAtlasVectorSearch\n",
"from pymongo import MongoClient\n",
"\n",
"# Initialize MongoDB python client\n",
"client = MongoClient(MONGODB_URI, appname=\"devrel.content.python\")\n",
"\n",
"DB_NAME = \"langchain_chatbot\"\n",
"COLLECTION_NAME = \"data\"\n",
"ATLAS_VECTOR_SEARCH_INDEX_NAME = \"vector_index\"\n",
"collection = client[DB_NAME][COLLECTION_NAME]"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "d8292d53",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"DeleteResult({'n': 1000, 'electionId': ObjectId('7fffffff00000000000000f6'), 'opTime': {'ts': Timestamp(1710523288, 1033), 't': 246}, 'ok': 1.0, '$clusterTime': {'clusterTime': Timestamp(1710523288, 1042), 'signature': {'hash': b\"i\\xa8\\xe9'\\x1ed\\xf2u\\xf3L\\xff\\xb1\\xf5\\xbfA\\x90\\xabJ\\x12\\x83\", 'keyId': 7299545392000008318}}, 'operationTime': Timestamp(1710523288, 1033)}, acknowledged=True)"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Delete any existing records in the collection\n",
"collection.delete_many({})"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "36c68914",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Data ingestion into MongoDB completed\n"
]
}
],
"source": [
"# Data Ingestion\n",
"records = df.to_dict(\"records\")\n",
"collection.insert_many(records)\n",
"\n",
"print(\"Data ingestion into MongoDB completed\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "cbfca0b8",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"# Using the text-embedding-ada-002 since that's what was used to create embeddings in the movies dataset\n",
"embeddings = OpenAIEmbeddings(\n",
" openai_api_key=OPENAI_API_KEY, model=\"text-embedding-ada-002\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "798e176c",
"metadata": {},
"outputs": [],
"source": [
"# Vector Store Creation\n",
"vector_store = MongoDBAtlasVectorSearch.from_connection_string(\n",
" connection_string=MONGODB_URI,\n",
" namespace=DB_NAME + \".\" + COLLECTION_NAME,\n",
" embedding=embeddings,\n",
" index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n",
" text_key=\"fullplot\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 49,
"id": "c71cd087",
"metadata": {},
"outputs": [],
"source": [
"# Using the MongoDB vector store as a retriever in a RAG chain\n",
"retriever = vector_store.as_retriever(search_type=\"similarity\", search_kwargs={\"k\": 5})"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "b6588cd3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Generate context using the retriever, and pass the user question through\n",
"retrieve = {\n",
" \"context\": retriever | (lambda docs: \"\\n\\n\".join([d.page_content for d in docs])),\n",
" \"question\": RunnablePassthrough(),\n",
"}\n",
"template = \"\"\"Answer the question based only on the following context: \\\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"# Defining the chat prompt\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"# Defining the model to be used for chat completion\n",
"model = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)\n",
"# Parse output as a string\n",
"parse_output = StrOutputParser()\n",
"\n",
"# Naive RAG chain\n",
"naive_rag_chain = retrieve | prompt | model | parse_output"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "aaae21f5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Once a Thief'"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"naive_rag_chain.invoke(\"What is the best movie to watch when sad?\")"
]
},
{
"cell_type": "markdown",
"id": "75f929ef",
"metadata": {},
"source": [
"## Step 6: Create a RAG chain with chat history"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "94e7bd4a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_mongodb.chat_message_histories import MongoDBChatMessageHistory"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "5bb30860",
"metadata": {},
"outputs": [],
"source": [
"def get_session_history(session_id: str) -> MongoDBChatMessageHistory:\n",
" return MongoDBChatMessageHistory(\n",
" MONGODB_URI, session_id, database_name=DB_NAME, collection_name=\"history\"\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 50,
"id": "f51d0f35",
"metadata": {},
"outputs": [],
"source": [
"# Given a follow-up question and history, create a standalone question\n",
"standalone_system_prompt = \"\"\"\n",
"Given a chat history and a follow-up question, rephrase the follow-up question to be a standalone question. \\\n",
"Do NOT answer the question, just reformulate it if needed, otherwise return it as is. \\\n",
"Only return the final standalone question. \\\n",
"\"\"\"\n",
"standalone_question_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", standalone_system_prompt),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"\n",
"question_chain = standalone_question_prompt | model | parse_output"
]
},
{
"cell_type": "code",
"execution_count": 51,
"id": "f3ef3354",
"metadata": {},
"outputs": [],
"source": [
"# Generate context by passing output of the question_chain i.e. the standalone question to the retriever\n",
"retriever_chain = RunnablePassthrough.assign(\n",
" context=question_chain\n",
" | retriever\n",
" | (lambda docs: \"\\n\\n\".join([d.page_content for d in docs]))\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 55,
"id": "5afb7345",
"metadata": {},
"outputs": [],
"source": [
"# Create a prompt that includes the context, history and the follow-up question\n",
"rag_system_prompt = \"\"\"Answer the question based only on the following context: \\\n",
"{context}\n",
"\"\"\"\n",
"rag_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", rag_system_prompt),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 56,
"id": "f95f47d0",
"metadata": {},
"outputs": [],
"source": [
"# RAG chain\n",
"rag_chain = retriever_chain | rag_prompt | model | parse_output"
]
},
{
"cell_type": "code",
"execution_count": 57,
"id": "9618d395",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The best movie to watch when feeling down could be \"Last Action Hero.\" It\\'s a fun and action-packed film that blends reality and fantasy, offering an escape from the real world and providing an entertaining distraction.'"
]
},
"execution_count": 57,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# RAG chain with history\n",
"with_message_history = RunnableWithMessageHistory(\n",
" rag_chain,\n",
" get_session_history,\n",
" input_messages_key=\"question\",\n",
" history_messages_key=\"history\",\n",
")\n",
"with_message_history.invoke(\n",
" {\"question\": \"What is the best movie to watch when sad?\"},\n",
" {\"configurable\": {\"session_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 58,
"id": "6e3080d1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'I apologize for the confusion. Another movie that might lift your spirits when you\\'re feeling sad is \"Smilla\\'s Sense of Snow.\" It\\'s a mystery thriller that could engage your mind and distract you from your sadness with its intriguing plot and suspenseful storyline.'"
]
},
"execution_count": 58,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with_message_history.invoke(\n",
" {\n",
" \"question\": \"Hmmm..I don't want to watch that one. Can you suggest something else?\"\n",
" },\n",
" {\"configurable\": {\"session_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 59,
"id": "daea2953",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'For a lighter movie option, you might enjoy \"Cousins.\" It\\'s a comedy film set in Barcelona with action and humor, offering a fun and entertaining escape from reality. The storyline is engaging and filled with comedic moments that could help lift your spirits.'"
]
},
"execution_count": 59,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with_message_history.invoke(\n",
" {\"question\": \"How about something more light?\"},\n",
" {\"configurable\": {\"session_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "0de23a88",
"metadata": {},
"source": [
"## Step 7: Get faster responses using Semantic Cache\n",
"\n",
"**NOTE:** Semantic cache only caches the input to the LLM. When using it in retrieval chains, remember that documents retrieved can change between runs resulting in cache misses for semantically similar queries."
]
},
{
"cell_type": "code",
"execution_count": 61,
"id": "5d6b6741",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.globals import set_llm_cache\n",
"from langchain_mongodb.cache import MongoDBAtlasSemanticCache\n",
"\n",
"set_llm_cache(\n",
" MongoDBAtlasSemanticCache(\n",
" connection_string=MONGODB_URI,\n",
" embedding=embeddings,\n",
" collection_name=\"semantic_cache\",\n",
" database_name=DB_NAME,\n",
" index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n",
" wait_until_ready=True, # Optional, waits until the cache is ready to be used\n",
" )\n",
")"
]
},
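{
"cell_type": "markdown",
"id": "7c4e9b02",
"metadata": {},
"source": [
"Because the cache keys on the final input to the LLM, calling the chat model directly (with no retrieval step in front of it) is the most reliable way to observe cache hits. A minimal sketch reusing the `model` defined above, with an arbitrary prompt:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2d8a61f4",
"metadata": {},
"outputs": [],
"source": [
"# The first call populates the cache; the repeat should be served from it.\n",
"model.invoke(\"Give me a one-line plot summary of 'Once a Thief'.\")\n",
"model.invoke(\"Give me a one-line plot summary of 'Once a Thief'.\")"
]
},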
{
"cell_type": "code",
"execution_count": 62,
"id": "9825bc7b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 87.8 ms, sys: 670 µs, total: 88.5 ms\n",
"Wall time: 1.24 s\n"
]
},
{
"data": {
"text/plain": [
"'Once a Thief'"
]
},
"execution_count": 62,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"naive_rag_chain.invoke(\"What is the best movie to watch when sad?\")"
]
},
{
"cell_type": "code",
"execution_count": 63,
"id": "a5e518cf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 43.5 ms, sys: 4.16 ms, total: 47.7 ms\n",
"Wall time: 255 ms\n"
]
},
{
"data": {
"text/plain": [
"'Once a Thief'"
]
},
"execution_count": 63,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"naive_rag_chain.invoke(\"What is the best movie to watch when sad?\")"
]
},
{
"cell_type": "code",
"execution_count": 64,
"id": "3d3d3ad3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 115 ms, sys: 171 µs, total: 115 ms\n",
"Wall time: 1.38 s\n"
]
},
{
"data": {
"text/plain": [
"'I would recommend watching \"Last Action Hero\" when sad, as it is a fun and action-packed film that can help lift your spirits.'"
]
},
"execution_count": 64,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"naive_rag_chain.invoke(\"Which movie do I watch when sad?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "conda_pytorch_p310",
"language": "python",
"name": "conda_pytorch_p310"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long

View File

@@ -124,7 +124,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_text_splitters import CharacterTextSplitter\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"\n",
"text_splitter = CharacterTextSplitter.from_tiktoken_encoder(\n",
" chunk_size=7500, chunk_overlap=100\n",

View File

@@ -20,10 +20,10 @@
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter"
"from langchain_openai import OpenAIEmbeddings"
]
},
{

View File

@@ -1,648 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c7fe38bc",
"metadata": {},
"source": [
"# Optimization\n",
"\n",
"This notebook goes over how to optimize chains using LangChain and [LangSmith](https://smith.langchain.com)."
]
},
{
"cell_type": "markdown",
"id": "2f87ccd5",
"metadata": {},
"source": [
"## Set up\n",
"\n",
"We will set an environment variable for LangSmith, and load the relevant data"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "236bedc5",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_PROJECT\"] = \"movie-qa\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a3fed0dd",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "7cfff337",
"metadata": {},
"outputs": [],
"source": [
"df = pd.read_csv(\"data/imdb_top_1000.csv\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "2d20fb9c",
"metadata": {},
"outputs": [],
"source": [
"df[\"Released_Year\"] = df[\"Released_Year\"].astype(int, errors=\"ignore\")"
]
},
{
"cell_type": "markdown",
"id": "09fc8fe2",
"metadata": {},
"source": [
"## Create the initial retrieval chain\n",
"\n",
"We will use a self-query retriever"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "f71e24e2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8881ea8e",
"metadata": {},
"outputs": [],
"source": [
"records = df.to_dict(\"records\")\n",
"documents = [Document(page_content=d[\"Overview\"], metadata=d) for d in records]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "8f495423",
"metadata": {},
"outputs": [],
"source": [
"vectorstore = Chroma.from_documents(documents, embeddings)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "31d33d62",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.query_constructor.base import AttributeInfo\n",
"from langchain.retrievers.self_query.base import SelfQueryRetriever\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"metadata_field_info = [\n",
" AttributeInfo(\n",
" name=\"Released_Year\",\n",
" description=\"The year the movie was released\",\n",
" type=\"int\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"Series_Title\",\n",
" description=\"The title of the movie\",\n",
" type=\"str\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"Genre\",\n",
" description=\"The genre of the movie\",\n",
" type=\"string\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"IMDB_Rating\", description=\"A 1-10 rating for the movie\", type=\"float\"\n",
" ),\n",
"]\n",
"document_content_description = \"Brief summary of a movie\"\n",
"llm = ChatOpenAI(temperature=0)\n",
"retriever = SelfQueryRetriever.from_llm(\n",
" llm, vectorstore, document_content_description, metadata_field_info, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "a731533b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "05181849",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "feed4be6",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the user's question based on the below information:\n",
"\n",
"Information:\n",
"\n",
"{info}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"generator = (prompt | ChatOpenAI() | StrOutputParser()).with_config(\n",
" run_name=\"generator\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "eb16cc9a",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
" RunnablePassthrough.assign(info=(lambda x: x[\"question\"]) | retriever) | generator\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c70911cc",
"metadata": {},
"source": [
"## Run examples\n",
"\n",
"Run examples through the chain. This can either be manually, or using a list of examples, or production traffic"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "19a88d13",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'One of the horror movies released in the early 2000s is \"The Ring\" (2002), directed by Gore Verbinski.'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"question\": \"what is a horror movie released in early 2000s\"})"
]
},
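{
"cell_type": "markdown",
"id": "5b7d20c9",
"metadata": {},
"source": [
"A sketch of the list-based option: loop a few example questions through the chain so that each run is traced to the project (the questions below are illustrative only):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c3f17ab",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative examples only -- substitute your own questions.\n",
"example_questions = [\n",
"    \"what is a horror movie released in early 2000s\",\n",
"    \"show me a comedy with an IMDB rating above 8\",\n",
"    \"what are some war movies from the 1990s\",\n",
"]\n",
"for q in example_questions:\n",
"    chain.invoke({\"question\": q})"
]
},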
{
"cell_type": "markdown",
"id": "17f9cdae",
"metadata": {},
"source": [
"## Annotate\n",
"\n",
"Now, go to LangSmitha and annotate those examples as correct or incorrect"
]
},
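{
"cell_type": "markdown",
"id": "4a9e6d31",
"metadata": {},
"source": [
"Annotation can also be done programmatically via the LangSmith client's `create_feedback` method. A minimal sketch; the run ID below is a placeholder for a real run's ID:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f2b8e47",
"metadata": {},
"outputs": [],
"source": [
"from langsmith import Client\n",
"\n",
"client = Client()\n",
"# Attach a correctness score of 1 to a run; replace the placeholder run ID.\n",
"client.create_feedback(\n",
"    \"00000000-0000-0000-0000-000000000000\",  # run_id (placeholder)\n",
"    key=\"correctness\",\n",
"    score=1,\n",
")"
]
},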
{
"cell_type": "markdown",
"id": "5e211da6",
"metadata": {},
"source": [
"## Create Dataset\n",
"\n",
"We can now create a dataset from those runs.\n",
"\n",
"What we will do is find the runs marked as correct, then grab the sub-chains from them. Specifically, the query generator sub chain and the final generation step"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "e4024267",
"metadata": {},
"outputs": [],
"source": [
"from langsmith import Client\n",
"\n",
"client = Client()"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "3814efc5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"14"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"runs = list(\n",
" client.list_runs(\n",
" project_name=\"movie-qa\",\n",
" execution_order=1,\n",
" filter=\"and(eq(feedback_key, 'correctness'), eq(feedback_score, 1))\",\n",
" )\n",
")\n",
"\n",
"len(runs)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "3eb123e0",
"metadata": {},
"outputs": [],
"source": [
"gen_runs = []\n",
"query_runs = []\n",
"for r in runs:\n",
" gen_runs.extend(\n",
" list(\n",
" client.list_runs(\n",
" project_name=\"movie-qa\",\n",
" filter=\"eq(name, 'generator')\",\n",
" trace_id=r.trace_id,\n",
" )\n",
" )\n",
" )\n",
" query_runs.extend(\n",
" list(\n",
" client.list_runs(\n",
" project_name=\"movie-qa\",\n",
" filter=\"eq(name, 'query_constructor')\",\n",
" trace_id=r.trace_id,\n",
" )\n",
" )\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "a4397026",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'what is a high school comedy released in early 2000s'}"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"runs[0].inputs"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "3fa6ad2a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output': 'One high school comedy released in the early 2000s is \"Mean Girls\" starring Lindsay Lohan, Rachel McAdams, and Tina Fey.'}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"runs[0].outputs"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "1fda5b4b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'what is a high school comedy released in early 2000s'}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_runs[0].inputs"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "1a1a51e6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output': {'query': 'high school comedy',\n",
" 'filter': {'operator': 'and',\n",
" 'arguments': [{'comparator': 'eq', 'attribute': 'Genre', 'value': 'comedy'},\n",
" {'operator': 'and',\n",
" 'arguments': [{'comparator': 'gte',\n",
" 'attribute': 'Released_Year',\n",
" 'value': 2000},\n",
" {'comparator': 'lt', 'attribute': 'Released_Year', 'value': 2010}]}]}}}"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_runs[0].outputs"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "e9d9966b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'what is a high school comedy released in early 2000s',\n",
" 'info': []}"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"gen_runs[0].inputs"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "bc113f3d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output': 'One high school comedy released in the early 2000s is \"Mean Girls\" starring Lindsay Lohan, Rachel McAdams, and Tina Fey.'}"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"gen_runs[0].outputs"
]
},
{
"cell_type": "markdown",
"id": "6cca74e5",
"metadata": {},
"source": [
"## Create datasets\n",
"\n",
"We can now create datasets for the query generation and final generation step.\n",
"We do this so that (1) we can inspect the datapoints, (2) we can edit them if needed, (3) we can add to them over time"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "69966f0e",
"metadata": {},
"outputs": [],
"source": [
"client.create_dataset(\"movie-query_constructor\")\n",
"\n",
"inputs = [r.inputs for r in query_runs]\n",
"outputs = [r.outputs for r in query_runs]\n",
"\n",
"client.create_examples(\n",
" inputs=inputs, outputs=outputs, dataset_name=\"movie-query_constructor\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "7e15770e",
"metadata": {},
"outputs": [],
"source": [
"client.create_dataset(\"movie-generator\")\n",
"\n",
"inputs = [r.inputs for r in gen_runs]\n",
"outputs = [r.outputs for r in gen_runs]\n",
"\n",
"client.create_examples(inputs=inputs, outputs=outputs, dataset_name=\"movie-generator\")"
]
},
{
"cell_type": "markdown",
"id": "61cf9bcd",
"metadata": {},
"source": [
"## Use as few shot examples\n",
"\n",
"We can now pull down a dataset and use them as few shot examples in a future chain"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "d9c79173",
"metadata": {},
"outputs": [],
"source": [
"examples = list(client.list_examples(dataset_name=\"movie-query_constructor\"))"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "a1771dd0",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"\n",
"def filter_to_string(_filter):\n",
" if \"operator\" in _filter:\n",
" args = [filter_to_string(f) for f in _filter[\"arguments\"]]\n",
" return f\"{_filter['operator']}({','.join(args)})\"\n",
" else:\n",
" comparator = _filter[\"comparator\"]\n",
" attribute = json.dumps(_filter[\"attribute\"])\n",
" value = json.dumps(_filter[\"value\"])\n",
" return f\"{comparator}({attribute}, {value})\""
]
},
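{
"cell_type": "markdown",
"id": "1d5c9e72",
"metadata": {},
"source": [
"As a quick check, the filter from the query-constructor run shown earlier serializes like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b4a3c58",
"metadata": {},
"outputs": [],
"source": [
"# The filter from query_runs[0].outputs above.\n",
"example_filter = {\n",
"    \"operator\": \"and\",\n",
"    \"arguments\": [\n",
"        {\"comparator\": \"eq\", \"attribute\": \"Genre\", \"value\": \"comedy\"},\n",
"        {\n",
"            \"operator\": \"and\",\n",
"            \"arguments\": [\n",
"                {\"comparator\": \"gte\", \"attribute\": \"Released_Year\", \"value\": 2000},\n",
"                {\"comparator\": \"lt\", \"attribute\": \"Released_Year\", \"value\": 2010},\n",
"            ],\n",
"        },\n",
"    ],\n",
"}\n",
"filter_to_string(example_filter)\n",
"# -> 'and(eq(\"Genre\", \"comedy\"),and(gte(\"Released_Year\", 2000),lt(\"Released_Year\", 2010)))'"
]
},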
{
"cell_type": "code",
"execution_count": 28,
"id": "e67a3530",
"metadata": {},
"outputs": [],
"source": [
"model_examples = []\n",
"\n",
"for e in examples:\n",
" if \"filter\" in e.outputs[\"output\"]:\n",
" string_filter = filter_to_string(e.outputs[\"output\"][\"filter\"])\n",
" else:\n",
" string_filter = \"NO_FILTER\"\n",
" model_examples.append(\n",
" (\n",
" e.inputs[\"query\"],\n",
" {\"query\": e.outputs[\"output\"][\"query\"], \"filter\": string_filter},\n",
" )\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "84593135",
"metadata": {},
"outputs": [],
"source": [
"retriever1 = SelfQueryRetriever.from_llm(\n",
" llm,\n",
" vectorstore,\n",
" document_content_description,\n",
" metadata_field_info,\n",
" verbose=True,\n",
" chain_kwargs={\"examples\": model_examples},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "4ec9bb92",
"metadata": {},
"outputs": [],
"source": [
"chain1 = (\n",
" RunnablePassthrough.assign(info=(lambda x: x[\"question\"]) | retriever1) | generator\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "64eb88e2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'1. \"Saving Private Ryan\" (1998) - Directed by Steven Spielberg, this war film follows a group of soldiers during World War II as they search for a missing paratrooper.\\n\\n2. \"The Matrix\" (1999) - Directed by the Wachowskis, this science fiction action film follows a computer hacker who discovers the truth about the reality he lives in.\\n\\n3. \"Lethal Weapon 4\" (1998) - Directed by Richard Donner, this action-comedy film follows two mismatched detectives as they investigate a Chinese immigrant smuggling ring.\\n\\n4. \"The Fifth Element\" (1997) - Directed by Luc Besson, this science fiction action film follows a cab driver who must protect a mysterious woman who holds the key to saving the world.\\n\\n5. \"The Rock\" (1996) - Directed by Michael Bay, this action thriller follows a group of rogue military men who take over Alcatraz and threaten to launch missiles at San Francisco.'"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain1.invoke(\n",
" {\"question\": \"what are good action movies made before 2000 but after 1997?\"}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e1ee8b55",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -84,7 +84,7 @@
"from langchain.retrievers import KayAiRetriever\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
"model = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\n",
"retriever = KayAiRetriever.create(\n",
" dataset_id=\"company\", data_types=[\"PressRelease\"], num_contexts=6\n",
")\n",

View File

@@ -59,13 +59,13 @@
"from baidubce.auth.bce_credentials import BceCredentials\n",
"from baidubce.bce_client_configuration import BceClientConfiguration\n",
"from langchain.chains.retrieval_qa import RetrievalQA\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders.baiducloud_bos_directory import (\n",
" BaiduBOSDirectoryLoader,\n",
")\n",
"from langchain_community.embeddings.huggingface import HuggingFaceEmbeddings\n",
"from langchain_community.llms.baidu_qianfan_endpoint import QianfanLLMEndpoint\n",
"from langchain_community.vectorstores import BESVectorStore\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter"
"from langchain_community.vectorstores import BESVectorStore"
]
},
{

File diff suppressed because one or more lines are too long

View File

@@ -36,6 +36,9 @@
"from bs4 import BeautifulSoup as Soup\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryByteStore, LocalFileStore\n",
"\n",
"# For our example, we'll load docs from the web\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter # noqa\n",
"from langchain_community.document_loaders.recursive_url_loader import (\n",
" RecursiveUrlLoader,\n",
")\n",
@@ -43,9 +46,6 @@
"# noqa\n",
"from langchain_community.vectorstores import Chroma\n",
"\n",
"# For our example, we'll load docs from the web\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter # noqa\n",
"\n",
"DOCSTORE_DIR = \".\"\n",
"DOCSTORE_ID_KEY = \"doc_id\""
]

View File

@@ -274,7 +274,7 @@
"db = SQLDatabase.from_uri(\n",
" CONNECTION_STRING\n",
") # We reconnect to db so the new columns are loaded as well.\n",
"llm = ChatOpenAI(model=\"gpt-4\", temperature=0)\n",
"llm = ChatOpenAI(model_name=\"gpt-4\", temperature=0)\n",
"\n",
"sql_query_chain = (\n",
" RunnablePassthrough.assign(schema=get_schema)\n",

View File

@@ -245,7 +245,7 @@
"\n",
"\n",
"def _parse(text):\n",
" return text.strip('\"').strip(\"**\")"
" return text.strip(\"**\")"
]
},
{

View File

@@ -1,32 +1,28 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# SalesGPT - Context-Aware AI Sales Assistant With Knowledge Base and Ability Generate Stripe Payment Links\n",
"# SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base\n",
"\n",
"This notebook demonstrates an implementation of a **Context-Aware** AI Sales agent with a Product Knowledge Base which can actually close sales. \n",
"This notebook demonstrates an implementation of a **Context-Aware** AI Sales agent with a Product Knowledge Base. \n",
"\n",
"This notebook was originally published at [filipmichalsky/SalesGPT](https://github.com/filip-michalsky/SalesGPT) by [@FilipMichalsky](https://twitter.com/FilipMichalsky).\n",
"\n",
"SalesGPT is context-aware, which means it can understand what section of a sales conversation it is in and act accordingly.\n",
" \n",
"As such, this agent can have a natural sales conversation with a prospect and behaves based on the conversation stage. Hence, this notebook demonstrates how we can use AI to automate sales development representatives activites, such as outbound sales calls. \n",
"As such, this agent can have a natural sales conversation with a prospect and behaves based on the conversation stage. Hence, this notebook demonstrates how we can use AI to automate sales development representatives activities, such as outbound sales calls. \n",
"\n",
"Additionally, the AI Sales agent has access to tools, which allow it to interact with other systems.\n",
"\n",
"Here, we show how the AI Sales Agent can use a **Product Knowledge Base** to speak about a particular's company offerings,\n",
"hence increasing relevance and reducing hallucinations.\n",
"\n",
"Furthermore, we show how our AI Sales Agent can **generate sales** by integration with the AI Agent Highway called [Mindware](https://www.mindware.co/). In practice, this allows the agent to autonomously generate a payment link for your customers **to pay for your products via Stripe**.\n",
"\n",
"We leverage the [`langchain`](https://github.com/hwchase17/langchain) library in this implementation, specifically [Custom Agent Configuration](https://langchain-langchain.vercel.app/docs/modules/agents/how_to/custom_agent_with_tool_retrieval) and are inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) architecture ."
"We leverage the [`langchain`](https://github.com/langchain-ai/langchain) library in this implementation, specifically [Custom Agent Configuration](https://langchain-langchain.vercel.app/docs/modules/agents/how_to/custom_agent_with_tool_retrieval) and are inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) architecture ."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -42,10 +38,9 @@
"import os\n",
"import re\n",
"\n",
"# make sure you have .env file saved locally with your API keys\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv()\n",
"# import your OpenAI key\n",
"OPENAI_API_KEY = \"sk-xx\"\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\n",
"\n",
"from typing import Any, Callable, Dict, List, Union\n",
"\n",
@@ -54,18 +49,27 @@
"from langchain.agents.conversational.prompt import FORMAT_INSTRUCTIONS\n",
"from langchain.chains import LLMChain, RetrievalQA\n",
"from langchain.chains.base import Chain\n",
"from langchain.llms import BaseLLM\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.prompts.base import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from langchain_community.llms import BaseLLM\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"from langchain_openai import ChatOpenAI, OpenAI, OpenAIEmbeddings\n",
"from pydantic import BaseModel, Field"
]
},
{
"attachments": {},
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# install additional dependencies\n",
"# ! pip install chromadb openai tiktoken"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -73,21 +77,19 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Seed the SalesGPT agent\n",
"2. Run Sales Agent to decide what to do:\n",
"\n",
" a) Use a tool, such as look up Product Information in a Knowledge Base or Generate a Payment Link\n",
" a) Use a tool, such as look up Product Information in a Knowledge Base\n",
" \n",
" b) Output a response to a user \n",
"3. Run Sales Stage Recognition Agent to recognize which stage is the sales agent at and adjust their behaviour accordingly."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -96,17 +98,15 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Architecture diagram\n",
"\n",
"<img src=\"https://demo-bucket-45.s3.amazonaws.com/new_flow2.png\" width=\"800\" height=\"440\">\n"
"<img src=\"https://singularity-assets-public.s3.amazonaws.com/new_flow.png\" width=\"800\" height=\"440\"/>\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -131,7 +131,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
@@ -149,7 +149,7 @@
" {conversation_history}\n",
" ===\n",
"\n",
" Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting ony from the following options:\n",
" Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting only from the following options:\n",
" 1. Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional.\n",
" 2. Qualification: Qualify the prospect by confirming if they are the right person to talk to regarding your product/service. Ensure that they have the authority to make purchasing decisions.\n",
" 3. Value proposition: Briefly explain how your product/service can benefit the prospect. Focus on the unique selling points and value proposition of your product/service that sets it apart from competitors.\n",
@@ -171,7 +171,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
@@ -223,7 +223,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -240,17 +240,13 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# test the intermediate chains\n",
"verbose = True\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-4-turbo-preview\",\n",
" temperature=0.9,\n",
" openai_api_key=os.getenv(\"OPENAI_API_KEY\"),\n",
")\n",
"llm = ChatOpenAI(temperature=0.9)\n",
"\n",
"stage_analyzer_chain = StageAnalyzerChain.from_llm(llm, verbose=verbose)\n",
"\n",
@@ -261,7 +257,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"metadata": {},
"outputs": [
{
@@ -280,7 +276,7 @@
" \n",
" ===\n",
"\n",
" Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting ony from the following options:\n",
" Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting only from the following options:\n",
" 1. Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional.\n",
" 2. Qualification: Qualify the prospect by confirming if they are the right person to talk to regarding your product/service. Ensure that they have the authority to make purchasing decisions.\n",
" 3. Value proposition: Briefly explain how your product/service can benefit the prospect. Focus on the unique selling points and value proposition of your product/service that sets it apart from competitors.\n",
@@ -300,21 +296,21 @@
{
"data": {
"text/plain": [
"{'conversation_history': '', 'text': '1'}"
"'1'"
]
},
"execution_count": 6,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"stage_analyzer_chain.invoke({\"conversation_history\": \"\"})"
"stage_analyzer_chain.run(conversation_history=\"\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"metadata": {},
"outputs": [
{
@@ -356,44 +352,32 @@
{
"data": {
"text/plain": [
"{'salesperson_name': 'Ted Lasso',\n",
" 'salesperson_role': 'Business Development Representative',\n",
" 'company_name': 'Sleep Haven',\n",
" 'company_business': 'Sleep Haven is a premium mattress company that provides customers with the most comfortable and supportive sleeping experience possible. We offer a range of high-quality mattresses, pillows, and bedding accessories that are designed to meet the unique needs of our customers.',\n",
" 'company_values': \"Our mission at Sleep Haven is to help people achieve a better night's sleep by providing them with the best possible sleep solutions. We believe that quality sleep is essential to overall health and well-being, and we are committed to helping our customers achieve optimal sleep by offering exceptional products and customer service.\",\n",
" 'conversation_purpose': 'find out whether they are looking to achieve better sleep via buying a premier mattress.',\n",
" 'conversation_history': 'Hello, this is Ted Lasso from Sleep Haven. How are you doing today? <END_OF_TURN>\\nUser: I am well, howe are you?<END_OF_TURN>',\n",
" 'conversation_type': 'call',\n",
" 'conversation_stage': 'Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional. Your greeting should be welcoming. Always clarify in your greeting the reason why you are contacting the prospect.',\n",
" 'text': \"I'm doing well, thank you for asking. The reason I'm calling is to discuss how Sleep Haven can help enhance your sleep quality with our premium mattresses. Are you currently looking for ways to achieve a better night's sleep? <END_OF_TURN>\"}"
"\"I'm doing great, thank you for asking! As a Business Development Representative at Sleep Haven, I wanted to reach out to see if you are looking to achieve a better night's sleep. We provide premium mattresses that offer the most comfortable and supportive sleeping experience possible. Are you interested in exploring our sleep solutions? <END_OF_TURN>\""
]
},
"execution_count": 7,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sales_conversation_utterance_chain.invoke(\n",
" {\n",
" \"salesperson_name\": \"Ted Lasso\",\n",
" \"salesperson_role\": \"Business Development Representative\",\n",
" \"company_name\": \"Sleep Haven\",\n",
" \"company_business\": \"Sleep Haven is a premium mattress company that provides customers with the most comfortable and supportive sleeping experience possible. We offer a range of high-quality mattresses, pillows, and bedding accessories that are designed to meet the unique needs of our customers.\",\n",
" \"company_values\": \"Our mission at Sleep Haven is to help people achieve a better night's sleep by providing them with the best possible sleep solutions. We believe that quality sleep is essential to overall health and well-being, and we are committed to helping our customers achieve optimal sleep by offering exceptional products and customer service.\",\n",
" \"conversation_purpose\": \"find out whether they are looking to achieve better sleep via buying a premier mattress.\",\n",
" \"conversation_history\": \"Hello, this is Ted Lasso from Sleep Haven. How are you doing today? <END_OF_TURN>\\nUser: I am well, howe are you?<END_OF_TURN>\",\n",
" \"conversation_type\": \"call\",\n",
" \"conversation_stage\": conversation_stages.get(\n",
" \"1\",\n",
" \"Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional.\",\n",
" ),\n",
" }\n",
"sales_conversation_utterance_chain.run(\n",
" salesperson_name=\"Ted Lasso\",\n",
" salesperson_role=\"Business Development Representative\",\n",
" company_name=\"Sleep Haven\",\n",
" company_business=\"Sleep Haven is a premium mattress company that provides customers with the most comfortable and supportive sleeping experience possible. We offer a range of high-quality mattresses, pillows, and bedding accessories that are designed to meet the unique needs of our customers.\",\n",
" company_values=\"Our mission at Sleep Haven is to help people achieve a better night's sleep by providing them with the best possible sleep solutions. We believe that quality sleep is essential to overall health and well-being, and we are committed to helping our customers achieve optimal sleep by offering exceptional products and customer service.\",\n",
" conversation_purpose=\"find out whether they are looking to achieve better sleep via buying a premier mattress.\",\n",
" conversation_history=\"Hello, this is Ted Lasso from Sleep Haven. How are you doing today? <END_OF_TURN>\\nUser: I am well, howe are you?<END_OF_TURN>\",\n",
" conversation_type=\"call\",\n",
" conversation_stage=conversation_stages.get(\n",
" \"1\",\n",
" \"Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional.\",\n",
" ),\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -401,7 +385,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -412,7 +395,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
@@ -446,7 +429,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
@@ -462,7 +445,7 @@
" text_splitter = CharacterTextSplitter(chunk_size=10, chunk_overlap=0)\n",
" texts = text_splitter.split_text(product_catalog)\n",
"\n",
" llm = ChatOpenAI(temperature=0)\n",
" llm = OpenAI(temperature=0)\n",
" embeddings = OpenAIEmbeddings()\n",
" docsearch = Chroma.from_texts(\n",
" texts, embeddings, collection_name=\"product-knowledge-base\"\n",
@@ -471,12 +454,29 @@
" knowledge_base = RetrievalQA.from_chain_type(\n",
" llm=llm, chain_type=\"stuff\", retriever=docsearch.as_retriever()\n",
" )\n",
" return knowledge_base"
" return knowledge_base\n",
"\n",
"\n",
"def get_tools(product_catalog):\n",
" # query to get_tools can be used to be embedded and relevant tools found\n",
" # see here: https://langchain-langchain.vercel.app/docs/use_cases/agents/custom_agent_with_plugin_retrieval#tool-retriever\n",
"\n",
" # we only use one tool for now, but this is highly extensible!\n",
" knowledge_base = setup_knowledge_base(product_catalog)\n",
" tools = [\n",
" Tool(\n",
" name=\"ProductSearch\",\n",
" func=knowledge_base.run,\n",
" description=\"useful for when you need to answer questions about product information\",\n",
" )\n",
" ]\n",
"\n",
" return tools"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 11,
"metadata": {},
"outputs": [
{
@@ -485,18 +485,16 @@
"text": [
"Created a chunk of size 940, which is longer than the specified 10\n",
"Created a chunk of size 844, which is longer than the specified 10\n",
"Created a chunk of size 837, which is longer than the specified 10\n",
"/Users/filipmichalsky/Odyssey/sales_bot/SalesGPT/env/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
" warn_deprecated(\n"
"Created a chunk of size 837, which is longer than the specified 10\n"
]
},
{
"data": {
"text/plain": [
"'The Sleep Haven products available are:\\n\\n1. Luxury Cloud-Comfort Memory Foam Mattress\\n2. Classic Harmony Spring Mattress\\n3. EcoGreen Hybrid Latex Mattress\\n4. Plush Serenity Bamboo Mattress\\n\\nEach product has its unique features and price point.'"
"' We have four products available: the Classic Harmony Spring Mattress, the Plush Serenity Bamboo Mattress, the Luxury Cloud-Comfort Memory Foam Mattress, and the EcoGreen Hybrid Latex Mattress. Each product is available in different sizes, with the Classic Harmony Spring Mattress available in Queen and King sizes, the Plush Serenity Bamboo Mattress available in King size, the Luxury Cloud-Comfort Memory Foam Mattress available in Twin, Queen, and King sizes, and the EcoGreen Hybrid Latex Mattress available in Twin and Full sizes.'"
]
},
"execution_count": 10,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -510,199 +508,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Payment gateway"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to set up your AI agent to use a payment gateway to generate payment links for your users you need two things:\n",
"\n",
"1. Sign up for a Stripe account and obtain a STRIPE API KEY\n",
"2. Create products you would like to sell in the Stripe UI. Then follow out example of `example_product_price_id_mapping.json`\n",
"to feed the product name to price_id mapping which allows you to generate the payment links."
"### Set up the SalesGPT Controller with the Sales Agent and Stage Analyzer and a Knowledge Base"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"from litellm import completion\n",
"\n",
"# set GPT model env variable\n",
"os.environ[\"GPT_MODEL\"] = \"gpt-4-turbo-preview\"\n",
"\n",
"product_price_id_mapping = {\n",
" \"ai-consulting-services\": \"price_1Ow8ofB795AYY8p1goWGZi6m\",\n",
" \"Luxury Cloud-Comfort Memory Foam Mattress\": \"price_1Owv99B795AYY8p1mjtbKyxP\",\n",
" \"Classic Harmony Spring Mattress\": \"price_1Owv9qB795AYY8p1tPcxCM6T\",\n",
" \"EcoGreen Hybrid Latex Mattress\": \"price_1OwvLDB795AYY8p1YBAMBcbi\",\n",
" \"Plush Serenity Bamboo Mattress\": \"price_1OwvMQB795AYY8p1hJN2uS3S\",\n",
"}\n",
"with open(\"example_product_price_id_mapping.json\", \"w\") as f:\n",
" json.dump(product_price_id_mapping, f)\n",
"\n",
"\n",
"def get_product_id_from_query(query, product_price_id_mapping_path):\n",
" # Load product_price_id_mapping from a JSON file\n",
" with open(product_price_id_mapping_path, \"r\") as f:\n",
" product_price_id_mapping = json.load(f)\n",
"\n",
" # Serialize the product_price_id_mapping to a JSON string for inclusion in the prompt\n",
" product_price_id_mapping_json_str = json.dumps(product_price_id_mapping)\n",
"\n",
" # Dynamically create the enum list from product_price_id_mapping keys\n",
" enum_list = list(product_price_id_mapping.values()) + [\n",
" \"No relevant product id found\"\n",
" ]\n",
" enum_list_str = json.dumps(enum_list)\n",
"\n",
" prompt = f\"\"\"\n",
" You are an expert data scientist and you are working on a project to recommend products to customers based on their needs.\n",
" Given the following query:\n",
" {query}\n",
" and the following product price id mapping:\n",
" {product_price_id_mapping_json_str}\n",
" return the price id that is most relevant to the query.\n",
" ONLY return the price id, no other text. If no relevant price id is found, return 'No relevant price id found'.\n",
" Your output will follow this schema:\n",
" {{\n",
" \"$schema\": \"http://json-schema.org/draft-07/schema#\",\n",
" \"title\": \"Price ID Response\",\n",
" \"type\": \"object\",\n",
" \"properties\": {{\n",
" \"price_id\": {{\n",
" \"type\": \"string\",\n",
" \"enum\": {enum_list_str}\n",
" }}\n",
" }},\n",
" \"required\": [\"price_id\"]\n",
" }}\n",
" Return a valid directly parsable json, dont return in it within a code snippet or add any kind of explanation!!\n",
" \"\"\"\n",
" prompt += \"{\"\n",
" response = completion(\n",
" model=os.getenv(\"GPT_MODEL\", \"gpt-3.5-turbo-1106\"),\n",
" messages=[{\"content\": prompt, \"role\": \"user\"}],\n",
" max_tokens=1000,\n",
" temperature=0,\n",
" )\n",
"\n",
" product_id = response.choices[0].message.content.strip()\n",
" return product_id"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"import requests\n",
"\n",
"\n",
"def generate_stripe_payment_link(query: str) -> str:\n",
" \"\"\"Generate a stripe payment link for a customer based on a single query string.\"\"\"\n",
"\n",
" # example testing payment gateway url\n",
" PAYMENT_GATEWAY_URL = os.getenv(\n",
" \"PAYMENT_GATEWAY_URL\", \"https://agent-payments-gateway.vercel.app/payment\"\n",
" )\n",
" PRODUCT_PRICE_MAPPING = \"example_product_price_id_mapping.json\"\n",
"\n",
" # use LLM to get the price_id from query\n",
" price_id = get_product_id_from_query(query, PRODUCT_PRICE_MAPPING)\n",
" price_id = json.loads(price_id)\n",
" payload = json.dumps(\n",
" {\"prompt\": query, **price_id, \"stripe_key\": os.getenv(\"STRIPE_API_KEY\")}\n",
" )\n",
" headers = {\n",
" \"Content-Type\": \"application/json\",\n",
" }\n",
"\n",
" response = requests.request(\n",
" \"POST\", PAYMENT_GATEWAY_URL, headers=headers, data=payload\n",
" )\n",
" return response.text"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'{\"response\":\"https://buy.stripe.com/test_6oEbLS8JB1F9bv229d\"}'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_stripe_payment_link(\n",
" query=\"Please generate a payment link for John Doe to buy two mattresses - the Classic Harmony Spring Mattress\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup agent tools"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"def get_tools(product_catalog):\n",
" # query to get_tools can be used to be embedded and relevant tools found\n",
" # see here: https://langchain-langchain.vercel.app/docs/use_cases/agents/custom_agent_with_plugin_retrieval#tool-retriever\n",
"\n",
" # we only use one tool for now, but this is highly extensible!\n",
" knowledge_base = setup_knowledge_base(product_catalog)\n",
" tools = [\n",
" Tool(\n",
" name=\"ProductSearch\",\n",
" func=knowledge_base.run,\n",
" description=\"useful for when you need to answer questions about product information or services offered, availability and their costs.\",\n",
" ),\n",
" Tool(\n",
" name=\"GeneratePaymentLink\",\n",
" func=generate_stripe_payment_link,\n",
" description=\"useful to close a transaction with a customer. You need to include product name and quantity and customer name in the query input.\",\n",
" ),\n",
" ]\n",
"\n",
" return tools"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up the SalesGPT Controller with the Sales Agent and Stage Analyzer\n",
"\n",
"#### The Agent has access to a Knowledge Base and can autonomously sell your products via Stripe"
]
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
@@ -752,11 +563,19 @@
" print(\"TEXT\")\n",
" print(text)\n",
" print(\"-------\")\n",
" if f\"{self.ai_prefix}:\" in text:\n",
" return AgentFinish(\n",
" {\"output\": text.split(f\"{self.ai_prefix}:\")[-1].strip()}, text\n",
" )\n",
" regex = r\"Action: (.*?)[\\n]*Action Input: (.*)\"\n",
" match = re.search(regex, text)\n",
" if not match:\n",
" ## TODO - this is not entirely reliable, sometimes results in an error.\n",
" return AgentFinish(\n",
" {\"output\": text.split(f\"{self.ai_prefix}:\")[-1].strip()}, text\n",
" {\n",
" \"output\": \"I apologize, I was unable to find the answer to your question. Is there anything else I can help with?\"\n",
" },\n",
" text,\n",
" )\n",
" # raise OutputParserException(f\"Could not parse LLM output: `{text}`\")\n",
" action = match.group(1)\n",
@@ -770,7 +589,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
@@ -828,18 +647,18 @@
"Previous conversation history:\n",
"{conversation_history}\n",
"\n",
"Thought:\n",
"{salesperson_name}:\n",
"{agent_scratchpad}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"class SalesGPT(Chain):\n",
"class SalesGPT(Chain, BaseModel):\n",
" \"\"\"Controller model for the Sales Agent.\"\"\"\n",
"\n",
" conversation_history: List[str] = []\n",
@@ -985,9 +804,7 @@
"\n",
" # WARNING: this output parser is NOT reliable yet\n",
" ## It makes assumptions about output from LLM which can break and throw an error\n",
" output_parser = SalesConvoOutputParser(\n",
" ai_prefix=kwargs[\"salesperson_name\"], verbose=verbose\n",
" )\n",
" output_parser = SalesConvoOutputParser(ai_prefix=kwargs[\"salesperson_name\"])\n",
"\n",
" sales_agent_with_tools = LLMSingleActionAgent(\n",
" llm_chain=llm_chain,\n",
@@ -1011,7 +828,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1019,7 +835,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1028,7 +843,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
@@ -1065,7 +880,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -1074,7 +888,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 16,
"metadata": {},
"outputs": [
{
@@ -1083,9 +897,7 @@
"text": [
"Created a chunk of size 940, which is longer than the specified 10\n",
"Created a chunk of size 844, which is longer than the specified 10\n",
"Created a chunk of size 837, which is longer than the specified 10\n",
"/Users/filipmichalsky/Odyssey/sales_bot/SalesGPT/env/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The class `langchain.agents.agent.LLMSingleActionAgent` was deprecated in langchain 0.1.0 and will be removed in 0.2.0. Use Use new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc. instead.\n",
" warn_deprecated(\n"
"Created a chunk of size 837, which is longer than the specified 10\n"
]
}
],
@@ -1095,7 +907,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
@@ -1105,7 +917,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 18,
"metadata": {},
"outputs": [
{
@@ -1122,14 +934,14 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: Good day! This is Ted Lasso from Sleep Haven. How are you doing today?\n"
"Ted Lasso: Hello, this is Ted Lasso from Sleep Haven. How are you doing today?\n"
]
}
],
@@ -1139,18 +951,18 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"sales_agent.human_step(\n",
" \"I am well, how are you? I would like to learn more about your services.\"\n",
" \"I am well, how are you? I would like to learn more about your mattresses.\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 21,
"metadata": {},
"outputs": [
{
@@ -1167,14 +979,14 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: I'm doing great, thank you for asking! I'm glad to hear you're interested. Sleep Haven is a premium mattress company, and we're all about offering the best sleep solutions, including top-notch mattresses, pillows, and bedding accessories. Our mission is to help you achieve a better night's sleep. May I know if you're looking to enhance your sleep experience with a new mattress or bedding accessories? \n"
"Ted Lasso: I'm glad to hear that you're doing well! As for our mattresses, at Sleep Haven, we provide customers with the most comfortable and supportive sleeping experience possible. Our high-quality mattresses are designed to meet the unique needs of our customers. Can I ask what specifically you'd like to learn more about? \n"
]
}
],
@@ -1184,18 +996,16 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"sales_agent.human_step(\n",
" \"Yes, I would like to improve my sleep. Can you tell me more about your products?\"\n",
")"
"sales_agent.human_step(\"Yes, what materials are you mattresses made from?\")"
]
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": 24,
"metadata": {},
"outputs": [
{
@@ -1212,14 +1022,14 @@
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: Absolutely, I'd be happy to share more about our products. At Sleep Haven, we offer a variety of high-quality mattresses designed to cater to different sleeping preferences and needs. Whether you're looking for memory foam's comfort, the support of hybrid mattresses, or the breathability of natural latex, we have options for everyone. Our pillows and bedding accessories are similarly curated to enhance your sleep quality. Every product is built with the aim of helping you achieve the restful night's sleep you deserve. What specific features are you looking for in a mattress? \n"
"Ted Lasso: Our mattresses are made from a variety of materials, depending on the model. We have the EcoGreen Hybrid Latex Mattress, which is made from 100% natural latex harvested from eco-friendly plantations. The Plush Serenity Bamboo Mattress features a layer of plush, adaptive foam and a base of high-resilience support foam, with a bamboo-infused top layer. The Luxury Cloud-Comfort Memory Foam Mattress has an innovative, temperature-sensitive memory foam layer and a high-density foam base with cooling gel-infused particles. Finally, the Classic Harmony Spring Mattress has a robust inner spring construction and layers of plush padding, with a quilted top layer and a natural cotton cover. Is there anything specific you'd like to know about these materials?\n"
]
}
],
@@ -1229,16 +1039,61 @@
},
{
"cell_type": "code",
"execution_count": 31,
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"sales_agent.human_step(\"What mattresses do you have and how much do they cost?\")"
"sales_agent.human_step(\n",
" \"Yes, I am looking for a queen sized mattress. Do you have any mattresses in queen size?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 32,
"execution_count": 27,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Conversation Stage: Needs analysis: Ask open-ended questions to uncover the prospect's needs and pain points. Listen carefully to their responses and take notes.\n"
]
}
],
"source": [
"sales_agent.determine_conversation_stage()"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: Yes, we do have queen-sized mattresses available. We offer the Luxury Cloud-Comfort Memory Foam Mattress and the Classic Harmony Spring Mattress in queen size. Both mattresses provide exceptional comfort and support. Is there anything specific you would like to know about these options?\n"
]
}
],
"source": [
"sales_agent.step()"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"sales_agent.human_step(\"Yea, compare and contrast those two options, please.\")"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
@@ -1255,14 +1110,14 @@
},
{
"cell_type": "code",
"execution_count": 33,
"execution_count": 31,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: We offer two primary types of mattresses at Sleep Haven. The first is our Luxury Cloud-Comfort Memory Foam Mattress, which is priced at $999 and comes in Twin, Queen, and King sizes. The second is our Classic Harmony Spring Mattress, priced at $1,299, available in Queen and King sizes. Both are designed to provide exceptional comfort and support for a better night's sleep. Which type of mattress would you be interested in learning more about? \n"
"Ted Lasso: The Luxury Cloud-Comfort Memory Foam Mattress is priced at $999 and is available in Twin, Queen, and King sizes. It features an innovative, temperature-sensitive memory foam layer and a high-density foam base. On the other hand, the Classic Harmony Spring Mattress is priced at $1,299 and is available in Queen and King sizes. It features a robust inner spring construction and layers of plush padding. Both mattresses provide exceptional comfort and support, but the Classic Harmony Spring Mattress may be a better option if you prefer the traditional feel of an inner spring mattress. Do you have any other questions about these options?\n"
]
}
],
@@ -1272,66 +1127,14 @@
},
{
"cell_type": "code",
"execution_count": 34,
"execution_count": 32,
"metadata": {},
"outputs": [],
"source": [
"sales_agent.human_step(\n",
" \"Okay.I would like to order two Memory Foam mattresses in Twin size please.\"\n",
" \"Great, thanks, that's it. I will talk to my wife and call back if she is onboard. Have a good day!\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Conversation Stage: Close: Ask for the sale by proposing a next step. This could be a demo, a trial or a meeting with decision-makers. Ensure to summarize what has been discussed and reiterate the benefits.\n"
]
}
],
"source": [
"sales_agent.determine_conversation_stage()"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: Fantastic choice! You're on your way to a better night's sleep with our Luxury Cloud-Comfort Memory Foam Mattresses. I've generated a payment link for two Twin size mattresses for you. Here is the link to complete your purchase: https://buy.stripe.com/test_6oEg28e3V97BdDabJn. Is there anything else I can assist you with today? \n"
]
}
],
"source": [
"sales_agent.step()"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"sales_agent.human_step(\n",
" \"Great, thanks! I will discuss with my wife and will buy it if she is onboard. Have a good day!\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -1350,9 +1153,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 2
}
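The cells above drive the SalesGPT example by alternating three calls: `human_step` to record the buyer's message, `determine_conversation_stage` to re-classify where the dialogue sits in the sales funnel, and `step` to generate the agent's reply. A minimal sketch of that turn loop, assuming the `sales_agent` object constructed earlier in the notebook (the loop itself is illustrative, not part of the diff):

# Hedged sketch of the human/agent turn loop used in the cells above.
# Assumes sales_agent exposes human_step / determine_conversation_stage / step,
# as the notebook outputs show; run_turn is a name introduced for illustration.
def run_turn(sales_agent, user_message: str) -> None:
    sales_agent.human_step(user_message)        # record the human utterance
    sales_agent.determine_conversation_stage()  # re-classify the sales stage
    sales_agent.step()                          # produce the agent's next reply

for msg in [
    "What mattresses do you have and how much do they cost?",
    "Yea, compare and contrast those two options, please.",
]:
    run_turn(sales_agent, msg)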

View File

@@ -31,7 +31,7 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain langchain-elasticsearch lark openai elasticsearch pandas"
"!pip install langchain lark openai elasticsearch pandas"
]
},
{
@@ -1083,7 +1083,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_elasticsearch import ElasticsearchStore\n",
"from langchain_community.vectorstores import ElasticsearchStore\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()"

View File

@@ -22,8 +22,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"from langchain.agents import AgentExecutor, Tool, ZeroShotAgent, create_react_agent\n",
"from langchain.agents import AgentExecutor, Tool, ZeroShotAgent\n",
"from langchain.chains import LLMChain\n",
"from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory\n",
"from langchain.prompts import PromptTemplate\n",
@@ -85,7 +84,19 @@
"metadata": {},
"outputs": [],
"source": [
"prompt = hub.pull(\"hwchase17/react\")"
"prefix = \"\"\"Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:\"\"\"\n",
"suffix = \"\"\"Begin!\"\n",
"\n",
"{chat_history}\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\"\n",
"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools,\n",
" prefix=prefix,\n",
" suffix=suffix,\n",
" input_variables=[\"input\", \"chat_history\", \"agent_scratchpad\"],\n",
")"
]
},
{
@@ -103,14 +114,16 @@
"metadata": {},
"outputs": [],
"source": [
"model = OpenAI()\n",
"agent = create_react_agent(model, tools, prompt)\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory)"
"llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)\n",
"agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)\n",
"agent_chain = AgentExecutor.from_agent_and_tools(\n",
" agent=agent, tools=tools, verbose=True, memory=memory\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 36,
"execution_count": 6,
"id": "ca4bc1fb",
"metadata": {},
"outputs": [
@@ -120,15 +133,15 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mThought: I should research ChatGPT to answer this question.\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I should research ChatGPT to answer this question.\n",
"Action: Search\n",
"Action Input: \"ChatGPT\"\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3mNov 30, 2022 ... We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... ChatGPT. We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... Feb 2, 2023 ... ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after ... 2 days ago ... ChatGPT recently launched a new version of its own plagiarism detection tool, with hopes that it will squelch some of the criticism around how ... An API for accessing new AI models developed by OpenAI. Feb 19, 2023 ... ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You ... ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning with Human ... 3 days ago ... Visual ChatGPT connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting. Dec 1, 2022 ... ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a ...\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.\u001B[0m\n",
"Action Input: \"ChatGPT\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mNov 30, 2022 ... We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... ChatGPT. We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... Feb 2, 2023 ... ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after ... 2 days ago ... ChatGPT recently launched a new version of its own plagiarism detection tool, with hopes that it will squelch some of the criticism around how ... An API for accessing new AI models developed by OpenAI. Feb 19, 2023 ... ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You ... ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning with Human ... 3 days ago ... Visual ChatGPT connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting. Dec 1, 2022 ... ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -140,40 +153,10 @@
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
},
{
"ename": "KeyboardInterrupt",
"evalue": "",
"output_type": "error",
"traceback": [
"\u001B[0;31m---------------------------------------------------------------------------\u001B[0m",
"\u001B[0;31mKeyboardInterrupt\u001B[0m Traceback (most recent call last)",
"Cell \u001B[0;32mIn[36], line 1\u001B[0m\n\u001B[0;32m----> 1\u001B[0m \u001B[43magent_executor\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43minvoke\u001B[49m\u001B[43m(\u001B[49m\u001B[43m{\u001B[49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[38;5;124;43minput\u001B[39;49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[43m:\u001B[49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[38;5;124;43mWhat is ChatGPT?\u001B[39;49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[43m}\u001B[49m\u001B[43m)\u001B[49m\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/chains/base.py:163\u001B[0m, in \u001B[0;36mChain.invoke\u001B[0;34m(self, input, config, **kwargs)\u001B[0m\n\u001B[1;32m 161\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[1;32m 162\u001B[0m run_manager\u001B[38;5;241m.\u001B[39mon_chain_error(e)\n\u001B[0;32m--> 163\u001B[0m \u001B[38;5;28;01mraise\u001B[39;00m e\n\u001B[1;32m 164\u001B[0m run_manager\u001B[38;5;241m.\u001B[39mon_chain_end(outputs)\n\u001B[1;32m 166\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m include_run_info:\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/chains/base.py:153\u001B[0m, in \u001B[0;36mChain.invoke\u001B[0;34m(self, input, config, **kwargs)\u001B[0m\n\u001B[1;32m 150\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m 151\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_validate_inputs(inputs)\n\u001B[1;32m 152\u001B[0m outputs \u001B[38;5;241m=\u001B[39m (\n\u001B[0;32m--> 153\u001B[0m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_call\u001B[49m\u001B[43m(\u001B[49m\u001B[43minputs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 154\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m new_arg_supported\n\u001B[1;32m 155\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_call(inputs)\n\u001B[1;32m 156\u001B[0m )\n\u001B[1;32m 158\u001B[0m final_outputs: Dict[\u001B[38;5;28mstr\u001B[39m, Any] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mprep_outputs(\n\u001B[1;32m 159\u001B[0m inputs, outputs, return_only_outputs\n\u001B[1;32m 160\u001B[0m )\n\u001B[1;32m 161\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/agents/agent.py:1432\u001B[0m, in \u001B[0;36mAgentExecutor._call\u001B[0;34m(self, inputs, run_manager)\u001B[0m\n\u001B[1;32m 1430\u001B[0m \u001B[38;5;66;03m# We now enter the agent loop (until it returns something).\u001B[39;00m\n\u001B[1;32m 1431\u001B[0m \u001B[38;5;28;01mwhile\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_should_continue(iterations, time_elapsed):\n\u001B[0;32m-> 1432\u001B[0m next_step_output \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_take_next_step\u001B[49m\u001B[43m(\u001B[49m\n\u001B[1;32m 1433\u001B[0m \u001B[43m \u001B[49m\u001B[43mname_to_tool_map\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1434\u001B[0m \u001B[43m \u001B[49m\u001B[43mcolor_mapping\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1435\u001B[0m \u001B[43m \u001B[49m\u001B[43minputs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1436\u001B[0m \u001B[43m \u001B[49m\u001B[43mintermediate_steps\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1437\u001B[0m \u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1438\u001B[0m \u001B[43m \u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1439\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(next_step_output, AgentFinish):\n\u001B[1;32m 1440\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_return(\n\u001B[1;32m 1441\u001B[0m next_step_output, intermediate_steps, run_manager\u001B[38;5;241m=\u001B[39mrun_manager\n\u001B[1;32m 1442\u001B[0m )\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/agents/agent.py:1138\u001B[0m, in \u001B[0;36mAgentExecutor._take_next_step\u001B[0;34m(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\u001B[0m\n\u001B[1;32m 1129\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m_take_next_step\u001B[39m(\n\u001B[1;32m 1130\u001B[0m \u001B[38;5;28mself\u001B[39m,\n\u001B[1;32m 1131\u001B[0m name_to_tool_map: Dict[\u001B[38;5;28mstr\u001B[39m, BaseTool],\n\u001B[0;32m (...)\u001B[0m\n\u001B[1;32m 1135\u001B[0m run_manager: Optional[CallbackManagerForChainRun] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m,\n\u001B[1;32m 1136\u001B[0m ) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m Union[AgentFinish, List[Tuple[AgentAction, \u001B[38;5;28mstr\u001B[39m]]]:\n\u001B[1;32m 1137\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_consume_next_step(\n\u001B[0;32m-> 1138\u001B[0m [\n\u001B[1;32m 1139\u001B[0m a\n\u001B[1;32m 1140\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m a \u001B[38;5;129;01min\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_iter_next_step(\n\u001B[1;32m 1141\u001B[0m name_to_tool_map,\n\u001B[1;32m 1142\u001B[0m color_mapping,\n\u001B[1;32m 1143\u001B[0m inputs,\n\u001B[1;32m 1144\u001B[0m intermediate_steps,\n\u001B[1;32m 1145\u001B[0m run_manager,\n\u001B[1;32m 1146\u001B[0m )\n\u001B[1;32m 1147\u001B[0m ]\n\u001B[1;32m 1148\u001B[0m )\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/agents/agent.py:1138\u001B[0m, in \u001B[0;36m<listcomp>\u001B[0;34m(.0)\u001B[0m\n\u001B[1;32m 1129\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m_take_next_step\u001B[39m(\n\u001B[1;32m 1130\u001B[0m \u001B[38;5;28mself\u001B[39m,\n\u001B[1;32m 1131\u001B[0m name_to_tool_map: Dict[\u001B[38;5;28mstr\u001B[39m, BaseTool],\n\u001B[0;32m (...)\u001B[0m\n\u001B[1;32m 1135\u001B[0m run_manager: Optional[CallbackManagerForChainRun] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m,\n\u001B[1;32m 1136\u001B[0m ) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m Union[AgentFinish, List[Tuple[AgentAction, \u001B[38;5;28mstr\u001B[39m]]]:\n\u001B[1;32m 1137\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_consume_next_step(\n\u001B[0;32m-> 1138\u001B[0m [\n\u001B[1;32m 1139\u001B[0m a\n\u001B[1;32m 1140\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m a \u001B[38;5;129;01min\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_iter_next_step(\n\u001B[1;32m 1141\u001B[0m name_to_tool_map,\n\u001B[1;32m 1142\u001B[0m color_mapping,\n\u001B[1;32m 1143\u001B[0m inputs,\n\u001B[1;32m 1144\u001B[0m intermediate_steps,\n\u001B[1;32m 1145\u001B[0m run_manager,\n\u001B[1;32m 1146\u001B[0m )\n\u001B[1;32m 1147\u001B[0m ]\n\u001B[1;32m 1148\u001B[0m )\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/agents/agent.py:1223\u001B[0m, in \u001B[0;36mAgentExecutor._iter_next_step\u001B[0;34m(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\u001B[0m\n\u001B[1;32m 1221\u001B[0m \u001B[38;5;28;01myield\u001B[39;00m agent_action\n\u001B[1;32m 1222\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m agent_action \u001B[38;5;129;01min\u001B[39;00m actions:\n\u001B[0;32m-> 1223\u001B[0m \u001B[38;5;28;01myield\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_perform_agent_action\u001B[49m\u001B[43m(\u001B[49m\n\u001B[1;32m 1224\u001B[0m \u001B[43m \u001B[49m\u001B[43mname_to_tool_map\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mcolor_mapping\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43magent_action\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\n\u001B[1;32m 1225\u001B[0m \u001B[43m \u001B[49m\u001B[43m)\u001B[49m\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/agents/agent.py:1245\u001B[0m, in \u001B[0;36mAgentExecutor._perform_agent_action\u001B[0;34m(self, name_to_tool_map, color_mapping, agent_action, run_manager)\u001B[0m\n\u001B[1;32m 1243\u001B[0m tool_run_kwargs[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mllm_prefix\u001B[39m\u001B[38;5;124m\"\u001B[39m] \u001B[38;5;241m=\u001B[39m \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[1;32m 1244\u001B[0m \u001B[38;5;66;03m# We then call the tool on the tool input to get an observation\u001B[39;00m\n\u001B[0;32m-> 1245\u001B[0m observation \u001B[38;5;241m=\u001B[39m \u001B[43mtool\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mrun\u001B[49m\u001B[43m(\u001B[49m\n\u001B[1;32m 1246\u001B[0m \u001B[43m \u001B[49m\u001B[43magent_action\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mtool_input\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1247\u001B[0m \u001B[43m \u001B[49m\u001B[43mverbose\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mverbose\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1248\u001B[0m \u001B[43m \u001B[49m\u001B[43mcolor\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mcolor\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1249\u001B[0m \u001B[43m \u001B[49m\u001B[43mcallbacks\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mget_child\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mif\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43;01melse\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mNone\u001B[39;49;00m\u001B[43m,\u001B[49m\n\u001B[1;32m 1250\u001B[0m \u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mtool_run_kwargs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1251\u001B[0m \u001B[43m \u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1252\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[1;32m 1253\u001B[0m tool_run_kwargs \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39magent\u001B[38;5;241m.\u001B[39mtool_run_logging_kwargs()\n",
"File \u001B[0;32m~/code/langchain/libs/core/langchain_core/tools.py:422\u001B[0m, in \u001B[0;36mBaseTool.run\u001B[0;34m(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, **kwargs)\u001B[0m\n\u001B[1;32m 420\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m (\u001B[38;5;167;01mException\u001B[39;00m, \u001B[38;5;167;01mKeyboardInterrupt\u001B[39;00m) \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[1;32m 421\u001B[0m run_manager\u001B[38;5;241m.\u001B[39mon_tool_error(e)\n\u001B[0;32m--> 422\u001B[0m \u001B[38;5;28;01mraise\u001B[39;00m e\n\u001B[1;32m 423\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[1;32m 424\u001B[0m run_manager\u001B[38;5;241m.\u001B[39mon_tool_end(observation, color\u001B[38;5;241m=\u001B[39mcolor, name\u001B[38;5;241m=\u001B[39m\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mname, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n",
"File \u001B[0;32m~/code/langchain/libs/core/langchain_core/tools.py:381\u001B[0m, in \u001B[0;36mBaseTool.run\u001B[0;34m(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, **kwargs)\u001B[0m\n\u001B[1;32m 378\u001B[0m parsed_input \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_parse_input(tool_input)\n\u001B[1;32m 379\u001B[0m tool_args, tool_kwargs \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_to_args_and_kwargs(parsed_input)\n\u001B[1;32m 380\u001B[0m observation \u001B[38;5;241m=\u001B[39m (\n\u001B[0;32m--> 381\u001B[0m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_run\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mtool_args\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mtool_kwargs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 382\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m new_arg_supported\n\u001B[1;32m 383\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_run(\u001B[38;5;241m*\u001B[39mtool_args, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mtool_kwargs)\n\u001B[1;32m 384\u001B[0m )\n\u001B[1;32m 385\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m ValidationError \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[1;32m 386\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mhandle_validation_error:\n",
"File \u001B[0;32m~/code/langchain/libs/core/langchain_core/tools.py:588\u001B[0m, in \u001B[0;36mTool._run\u001B[0;34m(self, run_manager, *args, **kwargs)\u001B[0m\n\u001B[1;32m 579\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mfunc:\n\u001B[1;32m 580\u001B[0m new_argument_supported \u001B[38;5;241m=\u001B[39m signature(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mfunc)\u001B[38;5;241m.\u001B[39mparameters\u001B[38;5;241m.\u001B[39mget(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mcallbacks\u001B[39m\u001B[38;5;124m\"\u001B[39m)\n\u001B[1;32m 581\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m (\n\u001B[1;32m 582\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mfunc(\n\u001B[1;32m 583\u001B[0m \u001B[38;5;241m*\u001B[39margs,\n\u001B[1;32m 584\u001B[0m callbacks\u001B[38;5;241m=\u001B[39mrun_manager\u001B[38;5;241m.\u001B[39mget_child() \u001B[38;5;28;01mif\u001B[39;00m run_manager \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m,\n\u001B[1;32m 585\u001B[0m \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs,\n\u001B[1;32m 586\u001B[0m )\n\u001B[1;32m 587\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m new_argument_supported\n\u001B[0;32m--> 588\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mfunc\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 589\u001B[0m )\n\u001B[1;32m 590\u001B[0m \u001B[38;5;28;01mraise\u001B[39;00m \u001B[38;5;167;01mNotImplementedError\u001B[39;00m(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mTool does not support sync\u001B[39m\u001B[38;5;124m\"\u001B[39m)\n",
"File \u001B[0;32m~/code/langchain/libs/community/langchain_community/utilities/google_search.py:94\u001B[0m, in \u001B[0;36mGoogleSearchAPIWrapper.run\u001B[0;34m(self, query)\u001B[0m\n\u001B[1;32m 92\u001B[0m \u001B[38;5;250m\u001B[39m\u001B[38;5;124;03m\"\"\"Run query through GoogleSearch and parse result.\"\"\"\u001B[39;00m\n\u001B[1;32m 93\u001B[0m snippets \u001B[38;5;241m=\u001B[39m []\n\u001B[0;32m---> 94\u001B[0m results \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_google_search_results\u001B[49m\u001B[43m(\u001B[49m\u001B[43mquery\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mnum\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mk\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 95\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mlen\u001B[39m(results) \u001B[38;5;241m==\u001B[39m \u001B[38;5;241m0\u001B[39m:\n\u001B[1;32m 96\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mNo good Google Search Result was found\u001B[39m\u001B[38;5;124m\"\u001B[39m\n",
"File \u001B[0;32m~/code/langchain/libs/community/langchain_community/utilities/google_search.py:62\u001B[0m, in \u001B[0;36mGoogleSearchAPIWrapper._google_search_results\u001B[0;34m(self, search_term, **kwargs)\u001B[0m\n\u001B[1;32m 60\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39msiterestrict:\n\u001B[1;32m 61\u001B[0m cse \u001B[38;5;241m=\u001B[39m cse\u001B[38;5;241m.\u001B[39msiterestrict()\n\u001B[0;32m---> 62\u001B[0m res \u001B[38;5;241m=\u001B[39m \u001B[43mcse\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mlist\u001B[49m\u001B[43m(\u001B[49m\u001B[43mq\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43msearch_term\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mcx\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mgoogle_cse_id\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mexecute\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 63\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m res\u001B[38;5;241m.\u001B[39mget(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mitems\u001B[39m\u001B[38;5;124m\"\u001B[39m, [])\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/googleapiclient/_helpers.py:130\u001B[0m, in \u001B[0;36mpositional.<locals>.positional_decorator.<locals>.positional_wrapper\u001B[0;34m(*args, **kwargs)\u001B[0m\n\u001B[1;32m 128\u001B[0m \u001B[38;5;28;01melif\u001B[39;00m positional_parameters_enforcement \u001B[38;5;241m==\u001B[39m POSITIONAL_WARNING:\n\u001B[1;32m 129\u001B[0m logger\u001B[38;5;241m.\u001B[39mwarning(message)\n\u001B[0;32m--> 130\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[43mwrapped\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/googleapiclient/http.py:923\u001B[0m, in \u001B[0;36mHttpRequest.execute\u001B[0;34m(self, http, num_retries)\u001B[0m\n\u001B[1;32m 920\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mheaders[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mcontent-length\u001B[39m\u001B[38;5;124m\"\u001B[39m] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mstr\u001B[39m(\u001B[38;5;28mlen\u001B[39m(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mbody))\n\u001B[1;32m 922\u001B[0m \u001B[38;5;66;03m# Handle retries for server-side errors.\u001B[39;00m\n\u001B[0;32m--> 923\u001B[0m resp, content \u001B[38;5;241m=\u001B[39m \u001B[43m_retry_request\u001B[49m\u001B[43m(\u001B[49m\n\u001B[1;32m 924\u001B[0m \u001B[43m \u001B[49m\u001B[43mhttp\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 925\u001B[0m \u001B[43m \u001B[49m\u001B[43mnum_retries\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 926\u001B[0m \u001B[43m \u001B[49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[38;5;124;43mrequest\u001B[39;49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[43m,\u001B[49m\n\u001B[1;32m 927\u001B[0m \u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_sleep\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 928\u001B[0m \u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_rand\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 929\u001B[0m \u001B[43m \u001B[49m\u001B[38;5;28;43mstr\u001B[39;49m\u001B[43m(\u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43muri\u001B[49m\u001B[43m)\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 930\u001B[0m \u001B[43m \u001B[49m\u001B[43mmethod\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mstr\u001B[39;49m\u001B[43m(\u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mmethod\u001B[49m\u001B[43m)\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 931\u001B[0m \u001B[43m \u001B[49m\u001B[43mbody\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mbody\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 932\u001B[0m \u001B[43m \u001B[49m\u001B[43mheaders\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mheaders\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 933\u001B[0m \u001B[43m\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 935\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m callback \u001B[38;5;129;01min\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mresponse_callbacks:\n\u001B[1;32m 936\u001B[0m callback(resp)\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/googleapiclient/http.py:191\u001B[0m, in \u001B[0;36m_retry_request\u001B[0;34m(http, num_retries, req_type, sleep, rand, uri, method, *args, **kwargs)\u001B[0m\n\u001B[1;32m 189\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m 190\u001B[0m exception \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m\n\u001B[0;32m--> 191\u001B[0m resp, content \u001B[38;5;241m=\u001B[39m \u001B[43mhttp\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mrequest\u001B[49m\u001B[43m(\u001B[49m\u001B[43muri\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mmethod\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 192\u001B[0m \u001B[38;5;66;03m# Retry on SSL errors and socket timeout errors.\u001B[39;00m\n\u001B[1;32m 193\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m _ssl_SSLError \u001B[38;5;28;01mas\u001B[39;00m ssl_error:\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/httplib2/__init__.py:1724\u001B[0m, in \u001B[0;36mHttp.request\u001B[0;34m(self, uri, method, body, headers, redirections, connection_type)\u001B[0m\n\u001B[1;32m 1722\u001B[0m content \u001B[38;5;241m=\u001B[39m \u001B[38;5;124mb\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[1;32m 1723\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[0;32m-> 1724\u001B[0m (response, content) \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_request\u001B[49m\u001B[43m(\u001B[49m\n\u001B[1;32m 1725\u001B[0m \u001B[43m \u001B[49m\u001B[43mconn\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mauthority\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43muri\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrequest_uri\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mmethod\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mbody\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mheaders\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mredirections\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mcachekey\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1726\u001B[0m \u001B[43m \u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1727\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[1;32m 1728\u001B[0m is_timeout \u001B[38;5;241m=\u001B[39m \u001B[38;5;28misinstance\u001B[39m(e, socket\u001B[38;5;241m.\u001B[39mtimeout)\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/httplib2/__init__.py:1444\u001B[0m, in \u001B[0;36mHttp._request\u001B[0;34m(self, conn, host, absolute_uri, request_uri, method, body, headers, redirections, cachekey)\u001B[0m\n\u001B[1;32m 1441\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m auth:\n\u001B[1;32m 1442\u001B[0m auth\u001B[38;5;241m.\u001B[39mrequest(method, request_uri, headers, body)\n\u001B[0;32m-> 1444\u001B[0m (response, content) \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_conn_request\u001B[49m\u001B[43m(\u001B[49m\u001B[43mconn\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrequest_uri\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mmethod\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mbody\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mheaders\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1446\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m auth:\n\u001B[1;32m 1447\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m auth\u001B[38;5;241m.\u001B[39mresponse(response, body):\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/httplib2/__init__.py:1366\u001B[0m, in \u001B[0;36mHttp._conn_request\u001B[0;34m(self, conn, request_uri, method, body, headers)\u001B[0m\n\u001B[1;32m 1364\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m 1365\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m conn\u001B[38;5;241m.\u001B[39msock \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[0;32m-> 1366\u001B[0m \u001B[43mconn\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mconnect\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1367\u001B[0m conn\u001B[38;5;241m.\u001B[39mrequest(method, request_uri, body, headers)\n\u001B[1;32m 1368\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m socket\u001B[38;5;241m.\u001B[39mtimeout:\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/httplib2/__init__.py:1156\u001B[0m, in \u001B[0;36mHTTPSConnectionWithTimeout.connect\u001B[0;34m(self)\u001B[0m\n\u001B[1;32m 1154\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m has_timeout(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mtimeout):\n\u001B[1;32m 1155\u001B[0m sock\u001B[38;5;241m.\u001B[39msettimeout(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mtimeout)\n\u001B[0;32m-> 1156\u001B[0m \u001B[43msock\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mconnect\u001B[49m\u001B[43m(\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mhost\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mport\u001B[49m\u001B[43m)\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1158\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39msock \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_context\u001B[38;5;241m.\u001B[39mwrap_socket(sock, server_hostname\u001B[38;5;241m=\u001B[39m\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mhost)\n\u001B[1;32m 1160\u001B[0m \u001B[38;5;66;03m# Python 3.3 compatibility: emulate the check_hostname behavior\u001B[39;00m\n",
"\u001B[0;31mKeyboardInterrupt\u001B[0m: "
]
}
],
"source": [
"agent_executor.invoke({\"input\": \"What is ChatGPT?\"})"
"agent_chain.run(input=\"What is ChatGPT?\")"
]
},
{
@@ -196,15 +179,15 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mThought: I need to find out who developed ChatGPT\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out who developed ChatGPT\n",
"Action: Search\n",
"Action Input: Who developed ChatGPT\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3mChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... Feb 15, 2023 ... Who owns Chat GPT? Chat GPT is owned and developed by AI research and deployment company, OpenAI. The organization is headquartered in San ... Feb 8, 2023 ... ChatGPT is an AI chatbot developed by San Francisco-based startup OpenAI. OpenAI was co-founded in 2015 by Elon Musk and Sam Altman and is ... Dec 7, 2022 ... ChatGPT is an AI chatbot designed and developed by OpenAI. The bot works by generating text responses based on human-user input, like questions ... Jan 12, 2023 ... In 2019, Microsoft invested $1 billion in OpenAI, the tiny San Francisco company that designed ChatGPT. And in the years since, it has quietly ... Jan 25, 2023 ... The inside story of ChatGPT: How OpenAI founder Sam Altman built the world's hottest technology with billions from Microsoft. Dec 3, 2022 ... ChatGPT went viral on social media for its ability to do anything from code to write essays. · The company that created the AI chatbot has a ... Jan 17, 2023 ... While many Americans were nursing hangovers on New Year's Day, 22-year-old Edward Tian was working feverishly on a new app to combat misuse ... ChatGPT is a language model created by OpenAI, an artificial intelligence research laboratory consisting of a team of researchers and engineers focused on ... 1 day ago ... Everyone is talking about ChatGPT, developed by OpenAI. This is such a great tool that has helped to make AI more accessible to a wider ...\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m I now know the final answer\n",
"Final Answer: ChatGPT was developed by OpenAI.\u001B[0m\n",
"Action Input: Who developed ChatGPT\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... Feb 15, 2023 ... Who owns Chat GPT? Chat GPT is owned and developed by AI research and deployment company, OpenAI. The organization is headquartered in San ... Feb 8, 2023 ... ChatGPT is an AI chatbot developed by San Francisco-based startup OpenAI. OpenAI was co-founded in 2015 by Elon Musk and Sam Altman and is ... Dec 7, 2022 ... ChatGPT is an AI chatbot designed and developed by OpenAI. The bot works by generating text responses based on human-user input, like questions ... Jan 12, 2023 ... In 2019, Microsoft invested $1 billion in OpenAI, the tiny San Francisco company that designed ChatGPT. And in the years since, it has quietly ... Jan 25, 2023 ... The inside story of ChatGPT: How OpenAI founder Sam Altman built the world's hottest technology with billions from Microsoft. Dec 3, 2022 ... ChatGPT went viral on social media for its ability to do anything from code to write essays. · The company that created the AI chatbot has a ... Jan 17, 2023 ... While many Americans were nursing hangovers on New Year's Day, 22-year-old Edward Tian was working feverishly on a new app to combat misuse ... ChatGPT is a language model created by OpenAI, an artificial intelligence research laboratory consisting of a team of researchers and engineers focused on ... 1 day ago ... Everyone is talking about ChatGPT, developed by OpenAI. This is such a great tool that has helped to make AI more accessible to a wider ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: ChatGPT was developed by OpenAI.\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -219,7 +202,7 @@
}
],
"source": [
"agent_executor.invoke({\"input\": \"Who developed it?\"})"
"agent_chain.run(input=\"Who developed it?\")"
]
},
{
@@ -234,14 +217,14 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mThought: I need to simplify the conversation for a 5 year old.\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to simplify the conversation for a 5 year old.\n",
"Action: Summary\n",
"Action Input: My daughter 5 years old\u001B[0m\n",
"Action Input: My daughter 5 years old\u001b[0m\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThis is a conversation between a human and a bot:\n",
"\u001b[32;1m\u001b[1;3mThis is a conversation between a human and a bot:\n",
"\n",
"Human: What is ChatGPT?\n",
"AI: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.\n",
@@ -249,16 +232,16 @@
"AI: ChatGPT was developed by OpenAI.\n",
"\n",
"Write a summary of the conversation for My daughter 5 years old:\n",
"\u001B[0m\n",
"\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001B[33;1m\u001B[1;3m\n",
"The conversation was about ChatGPT, an artificial intelligence chatbot. It was created by OpenAI and can send and receive images while chatting.\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot created by OpenAI that can send and receive images while chatting.\u001B[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m\n",
"The conversation was about ChatGPT, an artificial intelligence chatbot. It was created by OpenAI and can send and receive images while chatting.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot created by OpenAI that can send and receive images while chatting.\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -273,8 +256,8 @@
}
],
"source": [
"agent_executor.invoke(\n",
" {\"input\": \"Thanks. Summarize the conversation, for my daughter 5 years old.\"}\n",
"agent_chain.run(\n",
" input=\"Thanks. Summarize the conversation, for my daughter 5 years old.\"\n",
")"
]
},
@@ -306,17 +289,9 @@
}
],
"source": [
"print(agent_executor.memory.buffer)"
"print(agent_chain.memory.buffer)"
]
},
{
"cell_type": "markdown",
"id": "84ca95c30e262e00",
"metadata": {
"collapsed": false
},
"source": []
},
{
"cell_type": "markdown",
"id": "cc3d0aa4",
@@ -365,9 +340,25 @@
" ),\n",
"]\n",
"\n",
"prompt = hub.pull(\"hwchase17/react\")\n",
"agent = create_react_agent(model, tools, prompt)\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory)"
"prefix = \"\"\"Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:\"\"\"\n",
"suffix = \"\"\"Begin!\"\n",
"\n",
"{chat_history}\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\"\n",
"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools,\n",
" prefix=prefix,\n",
" suffix=suffix,\n",
" input_variables=[\"input\", \"chat_history\", \"agent_scratchpad\"],\n",
")\n",
"\n",
"llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)\n",
"agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)\n",
"agent_chain = AgentExecutor.from_agent_and_tools(\n",
" agent=agent, tools=tools, verbose=True, memory=memory\n",
")"
]
},
{
@@ -382,15 +373,15 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mThought: I should research ChatGPT to answer this question.\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I should research ChatGPT to answer this question.\n",
"Action: Search\n",
"Action Input: \"ChatGPT\"\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3mNov 30, 2022 ... We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... ChatGPT. We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... Feb 2, 2023 ... ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after ... 2 days ago ... ChatGPT recently launched a new version of its own plagiarism detection tool, with hopes that it will squelch some of the criticism around how ... An API for accessing new AI models developed by OpenAI. Feb 19, 2023 ... ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You ... ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning with Human ... 3 days ago ... Visual ChatGPT connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting. Dec 1, 2022 ... ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a ...\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.\u001B[0m\n",
"Action Input: \"ChatGPT\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mNov 30, 2022 ... We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... ChatGPT. We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... Feb 2, 2023 ... ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after ... 2 days ago ... ChatGPT recently launched a new version of its own plagiarism detection tool, with hopes that it will squelch some of the criticism around how ... An API for accessing new AI models developed by OpenAI. Feb 19, 2023 ... ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You ... ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning with Human ... 3 days ago ... Visual ChatGPT connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting. Dec 1, 2022 ... ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -405,7 +396,7 @@
}
],
"source": [
"agent_executor.invoke({\"input\": \"What is ChatGPT?\"})"
"agent_chain.run(input=\"What is ChatGPT?\")"
]
},
{
@@ -420,15 +411,15 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mThought: I need to find out who developed ChatGPT\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out who developed ChatGPT\n",
"Action: Search\n",
"Action Input: Who developed ChatGPT\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3mChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... Feb 15, 2023 ... Who owns Chat GPT? Chat GPT is owned and developed by AI research and deployment company, OpenAI. The organization is headquartered in San ... Feb 8, 2023 ... ChatGPT is an AI chatbot developed by San Francisco-based startup OpenAI. OpenAI was co-founded in 2015 by Elon Musk and Sam Altman and is ... Dec 7, 2022 ... ChatGPT is an AI chatbot designed and developed by OpenAI. The bot works by generating text responses based on human-user input, like questions ... Jan 12, 2023 ... In 2019, Microsoft invested $1 billion in OpenAI, the tiny San Francisco company that designed ChatGPT. And in the years since, it has quietly ... Jan 25, 2023 ... The inside story of ChatGPT: How OpenAI founder Sam Altman built the world's hottest technology with billions from Microsoft. Dec 3, 2022 ... ChatGPT went viral on social media for its ability to do anything from code to write essays. · The company that created the AI chatbot has a ... Jan 17, 2023 ... While many Americans were nursing hangovers on New Year's Day, 22-year-old Edward Tian was working feverishly on a new app to combat misuse ... ChatGPT is a language model created by OpenAI, an artificial intelligence research laboratory consisting of a team of researchers and engineers focused on ... 1 day ago ... Everyone is talking about ChatGPT, developed by OpenAI. This is such a great tool that has helped to make AI more accessible to a wider ...\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m I now know the final answer\n",
"Final Answer: ChatGPT was developed by OpenAI.\u001B[0m\n",
"Action Input: Who developed ChatGPT\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... Feb 15, 2023 ... Who owns Chat GPT? Chat GPT is owned and developed by AI research and deployment company, OpenAI. The organization is headquartered in San ... Feb 8, 2023 ... ChatGPT is an AI chatbot developed by San Francisco-based startup OpenAI. OpenAI was co-founded in 2015 by Elon Musk and Sam Altman and is ... Dec 7, 2022 ... ChatGPT is an AI chatbot designed and developed by OpenAI. The bot works by generating text responses based on human-user input, like questions ... Jan 12, 2023 ... In 2019, Microsoft invested $1 billion in OpenAI, the tiny San Francisco company that designed ChatGPT. And in the years since, it has quietly ... Jan 25, 2023 ... The inside story of ChatGPT: How OpenAI founder Sam Altman built the world's hottest technology with billions from Microsoft. Dec 3, 2022 ... ChatGPT went viral on social media for its ability to do anything from code to write essays. · The company that created the AI chatbot has a ... Jan 17, 2023 ... While many Americans were nursing hangovers on New Year's Day, 22-year-old Edward Tian was working feverishly on a new app to combat misuse ... ChatGPT is a language model created by OpenAI, an artificial intelligence research laboratory consisting of a team of researchers and engineers focused on ... 1 day ago ... Everyone is talking about ChatGPT, developed by OpenAI. This is such a great tool that has helped to make AI more accessible to a wider ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: ChatGPT was developed by OpenAI.\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -443,7 +434,7 @@
}
],
"source": [
"agent_executor.invoke({\"input\": \"Who developed it?\"})"
"agent_chain.run(input=\"Who developed it?\")"
]
},
{
@@ -458,14 +449,14 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mThought: I need to simplify the conversation for a 5 year old.\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to simplify the conversation for a 5 year old.\n",
"Action: Summary\n",
"Action Input: My daughter 5 years old\u001B[0m\n",
"Action Input: My daughter 5 years old\u001b[0m\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThis is a conversation between a human and a bot:\n",
"\u001b[32;1m\u001b[1;3mThis is a conversation between a human and a bot:\n",
"\n",
"Human: What is ChatGPT?\n",
"AI: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.\n",
@@ -473,16 +464,16 @@
"AI: ChatGPT was developed by OpenAI.\n",
"\n",
"Write a summary of the conversation for My daughter 5 years old:\n",
"\u001B[0m\n",
"\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001B[33;1m\u001B[1;3m\n",
"The conversation was about ChatGPT, an artificial intelligence chatbot developed by OpenAI. It is designed to have conversations with humans and can also send and receive images.\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI that can have conversations with humans and send and receive images.\u001B[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m\n",
"The conversation was about ChatGPT, an artificial intelligence chatbot developed by OpenAI. It is designed to have conversations with humans and can also send and receive images.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI that can have conversations with humans and send and receive images.\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -497,8 +488,8 @@
}
],
"source": [
"agent_executor.invoke(\n",
" {\"input\": \"Thanks. Summarize the conversation, for my daughter 5 years old.\"}\n",
"agent_chain.run(\n",
" input=\"Thanks. Summarize the conversation, for my daughter 5 years old.\"\n",
")"
]
},
@@ -533,7 +524,7 @@
}
],
"source": [
"print(agent_executor.memory.buffer)"
"print(agent_chain.memory.buffer)"
]
}
],
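Both versions of this notebook thread the same `memory` object through the agent executor, and the imports at the top include `ConversationBufferMemory` and `ReadOnlySharedMemory`. A hedged sketch of how that shared buffer and its read-only view are typically wired up (the `memory_key` matches the `{chat_history}` placeholder in the prompt; the commented summary-chain line is illustrative, with `summary_prompt` standing in for the prompt defined earlier in the notebook):

# Hedged sketch: one shared buffer for the agent, a read-only view for sub-chains.
from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory

memory = ConversationBufferMemory(memory_key="chat_history")  # fills {chat_history}
readonly_memory = ReadOnlySharedMemory(memory=memory)

# A "Summary" tool's chain can then read the transcript without mutating it:
# summary_chain = LLMChain(llm=OpenAI(temperature=0), prompt=summary_prompt,
#                          memory=readonly_memory)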

View File

@@ -209,7 +209,7 @@
}
],
"source": [
"chain.invoke({})"
"chain.run({})"
]
},
{

View File

@@ -9,7 +9,7 @@
" \n",
"[Together AI](https://python.langchain.com/docs/integrations/llms/together) has a broad set of OSS LLMs via inference API.\n",
"\n",
"See [here](https://docs.together.ai/docs/inference-models). We use `\"mistralai/Mixtral-8x7B-Instruct-v0.1` for RAG on the Mixtral paper.\n",
"See [here](https://api.together.xyz/playground). We use `\"mistralai/Mixtral-8x7B-Instruct-v0.1` for RAG on the Mixtral paper.\n",
"\n",
"Download the paper:\n",
"https://arxiv.org/pdf/2401.04088.pdf"
@@ -39,7 +39,7 @@
"data = loader.load()\n",
"\n",
"# Split\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
@@ -148,7 +148,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
"version": "3.9.16"
}
},
"nbformat": 4,

View File

@@ -1,199 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 2,
"id": "c48812ed-35bd-4fbe-9a2c-6c7335e5645e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_core.runnables import ConfigurableField\n",
"from langchain_core.tools import tool\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"@tool\n",
"def multiply(x: float, y: float) -> float:\n",
" \"\"\"Multiply 'x' times 'y'.\"\"\"\n",
" return x * y\n",
"\n",
"\n",
"@tool\n",
"def exponentiate(x: float, y: float) -> float:\n",
" \"\"\"Raise 'x' to the 'y'.\"\"\"\n",
" return x**y\n",
"\n",
"\n",
"@tool\n",
"def add(x: float, y: float) -> float:\n",
" \"\"\"Add 'x' and 'y'.\"\"\"\n",
" return x + y\n",
"\n",
"\n",
"tools = [multiply, exponentiate, add]\n",
"\n",
"gpt35 = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0).bind_tools(tools)\n",
"claude3 = ChatAnthropic(model=\"claude-3-sonnet-20240229\").bind_tools(tools)\n",
"llm_with_tools = gpt35.configurable_alternatives(\n",
" ConfigurableField(id=\"llm\"), default_key=\"gpt35\", claude3=claude3\n",
")"
]
},
{
"cell_type": "markdown",
"id": "9c186263-1b98-4cb2-b6d1-71f65eb0d811",
"metadata": {},
"source": [
"# LangGraph"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "28fc2c60-7dbc-428a-8983-1a6a15ea30d2",
"metadata": {},
"outputs": [],
"source": [
"import operator\n",
"from typing import Annotated, Sequence, TypedDict\n",
"\n",
"from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, ToolMessage\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langgraph.graph import END, StateGraph\n",
"\n",
"\n",
"class AgentState(TypedDict):\n",
" messages: Annotated[Sequence[BaseMessage], operator.add]\n",
"\n",
"\n",
"def should_continue(state):\n",
" return \"continue\" if state[\"messages\"][-1].tool_calls else \"end\"\n",
"\n",
"\n",
"def call_model(state, config):\n",
" return {\"messages\": [llm_with_tools.invoke(state[\"messages\"], config=config)]}\n",
"\n",
"\n",
"def _invoke_tool(tool_call):\n",
" tool = {tool.name: tool for tool in tools}[tool_call[\"name\"]]\n",
" return ToolMessage(tool.invoke(tool_call[\"args\"]), tool_call_id=tool_call[\"id\"])\n",
"\n",
"\n",
"tool_executor = RunnableLambda(_invoke_tool)\n",
"\n",
"\n",
"def call_tools(state):\n",
" last_message = state[\"messages\"][-1]\n",
" return {\"messages\": tool_executor.batch(last_message.tool_calls)}\n",
"\n",
"\n",
"workflow = StateGraph(AgentState)\n",
"workflow.add_node(\"agent\", call_model)\n",
"workflow.add_node(\"action\", call_tools)\n",
"workflow.set_entry_point(\"agent\")\n",
"workflow.add_conditional_edges(\n",
" \"agent\",\n",
" should_continue,\n",
" {\n",
" \"continue\": \"action\",\n",
" \"end\": END,\n",
" },\n",
")\n",
"workflow.add_edge(\"action\", \"agent\")\n",
"graph = workflow.compile()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "3710e724-2595-4625-ba3a-effb81e66e4a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6yMU2WsS4Bqgi1WxFHxtfJRc', 'function': {'arguments': '{\"x\": 8, \"y\": 2.743}', 'name': 'exponentiate'}, 'type': 'function'}, {'id': 'call_GAL3dQiKFF9XEV0RrRLPTvVp', 'function': {'arguments': '{\"x\": 17.24, \"y\": -918.1241}', 'name': 'add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 58, 'prompt_tokens': 168, 'total_tokens': 226}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-528302fc-7acf-4c11-82c4-119ccf40c573-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 8, 'y': 2.743}, 'id': 'call_6yMU2WsS4Bqgi1WxFHxtfJRc'}, {'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'call_GAL3dQiKFF9XEV0RrRLPTvVp'}]),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='call_6yMU2WsS4Bqgi1WxFHxtfJRc'),\n",
" ToolMessage(content='-900.8841', tool_call_id='call_GAL3dQiKFF9XEV0RrRLPTvVp'),\n",
" AIMessage(content='The result of \\\\(3 + 5^{2.743}\\\\) is approximately 300.04, and the result of \\\\(17.24 - 918.1241\\\\) is approximately -900.88.', response_metadata={'token_usage': {'completion_tokens': 44, 'prompt_tokens': 251, 'total_tokens': 295}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-d1161669-ed09-4b18-94bd-6d8530df5aa8-0')]}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"graph.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(\n",
" \"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"\n",
" )\n",
" ]\n",
" }\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "073c074e-d722-42e0-85ec-c62c079207e4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"),\n",
" AIMessage(content=[{'text': \"Okay, let's break this down into two parts:\", 'type': 'text'}, {'id': 'toolu_01DEhqcXkXTtzJAiZ7uMBeDC', 'input': {'x': 3, 'y': 5}, 'name': 'add', 'type': 'tool_use'}], response_metadata={'id': 'msg_01AkLGH8sxMHaH15yewmjwkF', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 450, 'output_tokens': 81}}, id='run-f35bfae8-8ded-4f8a-831b-0940d6ad16b6-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 5}, 'id': 'toolu_01DEhqcXkXTtzJAiZ7uMBeDC'}]),\n",
" ToolMessage(content='8.0', tool_call_id='toolu_01DEhqcXkXTtzJAiZ7uMBeDC'),\n",
" AIMessage(content=[{'id': 'toolu_013DyMLrvnrto33peAKMGMr1', 'input': {'x': 8.0, 'y': 2.743}, 'name': 'exponentiate', 'type': 'tool_use'}], response_metadata={'id': 'msg_015Fmp8aztwYcce2JDAFfce3', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 545, 'output_tokens': 75}}, id='run-48aaeeeb-a1e5-48fd-a57a-6c3da2907b47-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 8.0, 'y': 2.743}, 'id': 'toolu_013DyMLrvnrto33peAKMGMr1'}]),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='toolu_013DyMLrvnrto33peAKMGMr1'),\n",
" AIMessage(content=[{'text': 'So 3 plus 5 raised to the 2.743 power is 300.04.\\n\\nFor the second part:', 'type': 'text'}, {'id': 'toolu_01UTmMrGTmLpPrPCF1rShN46', 'input': {'x': 17.24, 'y': -918.1241}, 'name': 'add', 'type': 'tool_use'}], response_metadata={'id': 'msg_015TkhfRBENPib2RWAxkieH6', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 638, 'output_tokens': 105}}, id='run-45fb62e3-d102-4159-881d-241c5dbadeed-0', tool_calls=[{'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'toolu_01UTmMrGTmLpPrPCF1rShN46'}]),\n",
" ToolMessage(content='-900.8841', tool_call_id='toolu_01UTmMrGTmLpPrPCF1rShN46'),\n",
" AIMessage(content='Therefore, 17.24 - 918.1241 = -900.8841', response_metadata={'id': 'msg_01LgKnRuUcSyADCpxv9tPoYD', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 759, 'output_tokens': 24}}, id='run-1008254e-ccd1-497c-8312-9550dd77bd08-0')]}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"graph.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(\n",
" \"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"\n",
" )\n",
" ]\n",
" },\n",
" config={\"configurable\": {\"llm\": \"claude3\"}},\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -2610,7 +2610,7 @@
}
],
"source": [
"from langchain_text_splitters import CharacterTextSplitter\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(docs)"
@@ -3811,7 +3811,7 @@
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo-0613\") # switch to 'gpt-4'\n",
"model = ChatOpenAI(model_name=\"gpt-3.5-turbo-0613\") # switch to 'gpt-4'\n",
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
]
},

View File

@@ -424,7 +424,7 @@
" DialogueAgentWithTools(\n",
" name=name,\n",
" system_message=SystemMessage(content=system_message),\n",
" model=ChatOpenAI(model=\"gpt-4\", temperature=0.2),\n",
" model=ChatOpenAI(model_name=\"gpt-4\", temperature=0.2),\n",
" tool_names=tools,\n",
" top_k_results=2,\n",
" )\n",

View File

@@ -1,174 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Video Captioning\n",
"This notebook shows how to use VideoCaptioningChain, which is implemented using Langchain's ImageCaptionLoader and AssemblyAI to produce .srt files.\n",
"\n",
"This system autogenerates both subtitles and closed captions from a video URL."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installing Dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# !pip install ffmpeg-python\n",
"# !pip install assemblyai\n",
"# !pip install opencv-python\n",
"# !pip install torch\n",
"# !pip install pillow\n",
"# !pip install transformers\n",
"# !pip install langchain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-30T03:39:14.078232Z",
"start_time": "2023-11-30T03:39:12.534410Z"
}
},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"from langchain.chains.video_captioning import VideoCaptioningChain\n",
"from langchain.chat_models.openai import ChatOpenAI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setting up API Keys"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-30T03:39:17.423806Z",
"start_time": "2023-11-30T03:39:17.417945Z"
}
},
"outputs": [],
"source": [
"OPENAI_API_KEY = getpass.getpass(\"OpenAI API Key:\")\n",
"\n",
"ASSEMBLYAI_API_KEY = getpass.getpass(\"AssemblyAI API Key:\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Required parameters:**\n",
"\n",
"* llm: The language model this chain will use to get suggestions on how to refine the closed-captions\n",
"* assemblyai_key: The API key for AssemblyAI, used to generate the subtitles\n",
"\n",
"**Optional Parameters:**\n",
"\n",
"* verbose (Default: True): Sets verbose mode for downstream chain calls\n",
"* use_logging (Default: True): Log the chain's processes in run manager\n",
"* frame_skip (Default: None): Choose how many video frames to skip during processing. Increasing it results in faster execution, but less accurate results. If None, frame skip is calculated manually based on the framerate Set this to 0 to sample all frames\n",
"* image_delta_threshold (Default: 3000000): Set the sensitivity for what the image processor considers a change in scenery in the video, used to delimit closed captions. Higher = less sensitive\n",
"* closed_caption_char_limit (Default: 20): Sets the character limit on closed captions\n",
"* closed_caption_similarity_threshold (Default: 80): Sets the percentage value to how similar two closed caption models should be in order to be clustered into one longer closed caption\n",
"* use_unclustered_video_models (Default: False): If true, closed captions that could not be clustered will be included. May result in spontaneous behaviour from closed captions such as very short lasting captions or fast-changing captions. Enabling this is experimental and not recommended"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# https://ia804703.us.archive.org/27/items/uh-oh-here-we-go-again/Uh-Oh%2C%20Here%20we%20go%20again.mp4\n",
"# https://ia601200.us.archive.org/9/items/f58703d4-61e6-4f8f-8c08-b42c7e16f7cb/f58703d4-61e6-4f8f-8c08-b42c7e16f7cb.mp4\n",
"\n",
"chain = VideoCaptioningChain(\n",
" llm=ChatOpenAI(model=\"gpt-4\", max_tokens=4000, openai_api_key=OPENAI_API_KEY),\n",
" assemblyai_key=ASSEMBLYAI_API_KEY,\n",
")\n",
"\n",
"srt_content = chain.run(\n",
" video_file_path=\"https://ia601200.us.archive.org/9/items/f58703d4-61e6-4f8f-8c08-b42c7e16f7cb/f58703d4-61e6-4f8f-8c08-b42c7e16f7cb.mp4\"\n",
")\n",
"\n",
"print(srt_content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Writing output to .srt file"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"with open(\"output.srt\", \"w\") as file:\n",
" file.write(srt_content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "myenv",
"language": "python",
"name": "myenv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
},
"vscode": {
"interpreter": {
"hash": "b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -601,7 +601,7 @@
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4\", temperature=0)"
"llm = ChatOpenAI(model_name=\"gpt-4\", temperature=0)"
]
},
{

View File

@@ -1,17 +1,12 @@
# docker-compose to make it easier to spin up integration tests.
# Services should use NON standard ports to avoid collision with
# any existing services that might be used for development.
# ATTENTION: When adding a service below, use a non-standard port;
# increment by one from the preceding port.
# For credentials always use `langchain` and `langchain` for the
# username and password.
version: "3"
name: langchain-tests
services:
redis:
image: redis/redis-stack-server:latest
# We use non standard ports since
# We use non standard ports since
# these instances are used for testing
# and users may already have existing
# redis instances set up locally
@@ -24,61 +19,3 @@ services:
image: graphdb
ports:
- "6021:7200"
mongo:
image: mongo:latest
container_name: mongo_container
ports:
- "6022:27017"
environment:
MONGO_INITDB_ROOT_USERNAME: langchain
MONGO_INITDB_ROOT_PASSWORD: langchain
postgres:
image: postgres:16
environment:
POSTGRES_DB: langchain
POSTGRES_USER: langchain
POSTGRES_PASSWORD: langchain
ports:
- "6023:5432"
command: |
postgres -c log_statement=all
healthcheck:
test:
[
"CMD-SHELL",
"psql postgresql://langchain:langchain@localhost/langchain --command 'SELECT 1;' || exit 1",
]
interval: 5s
retries: 60
volumes:
- postgres_data:/var/lib/postgresql/data
pgvector:
# postgres with the pgvector extension
image: ankane/pgvector
environment:
POSTGRES_DB: langchain
POSTGRES_USER: langchain
POSTGRES_PASSWORD: langchain
ports:
- "6024:5432"
command: |
postgres -c log_statement=all
healthcheck:
test:
[
"CMD-SHELL",
"psql postgresql://langchain:langchain@localhost/langchain --command 'SELECT 1;' || exit 1",
]
interval: 5s
retries: 60
volumes:
- postgres_data_pgvector:/var/lib/postgresql/data
vdms:
image: intellabs/vdms:latest
container_name: vdms_container
ports:
- "6025:55555"
volumes:
postgres_data:
postgres_data_pgvector:

docs/.gitignore
View File

@@ -1,2 +0,0 @@
/.quarto/
src/supabase.d.ts

View File

@@ -1 +0,0 @@
nodeLinker: node-modules

View File

@@ -1,5 +1,4 @@
"""Configuration file for the Sphinx documentation builder."""
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
@@ -175,6 +174,3 @@ myst_enable_extensions = ["colon_fence"]
# generate autosummary even if no references
autosummary_generate = True
html_copy_source = False
html_show_sourcelink = False

View File

@@ -3,7 +3,6 @@
import importlib
import inspect
import os
import sys
import typing
from enum import Enum
from pathlib import Path
@@ -307,14 +306,7 @@ def _package_namespace(package_name: str) -> str:
def _package_dir(package_name: str = "langchain") -> Path:
"""Return the path to the directory containing the documentation."""
if package_name in (
"langchain",
"experimental",
"community",
"core",
"cli",
"text-splitters",
):
if package_name in ("langchain", "experimental", "community", "core", "cli"):
return ROOT_DIR / "libs" / package_name / _package_namespace(package_name)
else:
return (
@@ -352,29 +344,28 @@ def _doc_first_line(package_name: str) -> str:
return f".. {package_name.replace('-', '_')}_api_reference:\n\n"
def main(dirs: Optional[list] = None) -> None:
def main() -> None:
"""Generate the api_reference.rst file for each package."""
print("Starting to build API reference files.")
if not dirs:
dirs = [
dir_
for dir_ in os.listdir(ROOT_DIR / "libs")
if dir_ not in ("cli", "partners")
]
dirs += os.listdir(ROOT_DIR / "libs" / "partners")
for dir_ in dirs:
for dir in os.listdir(ROOT_DIR / "libs"):
# Skip any hidden directories
# Some of these could be present by mistake in the code base
# e.g., .pytest_cache from running tests from the wrong location.
if dir_.startswith("."):
print("Skipping dir:", dir_)
if dir.startswith("."):
print("Skipping dir:", dir)
continue
if dir in ("cli", "partners"):
continue
else:
print("Building package:", dir_)
_build_rst_file(package_name=dir_)
print("Building package:", dir)
_build_rst_file(package_name=dir)
partner_packages = os.listdir(ROOT_DIR / "libs" / "partners")
print("Building partner packages:", partner_packages)
for dir in partner_packages:
_build_rst_file(package_name=dir)
print("API reference files built.")
if __name__ == "__main__":
dirs = sys.argv[1:] or None
main(dirs=dirs)
main()

File diff suppressed because one or more lines are too long

View File

@@ -43,9 +43,6 @@
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('experimental_api_reference') }}">Experimental</a>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('text_splitters_api_reference') }}">Text splitters</a>
</li>
{%- for title, pathname in partners %}
<li class="nav-item">
<a class="sk-nav-link nav-link nav-more-item-mobile-items" href="{{ pathto(pathname) }}">{{ title }}</a>

File diff suppressed because it is too large

View File

@@ -241,6 +241,7 @@ Dependents stats for `langchain-ai/langchain`
|[alejandro-ao/langchain-ask-pdf](https://github.com/alejandro-ao/langchain-ask-pdf) | 514 |
|[sajjadium/ctf-archives](https://github.com/sajjadium/ctf-archives) | 507 |
|[continuum-llms/chatgpt-memory](https://github.com/continuum-llms/chatgpt-memory) | 502 |
|[llmOS/opencopilot](https://github.com/llmOS/opencopilot) | 495 |
|[steamship-core/steamship-langchain](https://github.com/steamship-core/steamship-langchain) | 494 |
|[mpaepper/content-chatbot](https://github.com/mpaepper/content-chatbot) | 493 |
|[langchain-ai/langchain-aiplugin](https://github.com/langchain-ai/langchain-aiplugin) | 492 |
@@ -454,6 +455,7 @@ Dependents stats for `langchain-ai/langchain`
|[Teahouse-Studios/akari-bot](https://github.com/Teahouse-Studios/akari-bot) | 149 |
|[realminchoi/babyagi-ui](https://github.com/realminchoi/babyagi-ui) | 148 |
|[ssheng/BentoChain](https://github.com/ssheng/BentoChain) | 148 |
|[lmstudio-ai/examples](https://github.com/lmstudio-ai/examples) | 147 |
|[solana-labs/chatgpt-plugin](https://github.com/solana-labs/chatgpt-plugin) | 147 |
|[aurelio-labs/arxiv-bot](https://github.com/aurelio-labs/arxiv-bot) | 147 |
|[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci) | 146 |

View File

@@ -1,50 +1,155 @@
# Tutorials
## Books and Handbooks
Below are links to tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases).
- [Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffarth](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
- [LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**
- [LangChain Cheatsheet](https://pub.towardsai.net/langchain-cheatsheet-all-secrets-on-a-single-page-8be26b721cde) by **Ivan Reznikov**
## Tutorials
### [by Greg Kamradt](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5)
### [by Sam Witteveen](https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ)
### [by James Briggs](https://www.youtube.com/playlist?list=PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F)
### [by Prompt Engineering](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr)
### [by Mayo Oshin](https://www.youtube.com/@chatwithdata/search?query=langchain)
### [by 1 little Coder](https://www.youtube.com/playlist?list=PLpdmBGJ6ELUK-v0MK-t4wZmVEbxM5xk6L)
## Courses
### Featured courses on Deeplearning.AI
- [LangChain for LLM Application Development](https://www.deeplearning.ai/short-courses/langchain-for-llm-application-development/)
- [LangChain Chat with Your Data](https://www.deeplearning.ai/short-courses/langchain-chat-with-your-data/)
- [Functions, Tools and Agents with LangChain](https://www.deeplearning.ai/short-courses/functions-tools-agents-langchain/)
- [Build LLM Apps with LangChain.js](https://www.deeplearning.ai/short-courses/build-llm-apps-with-langchain-js/)
### Online courses
- [Udemy](https://www.udemy.com/courses/search/?q=langchain)
- [Pluralsight](https://www.pluralsight.com/search?q=langchain)
- [Coursera](https://www.coursera.org/search?query=langchain)
- [Maven](https://maven.com/courses?query=langchain)
- [Udacity](https://www.udacity.com/catalog/all/any-price/any-school/any-skill/any-difficulty/any-duration/any-type/relevance/page-1?searchValue=langchain)
- [LinkedIn Learning](https://www.linkedin.com/search/results/learning/?keywords=langchain)
- [edX](https://www.edx.org/search?q=langchain)
## Short Tutorials
- [by Nicholas Renotte](https://youtu.be/MlK6SIjcjE8)
- [by Patrick Loeber](https://youtu.be/LbT1yp6quS8)
- [by Rabbitmetrics](https://youtu.be/aywZrzNaKjs)
- [by Ivan Reznikov](https://medium.com/@ivanreznikov/langchain-101-course-updated-668f7b41d6cb)
## [Documentation: Use cases](/docs/use_cases)
⛓ icon marks a new addition [last update 2024-02-06]
---------------------
### [LangChain](https://en.wikipedia.org/wiki/LangChain) on Wikipedia
### Books
#### [Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffarth](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
### DeepLearning.AI courses
by [Harrison Chase](https://en.wikipedia.org/wiki/LangChain) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng)
- [LangChain for LLM Application Development](https://learn.deeplearning.ai/langchain)
- [LangChain Chat with Your Data](https://learn.deeplearning.ai/langchain-chat-with-your-data)
- [Functions, Tools and Agents with LangChain](https://learn.deeplearning.ai/functions-tools-agents-langchain)
### Handbook
[LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**
⛓ [LangChain Cheatsheet](https://pub.towardsai.net/langchain-cheatsheet-all-secrets-on-a-single-page-8be26b721cde) by **Ivan Reznikov**
### Short Tutorials
[LangChain Explained in 13 Minutes | QuickStart Tutorial for Beginners](https://youtu.be/aywZrzNaKjs) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
[LangChain Crash Course: Build an AutoGPT app in 25 minutes](https://youtu.be/MlK6SIjcjE8) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)
[LangChain Crash Course - Build apps with language models](https://youtu.be/LbT1yp6quS8) by [Patrick Loeber](https://www.youtube.com/@patloeber)
⛓ [LangChain 101 Course](https://medium.com/@ivanreznikov/langchain-101-course-updated-668f7b41d6cb) by **Ivan Reznikov**
## Tutorials
### [LangChain for Gen AI and LLMs](https://www.youtube.com/playlist?list=PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F) by [James Briggs](https://www.youtube.com/@jamesbriggs)
- #1 [Getting Started with `GPT-3` vs. Open Source LLMs](https://youtu.be/nE2skSRWTTs)
- #2 [Prompt Templates for `GPT 3.5` and other LLMs](https://youtu.be/RflBcK0oDH0)
- #3 [LLM Chains using `GPT 3.5` and other LLMs](https://youtu.be/S8j9Tk0lZHU)
- [LangChain Data Loaders, Tokenizers, Chunking, and Datasets - Data Prep 101](https://youtu.be/eqOfr4AGLk8)
- #4 [Chatbot Memory for `Chat-GPT`, `Davinci` + other LLMs](https://youtu.be/X05uK0TZozM)
- #5 [Chat with OpenAI in LangChain](https://youtu.be/CnAgB3A5OlU)
- #6 [Fixing LLM Hallucinations with Retrieval Augmentation in LangChain](https://youtu.be/kvdVduIJsc8)
- #7 [LangChain Agents Deep Dive with `GPT 3.5`](https://youtu.be/jSP-gSEyVeI)
- #8 [Create Custom Tools for Chatbots in LangChain](https://youtu.be/q-HNphrWsDE)
- #9 [Build Conversational Agents with Vector DBs](https://youtu.be/H6bCqqw9xyI)
- [Using NEW `MPT-7B` in Hugging Face and LangChain](https://youtu.be/DXpk9K7DgMo)
- [`MPT-30B` Chatbot with LangChain](https://youtu.be/pnem-EhT6VI)
- [Fine-tuning OpenAI's `GPT 3.5` for LangChain Agents](https://youtu.be/boHXgQ5eQic?si=OOOfK-GhsgZGBqSr)
- [Chatbots with `RAG`: LangChain Full Walkthrough](https://youtu.be/LhnCsygAvzY?si=N7k6xy4RQksbWwsQ)
### [LangChain 101](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5) by [Greg Kamradt (Data Indy)](https://www.youtube.com/@DataIndependent)
- [What Is LangChain? - LangChain + `ChatGPT` Overview](https://youtu.be/_v_fgW2SkkQ)
- [Quickstart Guide](https://youtu.be/kYRB-vJFy38)
- [Beginner's Guide To 7 Essential Concepts](https://youtu.be/2xxziIWmaSA)
- [Beginner's Guide To 9 Use Cases](https://youtu.be/vGP4pQdCocw)
- [Agents Overview + Google Searches](https://youtu.be/Jq9Sf68ozk0)
- [`OpenAI` + `Wolfram Alpha`](https://youtu.be/UijbzCIJ99g)
- [Ask Questions On Your Custom (or Private) Files](https://youtu.be/EnT-ZTrcPrg)
- [Connect `Google Drive Files` To `OpenAI`](https://youtu.be/IqqHqDcXLww)
- [`YouTube Transcripts` + `OpenAI`](https://youtu.be/pNcQ5XXMgH4)
- [Question A 300 Page Book (w/ `OpenAI` + `Pinecone`)](https://youtu.be/h0DHDp1FbmQ)
- [Workaround `OpenAI's` Token Limit With Chain Types](https://youtu.be/f9_BWhCI4Zo)
- [Build Your Own OpenAI + LangChain Web App in 23 Minutes](https://youtu.be/U_eV8wfMkXU)
- [Working With The New `ChatGPT API`](https://youtu.be/e9P7FLi5Zy8)
- [OpenAI + LangChain Wrote Me 100 Custom Sales Emails](https://youtu.be/y1pyAQM-3Bo)
- [Structured Output From `OpenAI` (Clean Dirty Data)](https://youtu.be/KwAXfey-xQk)
- [Connect `OpenAI` To +5,000 Tools (LangChain + `Zapier`)](https://youtu.be/7tNm0yiDigU)
- [Use LLMs To Extract Data From Text (Expert Mode)](https://youtu.be/xZzvwR9jdPA)
- [Extract Insights From Interview Transcripts Using LLMs](https://youtu.be/shkMOHwJ4SM)
- [5 Levels Of LLM Summarizing: Novice to Expert](https://youtu.be/qaPMdcCqtWk)
- [Control Tone & Writing Style Of Your LLM Output](https://youtu.be/miBG-a3FuhU)
- [Build Your Own `AI Twitter Bot` Using LLMs](https://youtu.be/yLWLDjT01q8)
- [ChatGPT made my interview questions for me (`Streamlit` + LangChain)](https://youtu.be/zvoAMx0WKkw)
- [Function Calling via ChatGPT API - First Look With LangChain](https://youtu.be/0-zlUy7VUjg)
- [Extract Topics From Video/Audio With LLMs (Topic Modeling w/ LangChain)](https://youtu.be/pEkxRQFNAs4)
### [LangChain How to and guides](https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ) by [Sam Witteveen](https://www.youtube.com/@samwitteveenai)
- [LangChain Basics - LLMs & PromptTemplates with Colab](https://youtu.be/J_0qvRt4LNk)
- [LangChain Basics - Tools and Chains](https://youtu.be/hI2BY7yl_Ac)
- [`ChatGPT API` Announcement & Code Walkthrough with LangChain](https://youtu.be/phHqvLHCwH4)
- [Conversations with Memory (explanation & code walkthrough)](https://youtu.be/X550Zbz_ROE)
- [Chat with `Flan20B`](https://youtu.be/VW5LBavIfY4)
- [Using `Hugging Face Models` locally (code walkthrough)](https://youtu.be/Kn7SX2Mx_Jk)
- [`PAL`: Program-aided Language Models with LangChain code](https://youtu.be/dy7-LvDu-3s)
- [Building a Summarization System with LangChain and `GPT-3` - Part 1](https://youtu.be/LNq_2s_H01Y)
- [Building a Summarization System with LangChain and `GPT-3` - Part 2](https://youtu.be/d-yeHDLgKHw)
- [Microsoft's `Visual ChatGPT` using LangChain](https://youtu.be/7YEiEyfPF5U)
- [LangChain Agents - Joining Tools and Chains with Decisions](https://youtu.be/ziu87EXZVUE)
- [Comparing LLMs with LangChain](https://youtu.be/rFNG0MIEuW0)
- [Using `Constitutional AI` in LangChain](https://youtu.be/uoVqNFDwpX4)
- [Talking to `Alpaca` with LangChain - Creating an Alpaca Chatbot](https://youtu.be/v6sF8Ed3nTE)
- [Talk to your `CSV` & `Excel` with LangChain](https://youtu.be/xQ3mZhw69bc)
- [`BabyAGI`: Discover the Power of Task-Driven Autonomous Agents!](https://youtu.be/QBcDLSE2ERA)
- [Improve your `BabyAGI` with LangChain](https://youtu.be/DRgPyOXZ-oE)
- [Master `PDF` Chat with LangChain - Your essential guide to queries on documents](https://youtu.be/ZzgUqFtxgXI)
- [Using LangChain with `DuckDuckGO`, `Wikipedia` & `PythonREPL` Tools](https://youtu.be/KerHlb8nuVc)
- [Building Custom Tools and Agents with LangChain (gpt-3.5-turbo)](https://youtu.be/biS8G8x8DdA)
- [LangChain Retrieval QA Over Multiple Files with `ChromaDB`](https://youtu.be/3yPBVii7Ct0)
- [LangChain Retrieval QA with Instructor Embeddings & `ChromaDB` for PDFs](https://youtu.be/cFCGUjc33aU)
- [LangChain + Retrieval Local LLMs for Retrieval QA - No OpenAI!!!](https://youtu.be/9ISVjh8mdlA)
- [`Camel` + LangChain for Synthetic Data & Market Research](https://youtu.be/GldMMK6-_-g)
- [Information Extraction with LangChain & `Kor`](https://youtu.be/SW1ZdqH0rRQ)
- [Converting a LangChain App from OpenAI to OpenSource](https://youtu.be/KUDn7bVyIfc)
- [Using LangChain `Output Parsers` to get what you want out of LLMs](https://youtu.be/UVn2NroKQCw)
- [Building a LangChain Custom Medical Agent with Memory](https://youtu.be/6UFtRwWnHws)
- [Understanding `ReACT` with LangChain](https://youtu.be/Eug2clsLtFs)
- [`OpenAI Functions` + LangChain : Building a Multi Tool Agent](https://youtu.be/4KXK6c6TVXQ)
- [What can you do with 16K tokens in LangChain?](https://youtu.be/z2aCZBAtWXs)
- [Tagging and Extraction - Classification using `OpenAI Functions`](https://youtu.be/a8hMgIcUEnE)
- [HOW to Make Conversational Form with LangChain](https://youtu.be/IT93On2LB5k)
- [`Claude-2` meets LangChain!](https://youtu.be/Hb_D3p0bK2U?si=j96Kc7oJoeRI5-iC)
- [`PaLM 2` Meets LangChain](https://youtu.be/orPwLibLqm4?si=KgJjpEbAD9YBPqT4)
- [`LLaMA2` with LangChain - Basics | LangChain TUTORIAL](https://youtu.be/cIRzwSXB4Rc?si=v3Hwxk1m3fksBIHN)
- [Serving `LLaMA2` with `Replicate`](https://youtu.be/JIF4nNi26DE?si=dSazFyC4UQmaR-rJ)
- [NEW LangChain Expression Language](https://youtu.be/ud7HJ2p3gp0?si=8pJ9O6hGbXrCX5G9)
- [Building a RCI Chain for Agents with LangChain Expression Language](https://youtu.be/QaKM5s0TnsY?si=0miEj-o17AHcGfLG)
- [How to Run `LLaMA-2-70B` on the `Together AI`](https://youtu.be/Tc2DHfzHeYE?si=Xku3S9dlBxWQukpe)
- [`RetrievalQA` with `LLaMA 2 70b` & `Chroma` DB](https://youtu.be/93yueQQnqpM?si=ZMwj-eS_CGLnNMXZ)
- [How to use `BGE Embeddings` for LangChain](https://youtu.be/sWRvSG7vL4g?si=85jnvnmTCF9YIWXI)
- [How to use Custom Prompts for `RetrievalQA` on `LLaMA-2 7B`](https://youtu.be/PDwUKves9GY?si=sMF99TWU0p4eiK80)
### [LangChain](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr) by [Prompt Engineering](https://www.youtube.com/@engineerprompt)
- [LangChain Crash Course — All You Need to Know to Build Powerful Apps with LLMs](https://youtu.be/5-fc4Tlgmro)
- [Working with MULTIPLE `PDF` Files in LangChain: `ChatGPT` for your Data](https://youtu.be/s5LhRdh5fu4)
- [`ChatGPT` for YOUR OWN `PDF` files with LangChain](https://youtu.be/TLf90ipMzfE)
- [Talk to YOUR DATA without OpenAI APIs: LangChain](https://youtu.be/wrD-fZvT6UI)
- [LangChain: `PDF` Chat App (GUI) | `ChatGPT` for Your `PDF` FILES](https://youtu.be/RIWbalZ7sTo)
- [`LangFlow`: Build Chatbots without Writing Code](https://youtu.be/KJ-ux3hre4s)
- [LangChain: Giving Memory to LLMs](https://youtu.be/dxO6pzlgJiY)
- [BEST OPEN Alternative to `OPENAI's EMBEDDINGs` for Retrieval QA: LangChain](https://youtu.be/ogEalPMUCSY)
- [LangChain: Run Language Models Locally - `Hugging Face Models`](https://youtu.be/Xxxuw4_iCzw)
- [Slash API Costs: Mastering Caching for LLM Applications](https://youtu.be/EQOznhaJWR0?si=AXoI7f3-SVFRvQUl)
- [Avoid PROMPT INJECTION with `Constitutional AI` - LangChain](https://youtu.be/tyKSkPFHVX8?si=9mgcB5Y1kkotkBGB)
### LangChain by [Chat with data](https://www.youtube.com/@chatwithdata)
- [LangChain Beginner's Tutorial for `Typescript`/`Javascript`](https://youtu.be/bH722QgRlhQ)
- [`GPT-4` Tutorial: How to Chat With Multiple `PDF` Files (~1000 pages of Tesla's 10-K Annual Reports)](https://youtu.be/Ix9WIZpArm0)
- [`GPT-4` & LangChain Tutorial: How to Chat With A 56-Page `PDF` Document (w/`Pinecone`)](https://youtu.be/ih9PBGVVOO4)
- [LangChain & `Supabase` Tutorial: How to Build a ChatGPT Chatbot For Your Website](https://youtu.be/R2FMzcsmQY8)
- [LangChain Agents: Build Personal Assistants For Your Data (Q&A with Harrison Chase and Mayo Oshin)](https://youtu.be/gVkF8cwfBLI)
### Codebase Analysis
- [Codebase Analysis: Langchain Agents](https://carbonated-yacht-2c5.notion.site/Codebase-Analysis-Langchain-Agents-0b0587acd50647ca88aaae7cff5df1f2)
---------------------
⛓ icon marks a new addition [last update 2024-02-06]

View File

@@ -7,7 +7,7 @@
### Introduction to LangChain with Harrison Chase, creator of LangChain
- [Building the Future with LLMs, `LangChain`, & `Pinecone`](https://youtu.be/nMniwlGyX-c) by [Pinecone](https://www.youtube.com/@pinecone-io)
- [LangChain and Weaviate with Harrison Chase and Bob van Luijt - Weaviate Podcast #36](https://youtu.be/lhby7Ql7hbk) by [Weaviate • Vector Database](https://www.youtube.com/@Weaviate)
- [LangChain Demo + Q&A with Harrison Chase](https://youtu.be/zaYTXQFR0_s?t=788) by [Full Stack Deep Learning](https://www.youtube.com/@The_Full_Stack)
- [LangChain Demo + Q&A with Harrison Chase](https://youtu.be/zaYTXQFR0_s?t=788) by [Full Stack Deep Learning](https://www.youtube.com/@FullStackDeepLearning)
- [LangChain Agents: Build Personal Assistants For Your Data (Q&A with Harrison Chase and Mayo Oshin)](https://youtu.be/gVkF8cwfBLI) by [Chat with data](https://www.youtube.com/@chatwithdata)
## Videos (sorted by views)
@@ -15,8 +15,8 @@
- [Using `ChatGPT` with YOUR OWN Data. This is magical. (LangChain OpenAI API)](https://youtu.be/9AXP7tCI9PI) by [TechLead](https://www.youtube.com/@TechLead)
- [First look - `ChatGPT` + `WolframAlpha` (`GPT-3.5` and Wolfram|Alpha via LangChain by James Weaver)](https://youtu.be/wYGbY811oMo) by [Dr Alan D. Thompson](https://www.youtube.com/@DrAlanDThompson)
- [LangChain explained - The hottest new Python framework](https://youtu.be/RoR4XJw8wIc) by [AssemblyAI](https://www.youtube.com/@AssemblyAI)
- [Chatbot with INFINITE MEMORY using `OpenAI` & `Pinecone` - `GPT-3`, `Embeddings`, `ADA`, `Vector DB`, `Semantic`](https://youtu.be/2xNzB7xq8nk) by [David Shapiro ~ AI](https://www.youtube.com/@DaveShap)
- [LangChain for LLMs is... basically just an Ansible playbook](https://youtu.be/X51N9C-OhlE) by [David Shapiro ~ AI](https://www.youtube.com/@DaveShap)
- [Chatbot with INFINITE MEMORY using `OpenAI` & `Pinecone` - `GPT-3`, `Embeddings`, `ADA`, `Vector DB`, `Semantic`](https://youtu.be/2xNzB7xq8nk) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [LangChain for LLMs is... basically just an Ansible playbook](https://youtu.be/X51N9C-OhlE) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [Build your own LLM Apps with LangChain & `GPT-Index`](https://youtu.be/-75p09zFUJY) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [`BabyAGI` - New System of Autonomous AI Agents with LangChain](https://youtu.be/lg3kJvf1kXo) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [Run `BabyAGI` with Langchain Agents (with Python Code)](https://youtu.be/WosPGHPObx8) by [1littlecoder](https://www.youtube.com/@1littlecoder)
@@ -37,15 +37,15 @@
- [Building AI LLM Apps with LangChain (and more?) - LIVE STREAM](https://www.youtube.com/live/M-2Cj_2fzWI?feature=share) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)
- [`ChatGPT` with any `YouTube` video using langchain and `chromadb`](https://youtu.be/TQZfB2bzVwU) by [echohive](https://www.youtube.com/@echohive)
- [How to Talk to a `PDF` using LangChain and `ChatGPT`](https://youtu.be/v2i1YDtrIwk) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- [Langchain Document Loaders Part 1: Unstructured Files](https://youtu.be/O5C0wfsen98) by [Merk](https://www.youtube.com/@heymichaeldaigler)
- [LangChain - Prompt Templates (what all the best prompt engineers use)](https://youtu.be/1aRu8b0XNOQ) by [Nick Daigler](https://www.youtube.com/@nickdaigler)
- [Langchain Document Loaders Part 1: Unstructured Files](https://youtu.be/O5C0wfsen98) by [Merk](https://www.youtube.com/@merksworld)
- [LangChain - Prompt Templates (what all the best prompt engineers use)](https://youtu.be/1aRu8b0XNOQ) by [Nick Daigler](https://www.youtube.com/@nick_daigs)
- [LangChain. Crear aplicaciones Python impulsadas por GPT](https://youtu.be/DkW_rDndts8) by [Jesús Conde](https://www.youtube.com/@0utKast)
- [Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial](https://youtu.be/fLy0VenZyGc) by [Rachel Woods](https://www.youtube.com/@therachelwoods)
- [`BabyAGI` + `GPT-4` Langchain Agent with Internet Access](https://youtu.be/wx1z_hs5P6E) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI](https://youtu.be/mb_YAABSplk) by [Arnoldas Kemeklis](https://www.youtube.com/@processusAI)
- [Get Started with LangChain in `Node.js`](https://youtu.be/Wxx1KUWJFv4) by [Developers Digest](https://www.youtube.com/@DevelopersDigest)
- [LangChain + `OpenAI` tutorial: Building a Q&A system w/ own text data](https://youtu.be/DYOU_Z0hAwo) by [Samuel Chan](https://www.youtube.com/@SamuelChan)
- [Langchain + `Zapier` Agent](https://youtu.be/yribLAb-pxA) by [Merk](https://www.youtube.com/@heymichaeldaigler)
- [Langchain + `Zapier` Agent](https://youtu.be/yribLAb-pxA) by [Merk](https://www.youtube.com/@merksworld)
- [Connecting the Internet with `ChatGPT` (LLMs) using Langchain And Answers Your Questions](https://youtu.be/9Y0TBC63yZg) by [Kamalraj M M](https://www.youtube.com/@insightbuilder)
- [Build More Powerful LLM Applications for Business with LangChain (Beginners Guide)](https://youtu.be/sp3-WLKEcBg) by [No Code Blackbox](https://www.youtube.com/@nocodeblackbox)
- [LangFlow LLM Agent Demo for 🦜🔗LangChain](https://youtu.be/zJxDHaWt-6o) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
@@ -82,7 +82,7 @@
- [Build a LangChain-based Semantic PDF Search App with No-Code Tools Bubble and Flowise](https://youtu.be/s33v5cIeqA4) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- [LangChain Memory Tutorial | Building a ChatGPT Clone in Python](https://youtu.be/Cwq91cj2Pnc) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- [ChatGPT For Your DATA | Chat with Multiple Documents Using LangChain](https://youtu.be/TeDgIDqQmzs) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- [`Llama Index`: Chat with Documentation using URL Loader](https://youtu.be/XJRoDEctAwA) by [Merk](https://www.youtube.com/@heymichaeldaigler)
- [`Llama Index`: Chat with Documentation using URL Loader](https://youtu.be/XJRoDEctAwA) by [Merk](https://www.youtube.com/@merksworld)
- [Using OpenAI, LangChain, and `Gradio` to Build Custom GenAI Applications](https://youtu.be/1MsmqMg3yUc) by [David Hundley](https://www.youtube.com/@dkhundley)
- [LangChain, Chroma DB, OpenAI Beginner Guide | ChatGPT with your PDF](https://youtu.be/FuqdVNB_8c0)
- [Build AI chatbot with custom knowledge base using OpenAI API and GPT Index](https://youtu.be/vDZAZuaXf48) by [Irina Nik](https://www.youtube.com/@irina_nik)
@@ -93,7 +93,7 @@
- [Build a Custom Chatbot with OpenAI: `GPT-Index` & LangChain | Step-by-Step Tutorial](https://youtu.be/FIDv6nc4CgU) by [Fabrikod](https://www.youtube.com/@fabrikod)
- [`Flowise` is an open-source no-code UI visual tool to build 🦜🔗LangChain applications](https://youtu.be/CovAPtQPU0k) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
- [LangChain & GPT 4 For Data Analysis: The `Pandas` Dataframe Agent](https://youtu.be/rFQ5Kmkd4jc) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
- [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw) by [Girlfriend GPT](https://www.youtube.com/@girlfriendGPT)
- [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw) by [Toolfinder AI](https://www.youtube.com/@toolfinderai)
- [How to build with Langchain 10x easier | ⛓️ LangFlow & `Flowise`](https://youtu.be/Ya1oGL7ZTvU) by [AI Jason](https://www.youtube.com/@AIJasonZ)
- [Getting Started With LangChain In 20 Minutes- Build Celebrity Search Application](https://youtu.be/_FpT1cwcSLg) by [Krish Naik](https://www.youtube.com/@krishnaik06)
- ⛓ [Vector Embeddings Tutorial Code Your Own AI Assistant with `GPT-4 API` + LangChain + NLP](https://youtu.be/yfHHvmaMkcA?si=5uJhxoh2tvdnOXok) by [FreeCodeCamp.org](https://www.youtube.com/@freecodecamp)
@@ -109,7 +109,7 @@
- ⛓ [PyData Heidelberg #11 - TimeSeries Forecasting & LLM Langchain](https://www.youtube.com/live/Glbwb5Hxu18?si=PIEY8Raq_C9PCHuW) by [PyData](https://www.youtube.com/@PyDataTV)
- ⛓ [Prompt Engineering in Web Development | Using LangChain and Templates with OpenAI](https://youtu.be/pK6WzlTOlYw?si=fkcDQsBG2h-DM8uQ) by [Akamai Developer](https://www.youtube.com/@AkamaiDeveloper)
- ⛓ [Retrieval-Augmented Generation (RAG) using LangChain and `Pinecone` - The RAG Special Episode](https://youtu.be/J_tCD_J6w3s?si=60Mnr5VD9UED9bGG) by [Generative AI and Data Science On AWS](https://www.youtube.com/@GenerativeAIOnAWS)
- ⛓ [Retrieval-Augmented Generation (RAG) using LangChain and `Pinecone` - The RAG Special Episode](https://youtu.be/J_tCD_J6w3s?si=60Mnr5VD9UED9bGG) by [Generative AI and Data Science On AWS](https://www.youtube.com/@GenerativeAIDataScienceOnAWS)
- ⛓ [`LLAMA2 70b-chat` Multiple Documents Chatbot with Langchain & Streamlit |All OPEN SOURCE|Replicate API](https://youtu.be/vhghB81vViM?si=dszzJnArMeac7lyc) by [DataInsightEdge](https://www.youtube.com/@DataInsightEdge01)
- ⛓ [Chatting with 44K Fashion Products: LangChain Opportunities and Pitfalls](https://youtu.be/Zudgske0F_s?si=8HSshHoEhh0PemJA) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
- ⛓ [Structured Data Extraction from `ChatGPT` with LangChain](https://youtu.be/q1lYg8JISpQ?si=0HctzOHYZvq62sve) by [MG](https://www.youtube.com/@MG_cafe)

View File

@@ -98,7 +98,7 @@ To run unit tests in Docker:
make docker_tests
```
There are also [integration tests and code-coverage](/docs/contributing/testing/) available.
There are also [integration tests and code-coverage](./testing) available.
### Only develop langchain_core or langchain_experimental

View File

@@ -1,4 +1,7 @@
# Technical logistics
---
sidebar_position: 3
---
# Contribute Documentation
LangChain documentation consists of two components:

View File

@@ -1,2 +0,0 @@
label: 'Documentation'
position: 3

View File

@@ -1,138 +0,0 @@
---
sidebar_label: "Style guide"
---
# LangChain Documentation Style Guide
## Introduction
As LangChain continues to grow, the surface area of documentation required to cover it continues to grow too.
This page provides guidelines for anyone writing documentation for LangChain, as well as some of our philosophies around
organization and structure.
## Philosophy
LangChain's documentation aspires to follow the [Diataxis framework](https://diataxis.fr).
Under this framework, all documentation falls under one of four categories:
- **Tutorials**: Lessons that take the reader by the hand through a series of conceptual steps to complete a project.
- An example of this is our [LCEL streaming guide](/docs/expression_language/streaming).
- Our guide on [custom components](/docs/modules/model_io/chat/custom_chat_model) is another.
- **How-to guides**: Guides that take the reader through the steps required to solve a real-world problem.
- The clearest examples of this are our [Use case](/docs/use_cases/) quickstart pages.
- **Reference**: Technical descriptions of the machinery and how to operate it.
- Our [Runnable interface](/docs/expression_language/interface) page is an example of this.
- The [API reference pages](https://api.python.langchain.com/) are another.
- **Explanation**: Explanations that clarify and illuminate a particular topic.
- The [LCEL primitives pages](/docs/expression_language/primitives/sequence) are an example of this.
Each category serves a distinct purpose and requires a specific approach to writing and structuring the content.
## Taxonomy
Keeping the above in mind, we have sorted LangChain's docs into categories. It is helpful to think in these terms
when contributing new documentation:
### Getting started
The [getting started section](/docs/get_started/introduction) includes a high-level introduction to LangChain, a quickstart that
tours LangChain's various features, and logistical instructions around installation and project setup.
It contains elements of **How-to guides** and **Explanations**.
### Use cases
[Use cases](/docs/use_cases/) are guides that are meant to show how to use LangChain to accomplish a specific task (RAG, information extraction, etc.).
The quickstarts should be good entrypoints for first-time LangChain developers who prefer to learn by getting something practical prototyped,
then taking the pieces apart retrospectively. These should mirror what LangChain is good at.
The quickstart pages here should fit the **How-to guide** category, with the other pages intended to be **Explanations** of more
in-depth concepts and strategies that accompany the main happy paths.
:::note
The below sections are listed roughly in order of increasing level of abstraction.
:::
### Expression Language
[LangChain Expression Language (LCEL)](/docs/expression_language/) is the fundamental way that most LangChain components fit together, and this section is designed to teach
developers how to use it to build with LangChain's primitives effectively.
This section should contain **Tutorials** that teach how to stream and use LCEL primitives for more abstract tasks, **Explanations** of specific behaviors,
and some **References** for how to use different methods in the Runnable interface.
### Components
The [components section](/docs/modules) covers concepts one level of abstraction higher than LCEL.
Abstract base classes like `BaseChatModel` and `BaseRetriever` should be covered here, as well as core implementations of these base classes,
such as `ChatPromptTemplate` and `RecursiveCharacterTextSplitter`. Customization guides belong here too.
This section should contain mostly conceptual **Tutorials**, **References**, and **Explanations** of the components they cover.
:::note
As a general rule of thumb, everything covered in the `Expression Language` and `Components` sections (with the exception of the `Composition` section of components) should
cover only components that exist in `langchain_core`.
:::
### Integrations
The [integrations](/docs/integrations/platforms/) are specific implementations of components. These often involve third-party APIs and services.
If this is the case, as a general rule, these are maintained by the third-party partner.
This section should contain mostly **Explanations** and **References**, though the actual content here is more flexible than other sections and more at the
discretion of the third-party provider.
:::note
Concepts covered in `Integrations` should generally exist in `langchain_community` or specific partner packages.
:::
### Guides and Ecosystem
The [Guides](/docs/guides) and [Ecosystem](/docs/langsmith/) sections should contain guides that address higher-level problems than the sections above.
This includes, but is not limited to, considerations around productionization and development workflows.
These should contain mostly **How-to guides**, **Explanations**, and **Tutorials**.
### API references
LangChain's API references should act as **References** (as the name implies), with some **Explanation**-focused content as well.
## Sample developer journey
We have set up our docs to assist a developer who is new to LangChain. Let's walk through the intended path:
- The developer lands on https://python.langchain.com, and reads through the introduction and the diagram.
- If they are just curious, they may be drawn to the [Quickstart](/docs/get_started/quickstart) to get a high-level tour of what LangChain contains.
- If they have a specific task in mind that they want to accomplish, they will be drawn to the Use-Case section. The use-case should provide a good, concrete hook that shows the value LangChain can provide them and be a good entrypoint to the framework.
- They can then move to learn more about the fundamentals of LangChain through the Expression Language sections.
- Next, they can learn about LangChain's various components and integrations.
- Finally, they can get additional knowledge through the Guides.
This is only an ideal, of course; sections will inevitably reference lower- or higher-level concepts that are documented in other sections.
## Guidelines
Here are some other guidelines you should think about when writing and organizing documentation.
### Linking to other sections
Because sections of the docs do not exist in a vacuum, it is important to link to other sections as often as possible
to allow a developer to learn more about an unfamiliar topic inline.
This includes linking to the API references as well as conceptual sections!
### Conciseness
In general, take a less-is-more approach. If a section with a good explanation of a concept already exists, you should link to it rather than
re-explain it, unless the concept you are documenting presents some new wrinkle.
Be concise, including in code samples.
### General style
- Use active voice and present tense whenever possible.
- Use examples and code snippets to illustrate concepts and usage.
- Use appropriate header levels (`#`, `##`, `###`, etc.) to organize the content hierarchically.
- Use bullet points and numbered lists to break down information into easily digestible chunks.
- Use tables (especially for **Reference** sections) and diagrams often to present information visually.
- Include the table of contents for longer documentation pages to help readers navigate the content, but hide it for shorter pages.

View File

@@ -12,7 +12,7 @@ As an open-source project in a rapidly developing field, we are extremely open t
There are many ways to contribute to LangChain. Here are some common ways people contribute:
- [**Documentation**](/docs/contributing/documentation/style_guide): Help improve our docs, including this one!
- [**Documentation**](./documentation.mdx): Help improve our docs, including this one!
- [**Code**](./code.mdx): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](integrations.mdx): Help us integrate with your favorite vendors and tools.
- [**Discussions**](https://github.com/langchain-ai/langchain/discussions): Help answer usage questions and discuss issues with users.

View File

@@ -3,7 +3,7 @@ sidebar_position: 5
---
# Contribute Integrations
To begin, make sure you have all the dependencies outlined in the guide on [Contributing Code](/docs/contributing/code/).
To begin, make sure you have all the dependencies outlined in the guide on [Contributing Code](./code).
There are a few different places you can contribute integrations for LangChain:
@@ -14,20 +14,19 @@ For the most part, new integrations should be added to the Community package. Pa
In the following sections, we'll walk through how to contribute to each of these packages from a fake company, `Parrot Link AI`.
## Community package
## Community Package
The `langchain-community` package is in `libs/community` and contains most integrations.
It can be installed with `pip install langchain-community`, and exported members can be imported with code like
It is installed by users with `pip install langchain-community`, and exported members can be imported with code like
```python
from langchain_community.chat_models import ChatParrotLink
from langchain_community.llms import ParrotLinkLLM
from langchain_community.chat_models import ParrotLinkLLM
from langchain_community.llms import ChatParrotLink
from langchain_community.vectorstores import ParrotLinkVectorStore
```
The `community` package relies on manually-installed dependent packages, so you will see errors
if you try to import a package that is not installed. In our fake example, if you tried to import `ParrotLinkLLM` without installing `parrot-link-sdk`, you will see an `ImportError` telling you to install it when trying to use it.
The community package relies on manually-installed dependent packages, so you will see errors if you try to import a package that is not installed. In our fake example, if you tried to import `ParrotLinkLLM` without installing `parrot-link-sdk`, you will see an `ImportError` telling you to install it when trying to use it.
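As a rough sketch of the guard pattern behind that error (the `parrot-link-sdk` package and helper name here are hypothetical, used only for illustration), the import check can look like this:

```python
# Hypothetical sketch of an optional-dependency guard; parrot-link-sdk is a
# made-up package used only for illustration.
def _import_parrot_link_sdk():
    try:
        import parrot_link_sdk
    except ImportError:
        # Point the user at the missing dependency instead of failing cryptically.
        raise ImportError(
            "Could not import parrot-link-sdk. "
            "Please install it with `pip install parrot-link-sdk`."
        )
    return parrot_link_sdk
```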
Let's say we wanted to implement a chat model for Parrot Link AI. We would create a new file in `libs/community/langchain_community/chat_models/parrot_link.py` with the following code:
@@ -40,7 +39,7 @@ class ChatParrotLink(BaseChatModel):
Example:
.. code-block:: python
from langchain_community.chat_models import ChatParrotLink
from langchain_parrot_link import ChatParrotLink
model = ChatParrotLink()
"""
@@ -57,16 +56,9 @@ And add documentation to:
- `docs/docs/integrations/chat/parrot_link.ipynb`
## Partner package in LangChain repo
## Partner Packages
Partner packages can be hosted in the `LangChain` monorepo or in an external repo.
Partner package in the `LangChain` repo is placed in `libs/partners/{partner}`
and the package source code is in `libs/partners/{partner}/langchain_{partner}`.
A package is
installed by users with `pip install langchain-{partner}`, and the package members
can be imported with code like:
Partner packages are in `libs/partners/*` and are installed by users with `pip install langchain-{partner}`, and exported members can be imported with code like
```python
from langchain_{partner} import X
@@ -131,49 +123,13 @@ By default, this will include stubs for a Chat Model, an LLM, and/or a Vector St
### Write Unit and Integration Tests
Some basic tests are presented in the `tests/` directory. You should add more tests to cover your package's functionality.
Some basic tests are generated in the tests/ directory. You should add more tests to cover your package's functionality.
For information on running and implementing tests, see the [Testing guide](/docs/contributing/testing/).
For information on running and implementing tests, see the [Testing guide](./testing).
### Write documentation
Documentation is generated from Jupyter notebooks in the `docs/` directory. You should place the notebooks with examples
in the relevant `docs/docs/integrations` directory in the monorepo root.
### (If Necessary) Deprecate community integration
Note: this is only necessary if you're migrating an existing community integration into
a partner package. If the component you're integrating is net-new to LangChain (i.e.
not already in the `community` package), you can skip this step.
Let's pretend we migrated our `ChatParrotLink` chat model from the community package to
the partner package. We would need to deprecate the old model in the community package.
We would do that by adding a `@deprecated` decorator to the old model as follows, in
`libs/community/langchain_community/chat_models/parrot_link.py`.
Before our change, our chat model might look like this:
```python
class ChatParrotLink(BaseChatModel):
...
```
After our change, it would look like this:
```python
from langchain_core._api.deprecation import deprecated
@deprecated(
since="0.0.<next community version>",
removal="0.2.0",
alternative_import="langchain_parrot_link.ChatParrotLink"
)
class ChatParrotLink(BaseChatModel):
...
```
You should do this for *each* component that you're migrating to the partner package.
Documentation is generated from Jupyter notebooks in the `docs/` directory. You should move the generated notebooks to the relevant `docs/docs/integrations` directory in the monorepo root.
### Additional steps
@@ -187,12 +143,3 @@ Maintainer steps (Contributors should **not** do these):
- [ ] set up pypi and test pypi projects
- [ ] add credential secrets to Github Actions
- [ ] add package to conda-forge
## Partner package in external repo
Partner packages in external repos must be coordinated between the LangChain team and
the partner organization to ensure that they are maintained and updated.
If you're interested in creating a partner package in an external repo, please start
with one in the LangChain repo, and then reach out to the LangChain team to discuss
how to move it to an external repo.

View File

@@ -41,7 +41,7 @@ There are other files in the root directory level, but their presence should be
The `/docs` directory contains the content for the documentation that is shown
at https://python.langchain.com/ and the associated API Reference https://api.python.langchain.com/en/latest/langchain_api_reference.html.
See the [documentation](/docs/contributing/documentation/style_guide) guidelines to learn how to contribute to the documentation.
See the [documentation](./documentation) guidelines to learn how to contribute to the documentation.
## Code

View File

@@ -0,0 +1,205 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e89f490d",
"metadata": {},
"source": [
"# Agents\n",
"\n",
"You can pass a Runnable into an agent. Make sure you have `langchainhub` installed: `pip install langchainhub`"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "af4381de",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"from langchain.agents import AgentExecutor, tool\n",
"from langchain.agents.output_parsers import XMLAgentOutputParser\n",
"from langchain_community.chat_models import ChatAnthropic"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "24cc8134",
"metadata": {},
"outputs": [],
"source": [
"model = ChatAnthropic(model=\"claude-2\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "67c0b0e4",
"metadata": {},
"outputs": [],
"source": [
"@tool\n",
"def search(query: str) -> str:\n",
" \"\"\"Search things about current events.\"\"\"\n",
" return \"32 degrees\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7203b101",
"metadata": {},
"outputs": [],
"source": [
"tool_list = [search]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "b68e756d",
"metadata": {},
"outputs": [],
"source": [
"# Get the prompt to use - you can modify this!\n",
"prompt = hub.pull(\"hwchase17/xml-agent-convo\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "61ab3e9a",
"metadata": {},
"outputs": [],
"source": [
"# Logic for going from intermediate steps to a string to pass into model\n",
"# This is pretty tied to the prompt\n",
"def convert_intermediate_steps(intermediate_steps):\n",
" log = \"\"\n",
" for action, observation in intermediate_steps:\n",
" log += (\n",
" f\"<tool>{action.tool}</tool><tool_input>{action.tool_input}\"\n",
" f\"</tool_input><observation>{observation}</observation>\"\n",
" )\n",
" return log\n",
"\n",
"\n",
"# Logic for converting tools to string to go in prompt\n",
"def convert_tools(tools):\n",
" return \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])"
]
},
{
"cell_type": "markdown",
"id": "260f5988",
"metadata": {},
"source": [
"Building an agent from a runnable usually involves a few things:\n",
"\n",
"1. Data processing for the intermediate steps. These need to be represented in a way that the language model can recognize them. This should be pretty tightly coupled to the instructions in the prompt\n",
"\n",
"2. The prompt itself\n",
"\n",
"3. The model, complete with stop tokens if needed\n",
"\n",
"4. The output parser - should be in sync with how the prompt specifies things to be formatted."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e92f1d6f",
"metadata": {},
"outputs": [],
"source": [
"agent = (\n",
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"agent_scratchpad\": lambda x: convert_intermediate_steps(\n",
" x[\"intermediate_steps\"]\n",
" ),\n",
" }\n",
" | prompt.partial(tools=convert_tools(tool_list))\n",
" | model.bind(stop=[\"</tool_input>\", \"</final_answer>\"])\n",
" | XMLAgentOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "6ce6ec7a",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "fb5cb2e3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m <tool>search</tool><tool_input>weather in New York\u001b[0m\u001b[36;1m\u001b[1;3m32 degrees\u001b[0m\u001b[32;1m\u001b[1;3m <tool>search</tool>\n",
"<tool_input>weather in New York\u001b[0m\u001b[36;1m\u001b[1;3m32 degrees\u001b[0m\u001b[32;1m\u001b[1;3m <final_answer>The weather in New York is 32 degrees\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': 'whats the weather in New york?',\n",
" 'output': 'The weather in New York is 32 degrees'}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke({\"input\": \"whats the weather in New york?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bce86dd8",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,15 +1,5 @@
{
"cells": [
{
"cell_type": "raw",
"id": "1e997ab7",
"metadata": {},
"source": [
"---\n",
"sidebar_class_name: hidden\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f09fd305",

View File

@@ -0,0 +1,163 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "cf4fb76d-c534-485b-8b51-a0714ee3b82e",
"metadata": {},
"source": [
"# Routing by semantic similarity\n",
"\n",
"With LCEL you can easily add [custom routing logic](/docs/expression_language/how_to/routing#using-a-custom-function) to your chain to dynamically determine the chain logic based on user input. All you need to do is define a function that given an input returns a `Runnable`.\n",
"\n",
"One especially useful technique is to use embeddings to route a query to the most relevant prompt. Here's a very simple example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b793a0aa",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-core langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "eef9020a-5f7c-4291-98eb-fa73f17d4b92",
"metadata": {},
"outputs": [],
"source": [
"from langchain.utils.math import cosine_similarity\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"\n",
"physics_template = \"\"\"You are a very smart physics professor. \\\n",
"You are great at answering questions about physics in a concise and easy to understand manner. \\\n",
"When you don't know the answer to a question you admit that you don't know.\n",
"\n",
"Here is a question:\n",
"{query}\"\"\"\n",
"\n",
"math_template = \"\"\"You are a very good mathematician. You are great at answering math questions. \\\n",
"You are so good because you are able to break down hard problems into their component parts, \\\n",
"answer the component parts, and then put them together to answer the broader question.\n",
"\n",
"Here is a question:\n",
"{query}\"\"\"\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"prompt_templates = [physics_template, math_template]\n",
"prompt_embeddings = embeddings.embed_documents(prompt_templates)\n",
"\n",
"\n",
"def prompt_router(input):\n",
" query_embedding = embeddings.embed_query(input[\"query\"])\n",
" similarity = cosine_similarity([query_embedding], prompt_embeddings)[0]\n",
" most_similar = prompt_templates[similarity.argmax()]\n",
" print(\"Using MATH\" if most_similar == math_template else \"Using PHYSICS\")\n",
" return PromptTemplate.from_template(most_similar)\n",
"\n",
"\n",
"chain = (\n",
" {\"query\": RunnablePassthrough()}\n",
" | RunnableLambda(prompt_router)\n",
" | ChatOpenAI()\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4d22b0f3-24f2-4a47-9440-065b57ebcdbd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using PHYSICS\n",
"A black hole is a region in space where gravity is extremely strong, so strong that nothing, not even light, can escape its gravitational pull. It is formed when a massive star collapses under its own gravity during a supernova explosion. The collapse causes an incredibly dense mass to be concentrated in a small volume, creating a gravitational field that is so intense that it warps space and time. Black holes have a boundary called the event horizon, which marks the point of no return for anything that gets too close. Beyond the event horizon, the gravitational pull is so strong that even light cannot escape, hence the name \"black hole.\" While we have a good understanding of black holes, there is still much to learn, especially about what happens inside them.\n"
]
}
],
"source": [
"print(chain.invoke(\"What's a black hole\"))"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "f261910d-1de1-4a01-8c8a-308db02b81de",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using MATH\n",
"Thank you for your kind words! I will do my best to break down the concept of a path integral for you.\n",
"\n",
"In mathematics and physics, a path integral is a mathematical tool used to calculate the probability amplitude or wave function of a particle or system of particles. It was introduced by Richard Feynman and is an integral over all possible paths that a particle can take to go from an initial state to a final state.\n",
"\n",
"To understand the concept better, let's consider an example. Suppose we have a particle moving from point A to point B in space. Classically, we would describe this particle's motion using a definite trajectory, but in quantum mechanics, particles can simultaneously take multiple paths from A to B.\n",
"\n",
"The path integral formalism considers all possible paths that the particle could take and assigns a probability amplitude to each path. These probability amplitudes are then added up, taking into account the interference effects between different paths.\n",
"\n",
"To calculate a path integral, we need to define an action, which is a mathematical function that describes the behavior of the system. The action is usually expressed in terms of the particle's position, velocity, and time.\n",
"\n",
"Once we have the action, we can write down the path integral as an integral over all possible paths. Each path is weighted by a factor determined by the action and the principle of least action, which states that a particle takes a path that minimizes the action.\n",
"\n",
"Mathematically, the path integral is expressed as:\n",
"\n",
"∫ e^(iS/ħ) D[x(t)]\n",
"\n",
"Here, S is the action, ħ is the reduced Planck's constant, and D[x(t)] represents the integration over all possible paths x(t) of the particle.\n",
"\n",
"By evaluating this integral, we can obtain the probability amplitude for the particle to go from the initial state to the final state. The absolute square of this amplitude gives us the probability of finding the particle in a particular state.\n",
"\n",
"Path integrals have proven to be a powerful tool in various areas of physics, including quantum mechanics, quantum field theory, and statistical mechanics. They allow us to study complex systems and calculate probabilities that are difficult to obtain using other methods.\n",
"\n",
"I hope this explanation helps you understand the concept of a path integral. If you have any further questions, feel free to ask!\n"
]
}
],
"source": [
"print(chain.invoke(\"What's a path integral\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0c1732a-01ca-4d10-977c-29ed7480972b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,11 @@
---
sidebar_position: 3
---
# Cookbook
import DocCardList from "@theme/DocCardList";
Example code for accomplishing common tasks with the LangChain Expression Language (LCEL). These examples show how to compose different Runnable (the core LCEL interface) components to achieve various tasks. If you're just getting acquainted with LCEL, the [Prompt + LLM](/docs/expression_language/cookbook/prompt_llm_parser) page is a good place to start.
<DocCardList />

View File

@@ -0,0 +1,194 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5062941a",
"metadata": {},
"source": [
"# Adding memory\n",
"\n",
"This shows how to add memory to an arbitrary chain. Right now, you can use the memory classes but need to hook it up manually"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18753dee",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "7998efd8",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI()\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful chatbot\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fa0087f3",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(return_messages=True)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "06b531ae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': []}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d9437af6",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
" RunnablePassthrough.assign(\n",
" history=RunnableLambda(memory.load_memory_variables) | itemgetter(\"history\")\n",
" )\n",
" | prompt\n",
" | model\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "bed1e260",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello Bob! How can I assist you today?', additional_kwargs={}, example=False)"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"input\": \"hi im bob\"}\n",
"response = chain.invoke(inputs)\n",
"response"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "890475b4",
"metadata": {},
"outputs": [],
"source": [
"memory.save_context(inputs, {\"output\": response.content})"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e8fcb77f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [HumanMessage(content='hi im bob', additional_kwargs={}, example=False),\n",
" AIMessage(content='Hello Bob! How can I assist you today?', additional_kwargs={}, example=False)]}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "d837d5c3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Your name is Bob.', additional_kwargs={}, example=False)"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"input\": \"whats my name\"}\n",
"response = chain.invoke(inputs)\n",
"response"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -5,19 +5,9 @@
"id": "4927a727-b4c8-453c-8c83-bd87b4fcac14",
"metadata": {},
"source": [
"# Moderation chain\n",
"# Adding moderation\n",
"\n",
"This notebook walks through examples of how to use a moderation chain, and several common ways for doing so. \n",
"Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be useful to apply on both user input, but also on the output of a Language Model. \n",
"Some API providers specifically prohibit you, or your end users, from generating some \n",
"types of harmful content. To comply with this (and to just generally prevent your application from being harmful) \n",
"you may want to add a moderation chain to your sequences in order to make sure any output \n",
"the LLM generates is not harmful.\n",
"\n",
"If the content passed into the moderation chain is harmful, there is not one best way to handle it.\n",
"It probably depends on your application. Sometimes you may want to throw an error \n",
"(and have your application handle that). Other times, you may want to return something to \n",
"the user explaining that the text was harmful."
"This shows how to add in moderation (or other safeguards) around your LLM application."
]
},
{
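Since the hunk above trims the notebook's code cells, here is a minimal sketch of what composing a moderation step into an LCEL chain looks like, assuming `OpenAIModerationChain` from `langchain.chains` and an OpenAI API key in the environment:

```python
from langchain.chains import OpenAIModerationChain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import OpenAI

# The moderation chain screens text via the OpenAI moderation endpoint.
moderate = OpenAIModerationChain()

prompt = ChatPromptTemplate.from_messages([("system", "repeat after me: {input}")])
model = OpenAI()

# Piping the model output through the moderation step guards the final answer;
# flagged text is replaced with a policy message (or raises, with error=True).
moderated_chain = prompt | model | moderate
```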

View File

@@ -20,11 +20,9 @@
]
},
{
"cell_type": "code",
"cell_type": "raw",
"id": "0f316b5c",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]

View File

@@ -34,7 +34,7 @@
"from langchain.agents import AgentExecutor, load_tools\n",
"from langchain.agents.format_scratchpad import format_to_openai_function_messages\n",
"from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser\n",
"from langchain_community.tools import WikipediaQueryRun\n",
"from langchain.tools import WikipediaQueryRun\n",
"from langchain_community.utilities import WikipediaAPIWrapper\n",
"from langchain_core.prompt_values import ChatPromptValue\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
@@ -220,7 +220,7 @@
"id": "637f994a-5134-402a-bcf0-4de3911eaf49",
"metadata": {},
"source": [
":::{.callout-tip}\n",
":::tip\n",
"\n",
"[LangSmith trace](https://smith.langchain.com/public/60909eae-f4f1-43eb-9f96-354f5176f66f/r)\n",
"\n",
@@ -388,7 +388,7 @@
"id": "5a7e498b-dc68-4267-a35c-90ceffa91c46",
"metadata": {},
"source": [
":::{.callout-tip}\n",
":::tip\n",
"\n",
"[LangSmith trace](https://smith.langchain.com/public/3b27d47f-e4df-4afb-81b1-0f88b80ca97e/r)\n",
"\n",

View File

@@ -0,0 +1,492 @@
{
"cells": [
{
"cell_type": "raw",
"id": "abe47592-909c-4844-bf44-9e55c2fb4bfa",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"title: RAG\n",
"---\n"
]
},
{
"cell_type": "markdown",
"id": "91c5ef3d",
"metadata": {},
"source": [
"Let's look at adding in a retrieval step to a prompt and LLM, which adds up to a \"retrieval-augmented generation\" chain"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "7f25d9e9-d192-42e9-af50-5660a4bfb0d9",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai faiss-cpu tiktoken"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "33be32af",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "bfc47ec1",
"metadata": {},
"outputs": [],
"source": [
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"model = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "eae31755",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | model\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f3040b0c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Harrison worked at Kensho.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e1d20c7c",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\n",
"Answer in the following language: {language}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"chain = (\n",
" {\n",
" \"context\": itemgetter(\"question\") | retriever,\n",
" \"question\": itemgetter(\"question\"),\n",
" \"language\": itemgetter(\"language\"),\n",
" }\n",
" | prompt\n",
" | model\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7ee8b2d4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Harrison ha lavorato a Kensho.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})"
]
},
{
"cell_type": "markdown",
"id": "f007669c",
"metadata": {},
"source": [
"## Conversational Retrieval Chain\n",
"\n",
"We can easily add in conversation history. This primarily means adding in chat_message_history"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "3f30c348",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage, get_buffer_string\n",
"from langchain_core.prompts import format_document\n",
"from langchain_core.runnables import RunnableParallel"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "64ab1dbf",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"_template = \"\"\"Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n",
"\n",
"Chat History:\n",
"{chat_history}\n",
"Follow Up Input: {question}\n",
"Standalone question:\"\"\"\n",
"CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "7d628c97",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"ANSWER_PROMPT = ChatPromptTemplate.from_template(template)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f60a5d0f",
"metadata": {},
"outputs": [],
"source": [
"DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template=\"{page_content}\")\n",
"\n",
"\n",
"def _combine_documents(\n",
" docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator=\"\\n\\n\"\n",
"):\n",
" doc_strings = [format_document(doc, document_prompt) for doc in docs]\n",
" return document_separator.join(doc_strings)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "5c32cc89",
"metadata": {},
"outputs": [],
"source": [
"_inputs = RunnableParallel(\n",
" standalone_question=RunnablePassthrough.assign(\n",
" chat_history=lambda x: get_buffer_string(x[\"chat_history\"])\n",
" )\n",
" | CONDENSE_QUESTION_PROMPT\n",
" | ChatOpenAI(temperature=0)\n",
" | StrOutputParser(),\n",
")\n",
"_context = {\n",
" \"context\": itemgetter(\"standalone_question\") | retriever | _combine_documents,\n",
" \"question\": lambda x: x[\"standalone_question\"],\n",
"}\n",
"conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "135c8205",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Harrison was employed at Kensho.')"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversational_qa_chain.invoke(\n",
" {\n",
" \"question\": \"where did harrison work?\",\n",
" \"chat_history\": [],\n",
" }\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "424e7e7a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Harrison worked at Kensho.')"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversational_qa_chain.invoke(\n",
" {\n",
" \"question\": \"where did he work?\",\n",
" \"chat_history\": [\n",
" HumanMessage(content=\"Who wrote this notebook?\"),\n",
" AIMessage(content=\"Harrison\"),\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c5543183",
"metadata": {},
"source": [
"### With Memory and returning source documents\n",
"\n",
"This shows how to use memory with the above. For memory, we need to manage that outside at the memory. For returning the retrieved documents, we just need to pass them through all the way."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "e31dd17c",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.memory import ConversationBufferMemory"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "d4bffe94",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(\n",
" return_messages=True, output_key=\"answer\", input_key=\"question\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "733be985",
"metadata": {},
"outputs": [],
"source": [
"# First we add a step to load memory\n",
"# This adds a \"memory\" key to the input object\n",
"loaded_memory = RunnablePassthrough.assign(\n",
" chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter(\"history\"),\n",
")\n",
"# Now we calculate the standalone question\n",
"standalone_question = {\n",
" \"standalone_question\": {\n",
" \"question\": lambda x: x[\"question\"],\n",
" \"chat_history\": lambda x: get_buffer_string(x[\"chat_history\"]),\n",
" }\n",
" | CONDENSE_QUESTION_PROMPT\n",
" | ChatOpenAI(temperature=0)\n",
" | StrOutputParser(),\n",
"}\n",
"# Now we retrieve the documents\n",
"retrieved_documents = {\n",
" \"docs\": itemgetter(\"standalone_question\") | retriever,\n",
" \"question\": lambda x: x[\"standalone_question\"],\n",
"}\n",
"# Now we construct the inputs for the final prompt\n",
"final_inputs = {\n",
" \"context\": lambda x: _combine_documents(x[\"docs\"]),\n",
" \"question\": itemgetter(\"question\"),\n",
"}\n",
"# And finally, we do the part that returns the answers\n",
"answer = {\n",
" \"answer\": final_inputs | ANSWER_PROMPT | ChatOpenAI(),\n",
" \"docs\": itemgetter(\"docs\"),\n",
"}\n",
"# And now we put it all together!\n",
"final_chain = loaded_memory | standalone_question | retrieved_documents | answer"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "806e390c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'answer': AIMessage(content='Harrison was employed at Kensho.'),\n",
" 'docs': [Document(page_content='harrison worked at kensho')]}"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"question\": \"where did harrison work?\"}\n",
"result = final_chain.invoke(inputs)\n",
"result"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "977399fd",
"metadata": {},
"outputs": [],
"source": [
"# Note that the memory does not save automatically\n",
"# This will be improved in the future\n",
"# For now you need to save it yourself\n",
"memory.save_context(inputs, {\"answer\": result[\"answer\"].content})"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "f94f7de4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [HumanMessage(content='where did harrison work?'),\n",
" AIMessage(content='Harrison was employed at Kensho.')]}"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "88f2b7cd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'answer': AIMessage(content='Harrison actually worked at Kensho.'),\n",
" 'docs': [Document(page_content='harrison worked at kensho')]}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"question\": \"but where did he really work?\"}\n",
"result = final_chain.invoke(inputs)\n",
"result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "207a2782",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,223 @@
{
"cells": [
{
"cell_type": "raw",
"id": "c14da114-1a4a-487d-9cff-e0e8c30ba366",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 3\n",
"title: Querying a SQL DB\n",
"---\n"
]
},
{
"cell_type": "markdown",
"id": "506e9636",
"metadata": {},
"source": [
"We can replicate our SQLDatabaseChain with Runnables."
]
},
{
"cell_type": "raw",
"id": "b3121aa8",
"metadata": {},
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "7a927516",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"template = \"\"\"Based on the table schema below, write a SQL query that would answer the user's question:\n",
"{schema}\n",
"\n",
"Question: {question}\n",
"SQL Query:\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3f51f386",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import SQLDatabase"
]
},
{
"cell_type": "markdown",
"id": "7c3449d6-684b-416e-ba16-90a035835a88",
"metadata": {},
"source": [
"We'll need the Chinook sample DB for this example. There's many places to download it from, e.g. https://database.guide/2-sample-databases-sqlite/"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "2ccca6fc",
"metadata": {},
"outputs": [],
"source": [
"db = SQLDatabase.from_uri(\"sqlite:///./Chinook.db\")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "05ba88ee",
"metadata": {},
"outputs": [],
"source": [
"def get_schema(_):\n",
" return db.get_table_info()"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "a4eda902",
"metadata": {},
"outputs": [],
"source": [
"def run_query(query):\n",
" return db.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "5046cb17",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI()\n",
"\n",
"sql_response = (\n",
" RunnablePassthrough.assign(schema=get_schema)\n",
" | prompt\n",
" | model.bind(stop=[\"\\nSQLResult:\"])\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "a5552039",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'SELECT COUNT(*) FROM Employee'"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sql_response.invoke({\"question\": \"How many employees are there?\"})"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "d6fee130",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Based on the table schema below, question, sql query, and sql response, write a natural language response:\n",
"{schema}\n",
"\n",
"Question: {question}\n",
"SQL Query: {query}\n",
"SQL Response: {response}\"\"\"\n",
"prompt_response = ChatPromptTemplate.from_template(template)"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "923aa634",
"metadata": {},
"outputs": [],
"source": [
"full_chain = (\n",
" RunnablePassthrough.assign(query=sql_response).assign(\n",
" schema=get_schema,\n",
" response=lambda x: db.run(x[\"query\"]),\n",
" )\n",
" | prompt_response\n",
" | model\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "e94963d8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='There are 8 employees.', additional_kwargs={}, example=False)"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"full_chain.invoke({\"question\": \"How many employees are there?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4f358d7b-a721-4db3-9f92-f06913428afc",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,122 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "29781123",
"metadata": {},
"source": [
"# Using tools\n",
"\n",
"You can use any Tools with Runnables easily."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a5c579dd-2e22-41b0-a789-346dfdecb5a2",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai duckduckgo-search"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9232d2a9",
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools import DuckDuckGoSearchRun\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a0c64d2c",
"metadata": {},
"outputs": [],
"source": [
"search = DuckDuckGoSearchRun()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "391969b6",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"turn the following user input into a search query for a search engine:\n",
"\n",
"{input}\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"model = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e3d9d20d",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model | StrOutputParser() | search"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "55f2967d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'What sports games are on TV today & tonight? Watch and stream live sports on TV today, tonight, tomorrow. Today\\'s 2023 sports TV schedule includes football, basketball, baseball, hockey, motorsports, soccer and more. Watch on TV or stream online on ESPN, FOX, FS1, CBS, NBC, ABC, Peacock, Paramount+, fuboTV, local channels and many other networks. MLB Games Tonight: How to Watch on TV, Streaming & Odds - Thursday, September 7. Seattle Mariners\\' Julio Rodriguez greets teammates in the dugout after scoring against the Oakland Athletics in a ... Circle - Country Music and Lifestyle. Live coverage of all the MLB action today is available to you, with the information provided below. The Brewers will look to pick up a road win at PNC Park against the Pirates on Wednesday at 12:35 PM ET. Check out the latest odds and with BetMGM Sportsbook. Use bonus code \"GNPLAY\" for special offers! MLB Games Tonight: How to Watch on TV, Streaming & Odds - Tuesday, September 5. Houston Astros\\' Kyle Tucker runs after hitting a double during the fourth inning of a baseball game against the Los Angeles Angels, Sunday, Aug. 13, 2023, in Houston. (AP Photo/Eric Christian Smith) (APMedia) The Houston Astros versus the Texas Rangers is one of ... The second half of tonight\\'s college football schedule still has some good games remaining to watch on your television.. We\\'ve already seen an exciting one when Colorado upset TCU. And we saw some ...'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"input\": \"I'd like to figure out what games are tonight\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a16949cf-00ea-43c6-a6aa-797ad4f6918d",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -31,42 +31,13 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "raw",
"id": "278b0027",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-core langchain-community langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "c3d54f72",
"metadata": {},
"source": [
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f9eed8e8",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4\")"
]
},
{
"cell_type": "code",
"execution_count": 1,
@@ -87,8 +58,10 @@
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"tell me a short joke about {topic}\")\n",
"model = ChatOpenAI(model=\"gpt-4\")\n",
"output_parser = StrOutputParser()\n",
"\n",
"chain = prompt | model | output_parser\n",
@@ -101,15 +74,15 @@
"id": "81c502c5-85ee-4f36-aaf4-d6e350b7792f",
"metadata": {},
"source": [
"Notice this line of the code, where we piece together these different components into a single chain using LCEL:\n",
"Notice this line of this code, where we piece together then different components into a single chain using LCEL:\n",
"\n",
"```\n",
"chain = prompt | model | output_parser\n",
"```\n",
"\n",
"The `|` symbol is similar to a [unix pipe operator](https://en.wikipedia.org/wiki/Pipeline_(Unix)), which chains together the different components, feeding the output from one component as input into the next component. \n",
"The `|` symbol is similar to a [unix pipe operator](https://en.wikipedia.org/wiki/Pipeline_(Unix)), which chains together the different components feeds the output from one component as input into the next component. \n",
"\n",
"In this chain the user input is passed to the prompt template, then the prompt template output is passed to the model, then the model output is passed to the output parser. Let's take a look at each component individually to really understand what's going on."
"In this chain the user input is passed to the prompt template, then the prompt template output is passed to the model, then the model output is passed to the output parser. Let's take a look at each component individually to really understand what's going on. "
]
},
{
@@ -244,7 +217,7 @@
}
],
"source": [
"from langchain_openai import OpenAI\n",
"from langchain_openai.llms import OpenAI\n",
"\n",
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\")\n",
"llm.invoke(prompt_value)"
@@ -258,7 +231,7 @@
"### 3. Output parser\n",
"\n",
"And lastly we pass our `model` output to the `output_parser`, which is a `BaseOutputParser` meaning it takes either a string or a \n",
"`BaseMessage` as input. The specific `StrOutputParser` simply converts any input into a string."
"`BaseMessage` as input. The `StrOutputParser` specifically simple converts any input into a string."
]
},
{
@@ -318,7 +291,7 @@
"source": [
":::info\n",
"\n",
"Note that if youre curious about the output of any components, you can always test out a smaller version of the chain such as `prompt` or `prompt | model` to see the intermediate results:\n",
"Note that if youre curious about the output of any components, you can always test out a smaller version of the chain such as `prompt` or `prompt | model` to see the intermediate results:\n",
"\n",
":::"
]
@@ -346,17 +319,7 @@
"source": [
"## RAG Search Example\n",
"\n",
"For our next example, we want to run a retrieval-augmented generation chain to add some context when responding to questions."
]
},
{
"cell_type": "markdown",
"id": "b8fe8eb4",
"metadata": {},
"source": [
"```{=mdx}\n",
"<ChatModelTabs />\n",
"```"
"For our next example, we want to run a retrieval-augmented generation chain to add some context when responding to questions. "
]
},
{
@@ -373,7 +336,8 @@
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"from langchain_openai.embeddings import OpenAIEmbeddings\n",
"\n",
"vectorstore = DocArrayInMemorySearch.from_texts(\n",
" [\"harrison worked at kensho\", \"bears like to eat honey\"],\n",
@@ -387,6 +351,7 @@
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"model = ChatOpenAI()\n",
"output_parser = StrOutputParser()\n",
"\n",
"setup_and_retrieval = RunnableParallel(\n",
@@ -440,7 +405,7 @@
"id": "e6833844-f1c4-444c-a3d2-31b3c6b31d46",
"metadata": {},
"source": [
"We then use the `RunnableParallel` to prepare the expected inputs into the prompt by using the entries for the retrieved documents as well as the original user question, using the retriever for document search, and `RunnablePassthrough` to pass the users question:"
"We then use the `RunnableParallel` to prepare the expected inputs into the prompt by using the entries for the retrieved documents as well as the original user question, using the retriever for document search, and RunnablePassthrough to pass the users question:"
]
},
{
@@ -484,7 +449,7 @@
"With the flow being:\n",
"\n",
"1. The first steps create a `RunnableParallel` object with two entries. The first entry, `context` will include the document results fetched by the retriever. The second entry, `question` will contain the users original question. To pass on the question, we use `RunnablePassthrough` to copy this entry. \n",
"2. Feed the dictionary from the step above to the `prompt` component. It then takes the user input which is `question` as well as the retrieved document which is `context` to construct a prompt and output a PromptValue. \n",
"2. Feed the dictionary from the step above to the `prompt` component. It then takes the user input which is `question` as well as the retrieved document which is `context` to construct a prompt and output a PromptValue. \n",
"3. The `model` component takes the generated prompt, and passes into the OpenAI LLM model for evaluation. The generated output from the model is a `ChatMessage` object. \n",
"4. Finally, the `output_parser` component takes in a `ChatMessage`, and transforms this into a Python string, which is returned from the invoke method.\n",
"\n",
@@ -509,7 +474,7 @@
"source": [
"## Next steps\n",
"\n",
"We recommend reading our [Advantages of LCEL](/docs/expression_language/why) section next to see a side-by-side comparison of the code needed to produce common functionality with and without LCEL."
"We recommend reading our [Why use LCEL](/docs/expression_language/why) section next to see a side-by-side comparison of the code needed to produce common functionality with and without LCEL."
]
}
],
@@ -529,7 +494,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.0"
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -1,25 +1,13 @@
{
"cells": [
{
"cell_type": "raw",
"id": "fe63ffaf",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"title: \"Binding: Attach runtime args\"\n",
"keywords: [RunnableBinding, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "711752cb-4f15-42a3-9838-a0c67f397771",
"metadata": {},
"source": [
"# Binding: Attach runtime args\n",
"# Bind runtime args\n",
"\n",
"Sometimes we want to invoke a Runnable within a Runnable sequence with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use `Runnable.bind()` to pass these arguments in.\n",
"Sometimes we want to invoke a Runnable within a Runnable sequence with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use `Runnable.bind()` to easily pass these arguments in.\n",
"\n",
"Suppose we have a simple prompt + model sequence:"
]
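The hunk cuts off before the sequence it references; as a reference, here is a minimal sketch of what `.bind()` does in that notebook, with an illustrative prompt and stop token:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Write out the following equation using algebraic symbols then solve it: {equation_statement}"
)
# bind() attaches constant invocation kwargs -- here a stop sequence --
# without changing what the chain accepts as input or returns as output.
runnable = prompt | ChatOpenAI(temperature=0).bind(stop="SOLUTION") | StrOutputParser()
runnable.invoke({"equation_statement": "x raised to the third plus seven equals 12"})
```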

View File

@@ -1,17 +1,5 @@
{
"cells": [
{
"cell_type": "raw",
"id": "9ede5870",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 7\n",
"title: \"Configure runtime chain internals\"\n",
"keywords: [ConfigurableField, configurable_fields, ConfigurableAlternatives, configurable_alternatives, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "39eaf61b",
@@ -569,7 +557,7 @@
"metadata": {},
"outputs": [],
"source": [
"openai_joke = chain.with_config(configurable={\"llm\": \"openai\"})"
"openai_poem = chain.with_config(configurable={\"llm\": \"openai\"})"
]
},
{
@@ -590,7 +578,7 @@
}
],
"source": [
"openai_joke.invoke({\"topic\": \"bears\"})"
"openai_poem.invoke({\"topic\": \"bears\"})"
]
},
{
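The hunk above only shows a renamed variable; for context, a minimal sketch of the `configurable_alternatives` setup it exercises (the specific model choices are illustrative):

```python
from langchain_community.chat_models import ChatAnthropic
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

# Register runtime-selectable alternatives under the "llm" config key.
llm = ChatAnthropic(model="claude-2").configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)
chain = PromptTemplate.from_template("Write a poem about {topic}") | llm

# Select the OpenAI alternative at runtime, as in the notebook:
openai_poem = chain.with_config(configurable={"llm": "openai"})
openai_poem.invoke({"topic": "bears"})
```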

View File

@@ -5,9 +5,9 @@
"id": "b45110ef",
"metadata": {},
"source": [
"# Create a runnable with the @chain decorator\n",
"# Create a runnable with the `@chain` decorator\n",
"\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionaly equivalent to wrapping in a [`RunnableLambda`](/docs/expression_language/primitives/functions).\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionaly equivalent to wrapping in a [`RunnableLambda`](./functions).\n",
"\n",
"This will have the benefit of improved observability by tracing your chain correctly. Any calls to runnables inside this function will be traced as nested childen.\n",
"\n",

Some files were not shown because too many files have changed in this diff.