Mirror of https://github.com/hwchase17/langchain.git (synced 2026-02-05 16:50:03 +00:00)

Compare commits: v0.1.7 ... erick/rele (1 commit)
| Author | SHA1 | Date |
|---|---|---|
|  | 7aec5970ee |  |
.github/ISSUE_TEMPLATE/documentation.yml (vendored, 36 lines changed)
@@ -4,45 +4,13 @@ title: "DOC: <Please write a comprehensive title after the 'DOC: ' prefix>"
labels: [03 - Documentation]

body:
  - type: markdown
    attributes:
      value: >
        Thank you for taking the time to report an issue in the documentation.

        Only report issues with documentation here, explain if there are
        any missing topics or if you found a mistake in the documentation.

        Do **NOT** use this to ask usage questions or to report issues with your code.

        If you have usage questions or need help solving some problem,
        please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions).

        If you're in the wrong place, here are some helpful links to find a better
        place to ask your question:

        [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
        [API Reference](https://api.python.langchain.com/en/stable/),
        [GitHub search](https://github.com/langchain-ai/langchain),
        [LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
        [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
        [LangChain ChatBot](https://chat.langchain.com/)
  - type: checkboxes
    id: checks
    attributes:
      label: Checklist
      description: Please confirm and check all the following options.
      options:
        - label: I added a very descriptive title to this issue.
          required: true
        - label: I included a link to the documentation page I am referring to (if applicable).
          required: true
  - type: textarea
    attributes:
      label: "Issue with current documentation:"
      description: >
        Please make sure to leave a reference to the document/code you're
        referring to. Feel free to include names of classes, functions, methods
        or concepts you'd like to see documented more.
        referring to.

  - type: textarea
    attributes:
      label: "Idea or request for content:"
.github/PULL_REQUEST_TEMPLATE.md (vendored, 34 lines changed)
@@ -1,24 +1,20 @@
Thank you for contributing to LangChain!
<!-- Thank you for contributing to LangChain!

Checklist:
Please title your PR "<package>: <description>", where <package> is whichever of langchain, community, core, experimental, etc. is being modified.

- [ ] PR title: Please title your PR "package: description", where "package" is whichever of langchain, community, core, experimental, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
  - Example: "community: add foobar LLM"
- [ ] PR message: **Delete this entire template message** and replace it with the following bulleted list
  - **Description:** a description of the change
  - **Issue:** the issue # it fixes, if applicable
  - **Dependencies:** any dependencies required for this change
  - **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!
- [ ] Pass lint and test: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified to check that you're passing lint and testing. See contribution guidelines for more information on how to write/run tests, lint, etc: https://python.langchain.com/docs/contributing/
- [ ] Add tests and docs: If you're adding a new integration, please include
Replace this entire comment with:
  - **Description:** a description of the change,
  - **Issue:** the issue # it fixes if applicable,
  - **Dependencies:** any dependencies required for this change,
  - **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!

Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` from the root of the package you've modified to check this locally.

See contribution guidelines for more information on how to write/run tests, lint, etc: https://python.langchain.com/docs/contributing/

If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in the `docs/docs/integrations` directory.

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.

If no one reviews your PR within a few days, please @-mention one of @baskaryan, @efriis, @eyurtsev, @hwchase17.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->
.github/actions/poetry_setup/action.yml (vendored, 4 lines changed)
@@ -32,7 +32,7 @@ runs:
    with:
      python-version: ${{ inputs.python-version }}

  - uses: actions/cache@v4
  - uses: actions/cache@v3
    id: cache-bin-poetry
    name: Cache Poetry binary - Python ${{ inputs.python-version }}
    env:
@@ -79,7 +79,7 @@ runs:
    run: pipx install "poetry==$POETRY_VERSION" --python '${{ steps.setup-python.outputs.python-path }}' --verbose

  - name: Restore pip and poetry cached dependencies
    uses: actions/cache@v4
    uses: actions/cache@v3
    env:
      SEGMENT_DOWNLOAD_TIMEOUT_MIN: "4"
      WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
.github/scripts/check_diff.py (vendored, 10 lines changed)
@@ -36,7 +36,13 @@ if __name__ == "__main__":
        elif "libs/partners" in file:
            partner_dir = file.split("/")[2]
            if os.path.isdir(f"libs/partners/{partner_dir}"):
                dirs_to_run.add(f"libs/partners/{partner_dir}")
            dirs_to_run.update(
                (
                    f"libs/partners/{partner_dir}",
                    "libs/langchain",
                    "libs/experimental",
                )
            )
            # Skip if the directory was deleted
        elif "libs/langchain" in file:
            dirs_to_run.update(("libs/langchain", "libs/experimental"))
@@ -47,4 +53,4 @@ if __name__ == "__main__":
        else:
            pass
    json_output = json.dumps(list(dirs_to_run))
    print(f"dirs-to-run={json_output}")  # noqa: T201
    print(f"dirs-to-run={json_output}")
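For reference, a minimal sketch of what the new `dirs_to_run.update(...)` branch produces (the changed-file paths here are illustrative, not taken from the diff). Because `dirs_to_run` is a set, `libs/langchain` and `libs/experimental` are emitted only once even when several partner packages change:

```python
import json

# Hypothetical changed files; the real script receives these from the CI diff.
files = ["libs/partners/openai/pyproject.toml", "libs/partners/exa/README.md"]

dirs_to_run = set()
for file in files:
    if "libs/partners" in file:
        partner_dir = file.split("/")[2]
        # Mirrors the new branch in check_diff.py above:
        dirs_to_run.update(
            (f"libs/partners/{partner_dir}", "libs/langchain", "libs/experimental")
        )

print(f"dirs-to-run={json.dumps(list(dirs_to_run))}")
# e.g. dirs-to-run=["libs/partners/exa", "libs/partners/openai", "libs/langchain", "libs/experimental"]
# (set ordering is not deterministic)
```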
.github/scripts/get_min_versions.py (vendored, 67 lines changed)
@@ -1,67 +0,0 @@
import sys

import tomllib
from packaging.version import parse as parse_version
import re

MIN_VERSION_LIBS = ["langchain-core", "langchain-community", "langchain"]


def get_min_version(version: str) -> str:
    # case ^x.x.x
    _match = re.match(r"^\^(\d+(?:\.\d+){0,2})$", version)
    if _match:
        return _match.group(1)

    # case >=x.x.x,<y.y.y
    _match = re.match(r"^>=(\d+(?:\.\d+){0,2}),<(\d+(?:\.\d+){0,2})$", version)
    if _match:
        _min = _match.group(1)
        _max = _match.group(2)
        assert parse_version(_min) < parse_version(_max)
        return _min

    # case x.x.x
    _match = re.match(r"^(\d+(?:\.\d+){0,2})$", version)
    if _match:
        return _match.group(1)

    raise ValueError(f"Unrecognized version format: {version}")


def get_min_version_from_toml(toml_path: str):
    # Parse the TOML file
    with open(toml_path, "rb") as file:
        toml_data = tomllib.load(file)

    # Get the dependencies from tool.poetry.dependencies
    dependencies = toml_data["tool"]["poetry"]["dependencies"]

    # Initialize a dictionary to store the minimum versions
    min_versions = {}

    # Iterate over the libs in MIN_VERSION_LIBS
    for lib in MIN_VERSION_LIBS:
        # Check if the lib is present in the dependencies
        if lib in dependencies:
            # Get the version string
            version_string = dependencies[lib]

            # Use parse_version to get the minimum supported version from version_string
            min_version = get_min_version(version_string)

            # Store the minimum version in the min_versions dictionary
            min_versions[lib] = min_version

    return min_versions


# Get the TOML file path from the command line argument
toml_file = sys.argv[1]

# Call the function to get the minimum versions
min_versions = get_min_version_from_toml(toml_file)

print(
    " ".join([f"{lib}=={version}" for lib, version in min_versions.items()])
)  # noqa: T201
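For reference, the three version formats handled by `get_min_version` above behave like this (the version strings are illustrative values, not taken from any pyproject.toml in this diff):

```python
# Caret constraint: the minimum is the stated version.
assert get_min_version("^0.1.7") == "0.1.7"
# Range constraint: the minimum is the lower bound (the assert checks it is below the upper bound).
assert get_min_version(">=0.0.16,<0.1") == "0.0.16"
# Exact pin: the minimum is the pin itself.
assert get_min_version("0.1.4") == "0.1.4"
```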
.github/workflows/_all_ci.yml (vendored, 6 lines changed)
@@ -36,35 +36,30 @@ env:

jobs:
  lint:
    name: "-"
    uses: ./.github/workflows/_lint.yml
    with:
      working-directory: ${{ inputs.working-directory }}
    secrets: inherit

  test:
    name: "-"
    uses: ./.github/workflows/_test.yml
    with:
      working-directory: ${{ inputs.working-directory }}
    secrets: inherit

  compile-integration-tests:
    name: "-"
    uses: ./.github/workflows/_compile_integration_test.yml
    with:
      working-directory: ${{ inputs.working-directory }}
    secrets: inherit

  dependencies:
    name: "-"
    uses: ./.github/workflows/_dependencies.yml
    with:
      working-directory: ${{ inputs.working-directory }}
    secrets: inherit

  extended-tests:
    name: "make extended_tests #${{ matrix.python-version }}"
    runs-on: ubuntu-latest
    strategy:
      matrix:
@@ -73,6 +68,7 @@ jobs:
        - "3.9"
        - "3.10"
        - "3.11"
    name: Python ${{ matrix.python-version }} extended tests
    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}

.github/workflows/_compile_integration_test.yml (vendored)

@@ -24,7 +24,7 @@ jobs:
        - "3.9"
        - "3.10"
        - "3.11"
    name: "poetry run pytest -m compile tests/integration_tests #${{ matrix.python-version }}"
    name: Python ${{ matrix.python-version }}
    steps:
      - uses: actions/checkout@v4
.github/workflows/_dependencies.yml (vendored, 2 lines changed)
@@ -28,7 +28,7 @@ jobs:
        - "3.9"
        - "3.10"
        - "3.11"
    name: dependency checks ${{ matrix.python-version }}
    name: dependencies - Python ${{ matrix.python-version }}
    steps:
      - uses: actions/checkout@v4
.github/workflows/_integration_test.yml (vendored, 8 lines changed)
@@ -38,11 +38,6 @@ jobs:
      shell: bash
      run: poetry install --with test,test_integration

    - name: Install deps outside pyproject
      if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
      shell: bash
      run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"

    - name: 'Authenticate to Google Cloud'
      id: 'auth'
      uses: google-github-actions/auth@v2
@@ -61,9 +56,6 @@ jobs:
        GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
        GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
        EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
        NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
        PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
        PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
      run: |
        make integration_tests
.github/workflows/_lint.yml (vendored, 17 lines changed)
@@ -21,7 +21,6 @@ env:

jobs:
  build:
    name: "make lint #${{ matrix.python-version }}"
    runs-on: ubuntu-latest
    strategy:
      matrix:
@@ -80,13 +79,13 @@ jobs:
        poetry run pip install -e "$LANGCHAIN_LOCATION"

    - name: Get .mypy_cache to speed up mypy
      uses: actions/cache@v4
      uses: actions/cache@v3
      env:
        SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
      with:
        path: |
          ${{ env.WORKDIR }}/.mypy_cache
        key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
        key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}

    - name: Analysing the code with our lint
@@ -94,7 +93,7 @@ jobs:
      run: |
        make lint_package

    - name: Install unit test dependencies
    - name: Install test dependencies
      # Also installs dev/lint/test/typing dependencies, to ensure we have
      # type hints for as many of our libraries as possible.
      # This helps catch errors that require dependencies to be spotted, for example:
@@ -103,24 +102,18 @@ jobs:
      # If you change this configuration, make sure to change the `cache-key`
      # in the `poetry_setup` action above to stop using the old cache.
      # It doesn't matter how you change it, any change will cause a cache-bust.
      if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
      working-directory: ${{ inputs.working-directory }}
      run: |
        poetry install --with test
    - name: Install unit+integration test dependencies
      if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
      working-directory: ${{ inputs.working-directory }}
      run: |
        poetry install --with test,test_integration

    - name: Get .mypy_cache_test to speed up mypy
      uses: actions/cache@v4
      uses: actions/cache@v3
      env:
        SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
      with:
        path: |
          ${{ env.WORKDIR }}/.mypy_cache_test
        key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
        key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}

    - name: Analysing the code with our lint
      working-directory: ${{ inputs.working-directory }}
.github/workflows/_release.yml (vendored, 26 lines changed)
@@ -15,7 +15,7 @@ on:
        default: 'libs/langchain'

env:
  PYTHON_VERSION: "3.11"
  PYTHON_VERSION: "3.10"
  POETRY_VERSION: "1.7.1"

jobs:
@@ -171,37 +171,17 @@ jobs:
          MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
          TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
          AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
          AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
          AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
          AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
          NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
          GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
          GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
          EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
          NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
          PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
          PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
        run: make integration_tests
        working-directory: ${{ inputs.working-directory }}

      - name: Get minimum versions
        working-directory: ${{ inputs.working-directory }}
        id: min-version
        run: |
          poetry run pip install packaging
          min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml)"
          echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
          echo "min-versions=$min_versions"

      - name: Run unit tests with minimum dependency versions
        if: ${{ steps.min-version.outputs.min-versions != '' }}
        env:
          MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
        if: ${{ (inputs.working-directory == 'libs/langchain') || (inputs.working-directory == 'libs/community') || (inputs.working-directory == 'libs/experimental') }}
        run: |
          poetry run pip install $MIN_VERSIONS
          poetry run pip install -r _test_minimum_requirements.txt
          make tests
        working-directory: ${{ inputs.working-directory }}
.github/workflows/_test.yml (vendored, 2 lines changed)
@@ -28,7 +28,7 @@ jobs:
        - "3.9"
        - "3.10"
        - "3.11"
    name: "make test #${{ matrix.python-version }}"
    name: Python ${{ matrix.python-version }}
    steps:
      - uses: actions/checkout@v4
.github/workflows/check_diffs.yml (vendored, 3 lines changed)
@@ -1,5 +1,5 @@
---
name: CI
name: Check library diffs

on:
  push:
@@ -32,7 +32,6 @@ jobs:
    outputs:
      dirs-to-run: ${{ steps.set-matrix.outputs.dirs-to-run }}
  ci:
    name: cd ${{ matrix.working-directory }}
    needs: [ build ]
    strategy:
      matrix:
.github/workflows/codespell.yml (vendored, 4 lines changed)
@@ -1,5 +1,5 @@
---
name: CI / cd . / make spell_check
name: Codespell

on:
  push:
@@ -12,7 +12,7 @@ permissions:

jobs:
  codespell:
    name: (Check for spelling errors)
    name: Check for spelling errors
    runs-on: ubuntu-latest

    steps:
.github/workflows/doc_lint.yml (vendored, 4 lines changed)
@@ -1,5 +1,5 @@
---
name: CI / cd .
name: Docs, templates, cookbook lint

on:
  push:
@@ -15,7 +15,6 @@ on:

jobs:
  check:
    name: Check for "from langchain import x" imports
    runs-on: ubuntu-latest

    steps:
@@ -29,7 +28,6 @@ jobs:
      git grep 'from langchain import' {docs/docs,templates,cookbook} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0

  lint:
    name: "-"
    uses:
      ./.github/workflows/_lint.yml
    with:

@@ -7,4 +7,4 @@ ignore_words_list = (
    pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
)

print(f"::set-output name=ignore_words_list::{ignore_words_list}")  # noqa: T201
print(f"::set-output name=ignore_words_list::{ignore_words_list}")
.github/workflows/scheduled_test.yml (vendored, 5 lines changed)
@@ -54,11 +54,6 @@ jobs:
        echo "Running scheduled tests, installing dependencies with poetry..."
        poetry install --with=test_integration,test

    - name: Install deps outside pyproject
      if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
      shell: bash
      run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"

    - name: Run tests
      shell: bash
      env:
.github/workflows/templates_ci.yml (vendored, new file, 36 lines)
@@ -0,0 +1,36 @@
---
name: templates CI

on:
  push:
    branches: [ master ]
  pull_request:
    paths:
      - '.github/actions/poetry_setup/action.yml'
      - '.github/tools/**'
      - '.github/workflows/_lint.yml'
      - '.github/workflows/templates_ci.yml'
      - 'templates/**'
  workflow_dispatch:  # Allows triggering the workflow manually in the GitHub UI

# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  POETRY_VERSION: "1.7.1"
  WORKDIR: "templates"

jobs:
  lint:
    uses:
      ./.github/workflows/_lint.yml
    with:
      working-directory: templates
    secrets: inherit
.release-please-manifest.json (new file, 19 lines)
@@ -0,0 +1,19 @@
{
  "libs/core": "0.1.17",
  "libs/community": "0.0.16",
  "libs/langchain": "0.1.4",
  "libs/experimental": "0.0.49",
  "libs/cli": "0.0.21",
  "libs/partners/anthropic": "0.0.1.post1",
  "libs/partners/exa": "0.0.1",
  "libs/partners/google-genai": "0.0.6",
  "libs/partners/google-vertexai": "0.0.3",
  "libs/partners/mistralai": "0.0.3",
  "libs/partners/nomic": "0.0.1",
  "libs/partners/nvidia-ai-endpoints": "0.0.1",
  "libs/partners/nvidia-trt": "0.0.1rc0",
  "libs/partners/openai": "0.0.5",
  "libs/partners/pinecone": "0.0.1",
  "libs/partners/robocorp": "0.0.2",
  "libs/partners/together": "0.0.2.post1"
}
README.md

@@ -1,6 +1,6 @@
# 🦜️🔗 LangChain

⚡ Build context-aware reasoning applications ⚡
⚡ Building applications with LLMs through composability ⚡

[](https://github.com/langchain-ai/langchain/releases)
[](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
@@ -43,7 +43,6 @@ This framework consists of several parts.
- **[LangChain Templates](templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangGraph](https://python.langchain.com/docs/langgraph)**: LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.

The LangChain libraries themselves are made up of several different packages.
- **[`langchain-core`](libs/core)**: Base abstractions and LangChain Expression Language.
@@ -1,922 +0,0 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "rT1cmV4qCa2X"
   },
   "source": [
    "# Using Apache Kafka to route messages\n",
    "\n",
    "---\n",
    "\n",
    "\n",
    "\n",
    "This notebook shows you how to use LangChain's standard chat features while passing the chat messages back and forth via Apache Kafka.\n",
    "\n",
    "The goal is to simulate an architecture where the chat front end and the LLM are running as separate services that need to communicate with one another over an internal network.\n",
    "\n",
    "It's an alternative to the typical pattern of requesting a response from the model via a REST API (there's more info on why you would want to do this at the end of the notebook)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "UPYtfAR_9YxZ"
   },
   "source": [
    "### 1. Install the main dependencies\n",
    "\n",
    "Dependencies include:\n",
    "\n",
    "- The Quix Streams library for managing interactions with Apache Kafka (or Kafka-like tools such as Redpanda) in a \"Pandas-like\" way.\n",
    "- The LangChain library for managing interactions with Llama-2 and storing conversation state."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "ZX5tfKiy9cN-"
   },
   "outputs": [],
   "source": [
    "!pip install quixstreams==2.1.2a langchain==0.0.340 huggingface_hub==0.19.4 langchain-experimental==0.0.42 python-dotenv"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "losTSdTB9d9O"
   },
   "source": [
    "### 2. Build and install the llama-cpp-python library (with CUDA enabled so that we can take advantage of the Google Colab GPU)\n",
    "\n",
    "The `llama-cpp-python` library is a Python wrapper around the `llama-cpp` library which enables you to efficiently leverage just a CPU to run quantized LLMs.\n",
    "\n",
    "When you use the standard `pip install llama-cpp-python` command, you do not get GPU support by default. Generation can be very slow if you rely on just the CPU in Google Colab, so the following command adds an extra option to build and install\n",
    "`llama-cpp-python` with GPU support (make sure you have a GPU-enabled runtime selected in Google Colab)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "-JCQdl1G9tbl"
   },
   "outputs": [],
   "source": [
    "!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "5_vjVIAh9rLl"
   },
   "source": [
    "### 3. Download and set up Kafka and Zookeeper instances\n",
    "\n",
    "Download the Kafka binaries from the Apache website and start the servers as daemons. We'll use the default configurations (provided by Apache Kafka) for spinning up the instances."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "id": "zFz7czGRW5Wr"
   },
   "outputs": [],
   "source": [
    "!curl -sSOL https://dlcdn.apache.org/kafka/3.6.1/kafka_2.13-3.6.1.tgz\n",
    "!tar -xzf kafka_2.13-3.6.1.tgz"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Uf7NR_UZ9wye"
   },
   "outputs": [],
   "source": [
    "!./kafka_2.13-3.6.1/bin/zookeeper-server-start.sh -daemon ./kafka_2.13-3.6.1/config/zookeeper.properties\n",
    "!./kafka_2.13-3.6.1/bin/kafka-server-start.sh -daemon ./kafka_2.13-3.6.1/config/server.properties\n",
    "!echo \"Waiting for 10 secs until kafka and zookeeper services are up and running\"\n",
    "!sleep 10"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "H3SafFuS94p1"
   },
   "source": [
    "### 4. Check that the Kafka daemons are running\n",
    "\n",
    "Show the running processes and filter for Java processes (you should see two—one for each server)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "CZDC2lQP99yp"
   },
   "outputs": [],
   "source": [
    "!ps aux | grep -E '[j]ava'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Snoxmjb5-V37"
   },
   "source": [
    "### 5. Import the required dependencies and initialize required variables\n",
    "\n",
    "Import the Quix Streams library for interacting with Kafka, and the necessary LangChain components for running a `ConversationChain`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "id": "plR9e_MF-XL5"
   },
   "outputs": [],
   "source": [
    "# Import utility libraries\n",
    "import json\n",
    "import random\n",
    "import re\n",
    "import time\n",
    "import uuid\n",
    "from os import environ\n",
    "from pathlib import Path\n",
    "from random import choice, randint, random\n",
    "\n",
    "from dotenv import load_dotenv\n",
    "\n",
    "# Import a Hugging Face utility to download models directly from Hugging Face hub:\n",
    "from huggingface_hub import hf_hub_download\n",
    "from langchain.chains import ConversationChain\n",
    "\n",
    "# Import LangChain modules for managing prompts and conversation chains:\n",
    "from langchain.llms import LlamaCpp\n",
    "from langchain.memory import ConversationTokenBufferMemory\n",
    "from langchain.prompts import PromptTemplate, load_prompt\n",
    "from langchain.schema import SystemMessage\n",
    "from langchain_experimental.chat_models import Llama2Chat\n",
    "from quixstreams import Application, State, message_key\n",
    "\n",
    "# Import Quix dependencies\n",
    "from quixstreams.kafka import Producer\n",
    "\n",
    "# Initialize global variables.\n",
    "AGENT_ROLE = \"AI\"\n",
    "chat_id = \"\"\n",
    "\n",
    "# Set the current role to the role constant and initialize variables for supplementary customer metadata:\n",
    "role = AGENT_ROLE"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HgJjJ9aZ-liy"
   },
   "source": [
    "### 6. Download the \"llama-2-7b-chat.Q4_K_M.gguf\" model\n",
    "\n",
    "Download the quantized Llama-2 7B model from Hugging Face, which we will use as a local LLM (rather than relying on REST API calls to an external service)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 67,
     "referenced_widgets": [
      "969343cdbe604a26926679bbf8bd2dda",
      "d8b8370c9b514715be7618bfe6832844",
      "0def954cca89466b8408fadaf3b82e64",
      "462482accc664729980562e208ceb179",
      "80d842f73c564dc7b7cc316c763e2633",
      "fa055d9f2a9d4a789e9cf3c89e0214e5",
      "30ecca964a394109ac2ad757e3aec6c0",
      "fb6478ce2dac489bb633b23ba0953c5c",
      "734b0f5da9fc4307a95bab48cdbb5d89",
      "b32f3a86a74741348511f4e136744ac8",
      "e409071bff5a4e2d9bf0e9f5cc42231b"
     ]
    },
    "id": "Qwu4YoSA-503",
    "outputId": "f956976c-7485-415b-ac93-4336ade31964"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The model path does not exist in state. Downloading model...\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "969343cdbe604a26926679bbf8bd2dda",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "llama-2-7b-chat.Q4_K_M.gguf:   0%|          | 0.00/4.08G [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "model_name = \"llama-2-7b-chat.Q4_K_M.gguf\"\n",
    "model_path = f\"./state/{model_name}\"\n",
    "\n",
    "if not Path(model_path).exists():\n",
    "    print(\"The model path does not exist in state. Downloading model...\")\n",
    "    hf_hub_download(\"TheBloke/Llama-2-7b-Chat-GGUF\", model_name, local_dir=\"state\")\n",
    "else:\n",
    "    print(\"Loading model from state...\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "6AN6TXsF-8wx"
   },
   "source": [
    "### 7. Load the model and initialize conversational memory\n",
    "\n",
    "Load Llama 2 and set the conversation buffer to 300 tokens using `ConversationTokenBufferMemory`. This value was used for running Llama in a CPU-only container, so you can raise it if running in Google Colab. It prevents the container that is hosting the model from running out of memory.\n",
    "\n",
    "Here, we're overriding the default system persona so that the chatbot has the personality of Marvin the Paranoid Android from The Hitchhiker's Guide to the Galaxy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "7zLO3Jx3_Kkg"
   },
   "outputs": [],
   "source": [
    "# Load the model with the appropriate parameters:\n",
    "llm = LlamaCpp(\n",
    "    model_path=model_path,\n",
    "    max_tokens=250,\n",
    "    top_p=0.95,\n",
    "    top_k=150,\n",
    "    temperature=0.7,\n",
    "    repeat_penalty=1.2,\n",
    "    n_ctx=2048,\n",
    "    streaming=False,\n",
    "    n_gpu_layers=-1,\n",
    ")\n",
    "\n",
    "model = Llama2Chat(\n",
    "    llm=llm,\n",
    "    system_message=SystemMessage(\n",
    "        content=\"You are a very bored robot with the personality of Marvin the Paranoid Android from The Hitchhiker's Guide to the Galaxy.\"\n",
    "    ),\n",
    ")\n",
    "\n",
    "# Defines how much of the conversation history to give to the model\n",
    "# during each exchange (300 tokens, or a little over 300 words).\n",
    "# It automatically prunes the oldest messages from the conversation history that fall outside the token range.\n",
    "memory = ConversationTokenBufferMemory(\n",
    "    llm=llm,\n",
    "    max_token_limit=300,\n",
    "    ai_prefix=\"AGENT\",\n",
    "    human_prefix=\"HUMAN\",\n",
    "    return_messages=True,\n",
    ")\n",
    "\n",
    "\n",
    "# Define a custom prompt\n",
    "prompt_template = PromptTemplate(\n",
    "    input_variables=[\"history\", \"input\"],\n",
    "    template=\"\"\"\n",
    "    The following text is the history of a chat between you and a humble human who needs your wisdom.\n",
    "    Please reply to the human's most recent message.\n",
    "    Current conversation:\\n{history}\\nHUMAN: {input}\\nANDROID:\n",
    "    \"\"\",\n",
    ")\n",
    "\n",
    "\n",
    "chain = ConversationChain(llm=model, prompt=prompt_template, memory=memory)\n",
    "\n",
    "print(\"--------------------------------------------\")\n",
    "print(f\"Prompt={chain.prompt}\")\n",
    "print(\"--------------------------------------------\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "m4ZeJ9mG_PEA"
   },
   "source": [
    "### 8. Initialize the chat conversation with the chatbot\n",
    "\n",
    "We configure the chatbot to initialize the conversation by sending a fixed greeting to a \"chat\" Kafka topic. The \"chat\" topic gets automatically created when we send the first message."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "KYyo5TnV_YC3"
   },
   "outputs": [],
   "source": [
    "def chat_init():\n",
    "    chat_id = str(\n",
    "        uuid.uuid4()\n",
    "    )  # Give the conversation an ID for effective message keying\n",
    "    print(\"======================================\")\n",
    "    print(f\"Generated CHAT_ID = {chat_id}\")\n",
    "    print(\"======================================\")\n",
    "\n",
    "    # Use a standard fixed greeting to kick off the conversation\n",
    "    greet = \"Hello, my name is Marvin. What do you want?\"\n",
    "\n",
    "    # Initialize a Kafka Producer using the chat ID as the message key\n",
    "    with Producer(\n",
    "        broker_address=\"127.0.0.1:9092\",\n",
    "        extra_config={\"allow.auto.create.topics\": \"true\"},\n",
    "    ) as producer:\n",
    "        value = {\n",
    "            \"uuid\": chat_id,\n",
    "            \"role\": role,\n",
    "            \"text\": greet,\n",
    "            \"conversation_id\": chat_id,\n",
    "            \"Timestamp\": time.time_ns(),\n",
    "        }\n",
    "        print(f\"Producing value {value}\")\n",
    "        producer.produce(\n",
    "            topic=\"chat\",\n",
    "            headers=[(\"uuid\", str(uuid.uuid4()))],  # a dict is also allowed here\n",
    "            key=chat_id,\n",
    "            value=json.dumps(value),  # needs to be a string\n",
    "        )\n",
    "\n",
    "    print(\"Started chat\")\n",
    "    print(\"--------------------------------------------\")\n",
    "    print(value)\n",
    "    print(\"--------------------------------------------\")\n",
    "\n",
    "\n",
    "chat_init()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "gArPPx2f_bgf"
   },
   "source": [
    "### 9. Initialize the reply function\n",
    "\n",
    "This function defines how the chatbot should reply to incoming messages. Instead of sending a fixed message like the previous cell, we generate a reply using Llama-2 and send that reply back to the \"chat\" Kafka topic."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "id": "yN5t71hY_hgn"
   },
   "outputs": [],
   "source": [
    "def reply(row: dict, state: State):\n",
    "    print(\"-------------------------------\")\n",
    "    print(\"Received:\")\n",
    "    print(row)\n",
    "    print(\"-------------------------------\")\n",
    "    print(f\"Thinking about the reply to: {row['text']}...\")\n",
    "\n",
    "    msg = chain.run(row[\"text\"])\n",
    "    print(f\"{role.upper()} replying with: {msg}\\n\")\n",
    "\n",
    "    # Replace the previous role and text values of the row so that it can be sent back to Kafka\n",
    "    # as a new message containing the agent's role and reply\n",
    "    row[\"role\"] = role\n",
    "    row[\"text\"] = msg\n",
    "\n",
    "    return row"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HZHwmIR0_kFY"
   },
   "source": [
    "### 10. Check the Kafka topic for new human messages and have the model generate a reply\n",
    "\n",
    "If you are running this cell for the first time, run it and wait until you see Marvin's greeting ('Hello, my name is Marvin...') in the console output. Stop the cell manually and proceed to the next cell, where you'll be prompted for your reply.\n",
    "\n",
    "Once you have typed in your message, come back to this cell. Your reply is also sent to the same \"chat\" topic. The Kafka consumer checks for new messages and filters out messages that originate from the chatbot itself, leaving only the latest human messages.\n",
    "\n",
    "Once a new human message is detected, the reply function is triggered.\n",
    "\n",
    "\n",
    "\n",
    "_STOP THIS CELL MANUALLY WHEN YOU RECEIVE A REPLY FROM THE LLM IN THE OUTPUT_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "-adXc3eQ_qwI"
   },
   "outputs": [],
   "source": [
    "# Define your application and settings\n",
    "app = Application(\n",
    "    broker_address=\"127.0.0.1:9092\",\n",
    "    consumer_group=\"aichat\",\n",
    "    auto_offset_reset=\"earliest\",\n",
    "    consumer_extra_config={\"allow.auto.create.topics\": \"true\"},\n",
    ")\n",
    "\n",
    "# Define an input topic with JSON deserializer\n",
    "input_topic = app.topic(\"chat\", value_deserializer=\"json\")\n",
    "# Define an output topic with JSON serializer\n",
    "output_topic = app.topic(\"chat\", value_serializer=\"json\")\n",
    "# Initialize a streaming dataframe based on the stream of messages from the input topic:\n",
    "sdf = app.dataframe(topic=input_topic)\n",
    "\n",
    "# Print a notice for every update that arrives on the topic\n",
    "sdf = sdf.update(\n",
    "    lambda val: print(\n",
    "        f\"Received update: {val}\\n\\nSTOP THIS CELL MANUALLY TO HAVE THE LLM REPLY OR ENTER YOUR OWN FOLLOWUP RESPONSE\"\n",
    "    )\n",
    ")\n",
    "\n",
    "# Filter the SDF to include only incoming rows whose role doesn't match the bot's current role,\n",
    "# so that it doesn't reply to its own messages\n",
    "sdf = sdf[sdf[\"role\"] != role]\n",
    "\n",
    "# Trigger the reply function for any new messages (rows) detected in the filtered SDF\n",
    "sdf = sdf.apply(reply, stateful=True)\n",
    "\n",
    "# Check the SDF again and filter out any empty rows\n",
    "sdf = sdf[sdf.apply(lambda row: row is not None)]\n",
    "\n",
    "# Update the timestamp column to the current time in nanoseconds\n",
    "sdf[\"Timestamp\"] = sdf[\"Timestamp\"].apply(lambda row: time.time_ns())\n",
    "\n",
    "# Publish the processed SDF to the Kafka topic specified by the output_topic object.\n",
    "sdf = sdf.to_topic(output_topic)\n",
    "\n",
    "app.run(sdf)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "EwXYrmWD_0CX"
   },
   "source": [
    "\n",
    "### 11. Enter a human message\n",
    "\n",
    "Run this cell to enter the message that you want to send to the model. It uses another Kafka producer to send your text to the \"chat\" Kafka topic for the model to pick up (this requires running the previous cell again)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "6sxOPxSP_3iu"
   },
   "outputs": [],
   "source": [
    "chat_input = input(\"Please enter your reply: \")\n",
    "myreply = chat_input\n",
    "\n",
    "msgvalue = {\n",
    "    \"uuid\": chat_id,  # leave empty for now\n",
    "    \"role\": \"human\",\n",
    "    \"text\": myreply,\n",
    "    \"conversation_id\": chat_id,\n",
    "    \"Timestamp\": time.time_ns(),\n",
    "}\n",
    "\n",
    "with Producer(\n",
    "    broker_address=\"127.0.0.1:9092\",\n",
    "    extra_config={\"allow.auto.create.topics\": \"true\"},\n",
    ") as producer:\n",
    "    value = msgvalue\n",
    "    producer.produce(\n",
    "        topic=\"chat\",\n",
    "        headers=[(\"uuid\", str(uuid.uuid4()))],  # a dict is also allowed here\n",
    "        key=chat_id,  # leave empty for now\n",
    "        value=json.dumps(value),  # needs to be a string\n",
    "    )\n",
    "\n",
    "print(\"Replied to chatbot with message: \")\n",
    "print(\"--------------------------------------------\")\n",
    "print(value)\n",
    "print(\"--------------------------------------------\")\n",
    "print(\"\\n\\nRUN THE PREVIOUS CELL TO HAVE THE CHATBOT GENERATE A REPLY\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "cSx3s7TBBegg"
   },
   "source": [
    "### Why route chat messages through Kafka?\n",
    "\n",
    "It's easier to interact with the LLM directly using LangChain's built-in conversation management features. Plus, you can also use a REST API to generate a response from an externally hosted model. So why go to the trouble of using Apache Kafka?\n",
    "\n",
    "There are a few reasons, such as:\n",
    "\n",
    " * **Integration**: Many enterprises want to run their own LLMs so that they can keep their data in-house. This requires integrating LLM-powered components into existing architectures that might already be decoupled using some kind of message bus.\n",
    "\n",
    " * **Scalability**: Apache Kafka is designed with parallel processing in mind, so many teams prefer to use it to more effectively distribute work to available workers (in this case the \"worker\" is a container running an LLM).\n",
    "\n",
    " * **Durability**: Kafka is designed to allow services to pick up where another service left off in the case where that service experienced a memory issue or went offline. This prevents data loss in highly complex, distributed architectures where multiple systems are communicating with one another (LLMs being just one of many interdependent systems that also include vector databases and traditional databases).\n",
    "\n",
    "For more background on why event streaming is a good fit for Gen AI application architecture, see Kai Waehner's article [\"Apache Kafka + Vector Database + LLM = Real-Time GenAI\"](https://www.kai-waehner.de/blog/2023/11/08/apache-kafka-flink-vector-database-llm-real-time-genai/)."
   ]
  }
 ],
"metadata": {
|
||||
"accelerator": "GPU",
|
||||
"colab": {
|
||||
"gpuType": "T4",
|
||||
"provenance": []
|
||||
},
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"name": "python"
|
||||
},
|
||||
"widgets": {
|
||||
"application/vnd.jupyter.widget-state+json": {
|
||||
"0def954cca89466b8408fadaf3b82e64": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "1.5.0",
|
||||
"model_name": "FloatProgressModel",
|
||||
"state": {
|
||||
"_dom_classes": [],
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "1.5.0",
|
||||
"_model_name": "FloatProgressModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/controls",
|
||||
"_view_module_version": "1.5.0",
|
||||
"_view_name": "ProgressView",
|
||||
"bar_style": "success",
|
||||
"description": "",
|
||||
"description_tooltip": null,
|
||||
"layout": "IPY_MODEL_fb6478ce2dac489bb633b23ba0953c5c",
|
||||
"max": 4081004224,
|
||||
"min": 0,
|
||||
"orientation": "horizontal",
|
||||
"style": "IPY_MODEL_734b0f5da9fc4307a95bab48cdbb5d89",
|
||||
"value": 4081004224
|
||||
}
|
||||
},
|
||||
"30ecca964a394109ac2ad757e3aec6c0": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "1.5.0",
|
||||
"model_name": "DescriptionStyleModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "1.5.0",
|
||||
"_model_name": "DescriptionStyleModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "1.2.0",
|
||||
"_view_name": "StyleView",
|
||||
"description_width": ""
|
||||
}
|
||||
},
|
||||
"462482accc664729980562e208ceb179": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "1.5.0",
|
||||
"model_name": "HTMLModel",
|
||||
"state": {
|
||||
"_dom_classes": [],
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "1.5.0",
|
||||
"_model_name": "HTMLModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/controls",
|
||||
"_view_module_version": "1.5.0",
|
||||
"_view_name": "HTMLView",
|
||||
"description": "",
|
||||
"description_tooltip": null,
|
||||
"layout": "IPY_MODEL_b32f3a86a74741348511f4e136744ac8",
|
||||
"placeholder": "",
|
||||
"style": "IPY_MODEL_e409071bff5a4e2d9bf0e9f5cc42231b",
|
||||
"value": " 4.08G/4.08G [00:33<00:00, 184MB/s]"
|
||||
}
|
||||
},
|
||||
"734b0f5da9fc4307a95bab48cdbb5d89": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "1.5.0",
|
||||
"model_name": "ProgressStyleModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "1.5.0",
|
||||
"_model_name": "ProgressStyleModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "1.2.0",
|
||||
"_view_name": "StyleView",
|
||||
"bar_color": null,
|
||||
"description_width": ""
|
||||
}
|
||||
},
|
||||
"80d842f73c564dc7b7cc316c763e2633": {
|
||||
"model_module": "@jupyter-widgets/base",
|
||||
"model_module_version": "1.2.0",
|
||||
"model_name": "LayoutModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/base",
|
||||
"_model_module_version": "1.2.0",
|
||||
"_model_name": "LayoutModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "1.2.0",
|
||||
"_view_name": "LayoutView",
|
||||
"align_content": null,
|
||||
"align_items": null,
|
||||
"align_self": null,
|
||||
"border": null,
|
||||
"bottom": null,
|
||||
"display": null,
|
||||
"flex": null,
|
||||
"flex_flow": null,
|
||||
"grid_area": null,
|
||||
"grid_auto_columns": null,
|
||||
"grid_auto_flow": null,
|
||||
"grid_auto_rows": null,
|
||||
"grid_column": null,
|
||||
"grid_gap": null,
|
||||
"grid_row": null,
|
||||
"grid_template_areas": null,
|
||||
"grid_template_columns": null,
|
||||
"grid_template_rows": null,
|
||||
"height": null,
|
||||
"justify_content": null,
|
||||
"justify_items": null,
|
||||
"left": null,
|
||||
"margin": null,
|
||||
"max_height": null,
|
||||
"max_width": null,
|
||||
"min_height": null,
|
||||
"min_width": null,
|
||||
"object_fit": null,
|
||||
"object_position": null,
|
||||
"order": null,
|
||||
"overflow": null,
|
||||
"overflow_x": null,
|
||||
"overflow_y": null,
|
||||
"padding": null,
|
||||
"right": null,
|
||||
"top": null,
|
||||
"visibility": null,
|
||||
"width": null
|
||||
}
|
||||
},
|
||||
"969343cdbe604a26926679bbf8bd2dda": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "1.5.0",
|
||||
"model_name": "HBoxModel",
|
||||
"state": {
|
||||
"_dom_classes": [],
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "1.5.0",
|
||||
"_model_name": "HBoxModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/controls",
|
||||
"_view_module_version": "1.5.0",
|
||||
"_view_name": "HBoxView",
|
||||
"box_style": "",
|
||||
"children": [
|
||||
"IPY_MODEL_d8b8370c9b514715be7618bfe6832844",
|
||||
"IPY_MODEL_0def954cca89466b8408fadaf3b82e64",
|
||||
"IPY_MODEL_462482accc664729980562e208ceb179"
|
||||
],
|
||||
"layout": "IPY_MODEL_80d842f73c564dc7b7cc316c763e2633"
|
||||
}
|
||||
},
|
||||
"b32f3a86a74741348511f4e136744ac8": {
|
||||
"model_module": "@jupyter-widgets/base",
|
||||
"model_module_version": "1.2.0",
|
||||
"model_name": "LayoutModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/base",
|
||||
"_model_module_version": "1.2.0",
|
||||
"_model_name": "LayoutModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "1.2.0",
|
||||
"_view_name": "LayoutView",
|
||||
"align_content": null,
|
||||
"align_items": null,
|
||||
"align_self": null,
|
||||
"border": null,
|
||||
"bottom": null,
|
||||
"display": null,
|
||||
"flex": null,
|
||||
"flex_flow": null,
|
||||
"grid_area": null,
|
||||
"grid_auto_columns": null,
|
||||
"grid_auto_flow": null,
|
||||
"grid_auto_rows": null,
|
||||
"grid_column": null,
|
||||
"grid_gap": null,
|
||||
"grid_row": null,
|
||||
"grid_template_areas": null,
|
||||
"grid_template_columns": null,
|
||||
"grid_template_rows": null,
|
||||
"height": null,
|
||||
"justify_content": null,
|
||||
"justify_items": null,
|
||||
"left": null,
|
||||
"margin": null,
|
||||
"max_height": null,
|
||||
"max_width": null,
|
||||
"min_height": null,
|
||||
"min_width": null,
|
||||
"object_fit": null,
|
||||
"object_position": null,
|
||||
"order": null,
|
||||
"overflow": null,
|
||||
"overflow_x": null,
|
||||
"overflow_y": null,
|
||||
"padding": null,
|
||||
"right": null,
|
||||
"top": null,
|
||||
"visibility": null,
|
||||
"width": null
|
||||
}
|
||||
},
|
||||
"d8b8370c9b514715be7618bfe6832844": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "1.5.0",
|
||||
"model_name": "HTMLModel",
|
||||
"state": {
|
||||
"_dom_classes": [],
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "1.5.0",
|
||||
"_model_name": "HTMLModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/controls",
|
||||
"_view_module_version": "1.5.0",
|
||||
"_view_name": "HTMLView",
|
||||
"description": "",
|
||||
"description_tooltip": null,
|
||||
"layout": "IPY_MODEL_fa055d9f2a9d4a789e9cf3c89e0214e5",
|
||||
"placeholder": "",
|
||||
"style": "IPY_MODEL_30ecca964a394109ac2ad757e3aec6c0",
|
||||
"value": "llama-2-7b-chat.Q4_K_M.gguf: 100%"
|
||||
}
|
||||
},
|
||||
"e409071bff5a4e2d9bf0e9f5cc42231b": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "1.5.0",
|
||||
"model_name": "DescriptionStyleModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "1.5.0",
|
||||
"_model_name": "DescriptionStyleModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "1.2.0",
|
||||
"_view_name": "StyleView",
|
||||
"description_width": ""
|
||||
}
|
||||
},
|
||||
"fa055d9f2a9d4a789e9cf3c89e0214e5": {
|
||||
"model_module": "@jupyter-widgets/base",
|
||||
"model_module_version": "1.2.0",
|
||||
"model_name": "LayoutModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/base",
|
||||
"_model_module_version": "1.2.0",
|
||||
"_model_name": "LayoutModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "1.2.0",
|
||||
"_view_name": "LayoutView",
|
||||
"align_content": null,
|
||||
"align_items": null,
|
||||
"align_self": null,
|
||||
"border": null,
|
||||
"bottom": null,
|
||||
"display": null,
|
||||
"flex": null,
|
||||
"flex_flow": null,
|
||||
"grid_area": null,
|
||||
"grid_auto_columns": null,
|
||||
"grid_auto_flow": null,
|
||||
"grid_auto_rows": null,
|
||||
"grid_column": null,
|
||||
"grid_gap": null,
|
||||
"grid_row": null,
|
||||
"grid_template_areas": null,
|
||||
"grid_template_columns": null,
|
||||
"grid_template_rows": null,
|
||||
"height": null,
|
||||
"justify_content": null,
|
||||
"justify_items": null,
|
||||
"left": null,
|
||||
"margin": null,
|
||||
"max_height": null,
|
||||
"max_width": null,
|
||||
"min_height": null,
|
||||
"min_width": null,
|
||||
"object_fit": null,
|
||||
"object_position": null,
|
||||
"order": null,
|
||||
"overflow": null,
|
||||
"overflow_x": null,
|
||||
"overflow_y": null,
|
||||
"padding": null,
|
||||
"right": null,
|
||||
"top": null,
|
||||
"visibility": null,
|
||||
"width": null
|
||||
}
|
||||
},
|
||||
"fb6478ce2dac489bb633b23ba0953c5c": {
|
||||
"model_module": "@jupyter-widgets/base",
|
||||
"model_module_version": "1.2.0",
|
||||
"model_name": "LayoutModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/base",
|
||||
"_model_module_version": "1.2.0",
|
||||
"_model_name": "LayoutModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "1.2.0",
|
||||
"_view_name": "LayoutView",
|
||||
"align_content": null,
|
||||
"align_items": null,
|
||||
"align_self": null,
|
||||
"border": null,
|
||||
"bottom": null,
|
||||
"display": null,
|
||||
"flex": null,
|
||||
"flex_flow": null,
|
||||
"grid_area": null,
|
||||
"grid_auto_columns": null,
|
||||
"grid_auto_flow": null,
|
||||
"grid_auto_rows": null,
|
||||
"grid_column": null,
|
||||
"grid_gap": null,
|
||||
"grid_row": null,
|
||||
"grid_template_areas": null,
|
||||
"grid_template_columns": null,
|
||||
"grid_template_rows": null,
|
||||
"height": null,
|
||||
"justify_content": null,
|
||||
"justify_items": null,
|
||||
"left": null,
|
||||
"margin": null,
|
||||
"max_height": null,
|
||||
"max_width": null,
|
||||
"min_height": null,
|
||||
"min_width": null,
|
||||
"object_fit": null,
|
||||
"object_position": null,
|
||||
"order": null,
|
||||
"overflow": null,
|
||||
"overflow_x": null,
|
||||
"overflow_y": null,
|
||||
"padding": null,
|
||||
"right": null,
|
||||
"top": null,
|
||||
"visibility": null,
|
||||
"width": null
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 0
|
||||
}
|
||||
File diff suppressed because one or more lines are too long
@@ -1,423 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a38e5d2d-7587-4192-90f2-b58e6c62f08c",
"metadata": {},
"source": [
"# Self Discover\n",
"\n",
"An implementation of the [Self-Discover paper](https://arxiv.org/pdf/2402.03620.pdf).\n",
"\n",
"Based on [this implementation from @catid](https://github.com/catid/self-discover/tree/main?tab=readme-ov-file)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a18d8f24-5d9a-45c5-9739-6f3c4ed6c9c9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9f554045-6e79-42d3-be4b-835bbbd0b78c",
"metadata": {},
"outputs": [],
"source": [
"model = ChatOpenAI(temperature=0, model=\"gpt-4-turbo-preview\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9e9925aa-638a-4862-823e-9803402b8f82",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"from langchain_core.prompts import PromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "c4cc5c8c-f6a5-42c7-9ed5-780d79b3b29a",
"metadata": {},
"outputs": [],
"source": [
"select_prompt = hub.pull(\"hwchase17/self-discovery-select\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a5b53d29-f5b6-4f39-af97-bb6b133e1d18",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Select several reasoning modules that are crucial to utilize in order to solve the given task:\n",
"\n",
"All reasoning module descriptions:\n",
"\u001b[33;1m\u001b[1;3m{reasoning_modules}\u001b[0m\n",
"\n",
"Task: \u001b[33;1m\u001b[1;3m{task_description}\u001b[0m\n",
"\n",
"Select several modules are crucial for solving the task above:\n",
"\n"
]
}
],
"source": [
"select_prompt.pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "26eaa6bc-5202-4b22-9522-33f227c8eb55",
"metadata": {},
"outputs": [],
"source": [
"adapt_prompt = hub.pull(\"hwchase17/self-discovery-adapt\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "dc30afb9-180d-417b-9935-f7ef166710b8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Rephrase and specify each reasoning module so that it better helps solving the task:\n",
"\n",
"SELECTED module descriptions:\n",
"\u001b[33;1m\u001b[1;3m{selected_modules}\u001b[0m\n",
"\n",
"Task: \u001b[33;1m\u001b[1;3m{task_description}\u001b[0m\n",
"\n",
"Adapt each reasoning module description to better solve the task:\n",
"\n"
]
}
],
"source": [
"adapt_prompt.pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a93253a9-8f50-49dd-8815-c3927bae1905",
"metadata": {},
"outputs": [],
"source": [
"structured_prompt = hub.pull(\"hwchase17/self-discovery-structure\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8ea8dd78-4285-400b-83d2-c4a241903a79",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Operationalize the reasoning modules into a step-by-step reasoning plan in JSON format:\n",
"\n",
"Here's an example:\n",
"\n",
"Example task:\n",
"\n",
"If you follow these instructions, do you return to the starting point? Always face forward. Take 1 step backward. Take 9 steps left. Take 2 steps backward. Take 6 steps forward. Take 4 steps forward. Take 4 steps backward. Take 3 steps right.\n",
"\n",
"Example reasoning structure:\n",
"\n",
"{\n",
" \"Position after instruction 1\":\n",
" \"Position after instruction 2\":\n",
" \"Position after instruction n\":\n",
" \"Is final position the same as starting position\":\n",
"}\n",
"\n",
"Adapted module description:\n",
"\u001b[33;1m\u001b[1;3m{adapted_modules}\u001b[0m\n",
"\n",
"Task: \u001b[33;1m\u001b[1;3m{task_description}\u001b[0m\n",
"\n",
"Implement a reasoning structure for solvers to follow step-by-step and arrive at correct answer.\n",
"\n",
"Note: do NOT actually arrive at a conclusion in this pass. Your job is to generate a PLAN so that in the future you can fill it out and arrive at the correct conclusion for tasks like this\n"
]
}
],
"source": [
"structured_prompt.pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f3d4d79d-f414-4588-b476-4a35b3ba6fbf",
"metadata": {},
"outputs": [],
"source": [
"reasoning_prompt = hub.pull(\"hwchase17/self-discovery-reasoning\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "23d1e32e-d12e-454a-8484-c08e250e3262",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Follow the step-by-step reasoning plan in JSON to correctly solve the task. Fill in the values following the keys by reasoning specifically about the task given. Do not simply rephrase the keys.\n",
" \n",
"Reasoning Structure:\n",
"\u001b[33;1m\u001b[1;3m{reasoning_structure}\u001b[0m\n",
"\n",
"Task: \u001b[33;1m\u001b[1;3m{task_description}\u001b[0m\n"
]
}
],
"source": [
"reasoning_prompt.pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "7b9af01d-da28-4785-b069-efea61905cfa",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"PromptTemplate(input_variables=['reasoning_structure', 'task_description'], template='Follow the step-by-step reasoning plan in JSON to correctly solve the task. Fill in the values following the keys by reasoning specifically about the task given. Do not simply rephrase the keys.\\n \\nReasoning Structure:\\n{reasoning_structure}\\n\\nTask: {task_description}')"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"reasoning_prompt"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "399bf160-e257-429f-b27e-66d4063f195f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "5c3bd203-7dc1-457e-813f-283aaf059ec0",
"metadata": {},
"outputs": [],
"source": [
"select_chain = select_prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "86420da0-7cc2-4659-853e-9c3ef808e47c",
"metadata": {},
"outputs": [],
"source": [
"adapt_chain = adapt_prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "270a3905-58a3-4650-96ca-e8254040285f",
"metadata": {},
"outputs": [],
"source": [
"structure_chain = structured_prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "55b486cc-36be-497e-9eba-9c8dc228f2d1",
"metadata": {},
"outputs": [],
"source": [
"reasoning_chain = reasoning_prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "92d8d484-055b-48a8-98bc-e7d40c12db2e",
"metadata": {},
"outputs": [],
"source": [
"overall_chain = (\n",
" RunnablePassthrough.assign(selected_modules=select_chain)\n",
" .assign(adapted_modules=adapt_chain)\n",
" .assign(reasoning_structure=structure_chain)\n",
" .assign(answer=reasoning_chain)\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "29fe385b-cf5d-4581-80e7-55462f5628bb",
"metadata": {},
"outputs": [],
"source": [
"reasoning_modules = [\n",
" \"1. How could I devise an experiment to help solve that problem?\",\n",
" \"2. Make a list of ideas for solving this problem, and apply them one by one to the problem to see if any progress can be made.\",\n",
" # \"3. How could I measure progress on this problem?\",\n",
" \"4. How can I simplify the problem so that it is easier to solve?\",\n",
" \"5. What are the key assumptions underlying this problem?\",\n",
" \"6. What are the potential risks and drawbacks of each solution?\",\n",
" \"7. What are the alternative perspectives or viewpoints on this problem?\",\n",
" \"8. What are the long-term implications of this problem and its solutions?\",\n",
" \"9. How can I break down this problem into smaller, more manageable parts?\",\n",
" \"10. Critical Thinking: This style involves analyzing the problem from different perspectives, questioning assumptions, and evaluating the evidence or information available. It focuses on logical reasoning, evidence-based decision-making, and identifying potential biases or flaws in thinking.\",\n",
" \"11. Try creative thinking, generate innovative and out-of-the-box ideas to solve the problem. Explore unconventional solutions, thinking beyond traditional boundaries, and encouraging imagination and originality.\",\n",
" # \"12. Seek input and collaboration from others to solve the problem. Emphasize teamwork, open communication, and leveraging the diverse perspectives and expertise of a group to come up with effective solutions.\",\n",
" \"13. Use systems thinking: Consider the problem as part of a larger system and understanding the interconnectedness of various elements. Focuses on identifying the underlying causes, feedback loops, and interdependencies that influence the problem, and developing holistic solutions that address the system as a whole.\",\n",
" \"14. Use Risk Analysis: Evaluate potential risks, uncertainties, and tradeoffs associated with different solutions or approaches to a problem. Emphasize assessing the potential consequences and likelihood of success or failure, and making informed decisions based on a balanced analysis of risks and benefits.\",\n",
" # \"15. Use Reflective Thinking: Step back from the problem, take the time for introspection and self-reflection. Examine personal biases, assumptions, and mental models that may influence problem-solving, and being open to learning from past experiences to improve future approaches.\",\n",
" \"16. What is the core issue or problem that needs to be addressed?\",\n",
" \"17. What are the underlying causes or factors contributing to the problem?\",\n",
" \"18. Are there any potential solutions or strategies that have been tried before? If yes, what were the outcomes and lessons learned?\",\n",
" \"19. What are the potential obstacles or challenges that might arise in solving this problem?\",\n",
" \"20. Are there any relevant data or information that can provide insights into the problem? If yes, what data sources are available, and how can they be analyzed?\",\n",
" \"21. Are there any stakeholders or individuals who are directly affected by the problem? What are their perspectives and needs?\",\n",
" \"22. What resources (financial, human, technological, etc.) are needed to tackle the problem effectively?\",\n",
" \"23. How can progress or success in solving the problem be measured or evaluated?\",\n",
" \"24. What indicators or metrics can be used?\",\n",
" \"25. Is the problem a technical or practical one that requires a specific expertise or skill set? Or is it more of a conceptual or theoretical problem?\",\n",
" \"26. Does the problem involve a physical constraint, such as limited resources, infrastructure, or space?\",\n",
" \"27. Is the problem related to human behavior, such as a social, cultural, or psychological issue?\",\n",
" \"28. Does the problem involve decision-making or planning, where choices need to be made under uncertainty or with competing objectives?\",\n",
" \"29. Is the problem an analytical one that requires data analysis, modeling, or optimization techniques?\",\n",
" \"30. Is the problem a design challenge that requires creative solutions and innovation?\",\n",
" \"31. Does the problem require addressing systemic or structural issues rather than just individual instances?\",\n",
" \"32. Is the problem time-sensitive or urgent, requiring immediate attention and action?\",\n",
" \"33. What kinds of solution typically are produced for this kind of problem specification?\",\n",
" \"34. Given the problem specification and the current best solution, have a guess about other possible solutions.\"\n",
" \"35. Let’s imagine the current best solution is totally wrong, what other ways are there to think about the problem specification?\"\n",
" \"36. What is the best way to modify this current best solution, given what you know about these kinds of problem specification?\"\n",
" \"37. Ignoring the current best solution, create an entirely new solution to the problem.\"\n",
" # \"38. Let’s think step by step.\"\n",
" \"39. Let’s make a step by step plan and implement it with good notation and explanation.\",\n",
"]\n",
"\n",
"\n",
"task_example = \"Lisa has 10 apples. She gives 3 apples to her friend and then buys 5 more apples from the store. How many apples does Lisa have now?\"\n",
"\n",
"task_example = \"\"\"This SVG path element <path d=\"M 55.57,80.69 L 57.38,65.80 M 57.38,65.80 L 48.90,57.46 M 48.90,57.46 L\n",
"45.58,47.78 M 45.58,47.78 L 53.25,36.07 L 66.29,48.90 L 78.69,61.09 L 55.57,80.69\"/> draws a:\n",
"(A) circle (B) heptagon (C) hexagon (D) kite (E) line (F) octagon (G) pentagon(H) rectangle (I) sector (J) triangle\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "6cbfbe81-f751-42da-843a-f9003ace663d",
"metadata": {},
"outputs": [],
"source": [
"reasoning_modules_str = \"\\n\".join(reasoning_modules)"
]
},
{
"cell_type": "code",
"execution_count": 65,
"id": "d411c7aa-7017-4d67-88b5-43b5d161c34c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'task_description': 'This SVG path element <path d=\"M 55.57,80.69 L 57.38,65.80 M 57.38,65.80 L 48.90,57.46 M 48.90,57.46 L\\n45.58,47.78 M 45.58,47.78 L 53.25,36.07 L 66.29,48.90 L 78.69,61.09 L 55.57,80.69\"/> draws a:\\n(A) circle (B) heptagon (C) hexagon (D) kite (E) line (F) octagon (G) pentagon(H) rectangle (I) sector (J) triangle',\n",
" 'reasoning_modules': '1. How could I devise an experiment to help solve that problem?\\n2. Make a list of ideas for solving this problem, and apply them one by one to the problem to see if any progress can be made.\\n4. How can I simplify the problem so that it is easier to solve?\\n5. What are the key assumptions underlying this problem?\\n6. What are the potential risks and drawbacks of each solution?\\n7. What are the alternative perspectives or viewpoints on this problem?\\n8. What are the long-term implications of this problem and its solutions?\\n9. How can I break down this problem into smaller, more manageable parts?\\n10. Critical Thinking: This style involves analyzing the problem from different perspectives, questioning assumptions, and evaluating the evidence or information available. It focuses on logical reasoning, evidence-based decision-making, and identifying potential biases or flaws in thinking.\\n11. Try creative thinking, generate innovative and out-of-the-box ideas to solve the problem. Explore unconventional solutions, thinking beyond traditional boundaries, and encouraging imagination and originality.\\n13. Use systems thinking: Consider the problem as part of a larger system and understanding the interconnectedness of various elements. Focuses on identifying the underlying causes, feedback loops, and interdependencies that influence the problem, and developing holistic solutions that address the system as a whole.\\n14. Use Risk Analysis: Evaluate potential risks, uncertainties, and tradeoffs associated with different solutions or approaches to a problem. Emphasize assessing the potential consequences and likelihood of success or failure, and making informed decisions based on a balanced analysis of risks and benefits.\\n16. What is the core issue or problem that needs to be addressed?\\n17. What are the underlying causes or factors contributing to the problem?\\n18. Are there any potential solutions or strategies that have been tried before? If yes, what were the outcomes and lessons learned?\\n19. What are the potential obstacles or challenges that might arise in solving this problem?\\n20. Are there any relevant data or information that can provide insights into the problem? If yes, what data sources are available, and how can they be analyzed?\\n21. Are there any stakeholders or individuals who are directly affected by the problem? What are their perspectives and needs?\\n22. What resources (financial, human, technological, etc.) are needed to tackle the problem effectively?\\n23. How can progress or success in solving the problem be measured or evaluated?\\n24. What indicators or metrics can be used?\\n25. Is the problem a technical or practical one that requires a specific expertise or skill set? Or is it more of a conceptual or theoretical problem?\\n26. Does the problem involve a physical constraint, such as limited resources, infrastructure, or space?\\n27. Is the problem related to human behavior, such as a social, cultural, or psychological issue?\\n28. Does the problem involve decision-making or planning, where choices need to be made under uncertainty or with competing objectives?\\n29. Is the problem an analytical one that requires data analysis, modeling, or optimization techniques?\\n30. Is the problem a design challenge that requires creative solutions and innovation?\\n31. Does the problem require addressing systemic or structural issues rather than just individual instances?\\n32. 
Is the problem time-sensitive or urgent, requiring immediate attention and action?\\n33. What kinds of solution typically are produced for this kind of problem specification?\\n34. Given the problem specification and the current best solution, have a guess about other possible solutions.35. Let’s imagine the current best solution is totally wrong, what other ways are there to think about the problem specification?36. What is the best way to modify this current best solution, given what you know about these kinds of problem specification?37. Ignoring the current best solution, create an entirely new solution to the problem.39. Let’s make a step by step plan and implement it with good notation and explanation.',\n",
" 'selected_modules': 'To solve the task of identifying the shape drawn by the given SVG path element, the following reasoning modules are crucial:\\n\\n1. **Critical Thinking (10)**: This involves analyzing the SVG path commands and coordinates logically to understand the shape they form. It requires questioning assumptions (e.g., not assuming the shape based on a quick glance at the coordinates but rather analyzing the path commands and their implications) and evaluating the information provided by the SVG path data.\\n\\n2. **Analytical Problem Solving (29)**: The task requires data analysis skills to interpret the SVG path commands and coordinates. Understanding how the \"M\" (moveto) and \"L\" (lineto) commands work to draw lines between specified points is essential for determining the shape.\\n\\n3. **Creative Thinking (11)**: While the task primarily involves analytical skills, creative thinking can help in visualizing the shape that the path commands are likely to form, especially when the path data doesn\\'t immediately suggest a common shape.\\n\\n4. **Systems Thinking (13)**: Recognizing the SVG path as part of a larger system (in this case, the SVG graphics system) and understanding how individual path commands contribute to the overall shape can be helpful. This involves understanding the interconnectedness of the start and end points of each line segment and how they come together to form a complete shape.\\n\\n5. **Break Down the Problem (9)**: Breaking down the SVG path into its individual commands and analyzing each segment between \"M\" and \"L\" commands can simplify the task. This makes it easier to visualize and understand the shape being drawn step by step.\\n\\n6. **Visualization (not explicitly listed but implied in creative and analytical thinking)**: Visualizing the path that the \"M\" and \"L\" commands create is essential. This isn\\'t a listed module but is a skill that underpins both creative and analytical approaches to solving this problem.\\n\\nGiven the SVG path commands, one would analyze each segment drawn by \"M\" (moveto) and \"L\" (lineto) commands to determine the shape\\'s vertices and sides. This process involves critical thinking to assess the information, analytical skills to interpret the path data, and a degree of creative thinking for visualization. The task does not directly involve assessing risks, long-term implications, or stakeholder perspectives, so modules focused on those aspects (e.g., Risk Analysis (14), Long-term Implications (8)) are less relevant here.',\n",
" 'adapted_modules': 'To enhance the process of identifying the shape drawn by the given SVG path element, the reasoning modules can be adapted and specified as follows:\\n\\n1. **Detailed Path Analysis (Critical Thinking)**: This module focuses on a meticulous examination of the SVG path commands and coordinates. It involves a deep dive into the syntax and semantics of path commands such as \"M\" (moveto) and \"L\" (lineto), challenging initial perceptions and rigorously interpreting the sequence of commands to deduce the shape accurately. This analysis goes beyond surface-level inspection, requiring a systematic questioning of each command\\'s role in constructing the overall shape.\\n\\n2. **Path Command Interpretation (Analytical Problem Solving)**: Essential for this task is the ability to decode the SVG path\\'s \"M\" and \"L\" commands, translating these instructions into a mental or visual representation of the shape\\'s geometry. This module emphasizes the analytical dissection of the path data, focusing on how each command contributes to the formation of vertices and edges, thereby facilitating the identification of the shape.\\n\\n3. **Shape Visualization (Creative Thinking)**: Leveraging imagination to mentally construct the shape from the path commands is the core of this module. It involves creatively synthesizing the segments drawn by the \"M\" and \"L\" commands into a coherent visual image, even when the path data does not immediately suggest a recognizable shape. This creative process aids in bridging gaps in the analytical interpretation, offering alternative perspectives on the possible shape outcomes.\\n\\n4. **Path-to-Shape Synthesis (Systems Thinking)**: This module entails understanding the SVG path as a component within the broader context of vector graphics, focusing on how individual path commands interlink to form a cohesive shape. It requires an appreciation of the cumulative effect of each command in relation to the others, recognizing the systemic relationship between the starting and ending points of segments and their collective role in shaping the final figure.\\n\\n5. **Sequential Command Analysis (Break Down the Problem)**: By segmenting the SVG path into discrete commands, this approach simplifies the complexity of the task. It advocates for a step-by-step examination of the path, where each \"M\" to \"L\" sequence is analyzed in isolation before synthesizing the findings to understand the overall shape. This methodical breakdown facilitates a clearer visualization and comprehension of the shape being drawn.\\n\\n6. **Command-to-Geometry Mapping (Visualization)**: Central to solving this task is the ability to map the abstract \"M\" and \"L\" commands onto a concrete geometric representation. This implicit module underlies both the analytical and creative thinking processes, focusing on converting the path data into a visual form that can be easily understood and manipulated mentally. It is about constructing a mental image of the shape as each command is processed, enabling a dynamic visualization that evolves with each new piece of path data.\\n\\nBy adapting and specifying these reasoning modules, the task of identifying the shape drawn by the SVG path element becomes a structured process that leverages critical analysis, analytical problem-solving, creative visualization, systemic thinking, and methodical breakdown to accurately determine the shape as a (D) kite.',\n",
" 'reasoning_structure': '```json\\n{\\n \"Step 1: Detailed Path Analysis\": {\\n \"Description\": \"Examine each SVG path command and its coordinates closely. Understand the syntax and semantics of \\'M\\' (moveto) and \\'L\\' (lineto) commands.\",\\n \"Action\": \"List all path commands and their coordinates.\",\\n \"Expected Outcome\": \"A clear understanding of the sequence and direction of each path command.\"\\n },\\n \"Step 2: Path Command Interpretation\": {\\n \"Description\": \"Decode the \\'M\\' and \\'L\\' commands to translate these instructions into a mental or visual representation of the shape\\'s geometry.\",\\n \"Action\": \"Map each \\'M\\' and \\'L\\' command to its corresponding action (move or draw line) in the context of the shape.\",\\n \"Expected Outcome\": \"A segmented representation of the shape, highlighting vertices and edges.\"\\n },\\n \"Step 3: Shape Visualization\": {\\n \"Description\": \"Use imagination to mentally construct the shape from the path commands, synthesizing the segments into a coherent visual image.\",\\n \"Action\": \"Visualize the shape based on the segmented representation from Step 2.\",\\n \"Expected Outcome\": \"A mental image of the potential shape, considering the sequence and direction of path commands.\"\\n },\\n \"Step 4: Path-to-Shape Synthesis\": {\\n \"Description\": \"Understand the SVG path as a component within the broader context of vector graphics, focusing on how individual path commands interlink to form a cohesive shape.\",\\n \"Action\": \"Analyze the systemic relationship between the starting and ending points of segments and their collective role in shaping the final figure.\",\\n \"Expected Outcome\": \"Identification of the overall shape by recognizing the cumulative effect of each command.\"\\n },\\n \"Step 5: Sequential Command Analysis\": {\\n \"Description\": \"Segment the SVG path into discrete commands for a step-by-step examination, analyzing each \\'M\\' to \\'L\\' sequence in isolation.\",\\n \"Action\": \"Break down the path into individual commands and analyze each separately before synthesizing the findings.\",\\n \"Expected Outcome\": \"A clearer visualization and comprehension of the shape being drawn, segment by segment.\"\\n },\\n \"Step 6: Command-to-Geometry Mapping\": {\\n \"Description\": \"Map the abstract \\'M\\' and \\'L\\' commands onto a concrete geometric representation, constructing a mental image of the shape as each command is processed.\",\\n \"Action\": \"Convert the path data into a visual form that can be easily understood and manipulated mentally.\",\\n \"Expected Outcome\": \"A dynamic visualization of the shape that evolves with each new piece of path data, leading to the identification of the shape as a kite.\"\\n },\\n \"Conclusion\": {\\n \"Description\": \"Based on the analysis and visualization steps, determine the shape drawn by the SVG path element.\",\\n \"Action\": \"Review the outcomes of each step and synthesize the information to identify the shape.\",\\n \"Expected Outcome\": \"The correct identification of the shape, supported by the structured analysis and reasoning process.\"\\n }\\n}\\n```',\n",
" 'answer': 'Based on the provided reasoning structure and the SVG path element given, let\\'s analyze the path commands to identify the shape.\\n\\n**Step 1: Detailed Path Analysis**\\n- Description: The SVG path provided contains multiple \\'M\\' (moveto) and \\'L\\' (lineto) commands. Each command specifies a point in a 2D coordinate system.\\n- Action: The path commands are as follows:\\n 1. M 55.57,80.69 (Move to point)\\n 2. L 57.38,65.80 (Line to point)\\n 3. M 57.38,65.80 (Move to point)\\n 4. L 48.90,57.46 (Line to point)\\n 5. M 48.90,57.46 (Move to point)\\n 6. L 45.58,47.78 (Line to point)\\n 7. M 45.58,47.78 (Move to point)\\n 8. L 53.25,36.07 (Line to point)\\n 9. L 66.29,48.90 (Line to point)\\n 10. L 78.69,61.09 (Line to point)\\n 11. L 55.57,80.69 (Line to point)\\n- Expected Outcome: Understanding that the path commands describe a series of movements and lines that form a closed shape.\\n\\n**Step 2: Path Command Interpretation**\\n- Description: The \\'M\\' and \\'L\\' commands are used to move the \"pen\" to a starting point and draw lines to subsequent points, respectively.\\n- Action: The commands describe a shape starting at (55.57,80.69), drawing lines through several points, and finally closing the shape by returning to the starting point.\\n- Expected Outcome: A segmented representation showing a shape with distinct vertices at the specified coordinates.\\n\\n**Step 3: Shape Visualization**\\n- Description: Mentally constructing the shape from the provided path commands.\\n- Action: Visualizing the lines connecting in sequence from the starting point, through each point described by the \\'L\\' commands, and back to the starting point.\\n- Expected Outcome: A mental image of a shape that appears to have four distinct sides, suggesting it could be a quadrilateral.\\n\\n**Step 4: Path-to-Shape Synthesis**\\n- Description: Understanding how the path commands collectively form a specific shape.\\n- Action: Recognizing that the shape starts and ends at the same point, with lines drawn between intermediate points without overlapping, except at the starting/ending point.\\n- Expected Outcome: Identification of a closed, four-sided figure, which suggests it could be a kite based on the symmetry and structure of the lines.\\n\\n**Step 5: Sequential Command Analysis**\\n- Description: Analyzing each \\'M\\' to \\'L\\' sequence in isolation.\\n- Action: Observing that the path does not describe a regular polygon (like a hexagon or octagon) or a circle, but rather a shape with distinct angles and sides.\\n- Expected Outcome: A clearer understanding that the shape has four sides, with two pairs of adjacent sides being potentially unequal, which is characteristic of a kite.\\n\\n**Step 6: Command-to-Geometry Mapping**\\n- Description: Converting the abstract path commands into a geometric shape.\\n- Action: Mapping the path data to visualize a shape with two pairs of adjacent sides that are distinct yet symmetrical, indicative of a kite.\\n- Expected Outcome: A dynamic visualization that evolves to clearly represent a kite shape.\\n\\n**Conclusion**\\n- Description: Determining the shape drawn by the SVG path element.\\n- Action: Reviewing the outcomes of each analysis step, which consistently point towards a four-sided figure with distinct properties of a kite.\\n- Expected Outcome: The correct identification of the shape as a kite (D).'}"
]
},
"execution_count": 65,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"overall_chain.invoke(\n",
" {\"task_description\": task_example, \"reasoning_modules\": reasoning_modules_str}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ea8568d5-bdb6-45cd-8d04-1ab305786caa",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "c14a291c-7c1b-43bc-807e-11180290985e",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
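For reference, the chain in the deleted notebook above is four prompt–model–parser stages composed with `RunnablePassthrough.assign`, so each stage's output is merged into the running input dict before the next stage runs. A minimal sketch of that composition, distilled from the cells above (it assumes an OpenAI API key and network access to the LangChain hub):

```python
from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0, model="gpt-4-turbo-preview")

# Each stage reads the accumulated dict and contributes one new key to it.
select_chain = hub.pull("hwchase17/self-discovery-select") | model | StrOutputParser()
adapt_chain = hub.pull("hwchase17/self-discovery-adapt") | model | StrOutputParser()
structure_chain = hub.pull("hwchase17/self-discovery-structure") | model | StrOutputParser()
reasoning_chain = hub.pull("hwchase17/self-discovery-reasoning") | model | StrOutputParser()

overall_chain = (
    RunnablePassthrough.assign(selected_modules=select_chain)
    .assign(adapted_modules=adapt_chain)
    .assign(reasoning_structure=structure_chain)
    .assign(answer=reasoning_chain)
)

# Returns the input keys plus selected_modules, adapted_modules,
# reasoning_structure, and answer, as in the notebook's final output.
result = overall_chain.invoke(
    {"task_description": "...", "reasoning_modules": "..."}
)
```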
@@ -1,17 +0,0 @@
# docker-compose to make it easier to spin up integration tests.
# Services should use NON standard ports to avoid collision with
version: "3"
name: langchain-tests

services:
  redis:
    image: redis/redis-stack-server:latest
    # We use non standard ports since
    # these instances are used for testing
    # and users may already have existing
    # redis instances set up locally
    # for other projects
    ports:
      - "6020:6379"
    volumes:
      - ./redis-volume:/data
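Because the compose file maps Redis onto the non-standard host port 6020 (container port 6379), test code has to target that port explicitly. A minimal sketch, assuming the `redis` Python client is installed and the container is running:

```python
import redis

# The compose file above maps host port 6020 to the container's 6379,
# so tests connect to 6020 rather than the Redis default.
client = redis.Redis(host="localhost", port=6020)
client.ping()  # raises ConnectionError if redis-stack-server is not up
```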
@@ -16,8 +16,7 @@ cp ../cookbook/README.md src/pages/cookbook.mdx
mkdir -p docs/templates
cp ../templates/docs/INDEX.md docs/templates/index.md
poetry run python scripts/copy_templates.py
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
wget -q https://raw.githubusercontent.com/langchain-ai/langgraph/main/README.md -O docs/langgraph.md
wget https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md

yarn

@@ -146,7 +146,6 @@ partners = [
    (p.name, p.name.replace("-", "_") + "_api_reference")
    for p in partners_dir.iterdir()
]
partners = sorted(partners)

html_context = {
    "display_github": True,  # Integrate GitHub

@@ -1,5 +1,4 @@
"""Script for auto-generating api_reference.rst."""

import importlib
import inspect
import os
@@ -187,7 +186,7 @@ def _load_package_modules(
            modules_by_namespace[top_namespace] = _module_members

    except ImportError as e:
        print(f"Error: Unable to import module '{namespace}' with error: {e}")  # noqa: T201
        print(f"Error: Unable to import module '{namespace}' with error: {e}")

    return modules_by_namespace

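The pattern this hunk touches, importing namespaces dynamically and downgrading failures to printed warnings so one broken package does not abort the whole docs build, looks roughly like the following sketch (simplified; the hypothetical `load_modules` stands in for the real `_load_package_modules`, which tracks members per namespace):

```python
import importlib

def load_modules(namespaces: list[str]) -> dict:
    """Import each namespace, skipping (but reporting) ones that fail."""
    modules = {}
    for namespace in namespaces:
        try:
            modules[namespace] = importlib.import_module(namespace)
        except ImportError as e:
            # Mirror the script above: report and continue rather than abort.
            print(f"Error: Unable to import module '{namespace}' with error: {e}")
    return modules
```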
File diff suppressed because one or more lines are too long
4 docs/docs/_templates/integration.mdx vendored
@@ -37,7 +37,7 @@ from langchain_community.llms import integration_class_REPLACE_ME

## Text Embedding Models

See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME).
See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME)

```python
from langchain_community.embeddings import integration_class_REPLACE_ME
@@ -45,7 +45,7 @@ from langchain_community.embeddings import integration_class_REPLACE_ME

## Chat models

See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME).
See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME)

```python
from langchain_community.chat_models import integration_class_REPLACE_ME

@@ -2,7 +2,7 @@

Below are links to tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases).

⛓ icon marks a new addition [last update 2024-02-06]
⛓ icon marks a new addition [last update 2023-09-21]

---------------------

@@ -10,20 +10,18 @@ Below are links to tutorials and courses on LangChain. For written guides on com

### Books

#### [Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffrath](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
#### ⛓[Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffrath](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing


### DeepLearning.AI courses
by [Harrison Chase](https://en.wikipedia.org/wiki/LangChain) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng)
- [LangChain for LLM Application Development](https://learn.deeplearning.ai/langchain)
- [LangChain Chat with Your Data](https://learn.deeplearning.ai/langchain-chat-with-your-data)
- [Functions, Tools and Agents with LangChain](https://learn.deeplearning.ai/functions-tools-agents-langchain)
- ⛓ [Functions, Tools and Agents with LangChain](https://learn.deeplearning.ai/functions-tools-agents-langchain)

### Handbook
[LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**

⛓ [LangChain Cheatsheet](https://pub.towardsai.net/langchain-cheatsheet-all-secrets-on-a-single-page-8be26b721cde) by **Ivan Reznikov**

### Short Tutorials
[LangChain Explained in 13 Minutes | QuickStart Tutorial for Beginners](https://youtu.be/aywZrzNaKjs) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)

@@ -31,8 +29,6 @@ Below are links to tutorials and courses on LangChain. For written guides on com

[LangChain Crash Course - Build apps with language models](https://youtu.be/LbT1yp6quS8) by [Patrick Loeber](https://www.youtube.com/@patloeber)

⛓ [LangChain 101 Course](https://medium.com/@ivanreznikov/langchain-101-course-updated-668f7b41d6cb) by **Ivan Reznikov**

## Tutorials

### [LangChain for Gen AI and LLMs](https://www.youtube.com/playlist?list=PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F) by [James Briggs](https://www.youtube.com/@jamesbriggs)
@@ -48,8 +44,8 @@ Below are links to tutorials and courses on LangChain. For written guides on com
- #9 [Build Conversational Agents with Vector DBs](https://youtu.be/H6bCqqw9xyI)
- [Using NEW `MPT-7B` in Hugging Face and LangChain](https://youtu.be/DXpk9K7DgMo)
- [`MPT-30B` Chatbot with LangChain](https://youtu.be/pnem-EhT6VI)
- [Fine-tuning OpenAI's `GPT 3.5` for LangChain Agents](https://youtu.be/boHXgQ5eQic?si=OOOfK-GhsgZGBqSr)
- [Chatbots with `RAG`: LangChain Full Walkthrough](https://youtu.be/LhnCsygAvzY?si=N7k6xy4RQksbWwsQ)
- ⛓ [Fine-tuning OpenAI's `GPT 3.5` for LangChain Agents](https://youtu.be/boHXgQ5eQic?si=OOOfK-GhsgZGBqSr)
- ⛓ [Chatbots with `RAG`: LangChain Full Walkthrough](https://youtu.be/LhnCsygAvzY?si=N7k6xy4RQksbWwsQ)


### [LangChain 101](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5) by [Greg Kamradt (Data Indy)](https://www.youtube.com/@DataIndependent)
@@ -113,16 +109,16 @@ Below are links to tutorials and courses on LangChain. For written guides on com
- [What can you do with 16K tokens in LangChain?](https://youtu.be/z2aCZBAtWXs)
- [Tagging and Extraction - Classification using `OpenAI Functions`](https://youtu.be/a8hMgIcUEnE)
- [HOW to Make Conversational Form with LangChain](https://youtu.be/IT93On2LB5k)
- [`Claude-2` meets LangChain!](https://youtu.be/Hb_D3p0bK2U?si=j96Kc7oJoeRI5-iC)
- [`PaLM 2` Meets LangChain](https://youtu.be/orPwLibLqm4?si=KgJjpEbAD9YBPqT4)
- [`LLaMA2` with LangChain - Basics | LangChain TUTORIAL](https://youtu.be/cIRzwSXB4Rc?si=v3Hwxk1m3fksBIHN)
- [Serving `LLaMA2` with `Replicate`](https://youtu.be/JIF4nNi26DE?si=dSazFyC4UQmaR-rJ)
- [NEW LangChain Expression Language](https://youtu.be/ud7HJ2p3gp0?si=8pJ9O6hGbXrCX5G9)
- [Building a RCI Chain for Agents with LangChain Expression Language](https://youtu.be/QaKM5s0TnsY?si=0miEj-o17AHcGfLG)
- [How to Run `LLaMA-2-70B` on the `Together AI`](https://youtu.be/Tc2DHfzHeYE?si=Xku3S9dlBxWQukpe)
- [`RetrievalQA` with `LLaMA 2 70b` & `Chroma` DB](https://youtu.be/93yueQQnqpM?si=ZMwj-eS_CGLnNMXZ)
- [How to use `BGE Embeddings` for LangChain](https://youtu.be/sWRvSG7vL4g?si=85jnvnmTCF9YIWXI)
- [How to use Custom Prompts for `RetrievalQA` on `LLaMA-2 7B`](https://youtu.be/PDwUKves9GY?si=sMF99TWU0p4eiK80)
- ⛓ [`Claude-2` meets LangChain!](https://youtu.be/Hb_D3p0bK2U?si=j96Kc7oJoeRI5-iC)
- ⛓ [`PaLM 2` Meets LangChain](https://youtu.be/orPwLibLqm4?si=KgJjpEbAD9YBPqT4)
- ⛓ [`LLaMA2` with LangChain - Basics | LangChain TUTORIAL](https://youtu.be/cIRzwSXB4Rc?si=v3Hwxk1m3fksBIHN)
- ⛓ [Serving `LLaMA2` with `Replicate`](https://youtu.be/JIF4nNi26DE?si=dSazFyC4UQmaR-rJ)
- ⛓ [NEW LangChain Expression Language](https://youtu.be/ud7HJ2p3gp0?si=8pJ9O6hGbXrCX5G9)
- ⛓ [Building a RCI Chain for Agents with LangChain Expression Language](https://youtu.be/QaKM5s0TnsY?si=0miEj-o17AHcGfLG)
- ⛓ [How to Run `LLaMA-2-70B` on the `Together AI`](https://youtu.be/Tc2DHfzHeYE?si=Xku3S9dlBxWQukpe)
- ⛓ [`RetrievalQA` with `LLaMA 2 70b` & `Chroma` DB](https://youtu.be/93yueQQnqpM?si=ZMwj-eS_CGLnNMXZ)
- ⛓ [How to use `BGE Embeddings` for LangChain](https://youtu.be/sWRvSG7vL4g?si=85jnvnmTCF9YIWXI)
- ⛓ [How to use Custom Prompts for `RetrievalQA` on `LLaMA-2 7B`](https://youtu.be/PDwUKves9GY?si=sMF99TWU0p4eiK80)


### [LangChain](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr) by [Prompt Engineering](https://www.youtube.com/@engineerprompt)
@@ -135,8 +131,8 @@ Below are links to tutorials and courses on LangChain. For written guides on com
- [LangChain: Giving Memory to LLMs](https://youtu.be/dxO6pzlgJiY)
- [BEST OPEN Alternative to `OPENAI's EMBEDDINGs` for Retrieval QA: LangChain](https://youtu.be/ogEalPMUCSY)
- [LangChain: Run Language Models Locally - `Hugging Face Models`](https://youtu.be/Xxxuw4_iCzw)
- [Slash API Costs: Mastering Caching for LLM Applications](https://youtu.be/EQOznhaJWR0?si=AXoI7f3-SVFRvQUl)
- [Avoid PROMPT INJECTION with `Constitutional AI` - LangChain](https://youtu.be/tyKSkPFHVX8?si=9mgcB5Y1kkotkBGB)
- ⛓ [Slash API Costs: Mastering Caching for LLM Applications](https://youtu.be/EQOznhaJWR0?si=AXoI7f3-SVFRvQUl)
- ⛓ [Avoid PROMPT INJECTION with `Constitutional AI` - LangChain](https://youtu.be/tyKSkPFHVX8?si=9mgcB5Y1kkotkBGB)


### LangChain by [Chat with data](https://www.youtube.com/@chatwithdata)
@@ -152,4 +148,4 @@ Below are links to tutorials and courses on LangChain. For written guides on com


---------------------
⛓ icon marks a new addition [last update 2024-02-06]
⛓ icon marks a new addition [last update 2023-09-21]

@@ -120,8 +120,6 @@
- ⛓ [Use ANY language in `LangSmith` with REST](https://youtu.be/7BL0GEdMmgY?si=iXfOEdBLqXF6hqRM) by [Nerding I/O](https://www.youtube.com/@nerding_io)
- ⛓ [How to Leverage the Full Potential of LLMs for Your Business with Langchain - Leon Ruddat](https://youtu.be/vZmoEa7oWMg?si=ZhMmydq7RtkZd56Q) by [PyData](https://www.youtube.com/@PyDataTV)
- ⛓ [`ChatCSV` App: Chat with CSV files using LangChain and `Llama 2`](https://youtu.be/PvsMg6jFs8E?si=Qzg5u5gijxj933Ya) by [Muhammad Moin](https://www.youtube.com/@muhammadmoinfaisal)
- ⛓ [Build Chat PDF app in Python with LangChain, OpenAI, Streamlit | Full project | Learn Coding](https://www.youtube.com/watch?v=WYzFzZg4YZI) by [Jutsupoint](https://www.youtube.com/@JutsuPoint)
- ⛓ [Build Eminem Bot App with LangChain, Streamlit, OpenAI | Full Python Project | Tutorial | AI ChatBot](https://www.youtube.com/watch?v=a2shHB4MRZ4) by [Jutsupoint](https://www.youtube.com/@JutsuPoint)


### [Prompt Engineering and LangChain](https://www.youtube.com/watch?v=muXbPpG_ys4&list=PLEJK-H61Xlwzm5FYLDdKt_6yibO33zoMW) by [Venelin Valkov](https://www.youtube.com/@venelin_valkov)
@@ -134,4 +132,4 @@


---------------------
⛓ icon marks a new addition [last update 2024-02-04]
⛓ icon marks a new addition [last update 2023-09-21]

@@ -7,7 +7,7 @@
"source": [
"# Add message history (memory)\n",
"\n",
"The `RunnableWithMessageHistory` let us add message history to certain types of chains.\n",
"The `RunnableWithMessageHistory` let's us add message history to certain types of chains.\n",
"\n",
"Specifically, it can be used for any Runnable that takes as input one of\n",
"\n",

@@ -66,8 +66,6 @@
}
],
"source": [
"# Showing the example using anthropic, but you can use\n",
"# your favorite chat model!\n",
"from langchain.chat_models import ChatAnthropic\n",
"\n",
"model = ChatAnthropic()\n",
@@ -166,9 +164,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
" Here|'s| a| silly| joke| about| a| par|rot|:|\n",
" Sure|,| here|'s| a| funny| joke| about| a| par|rot|:|\n",
"\n",
"What| kind| of| teacher| gives| good| advice|?| An| ap|-|parent| (|app|arent|)| one|!||"
"Why| doesn|'t| a| par|rot| ever| get| hungry| at| night|?| Because| it| has| a| light| snack| before| bed|!||"
]
}
],
@@ -230,34 +228,40 @@
"{'countries': [{}]}\n",
"{'countries': [{'name': ''}]}\n",
"{'countries': [{'name': 'France'}]}\n",
"{'countries': [{'name': 'France', 'population': 67}]}\n",
"{'countries': [{'name': 'France', 'population': 6739}]}\n",
"{'countries': [{'name': 'France', 'population': 673915}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': ''}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Sp'}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain'}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 4675}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 467547}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {'name': ''}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {'name': 'Japan'}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {'name': 'Japan', 'population': 12}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {'name': 'Japan', 'population': 12647}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {'name': 'Japan', 'population': 1264764}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {'name': 'Japan', 'population': 126476461}]}\n"
"{'countries': [{'name': 'France', 'population': ''}]}\n",
"{'countries': [{'name': 'France', 'population': '67'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': ''}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': ''}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': ''}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan', 'population': ''}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan', 'population': '126'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan', 'population': '126,'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan', 'population': '126,860'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan', 'population': '126,860,'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan', 'population': '126,860,301'}]}\n"
]
}
],
"source": [
"from langchain_core.output_parsers import JsonOutputParser\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"\n",
"chain = (\n",
" model | JsonOutputParser()\n",
") # Due to a bug in older versions of Langchain, JsonOutputParser did not stream results from some models\n",
"model = ChatOpenAI()\n",
"\n",
"chain = model | JsonOutputParser() # This parser only works with OpenAI right now\n",
"async for text in chain.astream(\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'\n",
"):\n",
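The incremental dicts in the output above come from re-parsing an ever-growing JSON prefix and auto-completing it on each new chunk. The effect can be reproduced without a model by feeding prefixes to langchain-core's partial-JSON helper; this is a sketch, and it assumes `parse_partial_json` is exposed at this import path in your langchain-core version:

```python
from langchain_core.utils.json import parse_partial_json

full = '{"countries": [{"name": "France", "population": 67391582}]}'

# Feed successively longer prefixes, the way chunks arrive from a model.
# parse_partial_json auto-closes open braces and quotes, so each prefix
# yields a valid (if incomplete) dict, mirroring the streamed states above.
seen = None
for end in range(1, len(full) + 1):
    try:
        parsed = parse_partial_json(full[:end])
    except Exception:
        continue  # prefix not yet completable; wait for more characters
    if parsed is not None and parsed != seen:
        print(parsed)
        seen = parsed
```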
@@ -290,14 +294,12 @@
"name": "stdout",
"output_type": "stream",
"text": [
"['France', 'Spain', 'Japan']|"
"[None, None, None]|"
]
}
],
"source": [
"from langchain_core.output_parsers import (\n",
" JsonOutputParser,\n",
")\n",
"from langchain_core.output_parsers import JsonOutputParser\n",
"\n",
"\n",
"# A function that operates on finalized inputs\n",
@@ -324,7 +326,7 @@
"chain = model | JsonOutputParser() | _extract_country_names\n",
"\n",
"async for text in chain.astream(\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\"'\n",
"):\n",
" print(text, end=\"|\", flush=True)"
]
@@ -346,7 +348,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 7,
"id": "15984b2b-315a-4119-945b-2a3dabea3082",
"metadata": {},
"outputs": [
@@ -354,7 +356,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"France|Sp|Spain|Japan|"
"France|Spain|Japan|"
]
}
],
@@ -390,23 +392,11 @@
"chain = model | JsonOutputParser() | _extract_country_names_streaming\n",
"\n",
"async for text in chain.astream(\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\"'\n",
"):\n",
" print(text, end=\"|\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "d59823f5-9b9a-43c5-a213-34644e2f1d3d",
"metadata": {},
"source": [
":::{.callout-note}\n",
"Because the code above is relying on JSON auto-completion, you may see partial names of countries (e.g., `Sp` and `Spain`), which is not what one would want for an extraction result!\n",
"\n",
"We're focusing on streaming concepts, not necessarily the results of the chains.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "6adf65b7-aa47-4321-98c7-a0abe43b833a",
@@ -419,7 +409,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "b9b1c00d-8b44-40d0-9e2b-8a70d238f82b",
"metadata": {},
"outputs": [
@@ -430,7 +420,7 @@
" Document(page_content='harrison likes spicy food')]]"
]
},
"execution_count": 7,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -475,7 +465,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"id": "957447e6-1e60-41ef-8c10-2654bd9e738d",
"metadata": {},
"outputs": [],
@@ -493,7 +483,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 10,
"id": "94e50b5d-bf51-4eee-9da0-ee40dd9ce42b",
"metadata": {},
"outputs": [
@@ -501,7 +491,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
" Based| on| the| given| context|,| the| only| information| provided| about| where| Harrison| worked| is| that| he| worked| at| Ken|sh|o|.| Since| there| are| no| other| details| provided| about| Ken|sh|o|,| I| do| not| have| enough| information| to| write| 3| additional| made| up| sentences| about| this| place|.| I| can| only| state| that| Harrison| worked| at| Ken|sh|o|.||"
"|H|arrison| worked| at| Kens|ho|,| a| renowned| technology| company| known| for| revolution|izing| the| artificial| intelligence| industry|.\n",
"|K|ens|ho|,| located| in| the| heart| of| Silicon| Valley|,| is| famous| for| its| cutting|-edge| research| and| development| in| machine| learning|.\n",
"|With| its| state|-of|-the|-art| facilities| and| talented| team|,| Kens|ho| has| become| a| hub| for| innovation| and| a| sought|-after| workplace| for| tech| enthusiasts| like| Harrison|.||"
]
}
],
@@ -536,17 +528,17 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": 11,
|
||||
"id": "61348df9-ec58-401e-be89-68a70042f88e",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'0.1.18'"
|
||||
"'0.1.14'"
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -612,7 +604,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"execution_count": 12,
|
||||
"id": "c00df46e-7f6b-4e06-8abf-801898c8d57f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@@ -620,7 +612,7 @@
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: This API is in beta and may change in the future.\n",
|
||||
"/home/eugene/.pyenv/versions/3.11.4/envs/langchain_3_11_4/lib/python3.11/site-packages/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: This API is in beta and may change in the future.\n",
|
||||
" warn_beta(\n"
|
||||
]
|
||||
}
|
||||
@@ -658,7 +650,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"execution_count": 13,
|
||||
"id": "ce31b525-f47d-4828-85a7-912ce9f2e79b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@@ -666,26 +658,26 @@
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[{'event': 'on_chat_model_start',\n",
|
||||
" 'run_id': '555843ed-3d24-4774-af25-fbf030d5e8c4',\n",
|
||||
" 'name': 'ChatAnthropic',\n",
|
||||
" 'run_id': 'd78b4ffb-0eb1-499c-8a90-8e4a4aa2edae',\n",
|
||||
" 'name': 'ChatOpenAI',\n",
|
||||
" 'tags': [],\n",
|
||||
" 'metadata': {},\n",
|
||||
" 'data': {'input': 'hello'}},\n",
|
||||
" {'event': 'on_chat_model_stream',\n",
|
||||
" 'run_id': '555843ed-3d24-4774-af25-fbf030d5e8c4',\n",
|
||||
" 'run_id': 'd78b4ffb-0eb1-499c-8a90-8e4a4aa2edae',\n",
|
||||
" 'tags': [],\n",
|
||||
" 'metadata': {},\n",
|
||||
" 'name': 'ChatAnthropic',\n",
|
||||
" 'data': {'chunk': AIMessageChunk(content=' Hello')}},\n",
|
||||
" 'name': 'ChatOpenAI',\n",
|
||||
" 'data': {'chunk': AIMessageChunk(content='')}},\n",
|
||||
" {'event': 'on_chat_model_stream',\n",
|
||||
" 'run_id': '555843ed-3d24-4774-af25-fbf030d5e8c4',\n",
|
||||
" 'run_id': 'd78b4ffb-0eb1-499c-8a90-8e4a4aa2edae',\n",
|
||||
" 'tags': [],\n",
|
||||
" 'metadata': {},\n",
|
||||
" 'name': 'ChatAnthropic',\n",
|
||||
" 'data': {'chunk': AIMessageChunk(content='!')}}]"
|
||||
" 'name': 'ChatOpenAI',\n",
|
||||
" 'data': {'chunk': AIMessageChunk(content='Hello')}}]"
|
||||
]
|
||||
},
|
||||
"execution_count": 12,
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
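For reference, a sketch of how an event list like the one above is gathered. `astream_events` is in beta (note the warning in the earlier cell), so the event schema may change; `ChatOpenAI` is an assumption here, any chat model works:

```python
import asyncio

from langchain_openai import ChatOpenAI

model = ChatOpenAI()


async def main() -> None:
    # Gather every event for one short prompt; `version` is required
    # while the API is in beta.
    events = [e async for e in model.astream_events("hello", version="v1")]
    print(events[:3])   # on_chat_model_start plus the first stream events
    print(events[-2:])  # final stream event plus on_chat_model_end


asyncio.run(main())
```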
@@ -696,7 +688,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 14,
"id": "76cfe826-ee63-4310-ad48-55a95eb3b9d6",
"metadata": {},
"outputs": [
@@ -704,20 +696,20 @@
"data": {
"text/plain": [
"[{'event': 'on_chat_model_stream',\n",
" 'run_id': '555843ed-3d24-4774-af25-fbf030d5e8c4',\n",
" 'run_id': 'd78b4ffb-0eb1-499c-8a90-8e4a4aa2edae',\n",
" 'tags': [],\n",
" 'metadata': {},\n",
" 'name': 'ChatAnthropic',\n",
" 'name': 'ChatOpenAI',\n",
" 'data': {'chunk': AIMessageChunk(content='')}},\n",
" {'event': 'on_chat_model_end',\n",
" 'name': 'ChatAnthropic',\n",
" 'run_id': '555843ed-3d24-4774-af25-fbf030d5e8c4',\n",
" 'name': 'ChatOpenAI',\n",
" 'run_id': 'd78b4ffb-0eb1-499c-8a90-8e4a4aa2edae',\n",
" 'tags': [],\n",
" 'metadata': {},\n",
" 'data': {'output': AIMessageChunk(content=' Hello!')}}]"
" 'data': {'output': AIMessageChunk(content='Hello! How can I assist you today?')}}]"
]
},
"execution_count": 13,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@@ -738,14 +730,12 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 15,
"id": "4328c56c-a303-427b-b1f2-f354e9af555c",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
" model | JsonOutputParser()\n",
") # Due to a bug in older versions of Langchain, JsonOutputParser did not stream results from some models\n",
"chain = model | JsonOutputParser() # This parser only works with OpenAI right now\n",
"\n",
"events = [\n",
" event\n",
@@ -772,7 +762,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 16,
"id": "8e66ea3d-a450-436a-aaac-d9478abc6c28",
"metadata": {},
"outputs": [
@@ -780,26 +770,26 @@
"data": {
"text/plain": [
"[{'event': 'on_chain_start',\n",
" 'run_id': 'b1074bff-2a17-458b-9e7b-625211710df4',\n",
" 'run_id': 'aa992fb9-d79f-46f3-a857-ae4acad841c4',\n",
" 'name': 'RunnableSequence',\n",
" 'tags': [],\n",
" 'metadata': {},\n",
" 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}},\n",
" {'event': 'on_chat_model_start',\n",
" 'name': 'ChatAnthropic',\n",
" 'run_id': '6072be59-1f43-4f1c-9470-3b92e8406a99',\n",
" 'name': 'ChatOpenAI',\n",
" 'run_id': 'c5406de5-0880-4829-ae26-bb565b404e27',\n",
" 'tags': ['seq:step:1'],\n",
" 'metadata': {},\n",
" 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`')]]}}},\n",
" {'event': 'on_parser_start',\n",
" 'name': 'JsonOutputParser',\n",
" 'run_id': 'bf978194-0eda-4494-ad15-3a5bfe69cd59',\n",
" 'run_id': '32b47794-8fb6-4ef4-8800-23ed6c3f4519',\n",
" 'tags': ['seq:step:2'],\n",
" 'metadata': {},\n",
" 'data': {}}]"
]
},
"execution_count": 15,
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
@@ -826,7 +816,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 17,
"id": "630c71d6-8d94-4ce0-a78a-f20e90f628df",
"metadata": {},
"outputs": [
@@ -834,31 +824,29 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Chat model chunk: ' Here'\n",
"Chat model chunk: ' is'\n",
"Chat model chunk: ' the'\n",
"Chat model chunk: ' JSON'\n",
"Chat model chunk: ' with'\n",
"Chat model chunk: ' the'\n",
"Chat model chunk: ' requested'\n",
"Chat model chunk: ' countries'\n",
"Chat model chunk: ' and'\n",
"Chat model chunk: ' their'\n",
"Chat model chunk: ' populations'\n",
"Chat model chunk: ':'\n",
"Chat model chunk: '\\n\\n```'\n",
"Chat model chunk: 'json'\n",
"Chat model chunk: ''\n",
"Parser chunk: {}\n",
"Chat model chunk: '\\n{'\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '{\\n'\n",
"Chat model chunk: ' '\n",
"Chat model chunk: ' \"'\n",
"Chat model chunk: 'countries'\n",
"Chat model chunk: '\":'\n",
"Parser chunk: {'countries': []}\n",
"Chat model chunk: ' ['\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: ' [\\n'\n",
"Chat model chunk: ' '\n",
"Parser chunk: {'countries': [{}]}\n",
"Chat model chunk: ' {'\n",
"Chat model chunk: ' {\\n'\n",
"Chat model chunk: ' '\n",
"Chat model chunk: ' \"'\n",
"Chat model chunk: 'name'\n",
"Chat model chunk: '\":'\n",
"Parser chunk: {'countries': [{'name': ''}]}\n",
"Chat model chunk: ' \"'\n",
"Parser chunk: {'countries': [{'name': 'France'}]}\n",
"Chat model chunk: 'France'\n",
"Chat model chunk: '\",\\n'\n",
"Chat model chunk: ' '\n",
"Chat model chunk: ' \"'\n",
"...\n"
]
}
@@ -909,7 +897,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 18,
"id": "4f0b581b-be63-4663-baba-c6d2b625cdf9",
"metadata": {},
"outputs": [
@@ -917,17 +905,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_parser_start', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': []}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': ''}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France'}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 6739}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 673915}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67391582}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67391582}, {}]}}}\n",
"{'event': 'on_parser_start', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': []}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': ''}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France'}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 670}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 670600}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67060000}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67060000}, {}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67060000}, {'name': ''}]}}}\n",
"...\n"
]
}
@@ -961,7 +949,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 19,
"id": "096cd904-72f0-4ebe-a8b7-d0e730faea7f",
"metadata": {},
"outputs": [
@@ -969,17 +957,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_start', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`')]]}}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' Here')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' is')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' the')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' JSON')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' with')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' the')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' requested')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' countries')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' and')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' their')}}\n",
"{'event': 'on_chat_model_start', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`')]]}}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='{\\n')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' ')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' \"')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='countries')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='\":')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' [\\n')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' ')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' {\\n')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' ')}}\n",
"...\n"
]
}
@@ -1020,7 +1008,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 20,
"id": "26bac0d2-76d9-446e-b346-82790236b88d",
"metadata": {},
"outputs": [
@@ -1028,17 +1016,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'run_id': '190875f3-3fb7-49ad-9b6e-f49da22f3e49', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}, 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}}\n",
"{'event': 'on_chat_model_start', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`')]]}}}\n",
"{'event': 'on_parser_start', 'name': 'JsonOutputParser', 'run_id': '3b5e4ca1-40fe-4a02-9a19-ba2a43a6115c', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}, 'data': {}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' Here')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' is')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' the')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' JSON')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' with')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' the')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' requested')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' countries')}}\n",
"{'event': 'on_chain_start', 'run_id': 'd4c78db8-be20-4fa0-87d6-cb317822967a', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}, 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}}\n",
"{'event': 'on_chat_model_start', 'name': 'ChatOpenAI', 'run_id': '15e46d9f-ccf5-4da2-b9e3-b2a85873ba4c', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`')]]}}}\n",
"{'event': 'on_parser_start', 'name': 'JsonOutputParser', 'run_id': '91945f4f-0deb-4999-acf0-f6d191c89b34', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}, 'data': {}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatOpenAI', 'run_id': '15e46d9f-ccf5-4da2-b9e3-b2a85873ba4c', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='')}}\n",
"{'event': 'on_parser_stream', 'name': 'JsonOutputParser', 'run_id': '91945f4f-0deb-4999-acf0-f6d191c89b34', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}, 'data': {'chunk': {}}}\n",
"{'event': 'on_chain_stream', 'run_id': 'd4c78db8-be20-4fa0-87d6-cb317822967a', 'tags': ['my_chain'], 'metadata': {}, 'name': 'RunnableSequence', 'data': {'chunk': {}}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatOpenAI', 'run_id': '15e46d9f-ccf5-4da2-b9e3-b2a85873ba4c', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='{\"')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatOpenAI', 'run_id': '15e46d9f-ccf5-4da2-b9e3-b2a85873ba4c', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='countries')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatOpenAI', 'run_id': '15e46d9f-ccf5-4da2-b9e3-b2a85873ba4c', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='\":')}}\n",
"{'event': 'on_parser_stream', 'name': 'JsonOutputParser', 'run_id': '91945f4f-0deb-4999-acf0-f6d191c89b34', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}, 'data': {'chunk': {'countries': []}}}\n",
"{'event': 'on_chain_stream', 'run_id': 'd4c78db8-be20-4fa0-87d6-cb317822967a', 'tags': ['my_chain'], 'metadata': {}, 'name': 'RunnableSequence', 'data': {'chunk': {'countries': []}}}\n",
"...\n"
]
}
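The three filtered runs above (by name, by type, by tag) all use the same mechanism. A sketch under the same assumptions, with the run names and tag attached via `with_config` as in the preceding cells:

```python
import asyncio

from langchain_core.output_parsers import JsonOutputParser
from langchain_openai import ChatOpenAI

model = ChatOpenAI().with_config({"run_name": "model"})
parser = JsonOutputParser().with_config({"run_name": "my_parser"})
chain = (model | parser).with_config({"tags": ["my_chain"]})


async def main() -> None:
    async for event in chain.astream_events(
        "output a list of three countries in JSON",
        version="v1",
        include_names=["my_parser"],  # or: include_types=["chat_model"]
        # or: include_tags=["my_chain"]
    ):
        # Only events from the matching runnable(s) are yielded.
        print(event["event"], event["name"])


asyncio.run(main())
```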
@@ -1074,7 +1062,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 21,
"id": "0e6451d3-3b11-4a71-ae19-998f4c10180f",
"metadata": {},
"outputs": [],
@@ -1116,7 +1104,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 22,
"id": "f9a8fe35-faab-4970-b8c0-5c780845d98a",
"metadata": {},
"outputs": [
@@ -1124,7 +1112,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"['France', 'Spain', 'Japan']\n"
"\n"
]
}
],
@@ -1145,7 +1133,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 23,
"id": "b08215cd-bffa-4e76-aaf3-c52ee34f152c",
"metadata": {},
"outputs": [
@@ -1153,33 +1141,33 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Chat model chunk: ' Here'\n",
"Chat model chunk: ' is'\n",
"Chat model chunk: ' the'\n",
"Chat model chunk: ' JSON'\n",
"Chat model chunk: ' with'\n",
"Chat model chunk: ' the'\n",
"Chat model chunk: ' requested'\n",
"Chat model chunk: ' countries'\n",
"Chat model chunk: ' and'\n",
"Chat model chunk: ' their'\n",
"Chat model chunk: ' populations'\n",
"Chat model chunk: ':'\n",
"Chat model chunk: '\\n\\n```'\n",
"Chat model chunk: 'json'\n",
"Chat model chunk: ''\n",
"Parser chunk: {}\n",
"Chat model chunk: '\\n{'\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: ' \"'\n",
"Chat model chunk: '{\"'\n",
"Chat model chunk: 'countries'\n",
"Chat model chunk: '\":'\n",
"Parser chunk: {'countries': []}\n",
"Chat model chunk: ' ['\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: ' [\\n'\n",
"Chat model chunk: ' '\n",
"Parser chunk: {'countries': [{}]}\n",
"Chat model chunk: ' {'\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: ' {\"'\n",
"Chat model chunk: 'name'\n",
"Chat model chunk: '\":'\n",
"Parser chunk: {'countries': [{'name': ''}]}\n",
"Chat model chunk: ' \"'\n",
"Parser chunk: {'countries': [{'name': 'France'}]}\n",
"Chat model chunk: 'France'\n",
"Chat model chunk: '\",'\n",
"Chat model chunk: ' \"'\n",
"Chat model chunk: 'population'\n",
"Chat model chunk: '\":'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': ''}]}\n",
"Chat model chunk: ' \"'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': '67'}]}\n",
"Chat model chunk: '67'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': '67 million'}]}\n",
"Chat model chunk: ' million'\n",
"Chat model chunk: '\"},\\n'\n",
"...\n"
]
}
@@ -1224,7 +1212,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 24,
"id": "1854206d-b3a5-4f91-9e00-bccbaebac61f",
"metadata": {},
"outputs": [
@@ -1232,9 +1220,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_tool_start', 'run_id': 'ae7690f8-ebc9-4886-9bbe-cb336ff274f2', 'name': 'bad_tool', 'tags': [], 'metadata': {}, 'data': {'input': 'hello'}}\n",
"{'event': 'on_tool_stream', 'run_id': 'ae7690f8-ebc9-4886-9bbe-cb336ff274f2', 'tags': [], 'metadata': {}, 'name': 'bad_tool', 'data': {'chunk': 'olleh'}}\n",
"{'event': 'on_tool_end', 'name': 'bad_tool', 'run_id': 'ae7690f8-ebc9-4886-9bbe-cb336ff274f2', 'tags': [], 'metadata': {}, 'data': {'output': 'olleh'}}\n"
"{'event': 'on_tool_start', 'run_id': '39e4a7eb-c13d-46f0-99e7-75c2fa4aa6a6', 'name': 'bad_tool', 'tags': [], 'metadata': {}, 'data': {'input': 'hello'}}\n",
"{'event': 'on_tool_stream', 'run_id': '39e4a7eb-c13d-46f0-99e7-75c2fa4aa6a6', 'tags': [], 'metadata': {}, 'name': 'bad_tool', 'data': {'chunk': 'olleh'}}\n",
"{'event': 'on_tool_end', 'name': 'bad_tool', 'run_id': '39e4a7eb-c13d-46f0-99e7-75c2fa4aa6a6', 'tags': [], 'metadata': {}, 'data': {'output': 'olleh'}}\n"
]
}
],
@@ -1270,7 +1258,7 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 25,
"id": "a20a6cb3-bb43-465c-8cfc-0a7349d70968",
"metadata": {},
"outputs": [
@@ -1278,11 +1266,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_tool_start', 'run_id': '384f1710-612e-4022-a6d4-8a7bb0cc757e', 'name': 'correct_tool', 'tags': [], 'metadata': {}, 'data': {'input': 'hello'}}\n",
"{'event': 'on_chain_start', 'name': 'reverse_word', 'run_id': 'c4882303-8867-4dff-b031-7d9499b39dda', 'tags': [], 'metadata': {}, 'data': {'input': 'hello'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_word', 'run_id': 'c4882303-8867-4dff-b031-7d9499b39dda', 'tags': [], 'metadata': {}, 'data': {'input': 'hello', 'output': 'olleh'}}\n",
"{'event': 'on_tool_stream', 'run_id': '384f1710-612e-4022-a6d4-8a7bb0cc757e', 'tags': [], 'metadata': {}, 'name': 'correct_tool', 'data': {'chunk': 'olleh'}}\n",
"{'event': 'on_tool_end', 'name': 'correct_tool', 'run_id': '384f1710-612e-4022-a6d4-8a7bb0cc757e', 'tags': [], 'metadata': {}, 'data': {'output': 'olleh'}}\n"
"{'event': 'on_tool_start', 'run_id': '4263aca5-f221-4eb7-b07e-60a89fb76c5c', 'name': 'correct_tool', 'tags': [], 'metadata': {}, 'data': {'input': 'hello'}}\n",
"{'event': 'on_chain_start', 'name': 'reverse_word', 'run_id': '65e3679b-e238-47ce-a875-ee74480e696e', 'tags': [], 'metadata': {}, 'data': {'input': 'hello'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_word', 'run_id': '65e3679b-e238-47ce-a875-ee74480e696e', 'tags': [], 'metadata': {}, 'data': {'input': 'hello', 'output': 'olleh'}}\n",
"{'event': 'on_tool_stream', 'run_id': '4263aca5-f221-4eb7-b07e-60a89fb76c5c', 'tags': [], 'metadata': {}, 'name': 'correct_tool', 'data': {'chunk': 'olleh'}}\n",
"{'event': 'on_tool_end', 'name': 'correct_tool', 'run_id': '4263aca5-f221-4eb7-b07e-60a89fb76c5c', 'tags': [], 'metadata': {}, 'data': {'output': 'olleh'}}\n"
]
}
],
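The fix that separates `bad_tool` from `correct_tool` is explicit callback propagation. A sketch of the shape of the fix, assuming `langchain-core`'s `@tool` decorator and a `RunnableLambda` child as in the cells above:

```python
from langchain_core.runnables import RunnableLambda
from langchain_core.tools import tool

reverse_word = RunnableLambda(lambda word: word[::-1])


@tool
def correct_tool(word: str, callbacks):
    """A tool that correctly propagates callbacks."""
    # Forwarding the callbacks is what makes the inner reverse_word run
    # show up as the on_chain_start / on_chain_end events above.
    return reverse_word.invoke(word, {"callbacks": callbacks})
```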
@@ -1307,7 +1295,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 26,
"id": "0ac0a3c1-f3a4-4157-b053-4fec8d2e698c",
"metadata": {},
"outputs": [
@@ -1315,11 +1303,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'run_id': '4fe56c7b-6982-4999-a42d-79ba56151176', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_start', 'name': 'reverse_word', 'run_id': '335fe781-8944-4464-8d2e-81f61d1f85f5', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_word', 'run_id': '335fe781-8944-4464-8d2e-81f61d1f85f5', 'tags': [], 'metadata': {}, 'data': {'input': '1234', 'output': '4321'}}\n",
"{'event': 'on_chain_stream', 'run_id': '4fe56c7b-6982-4999-a42d-79ba56151176', 'tags': [], 'metadata': {}, 'name': 'reverse_and_double', 'data': {'chunk': '43214321'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_and_double', 'run_id': '4fe56c7b-6982-4999-a42d-79ba56151176', 'tags': [], 'metadata': {}, 'data': {'output': '43214321'}}\n"
"{'event': 'on_chain_start', 'run_id': '714d22d4-a3c3-45fc-b2f1-913aa7f0fc22', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_start', 'name': 'reverse_word', 'run_id': '35a6470c-db65-4fe1-8dff-4e3418601d2f', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_word', 'run_id': '35a6470c-db65-4fe1-8dff-4e3418601d2f', 'tags': [], 'metadata': {}, 'data': {'input': '1234', 'output': '4321'}}\n",
"{'event': 'on_chain_stream', 'run_id': '714d22d4-a3c3-45fc-b2f1-913aa7f0fc22', 'tags': [], 'metadata': {}, 'name': 'reverse_and_double', 'data': {'chunk': '43214321'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_and_double', 'run_id': '714d22d4-a3c3-45fc-b2f1-913aa7f0fc22', 'tags': [], 'metadata': {}, 'data': {'output': '43214321'}}\n"
]
}
],
@@ -1349,7 +1337,7 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 27,
"id": "c896bb94-9d10-41ff-8fe2-d6b05b1ed74b",
"metadata": {},
"outputs": [
@@ -1357,11 +1345,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'run_id': '7485eedb-1854-429c-a2f8-03d01452daef', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_start', 'name': 'reverse_word', 'run_id': 'e7cddab2-9b95-4e80-abaf-4b2429117835', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_word', 'run_id': 'e7cddab2-9b95-4e80-abaf-4b2429117835', 'tags': [], 'metadata': {}, 'data': {'input': '1234', 'output': '4321'}}\n",
"{'event': 'on_chain_stream', 'run_id': '7485eedb-1854-429c-a2f8-03d01452daef', 'tags': [], 'metadata': {}, 'name': 'reverse_and_double', 'data': {'chunk': '43214321'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_and_double', 'run_id': '7485eedb-1854-429c-a2f8-03d01452daef', 'tags': [], 'metadata': {}, 'data': {'output': '43214321'}}\n"
"{'event': 'on_chain_start', 'run_id': '17c89289-9c71-406d-90de-86f76b5e798b', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_start', 'name': 'reverse_word', 'run_id': 'b1105188-9196-43c1-9603-4f2f58e51de4', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_word', 'run_id': 'b1105188-9196-43c1-9603-4f2f58e51de4', 'tags': [], 'metadata': {}, 'data': {'input': '1234', 'output': '4321'}}\n",
"{'event': 'on_chain_stream', 'run_id': '17c89289-9c71-406d-90de-86f76b5e798b', 'tags': [], 'metadata': {}, 'name': 'reverse_and_double', 'data': {'chunk': '43214321'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_and_double', 'run_id': '17c89289-9c71-406d-90de-86f76b5e798b', 'tags': [], 'metadata': {}, 'data': {'output': '43214321'}}\n"
]
}
],
@@ -1397,7 +1385,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
"version": "3.11.4"
}
},
"nbformat": 4,

@@ -93,3 +93,6 @@ Head to the reference section for full documentation of all classes and methods

### [Developer's guide](/docs/contributing)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.

### [Community](/docs/community)
Head to the [Community navigator](/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.

@@ -184,6 +184,7 @@ A Retriever can be backed by anything - a SQL table, the internet, etc - but in

First, we need to load the data that we want to index. In order to do this, we will use the WebBaseLoader. This requires installing [BeautifulSoup](https://beautiful-soup-4.readthedocs.io/en/latest/):

```
```shell
pip install beautifulsoup4
```

@@ -581,10 +582,7 @@ Using this, we can interact with the served chain as if it were running client-s
from langserve import RemoteRunnable

remote_chain = RemoteRunnable("http://localhost:8000/agent/")
remote_chain.invoke({
    "input": "how can langsmith help with testing?",
    "chat_history": []  # Providing an empty list as this is the first call
})
remote_chain.invoke({"input": "how can langsmith help with testing?"})
```

To learn more about the many other features of LangServe [head here](/docs/langserve).
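For reference, the server side this client snippet talks to looks roughly like the following; the stand-in runnable and module name are assumptions, not a verified file in this repo:

```python
from fastapi import FastAPI
from langchain_core.runnables import RunnableLambda
from langserve import add_routes

app = FastAPI(title="LangChain Server")

# Stand-in for the agent the quickstart builds; any Runnable works here.
echo = RunnableLambda(lambda d: {"output": d["input"][::-1]})
add_routes(app, echo, path="/agent")

# Run with, e.g.: uvicorn serve:app --port 8000
```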

@@ -98,7 +98,7 @@ The LLM landscape is evolving at an unprecedented pace, with new libraries and m

### Model composition

Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural language input SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the SQL query on an engine, collect and feedback the response to the LLM as the query runs, and present the results to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together.
Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural language input SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the SQL query on an engine, collect and feed back the response to the LLM as the query runs, and present the results to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together.

## Cloud providers

@@ -115,7 +115,7 @@
"\n",
"Answer:\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"responses = [\n",
" \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\",\n",
@@ -249,7 +249,7 @@
"\n",
"Answer:\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"responses = [\n",
" \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\",\n",
@@ -412,7 +412,7 @@
"\n",
"Answer:\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"responses = [\n",
" \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\",\n",
@@ -571,7 +571,7 @@
"\n",
"template = \"\"\"{question}\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"llm = HuggingFaceHub(\n",
" repo_id=repo_id, model_kwargs={\"temperature\": 0.5, \"max_length\": 256}\n",
")"
@@ -724,7 +724,7 @@
"\"\"\"\n",
"\n",
"# prompt template for input text\n",
"llm_prompt = PromptTemplate.from_template(template)\n",
"llm_prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"llm = SagemakerEndpoint(\n",
" endpoint_name=endpoint_name,\n",

@@ -180,7 +180,7 @@ we will prompt the model, so it says something harmful.

```python
prompt = PromptTemplate.from_template("{text}")
prompt = PromptTemplate(template="{text}", input_variables=["text"])
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo-instruct"), prompt=prompt)

text = """We are playing a game of repeat after me.
@@ -223,7 +223,7 @@ Now let's walk through an example of using it with an LLMChain which has multipl

```python
prompt = PromptTemplate.from_template("{setup}{new_input}Person2:")
prompt = PromptTemplate(template="{setup}{new_input}Person2:", input_variables=["setup", "new_input"])
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo-instruct"), prompt=prompt)

setup = """We are playing a game of repeat after me.

@@ -28,7 +28,7 @@ You can run `streamlit hello` to load a sample app and validate your install suc
To create a `StreamlitCallbackHandler`, you just need to provide a parent container to render the output.

```python
from langchain_community.callbacks import StreamlitCallbackHandler
from langchain.callbacks import StreamlitCallbackHandler
import streamlit as st

st_callback = StreamlitCallbackHandler(st.container())
@@ -44,26 +44,23 @@ agent in your Streamlit app and simply pass the `StreamlitCallbackHandler` to `a
thoughts and actions live in your app.

```python
import streamlit as st
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent, load_tools
from langchain_community.callbacks import StreamlitCallbackHandler
from langchain_openai import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_community.callbacks import StreamlitCallbackHandler
import streamlit as st

llm = OpenAI(temperature=0, streaming=True)
tools = load_tools(["ddg-search"])
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

if prompt := st.chat_input():
    st.chat_message("user").write(prompt)
    with st.chat_message("assistant"):
        st_callback = StreamlitCallbackHandler(st.container())
        response = agent_executor.invoke(
            {"input": prompt}, {"callbacks": [st_callback]}
        )
        st.write(response["output"])
        response = agent.run(prompt, callbacks=[st_callback])
        st.write(response)
```

**Note:** You will need to set `OPENAI_API_KEY` for the above app code to run successfully.

@@ -90,20 +90,16 @@
}
],
"source": [
"system = (\n",
"    \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
")\n",
"system = \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke(\n",
"    {\n",
"        \"input_language\": \"English\",\n",
"        \"output_language\": \"Korean\",\n",
"        \"text\": \"I love Python\",\n",
"    }\n",
")"
"chain.invoke({\n",
"    \"input_language\": \"English\",\n",
"    \"output_language\": \"Korean\",\n",
"    \"text\": \"I love Python\",\n",
"})"
]
},
{

@@ -15,7 +15,16 @@
"execution_count": 1,
"id": "378be79b",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.14) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n",
"  warnings.warn(\n"
]
}
],
"source": [
"from langchain_experimental.llms.anthropic_functions import AnthropicFunctions"
]
@@ -32,7 +41,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "e1d535f6",
"metadata": {},
"outputs": [],
@@ -93,7 +102,7 @@
"metadata": {},
"outputs": [],
"source": [
"response = model.invoke(\n",
"response = model.predict_messages(\n",
"    [HumanMessage(content=\"whats the weater in boston?\")], functions=functions\n",
")"
]
@@ -131,7 +140,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 8,
"id": "7af5c567",
"metadata": {},
"outputs": [],
@@ -153,7 +162,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 9,
"id": "bd01082a",
"metadata": {},
"outputs": [],
@@ -163,12 +172,24 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 10,
"id": "b5a23e9f",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Alex', 'height': '5', 'hair_color': 'blonde'},\n",
" {'name': 'Claudia', 'height': '6', 'hair_color': 'brunette'}]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(inp)"
"chain.run(inp)"
]
},
{
@@ -235,7 +256,7 @@
}
],
"source": [
"chain.invoke(\"this is really cool\")"
"chain.run(\"this is really cool\")"
]
}
],
@@ -255,7 +276,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.0"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -51,18 +51,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, you can set your API key with:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"BAICHUAN_API_KEY\"] = \"YOUR_API_KEY\""
"or you can set `api_key` in your environment variables\n",
"```bash\n",
"export BAICHUAN_API_KEY=YOUR_API_KEY\n",
"```"
]
},
{

@@ -320,51 +320,20 @@
"4. Message may be blocked if they violate the safety checks of the LLM. In this case, the model will return an empty response."
]
},
{
"cell_type": "markdown",
"id": "54793b9e",
"metadata": {},
"source": [
"### Safety Settings\n",
"\n",
"Gemini models have default safety settings that can be overridden. If you are receiving lots of \"Safety Warnings\" from your models, you can try tweaking the `safety_settings` attribute of the model. For example, to turn off safety blocking for dangerous content, you can construct your LLM as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75fdfad6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_genai import (\n",
"    ChatGoogleGenerativeAI,\n",
"    HarmBlockThreshold,\n",
"    HarmCategory,\n",
")\n",
"\n",
"llm = ChatGoogleGenerativeAI(\n",
"    model=\"gemini-pro\",\n",
"    safety_settings={\n",
"        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,\n",
"    },\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e68e203d",
"metadata": {},
"source": [
"For an enumeration of the categories and thresholds available, see Google's [safety setting types](https://ai.google.dev/api/python/google/generativeai/types/SafetySettingDict)."
]
"source": []
},
{
"cell_type": "markdown",
"id": "92b5aca5",
"metadata": {},
"source": [
"## Additional Configuration\n",
"## Additional Configuraation\n",
"\n",
"You can pass the following parameters to ChatGoogleGenerativeAI in order to customize the SDK's behavior:\n",
"\n",

@@ -424,7 +424,9 @@
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI(model_name=\"chat-bison\", max_output_tokens=1000, temperature=0.5)\n",
"chat = ChatVertexAI(\n",
"    model_name=\"chat-bison\", max_output_tokens=1000, temperature=0.5\n",
")\n",
"chain = prompt | chat\n",
"\n",
"asyncio.run(\n",

@@ -15,23 +15,39 @@
"source": [
"# ChatKonko\n",
"\n",
"# Konko\n",
"\n",
">[Konko](https://www.konko.ai/) API is a fully managed Web API designed to help application developers:\n",
"\n",
"Konko API is a fully managed API designed to help application developers:\n",
"\n",
"1. **Select** the right open source or proprietary LLMs for their application\n",
"2. **Build** applications faster with integrations to leading application frameworks and fully managed APIs\n",
"3. **Fine tune** smaller open-source LLMs to achieve industry-leading performance at a fraction of the cost\n",
"4. **Deploy production-scale APIs** that meet security, privacy, throughput, and latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant, multi-cloud infrastructure\n",
"1. Select the right LLM(s) for their application\n",
"2. Prototype with various open-source and proprietary LLMs\n",
"3. Access Fine Tuning for open-source LLMs to get industry-leading performance at a fraction of the cost\n",
"4. Setup low-cost production APIs according to security, privacy, throughput, latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant, multi-cloud infrastructure\n",
"\n",
"### Steps to Access Models\n",
"1. **Explore Available Models:** Start by browsing through the [available models](https://docs.konko.ai/docs/list-of-models) on Konko. Each model caters to different use cases and capabilities.\n",
"\n",
"2. **Identify Suitable Endpoints:** Determine which [endpoint](https://docs.konko.ai/docs/list-of-models#list-of-available-models) (ChatCompletion or Completion) supports your selected model.\n",
"\n",
"3. **Selecting a Model:** [Choose a model](https://docs.konko.ai/docs/list-of-models#list-of-available-models) based on its metadata and how well it fits your use case.\n",
"\n",
"4. **Prompting Guidelines:** Once a model is selected, refer to the [prompting guidelines](https://docs.konko.ai/docs/prompting) to effectively communicate with it.\n",
"\n",
"5. **Using the API:** Finally, use the appropriate Konko [API endpoint](https://docs.konko.ai/docs/quickstart-for-completion-and-chat-completion-endpoint) to call the model and receive responses.\n",
"\n",
"To run this notebook, you'll need Konko API key. You can create one by signing up on [Konko](https://www.konko.ai/).\n",
"\n",
"This example goes over how to use LangChain to interact with `Konko` ChatCompletion [models](https://docs.konko.ai/docs/list-of-models#konko-hosted-models-for-chatcompletion)\n",
"\n",
"To run this notebook, you'll need Konko API key. Sign in to our web app to [create an API key](https://platform.konko.ai/settings/api-keys) to access models\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To run this notebook, you'll need Konko API key. You can create one by signing up on [Konko](https://www.konko.ai/)."
]
},
{
"cell_type": "code",
"execution_count": 1,
@@ -48,7 +64,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Set Environment Variables\n",
"## 2. Set API Keys\n",
"\n",
"<br />\n",
"\n",
"### Option 1: Set Environment Variables\n",
"\n",
"1. You can set environment variables for \n",
" 1. KONKO_API_KEY (Required)\n",
@@ -58,7 +78,18 @@
"```shell\n",
"export KONKO_API_KEY={your_KONKO_API_KEY_here}\n",
"export OPENAI_API_KEY={your_OPENAI_API_KEY_here} #Optional\n",
"```"
"```\n",
"\n",
"Alternatively, you can add the above lines directly to your shell startup script (such as .bashrc or .bash_profile for Bash shell and .zshrc for Zsh shell) to have them set automatically every time a new shell session starts.\n",
"\n",
"### Option 2: Set API Keys Programmatically\n",
"\n",
"If you prefer to set your API keys directly within your Python script or Jupyter notebook, you can use the following commands:\n",
"\n",
"```python\n",
"konko.set_api_key('your_KONKO_API_KEY_here') \n",
"konko.set_openai_api_key('your_OPENAI_API_KEY_here') # Optional\n",
"```\n"
]
},
{
@@ -67,7 +98,7 @@
"source": [
"## Calling a model\n",
"\n",
"Find a model on the [Konko overview page](https://docs.konko.ai/docs/list-of-models)\n",
"Find a model on the [Konko overview page](https://docs.konko.ai/v0.5.0/docs/list-of-models)\n",
"\n",
"Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/get-models).\n",
"\n",
|
||||
|
||||
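Putting these steps together end to end, a minimal sketch, assuming the `ChatKonko` integration from `langchain_community` and a `KONKO_API_KEY` already set in the environment (the model id below is illustrative; pick one from the Konko model list):

```python
from langchain_community.chat_models import ChatKonko
from langchain_core.messages import HumanMessage

# Assumes KONKO_API_KEY is set in the environment (see "Set API Keys" above).
# The model id is illustrative; choose one from the Konko overview page.
chat = ChatKonko(model="meta-llama/llama-2-13b-chat", max_tokens=400)
resp = chat.invoke([HumanMessage(content="Explain Big O notation in one sentence.")])
print(resp.content)
```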
@@ -15,53 +15,16 @@
|
||||
"id": "bf733a38-db84-4363-89e2-de6735c37230",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# MistralAI\n",
|
||||
"# ChatMistralAI\n",
|
||||
"\n",
|
||||
"This notebook covers how to get started with MistralAI chat models, via their [API](https://docs.mistral.ai/api/).\n",
|
||||
"\n",
|
||||
"A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API.\n",
|
||||
"\n",
|
||||
"Head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html) for detailed documentation of all attributes and methods."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "cc686b8f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"You will need the `langchain-core` and `langchain-mistralai` package to use the API. You can install these with:\n",
|
||||
"\n",
|
||||
"```bash\n",
|
||||
"pip install -U langchain-core langchain-mistralai\n",
|
||||
"\n",
|
||||
"We'll also need to get a [Mistral API key](https://console.mistral.ai/users/api-keys/)"
|
||||
"A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "c3fd4184",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"\n",
|
||||
"mistral_api_key = getpass.getpass()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "502127fd",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Usage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"execution_count": 1,
|
||||
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@@ -74,20 +37,23 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"execution_count": 2,
|
||||
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"mistral_api_key = os.environ.get(\"MISTRAL_API_KEY\")\n",
|
||||
"# If mistral_api_key is not passed, default behavior is to use the `MISTRAL_API_KEY` environment variable.\n",
|
||||
"chat = ChatMistralAI(mistral_api_key=mistral_api_key)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"execution_count": 3,
|
||||
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@@ -96,16 +62,16 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=\"Who's there? I was just about to ask the same thing! How can I assist you today?\")"
|
||||
"AIMessage(content=\"Hello! I'm here to assist you. How can I help you today? If you have any questions or need information on a particular topic, feel free to ask. I'm ready to provide accurate and helpful answers to the best of my ability.\")"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"messages = [HumanMessage(content=\"knock knock\")]\n",
|
||||
"messages = [HumanMessage(content=\"say a brief hello\")]\n",
|
||||
"chat.invoke(messages)"
|
||||
]
|
||||
},
|
||||
@@ -114,12 +80,12 @@
|
||||
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Async"
|
||||
"## `ChatMistralAI` also supports async and streaming functionality:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": 4,
|
||||
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@@ -128,10 +94,10 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='Who\\'s there?\\n\\n(You can then continue the \"knock knock\" joke by saying the name of the person or character who should be responding. For example, if I say \"Banana,\" you could respond with \"Banana who?\" and I would say \"Banana bunch! Get it? Because a group of bananas is called a \\'bunch\\'!\" and then we would both laugh and have a great time. But really, you can put anything you want in the spot where I put \"Banana\" and it will still technically be a \"knock knock\" joke. The possibilities are endless!)')"
|
||||
"AIMessage(content=\"Hello! I'm glad you're here. If you have any questions or need assistance with something related to programming or software development, feel free to ask. I'll do my best to help you out. Have a great day!\")"
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -140,17 +106,9 @@
|
||||
"await chat.ainvoke(messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "86ccef97",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Streaming\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"execution_count": 5,
|
||||
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@@ -160,27 +118,7 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Who's there?\n",
|
||||
"\n",
|
||||
"(After this, the conversation can continue as a call and response \"who's there\" joke. Here is an example of how it could go:\n",
|
||||
"\n",
|
||||
"You say: Orange.\n",
|
||||
"I say: Orange who?\n",
|
||||
"You say: Orange you glad I didn't say banana!?)\n",
|
||||
"\n",
|
||||
"But since you asked for a knock knock joke specifically, here's one for you:\n",
|
||||
"\n",
|
||||
"Knock knock.\n",
|
||||
"\n",
|
||||
"Me: Who's there?\n",
|
||||
"\n",
|
||||
"You: Lettuce.\n",
|
||||
"\n",
|
||||
"Me: Lettuce who?\n",
|
||||
"\n",
|
||||
"You: Lettuce in, it's too cold out here!\n",
|
||||
"\n",
|
||||
"I hope this brings a smile to your face! Do you have a favorite knock knock joke you'd like to share? I'd love to hear it."
|
||||
"Hello! I'm happy to assist you. Is there a specific question or topic you would like to discuss? I can provide information and answer questions on a wide variety of subjects."
|
||||
]
|
||||
}
|
||||
],
|
||||
@@ -188,79 +126,6 @@
|
||||
"for chunk in chat.stream(messages):\n",
|
||||
" print(chunk.content, end=\"\")"
|
||||
]
|
||||
},
|
||||
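Streaming also has an async counterpart via `astream`; a minimal sketch reusing the `chat` and `messages` objects defined above:

```python
# Async streaming: same chat model and messages as above.
async for chunk in chat.astream(messages):
    print(chunk.content, end="", flush=True)
```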
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f6189577",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Batch"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "e63aebcb",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[AIMessage(content=\"Who's there? I was just about to ask the same thing! Go ahead and tell me who's there. I love a good knock-knock joke.\")]"
|
||||
]
|
||||
},
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chat.batch([messages])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "38e39e71",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Chaining\n",
|
||||
"\n",
|
||||
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/expression_language)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "ee43a1ae",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||||
"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
|
||||
"chain = prompt | chat"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "0dc49212",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='Why do bears hate shoes so much? They like to run around in their bear feet.')"
|
||||
]
|
||||
},
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"topic\": \"bears\"})"
|
||||
]
|
||||
}
|
||||
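If a plain string is preferred over an `AIMessage`, an output parser can be appended to the chain; a minimal sketch using `StrOutputParser` from `langchain_core`:

```python
from langchain_core.output_parsers import StrOutputParser

# prompt and chat as defined above; the parser extracts the message text.
chain = prompt | chat | StrOutputParser()
print(chain.invoke({"topic": "bears"}))  # prints a plain str
```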
],
|
||||
"metadata": {
|
||||
@@ -279,7 +144,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.12"
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
File diff suppressed because one or more lines are too long
@@ -1,463 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: YUAN2\n",
|
||||
"---"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"pycharm": {
|
||||
"name": "#%% raw\n"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"name": "#%% md\n"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"# YUAN2.0\n",
|
||||
"\n",
|
||||
"This notebook shows how to use [YUAN2 API](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/inference_server.md) in LangChain with the langchain.chat_models.ChatYuan2.\n",
|
||||
"\n",
|
||||
"[*Yuan2.0*](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/README-EN.md) is a new generation Fundamental Large Language Model developed by IEIT System. We have published all three models, Yuan 2.0-102B, Yuan 2.0-51B, and Yuan 2.0-2B. And we provide relevant scripts for pretraining, fine-tuning, and inference services for other developers. Yuan2.0 is based on Yuan1.0, utilizing a wider range of high-quality pre training data and instruction fine-tuning datasets to enhance the model's understanding of semantics, mathematics, reasoning, code, knowledge, and other aspects."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
},
|
||||
"pycharm": {
|
||||
"name": "#%% md\n"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"## Getting started\n",
|
||||
"### Installation\n",
|
||||
"First, Yuan2.0 provided an OpenAI compatible API, and we integrate ChatYuan2 into langchain chat model by using OpenAI client.\n",
|
||||
"Therefore, ensure the openai package is installed in your Python environment. Run the following command:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"name": "#%%\n"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet openai"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"name": "#%% md\n"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Importing the Required Modules\n",
|
||||
"After installation, import the necessary modules to your Python script:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"is_executing": true,
|
||||
"name": "#%%\n"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.chat_models import ChatYuan2\n",
|
||||
"from langchain_core.messages import AIMessage, HumanMessage, SystemMessage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"name": "#%% md\n"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Setting Up Your API server\n",
|
||||
"Setting up your OpenAI compatible API server following [yuan2 openai api server](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/README-EN.md).\n",
|
||||
"If you deployed api server locally, you can simply set `api_key=\"EMPTY\"` or anything you want.\n",
|
||||
"Just make sure, the `api_base` is set correctly."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"name": "#%%\n"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"yuan2_api_key = \"your_api_key\"\n",
|
||||
"yuan2_api_base = \"http://127.0.0.1:8001/v1\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"name": "#%% md\n"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Initialize the ChatYuan2 Model\n",
|
||||
"Here's how to initialize the chat model:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"is_executing": true,
|
||||
"name": "#%%\n"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatYuan2(\n",
|
||||
" yuan2_api_base=\"http://127.0.0.1:8001/v1\",\n",
|
||||
" temperature=1.0,\n",
|
||||
" model_name=\"yuan2\",\n",
|
||||
" max_retries=3,\n",
|
||||
" streaming=False,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"name": "#%% md\n"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Basic Usage\n",
|
||||
"Invoke the model with system and human messages like this:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"name": "#%%\n"
|
||||
},
|
||||
"scrolled": true
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" SystemMessage(content=\"你是一个人工智能助手。\"),\n",
|
||||
" HumanMessage(content=\"你好,你是谁?\"),\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"is_executing": true,
|
||||
"name": "#%%\n"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(chat(messages))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"name": "#%% md\n"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Basic Usage with streaming\n",
|
||||
"For continuous interaction, use the streaming feature:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
},
|
||||
"pycharm": {
|
||||
"name": "#%%\n"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
|
||||
"\n",
|
||||
"chat = ChatYuan2(\n",
|
||||
" yuan2_api_base=\"http://127.0.0.1:8001/v1\",\n",
|
||||
" temperature=1.0,\n",
|
||||
" model_name=\"yuan2\",\n",
|
||||
" max_retries=3,\n",
|
||||
" streaming=True,\n",
|
||||
" callbacks=[StreamingStdOutCallbackHandler()],\n",
|
||||
")\n",
|
||||
"messages = [\n",
|
||||
" SystemMessage(content=\"你是个旅游小助手。\"),\n",
|
||||
" HumanMessage(content=\"给我介绍一下北京有哪些好玩的。\"),\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
},
|
||||
"pycharm": {
|
||||
"is_executing": true,
|
||||
"name": "#%%\n"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat(messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
},
|
||||
"pycharm": {
|
||||
"name": "#%% md\n"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"## Advanced Features\n",
|
||||
"### Usage with async calls\n",
|
||||
"\n",
|
||||
"Invoke the model with non-blocking calls, like this:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
},
|
||||
"pycharm": {
|
||||
"name": "#%%\n"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"async def basic_agenerate():\n",
|
||||
" chat = ChatYuan2(\n",
|
||||
" yuan2_api_base=\"http://127.0.0.1:8001/v1\",\n",
|
||||
" temperature=1.0,\n",
|
||||
" model_name=\"yuan2\",\n",
|
||||
" max_retries=3,\n",
|
||||
" )\n",
|
||||
" messages = [\n",
|
||||
" [\n",
|
||||
" SystemMessage(content=\"你是个旅游小助手。\"),\n",
|
||||
" HumanMessage(content=\"给我介绍一下北京有哪些好玩的。\"),\n",
|
||||
" ]\n",
|
||||
" ]\n",
|
||||
"\n",
|
||||
" result = await chat.agenerate(messages)\n",
|
||||
" print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
},
|
||||
"pycharm": {
|
||||
"is_executing": true,
|
||||
"name": "#%%\n"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import asyncio\n",
|
||||
"\n",
|
||||
"asyncio.run(basic_agenerate())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
},
|
||||
"pycharm": {
|
||||
"name": "#%% md\n"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Usage with prompt template\n",
|
||||
"\n",
|
||||
"Invoke the model with non-blocking calls and used chat template like this:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"name": "#%%\n"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"async def ainvoke_with_prompt_template():\n",
|
||||
" from langchain.prompts.chat import (\n",
|
||||
" ChatPromptTemplate,\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
" chat = ChatYuan2(\n",
|
||||
" yuan2_api_base=\"http://127.0.0.1:8001/v1\",\n",
|
||||
" temperature=1.0,\n",
|
||||
" model_name=\"yuan2\",\n",
|
||||
" max_retries=3,\n",
|
||||
" )\n",
|
||||
" prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [\n",
|
||||
" (\"system\", \"你是一个诗人,擅长写诗。\"),\n",
|
||||
" (\"human\", \"给我写首诗,主题是{theme}。\"),\n",
|
||||
" ]\n",
|
||||
" )\n",
|
||||
" chain = prompt | chat\n",
|
||||
" result = await chain.ainvoke({\"theme\": \"明月\"})\n",
|
||||
" print(f\"type(result): {type(result)}; {result}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"is_executing": true,
|
||||
"name": "#%%\n"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"asyncio.run(ainvoke_with_prompt_template())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"name": "#%% md\n"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Usage with async calls in streaming\n",
|
||||
"For non-blocking calls with streaming output, use the astream method:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"name": "#%%\n"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"async def basic_astream():\n",
|
||||
" chat = ChatYuan2(\n",
|
||||
" yuan2_api_base=\"http://127.0.0.1:8001/v1\",\n",
|
||||
" temperature=1.0,\n",
|
||||
" model_name=\"yuan2\",\n",
|
||||
" max_retries=3,\n",
|
||||
" )\n",
|
||||
" messages = [\n",
|
||||
" SystemMessage(content=\"你是个旅游小助手。\"),\n",
|
||||
" HumanMessage(content=\"给我介绍一下北京有哪些好玩的。\"),\n",
|
||||
" ]\n",
|
||||
" result = chat.astream(messages)\n",
|
||||
" async for chunk in result:\n",
|
||||
" print(chunk.content, end=\"\", flush=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"is_executing": true,
|
||||
"name": "#%%\n"
|
||||
},
|
||||
"scrolled": true
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import asyncio\n",
|
||||
"\n",
|
||||
"asyncio.run(basic_astream())"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.5"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
@@ -1,110 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "MwTWzDxYgbrR"
|
||||
},
|
||||
"source": [
|
||||
"# Athena\n",
|
||||
"\n",
|
||||
"This notebooks goes over how to load documents from AWS Athena"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"id": "F0zaLR3xgWmO"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"! pip install boto3"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"id": "076NLjfngoWJ"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.document_loaders.athena import AthenaLoader"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"id": "XpMRQwU9gu44"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"database_name = \"my_database\"\n",
|
||||
"s3_output_path = \"s3://my_bucket/query_results/\"\n",
|
||||
"query = \"SELECT * FROM my_table\"\n",
|
||||
"profile_name = \"my_profile\"\n",
|
||||
"\n",
|
||||
"loader = AthenaLoader(\n",
|
||||
" query=query,\n",
|
||||
" database=database_name,\n",
|
||||
" s3_output_uri=s3_output_path,\n",
|
||||
" profile_name=profile_name,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"documents = loader.load()\n",
|
||||
"print(documents)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "5IBapL3ejoEt"
|
||||
},
|
||||
"source": [
|
||||
"Example with metadata columns"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"id": "wMx6nI1qjryD"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"database_name = \"my_database\"\n",
|
||||
"s3_output_path = \"s3://my_bucket/query_results/\"\n",
|
||||
"query = \"SELECT * FROM my_table\"\n",
|
||||
"profile_name = \"my_profile\"\n",
|
||||
"metadata_columns = [\"_row\", \"_created_at\"]\n",
|
||||
"\n",
|
||||
"loader = AthenaLoader(\n",
|
||||
" query=query,\n",
|
||||
" database=database_name,\n",
|
||||
" s3_output_uri=s3_output_path,\n",
|
||||
" profile_name=profile_name,\n",
|
||||
" metadata_columns=metadata_columns,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"documents = loader.load()\n",
|
||||
"print(documents)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"provenance": []
|
||||
},
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"name": "python"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 0
|
||||
}
|
||||
@@ -3,7 +3,7 @@ class MyClass:
|
||||
self.name = name
|
||||
|
||||
def greet(self):
|
||||
print(f"Hello, {self.name}!") # noqa: T201
|
||||
print(f"Hello, {self.name}!")
|
||||
|
||||
|
||||
def main():
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
"source": [
|
||||
"# GitHub\n",
|
||||
"\n",
|
||||
"This notebooks shows how you can load issues and pull requests (PRs) for a given repository on [GitHub](https://github.com/). Also shows how you can load github files for a given repository on [GitHub](https://github.com/). We will use the LangChain Python repository as an example."
|
||||
"This notebooks shows how you can load issues and pull requests (PRs) for a given repository on [GitHub](https://github.com/). We will use the LangChain Python repository as an example."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -46,7 +46,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 10,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@@ -57,7 +57,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -91,7 +91,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -100,9 +100,27 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"# Creates GitHubLoader (#5257)\r\n",
|
||||
"\r\n",
|
||||
"GitHubLoader is a DocumentLoader that loads issues and PRs from GitHub.\r\n",
|
||||
"\r\n",
|
||||
"Fixes #5257\r\n",
|
||||
"\r\n",
|
||||
"Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:\r\n",
|
||||
"DataLoaders\r\n",
|
||||
"- @eyurtsev\r\n",
|
||||
"\n",
|
||||
"{'url': 'https://github.com/langchain-ai/langchain/pull/5408', 'title': 'DocumentLoader for GitHub', 'creator': 'UmerHA', 'created_at': '2023-05-29T14:50:53Z', 'comments': 0, 'state': 'open', 'labels': ['enhancement', 'lgtm', 'doc loader'], 'assignee': None, 'milestone': None, 'locked': False, 'number': 5408, 'is_pull_request': True}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(docs[0].page_content)\n",
|
||||
"print(docs[0].metadata)"
|
||||
@@ -124,7 +142,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -139,68 +157,84 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"### System Info\n",
|
||||
"\n",
|
||||
"LangChain version = 0.0.167\r\n",
|
||||
"Python version = 3.11.0\r\n",
|
||||
"System = Windows 11 (using Jupyter)\n",
|
||||
"\n",
|
||||
"### Who can help?\n",
|
||||
"\n",
|
||||
"- @hwchase17\r\n",
|
||||
"- @agola11\r\n",
|
||||
"- @UmerHA (I have a fix ready, will submit a PR)\n",
|
||||
"\n",
|
||||
"### Information\n",
|
||||
"\n",
|
||||
"- [ ] The official example notebooks/scripts\n",
|
||||
"- [X] My own modified scripts\n",
|
||||
"\n",
|
||||
"### Related Components\n",
|
||||
"\n",
|
||||
"- [X] LLMs/Chat Models\n",
|
||||
"- [ ] Embedding Models\n",
|
||||
"- [X] Prompts / Prompt Templates / Prompt Selectors\n",
|
||||
"- [ ] Output Parsers\n",
|
||||
"- [ ] Document Loaders\n",
|
||||
"- [ ] Vector Stores / Retrievers\n",
|
||||
"- [ ] Memory\n",
|
||||
"- [ ] Agents / Agent Executors\n",
|
||||
"- [ ] Tools / Toolkits\n",
|
||||
"- [ ] Chains\n",
|
||||
"- [ ] Callbacks/Tracing\n",
|
||||
"- [ ] Async\n",
|
||||
"\n",
|
||||
"### Reproduction\n",
|
||||
"\n",
|
||||
"```\r\n",
|
||||
"import os\r\n",
|
||||
"os.environ[\"OPENAI_API_KEY\"] = \"...\"\r\n",
|
||||
"\r\n",
|
||||
"from langchain.chains import LLMChain\r\n",
|
||||
"from langchain_openai import ChatOpenAI\r\n",
|
||||
"from langchain.prompts import PromptTemplate\r\n",
|
||||
"from langchain.prompts.chat import ChatPromptTemplate\r\n",
|
||||
"from langchain.schema import messages_from_dict\r\n",
|
||||
"\r\n",
|
||||
"role_strings = [\r\n",
|
||||
" (\"system\", \"you are a bird expert\"), \r\n",
|
||||
" (\"human\", \"which bird has a point beak?\")\r\n",
|
||||
"]\r\n",
|
||||
"prompt = ChatPromptTemplate.from_role_strings(role_strings)\r\n",
|
||||
"chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)\r\n",
|
||||
"chain.run({})\r\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"### Expected behavior\n",
|
||||
"\n",
|
||||
"Chain should run\n",
|
||||
"{'url': 'https://github.com/langchain-ai/langchain/issues/5027', 'title': \"ChatOpenAI models don't work with prompts created via ChatPromptTemplate.from_role_strings\", 'creator': 'UmerHA', 'created_at': '2023-05-20T10:39:18Z', 'comments': 1, 'state': 'open', 'labels': [], 'assignee': None, 'milestone': None, 'locked': False, 'number': 5027, 'is_pull_request': False}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(docs[0].page_content)\n",
|
||||
"print(docs[0].metadata)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Load Github File Content\n",
|
||||
"\n",
|
||||
"For below code, loads all markdown file in rpeo `langchain-ai/langchain`"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.document_loaders import GithubFileLoader"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"loader = GithubFileLoader(\n",
|
||||
" repo=\"langchain-ai/langchain\", # the repo name\n",
|
||||
" access_token=ACCESS_TOKEN,\n",
|
||||
" github_api_url=\"https://api.github.com\",\n",
|
||||
" file_filter=lambda file_path: file_path.endswith(\n",
|
||||
" \".md\"\n",
|
||||
" ), # load all markdowns files.\n",
|
||||
")\n",
|
||||
"documents = loader.load()"
|
||||
]
|
||||
},
|
||||
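The `file_filter` callable receives the repository-relative path, so it can be narrowed further; a minimal sketch, using the same assumed `GithubFileLoader` parameters as above, that only loads markdown files under `docs/`:

```python
loader = GithubFileLoader(
    repo="langchain-ai/langchain",
    access_token=ACCESS_TOKEN,
    github_api_url="https://api.github.com",
    # Only markdown files under docs/, to cut down on API calls.
    file_filter=lambda file_path: file_path.startswith("docs/")
    and file_path.endswith(".md"),
)
docs = loader.load()
```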
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"example output of one of document: \n",
|
||||
"\n",
|
||||
"```json\n",
|
||||
"documents.metadata: \n",
|
||||
" {\n",
|
||||
" \"path\": \"README.md\",\n",
|
||||
" \"sha\": \"82f1c4ea88ecf8d2dfsfx06a700e84be4\",\n",
|
||||
" \"source\": \"https://github.com/langchain-ai/langchain/blob/master/README.md\"\n",
|
||||
" }\n",
|
||||
"documents.content:\n",
|
||||
" mock content\n",
|
||||
"```"
|
||||
]
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
@@ -219,7 +253,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
"version": "3.11.3"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -1,88 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Pebblo Safe DocumentLoader\n",
|
||||
"\n",
|
||||
"> [Pebblo](https://github.com/daxa-ai/pebblo) enables developers to safely load data and promote their Gen AI app to deployment without worrying about the organization’s compliance and security requirements. The project identifies semantic topics and entities found in the loaded data and summarizes them on the UI or a PDF report.\n",
|
||||
"\n",
|
||||
"Pebblo has two components.\n",
|
||||
"\n",
|
||||
"1. Pebblo Safe DocumentLoader for Langchain\n",
|
||||
"1. Pebblo Daemon\n",
|
||||
"\n",
|
||||
"This document describes how to augment your existing Langchain DocumentLoader with Pebblo Safe DocumentLoader to get deep data visibility on the types of Topics and Entities ingested into the Gen-AI Langchain application. For details on `Pebblo Daemon` see this [pebblo daemon](https://daxa-ai.github.io/pebblo-docs/daemon.html) document.\n",
|
||||
"\n",
|
||||
"Pebblo Safeloader enables safe data ingestion for Langchain `DocumentLoader`. This is done by wrapping the document loader call with `Pebblo Safe DocumentLoader`.\n",
|
||||
"\n",
|
||||
"#### How to Pebblo enable Document Loading?\n",
|
||||
"\n",
|
||||
"Assume a Langchain RAG application snippet using `CSVLoader` to read a CSV document for inference.\n",
|
||||
"\n",
|
||||
"Here is the snippet of Document loading using `CSVLoader`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.document_loaders.csv_loader import CSVLoader\n",
|
||||
"\n",
|
||||
"loader = CSVLoader(\"data/corp_sens_data.csv\")\n",
|
||||
"documents = loader.load()\n",
|
||||
"print(documents)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The Pebblo SafeLoader can be enabled with few lines of code change to the above snippet."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.document_loaders.csv_loader import CSVLoader\n",
|
||||
"from langchain_community.document_loaders import PebbloSafeLoader\n",
|
||||
"\n",
|
||||
"loader = PebbloSafeLoader(\n",
|
||||
" CSVLoader(\"data/corp_sens_data.csv\"),\n",
|
||||
" name=\"acme-corp-rag-1\", # App name (Mandatory)\n",
|
||||
" owner=\"Joe Smith\", # Owner (Optional)\n",
|
||||
" description=\"Support productivity RAG application\", # Description (Optional)\n",
|
||||
")\n",
|
||||
"documents = loader.load()\n",
|
||||
"print(documents)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": ".venv",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.13"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -13,16 +13,27 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Requirement already satisfied: nest_asyncio in /Users/tasp/Code/projects/langchain/.venv/lib/python3.10/site-packages (1.5.6)\n",
|
||||
"\n",
|
||||
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip available: \u001b[0m\u001b[31;49m22.3.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.0.1\u001b[0m\n",
|
||||
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet nest_asyncio"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -43,11 +54,11 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 21,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"sitemap_loader = SitemapLoader(web_path=\"https://api.python.langchain.com/sitemap.xml\")\n",
|
||||
"sitemap_loader = SitemapLoader(web_path=\"https://langchain.readthedocs.io/sitemap.xml\")\n",
|
||||
"\n",
|
||||
"docs = sitemap_loader.load()"
|
||||
]
|
||||
@@ -79,7 +90,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Document(page_content='\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLangChain Python API Reference Documentation.\\n\\n\\nYou will be automatically redirected to the new location of this page.\\n\\n', metadata={'source': 'https://api.python.langchain.com/en/stable/', 'loc': 'https://api.python.langchain.com/en/stable/', 'lastmod': '2024-02-09T01:10:49.422114+00:00', 'changefreq': 'weekly', 'priority': '1'})"
|
||||
"Document(page_content='\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLangChain Python API Reference Documentation.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nYou will be automatically redirected to the new location of this page.\\n\\n', metadata={'source': 'https://api.python.langchain.com/en/stable/', 'loc': 'https://api.python.langchain.com/en/stable/', 'lastmod': '2023-10-13T18:13:26.966937+00:00', 'changefreq': 'weekly', 'priority': '1'})"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
@@ -102,12 +113,20 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 27,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Fetching pages: 100%|##########| 1/1 [00:00<00:00, 16.39it/s]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"loader = SitemapLoader(\n",
|
||||
" web_path=\" https://api.python.langchain.com/sitemap.xml\",\n",
|
||||
" web_path=\"https://langchain.readthedocs.io/sitemap.xml\",\n",
|
||||
" filter_urls=[\"https://api.python.langchain.com/en/latest\"],\n",
|
||||
")\n",
|
||||
"documents = loader.load()"
|
||||
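As far as I can tell, entries in `filter_urls` are treated as regular expressions rather than plain prefixes, so literal URLs containing dots may match more broadly than intended. A minimal sketch that escapes the pattern:

```python
import re

# Escape the prefix so the dots in the URL are matched literally.
loader = SitemapLoader(
    web_path="https://api.python.langchain.com/sitemap.xml",
    filter_urls=[re.escape("https://api.python.langchain.com/en/latest")],
)
```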
@@ -115,7 +134,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"execution_count": 28,
|
||||
"metadata": {
|
||||
"scrolled": true
|
||||
},
|
||||
@@ -123,10 +142,10 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Document(page_content='\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLangChain Python API Reference Documentation.\\n\\n\\nYou will be automatically redirected to the new location of this page.\\n\\n', metadata={'source': 'https://api.python.langchain.com/en/latest/', 'loc': 'https://api.python.langchain.com/en/latest/', 'lastmod': '2024-02-12T05:26:10.971077+00:00', 'changefreq': 'daily', 'priority': '0.9'})"
|
||||
"Document(page_content='\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLangChain Python API Reference Documentation.\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nYou will be automatically redirected to the new location of this page.\\n\\n', metadata={'source': 'https://api.python.langchain.com/en/latest/', 'loc': 'https://api.python.langchain.com/en/latest/', 'lastmod': '2023-10-13T18:09:58.478681+00:00', 'changefreq': 'daily', 'priority': '0.9'})"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"execution_count": 28,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -164,7 +183,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": 30,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -192,12 +211,12 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"execution_count": 31,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"loader = SitemapLoader(\n",
|
||||
" \"https://api.python.langchain.com/sitemap.xml\",\n",
|
||||
" \"https://langchain.readthedocs.io/sitemap.xml\",\n",
|
||||
" filter_urls=[\"https://api.python.langchain.com/en/latest/\"],\n",
|
||||
" parsing_function=remove_nav_and_header_elements,\n",
|
||||
")"
|
||||
@@ -214,9 +233,17 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 32,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Fetching pages: 100%|##########| 3/3 [00:00<00:00, 12.46it/s]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"sitemap_loader = SitemapLoader(web_path=\"example_data/sitemap.xml\", is_local=True)\n",
|
||||
"\n",
|
||||
|
||||
@@ -9,35 +9,7 @@
|
||||
"\n",
|
||||
"This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining code top-level code outside the already loaded functions and classes will be loaded into a separate document.\n",
|
||||
"\n",
|
||||
"This approach can potentially improve the accuracy of QA models over source code.\n",
|
||||
"\n",
|
||||
"The supported languages for code parsing are:\n",
|
||||
"\n",
|
||||
"- C (*)\n",
|
||||
"- C++ (*)\n",
|
||||
"- C# (*)\n",
|
||||
"- COBOL\n",
|
||||
"- Go (*)\n",
|
||||
"- Java (*)\n",
|
||||
"- JavaScript (requires package `esprima`)\n",
|
||||
"- Kotlin (*)\n",
|
||||
"- Lua (*)\n",
|
||||
"- Perl (*)\n",
|
||||
"- Python\n",
|
||||
"- Ruby (*)\n",
|
||||
"- Rust (*)\n",
|
||||
"- Scala (*)\n",
|
||||
"- TypeScript (*)\n",
|
||||
"\n",
|
||||
"Items marked with (*) require the packages `tree_sitter` and `tree_sitter_languages`.\n",
|
||||
"It is straightforward to add support for additional languages using `tree_sitter`,\n",
|
||||
"although this currently requires modifying LangChain.\n",
|
||||
"\n",
|
||||
"The language used for parsing can be configured, along with the minimum number of\n",
|
||||
"lines required to activate the splitting based on syntax.\n",
|
||||
"\n",
|
||||
"If a language is not explicitly specified, `LanguageParser` will infer one from\n",
|
||||
"filename extensions, if present."
|
||||
"This approach can potentially improve the accuracy of QA models over source code. Currently, the supported languages for code parsing are Python and JavaScript. The language used for parsing can be configured, along with the minimum number of lines required to activate the splitting based on syntax."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -47,7 +19,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -qU esprima esprima tree_sitter tree_sitter_languages"
|
||||
"%pip install --upgrade --quiet esprima"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -423,33 +395,6 @@
|
||||
"source": [
|
||||
"print(\"\\n\\n--8<--\\n\\n\".join([document.page_content for document in result]))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Adding Languages using Tree-sitter Template\n",
|
||||
"\n",
|
||||
"Expanding language support using the Tree-Sitter template involves a few essential steps:\n",
|
||||
"\n",
|
||||
"1. **Creating a New Language File**:\n",
|
||||
" - Begin by creating a new file in the designated directory (langchain/libs/community/langchain_community/document_loaders/parsers/language).\n",
|
||||
" - Model this file based on the structure and parsing logic of existing language files like **`cpp.py`**.\n",
|
||||
" - You will also need to create a file in the langchain directory (langchain/libs/langchain/langchain/document_loaders/parsers/language).\n",
|
||||
"2. **Parsing Language Specifics**:\n",
|
||||
" - Mimic the structure used in the **`cpp.py`** file, adapting it to suit the language you are incorporating.\n",
|
||||
" - The primary alteration involves adjusting the chunk query array to suit the syntax and structure of the language you are parsing.\n",
|
||||
"3. **Testing the Language Parser**:\n",
|
||||
" - For thorough validation, generate a test file specific to the new language. Create **`test_language.py`** in the designated directory(langchain/libs/community/tests/unit_tests/document_loaders/parsers/language).\n",
|
||||
" - Follow the example set by **`test_cpp.py`** to establish fundamental tests for the parsed elements in the new language.\n",
|
||||
"4. **Integration into the Parser and Text Splitter**:\n",
|
||||
" - Incorporate your new language within the **`language_parser.py`** file. Ensure to update LANGUAGE_EXTENSIONS and LANGUAGE_SEGMENTERS along with the docstring for LanguageParser to recognize and handle the added language.\n",
|
||||
" - Also, confirm that your language is included in **`text_splitter.py`** in class Language for proper parsing.\n",
|
||||
"\n",
|
||||
"By following these steps and ensuring comprehensive testing and integration, you'll successfully extend language support using the Tree-Sitter template.\n",
|
||||
"\n",
|
||||
"Best of luck!"
|
||||
]
|
||||
}
|
||||
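For illustration, a rough sketch of what such a new language file might look like, modeled on `cpp.py`. The grammar name and the chunk query below are assumptions for a hypothetical language, not a tested implementation; node types and capture names depend on the actual tree-sitter grammar:

```python
from langchain_community.document_loaders.parsers.language.tree_sitter_segmenter import (
    TreeSitterSegmenter,
)

# Illustrative only: adjust node types to match the target grammar.
CHUNK_QUERY = """
    [
        (function_definition) @function
        (class_definition) @class
    ]
""".strip()


class MyLangSegmenter(TreeSitterSegmenter):
    """Code segmenter for a hypothetical language 'mylang'."""

    def get_language(self):
        from tree_sitter_languages import get_language

        return get_language("mylang")  # assumed grammar name

    def get_chunk_query(self) -> str:
        return CHUNK_QUERY

    def make_line_comment(self, text: str) -> str:
        return f"// {text}"
```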
],
|
||||
"metadata": {
|
||||
@@ -468,7 +413,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.5"
|
||||
"version": "3.9.16"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -27,17 +27,17 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"execution_count": 5,
|
||||
"id": "0cb0f937-b610-42a2-b765-336eed037031",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"name": "stdin",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"········\n"
|
||||
" ········\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@@ -51,20 +51,21 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"execution_count": 6,
|
||||
"id": "6fb585dd",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chains import LLMChain\n",
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"from langchain_community.llms import AlephAlpha"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"execution_count": 7,
|
||||
"id": "f81a230d",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@@ -75,12 +76,12 @@
|
||||
"\n",
|
||||
"A:\"\"\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate.from_template(template)"
|
||||
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"execution_count": 8,
|
||||
"id": "f0d26e48",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@@ -97,19 +98,19 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"execution_count": 9,
|
||||
"id": "6811d621",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm_chain = prompt | llm"
|
||||
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"execution_count": 10,
|
||||
"id": "3058e63f",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@@ -118,10 +119,10 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"' Artificial Intelligence is the simulation of human intelligence processes by machines.\\n\\n'"
|
||||
"' Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.\\n'"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -129,16 +130,8 @@
|
||||
"source": [
|
||||
"question = \"What is AI?\"\n",
|
||||
"\n",
|
||||
"llm_chain.invoke({\"question\": question})"
|
||||
"llm_chain.run(question)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "a3544eff",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
@@ -157,7 +150,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.12"
|
||||
"version": "3.10.6"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
|
||||
@@ -66,7 +66,7 @@
|
||||
"\n",
|
||||
"Answer: Let's think step by step.\"\"\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate.from_template(template)"
|
||||
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -90,7 +90,7 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm_chain = prompt | llm"
|
||||
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -104,7 +104,7 @@
|
||||
"source": [
|
||||
"question = \"When was George Washington president?\"\n",
|
||||
"\n",
|
||||
"llm_chain.invoke({\"question\": question})"
|
||||
"llm_chain.run(question)"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -151,7 +151,7 @@
|
||||
"template = \"\"\"Question: {question}\n",
|
||||
"\n",
|
||||
"Answer: Let's think step by step.\"\"\"\n",
|
||||
"prompt = PromptTemplate.from_template(template)\n",
|
||||
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
|
||||
"\n",
|
||||
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
|
||||
"\n",
|
||||
|
||||
@@ -1,97 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Baichuan LLM\n",
|
||||
"Baichuan Inc. (https://www.baichuan-ai.com/) is a Chinese startup in the era of AGI, dedicated to addressing fundamental human needs: Efficiency, Health, and Happiness."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisite\n",
|
||||
"An API key is required to access Baichuan LLM API. Visit https://platform.baichuan-ai.com/ to get your API key."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Use Baichuan LLM"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"BAICHUAN_API_KEY\"] = \"YOUR_API_KEY\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.llms import BaichuanLLM\n",
|
||||
"\n",
|
||||
"# Load the model\n",
|
||||
"llm = BaichuanLLM()\n",
|
||||
"\n",
|
||||
"res = llm(\"What's your name?\")\n",
|
||||
"print(res)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"res = llm.generate(prompts=[\"你好!\"])\n",
|
||||
"res"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"for res in llm.stream(\"Who won the second world war?\"):\n",
|
||||
" print(res)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import asyncio\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"async def run_aio_stream():\n",
|
||||
" async for res in llm.astream(\"Write a poem about the sun.\"):\n",
|
||||
" print(res)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"asyncio.run(run_aio_stream())"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"language_info": {
|
||||
"name": "python"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -66,7 +66,7 @@
|
||||
"\n",
|
||||
"Answer: Let's think step by step.\"\"\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate.from_template(template)"
|
||||
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -107,43 +107,12 @@
|
||||
"conversation.predict(input=\"Hi there!\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Custom models"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"custom_llm = Bedrock(\n",
|
||||
" credentials_profile_name=\"bedrock-admin\",\n",
|
||||
" provider=\"cohere\",\n",
|
||||
" model_id=\"<Custom model ARN>\", # ARN like 'arn:aws:bedrock:...' obtained via provisioning the custom model\n",
|
||||
" model_kwargs={\"temperature\": 1},\n",
|
||||
" streaming=True,\n",
|
||||
" callbacks=[StreamingStdOutCallbackHandler()],\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"conversation = ConversationChain(\n",
|
||||
" llm=custom_llm, verbose=True, memory=ConversationBufferMemory()\n",
|
||||
")\n",
|
||||
"conversation.predict(input=\"What is the recipe of mayonnaise?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Guardrails for Amazon Bedrock example \n",
|
||||
"\n",
|
||||
"## Guardrails for Amazon Bedrock (Preview) \n",
|
||||
"[Guardrails for Amazon Bedrock](https://aws.amazon.com/bedrock/guardrails/) evaluates user inputs and model responses based on use case specific policies, and provides an additional layer of safeguards regardless of the underlying model. Guardrails can be applied across models, including Anthropic Claude, Meta Llama 2, Cohere Command, AI21 Labs Jurassic, and Amazon Titan Text, as well as fine-tuned models.\n",
|
||||
"**Note**: Guardrails for Amazon Bedrock is currently in preview and not generally available. Reach out through your usual AWS Support contacts if you’d like access to this feature.\n",
|
||||
"In this section, we are going to set up a Bedrock language model with specific guardrails that include tracing capabilities. "
|
||||
]
|
||||
},
|
||||
@@ -167,7 +136,7 @@
|
||||
" print(f\"Guardrails: {kwargs}\")\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Guardrails for Amazon Bedrock with trace\n",
|
||||
"# guardrails for Amazon Bedrock with trace\n",
|
||||
"llm = Bedrock(\n",
|
||||
" credentials_profile_name=\"bedrock-admin\",\n",
|
||||
" model_id=\"<Model_ID>\",\n",
|
||||
@@ -194,7 +163,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.7"
|
||||
"version": "3.10.12"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -92,7 +92,7 @@
|
||||
"Answer: Let's think step by step.\"\"\"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate.from_template(template)\n",
|
||||
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
|
||||
"\n",
|
||||
"# System parameter in NIBittensorLLM is optional but you can set whatever you want to perform with model\n",
|
||||
"llm = NIBittensorLLM(\n",
|
||||
|
||||
@@ -101,7 +101,7 @@
|
||||
"\n",
|
||||
"Answer: Let's think step by step.\"\"\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate.from_template(template)"
|
||||
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -11,102 +11,7 @@
"\n",
"[ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B) is the second-generation version of the open-source bilingual (Chinese-English) chat model ChatGLM-6B. It retains the smooth conversation flow and low deployment threshold of the first-generation model, while introducing the new features like better performance, longer context and more efficient inference.\n",
"\n",
"[ChatGLM3](https://github.com/THUDM/ChatGLM3) is a new generation of pre-trained dialogue models jointly released by Zhipu AI and Tsinghua KEG. ChatGLM3-6B is the open-source model in the ChatGLM3 series"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Install required dependencies\n",
"\n",
"%pip install -qU langchain langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ChatGLM3\n",
"\n",
"This examples goes over how to use LangChain to interact with ChatGLM3-6B Inference for text completion."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema.messages import AIMessage\n",
"from langchain_community.llms.chatglm3 import ChatGLM3"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"{question}\"\"\"\n",
"prompt = PromptTemplate.from_template(template)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"endpoint_url = \"http://127.0.0.1:8000/v1/chat/completions\"\n",
"\n",
"messages = [\n",
" AIMessage(content=\"我将从美国到中国来旅游,出行前希望了解中国的城市\"),\n",
" AIMessage(content=\"欢迎问我任何问题。\"),\n",
"]\n",
"\n",
"llm = ChatGLM3(\n",
" endpoint_url=endpoint_url,\n",
" max_tokens=80000,\n",
" prefix_messages=messages,\n",
" top_p=0.9,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'北京和上海是中国两个不同的城市,它们在很多方面都有所不同。\\n\\n北京是中国的首都,也是历史悠久的城市之一。它有着丰富的历史文化遗产,如故宫、颐和园等,这些景点吸引着众多游客前来观光。北京也是一个政治、文化和教育中心,有很多政府机构和学术机构总部设在北京。\\n\\n上海则是一个现代化的城市,它是中国的经济中心之一。上海拥有许多高楼大厦和国际化的金融机构,是中国最国际化的城市之一。上海也是一个美食和购物天堂,有许多著名的餐厅和购物中心。\\n\\n北京和上海的气候也不同。北京属于温带大陆性气候,冬季寒冷干燥,夏季炎热多风;而上海属于亚热带季风气候,四季分明,春秋宜人。\\n\\n北京和上海有很多不同之处,但都是中国非常重要的城市,每个城市都有自己独特的魅力和特色。'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"question = \"北京和上海两座城市有什么不同?\"\n",
"\n",
"llm_chain.run(question)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ChatGLM and ChatGLM2\n",
"\n",
"The following example shows how to use LangChain to interact with the ChatGLM2-6B Inference to complete text.\n",
"This example goes over how to use LangChain to interact with ChatGLM2-6B Inference for text completion.\n",
"ChatGLM-6B and ChatGLM2-6B has the same api specs, so this example should work with both."
]
},
@@ -130,7 +35,7 @@
"outputs": [],
"source": [
"template = \"\"\"{question}\"\"\"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{
@@ -201,7 +106,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "langchain-dev",
"language": "python",
"name": "python3"
},
@@ -215,9 +120,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 2
}
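For context on the ChatGLM3 notebook above: it wires the prompt and model together with `LLMChain(...).run(...)`. A minimal sketch of the same flow, assuming a ChatGLM3 server is running at the local endpoint shown in the notebook; the LCEL pipe (`prompt | llm`) used here is an equivalent alternative to `LLMChain`:

```python
from langchain.prompts import PromptTemplate
from langchain.schema.messages import AIMessage
from langchain_community.llms.chatglm3 import ChatGLM3

# Assumes a local ChatGLM3 server, as in the notebook above.
llm = ChatGLM3(
    endpoint_url="http://127.0.0.1:8000/v1/chat/completions",
    max_tokens=80000,
    prefix_messages=[AIMessage(content="欢迎问我任何问题。")],  # "Feel free to ask me anything."
    top_p=0.9,
)

prompt = PromptTemplate.from_template("{question}")

# Equivalent to LLMChain(prompt=prompt, llm=llm).run(question)
chain = prompt | llm
print(chain.invoke({"question": "北京和上海两座城市有什么不同?"}))  # "How do Beijing and Shanghai differ?"
```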
@@ -114,7 +114,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -26,7 +26,7 @@
"\n",
"AI Assistant: \"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -109,7 +109,7 @@
"\n",
"Answer:\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",

@@ -201,7 +201,7 @@
"template = \"\"\"{question}\n",
"\n",
"Let's think step by step. \"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",

@@ -146,7 +146,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -97,7 +97,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -80,7 +80,7 @@
"\n",
"template = \"What is capital of {country}?\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"country\"])\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",

@@ -264,41 +264,13 @@
" sys.stdout.flush()"
]
},
{
"cell_type": "markdown",
"id": "aefe6df7",
"metadata": {},
"source": [
"### Safety Settings\n",
"\n",
"Gemini models have default safety settings that can be overridden. If you are receiving lots of \"Safety Warnings\" from your models, you can try tweaking the `safety_settings` attribute of the model. For example, to turn off safety blocking for dangerous content, you can construct your LLM as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7e2682e6",
"id": "aefe6df7",
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_genai import GoogleGenerativeAI, HarmBlockThreshold, HarmCategory\n",
"\n",
"llm = GoogleGenerativeAI(\n",
" model=\"gemini-pro\",\n",
" google_api_key=api_key,\n",
" safety_settings={\n",
" HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,\n",
" },\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e8d0ee0b",
"metadata": {},
"source": [
"For an enumeration of the categories and thresholds available, see Google's [safety setting types](https://ai.google.dev/api/python/google/generativeai/types/SafetySettingDict)."
]
"source": []
}
],
"metadata": {

@@ -111,7 +111,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -73,7 +73,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -175,7 +175,7 @@
"\n",
"Answer: \"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -118,7 +118,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -1,27 +1,20 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Konko\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "136d9ba6-c42a-435b-9e19-77ebcc7a3145",
"metadata": {},
"source": [
"# Konko\n",
"# ChatKonko\n",
"\n",
">[Konko](https://www.konko.ai/) API is a fully managed Web API designed to help application developers:\n",
"\n",
"1. **Select** the right open source or proprietary LLMs for their application\n",
"2. **Build** applications faster with integrations to leading application frameworks and fully managed APIs\n",
"3. **Fine tune** smaller open-source LLMs to achieve industry-leading performance at a fraction of the cost\n",
"4. **Deploy production-scale APIs** that meet security, privacy, throughput, and latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant, multi-cloud infrastructure\n"
"Konko API is a fully managed API designed to help application developers:\n",
"\n",
"1. Select the right LLM(s) for their application\n",
"2. Prototype with various open-source and proprietary LLMs\n",
"3. Access Fine Tuning for open-source LLMs to get industry-leading performance at a fraction of the cost\n",
"4. Setup low-cost production APIs according to security, privacy, throughput, latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant, multi-cloud infrastructure\n"
]
},
{
@@ -29,44 +22,25 @@
"id": "0d896d07-82b4-4f38-8c37-f0bc8b0e4fe1",
"metadata": {},
"source": [
"### Steps to Access Models\n",
"1. **Explore Available Models:** Start by browsing through the [available models](https://docs.konko.ai/docs/list-of-models) on Konko. Each model caters to different use cases and capabilities.\n",
"\n",
"2. **Identify Suitable Endpoints:** Determine which [endpoint](https://docs.konko.ai/docs/list-of-models#list-of-available-models) (ChatCompletion or Completion) supports your selected model.\n",
"\n",
"3. **Selecting a Model:** [Choose a model](https://docs.konko.ai/docs/list-of-models#list-of-available-models) based on its metadata and how well it fits your use case.\n",
"\n",
"4. **Prompting Guidelines:** Once a model is selected, refer to the [prompting guidelines](https://docs.konko.ai/docs/prompting) to effectively communicate with it.\n",
"\n",
"5. **Using the API:** Finally, use the appropriate Konko [API endpoint](https://docs.konko.ai/docs/quickstart-for-completion-and-chat-completion-endpoint) to call the model and receive responses.\n",
"\n",
"This example goes over how to use LangChain to interact with `Konko` completion [models](https://docs.konko.ai/docs/list-of-models#konko-hosted-models-for-completion)\n",
"\n",
"To run this notebook, you'll need Konko API key. Sign in to our web app to [create an API key](https://platform.konko.ai/settings/api-keys) to access models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Set Environment Variables\n",
"\n",
"1. You can set environment variables for \n",
" 1. KONKO_API_KEY (Required)\n",
" 2. OPENAI_API_KEY (Optional)\n",
"2. In your current shell session, use the export command:\n",
"\n",
"```shell\n",
"export KONKO_API_KEY={your_KONKO_API_KEY_here}\n",
"export OPENAI_API_KEY={your_OPENAI_API_KEY_here} #Optional\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Calling a model\n",
"\n",
"Find a model on the [Konko overview page](https://docs.konko.ai/docs/list-of-models)\n",
"\n",
"Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/get-models).\n",
"\n",
"From here, we can initialize our model:"
"To run this notebook, you'll need Konko API key. You can create one by signing up on [Konko](https://www.konko.ai/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "dd70bccb-7a65-42d0-a3f2-8116f3549da7",
"metadata": {},
"outputs": [
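The Konko setup above exports `KONKO_API_KEY` (and optionally `OPENAI_API_KEY`) in the shell. A minimal in-notebook equivalent, using `getpass` only so the key is not hard-coded:

```python
import getpass
import os

# Mirrors the shell exports above; OPENAI_API_KEY is optional.
if "KONKO_API_KEY" not in os.environ:
    os.environ["KONKO_API_KEY"] = getpass.getpass("Konko API key: ")
```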
@@ -234,7 +234,7 @@
"\n",
"Answer: Let's work this out in a step by step way to be sure we have the right answer.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -91,7 +91,7 @@
"\n",
"\n",
"CONCISE SUMMARY:\"\"\"\n",
"prompt = PromptTemplate.from_template(_prompt)\n",
"prompt = PromptTemplate(template=_prompt, input_variables=[\"text\"])\n",
"\n",
"text_splitter = CharacterTextSplitter()\n",
"\n",

@@ -113,7 +113,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -122,7 +122,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -55,7 +55,7 @@
"source": [
"template = \"\"\"Question: {question}\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -90,7 +90,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -26,19 +26,19 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OCTOAI_API_TOKEN\"] = \"OCTOAI_API_TOKEN\"\n",
"os.environ[\"ENDPOINT_URL\"] = \"https://text.octoai.run/v1/chat/completions\""
"os.environ[\"ENDPOINT_URL\"] = \"https://mpt-7b-demo-f1kzsig6xes9.octoai.run/generate\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
@@ -56,50 +56,46 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Below is an instruction that describes a task. Write a response that appropriately completes the request.\\n Instruction:\\n{question}\\n Response: \"\"\"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"llm = OctoAIEndpoint(\n",
" model_kwargs={\n",
" \"model\": \"llama-2-13b-chat-fp16\",\n",
" \"max_tokens\": 128,\n",
" \"presence_penalty\": 0,\n",
" \"temperature\": 0.1,\n",
" \"top_p\": 0.9,\n",
" \"messages\": [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": \"You are a helpful assistant. Keep your responses limited to one short paragraph if possible.\",\n",
" },\n",
" ],\n",
" \"max_new_tokens\": 200,\n",
" \"temperature\": 0.75,\n",
" \"top_p\": 0.95,\n",
" \"repetition_penalty\": 1,\n",
" \"seed\": None,\n",
" \"stop\": [],\n",
" },\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 31,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Sure thing! Here's my response:\n",
"\n",
"Leonardo da Vinci was a true Renaissance man - an Italian polymath who excelled in various fields, including painting, sculpture, engineering, mathematics, anatomy, and geology. He is widely considered one of the greatest painters of all time, and his inventive and innovative works continue to inspire and influence artists and thinkers to this day. Some of his most famous works include the Mona Lisa, The Last Supper, and Vitruvian Man. \n"
]
"data": {
"text/plain": [
"'\\nLeonardo da Vinci was an Italian polymath and painter regarded by many as one of the greatest painters of all time. He is best known for his masterpieces including Mona Lisa, The Last Supper, and The Virgin of the Rocks. He was a draftsman, sculptor, architect, and one of the most important figures in the history of science. Da Vinci flew gliders, experimented with water turbines and windmills, and invented the catapult and a joystick-type human-powered aircraft control. He may have pioneered helicopters. As a scholar, he was interested in anatomy, geology, botany, engineering, mathematics, and astronomy.\\nOther painters and patrons claimed to be more talented, but Leonardo da Vinci was an incredibly productive artist, sculptor, engineer, anatomist, and scientist.'"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
@@ -107,7 +103,7 @@
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"print(llm_chain.run(question))"
"llm_chain.run(question)"
]
}
],
@@ -127,7 +123,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.10.12"
},
"vscode": {
"interpreter": {
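The OctoAI cells above assign a placeholder string to `OCTOAI_API_TOKEN`. A sketch of a safer variant, reading the token interactively instead of hard-coding it; the endpoint URL is the chat-completions one from the hunk above:

```python
import getpass
import os

# Same environment variables the OctoAI cells read; the token is prompted for
# rather than stored as a literal in the notebook.
os.environ["OCTOAI_API_TOKEN"] = getpass.getpass("OctoAI API token: ")
os.environ["ENDPOINT_URL"] = "https://text.octoai.run/v1/chat/completions"
```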
@@ -244,7 +244,7 @@
"\n",
"def plt_img_base64(img_base64):\n",
" \"\"\"\n",
" Display base64 encoded string as image\n",
" Disply base64 encoded string as image\n",
"\n",
" :param img_base64: Base64 string\n",
" \"\"\"\n",

@@ -84,7 +84,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -119,7 +119,7 @@
"\n",
"template = \"What is a good name for a company that makes {product}?\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"product\"])\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",

@@ -97,7 +97,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"for model in [\"text-davinci-003\", \"huggingface.co/gpt2\"]:\n",
" llm = OpenLM(model=model)\n",

@@ -4,9 +4,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Alibaba Cloud PAI EAS\n",
"\n",
">[Machine Learning Platform for AI of Alibaba Cloud](https://www.alibabacloud.com/help/en/pai) is a machine learning or deep learning engineering platform intended for enterprises and developers. It provides easy-to-use, cost-effective, high-performance, and easy-to-scale plug-ins that can be applied to various industry scenarios. With over 140 built-in optimization algorithms, `Machine Learning Platform for AI` provides whole-process AI engineering capabilities including data labeling (`PAI-iTAG`), model building (`PAI-Designer` and `PAI-DSW`), model training (`PAI-DLC`), compilation optimization, and inference deployment (`PAI-EAS`). `PAI-EAS` supports different types of hardware resources, including CPUs and GPUs, and features high throughput and low latency. It allows you to deploy large-scale complex models with a few clicks and perform elastic scale-ins and scale-outs in real time. It also provides a comprehensive O&M and monitoring system."
"# AliCloud PAI EAS\n",
"Machine Learning Platform for AI of Alibaba Cloud is a machine learning or deep learning engineering platform intended for enterprises and developers. It provides easy-to-use, cost-effective, high-performance, and easy-to-scale plug-ins that can be applied to various industry scenarios. With over 140 built-in optimization algorithms, Machine Learning Platform for AI provides whole-process AI engineering capabilities including data labeling (PAI-iTAG), model building (PAI-Designer and PAI-DSW), model training (PAI-DLC), compilation optimization, and inference deployment (PAI-EAS). PAI-EAS supports different types of hardware resources, including CPUs and GPUs, and features high throughput and low latency. It allows you to deploy large-scale complex models with a few clicks and perform elastic scale-ins and scale-outs in real time. It also provides a comprehensive O&M and monitoring system."
]
},
{
@@ -23,14 +22,14 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One who wants to use EAS LLMs must set up EAS service first. When the EAS service is launched, `EAS_SERVICE_URL` and `EAS_SERVICE_TOKEN` can be obtained. Users can refer to https://www.alibabacloud.com/help/en/pai/user-guide/service-deployment/ for more information,"
"One who want to use eas llms must set up eas service first. When the eas service is launched, eas_service_rul and eas_service token can be got. Users can refer to https://www.alibabacloud.com/help/en/pai/user-guide/service-deployment/ for more information,"
]
},
{
@@ -51,7 +50,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 10,
"metadata": {},
"outputs": [
{
@@ -66,16 +65,16 @@
}
],
"source": [
"llm_chain = prompt | llm\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
"llm_chain.invoke({\"question\": question})"
"llm_chain.run(question)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -89,9 +88,10 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
"version": "3.10.11"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 2
}
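The PAI EAS hunks above switch between the LCEL form (`prompt | llm` with `.invoke(...)`) and `LLMChain(...).run(...)`. A minimal sketch of the setup both styles assume; `PaiEasEndpoint` with its `eas_service_url`/`eas_service_token` parameters is the community integration for the service URL and token mentioned in the markdown, and the environment-variable names here are illustrative:

```python
import os

from langchain.prompts import PromptTemplate
from langchain_community.llms.pai_eas_endpoint import PaiEasEndpoint

# URL and token come from the deployed EAS service, as described above.
llm = PaiEasEndpoint(
    eas_service_url=os.environ["EAS_SERVICE_URL"],
    eas_service_token=os.environ["EAS_SERVICE_TOKEN"],
)

prompt = PromptTemplate.from_template(
    """Question: {question}

Answer: Let's think step by step."""
)

# LCEL form; LLMChain(prompt=prompt, llm=llm).run(question) is the older equivalent.
chain = prompt | llm
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
print(chain.invoke({"question": question}))
```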
@@ -133,7 +133,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -107,7 +107,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -118,7 +118,7 @@
"Query: {query}\n",
"\n",
"Result: \"\"\"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"query\"])"
]
},
{
@@ -191,7 +191,7 @@
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)\n",
"\n",
"question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
@@ -209,7 +209,7 @@
"outputs": [],
"source": [
"template = \"\"\"Write a {adjective} poem about {subject}.\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"adjective\", \"subject\"])\n",
"llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)\n",
"\n",
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"

@@ -83,7 +83,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -96,7 +96,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -53,7 +53,7 @@
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"llm = TextGen(model_url=model_url)\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"question = \"What NFL team won the Super Bowl in the year Justin Bieber was born?\"\n",
@@ -104,7 +104,7 @@
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"llm = TextGen(\n",
" model_url=model_url, streaming=True, callbacks=[StreamingStdOutCallbackHandler()]\n",
")\n",

@@ -146,7 +146,7 @@
"\n",
"template = \"What is the capital of {country}\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"country\"])\n",
"\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"\n",

@@ -95,7 +95,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -82,7 +82,7 @@
" temperature=0.8,\n",
")\n",
"\n",
"print(llm.invoke(\"What is the capital of France ?\"))"
"print(llm(\"What is the capital of France ?\"))"
]
},
{
@@ -117,7 +117,8 @@
"1. The first Pokemon game was released in 1996.\n",
"2. The president was Bill Clinton.\n",
"3. Clinton was president from 1993 to 2001.\n",
"4. The answer is Clinton.\n"
"4. The answer is Clinton.\n",
"\n"
]
},
{
@@ -135,13 +136,13 @@
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"question = \"Who was the US president in the year the first Pokemon game was released?\"\n",
"\n",
"print(llm_chain.invoke(question))"
"print(llm_chain.run(question))"
]
},
{
@@ -171,36 +172,7 @@
" trust_remote_code=True, # mandatory for hf models\n",
")\n",
"\n",
"llm.invoke(\"What is the future of AI?\")"
]
},
{
"cell_type": "markdown",
"id": "d6ca8fd911d25faa",
"metadata": {
"collapsed": false
},
"source": [
"## Quantization\n",
"\n",
"vLLM supports `awq` quantization. To enable it, pass `quantization` to `vllm_kwargs`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2cada3174c46a0ea",
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"llm_q = VLLM(\n",
" model=\"TheBloke/Llama-2-7b-Chat-AWQ\",\n",
" trust_remote_code=True,\n",
" max_new_tokens=512,\n",
" vllm_kwargs={\"quantization\": \"awq\"},\n",
")"
"llm(\"What is the future of AI?\")"
]
},
{
@@ -244,7 +216,7 @@
" model_name=\"tiiuae/falcon-7b\",\n",
" model_kwargs={\"stop\": [\".\"]},\n",
")\n",
"print(llm.invoke(\"Rome is\"))"
"print(llm(\"Rome is\"))"
]
}
],
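The vLLM hunks above toggle between `llm.invoke(...)` and the older `llm(...)` call style; both return the completion string, with `__call__` deprecated in favor of `invoke`. A minimal sketch, assuming a GPU machine with `vllm` installed; the model id is illustrative:

```python
from langchain_community.llms import VLLM

# Any Hugging Face causal LM id works the same way; mpt-7b is illustrative.
llm = VLLM(
    model="mosaicml/mpt-7b",
    trust_remote_code=True,  # mandatory for hf models (per the notebook above)
    max_new_tokens=128,
)

print(llm.invoke("What is the capital of France ?"))  # current style
print(llm("What is the capital of France ?"))  # legacy __call__ style, deprecated
```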
@@ -7,9 +7,8 @@
"source": [
"# IBM watsonx.ai\n",
"\n",
">[WatsonxLLM](https://ibm.github.io/watsonx-ai-python-sdk/fm_extensions.html#langchain) is a wrapper for IBM [watsonx.ai](https://www.ibm.com/products/watsonx-ai) foundation models.\n",
"\n",
"This example shows how to communicate with `watsonx.ai` models using `LangChain`."
"[WatsonxLLM](https://ibm.github.io/watsonx-ai-python-sdk/fm_extensions.html#langchain) is a wrapper for IBM [watsonx.ai](https://www.ibm.com/products/watsonx-ai) foundation models.\n",
"This example shows how to communicate with watsonx.ai models using LangChain."
]
},
{
@@ -17,8 +16,6 @@
"id": "ea35b2b7",
"metadata": {},
"source": [
"## Setting up\n",
"\n",
"Install the package [`ibm-watsonx-ai`](https://ibm.github.io/watsonx-ai-python-sdk/install.html)."
]
},
@@ -63,7 +60,6 @@
"metadata": {},
"source": [
"## Load the model\n",
"\n",
"You might need to adjust model `parameters` for different models or tasks. For details, refer to [documentation](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#metanames.GenTextParamsMetaNames)."
]
},
@@ -332,7 +328,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.13"
}
},
"nbformat": 4,

@@ -72,7 +72,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)"
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{

@@ -126,7 +126,7 @@
"\n",
"template = \"Where can we visit in the capital of {country}?\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"prompt = PromptTemplate(template=template, input_variables=[\"country\"])\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
Some files were not shown because too many files have changed in this diff.