Compare commits

..

3 Commits

Author       SHA1        Message             Date
Erick Friis  70f6be32e0  number              2023-11-22 13:44:10 -05:00
Erick Friis  1548f32f3d  lint                2023-11-22 13:42:59 -05:00
Erick Friis  6c59db482b  unit test for core  2023-11-22 13:38:00 -05:00
3567 changed files with 180645 additions and 300377 deletions


@@ -3,17 +3,31 @@
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
To learn about how to contribute, please follow the [guides here](https://python.langchain.com/docs/contributing/)
## 🗺️ Guidelines
### 👩‍💻 Ways to contribute
### 👩‍💻 Contributing Code
There are many ways to contribute to LangChain. Here are some common ways people contribute:
To contribute to this project, please follow the ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
Please do not try to push directly to this repo unless you are a maintainer.
- [**Documentation**](https://python.langchain.com/docs/contributing/documentation): Help improve our docs, including this one!
- [**Code**](https://python.langchain.com/docs/contributing/code): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](https://python.langchain.com/docs/contributing/integration): Help us integrate with your favorite vendors and tools.
Please follow the checked-in pull request template when opening pull requests. Note related issues and tag relevant
maintainers.
Pull requests cannot land without passing the formatting, linting, and testing checks first. See [Testing](#testing) and
[Formatting and Linting](#formatting-and-linting) for how to run these checks locally.
It's essential that we maintain great documentation and testing. If you:
- Fix a bug
  - Add a relevant unit or integration test when possible. These live in `tests/unit_tests` and `tests/integration_tests`.
- Make an improvement
  - Update any affected example notebooks and documentation. These live in `docs`.
  - Update unit and integration tests when relevant.
- Add a feature
  - Add a demo notebook in `docs/modules`.
  - Add unit and integration tests.
We are a small, progress-oriented team. If there's something you'd like to add or change, opening a pull request is the
best way to get our attention.
### 🚩GitHub Issues
@@ -40,6 +54,268 @@ In a similar vein, we do enforce certain linting, formatting, and documentation
If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
we do not want these to get in the way of getting good code into the codebase.
### Contributor Documentation
## 🚀 Quick Start
To learn about how to contribute, please follow the [guides here](https://python.langchain.com/docs/contributing/)
This quick start guide explains how to run the repository locally.
For a [development container](https://containers.dev/), see the [.devcontainer folder](https://github.com/langchain-ai/langchain/tree/master/.devcontainer).
### Dependency Management: Poetry and other env/dependency managers
This project utilizes [Poetry](https://python-poetry.org/) v1.6.1+ as a dependency manager.
❗Note: *Before installing Poetry*, if you use `Conda`, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`)
Install Poetry: **[documentation on how to install it](https://python-poetry.org/docs/#installation)**.
❗Note: If you use `Conda` or `Pyenv` as your environment/package manager, after installing Poetry,
tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
### Core vs. Experimental
This repository contains two separate projects:
- `langchain`: core langchain code, abstractions, and use cases.
- `langchain.experimental`: see the [Experimental README](https://github.com/langchain-ai/langchain/tree/master/libs/experimental/README.md) for more information.
Each of these has its own development environment. Docs are run from the top-level makefile, but development
is split across separate test & release flows.
For this quickstart, start with langchain core:
```bash
cd libs/langchain
```
### Local Development Dependencies
Install langchain development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
```bash
poetry install --with test
```
Then verify dependency installation:
```bash
make test
```
If the tests don't pass, you may need to pip install additional dependencies, such as `numexpr` and `openapi_schema_pydantic`.
If during installation you receive a `WheelFileValidationError` for `debugpy`, please make sure you are running
Poetry v1.6.1+. This bug was present in older versions of Poetry (e.g. 1.4.1) and has been resolved in newer releases.
If you are still seeing this bug on v1.6.1, you may also try disabling "modern installation"
(`poetry config installer.modern-installation false`) and re-installing requirements.
See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.
### Testing
_Some test dependencies are optional; see the section about optional dependencies._
Unit tests cover modular logic that does not require calls to outside APIs.
If you add new logic, please add a unit test.
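As an illustration, a minimal, self-contained unit test could look like this (a sketch; `truncate_text` is a hypothetical stand-in for your new logic):
```python
# tests/unit_tests/test_truncate_text.py (hypothetical example)


def truncate_text(text: str, max_len: int) -> str:
    """Toy stand-in for the new logic under test."""
    return text if len(text) <= max_len else text[:max_len] + "..."


def test_short_input_is_unchanged() -> None:
    assert truncate_text("hello", max_len=10) == "hello"


def test_long_input_is_truncated() -> None:
    assert truncate_text("hello world", max_len=5) == "hello..."
```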
To run unit tests:
```bash
make test
```
To run unit tests in Docker:
```bash
make docker_tests
```
There are also [integration tests and code-coverage](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/tests/README.md) available.
### Formatting and Linting
Run these locally before submitting a PR; the CI system will also check them.
#### Code Formatting
Formatting for this project is done via [ruff](https://docs.astral.sh/ruff/rules/).
To run formatting for docs, cookbook and templates:
```bash
make format
```
To run formatting for a library, run the same command from the relevant library directory:
```bash
cd libs/{LIBRARY}
make format
```
Additionally, you can run the formatter only on the files that have been modified in your current branch as compared to the master branch using the `format_diff` command:
```bash
make format_diff
```
This is especially useful when you have made changes to a subset of the project and want to ensure your changes are properly formatted without affecting the rest of the codebase.
#### Linting
Linting for this project is done via a combination of [ruff](https://docs.astral.sh/ruff/rules/) and [mypy](http://mypy-lang.org/).
To run linting for docs, cookbook and templates:
```bash
make lint
```
To run linting for a library, run the same command from the relevant library directory:
```bash
cd libs/{LIBRARY}
make lint
```
In addition, you can run the linter only on the files that have been modified in your current branch as compared to the master branch using the `lint_diff` command:
```bash
make lint_diff
```
This can be very helpful when you've made changes to only certain parts of the project and want to ensure your changes meet the linting standards without having to check the entire codebase.
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
#### Spellcheck
Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
Note that `codespell` finds common typos, so it can produce both false positives (flagging correctly spelled but rarely used words) and false negatives (missing genuine misspellings).
To check spelling for this project:
```bash
make spell_check
```
To fix spelling in place:
```bash
make spell_fix
```
If codespell is incorrectly flagging a word, you can skip spellcheck for that word by adding it to the codespell config in the `pyproject.toml` file.
```toml
[tool.codespell]
...
# Add here:
ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogyny,unsecure'
```
## Working with Optional Dependencies
LangChain relies heavily on optional dependencies to keep the LangChain package lightweight.
If you're adding a new dependency to LangChain, assume that it will be an optional dependency, and
that most users won't have it installed.
Users who do not have the dependency installed should be able to **import** your code without
any side effects (no warnings, no errors, no exceptions).
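A common way to meet this requirement (a sketch, not a mandated pattern; `foobar` is a hypothetical package) is to defer the import until the code that needs it actually runs:
```python
def _import_foobar():
    """Resolve the optional dependency lazily, at call time."""
    try:
        import foobar  # hypothetical optional dependency
    except ImportError as e:
        raise ImportError(
            "Could not import foobar. Please install it with `pip install foobar`."
        ) from e
    return foobar


def run_foobar_query(query: str) -> str:
    # Importing *this* module stays side-effect free; the optional
    # dependency is only imported when this function is called.
    foobar = _import_foobar()
    return foobar.run(query)
```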
To introduce the dependency to the pyproject.toml file correctly, please do the following:
1. Add the dependency to the main group as an optional dependency
```bash
poetry add --optional [package_name]
```
2. Open pyproject.toml and add the dependency to the `extended_testing` extra
3. Relock the poetry file to update the extra.
```bash
poetry lock --no-update
```
4. Add a unit test that at the very least attempts to import the new code. Ideally, the unit
test makes use of lightweight fixtures to test the logic of the code.
5. Please use the `@pytest.mark.requires(package_name)` decorator for any tests that require the dependency.
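Putting steps 4 and 5 together, a test for a hypothetical optional dependency `foobar` might look like:
```python
import pytest


@pytest.mark.requires("foobar")
def test_foobar_imports() -> None:
    """At the very least, verify the new code can be imported."""
    import foobar  # hypothetical optional dependency

    assert foobar is not None
```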
## Adding a Jupyter Notebook
If you are adding a Jupyter Notebook example, you'll want to install the optional `dev` dependencies.
To install dev dependencies:
```bash
poetry install --with dev
```
Launch a notebook:
```bash
poetry run jupyter notebook
```
When you run `poetry install`, the `langchain` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook.
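For example, a notebook cell can then import work-in-progress code directly (`MyNewChain` is a hypothetical class standing in for your addition):
```python
# Because langchain is installed editable, local edits under libs/langchain
# are picked up on the next (re)import without reinstalling.
from langchain.chains.my_new_chain import MyNewChain  # hypothetical module

chain = MyNewChain()
```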
## Documentation
While the code is split between `langchain` and `langchain.experimental`, the documentation is maintained as a single, holistic whole.
This covers how to get started contributing to documentation.
From the top-level of this repo, install documentation dependencies:
```bash
poetry install
```
### Contribute Documentation
The `docs` directory contains the Documentation and the API Reference.
Documentation is built using [Docusaurus 2](https://docusaurus.io/).
The API Reference is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code.
For that reason, we ask that you add good documentation to all classes and methods.
Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
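As a reference point, here is a docstring sketch (assuming the Google docstring style that Sphinx can render; the function itself is a hypothetical example):
```python
from typing import List


def split_text(text: str, chunk_size: int = 100) -> List[str]:
    """Split text into chunks of at most ``chunk_size`` characters.

    Args:
        text: The input text to split.
        chunk_size: Maximum number of characters per chunk.

    Returns:
        A list of text chunks, in order.
    """
    return [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]
```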
### Build Documentation Locally
In the following commands, the prefix `api_` indicates that those are operations for the API Reference.
Before building the documentation, it is always a good idea to clean the build directory:
```bash
make docs_clean
make api_docs_clean
```
Next, you can build the documentation as outlined below:
```bash
make docs_build
make api_docs_build
```
Finally, run the link checker to ensure all links are valid:
```bash
make docs_linkcheck
make api_docs_linkcheck
```
### Verify Documentation Changes
After pushing documentation changes to the repository, you can preview and verify that the changes are
what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page.
This will take you to a preview of the documentation changes.
This preview is created by [Vercel](https://vercel.com/docs/getting-started-with-vercel).
## 🏭 Release Process
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by
a developer and published to [PyPI](https://pypi.org/project/langchain/).
LangChain follows the [semver](https://semver.org/) versioning standard. However, as pre-1.0 software,
even patch releases may contain [non-backwards-compatible changes](https://semver.org/#spec-item-4).
### 🌟 Recognition
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.


@@ -27,4 +27,4 @@ body:
attributes:
label: Your contribution
description: |
Is there any way that you could help, e.g. by submitting a PR? Make sure to read the [Contributing Guide](https://python.langchain.com/docs/contributing/)
Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD [readme](https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md)


@@ -1,20 +1,20 @@
<!-- Thank you for contributing to LangChain!
Please title your PR "<package>: <description>", where <package> is whichever of langchain, community, core, experimental, etc. is being modified.
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes if applicable,
- **Issue:** the issue # it fixes (if applicable),
- **Dependencies:** any dependencies required for this change,
- **Tag maintainer:** for a quicker response, tag the relevant maintainer (see below),
- **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` from the root of the package you've modified to check this locally.
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally.
See contribution guidelines for more information on how to write/run tests, lint, etc: https://python.langchain.com/docs/contributing/
See contribution guidelines for more information on how to write/run tests, lint, etc:
https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.
2. an example notebook showing its use. It lives in `docs/extras` directory.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->


@@ -1,51 +0,0 @@
import json
import sys
import os

LANGCHAIN_DIRS = {
    "libs/core",
    "libs/langchain",
    "libs/experimental",
    "libs/community",
}

if __name__ == "__main__":
    files = sys.argv[1:]
    dirs_to_run = set()

    for file in files:
        if any(
            file.startswith(dir_)
            for dir_ in (
                ".github/workflows",
                ".github/tools",
                ".github/actions",
                "libs/core",
                ".github/scripts/check_diff.py",
            )
        ):
            dirs_to_run.update(LANGCHAIN_DIRS)
        elif "libs/community" in file:
            dirs_to_run.update(
                ("libs/community", "libs/langchain", "libs/experimental")
            )
        elif "libs/partners" in file:
            partner_dir = file.split("/")[2]
            if os.path.isdir(f"libs/partners/{partner_dir}"):
                dirs_to_run.update(
                    (
                        f"libs/partners/{partner_dir}",
                        "libs/langchain",
                        "libs/experimental",
                    )
                )
            # Skip if the directory was deleted
        elif "libs/langchain" in file:
            dirs_to_run.update(("libs/langchain", "libs/experimental"))
        elif "libs/experimental" in file:
            dirs_to_run.add("libs/experimental")
        elif file.startswith("libs/"):
            dirs_to_run.update(LANGCHAIN_DIRS)
        else:
            pass
    print(json.dumps(list(dirs_to_run)))


@@ -1,106 +0,0 @@
---
name: langchain CI

on:
  workflow_call:
    inputs:
      working-directory:
        required: true
        type: string
        description: "From which folder this pipeline executes"
  workflow_dispatch:
    inputs:
      working-directory:
        required: true
        type: choice
        default: 'libs/langchain'
        options:
          - libs/langchain
          - libs/core
          - libs/experimental
          - libs/community

# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}-${{ inputs.working-directory }}
  cancel-in-progress: true

env:
  POETRY_VERSION: "1.6.1"

jobs:
  lint:
    uses: ./.github/workflows/_lint.yml
    with:
      working-directory: ${{ inputs.working-directory }}
    secrets: inherit
  test:
    uses: ./.github/workflows/_test.yml
    with:
      working-directory: ${{ inputs.working-directory }}
    secrets: inherit
  compile-integration-tests:
    uses: ./.github/workflows/_compile_integration_test.yml
    with:
      working-directory: ${{ inputs.working-directory }}
    secrets: inherit
  dependencies:
    uses: ./.github/workflows/_dependencies.yml
    with:
      working-directory: ${{ inputs.working-directory }}
    secrets: inherit
  extended-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version:
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
    name: Python ${{ matrix.python-version }} extended tests
    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}
    if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: ${{ inputs.working-directory }}
          cache-key: extended
      - name: Install dependencies
        shell: bash
        run: |
          echo "Running extended tests, installing dependencies with poetry..."
          poetry install -E extended_testing --with test
      - name: Run extended tests
        run: make extended_tests
      - name: Ensure the tests did not create any additional files
        shell: bash
        run: |
          set -eu

          STATUS="$(git status)"
          echo "$STATUS"

          # grep will exit non-zero if the target message isn't found,
          # and `set -e` above will cause the step to fail.
          echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -7,6 +7,10 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
langchain-core-location:
required: false
type: string
description: "Relative path to the langchain core library folder"
env:
POETRY_VERSION: "1.6.1"
@@ -38,7 +42,15 @@ jobs:
- name: Install integration dependencies
shell: bash
run: poetry install --with=test_integration,test
run: poetry install --with=test_integration
- name: Install langchain core editable
working-directory: ${{ inputs.working-directory }}
if: ${{ inputs.langchain-core-location }}
env:
LANGCHAIN_CORE_LOCATION: ${{ inputs.langchain-core-location }}
run: |
poetry run pip install -e "$LANGCHAIN_CORE_LOCATION"
- name: Check integration tests compile
shell: bash


@@ -1,60 +0,0 @@
name: Integration tests

on:
  workflow_dispatch:
    inputs:
      working-directory:
        required: true
        type: string

env:
  POETRY_VERSION: "1.6.1"

jobs:
  build:
    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version:
          - "3.8"
          - "3.11"
    name: Python ${{ matrix.python-version }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: ${{ inputs.working-directory }}
          cache-key: core
      - name: Install dependencies
        shell: bash
        run: poetry install --with test,test_integration
      - name: Run integration tests
        shell: bash
        env:
          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
          TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
        run: |
          make integration_tests
      - name: Ensure the tests did not create any additional files
        shell: bash
        run: |
          set -eu

          STATUS="$(git status)"
          echo "$STATUS"

          # grep will exit non-zero if the target message isn't found,
          # and `set -e` above will cause the step to fail.
          echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -11,6 +11,10 @@ on:
required: false
type: string
description: "Relative path to the langchain library folder"
langchain-core-location:
required: false
type: string
description: "Relative path to the langchain core library folder"
env:
POETRY_VERSION: "1.6.1"
@@ -68,7 +72,7 @@ jobs:
# It doesn't matter how you change it, any change will cause a cache-bust.
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with lint,typing
poetry install --with dev,lint,test,typing
- name: Install langchain editable
working-directory: ${{ inputs.working-directory }}
@@ -78,6 +82,14 @@ jobs:
run: |
poetry run pip install -e "$LANGCHAIN_LOCATION"
- name: Install langchain core editable
working-directory: ${{ inputs.working-directory }}
if: ${{ inputs.langchain-core-location }}
env:
LANGCHAIN_CORE_LOCATION: ${{ inputs.langchain-core-location }}
run: |
poetry run pip install -e "$LANGCHAIN_CORE_LOCATION"
- name: Get .mypy_cache to speed up mypy
uses: actions/cache@v3
env:
@@ -85,37 +97,9 @@ jobs:
with:
path: |
${{ env.WORKDIR }}/.mypy_cache
key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}
key: mypy-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}
- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}
run: |
make lint_package
- name: Install test dependencies
# Also installs dev/lint/test/typing dependencies, to ensure we have
# type hints for as many of our libraries as possible.
# This helps catch errors that require dependencies to be spotted, for example:
# https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
#
# If you change this configuration, make sure to change the `cache-key`
# in the `poetry_setup` action above to stop using the old cache.
# It doesn't matter how you change it, any change will cause a cache-bust.
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with test
- name: Get .mypy_cache_test to speed up mypy
uses: actions/cache@v3
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
with:
path: |
${{ env.WORKDIR }}/.mypy_cache_test
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}
- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}
run: |
make lint_tests
make lint


@@ -1,4 +1,4 @@
name: dependencies
name: pydantic v1/v2 compatibility
on:
workflow_call:
@@ -11,6 +11,10 @@ on:
required: false
type: string
description: "Relative path to the langchain library folder"
langchain-core-location:
required: false
type: string
description: "Relative path to the langchain core library folder"
env:
POETRY_VERSION: "1.6.1"
@@ -28,7 +32,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
name: dependencies - Python ${{ matrix.python-version }}
name: Pydantic v1/v2 compatibility - Python ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4
@@ -44,14 +48,6 @@ jobs:
shell: bash
run: poetry install
- name: Check imports with base dependencies
shell: bash
run: poetry run make check_imports
- name: Install test dependencies
shell: bash
run: poetry install --with test
- name: Install langchain editable
working-directory: ${{ inputs.working-directory }}
if: ${{ inputs.langchain-location }}
@@ -60,6 +56,14 @@ jobs:
run: |
poetry run pip install -e "$LANGCHAIN_LOCATION"
- name: Install langchain core editable
working-directory: ${{ inputs.working-directory }}
if: ${{ inputs.langchain-core-location }}
env:
LANGCHAIN_CORE_LOCATION: ${{ inputs.langchain-core-location }}
run: |
poetry run pip install -e "$LANGCHAIN_CORE_LOCATION"
- name: Install the opposite major version of pydantic
# If normal tests use pydantic v1, here we'll use v2, and vice versa.
shell: bash


@@ -7,12 +7,6 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
workflow_dispatch:
inputs:
working-directory:
required: true
type: string
default: 'libs/langchain'
env:
PYTHON_VERSION: "3.10"
@@ -82,8 +76,6 @@ jobs:
- test-pypi-publish
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
# We explicitly *don't* set up caching here. This ensures our tests are
# maximally sensitive to catching breakage.
#
@@ -96,17 +88,12 @@ jobs:
# - Tests pass, because the dependency is present even though it wasn't specified.
# - The package is published, and it breaks on the missing dependency when
# used in the real world.
- name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
- name: Import published package
- name: Test published package
shell: bash
working-directory: ${{ inputs.working-directory }}
env:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
@@ -118,8 +105,9 @@ jobs:
# (https://test.pypi.org/simple). This will include the PKG_NAME==VERSION
# package because VERSION will not have been uploaded to regular PyPI yet.
#
# TODO: add more in-depth pre-publish tests after testing that importing works
run: |
poetry run pip install \
pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION"
@@ -127,38 +115,7 @@ jobs:
# since that's how Python imports packages with dashes in the name.
IMPORT_NAME="$(echo "$PKG_NAME" | sed s/-/_/g)"
poetry run python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
- name: Import test dependencies
run: poetry install --with test,test_integration
working-directory: ${{ inputs.working-directory }}
# Overwrite the local version of the package with the test PyPI version.
- name: Import published package (again)
working-directory: ${{ inputs.working-directory }}
shell: bash
env:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION"
- name: Run unit tests
run: make tests
working-directory: ${{ inputs.working-directory }}
- name: Run integration tests
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
env:
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
publish:
needs:


@@ -11,6 +11,10 @@ on:
required: false
type: string
description: "Relative path to the langchain library folder"
langchain-core-location:
required: false
type: string
description: "Relative path to the langchain core library folder"
env:
POETRY_VERSION: "1.6.1"
@@ -42,7 +46,7 @@ jobs:
- name: Install dependencies
shell: bash
run: poetry install --with test
run: poetry install
- name: Install langchain editable
working-directory: ${{ inputs.working-directory }}
@@ -52,6 +56,14 @@ jobs:
run: |
poetry run pip install -e "$LANGCHAIN_LOCATION"
- name: Install langchain core editable
working-directory: ${{ inputs.working-directory }}
if: ${{ inputs.langchain-core-location }}
env:
LANGCHAIN_CORE_LOCATION: ${{ inputs.langchain-core-location }}
run: |
poetry run pip install -e "$LANGCHAIN_CORE_LOCATION"
- name: Run core tests
shell: bash
run: |


@@ -1,47 +0,0 @@
---
name: Check library diffs

on:
  push:
    branches: [master]
  pull_request:
    paths:
      - ".github/actions/**"
      - ".github/tools/**"
      - ".github/workflows/**"
      - "libs/**"

# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - id: files
        uses: Ana06/get-changed-files@v2.2.0
      - id: set-matrix
        run: echo "dirs-to-run=$(python .github/scripts/check_diff.py ${{ steps.files.outputs.all }})" >> $GITHUB_OUTPUT
    outputs:
      dirs-to-run: ${{ steps.set-matrix.outputs.dirs-to-run }}
  ci:
    needs: [ build ]
    strategy:
      matrix:
        working-directory: ${{ fromJson(needs.build.outputs.dirs-to-run) }}
    uses: ./.github/workflows/_all_ci.yml
    with:
      working-directory: ${{ matrix.working-directory }}

.github/workflows/langchain_ci.yml (new file, 154 lines)

@@ -0,0 +1,154 @@
---
name: libs/langchain CI

on:
  push:
    branches: [ master ]
  pull_request:
    paths:
      - '.github/actions/poetry_setup/action.yml'
      - '.github/tools/**'
      - '.github/workflows/_lint.yml'
      - '.github/workflows/_test.yml'
      - '.github/workflows/_pydantic_compatibility.yml'
      - '.github/workflows/langchain_ci.yml'
      - 'libs/*'
      - 'libs/langchain/**'
  workflow_dispatch:  # Allows to trigger the workflow manually in GitHub UI

# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  POETRY_VERSION: "1.6.1"
  WORKDIR: "libs/langchain"

jobs:
  lint:
    uses:
      ./.github/workflows/_lint.yml
    with:
      working-directory: libs/langchain
      langchain-core-location: ../core
    secrets: inherit
  test:
    uses:
      ./.github/workflows/_test.yml
    with:
      working-directory: libs/langchain
      langchain-core-location: ../core
    secrets: inherit
  compile-integration-tests:
    uses:
      ./.github/workflows/_compile_integration_test.yml
    with:
      working-directory: libs/langchain
      langchain-core-location: ../core
    secrets: inherit
  pydantic-compatibility:
    uses:
      ./.github/workflows/_pydantic_compatibility.yml
    with:
      working-directory: libs/langchain
      langchain-core-location: ../core
    secrets: inherit

  # It's possible that langchain works fine with the latest *published* langchain-core,
  # but is broken with the langchain-core on `master`.
  #
  # We want to catch situations like that *before* releasing a new langchain-core, hence this test.
  test-with-latest-langchain-core:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ${{ env.WORKDIR }}
    strategy:
      matrix:
        python-version:
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
    name: test with unpublished langchain-core - Python ${{ matrix.python-version }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: ${{ env.WORKDIR }}
          cache-key: unpublished-langchain-core
      - name: Install dependencies
        shell: bash
        run: |
          echo "Running tests with unpublished langchain, installing dependencies with poetry..."
          poetry install

          echo "Editably installing langchain-core outside of poetry, to avoid messing up lockfile..."
          poetry run pip install -e ../core
      - name: Run tests
        run: make test
  extended-tests:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ${{ env.WORKDIR }}
    strategy:
      matrix:
        python-version:
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
    name: Python ${{ matrix.python-version }} extended tests
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: libs/langchain
          cache-key: extended
      - name: Install dependencies
        shell: bash
        run: |
          echo "Running extended tests, installing dependencies with poetry..."
          poetry install -E extended_testing
      - name: Install langchain core editable
        shell: bash
        run: |
          poetry run pip install -e ../core
      - name: Run extended tests
        run: make extended_tests
      - name: Ensure the tests did not create any additional files
        shell: bash
        run: |
          set -eu

          STATUS="$(git status)"
          echo "$STATUS"

          # grep will exit non-zero if the target message isn't found,
          # and `set -e` above will cause the step to fail.
          echo "$STATUS" | grep 'nothing to commit, working tree clean'

.github/workflows/langchain_cli_ci.yml (new file, 47 lines)

@@ -0,0 +1,47 @@
---
name: libs/cli CI

on:
  push:
    branches: [ master ]
  pull_request:
    paths:
      - '.github/actions/poetry_setup/action.yml'
      - '.github/tools/**'
      - '.github/workflows/_lint.yml'
      - '.github/workflows/_test.yml'
      - '.github/workflows/_pydantic_compatibility.yml'
      - '.github/workflows/langchain_cli_ci.yml'
      - 'libs/cli/**'
      - 'libs/*'
  workflow_dispatch:  # Allows to trigger the workflow manually in GitHub UI

# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  POETRY_VERSION: "1.6.1"
  WORKDIR: "libs/cli"

jobs:
  lint:
    uses:
      ./.github/workflows/_lint.yml
    with:
      working-directory: libs/cli
      langchain-location: ../langchain
    secrets: inherit
  test:
    uses:
      ./.github/workflows/_test.yml
    with:
      working-directory: libs/cli
    secrets: inherit


@@ -1,13 +0,0 @@
---
name: libs/community Release

on:
  workflow_dispatch:  # Allows to trigger the workflow manually in GitHub UI

jobs:
  release:
    uses:
      ./.github/workflows/_release.yml
    with:
      working-directory: libs/community
    secrets: inherit

.github/workflows/langchain_core_ci.yml (new file, 52 lines)

@@ -0,0 +1,52 @@
---
name: libs/langchain core CI

on:
  push:
    branches: [ master ]
  pull_request:
    paths:
      - '.github/actions/poetry_setup/action.yml'
      - '.github/tools/**'
      - '.github/workflows/_lint.yml'
      - '.github/workflows/_test.yml'
      - '.github/workflows/_pydantic_compatibility.yml'
      - '.github/workflows/langchain_core_ci.yml'
      - 'libs/core/**'
  workflow_dispatch:  # Allows to trigger the workflow manually in GitHub UI

# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  POETRY_VERSION: "1.6.1"
  WORKDIR: "libs/core"

jobs:
  lint:
    uses:
      ./.github/workflows/_lint.yml
    with:
      working-directory: libs/core
    secrets: inherit
  test:
    uses:
      ./.github/workflows/_test.yml
    with:
      working-directory: libs/core
    secrets: inherit
  pydantic-compatibility:
    uses:
      ./.github/workflows/_pydantic_compatibility.yml
    with:
      working-directory: libs/core
    secrets: inherit


@@ -0,0 +1,141 @@
---
name: libs/experimental CI

on:
  push:
    branches: [ master ]
  pull_request:
    paths:
      - '.github/actions/poetry_setup/action.yml'
      - '.github/tools/**'
      - '.github/workflows/_lint.yml'
      - '.github/workflows/_test.yml'
      - '.github/workflows/langchain_experimental_ci.yml'
      - 'libs/*'
      - 'libs/experimental/**'
  workflow_dispatch:  # Allows to trigger the workflow manually in GitHub UI

# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  POETRY_VERSION: "1.6.1"
  WORKDIR: "libs/experimental"

jobs:
  lint:
    uses:
      ./.github/workflows/_lint.yml
    with:
      working-directory: libs/experimental
      langchain-location: ../langchain
      langchain-core-location: ../core
    secrets: inherit
  test:
    uses:
      ./.github/workflows/_test.yml
    with:
      working-directory: libs/experimental
      langchain-location: ../langchain
      langchain-core-location: ../core
    secrets: inherit
  compile-integration-tests:
    uses:
      ./.github/workflows/_compile_integration_test.yml
    with:
      working-directory: libs/experimental
    secrets: inherit

  # It's possible that langchain-experimental works fine with the latest *published* langchain,
  # but is broken with the langchain on `master`.
  #
  # We want to catch situations like that *before* releasing a new langchain, hence this test.
  test-with-latest-langchain:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ${{ env.WORKDIR }}
    strategy:
      matrix:
        python-version:
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
    name: test with unpublished langchain - Python ${{ matrix.python-version }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: ${{ env.WORKDIR }}
          cache-key: unpublished-langchain
      - name: Install dependencies
        shell: bash
        run: |
          echo "Running tests with unpublished langchain, installing dependencies with poetry..."
          poetry install

          echo "Editably installing langchain outside of poetry, to avoid messing up lockfile..."
          poetry run pip install -e ../langchain
          poetry run pip install -e ../core
      - name: Run tests
        run: make test
  extended-tests:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ${{ env.WORKDIR }}
    strategy:
      matrix:
        python-version:
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
    name: Python ${{ matrix.python-version }} extended tests
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: libs/experimental
          cache-key: extended
      - name: Install dependencies
        shell: bash
        run: |
          echo "Running extended tests, installing dependencies with poetry..."
          poetry install -E extended_testing
      - name: Run extended tests
        run: make extended_tests
      - name: Ensure the tests did not create any additional files
        shell: bash
        run: |
          set -eu

          STATUS="$(git status)"
          echo "$STATUS"

          # grep will exit non-zero if the target message isn't found,
          # and `set -e` above will cause the step to fail.
          echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -1,13 +0,0 @@
---
name: libs/core Release

on:
  workflow_dispatch:  # Allows to trigger the workflow manually in GitHub UI

jobs:
  release:
    uses:
      ./.github/workflows/_release.yml
    with:
      working-directory: libs/core
    secrets: inherit


@@ -52,7 +52,13 @@ jobs:
shell: bash
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
poetry install --with=test_integration,test
poetry install --with=test_integration
poetry run pip install google-cloud-aiplatform
poetry run pip install "boto3>=1.28.57"
if [[ ${{ matrix.python-version }} != "3.8" ]]
then
poetry run pip install fireworks-ai
fi
- name: Run tests
shell: bash
@@ -62,9 +68,7 @@ jobs:
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
AZURE_OPENAI_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_DEPLOYMENT_NAME }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
run: |
make scheduled_tests


@@ -33,4 +33,5 @@ jobs:
./.github/workflows/_lint.yml
with:
working-directory: templates
langchain-location: ../libs/langchain
secrets: inherit

.gitignore (3 lines changed)

@@ -167,7 +167,8 @@ docs/node_modules/
docs/.docusaurus/
docs/.cache-loader/
docs/_dist
docs/api_reference/*api_reference.rst
docs/api_reference/api_reference.rst
docs/api_reference/experimental_api_reference.rst
docs/api_reference/_build
docs/api_reference/*/
!docs/api_reference/_static/


@@ -13,7 +13,6 @@ build:
- python -mvirtualenv $READTHEDOCS_VIRTUALENV_PATH
- python -m pip install --upgrade --no-cache-dir pip setuptools
- python -m pip install --upgrade --no-cache-dir sphinx readthedocs-sphinx-ext
- python -m pip install ./libs/partners/*
- python -m pip install --exists-action=w --no-cache-dir -r docs/api_reference/requirements.txt
- python docs/api_reference/create_api_rst.py
- cat docs/api_reference/conf.py

LICENSE (12 lines changed)

@@ -1,6 +1,6 @@
MIT License
The MIT License
Copyright (c) LangChain, Inc.
Copyright (c) Harrison Chase
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -9,13 +9,13 @@ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -41,10 +41,9 @@ spell_fix:
# LINTING AND FORMATTING
######################

lint lint_package lint_tests:
lint:
	poetry run ruff docs templates cookbook
	poetry run ruff format docs templates cookbook --diff
	poetry run ruff --select I docs templates cookbook

format format_diff:
	poetry run ruff format docs templates cookbook


@@ -3,7 +3,8 @@
⚡ Building applications with LLMs through composability ⚡
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/langchain_ci.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/langchain_ci.yml)
[![Experimental CI](https://github.com/langchain-ai/langchain/actions/workflows/langchain_experimental_ci.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/langchain_experimental_ci.yml)
[![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
@@ -29,7 +30,7 @@ pip install langchain
With conda:
```bash
conda install langchain -c conda-forge
pip install langsmith && conda install langchain -c conda-forge
```
## 🤔 What is LangChain?
@@ -44,10 +45,7 @@ This framework consists of several parts.
- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
The LangChain libraries themselves are made up of several different packages.
- **[`langchain-core`](libs/core)**: Base abstractions and LangChain Expression Language.
- **[`langchain-community`](libs/community)**: Third party integrations.
- **[`langchain`](libs/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
**This repo contains the `langchain` ([here](libs/langchain)), `langchain-experimental` ([here](libs/experimental)), and `langchain-cli` ([here](libs/cli)) Python packages, as well as [LangChain Templates](templates).**
![LangChain Stack](docs/static/img/langchain_stack.png)
@@ -105,8 +103,4 @@ Please see [here](https://python.langchain.com) for full documentation, which in
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see [here](https://python.langchain.com/docs/contributing/).
## 🌟 Contributors
[![langchain contributors](https://contrib.rocks/image?repo=langchain-ai/langchain&max=2000)](https://github.com/langchain-ai/langchain/graphs/contributors)
For detailed information on how to contribute, see [here](.github/CONTRIBUTING.md).


@@ -164,8 +164,8 @@
")\n",
"\n",
"# Chain to query\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"sql_response = (\n",
" RunnablePassthrough.assign(schema=get_schema)\n",
@@ -217,7 +217,7 @@
" [\n",
" (\n",
" \"system\",\n",
" \"Given an input question and SQL response, convert it to a natural language answer. No pre-amble.\",\n",
" \"Given an input question and SQL response, convert it to a natural langugae answer. No pre-amble.\",\n",
" ),\n",
" (\"human\", template),\n",
" ]\n",
@@ -293,7 +293,7 @@
"memory = ConversationBufferMemory(return_messages=True)\n",
"\n",
"# Chain to query with memory\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain.schema.runnable import RunnableLambda\n",
"\n",
"sql_chain = (\n",
" RunnablePassthrough.assign(\n",
@@ -345,7 +345,7 @@
" [\n",
" (\n",
" \"system\",\n",
" \"Given an input question and SQL response, convert it to a natural language answer. No pre-amble.\",\n",
" \"Given an input question and SQL response, convert it to a natural langugae answer. No pre-amble.\",\n",
" ),\n",
" (\"human\", template),\n",
" ]\n",


@@ -46,7 +46,7 @@
"\n",
"---\n",
"\n",
"A separate cookbook highlights `Option 1` [here](https://github.com/langchain-ai/langchain/blob/master/cookbook/multi_modal_RAG_chroma.ipynb).\n",
"A seperate cookbook highlights `Option 1` [here](https://github.com/langchain-ai/langchain/blob/master/cookbook/multi_modal_RAG_chroma.ipynb).\n",
"\n",
"And option `Option 2` is appropriate for cases when a multi-modal LLM cannot be used for answer synthesis (e.g., cost, etc).\n",
"\n",
@@ -200,7 +200,7 @@
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"\n",
"\n",
"# Generate summaries of text elements\n",
@@ -270,7 +270,7 @@
"import base64\n",
"import os\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain.schema.messages import HumanMessage\n",
"\n",
"\n",
"def encode_image(image_path):\n",
@@ -355,9 +355,9 @@
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.schema.document import Document\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"\n",
"\n",
"def create_multi_vector_retriever(\n",
@@ -442,7 +442,7 @@
"import re\n",
"\n",
"from IPython.display import HTML, display\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain.schema.runnable import RunnableLambda, RunnablePassthrough\n",
"from PIL import Image\n",
"\n",
"\n",

File diff suppressed because one or more lines are too long


@@ -237,7 +237,7 @@
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser"
"from langchain.schema.output_parser import StrOutputParser"
]
},
{
@@ -320,9 +320,9 @@
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.schema.document import Document\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(collection_name=\"summaries\", embedding_function=OpenAIEmbeddings())\n",
@@ -374,7 +374,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"# Prompt template\n",
"template = \"\"\"Answer the question based only on the following context, which can include text and tables:\n",


@@ -213,7 +213,7 @@
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser"
"from langchain.schema.output_parser import StrOutputParser"
]
},
{
@@ -375,9 +375,9 @@
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.schema.document import Document\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(collection_name=\"summaries\", embedding_function=OpenAIEmbeddings())\n",
@@ -646,7 +646,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"# Prompt template\n",
"template = \"\"\"Answer the question based only on the following context, which can include text and tables:\n",


@@ -211,7 +211,7 @@
"source": [
"from langchain.chat_models import ChatOllama\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser"
"from langchain.schema.output_parser import StrOutputParser"
]
},
{
@@ -378,9 +378,9 @@
"\n",
"from langchain.embeddings import GPT4AllEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.schema.document import Document\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
@@ -532,7 +532,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"# Prompt template\n",
"template = \"\"\"Answer the question based only on the following context, which can include text and tables:\n",


@@ -162,7 +162,7 @@
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"\n",
"# Prompt\n",
"prompt_text = \"\"\"You are an assistant tasked with summarizing tables and text for retrieval. \\\n",
@@ -202,7 +202,7 @@
"import os\n",
"from io import BytesIO\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain.schema.messages import HumanMessage\n",
"from PIL import Image\n",
"\n",
"\n",
@@ -273,8 +273,8 @@
"from base64 import b64decode\n",
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.schema.document import Document\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_core.documents import Document\n",
"\n",
"\n",
"def create_multi_vector_retriever(\n",
@@ -475,7 +475,7 @@
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"# Prompt\n",
"template = \"\"\"Answer the question based only on the following context, which can include text and tables:\n",
@@ -521,7 +521,7 @@
"import re\n",
"\n",
"from langchain.schema import Document\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain.schema.runnable import RunnableLambda\n",
"\n",
"\n",
"def looks_like_base64(sb):\n",


@@ -34,12 +34,12 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": null,
"id": "5740fc70-c513-4ff4-9d72-cfc098f85fef",
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain docugami==0.0.8 dgml-utils==0.3.0 pydantic langchainhub chromadb hnswlib --upgrade --quiet"
"! pip install langchain docugami==0.0.4 dgml-utils==0.2.0 pydantic langchainhub chromadb --upgrade --quiet"
]
},
{
@@ -63,7 +63,7 @@
"1. Create an access token via the Developer Playground for your workspace. [Detailed instructions](https://help.docugami.com/home/docugami-api).\n",
"1. Add your documents (PDF \\[scanned or digital\\], DOC or DOCX) to Docugami for processing. There are two ways to do this:\n",
" 1. Use the simple Docugami web experience. [Detailed instructions](https://help.docugami.com/home/adding-documents).\n",
" 1. Use the [Docugami API](https://api-docs.docugami.com), specifically the [documents](https://api-docs.docugami.com/#tag/documents/operation/upload-document) endpoint. You can also use the [docugami python library](https://pypi.org/project/docugami/) as a convenient wrapper.\n",
" 1. Use the [Docugami API](https://api-docs.docugami.com), specifically the [documents](https://api-docs.docugami.com/#tag/documents/operation/upload-document) endpoint. Code samples are available for [python](../upload_file/) and [JavaScript](../../js/upload-file/) or you can use the [docugami](https://pypi.org/project/docugami/) python library.\n",
"\n",
"Once your documents are in Docugami, they are processed and organized into sets of similar documents, e.g. NDAs, Lease Agreements, and Service Agreements. Docugami is not limited to any particular types of documents, and the clusters created depend on your particular documents. You can [change the docset assignments](https://help.docugami.com/home/working-with-the-doc-sets-view) later if you wish. You can monitor file status in the simple Docugami webapp, or use a [webhook](https://api-docs.docugami.com/#tag/webhooks) to be informed when your documents are done processing.\n",
"\n",
@@ -76,7 +76,98 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 45,
"id": "fc0767d4-9155-4591-855c-ef2e14e0e10f",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import tempfile\n",
"from pathlib import Path\n",
"from pprint import pprint\n",
"from time import sleep\n",
"from typing import Dict, List\n",
"\n",
"import requests\n",
"from docugami import Docugami\n",
"from docugami.types import Document as DocugamiDocument\n",
"\n",
"api_key = os.environ.get(\"DOCUGAMI_API_KEY\")\n",
"if not api_key:\n",
" raise Exception(\"Please set Docugami API key environment variable\")\n",
"\n",
"client = Docugami()\n",
"\n",
"\n",
"def upload_files(local_paths: List[str], docset_name: str) -> List[DocugamiDocument]:\n",
" docset_list_response = client.docsets.list(name=docset_name)\n",
" if docset_list_response and docset_list_response.docsets:\n",
" # Docset already exists with this name\n",
" docset_id = docset_list_response.docsets[0]\n",
" else:\n",
" dg_docset = client.docsets.create(name=docset_name)\n",
" docset_id = dg_docset.id\n",
"\n",
" document_list_response = client.documents.list(limit=int(1e5))\n",
" dg_docs: List[DocugamiDocument] = []\n",
" if document_list_response and document_list_response.documents:\n",
" new_names = [Path(f).name for f in local_paths]\n",
"\n",
" dg_docs = [\n",
" d\n",
" for d in document_list_response.documents\n",
" if Path(d.name).name in new_names\n",
" ]\n",
" existing_names = [Path(d.name).name for d in dg_docs]\n",
"\n",
" # Upload any files not previously uploaded\n",
" for f in local_paths:\n",
" if Path(f).name not in existing_names:\n",
" dg_docs.append(\n",
" client.documents.contents.upload(\n",
" file=Path(f).absolute(),\n",
" docset_id=docset_id,\n",
" )\n",
" )\n",
" return dg_docs\n",
"\n",
"\n",
"def wait_for_xml(dg_docs: List[DocugamiDocument]) -> dict[str, str]:\n",
" dgml_paths: dict[str, str] = {}\n",
" while len(dgml_paths) < len(dg_docs):\n",
" for doc in dg_docs:\n",
" doc = client.documents.retrieve(doc.id) # update with latest\n",
" current_status = doc.status\n",
" if current_status == \"Error\":\n",
" raise Exception(\n",
" \"Document could not be processed, please confirm it is not a zero length, corrupt or password protected file\"\n",
" )\n",
" elif current_status == \"Ready\":\n",
" dgml_url = doc.docset.url + f\"/documents/{doc.id}/dgml\"\n",
" headers = {\"Authorization\": f\"Bearer {api_key}\"}\n",
" dgml_response = requests.get(dgml_url, headers=headers)\n",
" if not dgml_response.ok:\n",
" raise Exception(\n",
" f\"Could not download DGML artifact {dgml_url}: {dgml_response.status_code}\"\n",
" )\n",
" dgml_contents = dgml_response.text\n",
" with tempfile.NamedTemporaryFile(delete=False, mode=\"w\") as temp_file:\n",
" temp_file.write(dgml_contents)\n",
" temp_file_path = temp_file.name\n",
" dgml_paths[doc.name] = temp_file_path\n",
"\n",
" print(f\"{len(dgml_paths)} docs done processing out of {len(dg_docs)}...\")\n",
"\n",
" if len(dgml_paths) == len(dg_docs):\n",
" # done\n",
" return dgml_paths\n",
" else:\n",
" sleep(30) # try again in a bit"
]
},
{
"cell_type": "code",
"execution_count": 46,
"id": "ce0b2b21-7623-46e7-ae2c-3a9f67e8b9b9",
"metadata": {},
"outputs": [
@@ -84,22 +175,18 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'Report_CEN23LA277_192541.pdf': '/tmp/tmpa0c77x46',\n",
" 'Report_CEN23LA338_192753.pdf': '/tmp/tmpaftfld2w',\n",
" 'Report_CEN23LA363_192876.pdf': '/tmp/tmpn7gp6be2',\n",
" 'Report_CEN23LA394_192995.pdf': '/tmp/tmp9udymprf',\n",
" 'Report_ERA23LA114_106615.pdf': '/tmp/tmpxdjbh4r_',\n",
" 'Report_WPR23LA254_192532.pdf': '/tmp/tmpz6h75a0h'}\n"
"6 docs done processing out of 6...\n",
"{'Report_CEN23LA277_192541.pdf': '/var/folders/0h/6cchx4k528bdj8cfcsdm0dqr0000gn/T/tmpel3o0rpg',\n",
" 'Report_CEN23LA338_192753.pdf': '/var/folders/0h/6cchx4k528bdj8cfcsdm0dqr0000gn/T/tmpgugb9ut1',\n",
" 'Report_CEN23LA363_192876.pdf': '/var/folders/0h/6cchx4k528bdj8cfcsdm0dqr0000gn/T/tmp3_gf2sky',\n",
" 'Report_CEN23LA394_192995.pdf': '/var/folders/0h/6cchx4k528bdj8cfcsdm0dqr0000gn/T/tmpwmfgoxkl',\n",
" 'Report_ERA23LA114_106615.pdf': '/var/folders/0h/6cchx4k528bdj8cfcsdm0dqr0000gn/T/tmptibrz2yu',\n",
" 'Report_WPR23LA254_192532.pdf': '/var/folders/0h/6cchx4k528bdj8cfcsdm0dqr0000gn/T/tmpvazrbbsi'}\n"
]
}
],
"source": [
"from pprint import pprint\n",
"\n",
"from docugami import Docugami\n",
"from docugami.lib.upload import upload_to_named_docset, wait_for_dgml\n",
"\n",
"#### START DOCSET INFO (please change this values as needed)\n",
"#### START DOCSET INFO (please change)\n",
"DOCSET_NAME = \"NTSB Aviation Incident Reports\"\n",
"FILE_PATHS = [\n",
" \"/Users/tjaffri/ntsb/Report_CEN23LA277_192541.pdf\",\n",
@@ -110,15 +197,13 @@
" \"/Users/tjaffri/ntsb/Report_WPR23LA254_192532.pdf\",\n",
"]\n",
"\n",
"# Note: Please specify ~6 (or more!) similar files to process together as a document set\n",
"# This is currently a requirement for Docugami to automatically detect motifs\n",
"# across the document set to generate a semantic XML Knowledge Graph.\n",
"assert len(FILE_PATHS) > 5, \"Please provide at least 6 files\"\n",
"assert (\n",
" len(FILE_PATHS) > 5\n",
") # Please specify ~6 (or more!) similar files to process together as a document set\n",
"#### END DOCSET INFO\n",
"\n",
"dg_client = Docugami()\n",
"dg_docs = upload_to_named_docset(dg_client, FILE_PATHS, DOCSET_NAME)\n",
"dgml_paths = wait_for_dgml(dg_client, dg_docs)\n",
"dg_docs = upload_files(FILE_PATHS, DOCSET_NAME)\n",
"dgml_paths = wait_for_xml(dg_docs)\n",
"\n",
"pprint(dgml_paths)"
]
@@ -143,7 +228,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 47,
"id": "05fcdd57-090f-44bf-a1fb-2c3609c80e34",
"metadata": {},
"outputs": [
@@ -152,13 +237,13 @@
"output_type": "stream",
"text": [
"found 30 chunks, here are the first few\n",
"<AviationInvestigationFinalReport-section>Aviation </AviationInvestigationFinalReport-section>Investigation Final Report\n",
"<table><tbody><tr><td>Location: </td> <td><Location><TownName>Elbert</TownName>, <USState>Colorado </USState></Location></td> <td>Accident Number: </td> <td><AccidentNumber>CEN23LA277 </AccidentNumber></td></tr> <tr><td><LocationDateTime>Date &amp; Time: </LocationDateTime></td> <td><DateTime><EventDate>June 26, 2023</EventDate>, <EventTime>11:00 Local </EventTime></DateTime></td> <td><DateTimeAccidentNumber>Registration: </DateTimeAccidentNumber></td> <td><Registration>N23161 </Registration></td></tr> <tr><td><LocationAircraft>Aircraft: </LocationAircraft></td> <td><AircraftType>Piper <AircraftType>J3C-50 </AircraftType></AircraftType></td> <td><AircraftAccidentNumber>Aircraft Damage: </AircraftAccidentNumber></td> <td><AircraftDamage>Substantial </AircraftDamage></td></tr> <tr><td><LocationDefiningEvent>Defining Event: </LocationDefiningEvent></td> <td><DefiningEvent>Nose over/nose down </DefiningEvent></td> <td><DefiningEventAccidentNumber>Injuries: </DefiningEventAccidentNumber></td> <td><Injuries><Minor>1 </Minor>Minor </Injuries></td></tr> <tr><td><LocationFlightConductedUnder>Flight Conducted Under: </LocationFlightConductedUnder></td> <td><FlightConductedUnder><Part91-cell>Part <RegulationPart>91</RegulationPart>: General aviation - Personal </Part91-cell></FlightConductedUnder></td><td/><td><FlightConductedUnderCEN23LA277/></td></tr></tbody></table>\n",
"Aviation Investigation Final Report\n",
"<table><tbody><tr><td>Location: </td> <td><Location><TownName>Elbert</TownName>, <USState>Colorado </USState></Location></td> <td>Accident Number: </td> <td><AccidentNumber>CEN23LA277 </AccidentNumber></td></tr> <tr><td><LocationDateTime>Date &amp; Time: </LocationDateTime></td> <td><DateTime><EventDate>June 26, 2023</EventDate>, <EventTime>11:00 Local </EventTime></DateTime></td> <td><DateTimeAccidentNumber>Registration: </DateTimeAccidentNumber></td> <td><Registration>N23161 </Registration></td></tr> <tr><td><LocationAircraft>Aircraft: </LocationAircraft></td> <td><Aircraft>Piper <AircraftType>J3C-50 </AircraftType></Aircraft></td> <td><AircraftAccidentNumber>Aircraft Damage: </AircraftAccidentNumber></td> <td><AircraftDamage>Substantial </AircraftDamage></td></tr> <tr><td><LocationDefiningEvent>Defining Event: </LocationDefiningEvent></td> <td><DefiningEvent>Nose over/nose down </DefiningEvent></td> <td><DefiningEventAccidentNumber>Injuries: </DefiningEventAccidentNumber></td> <td><Injuries><Minor>1 </Minor>Minor </Injuries></td></tr> <tr><td><LocationFlightConductedUnder>Flight Conducted Under: </LocationFlightConductedUnder></td> <td><Part91-cell>Part <RegulationPart>91</RegulationPart>: General aviation - Personal </Part91-cell></td><td/><td><FlightConductedUnderCEN23LA277/></td></tr></tbody></table>\n",
"Analysis\n",
"<TakeoffAccident> <Analysis>The pilot reported that, as the tail lifted during takeoff, the airplane veered left. He attempted to correct with full right rudder and full brakes. However, the airplane subsequently nosed over resulting in substantial damage to the fuselage, lift struts, rudder, and vertical stabilizer. </Analysis></TakeoffAccident>\n",
"<TakeoffAccident> The pilot reported that, as the tail lifted during takeoff, the airplane veered left. He attempted to correct with full right rudder and full brakes. However, the airplane subsequently nosed over resulting in substantial damage to the fuselage, lift struts, rudder, and vertical stabilizer. </TakeoffAccident>\n",
"<AircraftCondition> The pilot reported that there were no preaccident mechanical malfunctions or anomalies with the airplane that would have precluded normal operation. </AircraftCondition>\n",
"<WindConditions> At about the time of the accident, wind was from <WindDirection>180</WindDirection>° at <WindConditions>5 </WindConditions>knots. The pilot decided to depart on runway <Runway>35 </Runway>due to the prevailing airport traffic. He stated that departing with “more favorable wind conditions” may have prevented the accident. </WindConditions>\n",
"<ProbableCauseAndFindings-section>Probable Cause and Findings </ProbableCauseAndFindings-section>\n",
"Probable Cause and Findings\n",
"<ProbableCause> The <ProbableCause>National Transportation Safety Board </ProbableCause>determines the probable cause(s) of this accident to be: </ProbableCause>\n",
"<AccidentCause> The pilot's loss of directional control during takeoff and subsequent excessive use of brakes which resulted in a nose-over. Contributing to the accident was his decision to takeoff downwind. </AccidentCause>\n",
"Page 1 of <PageNumber>5 </PageNumber>\n"
@@ -166,8 +251,6 @@
}
],
"source": [
"from pathlib import Path\n",
"\n",
"from dgml_utils.segmentation import get_chunks_str\n",
"\n",
"# Here we just read the first file, you can do the same for others\n",
@@ -200,7 +283,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 48,
"id": "8a4b49e0-de78-4790-a930-ad7cf324697a",
"metadata": {},
"outputs": [
@@ -260,7 +343,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 109,
"id": "7b697d30-1e94-47f0-87e8-f81d4b180da2",
"metadata": {},
"outputs": [
@@ -270,14 +353,12 @@
"39"
]
},
"execution_count": 6,
"execution_count": 109,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import requests\n",
"\n",
"# Download XML from known URL\n",
"dgml = requests.get(\n",
" \"https://raw.githubusercontent.com/docugami/dgml-utils/main/python/tests/test_data/article/Jane%20Doe.xml\"\n",
@@ -288,7 +369,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 98,
"id": "14714576-6e1d-499b-bcc8-39140bb2fd78",
"metadata": {},
"outputs": [
@@ -298,7 +379,7 @@
"{'h1': 9, 'div': 12, 'p': 3, 'lim h1': 9, 'lim': 1, 'table': 1, 'h1 div': 4}"
]
},
"execution_count": 7,
"execution_count": 98,
"metadata": {},
"output_type": "execute_result"
}
@@ -319,7 +400,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 99,
"id": "5462f29e-fd59-4e0e-9493-ea3b560e523e",
"metadata": {},
"outputs": [
@@ -352,7 +433,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 100,
"id": "2b4ece00-2e43-4254-adc9-66dbb79139a6",
"metadata": {},
"outputs": [
@@ -390,7 +471,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 101,
"id": "08350119-aa22-4ec1-8f65-b1316a0d4123",
"metadata": {},
"outputs": [
@@ -418,7 +499,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 112,
"id": "bcac8294-c54a-4b6e-af9d-3911a69620b2",
"metadata": {},
"outputs": [
@@ -465,7 +546,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 113,
"id": "8e275736-3408-4d7a-990e-4362c88e81f8",
"metadata": {},
"outputs": [],
@@ -476,7 +557,7 @@
" HumanMessagePromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
")\n",
"from langchain_core.output_parsers import StrOutputParser"
"from langchain.schema.output_parser import StrOutputParser"
]
},
{
@@ -496,7 +577,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 114,
"id": "1b12536a-1303-41ad-9948-4eb5a5f32614",
"metadata": {},
"outputs": [],
@@ -513,7 +594,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 115,
"id": "8d8b567c-b442-4bf0-b639-04bd89effc62",
"metadata": {},
"outputs": [],
@@ -538,7 +619,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 116,
"id": "346c3a02-8fea-4f75-a69e-fc9542b99dbc",
"metadata": {},
"outputs": [],
@@ -547,9 +628,9 @@
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.schema.document import Document\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.vectorstores.chroma import Chroma\n",
"from langchain_core.documents import Document\n",
"\n",
"\n",
"def build_retriever(text_elements, tables, table_summaries):\n",
@@ -600,12 +681,12 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 117,
"id": "f2489de4-51e3-48b4-bbcd-ed9171deadf3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"system_prompt = SystemMessagePromptTemplate.from_template(\n",
" \"You are a helpful assistant that answers questions based on provided context. Your provided context can include text or tables, \"\n",
@@ -644,17 +725,10 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 120,
"id": "636e992f-823b-496b-a082-8b4fcd479de5",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Number of requested results 4 is greater than number of elements in index 1, updating n_results = 1\n"
]
},
{
"name": "stdout",
"output_type": "stream",
@@ -696,7 +770,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 121,
"id": "0e4a2f43-dd48-4ae3-8e27-7e87d169965f",
"metadata": {},
"outputs": [
@@ -706,7 +780,7 @@
"669"
]
},
"execution_count": 20,
"execution_count": 121,
"metadata": {},
"output_type": "execute_result"
}
@@ -721,7 +795,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 124,
"id": "56b78fb3-603d-4343-ae72-be54a3c5dd72",
"metadata": {},
"outputs": [
@@ -746,7 +820,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 125,
"id": "d3cc5ba9-8553-4eda-a5d1-b799751186af",
"metadata": {},
"outputs": [],
@@ -758,7 +832,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 126,
"id": "d7c73faf-74cb-400d-8059-b69e2493de38",
"metadata": {},
"outputs": [],
@@ -770,7 +844,7 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 127,
"id": "4c553722-be42-42ce-83b8-76a17f323f1c",
"metadata": {},
"outputs": [],
@@ -780,7 +854,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 128,
"id": "65dce40b-f1c3-494a-949e-69a9c9544ddb",
"metadata": {},
"outputs": [
@@ -790,7 +864,7 @@
"'The number of training tokens for LLaMA2 is 2.0T for all parameter sizes.'"
]
},
"execution_count": 25,
"execution_count": 128,
"metadata": {},
"output_type": "execute_result"
}
@@ -885,51 +959,14 @@
" </tr>\n",
" </tbody>\n",
"</table>\n",
"```"
"``"
]
},
{
"cell_type": "markdown",
"id": "867f8e11-384c-4aa1-8b3e-c59fb8d5fd7d",
"id": "0879349e-7298-4f2c-b246-f1142e97a8e5",
"metadata": {},
"source": [
"Finally, you can ask other questions that rely on more subtle parsing of the table, e.g.:"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "d38f1459-7d2b-40df-8dcd-e747f85eb144",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The learning rate for LLaMA2 was 3.0 × 104 for the 7B and 13B models, and 1.5 × 104 for the 34B and 70B models.'"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llama2_chain.invoke(\"What was the learning rate for LLaMA2?\")"
]
},
{
"cell_type": "markdown",
"id": "94826165",
"metadata": {},
"source": [
"## Docugami KG-RAG Template\n",
"\n",
"Docugami also provides a [langchain template](https://github.com/docugami/langchain-template-docugami-kg-rag) that you can integrate into your langchain projects.\n",
"\n",
"Here's a walkthrough of how you can do this.\n",
"\n",
"[![Docugami KG-RAG Walkthrough](https://img.youtube.com/vi/xOHOmL1NFMg/0.jpg)](https://www.youtube.com/watch?v=xOHOmL1NFMg)\n"
]
"source": []
}
],
"metadata": {

View File

@@ -23,7 +23,7 @@
"\n",
"from langchain.chains.openai_tools import create_extraction_chain_pydantic\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.pydantic_v1 import BaseModel"
"from langchain.pydantic_v1 import BaseModel"
]
},
{
@@ -151,11 +151,11 @@
"\n",
"from langchain.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain.utils.openai_functions import convert_pydantic_to_openai_tool\n",
"from langchain_core.runnables import Runnable\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain.schema.runnable import Runnable\n",
"from langchain.pydantic_v1 import BaseModel\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.messages import SystemMessage\n",
"from langchain_core.language_models import BaseLanguageModel\n",
"from langchain.schema.messages import SystemMessage\n",
"from langchain.schema.language_model import BaseLanguageModel\n",
"\n",
"_EXTRACTION_TEMPLATE = \"\"\"Extract and save the relevant entities mentioned \\\n",
"in the following passage together with their properties.\n",

View File

@@ -51,7 +51,7 @@
"\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\")"
"llm = OpenAI(model=\"text-davinci-003\")"
]
},
{

View File

@@ -92,7 +92,7 @@
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.messages import HumanMessage, SystemMessage"
"from langchain.schema.messages import HumanMessage, SystemMessage"
]
},
{

View File

@@ -42,7 +42,7 @@
"* We will use Open Clip multi-modal embeddings.\n",
"* We will use [Chroma](https://www.trychroma.com/) with support for multi-modal.\n",
"\n",
"A separate cookbook highlights `Options 2 and 3` [here](https://github.com/langchain-ai/langchain/blob/master/cookbook/Multi_modal_RAG.ipynb).\n",
"A seperate cookbook highlights `Options 2 and 3` [here](https://github.com/langchain-ai/langchain/blob/master/cookbook/Multi_modal_RAG.ipynb).\n",
"\n",
"![chroma_multimodal.png](attachment:1920fda3-1808-407c-9820-f518c9c6f566.png)\n",
"\n",
@@ -316,9 +316,9 @@
"from operator import itemgetter\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain.schema.messages import HumanMessage, SystemMessage\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnableLambda, RunnablePassthrough\n",
"\n",
"\n",
"def prompt_func(data_dict):\n",

View File

@@ -31,7 +31,7 @@
"source": [
"import re\n",
"\n",
"from IPython.display import Image, display\n",
"from IPython.display import Image\n",
"from steamship import Block, Steamship"
]
},
@@ -180,7 +180,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.3"
}
},
"nbformat": 4,

View File

@@ -29,7 +29,7 @@
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.messages import HumanMessage, SystemMessage"
"from langchain.schema.messages import HumanMessage, SystemMessage"
]
},
{
@@ -252,7 +252,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.agents import AgentFinish\n",
"from langchain.schema.agent import AgentFinish\n",
"\n",
"\n",
"def execute_agent(agent, tools, input):\n",
@@ -457,8 +457,8 @@
"\n",
"from langchain.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.pydantic_v1 import BaseModel, Field\n",
"from langchain.utils.openai_functions import convert_pydantic_to_openai_tool\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class GetCurrentWeather(BaseModel):\n",

View File

@@ -29,11 +29,11 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents.tools import Tool\n",
"from langchain.chains import LLMMathChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI\n",
"from langchain.utilities import DuckDuckGoSearchAPIWrapper\n",
"from langchain_core.tools import Tool\n",
"from langchain_experimental.plan_and_execute import (\n",
" PlanAndExecute,\n",
" load_agent_executor,\n",

View File

@@ -37,8 +37,7 @@
"source": [
"#!pip install qianfan\n",
"#!pip install bce-python-sdk\n",
"#!pip install elasticsearch == 7.11.0\n",
"#!pip install sentence-transformers"
"#!pip install elasticsearch == 7.11.0"
]
},
{
@@ -55,10 +54,8 @@
"metadata": {},
"outputs": [],
"source": [
"import sentence_transformers\n",
"from baidubce.auth.bce_credentials import BceCredentials\n",
"from baidubce.bce_client_configuration import BceClientConfiguration\n",
"from langchain.chains.retrieval_qa import RetrievalQA\n",
"from langchain.document_loaders.baiducloud_bos_directory import BaiduBOSDirectoryLoader\n",
"from langchain.embeddings.huggingface import HuggingFaceEmbeddings\n",
"from langchain.llms.baidu_qianfan_endpoint import QianfanLLMEndpoint\n",
@@ -164,22 +161,15 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.17"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
@@ -187,5 +177,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 2
}

View File

@@ -87,7 +87,7 @@
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.output_parsers import StrOutputParser"
"from langchain.schema.output_parser import StrOutputParser"
]
},
{

View File

@@ -133,7 +133,7 @@
"from tqdm import tqdm\n",
"\n",
"for i in tqdm(range(len(title_embeddings))):\n",
" title = song_titles[i].replace(\"'\", \"''\")\n",
" title = titles[i].replace(\"'\", \"''\")\n",
" embedding = title_embeddings[i]\n",
" sql_command = (\n",
" f'UPDATE \"Track\" SET \"embeddings\" = ARRAY{embedding} WHERE \"Name\" ='\n",
@@ -268,8 +268,8 @@
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"db = SQLDatabase.from_uri(\n",
" CONNECTION_STRING\n",
@@ -324,7 +324,7 @@
"source": [
"import re\n",
"\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain.schema.runnable import RunnableLambda\n",
"\n",
"\n",
"def replace_brackets(match):\n",
@@ -681,9 +681,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.8.18"
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 2
}

View File

@@ -33,9 +33,9 @@
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.utilities import DuckDuckGoSearchAPIWrapper\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough"
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"from langchain.utilities import DuckDuckGoSearchAPIWrapper"
]
},
{

View File

@@ -19,8 +19,8 @@
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompt_values import PromptValue"
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.prompt import PromptValue"
]
},
{

View File

@@ -25,8 +25,8 @@
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda"
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnableLambda"
]
},
{

View File

@@ -26,7 +26,7 @@
"source": [
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=1, max_tokens=512, model=\"gpt-3.5-turbo-instruct\")"
"llm = OpenAI(temperature=1, max_tokens=512, model=\"text-davinci-003\")"
]
},
{

View File

@@ -187,7 +187,7 @@
" for key in path:\n",
" try:\n",
" current = current[key]\n",
" except KeyError:\n",
" except:\n",
" return None\n",
" return current\n",
"\n",

View File

@@ -9,15 +9,13 @@ SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)"
cd "${SCRIPT_DIR}"
mkdir -p ../_dist
rsync -ruv --exclude node_modules --exclude api_reference --exclude .venv --exclude .docusaurus . ../_dist
cp -r . ../_dist
cd ../_dist
poetry run python scripts/model_feat_table.py
poetry run nbdoc_build --srcdir docs
cp ../cookbook/README.md src/pages/cookbook.mdx
mkdir -p docs/templates
cp ../templates/docs/INDEX.md docs/templates/index.md
poetry run python scripts/copy_templates.py
cp ../.github/CONTRIBUTING.md docs/contributing.md
wget https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
yarn
quarto preview docs
poetry run python scripts/generate_api_reference_links.py
yarn install
yarn start

View File

@@ -1,3 +1,49 @@
# LangChain Documentation
# Website
For more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/documentation)
This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
### Installation
```
$ yarn
```
### Local Development
```
$ yarn start
```
This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
### Build
```
$ yarn build
```
This command generates static content into the `build` directory and can be served using any static contents hosting service.
### Deployment
Using SSH:
```
$ USE_SSH=true yarn deploy
```
Not using SSH:
```
$ GIT_USER=<Your GitHub username> yarn deploy
```
If you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.
### Continuous Integration
Some common defaults for linting/formatting have been set for you. If you integrate your project with an open-source Continuous Integration system (e.g. Travis CI, CircleCI), you may check for issues using the following command.
```
$ yarn ci
```

View File

@@ -72,8 +72,8 @@ def setup(app):
# -- Project information -----------------------------------------------------
project = "🦜🔗 LangChain"
copyright = "2023, LangChain, Inc."
author = "LangChain, Inc."
copyright = "2023, Harrison Chase"
author = "Harrison Chase"
version = data["tool"]["poetry"]["version"]
release = version
@@ -136,25 +136,18 @@ html_theme_path = ["themes"]
# redirects dictionary maps from old links to new links
html_additional_pages = {}
redirects = {
"index": "langchain_api_reference",
"index": "api_reference",
}
for old_link in redirects:
html_additional_pages[old_link] = "redirects.html"
partners_dir = Path(__file__).parent.parent.parent / "libs/partners"
partners = [
(p.name, p.name.replace("-", "_") + "_api_reference")
for p in partners_dir.iterdir()
]
html_context = {
"display_github": True, # Integrate GitHub
"github_user": "langchain-ai", # Username
"github_user": "hwchase17", # Username
"github_repo": "langchain", # Repo name
"github_version": "master", # Version
"conf_py_path": "/docs/api_reference", # Path in the checkout to the docs root
"redirects": redirects,
"partners": partners,
}
# Add any paths that contain custom static files (such as style sheets) here,

View File

@@ -1,18 +1,23 @@
"""Script for auto-generating api_reference.rst."""
import importlib
import inspect
import os
import typing
from enum import Enum
from pathlib import Path
from typing import Dict, List, Literal, Optional, Sequence, TypedDict, Union
import toml
from pydantic import BaseModel
ROOT_DIR = Path(__file__).parents[2].absolute()
HERE = Path(__file__).parent
PKG_DIR = ROOT_DIR / "libs" / "langchain" / "langchain"
EXP_DIR = ROOT_DIR / "libs" / "experimental" / "langchain_experimental"
CORE_DIR = ROOT_DIR / "libs" / "core" / "langchain_core"
WRITE_FILE = HERE / "api_reference.rst"
EXP_WRITE_FILE = HERE / "experimental_api_reference.rst"
CORE_WRITE_FILE = HERE / "core_api_reference.rst"
ClassKind = Literal["TypedDict", "Regular", "Pydantic", "enum"]
@@ -191,15 +196,11 @@ def _load_package_modules(
return modules_by_namespace
def _construct_doc(
package_namespace: str,
members_by_namespace: Dict[str, ModuleMembers],
package_version: str,
) -> str:
def _construct_doc(pkg: str, members_by_namespace: Dict[str, ModuleMembers]) -> str:
"""Construct the contents of the reference.rst file for the given package.
Args:
package_namespace: The package top level namespace
pkg: The package name
members_by_namespace: The members of the package, dict organized by top level
module contains a list of classes and functions
inside of the top level namespace.
@@ -209,7 +210,7 @@ def _construct_doc(
"""
full_doc = f"""\
=======================
``{package_namespace}`` {package_version}
``{pkg}`` API Reference
=======================
"""
@@ -221,13 +222,13 @@ def _construct_doc(
functions = _members["functions"]
if not (classes or functions):
continue
section = f":mod:`{package_namespace}.{module}`"
section = f":mod:`{pkg}.{module}`"
underline = "=" * (len(section) + 1)
full_doc += f"""\
{section}
{underline}
.. automodule:: {package_namespace}.{module}
.. automodule:: {pkg}.{module}
:no-members:
:no-inherited-members:
@@ -237,7 +238,7 @@ def _construct_doc(
full_doc += f"""\
Classes
--------------
.. currentmodule:: {package_namespace}
.. currentmodule:: {pkg}
.. autosummary::
:toctree: {module}
@@ -269,7 +270,7 @@ Classes
full_doc += f"""\
Functions
--------------
.. currentmodule:: {package_namespace}
.. currentmodule:: {pkg}
.. autosummary::
:toctree: {module}
@@ -281,71 +282,57 @@ Functions
return full_doc
def _build_rst_file(package_name: str = "langchain") -> None:
"""Create a rst file for building of documentation.
Args:
package_name: Can be either "langchain" or "core" or "experimental".
"""
package_dir = _package_dir(package_name)
package_members = _load_package_modules(package_dir)
package_version = _get_package_version(package_dir)
with open(_out_file_path(package_name), "w") as f:
f.write(
_doc_first_line(package_name)
+ _construct_doc(
_package_namespace(package_name), package_members, package_version
)
)
def _document_langchain_experimental() -> None:
"""Document the langchain_experimental package."""
# Generate experimental_api_reference.rst
exp_members = _load_package_modules(EXP_DIR)
exp_doc = ".. _experimental_api_reference:\n\n" + _construct_doc(
"langchain_experimental", exp_members
)
with open(EXP_WRITE_FILE, "w") as f:
f.write(exp_doc)
def _package_namespace(package_name: str) -> str:
return (
package_name
if package_name == "langchain"
else f"langchain_{package_name.replace('-', '_')}"
def _document_langchain_core() -> None:
"""Document the langchain_core package."""
# Generate core_api_reference.rst
core_members = _load_package_modules(CORE_DIR)  # scan langchain_core, not the experimental tree
core_doc = ".. _core_api_reference:\n\n" + _construct_doc(
"langchain_core", core_members
)
with open(CORE_WRITE_FILE, "w") as f:
f.write(core_doc)
def _document_langchain() -> None:
"""Document the main langchain package."""
# load top level module members
lc_members = _load_package_modules(PKG_DIR)
# Add additional packages
tools = _load_package_modules(PKG_DIR, "tools")
agents = _load_package_modules(PKG_DIR, "agents")
schema = _load_package_modules(PKG_DIR, "schema")
lc_members.update(
{
"agents.output_parsers": agents["output_parsers"],
"agents.format_scratchpad": agents["format_scratchpad"],
"tools.render": tools["render"],
}
)
lc_doc = ".. _api_reference:\n\n" + _construct_doc("langchain", lc_members)
def _package_dir(package_name: str = "langchain") -> Path:
"""Return the path to the directory containing the documentation."""
if package_name in ("langchain", "experimental", "community", "core", "cli"):
return ROOT_DIR / "libs" / package_name / _package_namespace(package_name)
else:
return (
ROOT_DIR
/ "libs"
/ "partners"
/ package_name
/ _package_namespace(package_name)
)
def _get_package_version(package_dir: Path) -> str:
with open(package_dir.parent / "pyproject.toml", "r") as f:
pyproject = toml.load(f)
return pyproject["tool"]["poetry"]["version"]
def _out_file_path(package_name: str = "langchain") -> Path:
"""Return the path to the file containing the documentation."""
return HERE / f"{package_name.replace('-', '_')}_api_reference.rst"
def _doc_first_line(package_name: str = "langchain") -> str:
"""Return the path to the file containing the documentation."""
return f".. {package_name.replace('-', '_')}_api_reference:\n\n"
with open(WRITE_FILE, "w") as f:
f.write(lc_doc)
def main() -> None:
"""Generate the api_reference.rst file for each package."""
for dir in os.listdir(ROOT_DIR / "libs"):
if dir in ("cli", "partners"):
continue
else:
_build_rst_file(package_name=dir)
for dir in os.listdir(ROOT_DIR / "libs" / "partners"):
_build_rst_file(package_name=dir)
"""Generate the reference.rst file for each package."""
_document_langchain()
_document_langchain_experimental()
_document_langchain_core()
if __name__ == "__main__":

View File

@@ -1,7 +1,5 @@
-e libs/experimental
-e libs/langchain
-e libs/core
-e libs/community
-e libs/experimental
pydantic<2
autodoc_pydantic==1.8.0
myst_parser
@@ -14,4 +12,4 @@ sphinx-panels
toml
myst_nb
sphinx_copybutton
pydata-sphinx-theme==0.13.1
pydata-sphinx-theme==0.13.1

View File

@@ -63,6 +63,13 @@
<a href="#" role="button" class="btn sk-btn-rellink py-1 disabled"">Next</a>
{%- endif %}
</div>
{%- if pagename != "install" %}
<div class="alert alert-warning p-1 mb-2" role="alert">
<p class="text-center mb-0">
<strong>LangChain {{ release }}</strong><br/>
</p>
</div>
{%- endif %}
{%- if meta and meta['parenttoc']|tobool %}
<div class="sk-sidebar-toc">
{% set nav = get_nav_object(maxdepth=3, collapse=True, numbered=True) %}
@@ -80,8 +87,7 @@
<ul>
{% for inner_child in nav_item.children %}
<li class="sk-toctree-l3">
{% set second_layer_ns = inner_child.title[inner_child.title.index(".")+1:] %}
<a href="{{ inner_child.url }}">{{ second_layer_ns }}</a>
<a href="{{ inner_child.url }}">{{ inner_child.title }}</a>
</li>
{% endfor %}
</ul>

View File

@@ -32,33 +32,22 @@
<div class="sk-navbar-collapse collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav mr-auto">
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('langchain_api_reference') }}">LangChain</a>
<a class="sk-nav-link nav-link" href="{{ pathto('api_reference') }}">API</a>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('core_api_reference') }}">Core</a>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('community_api_reference') }}">Community</a>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('experimental_api_reference') }}">Experimental</a>
</li>
{%- for title, pathname in partners %}
<li class="nav-item">
<a class="sk-nav-link nav-link nav-more-item-mobile-items" href="{{ pathto(pathname) }}">{{ title }}</a>
<a class="sk-nav-link nav-link" target="_blank" rel="noopener noreferrer" href="https://python.langchain.com/">Python Docs</a>
</li>
{%- for title, link, link_attrs in drop_down_navigation %}
<li class="nav-item">
<a class="sk-nav-link nav-link nav-more-item-mobile-items" href="{{ link }}" {{ link_attrs }}>{{ title }}</a>
</li>
{%- endfor %}
<li class="nav-item dropdown nav-more-item-dropdown">
<a class="sk-nav-link nav-link dropdown-toggle" href="#" id="navbarDropdown" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">Partner libs</a>
<div class="dropdown-menu" aria-labelledby="navbarDropdown">
{%- for title, pathname in partners %}
<a class="sk-nav-dropdown-item dropdown-item" href="{{ pathto(pathname) }}">{{ title }}</a>
{%- endfor %}
</div>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" target="_blank" rel="noopener noreferrer" href="https://python.langchain.com/">Docs</a>
</li>
</ul>
{%- if pagename != "search"%}
<div id="searchbox" role="search">

File diff suppressed because it is too large

View File

@@ -6,12 +6,7 @@ Below are links to tutorials and courses on LangChain. For written guides on com
---------------------
### [LangChain](https://en.wikipedia.org/wiki/LangChain) on Wikipedia
### Books
#### ⛓[Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffarth](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
### [LangChain on Wikipedia](https://en.wikipedia.org/wiki/LangChain)
### DeepLearning.AI courses
by [Harrison Chase](https://en.wikipedia.org/wiki/LangChain) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng)

View File

@@ -18,7 +18,7 @@ Whether you're new to LangChain, looking to go deeper, or just want to get mor
LangChain is the product of over 5,000+ contributions by 1,500+ contributors, and there is **still** so much to do together. Here are some ways to get involved:
- **[Open a pull request](https://github.com/langchain-ai/langchain/issues):** We'd appreciate all forms of contributions: new features, infrastructure improvements, better documentation, bug fixes, etc. If you have an improvement or an idea, we'd love to work on it with you.
- **[Read our contributor guidelines:](./contributing/)** We ask contributors to follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow, run a few local checks for formatting, linting, and testing before submitting, and follow certain documentation and testing conventions.
- **[Read our contributor guidelines:](https://github.com/langchain-ai/langchain/blob/bbd22b9b761389a5e40fc45b0570e1830aabb707/.github/CONTRIBUTING.md)** We ask contributors to follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow, run a few local checks for formatting, linting, and testing before submitting, and follow certain documentation and testing conventions.
- **First time contributor?** [Try one of these PRs with the “good first issue” tag](https://github.com/langchain-ai/langchain/contribute).
- **Become an expert:** Our experts help the community by answering product questions in Discord. If that's a role you'd like to play, we'd be so grateful! (And we have some special experts-only goodies/perks we can tell you more about). Send us an email to introduce yourself at hello@langchain.dev and we'll take it from there!
- **Integrate with LangChain:** If your product integrates with LangChain, or aspires to, we want to help make sure the experience is as smooth as possible for you and end users. Send us an email at hello@langchain.dev and tell us what you're working on.

View File

@@ -1,250 +0,0 @@
---
sidebar_position: 1
---
# Contribute Code
To contribute to this project, please follow the ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
Please do not try to push directly to this repo unless you are a maintainer.
Please follow the checked-in pull request template when opening pull requests. Note related issues and tag relevant
maintainers.
Pull requests cannot land without passing the formatting, linting, and testing checks first. See [Testing](#testing) and
[Formatting and Linting](#formatting-and-linting) for how to run these checks locally.
It's essential that we maintain great documentation and testing. If you:
- Fix a bug
- Add a relevant unit or integration test when possible. These live in `tests/unit_tests` and `tests/integration_tests`.
- Make an improvement
- Update any affected example notebooks and documentation. These live in `docs`.
- Update unit and integration tests when relevant.
- Add a feature
- Add a demo notebook in `docs/docs/`.
- Add unit and integration tests.
We are a small, progress-oriented team. If there's something you'd like to add or change, opening a pull request is the
best way to get our attention.
## 🚀 Quick Start
This quick start guide explains how to run the repository locally.
For a [development container](https://containers.dev/), see the [.devcontainer folder](https://github.com/langchain-ai/langchain/tree/master/.devcontainer).
### Dependency Management: Poetry and other env/dependency managers
This project utilizes [Poetry](https://python-poetry.org/) v1.6.1+ as a dependency manager.
❗Note: *Before installing Poetry*, if you use `Conda`, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`)
Install Poetry: **[documentation on how to install it](https://python-poetry.org/docs/#installation)**.
❗Note: If you use `Conda` or `Pyenv` as your environment/package manager, after installing Poetry,
tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
### Different packages
This repository contains multiple packages:
- `langchain-core`: Base interfaces for key abstractions as well as logic for combining them in chains (LangChain Expression Language).
- `langchain-community`: Third-party integrations of various components.
- `langchain`: Chains, agents, and retrieval logic that makes up the cognitive architecture of your applications.
- `langchain-experimental`: Components and chains that are experimental, either in the sense that the techniques are novel and still being tested, or they require giving the LLM more access than would be possible in most production systems.
- Partner integrations: Partner packages in `libs/partners` that are independently version controlled.
Each of these has its own development environment. Docs are run from the top-level makefile, but development
is split across separate test & release flows.
For this quickstart, start with langchain-community:
```bash
cd libs/community
```
### Local Development Dependencies
Install langchain-community development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
```bash
poetry install --with lint,typing,test,test_integration
```
Then verify dependency installation:
```bash
make test
```
If during installation you receive a `WheelFileValidationError` for `debugpy`, please make sure you are running
Poetry v1.6.1+. This bug was present in older versions of Poetry (e.g. 1.4.1) and has been resolved in newer releases.
If you are still seeing this bug on v1.6.1, you may also try disabling "modern installation"
(`poetry config installer.modern-installation false`) and re-installing requirements.
See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.
### Testing
_In `langchain`, `langchain-community`, and `langchain-experimental`, some test dependencies are optional; see the section on optional dependencies below._
Unit tests cover modular logic that does not require calls to outside APIs.
If you add new logic, please add a unit test.
To run unit tests:
```bash
make test
```
To run unit tests in Docker:
```bash
make docker_tests
```
There are also [integration tests and code-coverage](./testing) available.
### Only develop langchain_core or langchain_experimental
If you are only developing `langchain_core` or `langchain_experimental`, you can simply install the dependencies for the respective projects and run tests:
```bash
cd libs/core
poetry install --with test
make test
```
Or:
```bash
cd libs/experimental
poetry install --with test
make test
```
### Formatting and Linting
Run these locally before submitting a PR; the CI system will also check them.
#### Code Formatting
Formatting for this project is done via [ruff](https://docs.astral.sh/ruff/rules/).
To run formatting for docs, cookbook and templates:
```bash
make format
```
To run formatting for a library, run the same command from the relevant library directory:
```bash
cd libs/{LIBRARY}
make format
```
Additionally, you can run the formatter only on the files that have been modified in your current branch as compared to the master branch using the `format_diff` command:
```bash
make format_diff
```
This is especially useful when you have made changes to a subset of the project and want to ensure your changes are properly formatted without affecting the rest of the codebase.
#### Linting
Linting for this project is done via a combination of [ruff](https://docs.astral.sh/ruff/rules/) and [mypy](http://mypy-lang.org/).
To run linting for docs, cookbook and templates:
```bash
make lint
```
To run linting for a library, run the same command from the relevant library directory:
```bash
cd libs/{LIBRARY}
make lint
```
In addition, you can run the linter only on the files that have been modified in your current branch as compared to the master branch using the `lint_diff` command:
```bash
make lint_diff
```
This can be very helpful when you've made changes to only certain parts of the project and want to ensure your changes meet the linting standards without having to check the entire codebase.
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
#### Spellcheck
Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
Note that `codespell` finds common typos, so it can produce both false positives (correctly spelled but rarely used words) and false negatives (misspelled words it fails to flag).
To check spelling for this project:
```bash
make spell_check
```
To fix spelling in place:
```bash
make spell_fix
```
If codespell is incorrectly flagging a word, you can skip spellcheck for that word by adding it to the codespell config in the `pyproject.toml` file.
```python
[tool.codespell]
...
# Add here:
ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogyny,unsecure'
```
## Working with Optional Dependencies
`langchain`, `langchain-community`, and `langchain-experimental` rely on optional dependencies to keep these packages lightweight.
`langchain-core` and partner packages **do not use** optional dependencies in this way.
You only need to add a new dependency if a **unit test** relies on the package.
If your package is only required for **integration tests**, then you can skip these
steps and leave all pyproject.toml and poetry.lock files alone.
If you're adding a new dependency to LangChain, assume that it will be an optional dependency, and
that most users won't have it installed.
Users who do not have the dependency installed should be able to **import** your code without
any side effects (no warnings, no errors, no exceptions).
To introduce the dependency to the pyproject.toml file correctly, please do the following:
1. Add the dependency to the main group as an optional dependency
```bash
poetry add --optional [package_name]
```
2. Open pyproject.toml and add the dependency to the `extended_testing` extra
3. Relock the poetry file to update the extra.
```bash
poetry lock --no-update
```
4. Add a unit test that at the very least attempts to import the new code. Ideally, the unit test makes use of lightweight fixtures to test the logic of the code.
5. Please use the `@pytest.mark.requires(package_name)` decorator for any tests that require the dependency, as in the sketch below.
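For example, a minimal unit test for the fake `parrot-link-sdk` dependency from the integrations guide might look like this (a sketch; the file path and names are illustrative):
```python
# Hypothetical file: libs/community/tests/unit_tests/chat_models/test_parrot_link.py
import pytest


@pytest.mark.requires("parrot_link_sdk")
def test_import_chat_parrot_link() -> None:
    # At the very least, importing the new code should succeed
    from langchain_community.chat_models.parrot_link import ChatParrotLink

    assert ChatParrotLink is not None
```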
## Adding a Jupyter Notebook
If you are adding a Jupyter Notebook example, you'll want to install the optional `dev` dependencies.
To install dev dependencies:
```bash
poetry install --with dev
```
Launch a notebook:
```bash
poetry run jupyter notebook
```
When you run `poetry install`, the `langchain` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook.
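For example, a notebook cell like the following (a toy sketch) imports from your local working tree rather than a released wheel:
```python
# Resolves to your local checkout thanks to the editable install
from langchain.schema.runnable import RunnableLambda

greet = RunnableLambda(lambda name: f"Hello, {name}!")
print(greet.invoke("LangChain"))  # -> Hello, LangChain!
```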

View File

@@ -1,67 +0,0 @@
---
sidebar_position: 3
---
# Contribute Documentation
The docs directory contains the Documentation and the API Reference.
Documentation is built using [Quarto](https://quarto.org) and [Docusaurus 2](https://docusaurus.io/).
The API Reference is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code and is hosted by [Read the Docs](https://readthedocs.org/).
For that reason, we ask that you add good documentation to all classes and methods.
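For example, a Google-style docstring like the following renders well in the autogenerated API Reference (a minimal sketch; the function and types are illustrative, not a real LangChain API):
```python
from typing import List


def add_texts(texts: List[str]) -> List[str]:
    """Embed the given texts and add them to the store.

    Args:
        texts: The strings to embed and store.

    Returns:
        The IDs assigned to the stored documents.
    """
    ...
```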
Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
## Build Documentation Locally
### Install dependencies
- [Quarto](https://quarto.org) - package that converts Jupyter notebooks (`.ipynb` files) into mdx files for serving in Docusaurus.
- `poetry install` from the monorepo root
### Building
In the following commands, the prefix `api_` indicates that those are operations for the API Reference.
Before building the documentation, it is always a good idea to clean the build directory:
```bash
make docs_clean
make api_docs_clean
```
Next, you can build the documentation as outlined below:
```bash
make docs_build
make api_docs_build
```
Finally, run the link checker to ensure all links are valid:
```bash
make docs_linkcheck
make api_docs_linkcheck
```
### Linting and Formatting
The docs are linted from the monorepo root. To lint the docs, run the following from there:
```bash
poetry install --with lint,typing
make lint
```
If you have formatting-related errors, you can fix them automatically with:
```bash
make format
```
## Verify Documentation changes
After pushing documentation changes to the repository, you can preview and verify that the changes are
what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page.
This will take you to a preview of the documentation changes.
This preview is created by [Vercel](https://vercel.com/docs/getting-started-with-vercel).

View File

@@ -1,42 +0,0 @@
---
sidebar_position: 0
---
# Welcome Contributors
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
## 🗺️ Guidelines
### 👩‍💻 Ways to contribute
There are many ways to contribute to LangChain. Here are some common ways people contribute:
- [**Documentation**](./documentation): Help improve our docs, including this one!
- [**Code**](./code): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](./integrations): Help us integrate with your favorite vendors and tools.
### 🚩GitHub Issues
Our [issues](https://github.com/langchain-ai/langchain/issues) page is kept up to date with bugs, improvements, and feature requests.
There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help organize issues.
If you start working on an issue, please assign it to yourself.
If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature.
If two issues are related, or blocking, please link them rather than combining them.
We will try to keep these issues as up-to-date as possible, though
with the rapid rate of development in this field some may get out of date.
If you notice this happening, please let us know.
### 🙋Getting Help
Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting setup, please
contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is
smooth for future contributors.
In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase.
If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
we do not want these to get in the way of getting good code into the codebase.

View File

@@ -1,145 +0,0 @@
---
sidebar_position: 5
---
# Contribute Integrations
To begin, make sure you have all the dependencies outlined in guide on [Contributing Code](./code).
There are a few different places you can contribute integrations for LangChain:
- **Community**: For lighter-weight integrations that are primarily maintained by LangChain and the Open Source Community.
- **Partner Packages**: For independent packages that are co-maintained by LangChain and a partner.
For the most part, new integrations should be added to the Community package. Partner packages require more maintenance as separate packages, so please confirm with the LangChain team before creating a new partner package.
In the following sections, we'll walk through how to contribute to each of these packages from a fake company, `Parrot Link AI`.
## Community Package
The `langchain-community` package is in `libs/community` and contains most integrations.
It is installed by users with `pip install langchain-community`, and exported members can be imported with code like
```python
from langchain_community.chat_models import ChatParrotLink
from langchain_community.llms import ParrotLinkLLM
from langchain_community.vectorstores import ParrotLinkVectorStore
```
The community package relies on manually-installed dependent packages, so you will see errors if you try to use an integration whose underlying package is not installed. In our fake example, if you tried to use `ChatParrotLink` without installing `parrot-link-sdk`, you would see an `ImportError` telling you to install it.
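One common way to implement this is to defer the import and raise a helpful `ImportError` only when the integration is actually used. A minimal sketch, assuming `parrot_link_sdk` is the import name of the fake SDK:
```python
def _import_parrot_link_sdk():
    """Import the optional SDK lazily so module import stays side-effect free."""
    try:
        import parrot_link_sdk
    except ImportError as e:
        raise ImportError(
            "Could not import parrot-link-sdk. "
            "Please install it with `pip install parrot-link-sdk`."
        ) from e
    return parrot_link_sdk
```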
Let's say we wanted to implement a chat model for Parrot Link AI. We would create a new file in `libs/community/langchain_community/chat_models/parrot_link.py` with the following code:
```python
from langchain_core.language_models.chat_models import BaseChatModel
class ChatParrotLink(BaseChatModel):
"""ChatParrotLink chat model.
Example:
.. code-block:: python
from langchain_parrot_link import ChatParrotLink
model = ChatParrotLink()
"""
...
```
And we would write tests in:
- Unit tests: `libs/community/tests/unit_tests/chat_models/test_parrot_link.py`
- Integration tests: `libs/community/tests/integration_tests/chat_models/test_parrot_link.py`
And add documentation to:
- `docs/docs/integrations/chat/parrot_link.ipynb`
- `docs/docs/
## Partner Packages
Partner packages are in `libs/partners/*` and are installed by users with `pip install langchain-{partner}`, and exported members can be imported with code like
```python
from langchain_{partner} import X
```
### Set up a new package
To set up a new partner package, use the latest version of the LangChain CLI. You can install or update it with:
```bash
pip install -U langchain-cli
```
Let's say you want to create a new partner package working for a company called Parrot Link AI.
Then, run the following command to create a new partner package:
```bash
cd libs/partners
langchain-cli integration new
> Name: parrot-link
> Name of integration in PascalCase [ParrotLink]: ParrotLink
```
This will create a new package in `libs/partners/parrot-link` with the following structure:
```
libs/partners/parrot-link/
langchain_parrot_link/ # folder containing your package
...
tests/
...
docs/ # bootstrapped docs notebooks, must be moved to /docs in monorepo root
...
scripts/ # scripts for CI
...
LICENSE
README.md # fill out with information about your package
Makefile # default commands for CI
pyproject.toml # package metadata, mostly managed by Poetry
poetry.lock # package lockfile, managed by Poetry
.gitignore
```
### Implement your package
First, add any dependencies your package needs, such as your company's SDK:
```bash
poetry add parrot-link-sdk
```
If you need separate dependencies for type checking, you can add them to the `typing` group with:
```bash
poetry add --group typing types-parrot-link-sdk
```
Then, implement your package in `libs/partners/parrot-link/langchain_parrot_link`.
By default, this will include stubs for a Chat Model, an LLM, and/or a Vector Store. You should delete any of the files you won't use and remove them from `__init__.py`.
### Write Unit and Integration Tests
Some basic tests are generated in the `tests/` directory. You should add more tests to cover your package's functionality.
For information on running and implementing tests, see the [Testing guide](./testing).
### Write documentation
Documentation is generated from Jupyter notebooks in the `docs/` directory. You should move the generated notebooks to the relevant `docs/docs/integrations` directory in the monorepo root.
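For example, assuming the scaffold generated a chat model notebook (the filenames here are illustrative):
```bash
# run from the monorepo root
mv libs/partners/parrot-link/docs/chat.ipynb docs/docs/integrations/chat/parrot_link.ipynb
```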
### Additional steps
Contributor steps:
- [ ] Add secret names to the manual integrations workflow in `.github/workflows/_integration_test.yml`
- [ ] Add secrets to the release workflow (for pre-release testing) in `.github/workflows/_release.yml`
Maintainer steps (Contributors should **not** do these):
- [ ] Set up PyPI and Test PyPI projects
- [ ] Add credential secrets to GitHub Actions
- [ ] Add package to conda-forge

View File

@@ -1,56 +0,0 @@
---
sidebar_label: Package Versioning
sidebar_position: 4
---
# 📕 Package Versioning
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by
a maintainer and published to [PyPI](https://pypi.org/).
The different packages are versioned slightly differently.
## `langchain-core`
`langchain-core` is currently on version `0.1.x`.
As `langchain-core` contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception to this is anything in `langchain_core.beta`. The reason for `langchain_core.beta` is that, given the rate of change of the field, being able to move quickly is still a priority, and this module is our mechanism for doing so.
Minor version increases will occur for:
- Breaking changes for any public interfaces NOT in `langchain_core.beta`
Patch version increases will occur for:
- Bug fixes
- New features
- Any changes to private interfaces
- Any changes to `langchain_core.beta`
## `langchain`
`langchain` is currently on version `0.0.x`.
All changes will be accompanied by a patch version increase. Changes to public interfaces are nearly always made in a backwards-compatible way; when they are not, we will communicate them ahead of time.
We are targeting January 2024 for a release of `langchain` v0.1, at which point `langchain` will adopt the same versioning policy as `langchain-core`.
## `langchain-community`
`langchain-community` is currently on version `0.0.x`.
All changes will be accompanied by a patch version increase.
## `langchain-experimental`
`langchain-experimental` is currently on version `0.0.x`.
All changes will be accompanied by a patch version increase.
## Partner Packages
Partner packages are versioned independently.
# 🌟 Recognition
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.

View File

@@ -1,147 +0,0 @@
---
sidebar_position: 2
---
# Testing
All of our packages have unit tests and integration tests, and we favor unit tests over integration tests.
Unit tests run on every pull request, so they should be fast and reliable.
Integration tests run once a day, and they require more setup, so they should be reserved for confirming interface points with external services.
## Unit Tests
Unit tests cover modular logic that does not require calls to outside APIs.
If you add new logic, please add a unit test.
To install dependencies for unit tests:
```bash
poetry install --with test
```
To run unit tests:
```bash
make test
```
To run unit tests in Docker:
```bash
make docker_tests
```
To run a specific test:
```bash
TEST_FILE=tests/unit_tests/test_imports.py make test
```
## Integration Tests
Integration tests cover logic that requires making calls to outside APIs (often integration with other services).
If you add support for a new external API, please add a new integration test.
**Warning:** Almost no tests should be integration tests.
Tests that require making network connections make it difficult for other
developers to test the code.
Instead, favor the `responses` library and/or `mock.patch` to mock
requests using small fixtures.
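For example, a unit test can stub an HTTP endpoint with the `responses` library instead of touching the network; the URL and payload below are illustrative:
```python
import requests
import responses


@responses.activate
def test_fetch_parses_payload() -> None:
    # Register a fake endpoint; no real network connection is made.
    responses.add(
        responses.GET,
        "https://api.example.com/v1/items",
        json={"items": ["a", "b"]},
        status=200,
    )
    payload = requests.get("https://api.example.com/v1/items").json()
    assert payload["items"] == ["a", "b"]
```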
To install dependencies for integration tests:
```bash
poetry install --with test,test_integration
```
To run integration tests:
```bash
make integration_tests
```
### Prepare
The integration tests use several search engines and databases. The tests
aim to verify the correct behavior of the engines and databases according to
their specifications and requirements.
To run some integration tests, such as tests located in
`tests/integration_tests/vectorstores/`, you will need to install the following
software:
- Docker
- Python 3.8.1 or later
Any new dependencies should be added by running:
```bash
# add package and install it after adding:
poetry add tiktoken@latest --group "test_integration" && poetry install --with test_integration
```
Before running any tests, you should start a specific Docker container that has all the
necessary dependencies installed. For instance, we use the `elasticsearch.yml` Compose file
for `test_elasticsearch.py`:
```bash
cd tests/integration_tests/vectorstores/docker-compose
docker-compose -f elasticsearch.yml up
```
For environments that require more involved preparation, look for `*.sh` scripts. For instance,
`opensearch.sh` builds the required Docker image and then launches OpenSearch.
### Prepare environment variables for local testing:
- copy `tests/integration_tests/.env.example` to `tests/integration_tests/.env`
- set variables in the `tests/integration_tests/.env` file, e.g. `OPENAI_API_KEY`
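For example:
```bash
cp tests/integration_tests/.env.example tests/integration_tests/.env
echo 'OPENAI_API_KEY=sk-...' >> tests/integration_tests/.env  # placeholder value
```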
Additionally, it's important to note that some integration tests may require certain
environment variables to be set, such as `OPENAI_API_KEY`. Be sure to set any required
environment variables before running the tests to ensure they run correctly.
### Recording HTTP interactions with pytest-vcr
Some of the integration tests in this repository involve making HTTP requests to
external services. To prevent these requests from being made every time the tests are
run, we use pytest-vcr to record and replay HTTP interactions.
When running tests in a CI/CD pipeline, you may not want to modify the existing
cassettes. You can use the `--vcr-record=none` command-line option to disable recording
new cassettes. Here's an example:
```bash
pytest --log-cli-level=10 tests/integration_tests/vectorstores/test_pinecone.py --vcr-record=none
pytest tests/integration_tests/vectorstores/test_elasticsearch.py --vcr-record=none
```
### Run some tests with coverage:
```bash
pytest tests/integration_tests/vectorstores/test_elasticsearch.py --cov=langchain --cov-report=html
start "" htmlcov/index.html || open htmlcov/index.html
```
## Coverage
Code coverage (i.e. the amount of code that is covered by unit tests) helps identify areas of the code that are potentially more or less brittle.
Coverage requires the dependencies for integration tests:
```bash
poetry install --with test_integration
```
To get a report of current coverage, run the following:
```bash
make coverage
```

View File

@@ -21,7 +21,7 @@
"from langchain.prompts import (\n",
" ChatPromptTemplate,\n",
")\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain_experimental.utilities import PythonREPL"
]
},

View File

@@ -22,9 +22,9 @@
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnableLambda, RunnablePassthrough\n",
"from langchain.utils.math import cosine_similarity\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"\n",
"physics_template = \"\"\"You are a very smart physics professor. \\\n",
"You are great at answering questions about physics in a concise and easy to understand manner. \\\n",

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 3
sidebar_position: 2
---
# Cookbook

View File

@@ -22,7 +22,7 @@
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain.schema.runnable import RunnableLambda, RunnablePassthrough\n",
"\n",
"model = ChatOpenAI()\n",
"prompt = ChatPromptTemplate.from_messages(\n",

View File

@@ -69,7 +69,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"prompt1 = ChatPromptTemplate.from_template(\n",
" \"generate a {attribute} color. Return the name of the color and nothing else:\"\n",
@@ -146,7 +146,7 @@
"source": [
"### Branching and Merging\n",
"\n",
"You may want the output of one component to be processed by 2 or more other components. [RunnableParallels](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableParallel.html#langchain_core.runnables.base.RunnableParallel) let you split or fork the chain so multiple components can process the input in parallel. Later, other components can join or merge the results to synthesize a final response. This type of chain creates a computation graph that looks like the following:\n",
"You may want the output of one component to be processed by 2 or more other components. [RunnableMaps](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.base.RunnableMap.html) let you split or fork the chain so multiple components can process the input in parallel. Later, other components can join or merge the results to synthesize a final response. This type of chain creates a computation graph that looks like the following:\n",
"\n",
"```text\n",
" Input\n",

View File

@@ -191,7 +191,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"\n",
"chain = prompt | model | StrOutputParser()"
]
@@ -317,7 +317,7 @@
"source": [
"## Simplifying input\n",
"\n",
"To make invocation even simpler, we can add a `RunnableParallel` to take care of creating the prompt input dict for us:"
"To make invocation even simpler, we can add a `RunnableMap` to take care of creating the prompt input dict for us:"
]
},
{
@@ -327,9 +327,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"from langchain.schema.runnable import RunnableMap, RunnablePassthrough\n",
"\n",
"map_ = RunnableParallel(foo=RunnablePassthrough())\n",
"map_ = RunnableMap(foo=RunnablePassthrough())\n",
"chain = (\n",
" map_\n",
" | prompt\n",

View File

@@ -209,10 +209,7 @@
"id": "637f994a-5134-402a-bcf0-4de3911eaf49",
"metadata": {},
"source": [
":::tip\n",
"\n",
"[LangSmith trace](https://smith.langchain.com/public/60909eae-f4f1-43eb-9f96-354f5176f66f/r)\n",
"\n",
":::tip [LangSmith trace](https://smith.langchain.com/public/60909eae-f4f1-43eb-9f96-354f5176f66f/r)\n",
":::"
]
},
@@ -377,10 +374,7 @@
"id": "5a7e498b-dc68-4267-a35c-90ceffa91c46",
"metadata": {},
"source": [
":::tip\n",
"\n",
"[LangSmith trace](https://smith.langchain.com/public/3b27d47f-e4df-4afb-81b1-0f88b80ca97e/r)\n",
"\n",
":::tip [LangSmith trace](https://smith.langchain.com/public/3b27d47f-e4df-4afb-81b1-0f88b80ca97e/r)\n",
":::"
]
}

View File

@@ -31,7 +31,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 10,
"id": "33be32af",
"metadata": {},
"outputs": [],
@@ -41,14 +41,14 @@
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough"
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnableLambda, RunnablePassthrough\n",
"from langchain.vectorstores import FAISS"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 6,
"id": "bfc47ec1",
"metadata": {},
"outputs": [],
@@ -70,7 +70,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "eae31755",
"metadata": {},
"outputs": [],
@@ -85,7 +85,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 18,
"id": "f3040b0c",
"metadata": {},
"outputs": [
@@ -95,7 +95,7 @@
"'Harrison worked at Kensho.'"
]
},
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -106,7 +106,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "e1d20c7c",
"metadata": {},
"outputs": [],
@@ -134,7 +134,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "7ee8b2d4",
"metadata": {},
"outputs": [
@@ -144,7 +144,7 @@
"'Harrison ha lavorato a Kensho.'"
]
},
"execution_count": 6,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -165,19 +165,18 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 8,
"id": "3f30c348",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import format_document\n",
"from langchain_core.messages import AIMessage, HumanMessage, get_buffer_string\n",
"from langchain_core.runnables import RunnableParallel"
"from langchain.schema.runnable import RunnableMap"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"id": "64ab1dbf",
"metadata": {},
"outputs": [],
@@ -195,7 +194,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 10,
"id": "7d628c97",
"metadata": {},
"outputs": [],
@@ -210,7 +209,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 11,
"id": "f60a5d0f",
"metadata": {},
"outputs": [],
@@ -227,14 +226,33 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 12,
"id": "7d007db6",
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Tuple\n",
"\n",
"\n",
"def _format_chat_history(chat_history: List[Tuple]) -> str:\n",
" buffer = \"\"\n",
" for dialogue_turn in chat_history:\n",
" human = \"Human: \" + dialogue_turn[0]\n",
" ai = \"Assistant: \" + dialogue_turn[1]\n",
" buffer += \"\\n\" + \"\\n\".join([human, ai])\n",
" return buffer"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "5c32cc89",
"metadata": {},
"outputs": [],
"source": [
"_inputs = RunnableParallel(\n",
"_inputs = RunnableMap(\n",
" standalone_question=RunnablePassthrough.assign(\n",
" chat_history=lambda x: get_buffer_string(x[\"chat_history\"])\n",
" chat_history=lambda x: _format_chat_history(x[\"chat_history\"])\n",
" )\n",
" | CONDENSE_QUESTION_PROMPT\n",
" | ChatOpenAI(temperature=0)\n",
@@ -249,17 +267,17 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 14,
"id": "135c8205",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Harrison was employed at Kensho.')"
"AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False)"
]
},
"execution_count": 12,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@@ -275,17 +293,17 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 15,
"id": "424e7e7a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Harrison worked at Kensho.')"
"AIMessage(content='Harrison worked at Kensho.', additional_kwargs={}, example=False)"
]
},
"execution_count": 22,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@@ -294,10 +312,7 @@
"conversational_qa_chain.invoke(\n",
" {\n",
" \"question\": \"where did he work?\",\n",
" \"chat_history\": [\n",
" HumanMessage(content=\"Who wrote this notebook?\"),\n",
" AIMessage(content=\"Harrison\"),\n",
" ],\n",
" \"chat_history\": [(\"Who wrote this notebook?\", \"Harrison\")],\n",
" }\n",
")"
]
@@ -314,7 +329,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 16,
"id": "e31dd17c",
"metadata": {},
"outputs": [],
@@ -326,7 +341,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 17,
"id": "d4bffe94",
"metadata": {},
"outputs": [],
@@ -338,7 +353,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 18,
"id": "733be985",
"metadata": {},
"outputs": [],
@@ -352,7 +367,7 @@
"standalone_question = {\n",
" \"standalone_question\": {\n",
" \"question\": lambda x: x[\"question\"],\n",
" \"chat_history\": lambda x: get_buffer_string(x[\"chat_history\"]),\n",
" \"chat_history\": lambda x: _format_chat_history(x[\"chat_history\"]),\n",
" }\n",
" | CONDENSE_QUESTION_PROMPT\n",
" | ChatOpenAI(temperature=0)\n",
@@ -379,18 +394,18 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 19,
"id": "806e390c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'answer': AIMessage(content='Harrison was employed at Kensho.'),\n",
" 'docs': [Document(page_content='harrison worked at kensho')]}"
"{'answer': AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False),\n",
" 'docs': [Document(page_content='harrison worked at kensho', metadata={})]}"
]
},
"execution_count": 17,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
@@ -403,7 +418,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 20,
"id": "977399fd",
"metadata": {},
"outputs": [],
@@ -416,18 +431,18 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 21,
"id": "f94f7de4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [HumanMessage(content='where did harrison work?'),\n",
" AIMessage(content='Harrison was employed at Kensho.')]}"
"{'history': [HumanMessage(content='where did harrison work?', additional_kwargs={}, example=False),\n",
" AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False)]}"
]
},
"execution_count": 19,
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
@@ -435,38 +450,6 @@
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "88f2b7cd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'answer': AIMessage(content='Harrison actually worked at Kensho.'),\n",
" 'docs': [Document(page_content='harrison worked at kensho')]}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"question\": \"but where did he really work?\"}\n",
"result = final_chain.invoke(inputs)\n",
"result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "207a2782",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -485,7 +468,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -94,8 +94,8 @@
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"model = ChatOpenAI()\n",
"\n",

View File

@@ -29,8 +29,8 @@
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.tools import DuckDuckGoSearchRun\n",
"from langchain_core.output_parsers import StrOutputParser"
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.tools import DuckDuckGoSearchRun"
]
},
{

View File

@@ -1,494 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "366a0e68-fd67-4fe5-a292-5c33733339ea",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"title: Get started\n",
"keywords: [chain.invoke]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "befa7fd1",
"metadata": {},
"source": [
"LCEL makes it easy to build complex chains from basic components, and supports out of the box functionality such as streaming, parallelism, and logging."
]
},
{
"cell_type": "markdown",
"id": "9a9acd2e",
"metadata": {},
"source": [
"## Basic example: prompt + model + output parser\n",
"\n",
"The most basic and common use case is chaining a prompt template and a model together. To see how this works, let's create a chain that takes a topic and generates a joke:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "466b65b3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why did the ice cream go to therapy?\\n\\nBecause it had too many toppings and couldn't find its cone-fidence!\""
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"tell me a short joke about {topic}\")\n",
"model = ChatOpenAI()\n",
"output_parser = StrOutputParser()\n",
"\n",
"chain = prompt | model | output_parser\n",
"\n",
"chain.invoke({\"topic\": \"ice cream\"})"
]
},
{
"cell_type": "markdown",
"id": "81c502c5-85ee-4f36-aaf4-d6e350b7792f",
"metadata": {},
"source": [
"Notice this line of this code, where we piece together then different components into a single chain using LCEL:\n",
"\n",
"```\n",
"chain = prompt | model | output_parser\n",
"```\n",
"\n",
"The `|` symbol is similar to a [unix pipe operator](https://en.wikipedia.org/wiki/Pipeline_(Unix)), which chains together the different components feeds the output from one component as input into the next component. \n",
"\n",
"In this chain the user input is passed to the prompt template, then the prompt template output is passed to the model, then the model output is passed to the output parser. Let's take a look at each component individually to really understand what's going on. "
]
},
{
"cell_type": "markdown",
"id": "aa1b77fa",
"metadata": {},
"source": [
"### 1. Prompt\n",
"\n",
"`prompt` is a `BasePromptTemplate`, which means it takes in a dictionary of template variables and produces a `PromptValue`. A `PromptValue` is a wrapper around a completed prompt that can be passed to either an `LLM` (which takes a string as input) or `ChatModel` (which takes a sequence of messages as input). It can work with either language model type because it defines logic both for producing `BaseMessage`s and for producing a string."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "b8656990",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt_value = prompt.invoke({\"topic\": \"ice cream\"})\n",
"prompt_value"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "e6034488",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='tell me a short joke about ice cream')]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt_value.to_messages()"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "60565463",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Human: tell me a short joke about ice cream'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt_value.to_string()"
]
},
{
"cell_type": "markdown",
"id": "577f0f76",
"metadata": {},
"source": [
"### 2. Model\n",
"\n",
"The `PromptValue` is then passed to `model`. In this case our `model` is a `ChatModel`, meaning it will output a `BaseMessage`."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "33cf5f72",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Why did the ice cream go to therapy? \\n\\nBecause it had too many toppings and couldn't find its cone-fidence!\")"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"message = model.invoke(prompt_value)\n",
"message"
]
},
{
"cell_type": "markdown",
"id": "327e7db8",
"metadata": {},
"source": [
"If our `model` was an `LLM`, it would output a string."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "8feb05da",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nRobot: Why did the ice cream go to therapy? Because it had a rocky road.'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\")\n",
"llm.invoke(prompt_value)"
]
},
{
"cell_type": "markdown",
"id": "91847478",
"metadata": {},
"source": [
"### 3. Output parser\n",
"\n",
"And lastly we pass our `model` output to the `output_parser`, which is a `BaseOutputParser` meaning it takes either a string or a \n",
"`BaseMessage` as input. The `StrOutputParser` specifically simple converts any input into a string."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "533e59a8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why did the ice cream go to therapy? \\n\\nBecause it had too many toppings and couldn't find its cone-fidence!\""
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"output_parser.invoke(message)"
]
},
{
"cell_type": "markdown",
"id": "9851e842",
"metadata": {},
"source": [
"### 4. Entire Pipeline\n",
"\n",
"To follow the steps along:\n",
"\n",
"1. We pass in user input on the desired topic as `{\"topic\": \"ice cream\"}`\n",
"2. The `prompt` component takes the user input, which is then used to construct a PromptValue after using the `topic` to construct the prompt. \n",
"3. The `model` component takes the generated prompt, and passes into the OpenAI LLM model for evaluation. The generated output from the model is a `ChatMessage` object. \n",
"4. Finally, the `output_parser` component takes in a `ChatMessage`, and transforms this into a Python string, which is returned from the invoke method. \n"
]
},
{
"cell_type": "markdown",
"id": "c4873109",
"metadata": {},
"source": [
"```mermaid\n",
"graph LR\n",
" A(Input: topic=ice cream) --> |Dict| B(PromptTemplate)\n",
" B -->|PromptValue| C(ChatModel) \n",
" C -->|ChatMessage| D(StrOutputParser)\n",
" D --> |String| F(Result)\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "fe63534d",
"metadata": {},
"source": [
":::info\n",
"\n",
"Note that if youre curious about the output of any components, you can always test out a smaller version of the chain such as `prompt` or `prompt | model` to see the intermediate results:\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "11089b6f-23f8-474f-97ec-8cae8d0ca6d4",
"metadata": {},
"outputs": [],
"source": [
"input = {\"topic\": \"ice cream\"}\n",
"\n",
"prompt.invoke(input)\n",
"# > ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])\n",
"\n",
"(prompt | model).invoke(input)\n",
"# > AIMessage(content=\"Why did the ice cream go to therapy?\\nBecause it had too many toppings and couldn't cone-trol itself!\")"
]
},
{
"cell_type": "markdown",
"id": "cc7d3b9d-e400-4c9b-9188-f29dac73e6bb",
"metadata": {},
"source": [
"## RAG Search Example\n",
"\n",
"For our next example, we want to run a retrieval-augmented generation chain to add some context when responding to questions. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "662426e8-4316-41dc-8312-9b58edc7e0c9",
"metadata": {},
"outputs": [],
"source": [
"# Requires:\n",
"# pip install langchain docarray tiktoken\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.vectorstores import DocArrayInMemorySearch\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"\n",
"vectorstore = DocArrayInMemorySearch.from_texts(\n",
" [\"harrison worked at kensho\", \"bears like to eat honey\"],\n",
" embedding=OpenAIEmbeddings(),\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"model = ChatOpenAI()\n",
"output_parser = StrOutputParser()\n",
"\n",
"setup_and_retrieval = RunnableParallel(\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
")\n",
"chain = setup_and_retrieval | prompt | model | output_parser\n",
"\n",
"chain.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "markdown",
"id": "f0999140-6001-423b-970b-adf1dfdb4dec",
"metadata": {},
"source": [
"In this case, the composed chain is: "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5b88e9bb-f04a-4a56-87ec-19a0e6350763",
"metadata": {},
"outputs": [],
"source": [
"chain = setup_and_retrieval | prompt | model | output_parser"
]
},
{
"cell_type": "markdown",
"id": "6e929e15-40a5-4569-8969-384f636cab87",
"metadata": {},
"source": [
"To explain this, we first can see that the prompt template above takes in `context` and `question` as values to be substituted in the prompt. Before building the prompt template, we want to retrieve relevant documents to the search and include them as part of the context. \n",
"\n",
"As a preliminary step, weve setup the retriever using an in memory store, which can retrieve documents based on a query. This is a runnable component as well that can be chained together with other components, but you can also try to run it separately:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7319ef6-613b-4638-ad7d-4a2183702c1d",
"metadata": {},
"outputs": [],
"source": [
"retriever.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "markdown",
"id": "e6833844-f1c4-444c-a3d2-31b3c6b31d46",
"metadata": {},
"source": [
"We then use the `RunnableParallel` to prepare the expected inputs into the prompt by using the entries for the retrieved documents as well as the original user question, using the retriever for document search, and RunnablePassthrough to pass the users question:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dcbca26b-d6b9-4c24-806c-1ec8fdaab4ed",
"metadata": {},
"outputs": [],
"source": [
"setup_and_retrieval = RunnableParallel(\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
")"
]
},
{
"cell_type": "markdown",
"id": "68c721c1-048b-4a64-9d78-df54fe465992",
"metadata": {},
"source": [
"To review, the complete chain is:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1d5115a7-7b8e-458b-b936-26cc87ee81c4",
"metadata": {},
"outputs": [],
"source": [
"setup_and_retrieval = RunnableParallel(\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
")\n",
"chain = setup_and_retrieval | prompt | model | output_parser"
]
},
{
"cell_type": "markdown",
"id": "5c6f5f74-b387-48a0-bedd-1fae202cd10a",
"metadata": {},
"source": [
"With the flow being:\n",
"\n",
"1. The first steps create a `RunnableParallel` object with two entries. The first entry, `context` will include the document results fetched by the retriever. The second entry, `question` will contain the users original question. To pass on the question, we use `RunnablePassthrough` to copy this entry. \n",
"2. Feed the dictionary from the step above to the `prompt` component. It then takes the user input which is `question` as well as the retrieved document which is `context` to construct a prompt and output a PromptValue. \n",
"3. The `model` component takes the generated prompt, and passes into the OpenAI LLM model for evaluation. The generated output from the model is a `ChatMessage` object. \n",
"4. Finally, the `output_parser` component takes in a `ChatMessage`, and transforms this into a Python string, which is returned from the invoke method.\n",
"\n",
"```mermaid\n",
"graph LR\n",
" A(Question) --> B(RunnableParallel)\n",
" B -->|Question| C(Retriever)\n",
" B -->|Question| D(RunnablePassThrough)\n",
" C -->|context=retrieved docs| E(PromptTemplate)\n",
" D -->|question=Question| E\n",
" E -->|PromptValue| F(ChatModel) \n",
" F -->|ChatMessage| G(StrOutputParser)\n",
" G --> |String| H(Result)\n",
"```\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "8c2438df-164e-4bbe-b5f4-461695e45b0f",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"We recommend reading our [Why use LCEL](/docs/expression_language/why) section next to see a side-by-side comparison of the code needed to produce common functionality with and without LCEL."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -22,7 +22,7 @@
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough"
"from langchain.schema.runnable import RunnablePassthrough"
]
},
{

View File

@@ -43,7 +43,6 @@
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_core.runnables import ConfigurableField\n",
"\n",
"model = ChatOpenAI(temperature=0).configurable_fields(\n",
" temperature=ConfigurableField(\n",
@@ -265,7 +264,7 @@
"source": [
"from langchain.chat_models import ChatAnthropic, ChatOpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_core.runnables import ConfigurableField"
"from langchain.schema.runnable import ConfigurableField"
]
},
{
@@ -595,7 +594,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -26,7 +26,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "d3e893bf",
"metadata": {},
"outputs": [],
@@ -44,24 +44,19 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "dfdd8bf5",
"metadata": {},
"outputs": [],
"source": [
"from unittest.mock import patch\n",
"\n",
"import httpx\n",
"from openai import RateLimitError\n",
"\n",
"request = httpx.Request(\"GET\", \"/\")\n",
"response = httpx.Response(200, request=request)\n",
"error = RateLimitError(\"rate limit\", response=response, body=\"\")"
"from openai.error import RateLimitError"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 5,
"id": "e6fdffc1",
"metadata": {},
"outputs": [],
@@ -74,7 +69,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 27,
"id": "584461ab",
"metadata": {},
"outputs": [
@@ -88,10 +83,10 @@
],
"source": [
"# Let's use just the OpenAI LLm first, to show that we run into an error\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
" try:\n",
" print(openai_llm.invoke(\"Why did the chicken cross the road?\"))\n",
" except RateLimitError:\n",
" except:\n",
" print(\"Hit error\")"
]
},
@@ -111,10 +106,10 @@
],
"source": [
"# Now let's try with fallbacks to Anthropic\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
" try:\n",
" print(llm.invoke(\"Why did the chicken cross the road?\"))\n",
" except RateLimitError:\n",
" except:\n",
" print(\"Hit error\")"
]
},
@@ -153,10 +148,10 @@
" ]\n",
")\n",
"chain = prompt | llm\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
" try:\n",
" print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
" except RateLimitError:\n",
" except:\n",
" print(\"Hit error\")"
]
},
@@ -190,10 +185,10 @@
")\n",
"\n",
"chain = prompt | llm\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
" try:\n",
" print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
" except RateLimitError:\n",
" except:\n",
" print(\"Hit error\")"
]
},
@@ -216,7 +211,7 @@
"source": [
"# First let's create a chain with a ChatModel\n",
"# We add in a string output parser here so the outputs between the two are the same type\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"\n",
"chat_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
@@ -291,7 +286,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.1"
}
},
"nbformat": 4,

View File

@@ -1,17 +1,5 @@
{
"cells": [
{
"cell_type": "raw",
"id": "ce0e08fd",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"title: \"RunnableLambda: Run Custom Functions\"\n",
"keywords: [RunnableLambda, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "fbc4bf6e",
@@ -19,14 +7,14 @@
"source": [
"# Run custom functions\n",
"\n",
"You can use arbitrary functions in the pipeline.\n",
"You can use arbitrary functions in the pipeline\n",
"\n",
"Note that all inputs to these functions need to be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single input and unpacks it into multiple argument."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 4,
"id": "6bb221b3",
"metadata": {},
"outputs": [],
@@ -35,7 +23,7 @@
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain.schema.runnable import RunnableLambda\n",
"\n",
"\n",
"def length_function(text):\n",
@@ -68,17 +56,17 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 5,
"id": "5488ec85",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='3 + 9 equals 12.')"
"AIMessage(content='3 + 9 equals 12.', additional_kwargs={}, example=False)"
]
},
"execution_count": 2,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -94,23 +82,23 @@
"source": [
"## Accepting a Runnable Config\n",
"\n",
"Runnable lambdas can optionally accept a [RunnableConfig](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html#langchain_core.runnables.config.RunnableConfig), which they can use to pass callbacks, tags, and other configuration information to nested runs."
"Runnable lambdas can optionally accept a [RunnableConfig](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.config.RunnableConfig.html?highlight=runnableconfig#langchain.schema.runnable.config.RunnableConfig), which they can use to pass callbacks, tags, and other configuration information to nested runs."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 9,
"id": "80b3b5f6-5d58-44b9-807e-cce9a46bf49f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableConfig"
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnableConfig"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 10,
"id": "ff0daf0c-49dd-4d21-9772-e5fa133c5f36",
"metadata": {},
"outputs": [],
@@ -137,7 +125,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 12,
"id": "1a5e709e-9d75-48c7-bb9c-503251990505",
"metadata": {},
"outputs": [
@@ -145,7 +133,6 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'foo': 'bar'}\n",
"Tokens Used: 65\n",
"\tPrompt Tokens: 56\n",
"\tCompletion Tokens: 9\n",
@@ -158,10 +145,9 @@
"from langchain.callbacks import get_openai_callback\n",
"\n",
"with get_openai_callback() as cb:\n",
" output = RunnableLambda(parse_or_fix).invoke(\n",
" RunnableLambda(parse_or_fix).invoke(\n",
" \"{foo: bar}\", {\"tags\": [\"my-tag\"], \"callbacks\": [cb]}\n",
" )\n",
" print(output)\n",
" print(cb)"
]
},
@@ -190,7 +176,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -17,13 +17,6 @@
"Let's implement a custom output parser for comma-separated lists."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sync version"
]
},
{
"cell_type": "code",
"execution_count": 1,
@@ -34,7 +27,7 @@
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"Write a comma-separated list of 5 animals similar to: {animal}\"\n",
@@ -64,7 +57,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 8,
"metadata": {},
"outputs": [
{
@@ -73,7 +66,7 @@
"'lion, tiger, wolf, gorilla, panda'"
]
},
"execution_count": 3,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -159,81 +152,12 @@
"list_chain.invoke({\"animal\": \"bear\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Async version"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import AsyncIterator\n",
"\n",
"\n",
"async def asplit_into_list(\n",
" input: AsyncIterator[str],\n",
") -> AsyncIterator[List[str]]: # async def\n",
" buffer = \"\"\n",
" async for (\n",
" chunk\n",
" ) in input: # `input` is a `async_generator` object, so use `async for`\n",
" buffer += chunk\n",
" while \",\" in buffer:\n",
" comma_index = buffer.index(\",\")\n",
" yield [buffer[:comma_index].strip()]\n",
" buffer = buffer[comma_index + 1 :]\n",
" yield [buffer.strip()]\n",
"\n",
"\n",
"list_chain = str_chain | asplit_into_list"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['lion']\n",
"['tiger']\n",
"['wolf']\n",
"['gorilla']\n",
"['panda']\n"
]
}
],
"source": [
"async for chunk in list_chain.astream({\"animal\": \"bear\"}):\n",
" print(chunk, flush=True)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['lion', 'tiger', 'wolf', 'gorilla', 'panda']"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await list_chain.ainvoke({\"animal\": \"bear\"})"
]
"source": []
}
],
"metadata": {
@@ -252,7 +176,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 2
sidebar_position: 1
---
# How to

View File

@@ -1,29 +1,56 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e2596041-9b76-4e74-836f-e6235086bbf0",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"title: \"RunnableParallel: Manipulating data\"\n",
"keywords: [RunnableParallel, RunnableMap, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2",
"metadata": {},
"source": [
"# Manipulating inputs & output\n",
"# Parallelize steps\n",
"\n",
"RunnableParallel can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence.\n",
"RunnableParallel (aka. RunnableMap) makes it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7e1873d6-d4b6-43ac-96a1-edcf178201e0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'joke': AIMessage(content=\"Why don't bears wear shoes? \\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False),\n",
" 'poem': AIMessage(content=\"In woodland depths, bear prowls with might,\\nSilent strength, nature's sovereign, day and night.\", additional_kwargs={}, example=False)}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.runnable import RunnableParallel\n",
"\n",
"Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key.\n",
"model = ChatOpenAI()\n",
"joke_chain = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n",
"poem_chain = (\n",
" ChatPromptTemplate.from_template(\"write a 2-line poem about {topic}\") | model\n",
")\n",
"\n",
"\n"
"map_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)\n",
"\n",
"map_chain.invoke({\"topic\": \"bear\"})"
]
},
{
"cell_type": "markdown",
"id": "df867ae9-1cec-4c9e-9fef-21969b206af5",
"metadata": {},
"source": [
"## Manipulating outputs/inputs\n",
"Maps can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence."
]
},
{
@@ -44,12 +71,10 @@
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"from langchain.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
@@ -61,7 +86,6 @@
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"model = ChatOpenAI()\n",
"\n",
"retrieval_chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
@@ -78,133 +102,9 @@
"id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1",
"metadata": {},
"source": [
"::: {.callout-tip}\n",
"Note that when composing a RunnableParallel with another Runnable we don't even need to wrap our dictionary in the RunnableParallel class — the type conversion is handled for us. In the context of a chain, these are equivalent:\n",
":::\n",
"Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key.\n",
"\n",
"```\n",
"{\"context\": retriever, \"question\": RunnablePassthrough()}\n",
"```\n",
"\n",
"```\n",
"RunnableParallel({\"context\": retriever, \"question\": RunnablePassthrough()})\n",
"```\n",
"\n",
"```\n",
"RunnableParallel(context=retriever, question=RunnablePassthrough())\n",
"```\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "7c1b8baa-3a80-44f0-bb79-d22f79815d3d",
"metadata": {},
"source": [
"## Using itemgetter as shorthand\n",
"\n",
"Note that you can use Python's `itemgetter` as shorthand to extract data from the map when combining with `RunnableParallel`. You can find more information about itemgetter in the [Python Documentation](https://docs.python.org/3/library/operator.html#operator.itemgetter). \n",
"\n",
"In the example below, we use itemgetter to extract specific keys from the map:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "84fc49e1-2daf-4700-ae33-a0a6ed47d5f6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Harrison ha lavorato a Kensho.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\n",
"Answer in the following language: {language}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"chain = (\n",
" {\n",
" \"context\": itemgetter(\"question\") | retriever,\n",
" \"question\": itemgetter(\"question\"),\n",
" \"language\": itemgetter(\"language\"),\n",
" }\n",
" | prompt\n",
" | model\n",
" | StrOutputParser()\n",
")\n",
"\n",
"chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})"
]
},
{
"cell_type": "markdown",
"id": "bc2f9847-39aa-4fe4-9049-3a8969bc4bce",
"metadata": {},
"source": [
"## Parallelize steps\n",
"\n",
"RunnableParallel (aka. RunnableMap) makes it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "31f18442-f837-463f-bef4-8729368f5f8b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'joke': AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\"),\n",
" 'poem': AIMessage(content=\"In the wild's embrace, bear roams free,\\nStrength and grace, a majestic decree.\")}"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableParallel\n",
"\n",
"model = ChatOpenAI()\n",
"joke_chain = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n",
"poem_chain = (\n",
" ChatPromptTemplate.from_template(\"write a 2-line poem about {topic}\") | model\n",
")\n",
"\n",
"map_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)\n",
"\n",
"map_chain.invoke({\"topic\": \"bear\"})"
"Note that when composing a RunnableMap when another Runnable we don't even need to wrap our dictionary in the RunnableMap class — the type conversion is handled for us."
]
},
{
@@ -214,7 +114,7 @@
"source": [
"## Parallelism\n",
"\n",
"RunnableParallel are also useful for running independent processes in parallel, since each Runnable in the map is executed in parallel. For example, we can see our earlier `joke_chain`, `poem_chain` and `map_chain` all have about the same runtime, even though `map_chain` executes both of the other two."
"RunnableMaps are also useful for running independent processes in parallel, since each Runnable in the map is executed in parallel. For example, we can see our earlier `joke_chain`, `poem_chain` and `map_chain` all have about the same runtime, even though `map_chain` executes both of the other two."
]
},
{
@@ -294,7 +194,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -10,13 +10,11 @@
"The `RunnableWithMessageHistory` let's us add message history to certain types of chains.\n",
"\n",
"Specifically, it can be used for any Runnable that takes as input one of\n",
"\n",
"* a sequence of `BaseMessage`\n",
"* a dict with a key that takes a sequence of `BaseMessage`\n",
"* a dict with a key that takes the latest message(s) as a string or sequence of `BaseMessage`, and a separate key that takes historical messages\n",
"\n",
"And returns as output one of\n",
"\n",
"* a string that can be treated as the contents of an `AIMessage`\n",
"* a sequence of `BaseMessage`\n",
"* a dict with a key that contains a sequence of `BaseMessage`\n",
@@ -134,8 +132,8 @@
"from langchain.chat_models import ChatAnthropic\n",
"from langchain.memory.chat_message_histories import RedisChatMessageHistory\n",
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory"
"from langchain.schema.chat_history import BaseChatMessageHistory\n",
"from langchain.schema.runnable.history import RunnableWithMessageHistory"
]
},
{
@@ -253,10 +251,7 @@
"id": "da3d1feb-b4bb-4624-961c-7db2e1180df7",
"metadata": {},
"source": [
":::tip\n",
"\n",
"[Langsmith trace](https://smith.langchain.com/public/863a003b-7ca8-4b24-be9e-d63ec13c106e/r)\n",
"\n",
":::tip [Langsmith trace](https://smith.langchain.com/public/863a003b-7ca8-4b24-be9e-d63ec13c106e/r)\n",
":::"
]
},
@@ -294,10 +289,10 @@
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_core.runnables import RunnableParallel\n",
"from langchain.schema.messages import HumanMessage\n",
"from langchain.schema.runnable import RunnableMap\n",
"\n",
"chain = RunnableParallel({\"output_message\": ChatAnthropic(model=\"claude-2\")})\n",
"chain = RunnableMap({\"output_message\": ChatAnthropic(model=\"claude-2\")})\n",
"chain_with_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: RedisChatMessageHistory(session_id, url=REDIS_URL),\n",
@@ -339,10 +334,7 @@
"id": "b898d1b1-11e6-4d30-a8dd-cc5e45533611",
"metadata": {},
"source": [
":::tip\n",
"\n",
"[LangSmith trace](https://smith.langchain.com/public/f6c3e1d1-a49d-4955-a9fa-c6519df74fa7/r)\n",
"\n",
":::tip [LangSmith trace](https://smith.langchain.com/public/f6c3e1d1-a49d-4955-a9fa-c6519df74fa7/r)\n",
":::"
]
},

View File

@@ -1,159 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d35de667-0352-4bfb-a890-cebe7f676fe7",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"title: \"RunnablePassthrough: Passing data through\"\n",
"keywords: [RunnablePassthrough, RunnableParallel, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2",
"metadata": {},
"source": [
"# Passing data through\n",
"\n",
"RunnablePassthrough allows to pass inputs unchanged or with the addition of extra keys. This typically is used in conjuction with RunnableParallel to assign data to a new key in the map. \n",
"\n",
"RunnablePassthrough() called on it's own, will simply take the input and pass it through. \n",
"\n",
"RunnablePassthrough called with assign (`RunnablePassthrough.assign(...)`) will take the input, and will add the extra arguments passed to the assign function. \n",
"\n",
"See the example below:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "03988b8d-d54c-4492-8707-1594372cf093",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'passed': {'num': 1}, 'extra': {'num': 1, 'mult': 3}, 'modified': 2}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"\n",
"runnable = RunnableParallel(\n",
" passed=RunnablePassthrough(),\n",
" extra=RunnablePassthrough.assign(mult=lambda x: x[\"num\"] * 3),\n",
" modified=lambda x: x[\"num\"] + 1,\n",
")\n",
"\n",
"runnable.invoke({\"num\": 1})"
]
},
{
"cell_type": "markdown",
"id": "702c7acc-cd31-4037-9489-647df192fd7c",
"metadata": {},
"source": [
"As seen above, `passed` key was called with `RunnablePassthrough()` and so it simply passed on `{'num': 1}`. \n",
"\n",
"In the second line, we used `RunnablePastshrough.assign` with a lambda that multiplies the numerical value by 3. In this cased, `extra` was set with `{'num': 1, 'mult': 3}` which is the original value with the `mult` key added. \n",
"\n",
"Finally, we also set a third key in the map with `modified` which uses a labmda to set a single value adding 1 to the num, which resulted in `modified` key with the value of `2`."
]
},
{
"cell_type": "markdown",
"id": "15187a3b-d666-4b9b-a258-672fc51fe0e2",
"metadata": {},
"source": [
"## Retrieval Example\n",
"\n",
"In the example below, we see a use case where we use RunnablePassthrough along with RunnableMap. "
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "267d1460-53c1-4fdb-b2c3-b6a1eb7fccff",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Harrison worked at Kensho.'"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"model = ChatOpenAI()\n",
"\n",
"retrieval_chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | model\n",
" | StrOutputParser()\n",
")\n",
"\n",
"retrieval_chain.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "markdown",
"id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1",
"metadata": {},
"source": [
"Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key. In this case, the RunnablePassthrough allows us to pass on the user's question to the prompt and model. \n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,16 +1,5 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 3\n",
"title: \"RunnableBranch: Dynamically route logic based on input\"\n",
"keywords: [RunnableBranch, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "4b47436a",
@@ -53,7 +42,7 @@
"source": [
"from langchain.chat_models import ChatAnthropic\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser"
"from langchain.schema.output_parser import StrOutputParser"
]
},
{
@@ -74,7 +63,7 @@
"chain = (\n",
" PromptTemplate.from_template(\n",
" \"\"\"Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.\n",
"\n",
" \n",
"Do not respond with more than one word.\n",
"\n",
"<question>\n",
@@ -164,7 +153,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableBranch\n",
"from langchain.schema.runnable import RunnableBranch\n",
"\n",
"branch = RunnableBranch(\n",
" (lambda x: \"anthropic\" in x[\"topic\"].lower(), anthropic_chain),\n",
@@ -279,7 +268,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableLambda\n",
"from langchain.schema.runnable import RunnableLambda\n",
"\n",
"full_chain = {\"topic\": chain, \"question\": lambda x: x[\"question\"]} | RunnableLambda(\n",
" route\n",
@@ -304,7 +293,7 @@
}
],
"source": [
"full_chain.invoke({\"question\": \"how do I use Anthropic?\"})"
"full_chain.invoke({\"question\": \"how do I use Anthroipc?\"})"
]
},
{

View File

@@ -20,7 +20,7 @@ Whenever your LCEL chains have steps that can be executed in parallel (eg if you
Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We're currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
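A minimal sketch of both knobs (the chains here are illustrative, assuming model credentials are configured):

```python
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | ChatOpenAI()

# Retry transient failures with exponential backoff, up to 3 attempts.
reliable_chain = chain.with_retry(stop_after_attempt=3)

# Or fall back to a different provider if the primary chain raises.
resilient_chain = chain.with_fallbacks([prompt | ChatAnthropic()])
```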
**Access intermediate results**
For more complex chains it's often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it's available on every [LangServe](/docs/langserve) server.
For more complex chains it's often very useful to access the results of intermediate steps even before the final output is produced. This can be used let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it's available on every [LangServe](/docs/langserve) server.
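One way to surface intermediate results is the async log stream; a minimal sketch (the chain and input are illustrative):

```python
import asyncio

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

chain = (
    ChatPromptTemplate.from_template("tell me a joke about {topic}")
    | ChatOpenAI()
    | StrOutputParser()
)

async def main() -> None:
    # Each patch describes run state, including intermediate
    # step outputs as they become available.
    async for patch in chain.astream_log({"topic": "parrots"}):
        print(patch)

asyncio.run(main())
```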
**Input and output schemas**
Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
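A minimal sketch (any LCEL chain exposes these; the example chain is illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | ChatOpenAI()

chain.input_schema.schema()   # JSONSchema dict describing the expected {"topic": ...} input
chain.output_schema.schema()  # JSONSchema dict for the chat message output
```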
@@ -30,4 +30,4 @@ As your chains get more and more complex, it becomes increasingly important to u
With LCEL, **all** steps are automatically logged to [LangSmith](/docs/langsmith/) for maximum observability and debuggability.
**Seamless LangServe deployment integration**
Any chain created with LCEL can be easily deployed using [LangServe](/docs/langserve).
Any chain created with LCEL can be easily deployed using [LangServe](/docs/langserve).

View File

@@ -6,7 +6,7 @@
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"sidebar_position: 0\n",
"title: Interface\n",
"---"
]
@@ -16,7 +16,7 @@
"id": "9a9acd2e",
"metadata": {},
"source": [
"To make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. The `Runnable` protocol is implemented for most components. \n",
"To make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.base.Runnable.html#langchain.schema.runnable.base.Runnable) protocol. The `Runnable` protocol is implemented for most components. \n",
"This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. \n",
"The standard interface includes:\n",
"\n",
@@ -660,9 +660,9 @@
],
"source": [
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"from langchain.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
@@ -920,7 +920,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableParallel\n",
"from langchain.schema.runnable import RunnableParallel\n",
"\n",
"chain1 = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n",
"chain2 = (\n",

File diff suppressed because it is too large

View File

@@ -29,20 +29,6 @@ If you want to install from source, you can do so by cloning the repo and be sur
pip install -e .
```
## LangChain community
The `langchain-community` package contains third-party integrations. It is automatically installed by `langchain`, but can also be used separately. Install with:
```bash
pip install langchain-community
```
## LangChain core
The `langchain-core` package contains base abstractions that the rest of the LangChain ecosystem uses, along with the LangChain Expression Language. It is automatically installed by `langchain`, but can also be used separately. Install with:
```bash
pip install langchain-core
```
## LangChain experimental
The `langchain-experimental` package holds experimental LangChain code, intended for research and experimental uses.
Install with:
@@ -75,4 +61,4 @@ If not using LangChain, install with:
```bash
pip install langsmith
```
```

View File

@@ -29,11 +29,6 @@ The main value props of the LangChain packages are:
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
The LangChain libraries themselves are made up of several different packages.
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
## Get started
[Heres](/docs/get_started/installation) how to install LangChain, set up your environment, and start building.
@@ -84,7 +79,7 @@ Walkthroughs and techniques for common end-to-end use cases, like:
### [Integrations](/docs/integrations/providers/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/).
### [Guides](/docs/guides/guides/debugging)
### [Guides](/docs/guides/adapters/openai)
Best practices for developing with LangChain.
### [API reference](https://api.python.langchain.com)

View File

@@ -66,7 +66,7 @@ If you do want to use LangSmith, after you sign up at the link above, make sure
```shell
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
export LANGCHAIN_API_KEY=...
```
### LangServe
@@ -154,7 +154,7 @@ chat_model.invoke(messages)
<details> <summary>Go deeper</summary>
`LLM.invoke` and `ChatModel.invoke` actually both support as input any of `Union[str, List[BaseMessage], PromptValue]`.
`PromptValue` is an object that defines its own custom logic for returning its inputs either as a string or as messages.
`PromptValue` is an object that defines it's own custom logic for returning it's inputs either as a string or as messages.
`LLM`s have logic for coercing any of these into a string, and `ChatModel`s have logic for coercing any of these to messages.
The fact that `LLM` and `ChatModel` accept the same inputs means that you can directly swap them for one another in most chains without breaking anything,
though it's of course important to think about how inputs are being coerced and how that may affect model performance.
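For example (a minimal sketch, assuming an OpenAI API key is configured):

```python
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.schema.messages import HumanMessage

llm = OpenAI()
chat_model = ChatOpenAI()

text = "What would be a good company name for a company that makes colorful socks?"
messages = [HumanMessage(content=text)]

llm.invoke(text)            # -> str
llm.invoke(messages)        # messages coerced to a string prompt
chat_model.invoke(text)     # string coerced to a HumanMessage
chat_model.invoke(messages) # -> BaseMessage
```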
@@ -166,7 +166,7 @@ To dive deeper on models head to the [Language models](/docs/modules/model_io/mo
Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.
In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it would be great if the user only had to provide the description of a company/product without worrying about giving the model instructions.
In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it would be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.
PromptTemplates help with exactly this!
They bundle up all the logic for going from user input into a fully formatted prompt.
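A minimal sketch (the product string is illustrative):

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
prompt.format(product="colorful socks")
# -> 'What is a good name for a company that makes colorful socks?'
```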
@@ -220,8 +220,8 @@ ChatPromptTemplates can also be constructed in other ways - see the [section on
### Output parsers
`OutputParser`s convert the raw output of a language model into a format that can be used downstream.
There are a few main types of `OutputParser`s, including:
`OutputParsers` convert the raw output of a language model into a format that can be used downstream.
There are few main types of `OutputParser`s, including:
- Convert text from `LLM` into structured information (e.g. JSON)
- Convert a `ChatMessage` into just a string
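For instance, the list parser used later in this guide turns comma-separated text into a Python list (a minimal sketch):

```python
from langchain.output_parsers import CommaSeparatedListOutputParser

output_parser = CommaSeparatedListOutputParser()
output_parser.parse("hi, bye")
# -> ['hi', 'bye']
```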
@@ -344,7 +344,7 @@ category_chain = chat_prompt | ChatOpenAI() | CommaSeparatedListOutputParser()
app = FastAPI(
title="LangChain Server",
version="1.0",
description="A simple API server using LangChain's Runnable interfaces",
description="A simple api server using Langchain's Runnable interfaces",
)
# 3. Adding chain route

View File

@@ -5,9 +5,7 @@
"id": "700a516b",
"metadata": {},
"source": [
"# OpenAI Adapter(Old)\n",
"\n",
"**Please ensure OpenAI library is less than 1.0.0; otherwise, refer to the newer doc [OpenAI Adapter](./openai).**\n",
"# OpenAI Adapter\n",
"\n",
"A lot of people get started with OpenAI but want to explore other models. LangChain's integrations with many model providers make this easy to do so. While LangChain has it's own message and model APIs, we've also made it as easy as possible to explore other models by exposing an adapter to adapt LangChain models to the OpenAI api.\n",
"\n",
@@ -51,6 +49,18 @@
"Original OpenAI call"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "e1d27dfa",
"metadata": {},
"outputs": [],
"source": [
"result = openai.ChatCompletion.create(\n",
" messages=messages, model=\"gpt-3.5-turbo\", temperature=0\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 15,
@@ -69,9 +79,6 @@
}
],
"source": [
"result = openai.ChatCompletion.create(\n",
" messages=messages, model=\"gpt-3.5-turbo\", temperature=0\n",
")\n",
"result[\"choices\"][0][\"message\"].to_dict_recursive()"
]
},
@@ -83,6 +90,18 @@
"LangChain OpenAI wrapper call"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "87c2d515",
"metadata": {},
"outputs": [],
"source": [
"lc_result = lc_openai.ChatCompletion.create(\n",
" messages=messages, model=\"gpt-3.5-turbo\", temperature=0\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
@@ -101,9 +120,6 @@
}
],
"source": [
"lc_result = lc_openai.ChatCompletion.create(\n",
" messages=messages, model=\"gpt-3.5-turbo\", temperature=0\n",
")\n",
"lc_result[\"choices\"][0][\"message\"]"
]
},
@@ -115,6 +131,18 @@
"Swapping out model providers"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "7a2c011c",
"metadata": {},
"outputs": [],
"source": [
"lc_result = lc_openai.ChatCompletion.create(\n",
" messages=messages, model=\"claude-2\", temperature=0, provider=\"ChatAnthropic\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
@@ -133,9 +161,6 @@
}
],
"source": [
"lc_result = lc_openai.ChatCompletion.create(\n",
" messages=messages, model=\"claude-2\", temperature=0, provider=\"ChatAnthropic\"\n",
")\n",
"lc_result[\"choices\"][0][\"message\"]"
]
},
@@ -277,7 +302,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.10.1"
}
},
"nbformat": 4,

View File

@@ -12,7 +12,7 @@ Platforms with tracing capabilities like [LangSmith](/docs/langsmith/) and [Wand
For anyone building production-grade LLM applications, we highly recommend using a platform like this.
![LangSmith run](../../static/img/run_details.png)
![LangSmith run](/img/run_details.png)
## `set_debug` and `set_verbose`
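Both toggles live in `langchain.globals`; a minimal sketch:

```python
from langchain.globals import set_debug, set_verbose

set_debug(True)    # log all component inputs/outputs in full detail
set_verbose(True)  # lighter-weight logging of key events
```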

View File

@@ -89,7 +89,6 @@
"- reference (str) (Only for the labeled_pairwise_string variant) The reference response.\n",
"\n",
"They return a dictionary with the following values:\n",
"\n",
"- value: 'A' or 'B', indicating whether `prediction` or `prediction_b` is preferred, respectively\n",
"- score: Integer 0 or 1 mapped from the 'value', where a score of 1 would mean that the first `prediction` is preferred, and a score of 0 would mean `prediction_b` is preferred.\n",
"- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
@@ -160,7 +159,6 @@
"## Defining the Criteria\n",
"\n",
"By default, the LLM is instructed to select the 'preferred' response based on helpfulness, relevance, correctness, and depth of thought. You can customize the criteria by passing in a `criteria` argument, where the criteria could take any of the following forms:\n",
"\n",
"- [`Criteria`](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.Criteria.html#langchain.evaluation.criteria.eval_chain.Criteria) enum or its string value - to use one of the default criteria and their descriptions\n",
"- [Constitutional principal](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.models.ConstitutionalPrinciple.html#langchain.chains.constitutional_ai.models.ConstitutionalPrinciple) - use one any of the constitutional principles defined in langchain\n",
"- Dictionary: a list of custom criteria, where the key is the name of the criteria, and the value is the description.\n",

View File

@@ -20,21 +20,6 @@ We also are working to share guides and cookbooks that demonstrate how to use th
- [Chain Comparisons](/docs/guides/evaluation/examples/comparisons): This example uses a comparison evaluator to predict the preferred output. It reviews ways to measure confidence intervals to select statistically significant differences in aggregate preference scores across different models or prompts.
## LangSmith Evaluation
LangSmith provides an integrated evaluation and tracing framework that allows you to check for regressions, compare systems, and easily identify and fix any sources of errors and performance issues. Check out the docs on [LangSmith Evaluation](https://docs.smith.langchain.com/category/testing--evaluation) and additional [cookbooks](https://docs.smith.langchain.com/category/langsmith-cookbook) for more detailed information on evaluating your applications.
## LangChain benchmarks
Your application quality is a function both of the LLM you choose and the prompting and data retrieval strategies you employ to provide model context. We have published a number of benchmark tasks within the [LangChain Benchmarks](https://langchain-ai.github.io/langchain-benchmarks/) package to grade different LLM systems on tasks such as:
- Agent tool use
- Retrieval-augmented question-answering
- Structured Extraction
Check out the docs for examples and leaderboard information.
## Reference Docs
For detailed information on the available evaluators, including how to instantiate, configure, and customize them, check out the [reference documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.evaluation) directly.

View File

@@ -9,7 +9,7 @@
"# Embedding Distance\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/embedding_distance.ipynb)\n",
"\n",
"To measure semantic similarity (or dissimilarity) between a prediction and a reference label string, you could use a vector distance metric the two embedded representations using the `embedding_distance` evaluator.<a name=\"cite_ref-1\"></a>[<sup>[1]</sup>](#cite_note-1)\n",
"To measure semantic similarity (or dissimilarity) between a prediction and a reference label string, you could use a vector vector distance metric the two embedded representations using the `embedding_distance` evaluator.<a name=\"cite_ref-1\"></a>[<sup>[1]</sup>](#cite_note-1)\n",
"\n",
"\n",
"**Note:** This returns a **distance** score, meaning that the lower the number, the **more** similar the prediction is to the reference, according to their embedded representation.\n",

View File

@@ -5,13 +5,13 @@
"id": "465cfbef-5bba-4b3b-b02d-fe2eba39db17",
"metadata": {},
"source": [
"# JSON Evaluators\n",
"# Evaluating Structured Output: JSON Evaluators\n",
"\n",
"Evaluating [extraction](https://python.langchain.com/docs/use_cases/extraction) and function calling applications often comes down to validation that the LLM's string output can be parsed correctly and how it compares to a reference object. The following `JSON` validators provide functionality to check your model's output consistently.\n",
"Evaluating [extraction](https://python.langchain.com/docs/use_cases/extraction) and function calling applications often comes down to validation that the LLM's string output can be parsed correctly and how it compares to a reference object. The following JSON validators provide provide functionality to check your model's output in a consistent way.\n",
"\n",
"## JsonValidityEvaluator\n",
"\n",
"The `JsonValidityEvaluator` is designed to check the validity of a `JSON` string prediction.\n",
"The `JsonValidityEvaluator` is designed to check the validity of a JSON string prediction.\n",
"\n",
"### Overview:\n",
"- **Requires Input?**: No\n",
@@ -377,7 +377,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.2"
}
},
"nbformat": 4,

View File

@@ -8,12 +8,9 @@
"# String Distance\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/string_distance.ipynb)\n",
"\n",
">In information theory, linguistics, and computer science, the [Levenshtein distance (Wikipedia)](https://en.wikipedia.org/wiki/Levenshtein_distance) is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. It is named after the Soviet mathematician Vladimir Levenshtein, who considered this distance in 1965.\n",
"One of the simplest ways to compare an LLM or chain's string output against a reference label is by using string distance measurements such as Levenshtein or postfix distance. This can be used alongside approximate/fuzzy matching criteria for very basic unit testing.\n",
"\n",
"\n",
"One of the simplest ways to compare an LLM or chain's string output against a reference label is by using string distance measurements such as `Levenshtein` or `postfix` distance. This can be used alongside approximate/fuzzy matching criteria for very basic unit testing.\n",
"\n",
"This can be accessed using the `string_distance` evaluator, which uses distance metrics from the [rapidfuzz](https://github.com/maxbachmann/RapidFuzz) library.\n",
"This can be accessed using the `string_distance` evaluator, which uses distance metric's from the [rapidfuzz](https://github.com/maxbachmann/RapidFuzz) library.\n",
"\n",
"**Note:** The returned scores are _distances_, meaning lower is typically \"better\".\n",
"\n",
@@ -216,9 +213,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
}

View File

@@ -28,7 +28,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 18,
"id": "d3e893bf",
"metadata": {},
"outputs": [],
@@ -46,24 +46,19 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 21,
"id": "dfdd8bf5",
"metadata": {},
"outputs": [],
"source": [
"from unittest.mock import patch\n",
"\n",
"import httpx\n",
"from openai import RateLimitError\n",
"\n",
"request = httpx.Request(\"GET\", \"/\")\n",
"response = httpx.Response(200, request=request)\n",
"error = RateLimitError(\"rate limit\", response=response, body=\"\")"
"from openai.error import RateLimitError"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 24,
"id": "e6fdffc1",
"metadata": {},
"outputs": [],
@@ -76,7 +71,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 27,
"id": "584461ab",
"metadata": {},
"outputs": [
@@ -90,10 +85,10 @@
],
"source": [
"# Let's use just the OpenAI LLm first, to show that we run into an error\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
" try:\n",
" print(openai_llm.invoke(\"Why did the chicken cross the road?\"))\n",
" except RateLimitError:\n",
" except:\n",
" print(\"Hit error\")"
]
},
@@ -113,10 +108,10 @@
],
"source": [
"# Now let's try with fallbacks to Anthropic\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
" try:\n",
" print(llm.invoke(\"Why did the chicken cross the road?\"))\n",
" except RateLimitError:\n",
" except:\n",
" print(\"Hit error\")"
]
},
@@ -155,10 +150,10 @@
" ]\n",
")\n",
"chain = prompt | llm\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
"with patch(\"openai.ChatCompletion.create\", side_effect=RateLimitError()):\n",
" try:\n",
" print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
" except RateLimitError:\n",
" except:\n",
" print(\"Hit error\")"
]
},
@@ -181,7 +176,7 @@
"source": [
"# First let's create a chain with a ChatModel\n",
"# We add in a string output parser here so the outputs between the two are the same type\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"\n",
"chat_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
@@ -436,7 +431,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.10.12"
}
},
"nbformat": 4,

Some files were not shown because too many files have changed in this diff