Compare commits: v0.0.220...vwp/struct (1 commit)

| Author | SHA1 | Date |
|---|---|---|
|  | 3612fddb7a |  |
@@ -1,37 +0,0 @@
# Dev container

This project includes a [dev container](https://containers.dev/), which lets you use a container as a full-featured dev environment.

You can use the dev container configuration in this folder to build and run the app without needing to install any of its tools locally! You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).

## GitHub Codespaces

[Open in GitHub Codespaces](https://codespaces.new/hwchase17/langchain)

You may use the button above, or follow these steps to open this repo in a Codespace:

1. Click the **Code** drop-down menu at the top of https://github.com/hwchase17/langchain.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master**.

For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).

## VS Code Dev Containers

[Open in Dev Containers](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/hwchase17/langchain)

If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.

You can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:

1. If this is your first time using a development container, please ensure your system meets the pre-reqs (i.e. have Docker installed) in the [getting started steps](https://aka.ms/vscode-remote/containers/getting-started).

2. Open a locally cloned copy of the code:

   - Clone this repository to your local filesystem.
   - Press <kbd>F1</kbd> and select the **Dev Containers: Open Folder in Container...** command.
   - Select the cloned copy of this folder, wait for the container to start, and try things out!

You can learn more in the [Dev Containers documentation](https://code.visualstudio.com/docs/devcontainers/containers).

## Tips and tricks

* If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.
@@ -1,36 +0,0 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-docker-compose
{
    // Name for the dev container
    "name": "langchain",

    // Point to a Docker Compose file
    "dockerComposeFile": "./docker-compose.yaml",

    // Required when using Docker Compose. The name of the service to connect to once running
    "service": "langchain",

    // The optional 'workspaceFolder' property is the path VS Code should open by default when
    // connected. This is typically a file mount in .devcontainer/docker-compose.yml
    "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",

    // Prevent the container from shutting down
    "overrideCommand": true

    // Features to add to the dev container. More info: https://containers.dev/features
    // "features": {
    //     "ghcr.io/devcontainers-contrib/features/poetry:2": {}
    // }

    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    // "forwardPorts": [],

    // Uncomment the next line to run commands after the container is created.
    // "postCreateCommand": "cat /etc/os-release",

    // Configure tool-specific properties.
    // "customizations": {},

    // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
    // "remoteUser": "root"
}
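This configuration can also be smoke-tested outside VS Code with the Dev Containers CLI. A minimal sketch, assuming Node.js and the `@devcontainers/cli` package are available and the commands are run from the repository root:

```bash
npm install -g @devcontainers/cli   # assumes a working Node.js install
devcontainer up --workspace-folder .  # builds and starts the dev container defined above
```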
@@ -1,32 +0,0 @@
version: '3'
services:
  langchain:
    build:
      dockerfile: dev.Dockerfile
      context: ..
    volumes:
      # Update this to wherever you want VS Code to mount the folder of your project
      - ..:/workspaces:cached
    networks:
      - langchain-network
    # environment:
    #   MONGO_ROOT_USERNAME: root
    #   MONGO_ROOT_PASSWORD: example123
    # depends_on:
    #   - mongo
  # mongo:
  #   image: mongo
  #   restart: unless-stopped
  #   environment:
  #     MONGO_INITDB_ROOT_USERNAME: root
  #     MONGO_INITDB_ROOT_PASSWORD: example123
  #   ports:
  #     - "27017:27017"
  #   networks:
  #     - langchain-network

networks:
  langchain-network:
    driver: bridge
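The `langchain` service above can also be brought up directly with Docker Compose, outside of any dev container. A minimal sketch, assuming it is run from the `.devcontainer` directory so the `context: ..` path resolves:

```bash
docker compose -f docker-compose.yaml build langchain   # builds from dev.Dockerfile with the repo root as context
docker compose -f docker-compose.yaml up -d langchain   # starts the service on langchain-network
```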
.gitattributes (vendored, 3 changes)
@@ -1,3 +0,0 @@
* text=auto eol=lf
*.{cmd,[cC][mM][dD]} text eol=crlf
*.{bat,[bB][aA][tT]} text eol=crlf
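A quick way to confirm which of these attributes apply to a given path is `git check-attr`. A small sketch, using a hypothetical file name:

```bash
# Shows the text/eol attributes Git resolves for the path (crlf expected for .cmd files)
git check-attr text eol -- scripts/build.cmd
```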
.github/CONTRIBUTING.md (vendored, 125 changes)
@@ -2,64 +2,60 @@
 Hi there! Thank you for even being interested in contributing to LangChain.
 As an open source project in a rapidly developing field, we are extremely open
-to contributions, whether they be in the form of new features, improved infra, better documentation, or bug fixes.
-
-## 🗺️ Guidelines
-
-### 👩‍💻 Contributing Code
+to contributions, whether it be in the form of a new feature, improved infra, or better documentation.
 
 To contribute to this project, please follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
 Please do not try to push directly to this repo unless you are a maintainer.
 
 Please follow the checked-in pull request template when opening pull requests. Note related issues and tag relevant
 maintainers.
 
 Pull requests cannot land without passing the formatting, linting and testing checks first. See
 [Common Tasks](#-common-tasks) for how to run these checks locally.
 
-It's essential that we maintain great documentation and testing. If you:
-- Fix a bug
-  - Add a relevant unit or integration test when possible. These live in `tests/unit_tests` and `tests/integration_tests`.
-- Make an improvement
-  - Update any affected example notebooks and documentation. These live in `docs`.
-  - Update unit and integration tests when relevant.
-- Add a feature
-  - Add a demo notebook in `docs/modules`.
-  - Add unit and integration tests.
-
-We're a small, building-oriented team. If there's something you'd like to add or change, opening a pull request is the
-best way to get our attention.
+## 🗺️Contributing Guidelines
 
 ### 🚩GitHub Issues
 
 Our [issues](https://github.com/hwchase17/langchain/issues) page is kept up to date
-with bugs, improvements, and feature requests.
+with bugs, improvements, and feature requests. There is a taxonomy of labels to help
+with sorting and discovery of issues of interest. These include:
 
-There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help
-organize issues.
+- prompts: related to prompt tooling/infra.
+- llms: related to LLM wrappers/tooling/infra.
+- chains
+- utilities: related to different types of utilities to integrate with (Python, SQL, etc.).
+- agents
+- memory
+- applications: related to example applications to build
 
 If you start working on an issue, please assign it to yourself.
 
-If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature.
-If two issues are related, or blocking, please link them rather than combining them.
+If you are adding an issue, please try to keep it focused on a single modular bug/improvement/feature.
+If the two issues are related, or blocking, please link them rather than keep them as one single one.
 
 We will try to keep these issues as up to date as possible, though
 with the rapid rate of development in this field some may get out of date.
-If you notice this happening, please let us know.
+If you notice this happening, please just let us know.
 
 ### 🙋Getting Help
 
-Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting setup, please
-contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is
-smooth for future contributors.
+Although we try to have a developer setup to make it as easy as possible for others to contribute (see below)
+it is possible that some pain point may arise around environment setup, linting, documentation, or other.
+Should that occur, please contact a maintainer! Not only do we want to help get you unblocked,
+but we also want to make sure that the process is smooth for future contributors.
 
 In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase.
-If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
-we do not want these to get in the way of getting good code into the codebase.
+If you are finding these difficult (or even just annoying) to work with,
+feel free to contact a maintainer for help - we do not want these to get in the way of getting
+good code into the codebase.
 
-## 🚀 Quick Start
+### 🏭Release process
 
-> **Note:** You can run this repository locally (which is described below) or in a [development container](https://containers.dev/) (which is described in the [.devcontainer folder](https://github.com/hwchase17/langchain/tree/master/.devcontainer)).
+As of now, LangChain has an ad hoc release process: releases are cut with high frequency by
+a developer and published to [PyPI](https://pypi.org/project/langchain/).
+
+LangChain follows the [semver](https://semver.org/) versioning standard. However, as pre-1.0 software,
+even patch releases may contain [non-backwards-compatible changes](https://semver.org/#spec-item-4).
+
+If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
+If you have a Twitter account you would like us to mention, please let us know in the PR or in another manner.
+
+## 🚀Quick Start
 
 This project uses [Poetry](https://python-poetry.org/) as a dependency manager. Check out Poetry's [documentation on how to install it](https://python-poetry.org/docs/#installation) on your system before proceeding.

@@ -81,7 +77,7 @@ This will install all requirements for running the package, examples, linting, f
 
 Now, you should be able to run the common tasks in the following section. To double check, run `make test`, all tests should pass. If they don't you may need to pip install additional dependencies, such as `numexpr` and `openapi_schema_pydantic`.
 
-## ✅ Common Tasks
+## ✅Common Tasks
 
 Type `make` for a list of common tasks.
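The quick-start check described above boils down to two commands. A minimal sketch, assuming Poetry is already installed:

```bash
poetry install   # set up the development environment
make test        # all unit tests should pass
```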
@@ -117,37 +113,8 @@ To get a report of current coverage, run the following:
 make coverage
 ```
 
-### Working with Optional Dependencies
-
-Langchain relies heavily on optional dependencies to keep the Langchain package lightweight.
-
-If you're adding a new dependency to Langchain, assume that it will be an optional dependency, and
-that most users won't have it installed.
-
-Users that do not have the dependency installed should be able to **import** your code without
-any side effects (no warnings, no errors, no exceptions).
-
-To introduce the dependency to the pyproject.toml file correctly, please do the following:
-
-1. Add the dependency to the main group as an optional dependency
-   ```bash
-   poetry add --optional [package_name]
-   ```
-2. Open pyproject.toml and add the dependency to the `extended_testing` extra
-3. Relock the poetry file to update the extra.
-   ```bash
-   poetry lock --no-update
-   ```
-4. Add a unit test that at the very least attempts to import the new code. Ideally the unit
-   test makes use of lightweight fixtures to test the logic of the code.
-5. Please use the `@pytest.mark.requires(package_name)` decorator for any tests that require the dependency.
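As a concrete illustration of steps 4 and 5 above, such a unit test might look like the following. This is a sketch with a hypothetical package name, not code from the repository:

```python
import pytest

# "some_optional_pkg" is a hypothetical optional dependency used for illustration.
@pytest.mark.requires("some_optional_pkg")
def test_some_optional_pkg_importable() -> None:
    # At the very least, the new code should import cleanly when the
    # optional dependency is installed; the marker skips it otherwise.
    import some_optional_pkg  # noqa: F401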
```
 
-### Testing
-
-See section about optional dependencies.
 
 #### Unit Tests
 
 Unit tests cover modular logic that does not require calls to outside APIs.
 
 To run unit tests:

@@ -164,20 +131,8 @@ make docker_tests
 
 If you add new logic, please add a unit test.
 
 #### Integration Tests
 
 Integration tests cover logic that requires making calls to outside APIs (often integration with other services).
 
-**warning** Almost no tests should be integration tests.
-
-Tests that require making network connections make it difficult for other
-developers to test the code.
-
-Instead favor relying on `responses` library and/or mock.patch to mock
-requests using small fixtures.
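The mocking approach recommended above can look like the following sketch, which uses the `responses` library with a made-up endpoint and test name:

```python
import requests
import responses

@responses.activate
def test_fetches_widget() -> None:
    # Register a small fixture instead of hitting the real network.
    responses.add(
        responses.GET,
        "https://api.example.com/widgets/1",  # hypothetical endpoint
        json={"id": 1, "name": "widget"},
        status=200,
    )
    resp = requests.get("https://api.example.com/widgets/1")
    assert resp.json()["name"] == "widget"
```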
 To run integration tests:
 
 ```bash

@@ -233,17 +188,3 @@ Finally, you can build the documentation as outlined below:
 ```bash
 make docs_build
 ```
 
-## 🏭 Release Process
-
-As of now, LangChain has an ad hoc release process: releases are cut with high frequency by
-a developer and published to [PyPI](https://pypi.org/project/langchain/).
-
-LangChain follows the [semver](https://semver.org/) versioning standard. However, as pre-1.0 software,
-even patch releases may contain [non-backwards-compatible changes](https://semver.org/#spec-item-4).
-
-### 🌟 Recognition
-
-If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
-If you have a Twitter account you would like us to mention, please let us know in the PR or in another manner.
.github/ISSUE_TEMPLATE/bug-report.yml (vendored, 106 changes)
@@ -1,106 +0,0 @@
name: "\U0001F41B Bug Report"
description: Submit a bug report to help us improve LangChain
labels: ["02 Bug Report"]
body:
  - type: markdown
    attributes:
      value: >
        Thank you for taking the time to file a bug report. Before creating a new
        issue, please make sure to take a few moments to check the issue tracker
        for existing issues about the bug.

  - type: textarea
    id: system-info
    attributes:
      label: System Info
      description: Please share your system info with us.
      placeholder: LangChain version, platform, python version, ...
    validations:
      required: true

  - type: textarea
    id: who-can-help
    attributes:
      label: Who can help?
      description: |
        Your issue will be replied to more quickly if you can figure out the right person to tag with @
        If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.

        The core maintainers strive to read all issues, but tagging them will help them prioritize.

        Please tag fewer than 3 people.

        @hwchase17 - project lead

        Tracing / Callbacks
        - @agola11

        Async
        - @agola11

        DataLoader Abstractions
        - @eyurtsev

        LLM/Chat Wrappers
        - @hwchase17
        - @agola11

        Tools / Toolkits
        - ...

      placeholder: "@Username ..."

  - type: checkboxes
    id: information-scripts-examples
    attributes:
      label: Information
      description: "The problem arises when using:"
      options:
        - label: "The official example notebooks/scripts"
        - label: "My own modified scripts"

  - type: checkboxes
    id: related-components
    attributes:
      label: Related Components
      description: "Select the components related to the issue (if applicable):"
      options:
        - label: "LLMs/Chat Models"
        - label: "Embedding Models"
        - label: "Prompts / Prompt Templates / Prompt Selectors"
        - label: "Output Parsers"
        - label: "Document Loaders"
        - label: "Vector Stores / Retrievers"
        - label: "Memory"
        - label: "Agents / Agent Executors"
        - label: "Tools / Toolkits"
        - label: "Chains"
        - label: "Callbacks/Tracing"
        - label: "Async"

  - type: textarea
    id: reproduction
    validations:
      required: true
    attributes:
      label: Reproduction
      description: |
        Please provide a [code sample](https://stackoverflow.com/help/minimal-reproducible-example) that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
        If you have code snippets, error messages, stack traces please provide them here as well.
        Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
        Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.

      placeholder: |
        Steps to reproduce the behavior:

        1.
        2.
        3.

  - type: textarea
    id: expected-behavior
    validations:
      required: true
    attributes:
      label: Expected behavior
      description: "A clear and concise description of what you would expect to happen."
.github/ISSUE_TEMPLATE/config.yml (vendored, 6 changes)
@@ -1,6 +0,0 @@
blank_issues_enabled: true
version: 2.1
contact_links:
  - name: Discord
    url: https://discord.gg/6adMQxSpJS
    about: General community discussions
.github/ISSUE_TEMPLATE/documentation.yml (vendored, 19 changes)
@@ -1,19 +0,0 @@
name: Documentation
description: Report an issue related to the LangChain documentation.
title: "DOC: <Please write a comprehensive title after the 'DOC: ' prefix>"
labels: [03 - Documentation]

body:
  - type: textarea
    attributes:
      label: "Issue with current documentation:"
      description: >
        Please make sure to leave a reference to the document/code you're
        referring to.

  - type: textarea
    attributes:
      label: "Idea or request for content:"
      description: >
        Please describe as clearly as possible what topics you think are missing
        from the current documentation.
.github/ISSUE_TEMPLATE/feature-request.yml (vendored, 30 changes)
@@ -1,30 +0,0 @@
name: "\U0001F680 Feature request"
description: Submit a proposal/request for a new LangChain feature
labels: ["02 Feature Request"]
body:
  - type: textarea
    id: feature-request
    validations:
      required: true
    attributes:
      label: Feature request
      description: |
        A clear and concise description of the feature proposal. Please provide links to any relevant GitHub repos, papers, or other resources if relevant.

  - type: textarea
    id: motivation
    validations:
      required: true
    attributes:
      label: Motivation
      description: |
        Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too.

  - type: textarea
    id: contribution
    validations:
      required: true
    attributes:
      label: Your contribution
      description: |
        Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD [readme](https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md)
.github/ISSUE_TEMPLATE/other.yml (vendored, 18 changes)
@@ -1,18 +0,0 @@
name: Other Issue
description: Raise an issue that wouldn't be covered by the other templates.
title: "Issue: <Please write a comprehensive title after the 'Issue: ' prefix>"
labels: [04 - Other]

body:
  - type: textarea
    attributes:
      label: "Issue you'd like to raise."
      description: >
        Please describe the issue you'd like to raise as clearly as possible.
        Make sure to include any relevant links or references.

  - type: textarea
    attributes:
      label: "Suggestion:"
      description: >
        Please outline a suggestion to improve the issue here.
.github/PULL_REQUEST_TEMPLATE.md (vendored, 26 changes)
@@ -1,26 +0,0 @@
<!-- Thank you for contributing to LangChain!

Replace this comment with:
  - Description: a description of the change,
  - Issue: the issue # it fixes (if applicable),
  - Dependencies: any dependencies required for this change,
  - Tag maintainer: for a quicker response, tag the relevant maintainer (see below),
  - Twitter handle: we announce bigger features on Twitter. If your PR gets announced and you'd like a mention, we'll gladly shout you out!

If you're adding a new integration, please include:
  1. a test for the integration, preferably unit tests that do not rely on network access,
  2. an example notebook showing its use.

Maintainer responsibilities:
  - General / Misc / if you don't know who to tag: @dev2049
  - DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
  - Models / Prompts: @hwchase17, @dev2049
  - Memory: @hwchase17
  - Agents / Tools / Toolkits: @vowelparrot
  - Tracing / Callbacks: @agola11
  - Async: @agola11

If no one reviews your PR within a few days, feel free to @-mention the same people again.

See contribution guidelines for more information on how to write/run tests, lint, etc: https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
 -->
.github/actions/poetry_setup/action.yml (vendored, 76 changes)
@@ -1,76 +0,0 @@
# An action for setting up poetry install with caching.
# Using a custom action since the default action does not
# take poetry install groups into account.
# Action code from:
# https://github.com/actions/setup-python/issues/505#issuecomment-1273013236
name: poetry-install-with-caching
description: Poetry install with support for caching of dependency groups.

inputs:
  python-version:
    description: Python version, supporting MAJOR.MINOR only
    required: true

  poetry-version:
    description: Poetry version
    required: true

  install-command:
    description: Command run for installing dependencies
    required: false
    default: poetry install

  cache-key:
    description: Cache key to use for manual handling of caching
    required: true

  working-directory:
    description: Directory to run install-command in
    required: false
    default: ""

runs:
  using: composite
  steps:
    - uses: actions/setup-python@v4
      name: Setup python ${{ inputs.python-version }}
      with:
        python-version: ${{ inputs.python-version }}

    - uses: actions/cache@v3
      id: cache-pip
      name: Cache Pip ${{ inputs.python-version }}
      env:
        SEGMENT_DOWNLOAD_TIMEOUT_MIN: "15"
      with:
        path: |
          ~/.cache/pip
        key: pip-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}

    - run: pipx install poetry==${{ inputs.poetry-version }} --python python${{ inputs.python-version }}
      shell: bash

    - name: Check Poetry File
      shell: bash
      run: |
        poetry check

    - name: Check lock file
      shell: bash
      run: |
        poetry lock --check

    - uses: actions/cache@v3
      id: cache-poetry
      env:
        SEGMENT_DOWNLOAD_TIMEOUT_MIN: "15"
      with:
        path: |
          ~/.cache/pypoetry/virtualenvs
          ~/.cache/pypoetry/cache
          ~/.cache/pypoetry/artifacts
        key: poetry-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-poetry-${{ inputs.poetry-version }}-${{ inputs.cache-key }}-${{ hashFiles('poetry.lock') }}

    - run: ${{ inputs.install-command }}
      working-directory: ${{ inputs.working-directory }}
      shell: bash
.github/workflows/linkcheck.yml (vendored, new file, 36 changes)
@@ -0,0 +1,36 @@
name: linkcheck

on:
  push:
    branches: [master]
  pull_request:

env:
  POETRY_VERSION: "1.4.2"

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version:
          - "3.11"
    steps:
      - uses: actions/checkout@v3
      - name: Install poetry
        run: |
          pipx install poetry==$POETRY_VERSION
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
          cache: poetry
      - name: Install dependencies
        run: |
          poetry install --with docs
      - name: Build the docs
        run: |
          make docs_build
      - name: Analyzing the docs with linkcheck
        run: |
          make docs_linkcheck
.github/workflows/test.yml (vendored, 31 changes)
@@ -4,7 +4,6 @@ on:
   push:
     branches: [master]
   pull_request:
-  workflow_dispatch:
 
 env:
   POETRY_VERSION: "1.4.2"

@@ -19,31 +18,17 @@ jobs:
           - "3.9"
           - "3.10"
           - "3.11"
-        test_type:
-          - "core"
-          - "extended"
-    name: Python ${{ matrix.python-version }} ${{ matrix.test_type }}
     steps:
       - uses: actions/checkout@v3
       - name: Install poetry
         run: pipx install poetry==$POETRY_VERSION
       - name: Set up Python ${{ matrix.python-version }}
-        uses: "./.github/actions/poetry_setup"
+        uses: actions/setup-python@v4
         with:
           python-version: ${{ matrix.python-version }}
-          poetry-version: "1.4.2"
-          cache-key: ${{ matrix.test_type }}
-          install-command: |
-            if [ "${{ matrix.test_type }}" == "core" ]; then
-              echo "Running core tests, installing dependencies with poetry..."
-              poetry install
-            else
-              echo "Running extended tests, installing dependencies with poetry..."
-              poetry install -E extended_testing
-            fi
-      - name: Run ${{matrix.test_type}} tests
+          cache: "poetry"
+      - name: Install dependencies
+        run: poetry install
+      - name: Run unit tests
         run: |
-          if [ "${{ matrix.test_type }}" == "core" ]; then
-            make test
-          else
-            make extended_tests
-          fi
-        shell: bash
+          make test
.gitignore (vendored, 22 changes)
@@ -1,4 +1,3 @@
-.vs/
 .vscode/
 .idea/
 # Byte-compiled / optimized / DLL files

@@ -73,7 +72,6 @@ instance/
 
 # Sphinx documentation
 docs/_build/
-docs/docs/_build/
 
 # PyBuilder
 target/

@@ -146,22 +144,4 @@ wandb/
 /.ruff_cache/
 
 *.pkl
 *.bin
 
-# integration test artifacts
-data_map*
-\[('_type', 'fake'), ('stop', None)]
-
-# Replit files
-*replit*
-
-node_modules
-docs/.yarn/
-docs/node_modules/
-docs/.docusaurus/
-docs/.cache-loader/
-docs/_dist
-docs/api_reference/_build
-docs/docs_skeleton/build
-docs/docs_skeleton/node_modules
-docs/docs_skeleton/yarn.lock
-*.bin
.gitmodules (vendored, 4 changes)
@@ -1,4 +0,0 @@
[submodule "docs/_docs_skeleton"]
	path = docs/_docs_skeleton
	url = https://github.com/langchain-ai/langchain-shared-docs
	branch = main
@@ -1,26 +0,0 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the version of Python and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/api_reference/conf.py

# If using Sphinx, optionally build your docs in additional formats such as PDF
# formats:
#   - pdf

# Optionally declare the Python requirements required to build your docs
python:
  install:
    - requirements: docs/requirements.txt
    - method: pip
      path: .
@@ -1,7 +1,5 @@
 # This is a Dockerfile for running unit tests
 
-ARG POETRY_HOME=/opt/poetry
-
 # Use the Python base image
 FROM python:3.11.2-bullseye AS builder

@@ -9,7 +7,7 @@ FROM python:3.11.2-bullseye AS builder
 ARG POETRY_VERSION=1.4.2
 
 # Define the directory to install Poetry to (default is /opt/poetry)
-ARG POETRY_HOME
+ARG POETRY_HOME=/opt/poetry
 
 # Create a Python virtual environment for Poetry and install it
 RUN python3 -m venv ${POETRY_HOME} && \

@@ -25,8 +23,6 @@ WORKDIR /app
 # Use a multi-stage build to install dependencies
 FROM builder AS dependencies
 
-ARG POETRY_HOME
-
 # Copy only the dependency files for installation
 COPY pyproject.toml poetry.lock poetry.toml ./
Makefile (39 changes)
@@ -1,4 +1,4 @@
-.PHONY: all clean format lint test tests test_watch integration_tests docker_tests help extended_tests
+.PHONY: all clean format lint test tests test_watch integration_tests docker_tests help
 
 all: help

@@ -10,9 +10,6 @@ coverage:
 
 clean: docs_clean
 
-docs_compile:
-	poetry run nbdoc_build --srcdir $(srcdir)
-
 docs_build:
 	cd docs && poetry run make html

@@ -35,16 +32,11 @@ lint lint_diff:
 	poetry run black $(PYTHON_FILES) --check
 	poetry run ruff .
 
-TEST_FILE ?= tests/unit_tests/
-
 test:
-	poetry run pytest --disable-socket --allow-unix-socket $(TEST_FILE)
+	poetry run pytest tests/unit_tests
 
-tests:
-	poetry run pytest --disable-socket --allow-unix-socket $(TEST_FILE)
-
-extended_tests:
-	poetry run pytest --disable-socket --allow-unix-socket --only-extended tests/unit_tests
+tests:
+	poetry run pytest tests/unit_tests
 
 test_watch:
 	poetry run ptw --now . -- tests/unit_tests
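The `TEST_FILE` variable removed above is what let v0.0.220 run a single file through `make`. A usage sketch, with a hypothetical test path:

```bash
# Runs only the given file; defaults to tests/unit_tests/ when omitted
make test TEST_FILE=tests/unit_tests/test_formatting.py
```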
@@ -58,16 +50,13 @@ docker_tests:
 
 help:
 	@echo '----'
-	@echo 'coverage - run unit tests and generate coverage report'
-	@echo 'docs_build - build the documentation'
-	@echo 'docs_clean - clean the documentation build artifacts'
-	@echo 'docs_linkcheck - run linkchecker on the documentation'
-	@echo 'format - run code formatters'
-	@echo 'lint - run linters'
-	@echo 'test - run unit tests'
-	@echo 'tests - run unit tests'
-	@echo 'test TEST_FILE=<test_file> - run all tests in file'
-	@echo 'extended_tests - run only extended unit tests'
-	@echo 'test_watch - run unit tests in watch mode'
-	@echo 'integration_tests - run integration tests'
-	@echo 'docker_tests - run unit tests in docker'
+	@echo 'coverage - run unit tests and generate coverage report'
+	@echo 'docs_build - build the documentation'
+	@echo 'docs_clean - clean the documentation build artifacts'
+	@echo 'docs_linkcheck - run linkchecker on the documentation'
+	@echo 'format - run code formatters'
+	@echo 'lint - run linters'
+	@echo 'test - run unit tests'
+	@echo 'test_watch - run unit tests in watch mode'
+	@echo 'integration_tests - run integration tests'
+	@echo 'docker_tests - run unit tests in docker'
README.md (26 changes)
@@ -2,21 +2,7 @@
 
 ⚡ Building applications with LLMs through composability ⚡
 
-[Release Notes](https://github.com/hwchase17/langchain/releases)
-[lint](https://github.com/hwchase17/langchain/actions/workflows/lint.yml)
-[test](https://github.com/hwchase17/langchain/actions/workflows/test.yml)
-[Downloads](https://pepy.tech/project/langchain)
-[License: MIT](https://opensource.org/licenses/MIT)
-[Twitter](https://twitter.com/langchainai)
-[Discord](https://discord.gg/6adMQxSpJS)
-[Open in Dev Containers](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/hwchase17/langchain)
-[Open in GitHub Codespaces](https://codespaces.new/hwchase17/langchain)
-[Star History](https://star-history.com/#hwchase17/langchain)
-[Dependency Status](https://libraries.io/github/hwchase17/langchain)
-[Open Issues](https://github.com/hwchase17/langchain/issues)
-
 Looking for the JS/TS version? Check out [LangChain.js](https://github.com/hwchase17/langchainjs).
+
+[lint](https://github.com/hwchase17/langchain/actions/workflows/lint.yml) [test](https://github.com/hwchase17/langchain/actions/workflows/test.yml) [linkcheck](https://github.com/hwchase17/langchain/actions/workflows/linkcheck.yml) [Downloads](https://pepy.tech/project/langchain) [License: MIT](https://opensource.org/licenses/MIT) [Twitter](https://twitter.com/langchainai) [Discord](https://discord.gg/6adMQxSpJS)
 
 **Production Support:** As you move your LangChains into production, we'd love to offer more comprehensive support.
 Please fill out [this form](https://forms.gle/57d8AmXBYp8PP8tZA) and we'll set up a dedicated support Slack channel.

@@ -35,22 +21,22 @@ This library aims to assist in the development of those types of applications. C
 
 **❓ Question Answering over specific documents**
 
-- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
+- [Documentation](https://langchain.readthedocs.io/en/latest/use_cases/question_answering.html)
 - End-to-end Example: [Question Answering over Notion Database](https://github.com/hwchase17/notion-qa)
 
 **💬 Chatbots**
 
-- [Documentation](https://python.langchain.com/docs/use_cases/chatbots/)
+- [Documentation](https://langchain.readthedocs.io/en/latest/use_cases/chatbots.html)
 - End-to-end Example: [Chat-LangChain](https://github.com/hwchase17/chat-langchain)
 
 **🤖 Agents**
 
-- [Documentation](https://python.langchain.com/docs/modules/agents/)
+- [Documentation](https://langchain.readthedocs.io/en/latest/modules/agents.html)
 - End-to-end Example: [GPT+WolframAlpha](https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain)
 
 ## 📖 Documentation
 
-Please see [here](https://python.langchain.com) for full documentation on:
+Please see [here](https://langchain.readthedocs.io/en/latest/?) for full documentation on:
 
 - Getting started (installation, setting up the environment, simple examples)
 - How-To examples (demos, integrations, helper functions)

@@ -86,7 +72,7 @@ Memory refers to persisting state between calls of a chain/agent. LangChain prov
 
 [BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
 
-For more information on these concepts, please see our [full documentation](https://python.langchain.com).
+For more information on these concepts, please see our [full documentation](https://langchain.readthedocs.io/en/latest/).
 
 ## 💁 Contributing
@@ -1,41 +0,0 @@
# This is a Dockerfile for the Development Container

# Use the Python base image
ARG VARIANT="3.11-bullseye"
FROM mcr.microsoft.com/devcontainers/python:0-${VARIANT} AS langchain-dev-base

USER vscode

# Define the version of Poetry to install (default is 1.4.2)
# Define the directory of python virtual environment
ARG PYTHON_VIRTUALENV_HOME=/home/vscode/langchain-py-env \
    POETRY_VERSION=1.3.2

ENV POETRY_VIRTUALENVS_IN_PROJECT=false \
    POETRY_NO_INTERACTION=true

# Create a Python virtual environment for Poetry and install it
RUN python3 -m venv ${PYTHON_VIRTUALENV_HOME} && \
    $PYTHON_VIRTUALENV_HOME/bin/pip install --upgrade pip && \
    $PYTHON_VIRTUALENV_HOME/bin/pip install poetry==${POETRY_VERSION}

ENV PATH="$PYTHON_VIRTUALENV_HOME/bin:$PATH" \
    VIRTUAL_ENV=$PYTHON_VIRTUALENV_HOME

# Setup for bash
RUN poetry completions bash >> /home/vscode/.bash_completion && \
    echo "export PATH=$PYTHON_VIRTUALENV_HOME/bin:$PATH" >> ~/.bashrc

# Set the working directory for the app
WORKDIR /workspaces/langchain

# Use a multi-stage build to install dependencies
FROM langchain-dev-base AS langchain-dev-dependencies

ARG PYTHON_VIRTUALENV_HOME

# Copy only the dependency files for installation
COPY pyproject.toml poetry.toml ./

# Install the Poetry dependencies (this layer will be cached as long as the dependencies don't change)
RUN poetry install --no-interaction --no-ansi --with dev,test,docs
@@ -1,12 +0,0 @@
mkdir _dist
cp -r {docs_skeleton,snippets} _dist
mkdir -p _dist/docs_skeleton/static/api_reference
cd api_reference
poetry run make html
cp -r _build/* ../_dist/docs_skeleton/static/api_reference
cd ..
cp -r extras/* _dist/docs_skeleton/docs
cd _dist/docs_skeleton
poetry run nbdoc_build
yarn install
yarn start
(Five binary image files moved without modification; sizes unchanged: 559 KiB, 157 KiB, 235 KiB, 148 KiB, 3.5 MiB.)
@@ -13,5 +13,5 @@ pre {
 }
 
 #my-component-root *, #headlessui-portal-root * {
-  z-index: 10000;
+  z-index: 1000000000000;
 }
@@ -30,7 +30,10 @@ document.addEventListener('DOMContentLoaded', () => {
   const icon = React.createElement('p', {
     style: { color: '#ffffff', fontSize: '22px',width: '48px', height: '48px', margin: '0px', padding: '0px', display: 'flex', alignItems: 'center', justifyContent: 'center', textAlign: 'center' },
   }, [iconSpan1, iconSpan2]);
 
+
+
+
   const mendableFloatingButton = React.createElement(
     MendableFloatingButton,
     {

@@ -39,7 +42,6 @@ document.addEventListener('DOMContentLoaded', () => {
       anon_key: '82842b36-3ea6-49b2-9fb8-52cfc4bde6bf', // Mendable Search Public ANON key, ok to be public
-      cmdShortcutKey:'j',
       messageSettings: {
         openSourcesInNewTab: false,
         prettySources: true // Prettify the sources displayed now
       },
       icon: icon,
     }

@@ -50,7 +52,7 @@ document.addEventListener('DOMContentLoaded', () => {
 
   loadScript('https://unpkg.com/react@17/umd/react.production.min.js', () => {
     loadScript('https://unpkg.com/react-dom@17/umd/react-dom.production.min.js', () => {
-      loadScript('https://unpkg.com/@mendable/search@0.0.102/dist/umd/mendable.min.js', initializeMendable);
+      loadScript('https://unpkg.com/@mendable/search@0.0.83/dist/umd/mendable.min.js', initializeMendable);
     });
   });
 });
@@ -1,57 +0,0 @@
document.addEventListener('DOMContentLoaded', () => {
  // Load the external dependencies
  function loadScript(src, onLoadCallback) {
    const script = document.createElement('script');
    script.src = src;
    script.onload = onLoadCallback;
    document.head.appendChild(script);
  }

  function createRootElement() {
    const rootElement = document.createElement('div');
    rootElement.id = 'my-component-root';
    document.body.appendChild(rootElement);
    return rootElement;
  }

  function initializeMendable() {
    const rootElement = createRootElement();
    const { MendableFloatingButton } = Mendable;

    const iconSpan1 = React.createElement('span', {
    }, '🦜');

    const iconSpan2 = React.createElement('span', {
    }, '🔗');

    const icon = React.createElement('p', {
      style: { color: '#ffffff', fontSize: '22px',width: '48px', height: '48px', margin: '0px', padding: '0px', display: 'flex', alignItems: 'center', justifyContent: 'center', textAlign: 'center' },
    }, [iconSpan1, iconSpan2]);

    const mendableFloatingButton = React.createElement(
      MendableFloatingButton,
      {
        style: { darkMode: false, accentColor: '#010810' },
        floatingButtonStyle: { color: '#ffffff', backgroundColor: '#010810' },
        anon_key: '82842b36-3ea6-49b2-9fb8-52cfc4bde6bf', // Mendable Search Public ANON key, ok to be public
        cmdShortcutKey:'j',
        messageSettings: {
          openSourcesInNewTab: false,
          prettySources: true // Prettify the sources displayed now
        },
        icon: icon,
      }
    );

    ReactDOM.render(mendableFloatingButton, rootElement);
  }

  loadScript('https://unpkg.com/react@17/umd/react.production.min.js', () => {
    loadScript('https://unpkg.com/react-dom@17/umd/react-dom.production.min.js', () => {
      loadScript('https://unpkg.com/@mendable/search@0.0.102/dist/umd/mendable.min.js', initializeMendable);
    });
  });
});
@@ -1,29 +0,0 @@
API Reference
==========================

| Full documentation on all methods, classes, and APIs in the LangChain Python package.

.. toctree::
   :maxdepth: 1
   :caption: Abstractions

   ./modules/base_classes.rst


.. toctree::
   :maxdepth: 1
   :caption: Core

   ./model_io.rst
   ./data_connection.rst
   ./modules/chains.rst
   ./agents.rst
   ./modules/memory.rst
   ./modules/callbacks.rst

.. toctree::
   :maxdepth: 1
   :caption: Additional

   ./modules/utilities.rst
   ./modules/experimental.rst
@@ -1,12 +0,0 @@
Model I/O
==============

LangChain provides interfaces and integrations for working with language models.

.. toctree::
   :maxdepth: 1
   :glob:

   ./prompts.rst
   ./models.rst
   ./modules/output_parsers.rst
@@ -1,5 +0,0 @@
Base classes
========================

.. automodule:: langchain.schema
   :inherited-members:
@@ -1,7 +0,0 @@
Callbacks
=======================

.. automodule:: langchain.callbacks
   :members:
   :undoc-members:
@@ -1,14 +0,0 @@
Retrievers
===============================

.. automodule:: langchain.retrievers
   :members:
   :undoc-members:

Document compressors
-------------------------------

.. automodule:: langchain.retrievers.document_compressors
   :members:
   :undoc-members:
@@ -17,7 +17,7 @@
 
 import toml
 
-with open("../../pyproject.toml") as f:
+with open("../pyproject.toml") as f:
     data = toml.load(f)
 
 # -- Project information -----------------------------------------------------

@@ -49,31 +49,19 @@ extensions = [
     "sphinx_copybutton",
     "sphinx_panels",
     "IPython.sphinxext.ipython_console_highlighting",
-    "sphinx_tabs.tabs",
 ]
-source_suffix = [".rst"]
+source_suffix = [".ipynb", ".html", ".md", ".rst"]
 
 autodoc_pydantic_model_show_json = False
 autodoc_pydantic_field_list_validators = False
 autodoc_pydantic_config_members = False
 autodoc_pydantic_model_show_config_summary = False
 autodoc_pydantic_model_show_validator_members = False
 autodoc_pydantic_model_show_validator_summary = False
 autodoc_pydantic_model_show_field_summary = False
 autodoc_pydantic_model_members = False
 autodoc_pydantic_model_undoc_members = False
 autodoc_pydantic_model_hide_paramlist = False
 autodoc_pydantic_model_signature_prefix = "class"
 autodoc_pydantic_field_signature_prefix = "attribute"
 autodoc_pydantic_model_summary_list_order = "bysource"
 autodoc_member_order = "bysource"
 autodoc_default_options = {
     "members": True,
     "show-inheritance": True,
     "undoc_members": True,
     "inherited_members": "BaseModel",
 }
-autodoc_typehints = "description"
+# autodoc_typehints = "signature"
+# autodoc_typehints = "description"
 
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ["_templates"]

@@ -89,13 +77,12 @@ exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
 # The theme to use for HTML and HTML Help pages. See the documentation for
 # a list of builtin themes.
 #
-html_theme = "sphinx_rtd_theme"
+html_theme = "sphinx_book_theme"
 
 html_theme_options = {
     "path_to_docs": "docs",
     "repository_url": "https://github.com/hwchase17/langchain",
     "use_repository_button": True,
     # "style_nav_header_background": "white"
 }
 
 html_context = {

@@ -103,7 +90,7 @@ html_context = {
     "github_user": "hwchase17",  # Username
     "github_repo": "langchain",  # Repo name
     "github_version": "master",  # Version
-    "conf_py_path": "/docs/api_reference",  # Path in the checkout to the docs root
+    "conf_py_path": "/docs/",  # Path in the checkout to the docs root
 }
 
 # Add any paths that contain custom static files (such as style sheets) here,
@@ -1,10 +1,14 @@
-# Template repos
+# Deployments
 
-So, you've created a really cool chain - now what? How do you deploy it and make it easily shareable with the world?
+So you've made a really cool chain - now what? How do you deploy it and make it easily sharable with the world?
 
-This section covers several options for that. Note that these options are meant for quick deployment of prototypes and demos, not for production systems. If you need help with the deployment of a production system, please contact us directly.
+This section covers several options for that.
+Note that these are meant as quick deployment options for prototypes and demos, and not for production systems.
+If you are looking for help with deployment of a production system, please contact us directly.
 
-What follows is a list of template GitHub repositories designed to be easily forked and modified to use your chain. This list is far from exhaustive, and we are EXTREMELY open to contributions here.
+What follows is a list of template GitHub repositories that are intended to be
+very easy to fork and modify to use your chain.
+This is far from an exhaustive list of options, and we are EXTREMELY open to contributions here.
 
 ## [Streamlit](https://github.com/hwchase17/langchain-streamlit-template)

@@ -19,12 +23,6 @@ It implements a chatbot interface, with a "Bring-Your-Own-Token" approach (nice
 It also contains instructions for how to deploy this app on the Hugging Face platform.
 This is heavily influenced by James Weaver's [excellent examples](https://huggingface.co/JavaFXpert).
 
-## [Chainlit](https://github.com/Chainlit/cookbook)
-
-This repo is a cookbook explaining how to visualize and deploy LangChain agents with Chainlit.
-You create ChatGPT-like UIs with Chainlit. Some of the key features include intermediary steps visualisation, element management & display (images, text, carousel, etc.) as well as cloud deployment.
-Chainlit [doc](https://docs.chainlit.io/langchain) on the integration with LangChain
-
 ## [Beam](https://github.com/slai-labs/get-beam/tree/main/examples/langchain-question-answering)
 
 This repo serves as a template for how to deploy LangChain with [Beam](https://beam.cloud).

@@ -35,14 +33,6 @@ It implements a Question Answering app and contains instructions for deploying t
 
 A minimal example on how to run LangChain on Vercel using Flask.
 
-## [FastAPI + Vercel](https://github.com/msoedov/langcorn)
-
-A minimal example on how to run LangChain on Vercel using FastAPI and LangCorn/Uvicorn.
-
-## [Kinsta](https://github.com/kinsta/hello-world-langchain)
-
-A minimal example on how to deploy LangChain to [Kinsta](https://kinsta.com) using Flask.
-
 ## [Fly.io](https://github.com/fly-apps/hello-fly-langchain)
 
 A minimal example of how to deploy LangChain to [Fly.io](https://fly.io/) using Flask.

@@ -57,21 +47,17 @@ A minimal example on how to deploy LangChain to Google Cloud Run.
 
 ## [SteamShip](https://github.com/steamship-core/steamship-langchain/)
 
-This repository contains LangChain adapters for Steamship, enabling LangChain developers to rapidly deploy their apps on Steamship. This includes: production-ready endpoints, horizontal scaling across dependencies, persistent storage of app state, multi-tenancy support, etc.
+This repository contains LangChain adapters for Steamship, enabling LangChain developers to rapidly deploy their apps on Steamship.
+This includes: production ready endpoints, horizontal scaling across dependencies, persistent storage of app state, multi-tenancy support, etc.
 
 ## [Langchain-serve](https://github.com/jina-ai/langchain-serve)
 
-This repository allows users to serve local chains and agents as RESTful, gRPC, or WebSocket APIs, thanks to [Jina](https://docs.jina.ai/). Deploy your chains & agents with ease and enjoy independent scaling, serverless and autoscaling APIs, as well as a Streamlit playground on Jina AI Cloud.
+This repository allows users to serve local chains and agents as RESTful, gRPC, or Websocket APIs thanks to [Jina](https://docs.jina.ai/). Deploy your chains & agents with ease and enjoy independent scaling, serverless and autoscaling APIs, as well as a Streamlit playground on Jina AI Cloud.
 
 ## [BentoML](https://github.com/ssheng/BentoChain)
 
 This repository provides an example of how to deploy a LangChain application with [BentoML](https://github.com/bentoml/BentoML). BentoML is a framework that enables the containerization of machine learning applications as standard OCI images. BentoML also allows for the automatic generation of OpenAPI and gRPC endpoints. With BentoML, you can integrate models from all popular ML frameworks and deploy them as microservices running on the most optimal hardware and scaling independently.
 
-## [OpenLLM](https://github.com/bentoml/OpenLLM)
-
-OpenLLM is a platform for operating large language models (LLMs) in production. With OpenLLM, you can run inference with any open-source LLM, deploy to the cloud or on-premises, and build powerful AI apps. It supports a wide range of open-source LLMs, offers flexible APIs, and first-class support for LangChain and BentoML.
-See OpenLLM's [integration doc](https://github.com/bentoml/OpenLLM#%EF%B8%8F-integrations) for usage with LangChain.
-
 ## [Databutton](https://databutton.com/home?new-data-app=true)
 
-These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examples include a Chatbot interface with conversational memory, a Personal search engine, and a starter template for LangChain apps. Deploying and sharing is just one click away.
+These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examples include Chatbot interface with conversational memory, Personal search engine, and a starter template for LangChain apps. Deploying and sharing is one click.
docs/docs_skeleton/.gitignore (vendored, 7 changes)
@@ -1,7 +0,0 @@
.yarn/

node_modules/

.docusaurus
.cache-loader
docs/api
@@ -1,49 +0,0 @@
# Website

This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.

### Installation

```
$ yarn
```

### Local Development

```
$ yarn start
```

This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.

### Build

```
$ yarn build
```

This command generates static content into the `build` directory and can be served using any static contents hosting service.

### Deployment

Using SSH:

```
$ USE_SSH=true yarn deploy
```

Not using SSH:

```
$ GIT_USER=<Your GitHub username> yarn deploy
```

If you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.

### Continuous Integration

Some common defaults for linting/formatting have been set for you. If you integrate your project with an open source Continuous Integration system (e.g. Travis CI, CircleCI), you may check for issues using the following command.

```
$ yarn ci
```
@@ -1,12 +0,0 @@
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 *
 * @format
 */

module.exports = {
  presets: [require.resolve("@docusaurus/core/lib/babel/preset")],
};
@@ -1,76 +0,0 @@
/* eslint-disable prefer-template */
/* eslint-disable no-param-reassign */
// eslint-disable-next-line import/no-extraneous-dependencies
const babel = require("@babel/core");
const path = require("path");
const fs = require("fs");

/**
 *
 * @param {string|Buffer} content Content of the resource file
 * @param {object} [map] SourceMap data consumable by https://github.com/mozilla/source-map
 * @param {any} [meta] Meta data, could be anything
 */
async function webpackLoader(content, map, meta) {
  const cb = this.async();

  if (!this.resourcePath.endsWith(".ts")) {
    cb(null, JSON.stringify({ content, imports: [] }), map, meta);
    return;
  }

  try {
    const result = await babel.parseAsync(content, {
      sourceType: "module",
      filename: this.resourcePath,
    });

    const imports = [];

    result.program.body.forEach((node) => {
      if (node.type === "ImportDeclaration") {
        const source = node.source.value;

        if (!source.startsWith("langchain")) {
          return;
        }

        node.specifiers.forEach((specifier) => {
          if (specifier.type === "ImportSpecifier") {
            const local = specifier.local.name;
            const imported = specifier.imported.name;
            imports.push({ local, imported, source });
          } else {
            throw new Error("Unsupported import type");
          }
        });
      }
    });

    imports.forEach((imp) => {
      const { imported, source } = imp;
      const moduleName = source.split("/").slice(1).join("_");
      const docsPath = path.resolve(__dirname, "docs", "api", moduleName);
      const available = fs.readdirSync(docsPath, { withFileTypes: true });
      const found = available.find(
        (dirent) =>
          dirent.isDirectory() &&
          fs.existsSync(path.resolve(docsPath, dirent.name, imported + ".md"))
      );
      if (found) {
        imp.docs =
          "/" + path.join("docs", "api", moduleName, found.name, imported);
      } else {
        throw new Error(
          `Could not find docs for ${source}.${imported} in docs/api/`
        );
      }
    });

    cb(null, JSON.stringify({ content, imports }), map, meta);
  } catch (err) {
    cb(err);
  }
}

module.exports = webpackLoader;
|
BIN docs/docs_skeleton/docs/_static/apple-touch-icon.png

docs/docs_skeleton/docs/_static/css/custom.css
@@ -1,21 +0,0 @@
pre {
  white-space: break-spaces;
}

@media (min-width: 1200px) {
  .container,
  .container-lg,
  .container-md,
  .container-sm,
  .container-xl {
    max-width: 2560px !important;
  }
}

#my-component-root *, #headlessui-portal-root * {
  z-index: 10000;
}

.content-container p {
  margin: revert;
}
BIN docs/docs_skeleton/docs/_static/favicon-16x16.png
BIN docs/docs_skeleton/docs/_static/favicon-32x32.png
BIN docs/docs_skeleton/docs/_static/favicon.ico
BIN docs/docs_skeleton/docs/_static/lc_modules.jpg
BIN docs/docs_skeleton/docs/_static/parrot-icon.png
@@ -1,8 +0,0 @@
---
sidebar_position: 0
---
# Integrations

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,5 +0,0 @@
# Installation

import Installation from "@snippets/get_started/installation.mdx"

<Installation/>
@@ -1,65 +0,0 @@
---
sidebar_position: 0
---

# Introduction

**LangChain** is a framework for developing applications powered by language models. It enables applications that are:
- **Data-aware**: connect a language model to other sources of data
- **Agentic**: allow a language model to interact with its environment

The main value props of LangChain are:
1. **Components**: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy to use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: a structured assembly of components for accomplishing specific higher-level tasks

Off-the-shelf chains make it easy to get started. For more complex applications and nuanced use cases, components make it easy to customize existing chains or build new ones.

## Get started

[Here's](/docs/get_started/installation.html) how to install LangChain, set up your environment, and start building.

We recommend following our [Quickstart](/docs/get_started/quickstart.html) guide to familiarize yourself with the framework by building your first LangChain application.

_**Note**: These docs are for the LangChain [Python package](https://github.com/hwchase17/langchain). For documentation on [LangChain.js](https://github.com/hwchase17/langchainjs), the JS/TS version, [head here](https://js.langchain.com/docs)._

## Modules

LangChain provides standard, extendable interfaces and external integrations for the following modules, listed from least to most complex:

#### [Model I/O](/docs/modules/model_io/)
Interface with language models
#### [Data connection](/docs/modules/data_connection/)
Interface with application-specific data
#### [Chains](/docs/modules/chains/)
Construct sequences of calls
#### [Agents](/docs/modules/agents/)
Let chains choose which tools to use given high-level directives
#### [Memory](/docs/modules/memory/)
Persist application state between runs of a chain
#### [Callbacks](/docs/modules/callbacks/)
Log and stream intermediate steps of any chain

## Examples, ecosystem, and resources
### [Use cases](/docs/use_cases/)
Walkthroughs and best practices for common end-to-end use cases, like:
- [Chatbots](/docs/use_cases/chatbots/)
- [Answering questions using sources](/docs/use_cases/question_answering/)
- [Analyzing structured data](/docs/use_cases/tabular.html)
- and much more...

### [Guides](/docs/guides/)
Learn best practices for developing with LangChain.

### [Ecosystem](/docs/ecosystem/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/ecosystem/integrations/) and [dependent repos](/docs/ecosystem/dependents.html).

### [Additional resources](/docs/additional_resources/)
Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/additional_resources/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).

<h3><span style={{color:"#2e8555"}}> Support </span></h3>

Join us on [GitHub](https://github.com/hwchase17/langchain) or [Discord](https://discord.gg/6adMQxSpJS) to ask questions, share feedback, meet other developers building with LangChain, and dream about the future of LLMs.

## API reference

Head to the [reference](https://api.python.langchain.com) section for full documentation of all classes and methods in the LangChain Python package.
@@ -1,158 +0,0 @@
# Quickstart

## Installation

To install LangChain run:

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Install from "@snippets/get_started/quickstart/installation.mdx"

<Install/>

For more details, see our [Installation guide](/docs/get_started/installation.html).

## Environment setup

Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.

import OpenAISetup from "@snippets/get_started/quickstart/openai_setup.mdx"

<OpenAISetup/>

## Building an application

Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications. Modules can be used as standalones in simple applications, and they can be combined for more complex use cases.

## LLMs
#### Get predictions from a language model

The basic building block of LangChain is the LLM, which takes in text and generates more text.

As an example, suppose we're building an application that generates a company name based on a company description. To do this, we need to initialize an OpenAI model wrapper. In this case, since we want the outputs to be more random, we'll initialize our model with a high temperature.

import LLM from "@snippets/get_started/quickstart/llm.mdx"

<LLM/>
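As a rough sketch of this step (assuming the `OPENAI_API_KEY` environment variable is set; the sock-company prompt is just an illustration):

```python
from langchain.llms import OpenAI

# A higher temperature makes the sampled output more random/creative.
llm = OpenAI(temperature=0.9)

print(llm.predict("What would be a good company name for a company that makes colorful socks?"))
```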
## Chat models

Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than exposing a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.

You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, and `ChatMessage` -- `ChatMessage` takes in an arbitrary role parameter. Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`.

import ChatModel from "@snippets/get_started/quickstart/chat_model.mdx"

<ChatModel/>
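A minimal sketch of a chat-model call (the translation prompt is illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(temperature=0)
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming."),
]
# The response is a single AIMessage.
chat(messages)
```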
## Prompt templates

Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.

In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.

import PromptTemplateLLM from "@snippets/get_started/quickstart/prompt_templates_llms.mdx"
import PromptTemplateChatModel from "@snippets/get_started/quickstart/prompt_templates_chat_models.mdx"

<Tabs>
<TabItem value="llms" label="LLMs" default>

With PromptTemplates this is easy! In this case our template would be very simple:

<PromptTemplateLLM/>
</TabItem>
<TabItem value="chat_models" label="Chat models">

Similar to LLMs, you can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplate`s. You can use `ChatPromptTemplate`'s `format_messages` method to generate the formatted messages.

Because this is generating a list of messages, it is slightly more complex than the normal prompt template, which generates only a string. Please see the detailed guides on prompts to understand more options available to you here.

<PromptTemplateChatModel/>
</TabItem>
</Tabs>
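For the LLM case, a minimal sketch of such a template (the company-name wording is an illustrative assumption):

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
prompt.format(product="colorful socks")
# -> "What is a good name for a company that makes colorful socks?"
```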
## Chains

Now that we've got a model and a prompt template, we'll want to combine the two. Chains give us a way to link (or chain) together multiple primitives, like models, prompts, and other chains.

import ChainLLM from "@snippets/get_started/quickstart/chains_llms.mdx"
import ChainChatModel from "@snippets/get_started/quickstart/chains_chat_models.mdx"

<Tabs>
<TabItem value="llms" label="LLMs" default>

The simplest and most common type of chain is an LLMChain, which passes an input first to a PromptTemplate and then to an LLM. We can construct an LLM chain from our existing model and prompt template.

<ChainLLM/>

There we go, our first chain! Understanding how this simple chain works will set you up well for working with more complex chains.

</TabItem>
<TabItem value="chat_models" label="Chat models">

The `LLMChain` can be used with chat models as well:

<ChainChatModel/>
</TabItem>
</Tabs>
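A minimal sketch of the LLM-tab version, wiring the model and template from the previous sections into an `LLMChain`:

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)

# The chain formats the prompt with the input, then passes it to the LLM.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
```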
## Agents

import AgentLLM from "@snippets/get_started/quickstart/agents_llms.mdx"
import AgentChatModel from "@snippets/get_started/quickstart/agents_chat_models.mdx"

Our first chain ran a pre-determined sequence of steps. To handle complex workflows, we need to be able to dynamically choose actions based on inputs.

Agents do just this: they use a language model to determine which actions to take and in what order. Agents are given access to tools, and they repeatedly choose a tool, run the tool, and observe the output until they come up with a final answer.

To load an agent, you need to choose:
- LLM/Chat model: The language model powering the agent.
- Tool(s): A function that performs a specific duty. This can be things like: Google Search, database lookup, Python REPL, or other chains. For a list of predefined tools and their specifications, see the [Tools documentation](/docs/modules/agents/tools/).
- Agent name: A string that references a supported agent class. An agent class is largely parameterized by the prompt the language model uses to determine which action to take. Because this notebook focuses on the simplest, highest-level API, this only covers using the standard supported agents. If you want to implement a custom agent, see [here](/docs/modules/agents/how_to/custom_agent.html). For a list of supported agents and their specifications, see [here](/docs/modules/agents/agent_types/).

For this example, we'll be using SerpAPI to query a search engine.

You'll need to install the SerpAPI Python package:

```bash
pip install google-search-results
```

And set the `SERPAPI_API_KEY` environment variable.

<Tabs>
<TabItem value="llms" label="LLMs" default>
<AgentLLM/>
</TabItem>
<TabItem value="chat_models" label="Chat models">

Agents can also be used with chat models; you can initialize one using `AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION` as the agent type.

<AgentChatModel/>
</TabItem>
</Tabs>
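As an illustrative sketch of the LLM tab (assuming `OPENAI_API_KEY` and `SERPAPI_API_KEY` are set; the question is a made-up example):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# serpapi wraps the search engine; llm-math is a chain-backed calculator tool.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run(
    "What was the high temperature in SF yesterday in Fahrenheit? "
    "What is that number raised to the .023 power?"
)
```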
## Memory

The chains and agents we've looked at so far have been stateless, but for many applications it's necessary to reference past interactions. This is clearly the case with a chatbot, for example, where you want it to understand new messages in the context of past messages.

The Memory module gives you a way to maintain application state. The base Memory interface is simple: it lets you update state given the latest run inputs and outputs, and it lets you modify (or contextualize) the next input using the stored state.

There are a number of built-in memory systems. The simplest of these is buffer memory, which just prepends the last few inputs/outputs to the current input - we will use this in the example below.

import MemoryLLM from "@snippets/get_started/quickstart/memory_llms.mdx"
import MemoryChatModel from "@snippets/get_started/quickstart/memory_chat_models.mdx"

<Tabs>
<TabItem value="llms" label="LLMs" default>

<MemoryLLM/>
</TabItem>
<TabItem value="chat_models" label="Chat models">

You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.

<MemoryChatModel/>

</TabItem>
</Tabs>
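A minimal buffer-memory sketch using `ConversationChain`, which wires a buffer memory up for you by default:

```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI

# ConversationChain uses ConversationBufferMemory by default,
# prepending prior turns to each new prompt.
conversation = ConversationChain(llm=OpenAI(temperature=0), verbose=True)

conversation.run("Hi there!")
conversation.run("I'm doing well! Just having a conversation with an AI.")
```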
@@ -1,13 +0,0 @@
# Conversational

This walkthrough demonstrates how to use an agent optimized for conversation. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.

import Example from "@snippets/modules/agents/agent_types/conversational_agent.mdx"

<Example/>

import ChatExample from "@snippets/modules/agents/agent_types/chat_conversation_agent.mdx"

## Using a chat model

<ChatExample/>
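For orientation, a hedged sketch of what initializing a conversational agent typically looks like (the tool choice and question are illustrative):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi"], llm=llm)
# The memory key must match the chat-history placeholder in the conversational prompt.
memory = ConversationBufferMemory(memory_key="chat_history")

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
)
agent.run("Hi, I'm Bob. Can you look up today's weather in Paris for me?")
```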
@@ -1,57 +0,0 @@
---
sidebar_position: 0
---

# Agent types

## Action agents

Agents use an LLM to determine which actions to take and in what order.
An action can either be using a tool and observing its output, or returning a response to the user.
Here are the agents available in LangChain.

### [Zero-shot ReAct](/docs/modules/agents/agent_types/react.html)

This agent uses the [ReAct](https://arxiv.org/pdf/2205.00445.pdf) framework to determine which tool to use
based solely on the tool's description. Any number of tools can be provided.
This agent requires that a description is provided for each tool.

**Note**: This is the most general-purpose action agent.

### [Structured input ReAct](/docs/modules/agents/agent_types/structured_chat.html)

The structured tool chat agent is capable of using multi-input tools.
Older agents are configured to specify an action input as a single string, but this agent can use a tool's argument
schema to create a structured action input. This is useful for more complex tool usage, like precisely
navigating around a browser.

### [OpenAI Functions](/docs/modules/agents/agent_types/openai_functions_agent.html)

Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been explicitly fine-tuned to detect when a
function should be called and respond with the inputs that should be passed to the function.
The OpenAI Functions Agent is designed to work with these models.

### [Conversational](/docs/modules/agents/agent_types/chat_conversation_agent.html)

This agent is designed to be used in conversational settings.
The prompt is designed to make the agent helpful and conversational.
It uses the ReAct framework to decide which tool to use, and uses memory to remember previous conversation interactions.

### [Self ask with search](/docs/modules/agents/agent_types/self_ask_with_search.html)

This agent utilizes a single tool that should be named `Intermediate Answer`.
This tool should be able to look up factual answers to questions. This agent
is equivalent to the original [self ask with search paper](https://ofir.io/self-ask.pdf),
where a Google search API was provided as the tool.

### [ReAct document store](/docs/modules/agents/agent_types/react_docstore.html)

This agent uses the ReAct framework to interact with a docstore. Two tools must
be provided: a `Search` tool and a `Lookup` tool (they must be named exactly so).
The `Search` tool should search for a document, while the `Lookup` tool should look up
a term in the most recently found document.
This agent is equivalent to the
original [ReAct paper](https://arxiv.org/pdf/2210.03629.pdf), specifically the Wikipedia example.

## [Plan-and-execute agents](/docs/modules/agents/agent_types/plan_and_execute.html)
Plan-and-execute agents accomplish an objective by first planning what to do, then executing the sub-tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
@@ -1,11 +0,0 @@
# OpenAI functions

Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to detect when a function should be called and respond with the inputs that should be passed to the function.
In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions.
The goal of the OpenAI Function APIs is to more reliably return valid and useful function calls than a generic text completion or chat API.

The OpenAI Functions Agent is designed to work with these models.

import Example from "@snippets/modules/agents/agent_types/openai_functions_agent.mdx";

<Example/>
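A hedged sketch of initializing this agent (the model name and question are illustrative; a function-calling-capable model is required):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

# Must be one of the function-calling models, e.g. gpt-3.5-turbo-0613.
llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)
agent.run("Who is the current CEO of OpenAI, and what is 2 raised to the 10th power?")
```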
@@ -1,11 +0,0 @@
# Plan and execute

Plan-and-execute agents accomplish an objective by first planning what to do, then executing the sub-tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).

The planning is almost always done by an LLM.

The execution is usually done by a separate agent (equipped with tools).

import Example from "@snippets/modules/agents/agent_types/plan_and_execute.mdx"

<Example/>
@@ -1,15 +0,0 @@
# ReAct

This walkthrough showcases using an agent to implement the [ReAct](https://react-lm.github.io/) logic.

import Example from "@snippets/modules/agents/agent_types/react.mdx"

<Example/>

## Using chat models

You can also create ReAct agents that use chat models instead of LLMs as the agent driver.

import ChatExample from "@snippets/modules/agents/agent_types/react_chat.mdx"

<ChatExample/>
@@ -1,10 +0,0 @@
# Structured tool chat

The structured tool chat agent is capable of using multi-input tools.

Older agents are configured to specify an action input as a single string, but this agent can use the provided tools' `args_schema` to populate the action input.

import Example from "@snippets/modules/agents/agent_types/structured_chat.mdx"

<Example/>
@@ -1,2 +0,0 @@
label: 'How-to'
position: 1
@@ -1,14 +0,0 @@
# Custom LLM Agent

This notebook goes through how to create your own custom LLM agent.

An LLM agent consists of four parts:

- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
- LLM: This is the language model that powers the agent
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
- OutputParser: This determines how to parse the LLM output into an AgentAction or AgentFinish object

import Example from "@snippets/modules/agents/how_to/custom_llm_agent.mdx"

<Example/>
@@ -1,14 +0,0 @@
# Custom LLM Agent (with a ChatModel)

This notebook goes through how to create your own custom agent based on a chat model.

An LLM chat agent consists of four parts:

- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
- ChatModel: This is the language model that powers the agent
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
- OutputParser: This determines how to parse the LLM output into an AgentAction or AgentFinish object

import Example from "@snippets/modules/agents/how_to/custom_llm_chat_agent.mdx"

<Example/>
@@ -1,16 +0,0 @@
# Replicating MRKL

This walkthrough demonstrates how to replicate the [MRKL](https://arxiv.org/pdf/2205.00445.pdf) system using agents.

This uses the example Chinook database.
To set it up, follow the instructions at https://database.guide/2-sample-databases-sqlite/, placing the `.db` file in a notebooks folder at the root of this repository.

import Example from "@snippets/modules/agents/how_to/mrkl.mdx"

<Example/>

## With a chat model

import ChatExample from "@snippets/modules/agents/how_to/mrkl_chat.mdx"

<ChatExample/>
@@ -1,51 +0,0 @@
---
sidebar_position: 4
---
# Agents

Some applications require a flexible chain of calls to LLMs and other tools based on user input. The **Agent** interface provides the flexibility for such applications. An agent has access to a suite of tools, and determines which ones to use depending on the user input. Agents can use multiple tools, and use the output of one tool as the input to the next.

There are two main types of agents:

- **Action agents**: at each timestep, decide on the next action using the outputs of all previous actions
- **Plan-and-execute agents**: decide on the full sequence of actions up front, then execute them all without updating the plan

Action agents are suitable for small tasks, while plan-and-execute agents are better for complex or long-running tasks that require maintaining long-term objectives and focus. Often the best approach is to combine the dynamism of an action agent with the planning abilities of a plan-and-execute agent by letting the plan-and-execute agent use action agents to execute plans.

For a full list of agent types see [agent types](/docs/modules/agents/agent_types/). Additional abstractions involved in agents are:
- [**Tools**](/docs/modules/agents/tools/): the actions an agent can take. Which tools you give an agent highly depends on what you want the agent to do
- [**Toolkits**](/docs/modules/agents/toolkits/): wrappers around collections of tools that can be used together for a specific use case. For example, in order for an agent to
interact with a SQL database it will likely need one tool to execute queries and another to inspect tables

## Action agents

At a high level, an action agent:
1. Receives user input
2. Decides which tool, if any, to use and the tool input
3. Calls the tool and records the output (also known as an "observation")
4. Decides the next step using the history of tools, tool inputs, and observations
5. Repeats 3-4 until it determines it can respond directly to the user

Action agents are wrapped in **agent executors**, which are responsible for calling the agent, getting back an action and action input, calling the tool that the action references with the generated input, getting the output of the tool, and then passing all that information back into the agent to get the next action it should take.

Although an agent can be constructed in many ways, it typically involves these components:

- **Prompt template**: Responsible for taking the user input and previous steps and constructing a prompt
to send to the language model
- **Language model**: Takes the prompt with user input and action history and decides what to do next
- **Output parser**: Takes the output of the language model and parses it into the next action or a final answer

## Plan-and-execute agents

At a high level, a plan-and-execute agent:
1. Receives user input
2. Plans the full sequence of steps to take
3. Executes the steps in order, passing the outputs of past steps as inputs to future steps

The most typical implementation is to have the planner be a language model, and the executor be an action agent. Read more [here](/docs/modules/agents/agent_types/plan_and_execute.html).

## Get started

import GetStarted from "@snippets/modules/agents/get_started.mdx"

<GetStarted/>
@@ -1,10 +0,0 @@
---
sidebar_position: 3
---
# Toolkits

Toolkits are collections of tools that are designed to be used together for specific tasks and have convenience loading methods.

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,2 +0,0 @@
label: 'How-to'
position: 0
@@ -1,17 +0,0 @@
---
sidebar_position: 2
---
# Tools

Tools are interfaces that an agent can use to interact with the world.

## Get started

Tools are functions that agents can use to interact with the world.
These tools can be generic utilities (e.g. search), other chains, or even other agents.

Currently, tools can be loaded with the following snippet:

import GetStarted from "@snippets/modules/agents/tools/get_started.mdx"

<GetStarted/>
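For reference, loading tools by name looks roughly like this (the tool names are illustrative of commonly available ones):

```python
from langchain.agents import load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# Some tools (like llm-math) are backed by a chain and therefore need an LLM.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
```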
@@ -1 +0,0 @@
label: 'Integrations'
@@ -1,2 +0,0 @@
label: 'How-to'
position: 0
@@ -1,10 +0,0 @@
---
sidebar_position: 5
---
# Callbacks

LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.

import GetStarted from "@snippets/modules/callbacks/get_started.mdx"

<GetStarted/>
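A small sketch using the built-in stdout handler, passed in at chain construction (the prompt is illustrative):

```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = StdOutCallbackHandler()
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")

# Callbacks passed here fire on every event in this chain's lifecycle.
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.run(number=2)
```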
@@ -1 +0,0 @@
label: 'Integrations'
@@ -1,7 +0,0 @@
# Analyze Document

The AnalyzeDocumentChain can be used as an end-to-end chain. This chain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.

import Example from "@snippets/modules/chains/additional/analyze_document.mdx"

<Example/>
@@ -1,7 +0,0 @@
# Self-critique chain with constitutional AI

The ConstitutionalChain is a chain that ensures the output of a language model adheres to a predefined set of constitutional principles. By incorporating specific rules and guidelines, the ConstitutionalChain filters and modifies the generated content to align with these principles, thus providing more controlled, ethical, and contextually appropriate responses. This mechanism helps maintain the integrity of the output while minimizing the risk of generating content that may violate guidelines, be offensive, or deviate from the desired context.

import Example from "@snippets/modules/chains/additional/constitutional_chain.mdx"

<Example/>
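A hedged sketch of attaching one custom principle to an existing chain (the principle text and question are illustrative):

```python
from langchain.chains import ConstitutionalChain, LLMChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)
qa_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Question: {question}\nAnswer:"),
)

ethical_principle = ConstitutionalPrinciple(
    name="Ethical Principle",
    critique_request="The model should only talk about ethical and legal things.",
    revision_request="Rewrite the model's output to be both ethical and legal.",
)

# Each principle critiques the base chain's answer and revises it if needed.
constitutional_chain = ConstitutionalChain.from_llm(
    chain=qa_chain,
    constitutional_principles=[ethical_principle],
    llm=llm,
    verbose=True,
)
constitutional_chain.run(question="How can I maximize profit at any cost?")
```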
@@ -1,8 +0,0 @@
---
sidebar_position: 4
---
# Additional

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,8 +0,0 @@
# Moderation

This notebook walks through examples of how to use a moderation chain, and several common ways of doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be useful to apply both on user input and on the output of a language model. Some API providers, like OpenAI, [specifically prohibit](https://beta.openai.com/docs/usage-policies/use-case-policy) you, or your end users, from generating some types of harmful content. To comply with this (and to just generally prevent your application from being harmful) you may often want to append a moderation chain to any LLMChains, in order to make sure any output the LLM generates is not harmful.

If the content passed into the moderation chain is harmful, there is not one best way to handle it; it probably depends on your application. Sometimes you may want to throw an error in the chain (and have your application handle that). Other times, you may want to return something to the user explaining that the text was harmful. There could even be other ways to handle it! We will cover all these ways in this walkthrough.

import Example from "@snippets/modules/chains/additional/moderation.mdx"

<Example/>
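A minimal sketch of the default behavior, which returns a canned string for flagged text rather than raising:

```python
from langchain.chains import OpenAIModerationChain

# With error=True the chain raises on flagged content instead,
# letting your application decide how to respond.
moderation_chain = OpenAIModerationChain(error=False)
moderation_chain.run("This is okay")
```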
@@ -1,7 +0,0 @@
# Dynamically selecting from multiple prompts

This notebook demonstrates how to use the `RouterChain` paradigm to create a chain that dynamically selects the prompt to use for a given input. Specifically, we show how to use the `MultiPromptChain` to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt.

import Example from "@snippets/modules/chains/additional/multi_prompt_router.mdx"

<Example/>
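As a hedged sketch (the two subject-matter prompts are illustrative assumptions):

```python
from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI

physics_template = "You are a physics professor. Answer concisely:\n{input}"
math_template = "You are a mathematician. Answer step by step:\n{input}"

prompt_infos = [
    {"name": "physics", "description": "Good for physics questions", "prompt_template": physics_template},
    {"name": "math", "description": "Good for math questions", "prompt_template": math_template},
]

# The router LLM picks the destination prompt whose description best matches the input.
chain = MultiPromptChain.from_prompts(OpenAI(), prompt_infos, verbose=True)
chain.run("What is black body radiation?")
```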
@@ -1,7 +0,0 @@
# Dynamically selecting from multiple retrievers

This notebook demonstrates how to use the `RouterChain` paradigm to create a chain that dynamically selects which retrieval system to use. Specifically, we show how to use the `MultiRetrievalQAChain` to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it.

import Example from "@snippets/modules/chains/additional/multi_retrieval_qa_router.mdx"

<Example/>
@@ -1,13 +0,0 @@
# Document QA

Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our [Document chains](/docs/modules/chains/document/).

import Example from "@snippets/modules/chains/additional/question_answering.mdx"

<Example/>

## Document QA with sources

import ExampleWithSources from "@snippets/modules/chains/additional/qa_with_sources.mdx"

<ExampleWithSources/>
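A minimal sketch of the pattern (the document and question are placeholders; `chain_type` selects the underlying document chain):

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.docstore.document import Document
from langchain.llms import OpenAI

docs = [Document(page_content="The president thanked Justice Breyer for his service.")]

# chain_type can be "stuff", "map_reduce", "refine", or "map_rerank".
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
chain.run(input_documents=docs, question="What did the president say about Justice Breyer?")
```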
@@ -1,16 +0,0 @@
---
sidebar_position: 2
---
# Documents

These are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.

These chains all implement a common interface:

import Interface from "@snippets/modules/chains/document/combine_docs.mdx"

<Interface/>

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,5 +0,0 @@
# Map reduce

The map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.
@@ -1,5 +0,0 @@
# Map re-rank

The map re-rank documents chain runs an initial prompt on each document that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest-scoring response is returned.
@@ -1,12 +0,0 @@
---
sidebar_position: 1
---
# Refine

The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.

Since the Refine chain only passes a single document to the LLM at a time, it is well-suited for tasks that require analyzing more documents than can fit in the model's context.
The obvious tradeoff is that this chain will make far more LLM calls than, for example, the Stuff documents chain.
There are also certain tasks which are difficult to accomplish iteratively. For example, the Refine chain can perform poorly when documents frequently cross-reference one another or when a task requires detailed information from many documents.
@@ -1,12 +0,0 @@
---
sidebar_position: 0
---
# Stuff

The stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt, and passes that prompt to an LLM.

This chain is well-suited for applications where documents are small and only a few are passed in for most calls.
@@ -1,8 +0,0 @@
---
sidebar_position: 1
---
# Foundational

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,11 +0,0 @@
# LLM

An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.

An LLMChain consists of a PromptTemplate and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to the LLM, and returns the LLM output.

## Get started

import Example from "@snippets/modules/chains/foundational/llm_chain.mdx"

<Example/>
@@ -1,14 +0,0 @@
# Sequential

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. -->

The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.

In this notebook we will walk through some examples of how to do this, using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario. There are two types of sequential chains:

- `SimpleSequentialChain`: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.
- `SequentialChain`: A more general form of sequential chains, allowing for multiple inputs/outputs.

import Example from "@snippets/modules/chains/foundational/sequential_chains.mdx"

<Example/>
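A minimal `SimpleSequentialChain` sketch with two single-input/single-output steps (the playwriting prompts are illustrative):

```python
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)

# Step 1: write a synopsis from a title.
synopsis_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Write a one-paragraph synopsis for a play titled: {title}"
    ),
)
# Step 2: review the synopsis produced by step 1.
review_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Write a short review of this play synopsis:\n{synopsis}"
    ),
)

overall_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True)
overall_chain.run("Tragedy at Sunset on the Beach")
```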
@@ -1,8 +0,0 @@
# Debugging chains

It can be hard to debug a `Chain` object solely from its output, as most `Chain` objects involve a fair amount of input prompt preprocessing and LLM output post-processing.

import Example from "@snippets/modules/chains/how_to/debugging.mdx"

<Example/>
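The most common first step is turning on verbose output, roughly:

```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI

# verbose=True prints the fully formatted prompt and other internal
# state for every call, which makes chain behavior much easier to inspect.
conversation = ConversationChain(llm=OpenAI(), verbose=True)
conversation.run("What is ChatGPT?")
```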
@@ -1,8 +0,0 @@
---
sidebar_position: 0
---
# How to

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,10 +0,0 @@
# Adding memory (state)

Chains can be initialized with a Memory object, which will persist data across calls to the chain. This makes a Chain stateful.

## Get started

import GetStarted from "@snippets/modules/chains/how_to/memory.mdx"

<GetStarted/>
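A minimal sketch of a stateful chain (the questions are illustrative):

```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

# The memory object persists inputs/outputs across calls, making the chain stateful.
chain = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),
)
chain.run("Answer briefly. What are the first 3 colors of a rainbow?")
chain.run("And the next 4?")  # Answerable only because the first turn is remembered.
```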
@@ -1,33 +0,0 @@
---
sidebar_position: 2
---

# Chains

Using an LLM in isolation is fine for simple applications,
but more complex applications require chaining LLMs - either with each other or with other components.

LangChain provides the **Chain** interface for such "chained" applications. We define a Chain very generically as a sequence of calls to components, which can include other chains. The base interface is simple:

import BaseClass from "@snippets/modules/chains/base_class.mdx"

<BaseClass/>

This idea of composing components together in a chain is simple but powerful. It drastically simplifies the implementation of complex applications and makes it more modular, which in turn makes it much easier to debug, maintain, and improve your applications.

For more specifics check out:
- [How-to](/docs/modules/chains/how_to/) for walkthroughs of different chain features
- [Foundational](/docs/modules/chains/foundational/) to get acquainted with core building block chains
- [Document](/docs/modules/chains/document/) to learn how to incorporate documents into chains
- [Popular](/docs/modules/chains/popular/) chains for the most common use cases
- [Additional](/docs/modules/chains/additional/) to see some of the more advanced chains and integrations that you can use out of the box

## Why do we need chains?

Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.

## Get started

import GetStarted from "@snippets/modules/chains/get_started.mdx"

<GetStarted/>
@@ -1,9 +0,0 @@
---
sidebar_position: 0
---
# API chains

APIChain enables using LLMs to interact with APIs to retrieve relevant information. Construct the chain by providing a question relevant to the provided API documentation.

import Example from "@snippets/modules/chains/popular/api.mdx"

<Example/>
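A minimal sketch using the bundled Open-Meteo docs (the weather question is illustrative):

```python
from langchain.chains import APIChain
from langchain.chains.api import open_meteo_docs
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# The chain reads the API docs, constructs a request URL, calls the API,
# and then summarizes the response to answer the question.
chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)
chain.run("What is the weather like right now in Munich, Germany in degrees Celsius?")
```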
@@ -1,14 +0,0 @@
---
sidebar_position: 2
---

# Conversational Retrieval QA

The ConversationalRetrievalQA chain builds on RetrievalQAChain to provide a chat history component.

It first combines the chat history (either explicitly passed in or retrieved from the provided memory) and the question into a standalone question, then looks up relevant documents from the retriever, and finally passes those documents and the question to a question-answering chain to return a response.

To create one, you will need a retriever. In the example below, we will create one from a vector store, which can be created from embeddings.

import Example from "@snippets/modules/chains/popular/chat_vector_db.mdx"

<Example/>
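A hedged sketch (assumes the `faiss-cpu` package; the toy texts stand in for a real document set):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

texts = [
    "ConversationalRetrievalChain adds chat history to retrieval QA.",
    "It first condenses the history and question into a standalone question.",
]
vectorstore = FAISS.from_texts(texts, OpenAIEmbeddings())

qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0), vectorstore.as_retriever()
)

chat_history = []
result = qa({"question": "What does the chain add?", "chat_history": chat_history})
print(result["answer"])
```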
@@ -1,8 +0,0 @@
---
sidebar_position: 3
---
# Popular

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,7 +0,0 @@
# SQL

This example demonstrates the use of the `SQLDatabaseChain` for answering questions over a SQL database.

import Example from "@snippets/modules/chains/popular/sqlite.mdx"

<Example/>
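A hedged sketch against the Chinook sample database mentioned elsewhere in these docs (assumes `Chinook.db` has been downloaded locally):

```python
from langchain.chains import SQLDatabaseChain
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
# The chain generates SQL from the question, runs it, and phrases an answer.
db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
db_chain.run("How many employees are there?")
```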
@@ -1,8 +0,0 @@
# Summarization

A summarization chain can be used to summarize multiple documents. One way is to input multiple smaller documents, after they have been divided into chunks, and operate over them with a MapReduceDocumentsChain. You can also choose to have the chain that does the summarization be a StuffDocumentsChain or a RefineDocumentsChain instead.

import Example from "@snippets/modules/chains/popular/summarize.mdx"

<Example/>
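A minimal sketch (the chunk texts are placeholders):

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from langchain.llms import OpenAI

docs = [Document(page_content=t) for t in ["First chunk of text...", "Second chunk of text..."]]

# chain_type can be "stuff", "map_reduce", or "refine".
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
chain.run(docs)
```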
@@ -1,14 +0,0 @@
---
sidebar_position: 1
---
# Retrieval QA

This example showcases question answering over an index.

import Example from "@snippets/modules/chains/popular/vector_db_qa.mdx"

<Example/>

import ExampleWithSources from "@snippets/modules/chains/popular/vector_db_qa_with_sources.mdx"

<ExampleWithSources/>
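A hedged sketch of QA over a small in-memory index (assumes the `faiss-cpu` package; the texts are placeholders):

```python
from langchain.chains import RetrievalQA
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

texts = ["LangChain supports retrieval QA over vector store indexes."]
index = FAISS.from_texts(texts, OpenAIEmbeddings())

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=index.as_retriever(),
)
qa.run("What does LangChain support?")
```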
@@ -1,2 +0,0 @@
label: 'How-to'
position: 0