Compare commits


1 Commit

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Erick Friis | 7bc4a5964c | wip | 2023-12-15 17:46:48 -08:00 |
2650 changed files with 82397 additions and 152735 deletions


@@ -3,17 +3,31 @@
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
To learn about how to contribute, please follow the [guides here](https://python.langchain.com/docs/contributing/).
## 🗺️ Guidelines
### 👩‍💻 Ways to contribute
### 👩‍💻 Contributing Code
There are many ways to contribute to LangChain. Here are some common ways people contribute:
To contribute to this project, please follow the ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
Please do not try to push directly to this repo unless you are a maintainer.
- [**Documentation**](https://python.langchain.com/docs/contributing/documentation): Help improve our docs, including this one!
- [**Code**](https://python.langchain.com/docs/contributing/code): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](https://python.langchain.com/docs/contributing/integration): Help us integrate with your favorite vendors and tools.
Please follow the checked-in pull request template when opening pull requests. Note related issues and tag relevant
maintainers.
Pull requests cannot land without passing the formatting, linting, and testing checks first. See [Testing](#testing) and
[Formatting and Linting](#formatting-and-linting) for how to run these checks locally.
It's essential that we maintain great documentation and testing. If you:
- Fix a bug
  - Add a relevant unit or integration test when possible. These live in `tests/unit_tests` and `tests/integration_tests`.
- Make an improvement
  - Update any affected example notebooks and documentation. These live in `docs`.
  - Update unit and integration tests when relevant.
- Add a feature
  - Add a demo notebook in `docs/docs/`.
  - Add unit and integration tests.
We are a small, progress-oriented team. If there's something you'd like to add or change, opening a pull request is the
best way to get our attention.
### 🚩GitHub Issues
@@ -40,6 +54,327 @@ In a similar vein, we do enforce certain linting, formatting, and documentation
If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
we do not want these to get in the way of getting good code into the codebase.
### Contributor Documentation
## 🚀 Quick Start
To learn about how to contribute, please follow the [guides here](https://python.langchain.com/docs/contributing/).
This quick start guide explains how to run the repository locally.
For a [development container](https://containers.dev/), see the [.devcontainer folder](https://github.com/langchain-ai/langchain/tree/master/.devcontainer).
### Dependency Management: Poetry and other env/dependency managers
This project utilizes [Poetry](https://python-poetry.org/) v1.6.1+ as a dependency manager.
❗Note: *Before installing Poetry*, if you use `Conda`, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`)
Install Poetry: **[documentation on how to install it](https://python-poetry.org/docs/#installation)**.
❗Note: If you use `Conda` or `Pyenv` as your environment/package manager, after installing Poetry,
tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
### Different packages
This repository contains multiple packages:
- `langchain-core`: Base interfaces for key abstractions as well as logic for combining them in chains (LangChain Expression Language).
- `langchain-community`: Third-party integrations of various components.
- `langchain`: Chains, agents, and retrieval logic that makes up the cognitive architecture of your applications.
- `langchain-experimental`: Components and chains that are experimental, either in the sense that the techniques are novel and still being tested, or they require giving the LLM more access than would be possible in most production systems.
Each of these has its own development environment. Docs are run from the top-level makefile, but development
is split across separate test & release flows.
For this quickstart, start with langchain:
```bash
cd libs/langchain
```
### Local Development Dependencies
Install langchain development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
```bash
poetry install --with test
```
Then verify dependency installation:
```bash
make test
```
If the tests don't pass, you may need to pip install additional dependencies, such as `numexpr` and `openapi_schema_pydantic`.
If during installation you receive a `WheelFileValidationError` for `debugpy`, please make sure you are running
Poetry v1.6.1+. This bug was present in older versions of Poetry (e.g. 1.4.1) and has been resolved in newer releases.
If you are still seeing this bug on v1.6.1, you may also try disabling "modern installation"
(`poetry config installer.modern-installation false`) and re-installing requirements.
See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.
### Testing
_Some test dependencies are optional; see the section about optional dependencies._
Unit tests cover modular logic that does not require calls to outside APIs.
If you add new logic, please add a unit test.
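A minimal sketch of such a test (the module and helper names here are hypothetical, not part of the codebase):

```python
# tests/unit_tests/test_my_helper.py
import pytest

from langchain.my_module import truncate_text  # hypothetical new logic


def test_truncate_text() -> None:
    # Unit tests must be fast and must not call outside APIs.
    assert truncate_text("hello world", max_len=5) == "hello"


def test_truncate_text_rejects_negative_length() -> None:
    with pytest.raises(ValueError):
        truncate_text("hello", max_len=-1)
```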
To run unit tests:
```bash
make test
```
To run unit tests in Docker:
```bash
make docker_tests
```
There are also [integration tests and code-coverage](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/tests/README.md) available.
### Only develop langchain_core or langchain_experimental
If you are only developing `langchain_core` or `langchain_experimental`, you can simply install the dependencies for the respective projects and run tests:
```bash
cd libs/core
poetry install --with test
make test
```
Or:
```bash
cd libs/experimental
poetry install --with test
make test
```
### Formatting and Linting
Run these locally before submitting a PR; the CI system will also check them.
#### Code Formatting
Formatting for this project is done via [ruff](https://docs.astral.sh/ruff/rules/).
To run formatting for docs, cookbook and templates:
```bash
make format
```
To run formatting for a library, run the same command from the relevant library directory:
```bash
cd libs/{LIBRARY}
make format
```
Additionally, you can run the formatter only on the files that have been modified in your current branch as compared to the master branch using the `format_diff` command:
```bash
make format_diff
```
This is especially useful when you have made changes to a subset of the project and want to ensure your changes are properly formatted without affecting the rest of the codebase.
#### Linting
Linting for this project is done via a combination of [ruff](https://docs.astral.sh/ruff/rules/) and [mypy](http://mypy-lang.org/).
To run linting for docs, cookbook and templates:
```bash
make lint
```
To run linting for a library, run the same command from the relevant library directory:
```bash
cd libs/{LIBRARY}
make lint
```
In addition, you can run the linter only on the files that have been modified in your current branch as compared to the master branch using the `lint_diff` command:
```bash
make lint_diff
```
This can be very helpful when you've made changes to only certain parts of the project and want to ensure your changes meet the linting standards without having to check the entire codebase.
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
#### Spellcheck
Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
Note that `codespell` only finds common typos, so it can produce both false positives (flagging correctly spelled but rarely used words) and false negatives (missing genuinely misspelled words).
To check spelling for this project:
```bash
make spell_check
```
To fix spelling in place:
```bash
make spell_fix
```
If codespell is incorrectly flagging a word, you can skip spellcheck for that word by adding it to the codespell config in the `pyproject.toml` file.
```toml
[tool.codespell]
...
# Add here:
ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogyny,unsecure'
```
## Working with Optional Dependencies
LangChain relies heavily on optional dependencies to keep the LangChain package lightweight.
You only need to add a new dependency if a **unit test** relies on the package.
If your package is only required for **integration tests**, then you can skip these
steps and leave all pyproject.toml and poetry.lock files alone.
If you're adding a new dependency to LangChain, assume that it will be an optional dependency, and
that most users won't have it installed.
Users who do not have the dependency installed should be able to **import** your code without
any side effects (no warnings, no errors, no exceptions).
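One common way to meet this requirement, sketched here with a hypothetical optional package `fancysdk`, is to defer the import until the code that needs it actually runs:

```python
from typing import Any


def analyze(text: str) -> Any:
    """Analyze ``text`` using the optional ``fancysdk`` package (hypothetical)."""
    try:
        # Imported lazily so that importing this module itself has no side effects.
        import fancysdk
    except ImportError as e:
        raise ImportError(
            "Could not import fancysdk. Please install it with `pip install fancysdk`."
        ) from e
    return fancysdk.analyze(text)
```

The module stays importable for users without the dependency; the failure is deferred to the first call that actually needs it.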
To introduce the dependency to the pyproject.toml file correctly, please do the following:
1. Add the dependency to the main group as an optional dependency
```bash
poetry add --optional [package_name]
```
2. Open pyproject.toml and add the dependency to the `extended_testing` extra
3. Relock the poetry file to update the extra.
```bash
poetry lock --no-update
```
4. Add a unit test that at the very least attempts to import the new code. Ideally, the unit
test makes use of lightweight fixtures to test the logic of the code.
5. Please use the `@pytest.mark.requires(package_name)` decorator for any tests that require the dependency, as in the sketch below.
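As a sketch, a test gated on the hypothetical `fancysdk` package from above might look like:

```python
import pytest


@pytest.mark.requires("fancysdk")  # runs only when fancysdk is installed
def test_analyze() -> None:
    from langchain.my_module import analyze  # hypothetical module

    assert analyze("hello") is not None
```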
## Adding a Jupyter Notebook
If you are adding a Jupyter Notebook example, you'll want to install the optional `dev` dependencies.
To install dev dependencies:
```bash
poetry install --with dev
```
Launch a notebook:
```bash
poetry run jupyter notebook
```
When you run `poetry install`, the `langchain` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook.
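For example, after `poetry install` a notebook cell can exercise in-progress code right away (reusing the hypothetical helper from the sketches above):

```python
# In a Jupyter cell: the editable install picks up local changes immediately.
from langchain.my_module import truncate_text  # hypothetical module under development

truncate_text("hello world", max_len=5)
```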
## Documentation
While the code is split between `langchain` and `langchain.experimental`, the documentation is maintained as one holistic whole.
This covers how to get started contributing to documentation.
From the top-level of this repo, install documentation dependencies:
```bash
poetry install
```
### Contribute Documentation
The `docs` directory contains the Documentation and the API Reference.
Documentation is built using [Docusaurus 2](https://docusaurus.io/).
The API Reference is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code.
For that reason, we ask that you add good documentation to all classes and methods.
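Since the API Reference is generated from docstrings, here is a sketch of the kind of docstring that renders well (Google style; the function is the hypothetical helper from the sketches above):

```python
def truncate_text(text: str, max_len: int = 100) -> str:
    """Truncate ``text`` to at most ``max_len`` characters.

    Args:
        text: The input text to truncate.
        max_len: Maximum number of characters to keep. Defaults to 100.

    Returns:
        The truncated text.

    Raises:
        ValueError: If ``max_len`` is negative.
    """
    if max_len < 0:
        raise ValueError("max_len must be non-negative")
    return text[:max_len]
```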
Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Build Documentation Locally
In the following commands, the prefix `api_` indicates that those are operations for the API Reference.
Before building the documentation, it is always a good idea to clean the build directory:
```bash
make docs_clean
make api_docs_clean
```
Next, you can build the documentation as outlined below:
```bash
make docs_build
make api_docs_build
```
Finally, run the link checker to ensure all links are valid:
```bash
make docs_linkcheck
make api_docs_linkcheck
```
### Verify Documentation changes
After pushing documentation changes to the repository, you can preview and verify that the changes are
what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page.
This will take you to a preview of the documentation changes.
This preview is created by [Vercel](https://vercel.com/docs/getting-started-with-vercel).
## 📕 Releases & Versioning
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by
a maintainer and published to [PyPI](https://pypi.org/).
The different packages are versioned slightly differently.
### `langchain-core`
`langchain-core` is currently on version `0.1.x`.
As `langchain-core` contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception is anything in `langchain_core.beta`: given the rate of change of the field, being able to move quickly is still a priority, and this module is our attempt to do so.
Minor version increases will occur for:
- Breaking changes for any public interfaces NOT in `langchain_core.beta`
Patch version increases will occur for:
- Bug fixes
- New features
- Any changes to private interfaces
- Any changes to `langchain_core.beta`
### `langchain`
`langchain` is currently on version `0.0.x`.
All changes will be accompanied by a patch version increase. Any changes to public interfaces are nearly always done in a backwards compatible way and will be communicated ahead of time when they are not backwards compatible.
We are targeting January 2024 for a release of `langchain` v0.1, at which point `langchain` will adopt the same versioning policy as `langchain-core`.
### `langchain-community`
`langchain-community` is currently on version `0.0.x`.
All changes will be accompanied by a patch version increase.
### `langchain-experimental`
`langchain-experimental` is currently on version `0.0.x`.
All changes will be accompanied by a patch version increase.
## 🌟 Recognition
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.


@@ -5,84 +5,60 @@ body:
- type: markdown
attributes:
value: >
Thank you for taking the time to file a bug report.
Relevant links to check before filing a bug report to see if your issue has already been reported, fixed, or
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://api.python.langchain.com/en/stable/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue)
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I searched the LangChain documentation with the integrated search.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Example Code
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
If you're including an error message, please include the full stack trace, not just the last error.
**Important!** Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
Thank you for taking the time to file a bug report. Before creating a new
issue, please make sure to take a few moments to check the issue tracker
for existing issues about the bug.
placeholder: |
The following code:
```python
from langchain_core.runnables import RunnableLambda
def bad_code(inputs) -> int:
raise NotImplementedError('For demo purposes')
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
```
Include both the error and the full stack trace if reporting an exception!
- type: textarea
id: description
attributes:
label: Description
description: |
What is the problem, question, or error?
Write a short description telling what you are doing, what you expect to happen, and what is currently happening.
placeholder: |
* I'm trying to use the `langchain` library to do X.
* I expect to see Y.
* Instead, it does Z.
validations:
required: true
- type: textarea
id: system-info
attributes:
label: System Info
description: Please share your system info with us.
placeholder: |
"pip freeze | grep langchain"
platform
python version
placeholder: LangChain version, platform, python version, ...
validations:
required: true
- type: textarea
id: who-can-help
attributes:
label: Who can help?
description: |
Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way; otherwise, here is a rough guide of **who to tag**.
The core maintainers strive to read all issues, but tagging them will help them prioritize.
Please tag fewer than 3 people.
@hwchase17 - project lead
Tracing / Callbacks
- @agola11
Async
- @agola11
DataLoader Abstractions
- @eyurtsev
LLM/Chat Wrappers
- @hwchase17
- @agola11
Tools / Toolkits
- ...
placeholder: "@Username ..."
- type: checkboxes
id: information-scripts-examples
attributes:
label: Information
description: "The problem arises when using:"
options:
- label: "The official example notebooks/scripts"
- label: "My own modified scripts"
- type: checkboxes
id: related-components
attributes:
@@ -101,3 +77,30 @@ body:
- label: "Chains"
- label: "Callbacks/Tracing"
- label: "Async"
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Reproduction
description: |
Please provide a [code sample](https://stackoverflow.com/help/minimal-reproducible-example) that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
Steps to reproduce the behavior:
1.
2.
3.
- type: textarea
id: expected-behavior
validations:
required: true
attributes:
label: Expected behavior
description: "A clear and concise description of what you would expect to happen."


@@ -1,9 +1,6 @@
blank_issues_enabled: true
version: 2.1
contact_links:
- name: 🤔 Question or Problem
about: Ask a question or ask about a problem in GitHub Discussions.
url: https://github.com/langchain-ai/langchain/discussions
- name: Discord
url: https://discord.gg/6adMQxSpJS
about: General community discussions


@@ -27,4 +27,4 @@ body:
attributes:
label: Your contribution
description: |
Is there any way that you could help, e.g. by submitting a PR? Make sure to read the [Contributing Guide](https://python.langchain.com/docs/contributing/)
Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD [readme](https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md)

.github/ISSUE_TEMPLATE/other.yml (new file)

@@ -0,0 +1,18 @@
name: Other Issue
description: Raise an issue that wouldn't be covered by the other templates.
title: "Issue: <Please write a comprehensive title after the 'Issue: ' prefix>"
labels: [04 - Other]
body:
- type: textarea
attributes:
label: "Issue you'd like to raise."
description: >
Please describe the issue you'd like to raise as clearly as possible.
Make sure to include any relevant links or references.
- type: textarea
attributes:
label: "Suggestion:"
description: >
Please outline a suggestion to improve the issue here.


@@ -1,20 +1,20 @@
<!-- Thank you for contributing to LangChain!
Please title your PR "<package>: <description>", where <package> is whichever of langchain, community, core, experimental, etc. is being modified.
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes if applicable,
- **Issue:** the issue # it fixes (if applicable),
- **Dependencies:** any dependencies required for this change,
- **Tag maintainer:** for a quicker response, tag the relevant maintainer (see below),
- **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` from the root of the package you've modified to check this locally.
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally.
See contribution guidelines for more information on how to write/run tests, lint, etc: https://python.langchain.com/docs/contributing/
See contribution guidelines for more information on how to write/run tests, lint, etc:
https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.
2. an example notebook showing its use. It lives in `docs/extras` directory.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->


@@ -26,9 +26,8 @@ inputs:
runs:
using: composite
steps:
- uses: actions/setup-python@v5
- uses: actions/setup-python@v4
name: Setup python ${{ inputs.python-version }}
id: setup-python
with:
python-version: ${{ inputs.python-version }}
@@ -75,8 +74,7 @@ runs:
env:
POETRY_VERSION: ${{ inputs.poetry-version }}
PYTHON_VERSION: ${{ inputs.python-version }}
# Install poetry using the python version installed by setup-python step.
run: pipx install "poetry==$POETRY_VERSION" --python '${{ steps.setup-python.outputs.python-path }}' --verbose
run: pipx install "poetry==$POETRY_VERSION" --python "python$PYTHON_VERSION" --verbose
- name: Restore pip and poetry cached dependencies
uses: actions/cache@v3


@@ -1,6 +1,5 @@
import json
import sys
import os
LANGCHAIN_DIRS = {
"libs/core",
@@ -13,10 +12,6 @@ if __name__ == "__main__":
files = sys.argv[1:]
dirs_to_run = set()
if len(files) == 300:
# max diff length is 300 files - there are likely files missing
raise ValueError("Max diff reached. Please manually run CI on changed libs.")
for file in files:
if any(
file.startswith(dir_)
@@ -35,15 +30,9 @@ if __name__ == "__main__":
)
elif "libs/partners" in file:
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}"):
dirs_to_run.update(
(
f"libs/partners/{partner_dir}",
"libs/langchain",
"libs/experimental",
)
)
# Skip if the directory was deleted
dirs_to_run.update(
(f"libs/partners/{partner_dir}", "libs/langchain", "libs/experimental")
)
elif "libs/langchain" in file:
dirs_to_run.update(("libs/langchain", "libs/experimental"))
elif "libs/experimental" in file:
@@ -52,5 +41,4 @@ if __name__ == "__main__":
dirs_to_run.update(LANGCHAIN_DIRS)
else:
pass
json_output = json.dumps(list(dirs_to_run))
print(f"dirs-to-run={json_output}")
print(json.dumps(list(dirs_to_run)))


@@ -37,20 +37,10 @@ jobs:
shell: bash
run: poetry install --with test,test_integration
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Run integration tests
shell: bash
env:
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
make integration_tests


@@ -1,5 +1,5 @@
name: release
run-name: Release ${{ inputs.working-directory }} by @${{ github.actor }}
on:
workflow_call:
inputs:
@@ -11,8 +11,15 @@ on:
inputs:
working-directory:
required: true
type: string
type: choice
default: 'libs/langchain'
options:
- libs/langchain
- libs/core
- libs/experimental
- libs/community
- libs/partners/google-genai
- libs/partners/nvidia-aiplay
env:
PYTHON_VERSION: "3.10"
@@ -149,29 +156,13 @@ jobs:
run: make tests
working-directory: ${{ inputs.working-directory }}
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Run integration tests
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
env:
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
- name: Run unit tests with minimum dependency versions
if: ${{ (inputs.working-directory == 'libs/langchain') || (inputs.working-directory == 'libs/community') || (inputs.working-directory == 'libs/experimental') }}
run: |
poetry run pip install -r _test_minimum_requirements.txt
make tests
working-directory: ${{ inputs.working-directory }}
publish:
needs:


@@ -5,6 +5,11 @@ on:
push:
branches: [master]
pull_request:
paths:
- ".github/actions/**"
- ".github/tools/**"
- ".github/workflows/**"
- "libs/**"
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
@@ -21,14 +26,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: actions/setup-python@v4
with:
python-version: '3.10'
- id: files
uses: Ana06/get-changed-files@v2.2.0
- id: set-matrix
run: |
python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
run: echo "dirs-to-run=$(python .github/scripts/check_diff.py ${{ steps.files.outputs.all }})" >> $GITHUB_OUTPUT
outputs:
dirs-to-run: ${{ steps.set-matrix.outputs.dirs-to-run }}
ci:


@@ -36,7 +36,7 @@ jobs:
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
uses: 'google-github-actions/auth@v1'
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'


@@ -10,10 +10,9 @@ build:
tools:
python: "3.11"
commands:
- python -m virtualenv $READTHEDOCS_VIRTUALENV_PATH
- python -mvirtualenv $READTHEDOCS_VIRTUALENV_PATH
- python -m pip install --upgrade --no-cache-dir pip setuptools
- python -m pip install --upgrade --no-cache-dir sphinx readthedocs-sphinx-ext
- python -m pip install ./libs/partners/*
- python -m pip install --exists-action=w --no-cache-dir -r docs/api_reference/requirements.txt
- python docs/api_reference/create_api_rst.py
- cat docs/api_reference/conf.py


@@ -95,7 +95,7 @@ Agents involve an LLM making decisions about which Actions to take, taking that
Please see [here](https://python.langchain.com) for full documentation, which includes:
- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
- Overview of the [interfaces](https://python.langchain.com/docs/expression_language/), [modules](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
- Overview of the [interfaces](https://python.langchain.com/docs/expression_language/), [modules](https://python.langchain.com/docs/modules/) and [integrations](https://python.langchain.com/docs/integrations/providers)
- [Use case](https://python.langchain.com/docs/use_cases/qa_structured/sql) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/adapters/openai)
- [LangSmith](https://python.langchain.com/docs/langsmith/), [LangServe](https://python.langchain.com/docs/langserve), and [LangChain Template](https://python.langchain.com/docs/templates/) overviews
- [Reference](https://api.python.langchain.com): full API docs
@@ -105,7 +105,7 @@ Please see [here](https://python.langchain.com) for full documentation, which in
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see [here](https://python.langchain.com/docs/contributing/).
For detailed information on how to contribute, see [here](.github/CONTRIBUTING.md).
## 🌟 Contributors


@@ -61,13 +61,13 @@
],
"source": [
"# Local\n",
"from langchain_community.chat_models import ChatOllama\n",
"from langchain.chat_models import ChatOllama\n",
"\n",
"llama2_chat = ChatOllama(model=\"llama2:13b-chat\")\n",
"llama2_code = ChatOllama(model=\"codellama:7b-instruct\")\n",
"\n",
"# API\n",
"from langchain_community.llms import Replicate\n",
"from langchain.llms import Replicate\n",
"\n",
"# REPLICATE_API_TOKEN = getpass()\n",
"# os.environ[\"REPLICATE_API_TOKEN\"] = REPLICATE_API_TOKEN\n",
@@ -107,7 +107,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import SQLDatabase\n",
"from langchain.utilities import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_uri(\"sqlite:///nba_roster.db\", sample_rows_in_table_info=0)\n",
"\n",
@@ -125,7 +125,7 @@
"id": "654b3577-baa2-4e12-a393-f40e5db49ac7",
"metadata": {},
"source": [
"## Query a SQL Database \n",
"## Query a SQL DB \n",
"\n",
"Follow the runnables workflow [here](https://python.langchain.com/docs/expression_language/cookbook/sql_db)."
]
@@ -149,9 +149,8 @@
],
"source": [
"# Prompt\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain.prompts import ChatPromptTemplate\n",
"\n",
"# Update the template based on the type of SQL Database like MySQL, Microsoft SQL Server and so on\n",
"template = \"\"\"Based on the table schema below, write a SQL query that would answer the user's question:\n",
"{schema}\n",
"\n",
@@ -218,7 +217,7 @@
" [\n",
" (\n",
" \"system\",\n",
" \"Given an input question and SQL response, convert it to a natural language answer. No pre-amble.\",\n",
" \"Given an input question and SQL response, convert it to a natural langugae answer. No pre-amble.\",\n",
" ),\n",
" (\"human\", template),\n",
" ]\n",
@@ -278,7 +277,7 @@
"source": [
"# Prompt\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"template = \"\"\"Given an input question, convert it to a SQL query. No pre-amble. Based on the table schema below, write a SQL query that would answer the user's question:\n",
"{schema}\n",
@@ -346,7 +345,7 @@
" [\n",
" (\n",
" \"system\",\n",
" \"Given an input question and SQL response, convert it to a natural language answer. No pre-amble.\",\n",
" \"Given an input question and SQL response, convert it to a natural langugae answer. No pre-amble.\",\n",
" ),\n",
" (\"human\", template),\n",
" ]\n",


@@ -46,7 +46,7 @@
"\n",
"---\n",
"\n",
"A separate cookbook highlights `Option 1` [here](https://github.com/langchain-ai/langchain/blob/master/cookbook/multi_modal_RAG_chroma.ipynb).\n",
"A seperate cookbook highlights `Option 1` [here](https://github.com/langchain-ai/langchain/blob/master/cookbook/multi_modal_RAG_chroma.ipynb).\n",
"\n",
"And option `Option 2` is appropriate for cases when a multi-modal LLM cannot be used for answer synthesis (e.g., cost, etc).\n",
"\n",
@@ -101,7 +101,7 @@
"If you want to use the provided folder, then simply opt for a [pdf loader](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf) for the document:\n",
"\n",
"```\n",
"from langchain_community.document_loaders import PyPDFLoader\n",
"from langchain.document_loaders import PyPDFLoader\n",
"loader = PyPDFLoader(path + fname)\n",
"docs = loader.load()\n",
"tables = [] # Ignore w/ basic pdf loader\n",
@@ -198,9 +198,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"# Generate summaries of text elements\n",
@@ -341,7 +341,7 @@
"Add raw docs and doc summaries to [Multi Vector Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector#summary): \n",
"\n",
"* Store the raw texts, tables, and images in the `docstore`.\n",
"* Store the texts, table summaries, and image summaries in the `vectorstore` for efficient semantic retrieval."
"* Store the texts, table summaries, and image summaries in the `vectorstore` for semantic retrieval."
]
},
{
@@ -353,11 +353,11 @@
"source": [
"import uuid\n",
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"\n",
"def create_multi_vector_retriever(\n",


@@ -93,7 +93,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import PyPDFLoader\n",
"from langchain.document_loaders import PyPDFLoader\n",
"\n",
"loader = PyPDFLoader(\"./cj/cj.pdf\")\n",
"docs = loader.load()\n",
@@ -158,11 +158,11 @@
}
],
"source": [
"from langchain.chat_models import ChatVertexAI\n",
"from langchain.llms import VertexAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.chat_models import ChatVertexAI\n",
"from langchain_community.llms import VertexAI\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain_core.messages import AIMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda\n",
"\n",
"\n",
@@ -243,7 +243,7 @@
"import base64\n",
"import os\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain.schema.messages import HumanMessage\n",
"\n",
"\n",
"def encode_image(image_path):\n",
@@ -342,11 +342,11 @@
"source": [
"import uuid\n",
"\n",
"from langchain.embeddings import VertexAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.schema.document import Document\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.embeddings import VertexAIEmbeddings\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"\n",
"def create_multi_vector_retriever(\n",
@@ -440,7 +440,7 @@
"import re\n",
"\n",
"from IPython.display import HTML, display\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain.schema.runnable import RunnableLambda, RunnablePassthrough\n",
"from PIL import Image\n",
"\n",
"\n",

View File

@@ -235,9 +235,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser"
]
},
{
@@ -318,11 +318,11 @@
"source": [
"import uuid\n",
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(collection_name=\"summaries\", embedding_function=OpenAIEmbeddings())\n",


@@ -211,9 +211,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser"
]
},
{
@@ -373,11 +373,11 @@
"source": [
"import uuid\n",
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(collection_name=\"summaries\", embedding_function=OpenAIEmbeddings())\n",


@@ -209,9 +209,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatOllama\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate"
"from langchain.chat_models import ChatOllama\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser"
]
},
{
@@ -376,10 +376,10 @@
"source": [
"import uuid\n",
"\n",
"from langchain.embeddings import GPT4AllEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.embeddings import GPT4AllEmbeddings\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"\n",
"# The vectorstore to use to index the child chunks\n",


@@ -62,7 +62,7 @@
"path = \"/Users/rlm/Desktop/cpi/\"\n",
"\n",
"# Load\n",
"from langchain_community.document_loaders import PyPDFLoader\n",
"from langchain.document_loaders import PyPDFLoader\n",
"\n",
"loader = PyPDFLoader(path + \"cpi.pdf\")\n",
"pdf_pages = loader.load()\n",
@@ -132,8 +132,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"baseline = Chroma.from_texts(\n",
" texts=all_splits_pypdf_texts,\n",
@@ -160,9 +160,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Prompt\n",
"prompt_text = \"\"\"You are an assistant tasked with summarizing tables and text for retrieval. \\\n",


@@ -29,7 +29,7 @@
"outputs": [],
"source": [
"from langchain.chains import AnalyzeDocumentChain\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)"
]


@@ -28,9 +28,9 @@
"outputs": [],
"source": [
"from langchain.agents import Tool\n",
"from langchain_community.tools.file_management.read import ReadFileTool\n",
"from langchain_community.tools.file_management.write import WriteFileTool\n",
"from langchain_community.utilities import SerpAPIWrapper\n",
"from langchain.tools.file_management.read import ReadFileTool\n",
"from langchain.tools.file_management.write import WriteFileTool\n",
"from langchain.utilities import SerpAPIWrapper\n",
"\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
@@ -62,8 +62,8 @@
"outputs": [],
"source": [
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings"
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.vectorstores import FAISS"
]
},
{
@@ -100,8 +100,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_experimental.autonomous_agents import AutoGPT"
]
},
{
@@ -167,7 +167,7 @@
},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import FileChatMessageHistory\n",
"from langchain.memory.chat_message_histories import FileChatMessageHistory\n",
"\n",
"agent = AutoGPT.from_llm_and_tools(\n",
" ai_name=\"Tom\",\n",


@@ -39,10 +39,10 @@
"\n",
"import nest_asyncio\n",
"import pandas as pd\n",
"from langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.docstore.document import Document\n",
"from langchain_community.agent_toolkits.pandas.base import create_pandas_dataframe_agent\n",
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Needed synce jupyter runs an async eventloop\n",
"nest_asyncio.apply()"
@@ -93,8 +93,8 @@
"from typing import Optional\n",
"\n",
"from langchain.agents import tool\n",
"from langchain_community.tools.file_management.read import ReadFileTool\n",
"from langchain_community.tools.file_management.write import WriteFileTool\n",
"from langchain.tools.file_management.read import ReadFileTool\n",
"from langchain.tools.file_management.write import WriteFileTool\n",
"\n",
"ROOT_DIR = \"./data/\"\n",
"\n",
@@ -311,8 +311,8 @@
"# Memory\n",
"import faiss\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.vectorstores import FAISS\n",
"\n",
"embeddings_model = OpenAIEmbeddings()\n",
"embedding_size = 1536\n",


@@ -31,8 +31,9 @@
"source": [
"from typing import Optional\n",
"\n",
"from langchain_experimental.autonomous_agents import BabyAGI\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings"
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.llms import OpenAI\n",
"from langchain_experimental.autonomous_agents import BabyAGI"
]
},
{
@@ -53,7 +54,7 @@
"outputs": [],
"source": [
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS"
"from langchain.vectorstores import FAISS"
]
},
{


@@ -28,9 +28,10 @@
"from typing import Optional\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_experimental.autonomous_agents import BabyAGI\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings"
"from langchain_experimental.autonomous_agents import BabyAGI"
]
},
{
@@ -62,7 +63,7 @@
"%pip install faiss-cpu > /dev/null\n",
"%pip install google-search-results > /dev/null\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS"
"from langchain.vectorstores import FAISS"
]
},
{
@@ -107,8 +108,8 @@
"source": [
"from langchain.agents import AgentExecutor, Tool, ZeroShotAgent\n",
"from langchain.chains import LLMChain\n",
"from langchain_community.utilities import SerpAPIWrapper\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"from langchain.utilities import SerpAPIWrapper\n",
"\n",
"todo_prompt = PromptTemplate.from_template(\n",
" \"You are a planner who is an expert at coming up with a todo list for a given objective. Come up with a todo list for this objective: {objective}\"\n",


@@ -36,6 +36,7 @@
"source": [
"from typing import List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import (\n",
" HumanMessagePromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
@@ -45,8 +46,7 @@
" BaseMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
")"
]
},
{


@@ -47,9 +47,9 @@
"outputs": [],
"source": [
"from IPython.display import SVG\n",
"from langchain.llms import OpenAI\n",
"from langchain_experimental.cpal.base import CPALChain\n",
"from langchain_experimental.pal_chain import PALChain\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0, max_tokens=512)\n",
"cpal_chain = CPALChain.from_univariate_prompt(llm=llm, verbose=True)\n",


@@ -23,9 +23,9 @@
"metadata": {},
"source": [
"1. Prepare data:\n",
" 1. Upload all python project files using the `langchain_community.document_loaders.TextLoader`. We will call these files the **documents**.\n",
" 1. Upload all python project files using the `langchain.document_loaders.TextLoader`. We will call these files the **documents**.\n",
" 2. Split all documents to chunks using the `langchain.text_splitter.CharacterTextSplitter`.\n",
" 3. Embed chunks and upload them into the DeepLake using `langchain.embeddings.openai.OpenAIEmbeddings` and `langchain_community.vectorstores.DeepLake`\n",
" 3. Embed chunks and upload them into the DeepLake using `langchain.embeddings.openai.OpenAIEmbeddings` and `langchain.vectorstores.DeepLake`\n",
"2. Question-Answering:\n",
" 1. Build a chain from `langchain.chat_models.ChatOpenAI` and `langchain.chains.ConversationalRetrievalChain`\n",
" 2. Prepare questions.\n",
@@ -166,7 +166,7 @@
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain.document_loaders import TextLoader\n",
"\n",
"root_dir = \"../../../../../../libs\"\n",
"\n",
@@ -657,7 +657,7 @@
}
],
"source": [
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"embeddings"
@@ -706,7 +706,7 @@
{
"data": {
"text/plain": [
"<langchain_community.vectorstores.deeplake.DeepLake at 0x7fe1b67d7a30>"
"<langchain.vectorstores.deeplake.DeepLake at 0x7fe1b67d7a30>"
]
},
"execution_count": 15,
@@ -715,7 +715,7 @@
}
],
"source": [
"from langchain_community.vectorstores import DeepLake\n",
"from langchain.vectorstores import DeepLake\n",
"\n",
"username = \"<USERNAME_OR_ORG>\"\n",
"\n",
@@ -740,7 +740,7 @@
"metadata": {},
"outputs": [],
"source": [
"# from langchain_community.vectorstores import DeepLake\n",
"# from langchain.vectorstores import DeepLake\n",
"\n",
"# db = DeepLake.from_documents(\n",
"# texts, embeddings, dataset_path=f\"hub://{<org_id>}/langchain-code\", runtime={\"tensor_db\": True}\n",
@@ -834,7 +834,7 @@
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(\n",
" model_name=\"gpt-3.5-turbo-0613\"\n",


@@ -40,12 +40,12 @@
" AgentOutputParser,\n",
" LLMSingleActionAgent,\n",
")\n",
"from langchain.agents.agent_toolkits import NLAToolkit\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain_community.agent_toolkits import NLAToolkit\n",
"from langchain_community.tools.plugin import AIPlugin\n",
"from langchain_openai import OpenAI"
"from langchain.tools.plugin import AIPlugin"
]
},
{
@@ -114,9 +114,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema import Document\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings"
"from langchain.vectorstores import FAISS"
]
},
{


@@ -65,12 +65,12 @@
" AgentOutputParser,\n",
" LLMSingleActionAgent,\n",
")\n",
"from langchain.agents.agent_toolkits import NLAToolkit\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain_community.agent_toolkits import NLAToolkit\n",
"from langchain_community.tools.plugin import AIPlugin\n",
"from langchain_openai import OpenAI"
"from langchain.tools.plugin import AIPlugin"
]
},
{
@@ -138,9 +138,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema import Document\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings"
"from langchain.vectorstores import FAISS"
]
},
{


@@ -80,7 +80,7 @@
"outputs": [],
"source": [
"# Connecting to Databricks with SQLDatabase wrapper\n",
"from langchain_community.utilities import SQLDatabase\n",
"from langchain.utilities import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_databricks(catalog=\"samples\", schema=\"nyctaxi\")"
]
@@ -93,7 +93,7 @@
"outputs": [],
"source": [
"# Creating a OpenAI Chat LLM wrapper\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-4\")"
]
@@ -115,7 +115,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import SQLDatabaseChain\n",
"from langchain.utilities import SQLDatabaseChain\n",
"\n",
"db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)"
]
@@ -177,7 +177,7 @@
"outputs": [],
"source": [
"from langchain.agents import create_sql_agent\n",
"from langchain_community.agent_toolkits import SQLDatabaseToolkit\n",
"from langchain.agents.agent_toolkits import SQLDatabaseToolkit\n",
"\n",
"toolkit = SQLDatabaseToolkit(db=db, llm=llm)\n",
"agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)"


@@ -52,12 +52,13 @@
"import os\n",
"\n",
"from langchain.chains import RetrievalQA\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.llms import OpenAI\n",
"from langchain.text_splitter import (\n",
" CharacterTextSplitter,\n",
" RecursiveCharacterTextSplitter,\n",
")\n",
"from langchain_community.vectorstores import DeepLake\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain.vectorstores import DeepLake\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
"activeloop_token = getpass.getpass(\"Activeloop Token:\")\n",


@@ -470,13 +470,13 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
")\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_openai import ChatOpenAI"
"from langchain_core.output_parsers import StrOutputParser"
]
},
{
@@ -545,11 +545,11 @@
"source": [
"import uuid\n",
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores.chroma import Chroma\n",
"from langchain.vectorstores.chroma import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"\n",
"def build_retriever(text_elements, tables, table_summaries):\n",


@@ -39,7 +39,7 @@
"source": [
"from elasticsearch import Elasticsearch\n",
"from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI"
]
},
{


@@ -22,8 +22,8 @@
"from typing import List, Optional\n",
"\n",
"from langchain.chains.openai_tools import create_extraction_chain_pydantic\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.pydantic_v1 import BaseModel"
]
},
{
@@ -153,7 +153,7 @@
"from langchain.utils.openai_functions import convert_pydantic_to_openai_tool\n",
"from langchain_core.runnables import Runnable\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.messages import SystemMessage\n",
"from langchain_core.language_models import BaseLanguageModel\n",
"\n",


@@ -20,7 +20,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms.fake import FakeListLLM"
"from langchain.llms.fake import FakeListLLM"
]
},
{


@@ -73,9 +73,10 @@
" AsyncCallbackManagerForRetrieverRun,\n",
" CallbackManagerForRetrieverRun,\n",
")\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI\n",
"from langchain.schema import BaseRetriever, Document\n",
"from langchain_community.utilities import GoogleSerperAPIWrapper\n",
"from langchain_openai import ChatOpenAI, OpenAI"
"from langchain.utilities import GoogleSerperAPIWrapper"
]
},
{


@@ -47,10 +47,11 @@
"from datetime import datetime, timedelta\n",
"from typing import List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers import TimeWeightedVectorStoreRetriever\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from langchain.vectorstores import FAISS\n",
"from termcolor import colored"
]
},


@@ -75,8 +75,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain_experimental.autonomous_agents import HuggingGPT\n",
"from langchain_openai import OpenAI\n",
"\n",
"# %env OPENAI_API_BASE=http://localhost:8000/v1"
]


@@ -20,7 +20,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.human import HumanInputChatModel"
"from langchain.chat_models.human import HumanInputChatModel"
]
},
{


@@ -19,7 +19,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms.human import HumanInputLLM"
"from langchain.llms.human import HumanInputLLM"
]
},
{


@@ -21,8 +21,9 @@
"outputs": [],
"source": [
"from langchain.chains import HypotheticalDocumentEmbedder, LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings"
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate"
]
},
{
@@ -171,7 +172,7 @@
"outputs": [],
"source": [
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()\n",


@@ -49,9 +49,9 @@
"source": [
"# pick and configure the LLM of your choice\n",
"\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\")"
"llm = OpenAI(model=\"text-davinci-003\")"
]
},
{


@@ -43,8 +43,8 @@
}
],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain_experimental.llm_bash.base import LLMBashChain\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",


@@ -42,7 +42,7 @@
],
"source": [
"from langchain.chains import LLMCheckerChain\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0.7)\n",
"\n",

View File

@@ -46,7 +46,7 @@
],
"source": [
"from langchain.chains import LLMMathChain\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_math = LLMMathChain.from_llm(llm, verbose=True)\n",

View File

@@ -331,7 +331,7 @@
],
"source": [
"from langchain.chains import LLMSummarizationCheckerChain\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"checker_chain = LLMSummarizationCheckerChain.from_llm(llm, verbose=True, max_checks=2)\n",
@@ -822,7 +822,7 @@
],
"source": [
"from langchain.chains import LLMSummarizationCheckerChain\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"checker_chain = LLMSummarizationCheckerChain.from_llm(llm, verbose=True, max_checks=3)\n",
@@ -1096,7 +1096,7 @@
],
"source": [
"from langchain.chains import LLMSummarizationCheckerChain\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"checker_chain = LLMSummarizationCheckerChain.from_llm(llm, max_checks=3, verbose=True)\n",

View File

@@ -14,8 +14,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain_experimental.llm_symbolic_math.base import LLMSymbolicMathChain\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_symbolic_math = LLMSymbolicMathChain.from_llm(llm)"

View File

@@ -57,9 +57,9 @@
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.memory import ConversationBufferWindowMemory\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI"
"from langchain.prompts import PromptTemplate"
]
},
{

View File

@@ -91,8 +91,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.messages import HumanMessage, SystemMessage"
]
},
{

View File

@@ -187,7 +187,7 @@
"\n",
"import chromadb\n",
"import numpy as np\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_experimental.open_clip import OpenCLIPEmbeddings\n",
"from PIL import Image as _PILImage\n",
"\n",
@@ -315,10 +315,10 @@
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"def prompt_func(data_dict):\n",

View File

@@ -43,8 +43,8 @@
"outputs": [],
"source": [
"from langchain.agents import AgentType, initialize_agent\n",
"from langchain.tools import SteamshipImageGenerationTool\n",
"from langchain_openai import OpenAI"
"from langchain.llms import OpenAI\n",
"from langchain.tools import SteamshipImageGenerationTool"
]
},
{

View File

@@ -28,11 +28,11 @@
"source": [
"from typing import Callable, List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema import (\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
")"
]
},
{

View File

@@ -33,6 +33,7 @@
"from typing import Callable, List\n",
"\n",
"import tenacity\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.output_parsers import RegexParser\n",
"from langchain.prompts import (\n",
" PromptTemplate,\n",
@@ -40,8 +41,7 @@
"from langchain.schema import (\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
")"
]
},
{

View File

@@ -27,13 +27,13 @@
"from typing import Callable, List\n",
"\n",
"import tenacity\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.output_parsers import RegexParser\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema import (\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
")"
]
},
{

View File

@@ -31,10 +31,10 @@
"from os import environ\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.utilities import SQLDatabase\n",
"from langchain.utilities import SQLDatabase\n",
"from langchain_experimental.sql.vector_sql import VectorSQLDatabaseChain\n",
"from langchain_openai import OpenAI\n",
"from sqlalchemy import MetaData, create_engine\n",
"\n",
"MYSCALE_HOST = \"msc-4a9e710a.us-east-1.aws.staging.myscale.cloud\"\n",
@@ -57,7 +57,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import HuggingFaceInstructEmbeddings\n",
"from langchain.embeddings import HuggingFaceInstructEmbeddings\n",
"from langchain_experimental.sql.vector_sql import VectorSQLOutputParser\n",
"\n",
"output_parser = VectorSQLOutputParser.from_embeddings(\n",
@@ -75,10 +75,10 @@
"outputs": [],
"source": [
"from langchain.callbacks import StdOutCallbackHandler\n",
"from langchain_community.utilities.sql_database import SQLDatabase\n",
"from langchain.llms import OpenAI\n",
"from langchain.utilities.sql_database import SQLDatabase\n",
"from langchain_experimental.sql.prompt import MYSCALE_PROMPT\n",
"from langchain_experimental.sql.vector_sql import VectorSQLDatabaseChain\n",
"from langchain_openai import OpenAI\n",
"\n",
"chain = VectorSQLDatabaseChain(\n",
" llm_chain=LLMChain(\n",
@@ -117,6 +117,7 @@
"outputs": [],
"source": [
"from langchain.chains.qa_with_sources.retrieval import RetrievalQAWithSourcesChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_experimental.retrievers.vector_sql_database import (\n",
" VectorSQLDatabaseChainRetriever,\n",
")\n",
@@ -125,7 +126,6 @@
" VectorSQLDatabaseChain,\n",
" VectorSQLRetrieveAllOutputParser,\n",
")\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"output_parser_retrieve_all = VectorSQLRetrieveAllOutputParser.from_embeddings(\n",
" output_parser.model\n",

View File

@@ -20,10 +20,10 @@
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain.document_loaders import TextLoader\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings"
"from langchain.vectorstores import Chroma"
]
},
{
@@ -52,8 +52,8 @@
"source": [
"from langchain.chains import create_qa_with_sources_chain\n",
"from langchain.chains.combine_documents.stuff import StuffDocumentsChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import PromptTemplate"
]
},
{

View File

@@ -28,8 +28,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.messages import HumanMessage, SystemMessage"
]
},
{
@@ -414,7 +414,7 @@
"BREAKING CHANGES:\n",
"- To use Azure embeddings with OpenAI V1, you'll need to use the new `AzureOpenAIEmbeddings` instead of the existing `OpenAIEmbeddings`. `OpenAIEmbeddings` continue to work when using Azure with `openai<1`.\n",
"```python\n",
"from langchain_openai import AzureOpenAIEmbeddings\n",
"from langchain.embeddings import AzureOpenAIEmbeddings\n",
"```\n",
"\n",
"\n",
@@ -456,8 +456,8 @@
"from typing import Literal\n",
"\n",
"from langchain.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.utils.openai_functions import convert_pydantic_to_openai_tool\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",

View File

@@ -47,12 +47,12 @@
"import inspect\n",
"\n",
"import tenacity\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.output_parsers import RegexParser\n",
"from langchain.schema import (\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
")"
]
},
{

View File

@@ -30,14 +30,15 @@
"outputs": [],
"source": [
"from langchain.chains import LLMMathChain\n",
"from langchain_community.utilities import DuckDuckGoSearchAPIWrapper\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI\n",
"from langchain.utilities import DuckDuckGoSearchAPIWrapper\n",
"from langchain_core.tools import Tool\n",
"from langchain_experimental.plan_and_execute import (\n",
" PlanAndExecute,\n",
" load_agent_executor,\n",
" load_chat_planner,\n",
")\n",
"from langchain_openai import ChatOpenAI, OpenAI"
")"
]
},
{

View File

@@ -81,8 +81,8 @@
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.retrievers import KayAiRetriever\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\n",
"retriever = KayAiRetriever.create(\n",

View File

@@ -17,8 +17,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.pal_chain import PALChain\n",
"from langchain_openai import OpenAI"
"from langchain.llms import OpenAI\n",
"from langchain_experimental.pal_chain import PALChain"
]
},
{

View File

@@ -27,7 +27,7 @@
],
"source": [
"from langchain.chains import create_citation_fuzzy_match_chain\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI"
]
},
{

View File

@@ -59,13 +59,11 @@
"from baidubce.auth.bce_credentials import BceCredentials\n",
"from baidubce.bce_client_configuration import BceClientConfiguration\n",
"from langchain.chains.retrieval_qa import RetrievalQA\n",
"from langchain.document_loaders.baiducloud_bos_directory import BaiduBOSDirectoryLoader\n",
"from langchain.embeddings.huggingface import HuggingFaceEmbeddings\n",
"from langchain.llms.baidu_qianfan_endpoint import QianfanLLMEndpoint\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders.baiducloud_bos_directory import (\n",
" BaiduBOSDirectoryLoader,\n",
")\n",
"from langchain_community.embeddings.huggingface import HuggingFaceEmbeddings\n",
"from langchain_community.llms.baidu_qianfan_endpoint import QianfanLLMEndpoint\n",
"from langchain_community.vectorstores import BESVectorStore"
"from langchain.vectorstores import BESVectorStore"
]
},
{

View File

@@ -30,8 +30,8 @@
"outputs": [],
"source": [
"import pinecone\n",
"from langchain_community.vectorstores import Pinecone\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.vectorstores import Pinecone\n",
"\n",
"pinecone.init(api_key=\"...\", environment=\"...\")"
]
@@ -86,8 +86,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.output_parsers import StrOutputParser"
]
},
{

View File

@@ -42,8 +42,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.sql_database import SQLDatabase\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"CONNECTION_STRING = \"postgresql+psycopg2://postgres:test@localhost:5432/vectordb\" # Replace with your own\n",
"db = SQLDatabase.from_uri(CONNECTION_STRING)"
@@ -88,7 +88,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"\n",
"embeddings_model = OpenAIEmbeddings()"
]
@@ -219,7 +219,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain.prompts import ChatPromptTemplate\n",
"\n",
"template = \"\"\"You are a Postgres expert. Given an input question, first create a syntactically correct Postgres query to run, then look at the results of the query and return the answer to the input question.\n",
"Unless the user specifies in the question a specific number of examples to obtain, query for at most 5 results using the LIMIT clause as per Postgres. You can order the results to return the most informative data in the database.\n",
@@ -267,9 +267,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"db = SQLDatabase.from_uri(\n",
" CONNECTION_STRING\n",

View File

@@ -31,11 +31,11 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import DuckDuckGoSearchAPIWrapper\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.utilities import DuckDuckGoSearchAPIWrapper\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI"
"from langchain_core.runnables import RunnablePassthrough"
]
},
{

View File

@@ -49,13 +49,14 @@
"from langchain.agents.conversational.prompt import FORMAT_INSTRUCTIONS\n",
"from langchain.chains import LLMChain, RetrievalQA\n",
"from langchain.chains.base import Chain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.llms import BaseLLM, OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.prompts.base import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.llms import BaseLLM\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import ChatOpenAI, OpenAI, OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"from pydantic import BaseModel, Field"
]
},

View File

@@ -17,10 +17,10 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompt_values import PromptValue\n",
"from langchain_openai import ChatOpenAI"
"from langchain_core.prompt_values import PromptValue"
]
},
{

View File

@@ -255,7 +255,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4\")\n",
"res = model.predict(\n",
@@ -1083,8 +1083,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import ElasticsearchStore\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.vectorstores import ElasticsearchStore\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]

View File

@@ -51,9 +51,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_experimental.smart_llm import SmartLLMChain\n",
"from langchain_openai import ChatOpenAI"
"from langchain_experimental.smart_llm import SmartLLMChain"
]
},
{

View File

@@ -9,8 +9,8 @@ To set it up, follow the instructions on https://database.guide/2-sample-databas
```python
from langchain_openai import OpenAI
from langchain_community.utilities import SQLDatabase
from langchain.llms import OpenAI
from langchain.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
```
@@ -200,8 +200,8 @@ result["intermediate_steps"]
How to add memory to a SQLDatabaseChain:
```python
from langchain_openai import OpenAI
from langchain_community.utilities import SQLDatabase
from langchain.llms import OpenAI
from langchain.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
```
@@ -647,7 +647,7 @@ Sometimes you may not have the luxury of using OpenAI or other service-hosted la
import logging
import torch
from transformers import AutoTokenizer, GPT2TokenizerFast, pipeline, AutoModelForSeq2SeqLM, AutoModelForCausalLM
from langchain_community.llms import HuggingFacePipeline
from langchain.llms import HuggingFacePipeline
# Note: This model requires a large GPU, e.g. an 80GB A100. See documentation for other ways to run private non-OpenAI models.
model_id = "google/flan-ul2"
@@ -679,7 +679,7 @@ local_llm = HuggingFacePipeline(pipeline=pipe)
```python
from langchain_community.utilities import SQLDatabase
from langchain.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
db = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db", include_tables=['Customer'])
@@ -994,9 +994,9 @@ Now that you have some examples (with manually corrected output SQL), you can do
```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.chains.sql_database.prompt import _sqlite_prompt, PROMPT_SUFFIX
from langchain_community.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.prompts.example_selector.semantic_similarity import SemanticSimilarityExampleSelector
from langchain_community.vectorstores import Chroma
from langchain.vectorstores import Chroma
example_prompt = PromptTemplate(
input_variables=["table_info", "input", "sql_cmd", "sql_result", "answer"],

View File

@@ -23,10 +23,10 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_openai import ChatOpenAI"
"from langchain_core.runnables import RunnableLambda"
]
},
{
@@ -129,7 +129,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import DuckDuckGoSearchAPIWrapper\n",
"from langchain.utilities import DuckDuckGoSearchAPIWrapper\n",
"\n",
"search = DuckDuckGoSearchAPIWrapper(max_results=4)\n",
"\n",

View File

@@ -1,156 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0fc0309d-4d49-4bb5-bec0-bd92c6fddb28",
"metadata": {},
"source": [
"## Together AI + RAG\n",
" \n",
"[Together AI](https://python.langchain.com/docs/integrations/llms/together) has a broad set of OSS LLMs via inference API.\n",
"\n",
"See [here](https://api.together.xyz/playground). We use `\"mistralai/Mixtral-8x7B-Instruct-v0.1` for RAG on the Mixtral paper.\n",
"\n",
"Download the paper:\n",
"https://arxiv.org/pdf/2401.04088.pdf"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d12fb75a-f707-48d5-82a5-efe2d041813c",
"metadata": {},
"outputs": [],
"source": [
"! pip install --quiet pypdf chromadb tiktoken openai langchain-together"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9ab49327-0532-4480-804c-d066c302a322",
"metadata": {},
"outputs": [],
"source": [
"# Load\n",
"from langchain_community.document_loaders import PyPDFLoader\n",
"\n",
"loader = PyPDFLoader(\"~/Desktop/mixtral.pdf\")\n",
"data = loader.load()\n",
"\n",
"# Split\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Add to vectorDB\n",
"from langchain_community.embeddings import OpenAIEmbeddings\n",
"from langchain_community.vectorstores import Chroma\n",
"\n",
"\"\"\"\n",
"from langchain_together.embeddings import TogetherEmbeddings\n",
"embeddings = TogetherEmbeddings(model=\"togethercomputer/m2-bert-80M-8k-retrieval\")\n",
"\"\"\"\n",
"vectorstore = Chroma.from_documents(\n",
" documents=all_splits,\n",
" collection_name=\"rag-chroma\",\n",
" embedding=OpenAIEmbeddings(),\n",
")\n",
"\n",
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "4efaddd9-3dbb-455c-ba54-0ad7f2d2ce0f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"\n",
"# RAG prompt\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"# LLM\n",
"from langchain_community.llms import Together\n",
"\n",
"llm = Together(\n",
" model=\"mistralai/Mixtral-8x7B-Instruct-v0.1\",\n",
" temperature=0.0,\n",
" max_tokens=2000,\n",
" top_k=1,\n",
")\n",
"\n",
"# RAG chain\n",
"chain = (\n",
" RunnableParallel({\"context\": retriever, \"question\": RunnablePassthrough()})\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "88b1ee51-1b0f-4ebf-bb32-e50e843f0eeb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\nAnswer: The architectural details of Mixtral are as follows:\\n- Dimension (dim): 4096\\n- Number of layers (n\\\\_layers): 32\\n- Dimension of each head (head\\\\_dim): 128\\n- Hidden dimension (hidden\\\\_dim): 14336\\n- Number of heads (n\\\\_heads): 32\\n- Number of kv heads (n\\\\_kv\\\\_heads): 8\\n- Context length (context\\\\_len): 32768\\n- Vocabulary size (vocab\\\\_size): 32000\\n- Number of experts (num\\\\_experts): 8\\n- Number of top k experts (top\\\\_k\\\\_experts): 2\\n\\nMixtral is based on a transformer architecture and uses the same modifications as described in [18], with the notable exceptions that Mixtral supports a fully dense context length of 32k tokens, and the feedforward block picks from a set of 8 distinct groups of parameters. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively. This technique increases the number of parameters of a model while controlling cost and latency, as the model only uses a fraction of the total set of parameters per token. Mixtral is pretrained with multilingual data using a context size of 32k tokens. It either matches or exceeds the performance of Llama 2 70B and GPT-3.5, over several benchmarks. In particular, Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and multilingual benchmarks.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"What are the Architectural details of Mixtral?\")"
]
},
{
"cell_type": "markdown",
"id": "755cf871-26b7-4e30-8b91-9ffd698470f4",
"metadata": {},
"source": [
"Trace: \n",
"\n",
"https://smith.langchain.com/public/935fd642-06a6-4b42-98e3-6074f93115cd/r"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -24,9 +24,9 @@
}
],
"source": [
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=1, max_tokens=512, model=\"gpt-3.5-turbo-instruct\")"
"llm = OpenAI(temperature=1, max_tokens=512, model=\"text-davinci-003\")"
]
},
{

View File

@@ -37,8 +37,8 @@
"import getpass\n",
"import os\n",
"\n",
"from langchain_community.vectorstores import DeepLake\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import DeepLake\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
"activeloop_token = getpass.getpass(\"Activeloop Token:\")\n",
@@ -110,7 +110,7 @@
"source": [
"import os\n",
"\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain.document_loaders import TextLoader\n",
"\n",
"root_dir = \"./the-algorithm\"\n",
"docs = []\n",
@@ -3809,7 +3809,7 @@
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model_name=\"gpt-3.5-turbo-0613\") # switch to 'gpt-4'\n",
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"

View File

@@ -24,13 +24,13 @@
"source": [
"from typing import Callable, List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
")"
]
},
{

View File

@@ -24,11 +24,11 @@
"source": [
"from typing import Callable, List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema import (\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
")"
]
},
{

View File

@@ -599,7 +599,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model_name=\"gpt-4\", temperature=0)"
]

View File

@@ -13,11 +13,11 @@ rsync -ruv --exclude node_modules --exclude api_reference --exclude .venv --excl
cd ../_dist
poetry run python scripts/model_feat_table.py
cp ../cookbook/README.md src/pages/cookbook.mdx
cp ../.github/CONTRIBUTING.md docs/contributing.md
mkdir -p docs/templates
cp ../templates/docs/INDEX.md docs/templates/index.md
poetry run python scripts/copy_templates.py
wget https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
yarn
poetry run quarto preview docs
quarto preview docs

View File

@@ -1,3 +1,49 @@
# LangChain Documentation
# Website
For more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/documentation)
This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
### Installation
```
$ yarn
```
### Local Development
```
$ yarn start
```
This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
### Build
```
$ yarn build
```
This command generates static content into the `build` directory and can be served using any static contents hosting service.
### Deployment
Using SSH:
```
$ USE_SSH=true yarn deploy
```
Not using SSH:
```
$ GIT_USER=<Your GitHub username> yarn deploy
```
If you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.
### Continuous Integration
Some common defaults for linting/formatting have been set for you. If you integrate your project with an open-source Continuous Integration system (e.g. Travis CI, CircleCI), you may check for issues using the following command.
```
$ yarn ci
```

View File

@@ -72,8 +72,8 @@ def setup(app):
# -- Project information -----------------------------------------------------
project = "🦜🔗 LangChain"
copyright = "2023, LangChain, Inc."
author = "LangChain, Inc."
copyright = "2023, Harrison Chase"
author = "Harrison Chase"
version = data["tool"]["poetry"]["version"]
release = version
@@ -141,20 +141,13 @@ redirects = {
for old_link in redirects:
html_additional_pages[old_link] = "redirects.html"
partners_dir = Path(__file__).parent.parent.parent / "libs/partners"
partners = [
(p.name, p.name.replace("-", "_") + "_api_reference")
for p in partners_dir.iterdir()
]
html_context = {
"display_github": True, # Integrate GitHub
"github_user": "langchain-ai", # Username
"github_user": "hwchase17", # Username
"github_repo": "langchain", # Repo name
"github_version": "master", # Version
"conf_py_path": "/docs/api_reference", # Path in the checkout to the docs root
"redirects": redirects,
"partners": partners,
}
# Add any paths that contain custom static files (such as style sheets) here,

View File

@@ -2,6 +2,7 @@
-e libs/langchain
-e libs/core
-e libs/community
-e libs/partners/google-genai
pydantic<2
autodoc_pydantic==1.8.0
myst_parser

View File

@@ -6,6 +6,11 @@
{%- set top_container_cls = "sk-landing-container" %}
{%- endif %}
{# title, link, link_attrs #}
{%- set drop_down_navigation = [
('Google Generative AI', pathto('google_genai_api_reference'), ''),]
-%}
<nav id="navbar" class="{{ nav_bar_class }} navbar navbar-expand-md navbar-light bg-light py-0">
<div class="container-fluid {{ top_container_cls }} px-0">
{%- if logo_url %}
@@ -43,16 +48,16 @@
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('experimental_api_reference') }}">Experimental</a>
</li>
{%- for title, pathname in partners %}
{%- for title, link, link_attrs in drop_down_navigation %}
<li class="nav-item">
<a class="sk-nav-link nav-link nav-more-item-mobile-items" href="{{ pathto(pathname) }}">{{ title }}</a>
<a class="sk-nav-link nav-link nav-more-item-mobile-items" href="{{ link }}" {{ link_attrs }}>{{ title }}</a>
</li>
{%- endfor %}
<li class="nav-item dropdown nav-more-item-dropdown">
<a class="sk-nav-link nav-link dropdown-toggle" href="#" id="navbarDropdown" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">Partner libs</a>
<div class="dropdown-menu" aria-labelledby="navbarDropdown">
{%- for title, pathname in partners %}
<a class="sk-nav-dropdown-item dropdown-item" href="{{ pathto(pathname) }}">{{ title }}</a>
{%- for title, link, link_attrs in drop_down_navigation %}
<a class="sk-nav-dropdown-item dropdown-item" href="{{ link }}" {{ link_attrs }}>{{ title}}</a>
{%- endfor %}
</div>
</li>

View File

@@ -32,7 +32,7 @@ There isn't any special setup for it.
See a [usage example](/docs/integrations/llms/INCLUDE_REAL_NAME).
```python
from langchain_community.llms import integration_class_REPLACE_ME
from langchain.llms import integration_class_REPLACE_ME
```
## Text Embedding Models
@@ -40,7 +40,7 @@ from langchain_community.llms import integration_class_REPLACE_ME
See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME)
```python
from langchain_community.embeddings import integration_class_REPLACE_ME
from langchain.embeddings import integration_class_REPLACE_ME
```
## Chat models
@@ -48,7 +48,7 @@ from langchain_community.embeddings import integration_class_REPLACE_ME
See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME)
```python
from langchain_community.chat_models import integration_class_REPLACE_ME
from langchain.chat_models import integration_class_REPLACE_ME
```
## Document Loader
@@ -56,5 +56,5 @@ from langchain_community.chat_models import integration_class_REPLACE_ME
See a [usage example](/docs/integrations/document_loaders/INCLUDE_REAL_NAME).
```python
from langchain_community.document_loaders import integration_class_REPLACE_ME
from langchain.document_loaders import integration_class_REPLACE_ME
```

View File

@@ -6,12 +6,7 @@ Below are links to tutorials and courses on LangChain. For written guides on com
---------------------
### [LangChain](https://en.wikipedia.org/wiki/LangChain) on Wikipedia
### Books
#### ⛓[Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffarth](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
### [LangChain on Wikipedia](https://en.wikipedia.org/wiki/LangChain)
### DeepLearning.AI courses
by [Harrison Chase](https://en.wikipedia.org/wiki/LangChain) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng)

View File

@@ -1,27 +0,0 @@
# langchain-core
## 0.1.7 (Jan 5, 2024)
#### Deleted
No deletions.
#### Deprecated
- `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
- `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
- `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
#### Fixed
- Restrict recursive URL scraping: [#15559](https://github.com/langchain-ai/langchain/pull/15559)
#### Added
No additions.
#### Beta
- Marked `langchain_core.load.load` and `langchain_core.load.loads` as beta.
- Marked `langchain_core.beta.runnables.context.ContextGet` and `langchain_core.beta.runnables.context.ContextSet` as beta.

View File

@@ -1,36 +0,0 @@
# langchain
## 0.1.0 (Jan 5, 2024)
#### Deleted
No deletions.
#### Deprecated
Deprecated classes and methods will be removed in 0.2.0
| Deprecated | Alternative | Reason |
|---------------------------------|-----------------------------------|------------------------------------------------|
| ChatVectorDBChain | ConversationalRetrievalChain | More general to all retrievers |
| create_ernie_fn_chain | create_ernie_fn_runnable | Use LCEL under the hood |
| created_structured_output_chain | create_structured_output_runnable | Use LCEL under the hood |
| NatBotChain | | Not used |
| create_openai_fn_chain | create_openai_fn_runnable | Use LCEL under the hood |
| create_structured_output_chain | create_structured_output_runnable | Use LCEL under the hood |
| load_query_constructor_chain | load_query_constructor_runnable | Use LCEL under the hood |
| VectorDBQA | RetrievalQA | More general to all retrievers |
| Sequential Chain | LCEL | Obviated by LCEL |
| SimpleSequentialChain | LCEL | Obviated by LCEL |
| TransformChain | LCEL/RunnableLambda | Obviated by LCEL |
| create_tagging_chain | create_structured_output_runnable | Use LCEL under the hood |
| ChatAgent | create_react_agent | Use LCEL builder over a class |
| ConversationalAgent | create_react_agent | Use LCEL builder over a class |
| ConversationalChatAgent | create_json_chat_agent | Use LCEL builder over a class |
| initialize_agent | Individual create agent methods | Individual create agent methods are more clear |
| ZeroShotAgent | create_react_agent | Use LCEL builder over a class |
| OpenAIFunctionsAgent | create_openai_functions_agent | Use LCEL builder over a class |
| OpenAIMultiFunctionsAgent | create_openai_tools_agent | Use LCEL builder over a class |
| SelfAskWithSearchAgent | create_self_ask_with_search | Use LCEL builder over a class |
| StructuredChatAgent | create_structured_chat_agent | Use LCEL builder over a class |
| XMLAgent | create_xml_agent | Use LCEL builder over a class |

docs/docs/community.md Normal file
View File

@@ -0,0 +1,53 @@
# Community navigator
Hi! Thanks for being here. We're lucky to have a community of so many passionate developers building with LangChain; we have so much to teach and learn from each other. Community members contribute code, host meetups, write blog posts, amplify each other's work, become each other's customers and collaborators, and so much more.
Whether you're new to LangChain, looking to go deeper, or just want to get more exposure to the world of building with LLMs, this page can point you in the right direction.
- **🦜 Contribute to LangChain**
- **🌍 Meetups, Events, and Hackathons**
- **📣 Help Us Amplify Your Work**
- **💬 Stay in the loop**
# 🦜 Contribute to LangChain
LangChain is the product of 5,000+ contributions by 1,500+ contributors, and there is **still** so much to do together. Here are some ways to get involved:
- **[Open a pull request](https://github.com/langchain-ai/langchain/issues):** We'd appreciate all forms of contributions: new features, infrastructure improvements, better documentation, bug fixes, etc. If you have an improvement or an idea, we'd love to work on it with you.
- **[Read our contributor guidelines:](https://github.com/langchain-ai/langchain/blob/bbd22b9b761389a5e40fc45b0570e1830aabb707/.github/CONTRIBUTING.md)** We ask contributors to follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow, run a few local checks for formatting, linting, and testing before submitting, and follow certain documentation and testing conventions.
- **First time contributor?** [Try one of these PRs with the “good first issue” tag](https://github.com/langchain-ai/langchain/contribute).
- **Become an expert:** Our experts help the community by answering product questions in Discord. If that's a role you'd like to play, we'd be so grateful! (And we have some special experts-only goodies/perks we can tell you more about). Send us an email to introduce yourself at hello@langchain.dev and we'll take it from there!
- **Integrate with LangChain:** If your product integrates with LangChain, or aspires to, we want to help make sure the experience is as smooth as possible for you and end users. Send us an email at hello@langchain.dev and tell us what you're working on.
- **Become an Integration Maintainer:** Partner with our team to ensure your integration stays up-to-date and talk directly with users (and answer their inquiries) in our Discord. Introduce yourself at hello@langchain.dev if you'd like to explore this role.
# 🌍 Meetups, Events, and Hackathons
One of our favorite things about working in AI is how much enthusiasm there is for building together. We want to help make that as easy and impactful for you as possible!
- **Find a meetup, hackathon, or webinar:** You can find the one for you on our [global events calendar](https://mirror-feeling-d80.notion.site/0bc81da76a184297b86ca8fc782ee9a3?v=0d80342540df465396546976a50cfb3f).
- **Submit an event to our calendar:** Email us at events@langchain.dev with a link to your event page! We can also help you spread the word with our local communities.
- **Host a meetup:** If you want to bring a group of builders together, we want to help! We can publicize your event on our event calendar/Twitter, share it with our local communities in Discord, send swag, or potentially hook you up with a sponsor. Email us at events@langchain.dev to tell us about your event!
- **Become a meetup sponsor:** We often hear from groups of builders that want to get together, but are blocked or limited on some dimension (space to host, budget for snacks, prizes to distribute, etc.). If you'd like to help, send us an email at events@langchain.dev and we can share more about how it works!
- **Speak at an event:** Meetup hosts are always looking for great speakers, presenters, and panelists. If you'd like to do that at an event, send us an email to hello@langchain.dev with more information about yourself, what you want to talk about, and what city you're based in, and we'll try to match you with an upcoming event!
- **Tell us about your LLM community:** If you host or participate in a community that would welcome support from LangChain and/or our team, send us an email at hello@langchain.dev and let us know how we can help.
# 📣 Help Us Amplify Your Work
If you're working on something you're proud of, and think the LangChain community would benefit from knowing about it, we want to help you show it off.
- **Post about your work and mention us:** We love hanging out on Twitter to see what people in the space are talking about and working on. If you tag [@langchainai](https://twitter.com/LangChainAI), we'll almost certainly see it and can show you some love.
- **Publish something on our blog:** If you're writing about your experience building with LangChain, we'd love to post (or cross-post) it on our blog! Email hello@langchain.dev with a draft of your post, or even an idea for something you want to write about.
- **Get your product onto our [integrations hub](https://integrations.langchain.com/):** Many developers take advantage of our seamless integrations with other products, and come to our integrations hub to find out who those are. If you want to get your product up there, tell us about it (and how it works with LangChain) at hello@langchain.dev.
# 💬 Stay in the loop
Here's where our team hangs out, talks shop, spotlights cool work, and shares what we're up to. We'd love to see you there too.
- **[Twitter](https://twitter.com/LangChainAI):** We post about what we're working on and what cool things we're seeing in the space. If you tag @langchainai in your post, we'll almost certainly see it, and can show you some love!
- **[Discord](https://discord.gg/6adMQxSpJS):** connect with over 30,000 developers who are building with LangChain.
- **[GitHub](https://github.com/langchain-ai/langchain):** Open pull requests, contribute to a discussion, and/or contribute code.
- **[Subscribe to our bi-weekly Release Notes](https://6w1pwbss0py.typeform.com/to/KjZB1auB):** a twice-monthly email roundup of the coolest things going on in our orbit.

View File

@@ -1,250 +0,0 @@
---
sidebar_position: 1
---
# Contribute Code
To contribute to this project, please follow the ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
Please do not try to push directly to this repo unless you are a maintainer.
Please follow the checked-in pull request template when opening pull requests. Note related issues and tag relevant
maintainers.
Pull requests cannot land without passing the formatting, linting, and testing checks first. See [Testing](#testing) and
[Formatting and Linting](#formatting-and-linting) for how to run these checks locally.
It's essential that we maintain great documentation and testing. If you:
- Fix a bug
- Add a relevant unit or integration test when possible. These live in `tests/unit_tests` and `tests/integration_tests`.
- Make an improvement
- Update any affected example notebooks and documentation. These live in `docs`.
- Update unit and integration tests when relevant.
- Add a feature
- Add a demo notebook in `docs/docs/`.
- Add unit and integration tests.
We are a small, progress-oriented team. If there's something you'd like to add or change, opening a pull request is the
best way to get our attention.
## 🚀 Quick Start
This quick start guide explains how to run the repository locally.
For a [development container](https://containers.dev/), see the [.devcontainer folder](https://github.com/langchain-ai/langchain/tree/master/.devcontainer).
### Dependency Management: Poetry and other env/dependency managers
This project utilizes [Poetry](https://python-poetry.org/) v1.6.1+ as a dependency manager.
❗Note: *Before installing Poetry*, if you use `Conda`, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`)
Install Poetry: **[documentation on how to install it](https://python-poetry.org/docs/#installation)**.
❗Note: If you use `Conda` or `Pyenv` as your environment/package manager, after installing Poetry,
tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
### Different packages
This repository contains multiple packages:
- `langchain-core`: Base interfaces for key abstractions as well as logic for combining them in chains (LangChain Expression Language).
- `langchain-community`: Third-party integrations of various components.
- `langchain`: Chains, agents, and retrieval logic that makes up the cognitive architecture of your applications.
- `langchain-experimental`: Components and chains that are experimental, either in the sense that the techniques are novel and still being tested, or they require giving the LLM more access than would be possible in most production systems.
- Partner integrations: Partner packages in `libs/partners` that are independently version controlled.
Each of these has its own development environment. Docs are run from the top-level makefile, but development
is split across separate test & release flows.
For this quickstart, start with langchain-community:
```bash
cd libs/community
```
### Local Development Dependencies
Install langchain-community development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
```bash
poetry install --with lint,typing,test,test_integration
```
Then verify dependency installation:
```bash
make test
```
If during installation you receive a `WheelFileValidationError` for `debugpy`, please make sure you are running
Poetry v1.6.1+. This bug was present in older versions of Poetry (e.g. 1.4.1) and has been resolved in newer releases.
If you are still seeing this bug on v1.6.1, you may also try disabling "modern installation"
(`poetry config installer.modern-installation false`) and re-installing requirements.
See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.
### Testing
_In `langchain`, `langchain-community`, and `langchain-experimental`, some test dependencies are optional; see the section on [optional dependencies](#working-with-optional-dependencies)._
Unit tests cover modular logic that does not require calls to outside APIs.
If you add new logic, please add a unit test.
To run unit tests:
```bash
make test
```
To run unit tests in Docker:
```bash
make docker_tests
```
There are also [integration tests and code-coverage](./testing) available.
### Developing only langchain_core or langchain_experimental
If you are only developing `langchain_core` or `langchain_experimental`, you can simply install the dependencies for the respective projects and run tests:
```bash
cd libs/core
poetry install --with test
make test
```
Or:
```bash
cd libs/experimental
poetry install --with test
make test
```
### Formatting and Linting
Run these locally before submitting a PR; the CI system will also run them.
#### Code Formatting
Formatting for this project is done via [ruff](https://docs.astral.sh/ruff/rules/).
To run formatting for docs, cookbook and templates:
```bash
make format
```
To run formatting for a library, run the same command from the relevant library directory:
```bash
cd libs/{LIBRARY}
make format
```
Additionally, you can run the formatter only on the files that have been modified in your current branch as compared to the master branch using the `format_diff` command:
```bash
make format_diff
```
This is especially useful when you have made changes to a subset of the project and want to ensure your changes are properly formatted without affecting the rest of the codebase.
#### Linting
Linting for this project is done via a combination of [ruff](https://docs.astral.sh/ruff/rules/) and [mypy](http://mypy-lang.org/).
To run linting for docs, cookbook and templates:
```bash
make lint
```
To run linting for a library, run the same command from the relevant library directory:
```bash
cd libs/{LIBRARY}
make lint
```
In addition, you can run the linter only on the files that have been modified in your current branch as compared to the master branch using the `lint_diff` command:
```bash
make lint_diff
```
This can be very helpful when you've made changes to only certain parts of the project and want to ensure your changes meet the linting standards without having to check the entire codebase.
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
#### Spellcheck
Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
Note that `codespell` only finds common typos, so it can produce both false positives (flagging correctly spelled but rarely used words) and false negatives (missing genuinely misspelled words).
To check spelling for this project:
```bash
make spell_check
```
To fix spelling in place:
```bash
make spell_fix
```
If codespell is incorrectly flagging a word, you can skip spellcheck for that word by adding it to the codespell config in the `pyproject.toml` file.
```toml
[tool.codespell]
...
# Add here:
ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogyny,unsecure'
```
## Working with Optional Dependencies
`langchain`, `langchain-community`, and `langchain-experimental` rely on optional dependencies to keep these packages lightweight.
`langchain-core` and partner packages **do not use** optional dependencies in this way.
You only need to add a new dependency if a **unit test** relies on the package.
If your package is only required for **integration tests**, then you can skip these
steps and leave all pyproject.toml and poetry.lock files alone.
If you're adding a new dependency to LangChain, assume that it will be an optional dependency, and
that most users won't have it installed.
Users who do not have the dependency installed should be able to **import** your code without
any side effects (no warnings, no errors, no exceptions).
To introduce the dependency to the pyproject.toml file correctly, please do the following:
1. Add the dependency to the main group as an optional dependency
```bash
poetry add --optional [package_name]
```
2. Open pyproject.toml and add the dependency to the `extended_testing` extra
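For example, with a hypothetical `parrot-link-sdk` package, the relevant `pyproject.toml` sections might end up looking like this sketch:
```toml
[tool.poetry.dependencies]
# added in step 1 as an optional dependency in the main group
parrot-link-sdk = {version = "^1.0.0", optional = true}

[tool.poetry.extras]
# step 2: list the package under the extended_testing extra
extended_testing = ["parrot-link-sdk"]
```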
3. Relock the poetry file to update the extra.
```bash
poetry lock --no-update
```
4. Add a unit test that at the very least attempts to import the new code. Ideally, the unit
test makes use of lightweight fixtures to test the logic of the code.
5. Please use the `@pytest.mark.requires(package_name)` decorator for any tests that require the dependency, as in the sketch below.
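For instance, a minimal test guarded by the hypothetical `parrot_link_sdk` package might look like this:
```python
import pytest


@pytest.mark.requires("parrot_link_sdk")
def test_sdk_import() -> None:
    # Runs only when the optional dependency is installed; otherwise the
    # custom `requires` marker causes the test to be skipped.
    import parrot_link_sdk  # noqa: F401
```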
## Adding a Jupyter Notebook
If you are adding a Jupyter Notebook example, you'll want to install the optional `dev` dependencies.
To install dev dependencies:
```bash
poetry install --with dev
```
Launch a notebook:
```bash
poetry run jupyter notebook
```
When you run `poetry install`, the `langchain` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook.
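For example, you can smoke-test in-progress changes from the notebook with one of the built-in fake models; edits under `libs/` are picked up on the next kernel restart without reinstalling:
```python
# Quick in-notebook check using a fake LLM (no API keys needed).
from langchain_community.llms.fake import FakeListLLM

llm = FakeListLLM(responses=["hello"])
print(llm.invoke("hi"))  # -> "hello"
```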

View File

@@ -1,67 +0,0 @@
---
sidebar_position: 3
---
# Contribute Documentation
The docs directory contains Documentation and API Reference.
Documentation is built using [Quarto](https://quarto.org) and [Docusaurus 2](https://docusaurus.io/).
The API Reference is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code and is hosted by [Read the Docs](https://readthedocs.org/).
For that reason, we ask that you add good documentation to all classes and methods.
Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
## Build Documentation Locally
### Install dependencies
- [Quarto](https://quarto.org) - package that converts Jupyter notebooks (`.ipynb` files) into mdx files for serving in Docusaurus.
- `poetry install` from the monorepo root
### Building
In the following commands, the prefix `api_` indicates that those are operations for the API Reference.
Before building the documentation, it is always a good idea to clean the build directory:
```bash
make docs_clean
make api_docs_clean
```
Next, you can build the documentation as outlined below:
```bash
make docs_build
make api_docs_build
```
Finally, run the link checker to ensure all links are valid:
```bash
make docs_linkcheck
make api_docs_linkcheck
```
### Linting and Formatting
The docs are linted from the monorepo root. To lint the docs, run the following from there:
```bash
poetry install --with lint,typing
make lint
```
If you have formatting-related errors, you can fix them automatically with:
```bash
make format
```
## Verify Documentation changes
After pushing documentation changes to the repository, you can preview and verify that the changes are
what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page.
This will take you to a preview of the documentation changes.
This preview is created by [Vercel](https://vercel.com/docs/getting-started-with-vercel).

View File

@@ -1,26 +0,0 @@
---
sidebar_position: 6
sidebar_label: FAQ
---
# Frequently Asked Questions
## Pull Requests (PRs)
### How do I allow maintainers to edit my PR?
When you submit a pull request, there may be additional changes
necessary before merging it. Oftentimes, it is more efficient for the
maintainers to make these changes themselves before merging, rather than asking you
to do so in code review.
By default, most pull requests will have a
`✅ Maintainers are allowed to edit this pull request.`
badge in the right-hand sidebar.
If you do not see this badge, you may have this setting off for the fork you are
pull-requesting from. See [this GitHub docs page](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
for more information.
Notably, GitHub doesn't allow this setting to be enabled for forks in **organizations** ([issue](https://github.com/orgs/community/discussions/5634)).
If you are working in an organization, we recommend submitting your PR from a personal
fork in order to enable this setting.

View File

@@ -1,47 +0,0 @@
---
sidebar_position: 0
---
# Welcome Contributors
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
## 🗺️ Guidelines
### 👩‍💻 Ways to contribute
There are many ways to contribute to LangChain. Here are some common ways people contribute:
- [**Documentation**](./documentation.mdx): Help improve our docs, including this one!
- [**Code**](./code.mdx): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](integrations.mdx): Help us integrate with your favorite vendors and tools.
### 🚩GitHub Issues
Our [issues](https://github.com/langchain-ai/langchain/issues) page is kept up to date with bugs, improvements, and feature requests.
There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help organize issues.
If you start working on an issue, please assign it to yourself.
If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature.
If two issues are related, or blocking, please link them rather than combining them.
We will try to keep these issues as up-to-date as possible, though
with the rapid rate of development in this field some may get out of date.
If you notice this happening, please let us know.
### 🙋Getting Help
Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting setup, please
contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is
smooth for future contributors.
In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase.
If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
we do not want these to get in the way of getting good code into the codebase.
# 🌟 Recognition
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.

View File

@@ -1,145 +0,0 @@
---
sidebar_position: 5
---
# Contribute Integrations
To begin, make sure you have all the dependencies outlined in the guide on [Contributing Code](./code).
There are a few different places you can contribute integrations for LangChain:
- **Community**: For lighter-weight integrations that are primarily maintained by LangChain and the Open Source Community.
- **Partner Packages**: For independent packages that are co-maintained by LangChain and a partner.
For the most part, new integrations should be added to the Community package. Partner packages require more maintenance as separate packages, so please confirm with the LangChain team before creating a new partner package.
In the following sections, we'll walk through how to contribute to each of these packages from a fake company, `Parrot Link AI`.
## Community Package
The `langchain-community` package is in `libs/community` and contains most integrations.
It is installed by users with `pip install langchain-community`, and exported members can be imported with code like
```python
from langchain_community.chat_models import ChatParrotLink
from langchain_community.llms import ParrotLinkLLM
from langchain_community.vectorstores import ParrotLinkVectorStore
```
The community package relies on manually-installed dependent packages, so you will see errors if you try to import a package that is not installed. In our fake example, if you try to use `ParrotLinkLLM` without installing `parrot-link-sdk`, you will see an `ImportError` telling you to install it.
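One common way community integrations achieve this is to import the SDK lazily and raise a helpful error only at the point of use. A minimal sketch of that pattern, using our fake SDK name:

```python
def _import_parrot_link_sdk():
    """Import the (fake) vendor SDK, with a helpful error if it is missing."""
    try:
        import parrot_link_sdk
    except ImportError as e:
        raise ImportError(
            "Could not import parrot-link-sdk. "
            "Please install it with `pip install parrot-link-sdk`."
        ) from e
    return parrot_link_sdk
```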
Let's say we wanted to implement a chat model for Parrot Link AI. We would create a new file in `libs/community/langchain_community/chat_models/parrot_link.py` with the following code:
```python
from langchain_core.language_models.chat_models import BaseChatModel


class ChatParrotLink(BaseChatModel):
    """ChatParrotLink chat model.

    Example:
        .. code-block:: python

            from langchain_community.chat_models import ChatParrotLink

            model = ChatParrotLink()
    """

    ...
```
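The `...` above elides the actual model logic. As a rough sketch (check `BaseChatModel` itself for the authoritative interface), the two members you must implement are the `_llm_type` property and `_generate`; a self-contained echo-only version could look like this:

```python
from typing import Any, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatResult


class ChatParrotLink(BaseChatModel):
    """ChatParrotLink chat model."""

    @property
    def _llm_type(self) -> str:
        # Identifier used for logging and serialization purposes.
        return "chat-parrot-link"

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # A real integration would call parrot-link-sdk here; parroting the
        # last message back keeps this sketch self-contained.
        message = AIMessage(content=messages[-1].content)
        return ChatResult(generations=[ChatGeneration(message=message)])
```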
And we would write tests in:
- Unit tests: `libs/community/tests/unit_tests/chat_models/test_parrot_link.py`
- Integration tests: `libs/community/tests/integration_tests/chat_models/test_parrot_link.py`
And add documentation to:
- `docs/docs/integrations/chat/parrot_link.ipynb`
## Partner Packages
Partner packages are in `libs/partners/*` and are installed by users with `pip install langchain-{partner}`, and exported members can be imported with code like
```python
from langchain_{partner} import X
```
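For our fake Parrot Link AI company, for example, the package would be installed with `pip install langchain-parrot-link` and used as `from langchain_parrot_link import ChatParrotLink`.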
### Set up a new package
To set up a new partner package, use the latest version of the LangChain CLI. You can install or update it with:
```bash
pip install -U langchain-cli
```
Let's say you want to create a new partner package working for a company called Parrot Link AI.
Then, run the following command to create a new partner package:
```bash
cd libs/partners
langchain-cli integration new
> Name: parrot-link
> Name of integration in PascalCase [ParrotLink]: ParrotLink
```
This will create a new package in `libs/partners/parrot-link` with the following structure:
```
libs/partners/parrot-link/
  langchain_parrot_link/ # folder containing your package
    ...
  tests/
    ...
  docs/ # bootstrapped docs notebooks, must be moved to /docs in monorepo root
    ...
  scripts/ # scripts for CI
    ...
  LICENSE
  README.md # fill out with information about your package
  Makefile # default commands for CI
  pyproject.toml # package metadata, mostly managed by Poetry
  poetry.lock # package lockfile, managed by Poetry
  .gitignore
```
### Implement your package
First, add any dependencies your package needs, such as your company's SDK:
```bash
poetry add parrot-link-sdk
```
If you need separate dependencies for type checking, you can add them to the `typing` group with:
```bash
poetry add --group typing types-parrot-link-sdk
```
Then, implement your package in `libs/partners/parrot-link/langchain_parrot_link`.
By default, this will include stubs for a Chat Model, an LLM, and/or a Vector Store. You should delete any of the files you won't use and remove them from `__init__.py`.
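For example, if you keep only the chat model, your package's `__init__.py` might be trimmed to something like this (module names assume the bootstrapped layout, so adjust as needed):

```python
from langchain_parrot_link.chat_models import ChatParrotLink

__all__ = ["ChatParrotLink"]
```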
### Write Unit and Integration Tests
Some basic tests are generated in the `tests/` directory. You should add more tests to cover your package's functionality.
For information on running and implementing tests, see the [Testing guide](./testing).
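As a starting point, a unit test for the echo sketch above might simply confirm the model can be constructed and invoked without any network access:

```python
from langchain_core.messages import HumanMessage

from langchain_parrot_link import ChatParrotLink


def test_chat_parrot_link_parrots_input() -> None:
    # No external service is contacted: the sketch simply echoes the input.
    model = ChatParrotLink()
    result = model.invoke([HumanMessage(content="hello")])
    assert result.content == "hello"
```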
### Write documentation
Documentation is generated from Jupyter notebooks in the `docs/` directory. You should move the generated notebooks to the relevant `docs/docs/integrations` directory in the monorepo root.
### Additional steps
Contributor steps:
- [ ] Add secret names to manual integrations workflow in `.github/workflows/_integration_test.yml`
- [ ] Add secrets to release workflow (for pre-release testing) in `.github/workflows/_release.yml`
Maintainer steps (Contributors should **not** do these):
- [ ] set up PyPI and Test PyPI projects
- [ ] add credential secrets to GitHub Actions
- [ ] add package to conda-forge

View File

@@ -1,147 +0,0 @@
---
sidebar_position: 2
---
# Testing
All of our packages have unit tests and integration tests, and we favor unit tests over integration tests.
Unit tests run on every pull request, so they should be fast and reliable.
Integration tests run once a day, and they require more setup, so they should be reserved for confirming interface points with external services.
## Unit Tests
Unit tests cover modular logic that does not require calls to outside APIs.
If you add new logic, please add a unit test.
To install dependencies for unit tests:
```bash
poetry install --with test
```
To run unit tests:
```bash
make test
```
To run unit tests in Docker:
```bash
make docker_tests
```
To run a specific test:
```bash
TEST_FILE=tests/unit_tests/test_imports.py make test
```
## Integration Tests
Integration tests cover logic that requires making calls to outside APIs (often integration with other services).
If you add support for a new external API, please add a new integration test.
**Warning:** Almost no tests should be integration tests.
Tests that require making network connections make it difficult for other
developers to test the code.
Instead, favor the `responses` library and/or `unittest.mock.patch` to mock
requests using small fixtures.
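For instance, here is a minimal sketch with `responses` (both the helper function and the endpoint URL are made up for illustration):

```python
import requests
import responses


def list_parrot_models() -> list:
    """Illustrative helper that would normally call an external API."""
    return requests.get("https://api.parrotlink.example/v1/models").json()["models"]


@responses.activate
def test_list_parrot_models() -> None:
    # Register a small fixture so the test never opens a real connection.
    responses.add(
        responses.GET,
        "https://api.parrotlink.example/v1/models",
        json={"models": ["parrot-1"]},
        status=200,
    )
    assert list_parrot_models() == ["parrot-1"]
```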
To install dependencies for integration tests:
```bash
poetry install --with test,test_integration
```
To run integration tests:
```bash
make integration_tests
```
### Prepare
The integration tests use several search engines and databases. The tests
aim to verify the correct behavior of the engines and databases according to
their specifications and requirements.
To run some integration tests, such as tests located in
`tests/integration_tests/vectorstores/`, you will need to install the following
software:
- Docker
- Python 3.8.1 or later
Any new dependencies should be added by running:
```bash
# add package and install it after adding:
poetry add tiktoken@latest --group "test_integration" && poetry install --with test_integration
```
Before running any tests, you should start a specific Docker container that provides the
necessary dependencies. For instance, we use the `elasticsearch.yml` Docker Compose file
for `test_elasticsearch.py`:
```bash
cd tests/integration_tests/vectorstores/docker-compose
docker-compose -f elasticsearch.yml up
```
For environments that require more involved preparation, look for `*.sh` scripts. For instance,
`opensearch.sh` builds the required Docker image and then launches OpenSearch.
### Prepare environment variables for local testing:
- copy `tests/integration_tests/.env.example` to `tests/integration_tests/.env`
- set the variables in the `tests/integration_tests/.env` file, e.g. `OPENAI_API_KEY`
Additionally, it's important to note that some integration tests may require certain
environment variables to be set, such as `OPENAI_API_KEY`. Be sure to set any required
environment variables before running the tests to ensure they run correctly.
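One way to keep such tests from erroring on machines where a variable is unset is to skip them explicitly. A sketch (not necessarily the convention used throughout this repo):

```python
import os

import pytest


@pytest.mark.skipif("OPENAI_API_KEY" not in os.environ, reason="OPENAI_API_KEY not set")
def test_openai_integration() -> None:
    ...  # body of a test that calls the OpenAI API
```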
### Recording HTTP interactions with pytest-vcr
Some of the integration tests in this repository involve making HTTP requests to
external services. To prevent these requests from being made every time the tests are
run, we use pytest-vcr to record and replay HTTP interactions.
When running tests in a CI/CD pipeline, you may not want to modify the existing
cassettes. You can use the `--vcr-record=none` command-line option to disable recording
new cassettes. Here's an example:
```bash
pytest --log-cli-level=10 tests/integration_tests/vectorstores/test_pinecone.py --vcr-record=none
pytest tests/integration_tests/vectorstores/test_elasticsearch.py --vcr-record=none
```
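Marking a test for cassette recording and replay is done with the `vcr` marker that pytest-vcr provides; a minimal sketch:

```python
import pytest
import requests


@pytest.mark.vcr()
def test_fetch_example_page() -> None:
    # The first run records a cassette next to the test;
    # subsequent runs replay it instead of hitting the network.
    response = requests.get("https://example.com")
    assert response.status_code == 200
```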
### Run some tests with coverage:
```bash
pytest tests/integration_tests/vectorstores/test_elasticsearch.py --cov=langchain --cov-report=html
start "" htmlcov/index.html || open htmlcov/index.html
```
## Coverage
Code coverage (i.e. the amount of code that is covered by unit tests) helps identify areas of the code that are potentially more or less brittle.
Coverage requires the dependencies for integration tests:
```bash
poetry install --with test_integration
```
To get a report of current coverage, run the following:
```bash
make coverage
```

View File

@@ -12,20 +12,18 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 1,
"id": "af4381de",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"from langchain.agents import AgentExecutor, tool\n",
"from langchain.agents.output_parsers import XMLAgentOutputParser\n",
"from langchain_community.chat_models import ChatAnthropic"
"from langchain.agents import AgentExecutor, XMLAgent, tool\n",
"from langchain.chat_models import ChatAnthropic"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "24cc8134",
"metadata": {},
"outputs": [],
@@ -35,7 +33,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "67c0b0e4",
"metadata": {},
"outputs": [],
@@ -48,7 +46,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"id": "7203b101",
"metadata": {},
"outputs": [],
@@ -58,18 +56,18 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"id": "b68e756d",
"metadata": {},
"outputs": [],
"source": [
"# Get the prompt to use - you can modify this!\n",
"prompt = hub.pull(\"hwchase17/xml-agent-convo\")"
"# Get prompt to use\n",
"prompt = XMLAgent.get_default_prompt()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 6,
"id": "61ab3e9a",
"metadata": {},
"outputs": [],
@@ -109,27 +107,27 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 7,
"id": "e92f1d6f",
"metadata": {},
"outputs": [],
"source": [
"agent = (\n",
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"agent_scratchpad\": lambda x: convert_intermediate_steps(\n",
" \"question\": lambda x: x[\"question\"],\n",
" \"intermediate_steps\": lambda x: convert_intermediate_steps(\n",
" x[\"intermediate_steps\"]\n",
" ),\n",
" }\n",
" | prompt.partial(tools=convert_tools(tool_list))\n",
" | model.bind(stop=[\"</tool_input>\", \"</final_answer>\"])\n",
" | XMLAgentOutputParser()\n",
" | XMLAgent.get_default_output_parser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 8,
"id": "6ce6ec7a",
"metadata": {},
"outputs": [],
@@ -139,7 +137,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 9,
"id": "fb5cb2e3",
"metadata": {},
"outputs": [
@@ -150,8 +148,10 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m <tool>search</tool><tool_input>weather in New York\u001b[0m\u001b[36;1m\u001b[1;3m32 degrees\u001b[0m\u001b[32;1m\u001b[1;3m <tool>search</tool>\n",
"<tool_input>weather in New York\u001b[0m\u001b[36;1m\u001b[1;3m32 degrees\u001b[0m\u001b[32;1m\u001b[1;3m <final_answer>The weather in New York is 32 degrees\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m <tool>search</tool>\n",
"<tool_input>weather in new york\u001b[0m\u001b[36;1m\u001b[1;3m32 degrees\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"\n",
"<final_answer>The weather in New York is 32 degrees\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -159,17 +159,17 @@
{
"data": {
"text/plain": [
"{'input': 'whats the weather in New york?',\n",
"{'question': 'whats the weather in New york?',\n",
" 'output': 'The weather in New York is 32 degrees'}"
]
},
"execution_count": 14,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke({\"input\": \"whats the weather in New york?\"})"
"agent_executor.invoke({\"question\": \"whats the weather in New york?\"})"
]
},
{

View File

@@ -10,16 +10,6 @@
"Example of how to use LCEL to write Python code."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0653c7c7",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-core langchain-experimental langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
@@ -27,12 +17,12 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import (\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import (\n",
" ChatPromptTemplate,\n",
")\n",
"from langchain_experimental.utilities import PythonREPL\n",
"from langchain_openai import ChatOpenAI"
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_experimental.utilities import PythonREPL"
]
},
{

View File

@@ -12,16 +12,6 @@
"One especially useful technique is to use embeddings to route a query to the most relevant prompt. Here's a very simple example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b793a0aa",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-core langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
@@ -29,11 +19,12 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.utils.math import cosine_similarity\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"\n",
"physics_template = \"\"\"You are a very smart physics professor. \\\n",
"You are great at answering questions about physics in a concise and easy to understand manner. \\\n",

View File

@@ -10,16 +10,6 @@
"This shows how to add memory to an arbitrary chain. Right now, you can use the memory classes but need to hook it up manually"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18753dee",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
@@ -29,10 +19,10 @@
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI()\n",
"prompt = ChatPromptTemplate.from_messages(\n",

Some files were not shown because too many files have changed in this diff.