Compare commits

122 commits: eugene/doc ... erick/skip

| SHA1 |
|---|
| 948e3eaf53 |
| 9545f0666d |
| 7c3066f9ec |
| 3596be5210 |
| d63d4994c0 |
| 2ebd167dba |
| 344cab0739 |
| fc886cc303 |
| f5bf3bdf14 |
| c0e6045c0b |
| 927824b7cb |
| 2f6fe6ddf3 |
| 58f5a4d30a |
| be854225c7 |
| 4b7a85887e |
| 5a920e14c0 |
| 1c67db4c18 |
| 8006919e52 |
| c3f94f4c12 |
| 5f60439221 |
| 2ff30b50f2 |
| 280ecfd8eb |
| a591cdb67d |
| 9b4974871d |
| 39852dffd2 |
| 50a5c919f0 |
| b46f88d364 |
| ff19a62afc |
| 2e42ed5de6 |
| 1e43025bf5 |
| 9169d77cf6 |
| 32c493e3df |
| f22f273f93 |
| 971d2b2e34 |
| 3ad78e48e2 |
| 18acc22f29 |
| 46af56dc4f |
| 2aa13f1e10 |
| 4da2faba41 |
| 700293cae9 |
| cc55d2fcee |
| 545b76b0fd |
| 9024593468 |
| f55f67055f |
| f70aa82c84 |
| 0f31cd8b49 |
| e1c020dfe1 |
| 96b56a4d4f |
| 64e11592bb |
| 339973db47 |
| e89e830c55 |
| c40973814d |
| 8f81703d76 |
| ea6dd3a550 |
| a837b03e55 |
| 7f1d26160d |
| 8d6faf5665 |
| 7f1964b264 |
| 937d7c41f3 |
| 9c7afa8adb |
| 180657ca7a |
| 1a1a1a883f |
| 8fdf15c023 |
| 72ad448daa |
| 8fa960641a |
| e165daa0ae |
| 93ae589f1b |
| 0dc4ab0be1 |
| bf8cf7e042 |
| d266b3ea4a |
| 52f34de9b7 |
| b0e8cbe0b3 |
| 869df62736 |
| 8313c218da |
| a26105de8e |
| 24386e0860 |
| d2e50b3108 |
| 55912868da |
| 362a446999 |
| b2b94424db |
| dd7959f4ac |
| 86b93b5810 |
| fbf7047468 |
| 0a2b1c7471 |
| 850336bcf1 |
| cf271784fa |
| ee3ceb0fb8 |
| defd4b4f11 |
| d9e493e96c |
| e76ff63125 |
| fceae456b9 |
| c63eb9d797 |
| 28cc60b347 |
| 555ce600ef |
| ff43cd6701 |
| 8ad3b255dc |
| eb51150557 |
| b298f550fe |
| 84e65533e9 |
| 1311450646 |
| 8b2a82b5ce |
| 58da6e0d47 |
| 150d58304d |
| f04cc4b7e1 |
| b346d4a455 |
| 5f38770161 |
| c52725bdc5 |
| 0fc8fd12bd |
| 3dbaaf59b2 |
| 76283e9625 |
| 18601bd4c8 |
| 72e12f6bcf |
| 1703f132c6 |
| 9fdfac22c2 |
| 1f85ec34d5 |
| 9f077270c8 |
| f15f8e01cf |
| 37561d8986 |
| 06c503f672 |
| 8030dc90be |
| 366be2936a |
| e5b078d5f7 |
@@ -17,13 +17,16 @@ For more info, check out the [GitHub documentation](https://docs.github.com/en/f

## VS Code Dev Containers

[Open in Dev Containers](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)

Note: If you click this link you will open the main repo and not your local cloned repo, you can use this link and replace with your username and cloned repo name:

Note: If you click the link above you will open the main repo (langchain-ai/langchain) and not your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:

```
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>
```

Then you will have a local cloned repo where you can contribute and then create pull requests.

If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.

You can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:

Alternatively you can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:

1. If this is your first time using a development container, please ensure your system meets the pre-reqs (i.e. have Docker installed) in the [getting started steps](https://aka.ms/vscode-remote/containers/getting-started).
**.github/workflows/_release.yml** (vendored, 94 lines changed)

```yaml
@@ -14,7 +14,7 @@ env:
jobs:
build:
if: github.ref == 'refs/heads/master'
# if: github.ref == 'refs/heads/master'
runs-on: ubuntu-latest
outputs:

@@ -70,59 +70,59 @@ jobs:
working-directory: ${{ inputs.working-directory }}
secrets: inherit

pre-release-checks:
needs:
- build
- test-pypi-publish
runs-on: ubuntu-latest
steps:
# We explicitly *don't* set up caching here. This ensures our tests are
# maximally sensitive to catching breakage.
#
# For example, here's a way that caching can cause a falsely-passing test:
# - Make the langchain package manifest no longer list a dependency package
# as a requirement. This means it won't be installed by `pip install`,
# and attempting to use it would cause a crash.
# - That dependency used to be required, so it may have been cached.
# When restoring the venv packages from cache, that dependency gets included.
# - Tests pass, because the dependency is present even though it wasn't specified.
# - The package is published, and it breaks on the missing dependency when
# used in the real world.
- uses: actions/setup-python@v4
with:
python-version: ${{ env.PYTHON_VERSION }}
# pre-release-checks:
# needs:
# - build
# - test-pypi-publish
# runs-on: ubuntu-latest
# steps:
# # We explicitly *don't* set up caching here. This ensures our tests are
# # maximally sensitive to catching breakage.
# #
# # For example, here's a way that caching can cause a falsely-passing test:
# # - Make the langchain package manifest no longer list a dependency package
# # as a requirement. This means it won't be installed by `pip install`,
# # and attempting to use it would cause a crash.
# # - That dependency used to be required, so it may have been cached.
# # When restoring the venv packages from cache, that dependency gets included.
# # - Tests pass, because the dependency is present even though it wasn't specified.
# # - The package is published, and it breaks on the missing dependency when
# # used in the real world.
# - uses: actions/setup-python@v4
# with:
# python-version: ${{ env.PYTHON_VERSION }}

- name: Test published package
shell: bash
env:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
# Here we specify:
# - The test PyPI index as the *primary* index, meaning that it takes priority.
# - The regular PyPI index as an extra index, so that any dependencies that
# are not found on test PyPI can be resolved and installed anyway.
#
# Without the former, we might install the wrong langchain release.
# Without the latter, we might not be able to install langchain's dependencies.
#
# TODO: add more in-depth pre-publish tests after testing that importing works
run: |
pip install \
--index-url https://test.pypi.org/simple/ \
--extra-index-url https://pypi.org/simple/ \
"$PKG_NAME==$VERSION"
# - name: Test published package
# shell: bash
# env:
# PKG_NAME: ${{ needs.build.outputs.pkg-name }}
# VERSION: ${{ needs.build.outputs.version }}
# # Here we specify:
# # - The test PyPI index as the *primary* index, meaning that it takes priority.
# # - The regular PyPI index as an extra index, so that any dependencies that
# # are not found on test PyPI can be resolved and installed anyway.
# #
# # Without the former, we might install the wrong langchain release.
# # Without the latter, we might not be able to install langchain's dependencies.
# #
# # TODO: add more in-depth pre-publish tests after testing that importing works
# run: |
# pip install \
# --index-url https://test.pypi.org/simple/ \
# --extra-index-url https://pypi.org/simple/ \
# "$PKG_NAME==$VERSION"

# Replace all dashes in the package name with underscores,
# since that's how Python imports packages with dashes in the name.
IMPORT_NAME="$(echo "$PKG_NAME" | sed s/-/_/g)"
# # Replace all dashes in the package name with underscores,
# # since that's how Python imports packages with dashes in the name.
# IMPORT_NAME="$(echo "$PKG_NAME" | sed s/-/_/g)"

python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
# python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
```
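For reference, a minimal Python sketch of the same import smoke test the workflow runs in shell; the package name below is a hypothetical example, not taken from the workflow:

```python
# Minimal sketch of the workflow's import check, done in Python instead of
# shell. Dashes in a PyPI distribution name become underscores in the
# importable module name.
import importlib

pkg_name = "langchain-experimental"  # hypothetical example package
import_name = pkg_name.replace("-", "_")

module = importlib.import_module(import_name)  # raises if the install is broken
print(dir(module))  # same output as the workflow's `python -c` line
```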
```yaml
publish:
needs:
- build
- test-pypi-publish
- pre-release-checks
# - pre-release-checks
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:

@@ -163,7 +163,7 @@ jobs:
needs:
- build
- test-pypi-publish
- pre-release-checks
# - pre-release-checks
- publish
runs-on: ubuntu-latest
permissions:
```
**.github/workflows/_test_release.yml** (vendored, 2 lines changed)

```yaml
@@ -14,7 +14,7 @@ env:
jobs:
build:
if: github.ref == 'refs/heads/master'
# if: github.ref == 'refs/heads/master'
runs-on: ubuntu-latest
outputs:
```
```python
@@ -3,6 +3,8 @@ import toml
pyproject_toml = toml.load("pyproject.toml")

# Extract the ignore words list (adjust the key as per your TOML structure)
ignore_words_list = pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
ignore_words_list = (
    pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
)

print(f"::set-output name=ignore_words_list::{ignore_words_list}")
```
**.gitignore** (vendored, 1 line changed)

```text
@@ -178,3 +178,4 @@ docs/docs/build
docs/docs/node_modules
docs/docs/yarn.lock
_dist
docs/docs/templates
```
**MIGRATE.md** (21 lines changed)

@@ -1,9 +1,18 @@

# Migrating to `langchain_experimental`

# Migrating

## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28/23

In an effort to make `langchain` leaner and safer, we are moving select chains to `langchain_experimental`.
This migration has already started, but we are remaining backwards compatible until 7/28.
On that date, we will remove functionality from `langchain`.
Read more about the motivation and the progress [here](https://github.com/langchain-ai/langchain/discussions/8043).

### Migrating to `langchain_experimental`

We are moving any experimental components of LangChain, or components with vulnerability issues, into `langchain_experimental`.
This guide covers how to migrate.

## Installation

### Installation

Previously:

@@ -13,7 +22,7 @@ Now (only if you want to access things in experimental):

`pip install -U langchain langchain_experimental`

## Things in `langchain.experimental`

### Things in `langchain.experimental`

Previously:

@@ -23,7 +32,7 @@ Now:

`from langchain_experimental import ...`

## PALChain

### PALChain

Previously:

@@ -33,7 +42,7 @@ Now:

`from langchain_experimental.pal_chain import PALChain`

## SQLDatabaseChain

### SQLDatabaseChain

Previously:

@@ -47,7 +56,7 @@ Alternatively, if you are just interested in using the query generation part of

`from langchain.chains import create_sql_query_chain`
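A rough usage sketch of that import (the SQLite file and the environment key are illustrative assumptions, not part of the original guide):

```python
# Sketch of query generation only: the chain returns a SQL string and does
# not execute anything. Assumes a local SQLite file at db.sqlite and an
# OPENAI_API_KEY in the environment.
from langchain.chains import create_sql_query_chain
from langchain.chat_models import ChatOpenAI
from langchain.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///db.sqlite")
chain = create_sql_query_chain(ChatOpenAI(temperature=0), db)

print(chain.invoke({"question": "How many employees are there?"}))
```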
## `load_prompt` for Python files

### `load_prompt` for Python files

Note: this only applies if you want to load Python files as prompts.
If you want to load json/yaml files, no change is needed.
**Makefile** (4 lines changed)

```make
@@ -43,10 +43,10 @@ spell_fix:
lint:
	poetry run ruff docs templates cookbook
	poetry run black docs templates cookbook --diff
	poetry run ruff format docs templates cookbook --diff

format format_diff:
	poetry run black docs templates cookbook
	poetry run ruff format docs templates cookbook
	poetry run ruff --select I --fix docs templates cookbook

######################
```
**README.md** (96 lines changed)

@@ -15,71 +15,72 @@

[Dependency Status](https://libraries.io/github/langchain-ai/langchain)
[Open Issues](https://github.com/langchain-ai/langchain/issues)

Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to get off the waitlist or speak with our sales team

## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28/23

In an effort to make `langchain` leaner and safer, we are moving select chains to `langchain_experimental`.
This migration has already started, but we are remaining backwards compatible until 7/28.
On that date, we will remove functionality from `langchain`.
Read more about the motivation and the progress [here](https://github.com/langchain-ai/langchain/discussions/8043).
Read how to migrate your code [here](MIGRATE.md).
Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to get off the waitlist or speak with our sales team.

## Quick Install

`pip install langchain`
or
`pip install langsmith && conda install langchain -c conda-forge`

With pip:
```bash
pip install langchain
```

## 🤔 What is this?

With conda:
```bash
pip install langsmith && conda install langchain -c conda-forge
```

Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. However, using these LLMs in isolation is often insufficient for creating a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.

## 🤔 What is LangChain?

This library aims to assist in the development of those types of applications. Common examples of these applications include:

**LangChain** is a framework for developing applications powered by language models. It enables applications that:
- **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
- **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)

**❓ Question Answering over specific documents**

This framework consists of several parts.
- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- **[LangChain Templates](templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.

**This repo contains the `langchain` ([here](libs/langchain)), `langchain-experimental` ([here](libs/experimental)), and `langchain-cli` ([here](libs/cli)) Python packages, as well as [LangChain Templates](templates).**
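A minimal sketch of composing such components into a chain (an illustrative example, assuming `langchain` is installed and `OPENAI_API_KEY` is set in the environment):

```python
# Minimal sketch: a prompt, a chat model, and an output parser composed
# with the pipe operator into one runnable chain.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
chain = prompt | ChatOpenAI() | StrOutputParser()  # prompt -> model -> parser

print(chain.invoke({"topic": "bears"}))
```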
## 🧱 What can you build with LangChain?

**❓ Retrieval augmented generation**

- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- End-to-end Example: [Question Answering over Notion Database](https://github.com/hwchase17/notion-qa)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)

**💬 Chatbots**

**💬 Analyzing structured data**

- [Documentation](https://python.langchain.com/docs/use_cases/chatbots/)
- End-to-end Example: [Chat-LangChain](https://github.com/langchain-ai/chat-langchain)
- [Documentation](https://python.langchain.com/docs/use_cases/qa_structured/sql)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain/tree/master/templates/sql-llama2)

**🤖 Agents**

**🤖 Chatbots**

- [Documentation](https://python.langchain.com/docs/modules/agents/)
- End-to-end Example: [GPT+WolframAlpha](https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain)
- [Documentation](https://python.langchain.com/docs/use_cases/chatbots)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)

## 📖 Documentation

And much more! Head to the [Use cases](https://python.langchain.com/docs/use_cases/) section of the docs for more.

Please see [here](https://python.langchain.com) for full documentation on:

## 🚀 How does LangChain help?

The main value props of the LangChain libraries are:
1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks

- Getting started (installation, setting up the environment, simple examples)
- How-To examples (demos, integrations, helper functions)
- Reference (full API docs)
- Resources (high-level explanation of core concepts)

Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.

## 🚀 What can this help with?

Components fall into the following **modules**:

There are six main areas that LangChain is designed to help with.
These are, in increasing order of complexity:

**📃 LLMs and Prompts:**

**📃 Model I/O:**

This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.

**🔗 Chains:**

Chains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.

**📚 Data Augmented Generation:**

**📚 Retrieval:**

Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.

@@ -87,15 +88,16 @@ Data Augmented Generation involves specific types of chains that first interact

Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.

**🧠 Memory:**

## 📖 Documentation

Memory refers to persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.

Please see [here](https://python.langchain.com) for full documentation, which includes:

**🧐 Evaluation:**

- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
- Overview of the [interfaces](https://python.langchain.com/docs/expression_language/), [modules](https://python.langchain.com/docs/modules/) and [integrations](https://python.langchain.com/docs/integrations/providers)
- [Use case](https://python.langchain.com/docs/use_cases/qa_structured/sql) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/adapters/openai)
- [LangSmith](https://python.langchain.com/docs/langsmith/), [LangServe](https://python.langchain.com/docs/langserve), and [LangChain Template](https://python.langchain.com/docs/templates/) overviews
- [Reference](https://api.python.langchain.com): full API docs

[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is by using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.

For more information on these concepts, please see our [full documentation](https://python.langchain.com).

## 💁 Contributing
```
@@ -67,7 +67,6 @@
"llama2_code = ChatOllama(model=\"codellama:7b-instruct\")\n",
"\n",
"# API\n",
"from getpass import getpass\n",
"from langchain.llms import Replicate\n",
"\n",
"# REPLICATE_API_TOKEN = getpass()\n",
```
```
@@ -12,6 +12,8 @@
"source": [
"## Multi-modal RAG\n",
"\n",
"[See Trace of Option 3](https://smith.langchain.com/public/db0441a8-2c17-4070-bdf7-45d4fdf8f517/r/80cb0f89-1766-4caf-8959-fc43ec4b071c)\n",
"\n",
"Many documents contain a mixture of content types, including text and images. \n",
"\n",
"Yet, information captured in images is lost in most RAG applications.\n",
@@ -106,6 +108,7 @@
"outputs": [],
"source": [
"from unstructured.partition.pdf import partition_pdf\n",
"\n",
"# Extract images, tables, and chunk text\n",
"raw_pdf_elements = partition_pdf(\n",
"    filename=path + \"wildfire_stats.pdf\",\n",
@@ -187,8 +190,8 @@
"outputs": [],
"source": [
"# Apply to text\n",
"# Typically this is recommended only if you have large text chunks \n",
"text_summaries = texts # Skip it\n",
"# Typically this is recommended only if you have large text chunks\n",
"text_summaries = texts  # Skip it\n",
"\n",
"# Apply to tables\n",
"table_summaries = summarize_chain.batch(tables, {\"max_concurrency\": 5})"
@@ -202,7 +205,13 @@
"source": [
"### Image summaries \n",
"\n",
"We will use [GPT4-V](https://openai.com/research/gpt-4v-system-card) to produce the image summaries."
"We will use [GPT4-V](https://openai.com/research/gpt-4v-system-card) to produce the image summaries.\n",
"\n",
"See the traces for each of the 5 ingested images here ([1](https://smith.langchain.com/public/f5548212-2e70-4fa8-91d6-c3e7d768d52b/r), \n",
"[2](https://smith.langchain.com/public/8b198178-5b83-4960-bbc1-c10516779208/r), \n",
"[3](https://smith.langchain.com/public/c4fcbcd5-38fb-462a-9ed1-e90b1d009fa9/r), \n",
"[4](https://smith.langchain.com/public/1df53c23-63b8-4f87-b5ae-e9d59b2a54ab/r), \n",
"[5](https://smith.langchain.com/public/f93efd6c-f9f6-46c9-b169-29270d33ad63/r))"
]
},
{
@@ -212,34 +221,34 @@
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"import io\n",
"import os\n",
"import base64\n",
"\n",
"import numpy as np\n",
"from PIL import Image\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema.messages import HumanMessage, SystemMessage\n",
"from PIL import Image\n",
"\n",
"\n",
"def encode_image(image_path):\n",
"    ''' Getting the base64 string '''\n",
"    \"\"\"Getting the base64 string\"\"\"\n",
"    with open(image_path, \"rb\") as image_file:\n",
"        return base64.b64encode(image_file.read()).decode('utf-8')\n",
"        return base64.b64encode(image_file.read()).decode(\"utf-8\")\n",
"\n",
"\n",
"def image_summarize(img_base64, prompt):\n",
"    \"\"\"Image summary\"\"\"\n",
"    chat = ChatOpenAI(model=\"gpt-4-vision-preview\", max_tokens=1024)\n",
"\n",
"def image_summarize(img_base64,prompt):\n",
"    ''' Image summary '''\n",
"    chat = ChatOpenAI(model=\"gpt-4-vision-preview\",\n",
"                      max_tokens=1024)\n",
"    \n",
"    msg = chat.invoke(\n",
"        [\n",
"            HumanMessage(\n",
"                content=[\n",
"                    {\"type\": \"text\", \"text\":prompt},\n",
"                    {\"type\": \"text\", \"text\": prompt},\n",
"                    {\n",
"                        \"type\": \"image_url\",\n",
"                        \"image_url\": {\n",
"                            \"url\": f\"data:image/jpeg;base64,{img_base64}\"\n",
"                        },\n",
"                        \"image_url\": {\"url\": f\"data:image/jpeg;base64,{img_base64}\"},\n",
"                    },\n",
"                ]\n",
"            )\n",
@@ -247,6 +256,7 @@
"    )\n",
"    return msg.content\n",
"\n",
"\n",
"# Store base64 encoded images\n",
"img_base64_list = []\n",
"\n",
@@ -254,15 +264,15 @@
"image_summaries = []\n",
"\n",
"# Prompt\n",
"prompt = \"Describe the image in detail. Be specific about graphs, such as bar plots.\" \n",
"prompt = \"Describe the image in detail. Be specific about graphs, such as bar plots.\"\n",
"\n",
"# Read images, encode to base64 strings\n",
"for img_file in sorted(os.listdir(path)):\n",
"    if img_file.endswith('.jpg'):\n",
"    if img_file.endswith(\".jpg\"):\n",
"        img_path = os.path.join(path, img_file)\n",
"        base64_image = encode_image(img_path)\n",
"        img_base64_list.append(base64_image)\n",
"        image_summaries.append(image_summarize(base64_image,prompt))"
"        image_summaries.append(image_summarize(base64_image, prompt))"
]
},
{
@@ -285,16 +295,17 @@
}
],
"source": [
"from IPython.display import display, HTML\n",
"from IPython.display import HTML, display\n",
"\n",
"\n",
"def plt_img_base64(img_base64):\n",
"\n",
"    # Create an HTML img tag with the base64 string as the source\n",
"    image_html = f'<img src=\"data:image/jpeg;base64,{img_base64}\" />'\n",
"    \n",
"\n",
"    # Display the image by rendering the HTML\n",
"    display(HTML(image_html))\n",
"\n",
"\n",
"plt_img_base64(img_base64_list[1])"
]
},
@@ -337,15 +348,17 @@
"outputs": [],
"source": [
"import uuid\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.schema.document import Document\n",
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.schema.document import Document\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(collection_name=\"multi_modal_rag\", \n",
"                     embedding_function=OpenAIEmbeddings())\n",
"vectorstore = Chroma(\n",
"    collection_name=\"multi_modal_rag\", embedding_function=OpenAIEmbeddings()\n",
")\n",
"\n",
"# The storage layer for the parent documents\n",
"store = InMemoryStore()\n",
@@ -470,8 +483,10 @@
],
"source": [
"from base64 import b64decode\n",
"\n",
"\n",
"def split_image_text_types(docs):\n",
"    ''' Split base64-encoded images and texts '''\n",
"    \"\"\"Split base64-encoded images and texts\"\"\"\n",
"    b64 = []\n",
"    text = []\n",
"    for doc in docs:\n",
@@ -480,10 +495,9 @@
"            b64.append(doc)\n",
"        except Exception as e:\n",
"            text.append(doc)\n",
"    return {\n",
"        \"images\": b64,\n",
"        \"texts\": text\n",
"    }\n",
"    return {\"images\": b64, \"texts\": text}\n",
"\n",
"\n",
"docs_by_type = split_image_text_types(docs)\n",
"plt_img_base64(docs_by_type[\"images\"][0])"
]
@@ -512,29 +526,43 @@
"outputs": [],
"source": [
"from operator import itemgetter\n",
"from langchain.schema.runnable import RunnablePassthrough, RunnableLambda\n",
"\n",
"from langchain.schema.runnable import RunnableLambda, RunnablePassthrough\n",
"\n",
"\n",
"def prompt_func(dict):\n",
"    format_texts = \"\\n\".join(dict[\"context\"][\"texts\"])\n",
"    return [\n",
"        HumanMessage(\n",
"            content=[\n",
"                {\"type\": \"text\", \"text\": f\"\"\"Answer the question based only on the following context, which can include text, tables, and the below image:\n",
"                {\n",
"                    \"type\": \"text\",\n",
"                    \"text\": f\"\"\"Answer the question based only on the following context, which can include text, tables, and the below image:\n",
"Question: {dict[\"question\"]}\n",
"\n",
"Text and tables:\n",
"{format_texts}\n",
"\"\"\"},\n",
"                {\"type\": \"image_url\", \"image_url\": {\"url\": f\"data:image/jpeg;base64,{dict['context']['images'][0]}\"}},\n",
"\"\"\",\n",
"                },\n",
"                {\n",
"                    \"type\": \"image_url\",\n",
"                    \"image_url\": {\n",
"                        \"url\": f\"data:image/jpeg;base64,{dict['context']['images'][0]}\"\n",
"                    },\n",
"                },\n",
"            ]\n",
"        )\n",
"    ]\n",
"\n",
"\n",
"model = ChatOpenAI(temperature=0, model=\"gpt-4-vision-preview\", max_tokens=1024)\n",
"\n",
"# RAG pipeline\n",
"chain = (\n",
"    {\"context\": retriever | RunnableLambda(split_image_text_types), \"question\": RunnablePassthrough()}\n",
"    {\n",
"        \"context\": retriever | RunnableLambda(split_image_text_types),\n",
"        \"question\": RunnablePassthrough(),\n",
"    }\n",
"    | RunnableLambda(prompt_func)\n",
"    | model\n",
"    | StrOutputParser()\n",
@@ -566,9 +594,7 @@
}
],
"source": [
"chain.invoke(\n",
"    \"What is the change in wild fires from 1993 to 2022?\"\n",
")"
"chain.invoke(\"What is the change in wild fires from 1993 to 2022?\")"
]
},
{
@@ -576,7 +602,7 @@
"id": "dea241f1-bd11-45cb-bb33-c4e2e8286855",
"metadata": {},
"source": [
"Here is the [trace](https://smith.langchain.com/public/43d955ff-212f-4043-8d63-986b0e4e4eed/r). "
"Here is the [trace](https://smith.langchain.com/public/db0441a8-2c17-4070-bdf7-45d4fdf8f517/r/80cb0f89-1766-4caf-8959-fc43ec4b071c). "
]
}
],
```
@@ -20,6 +20,7 @@ Notebook | Description

[databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to databricks runtimes and databricks sql.
[deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using activeloop's deep lake with gpt4.
[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl API.
[extraction_openai_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/extraction_openai_tools.ipynb) | Structured Data Extraction with OpenAI Tools
[forward_looking_retrieval_augm...](https://github.com/langchain-ai/langchain/tree/master/cookbook/forward_looking_retrieval_augmented_generation.ipynb) | Implement the forward-looking active retrieval augmented generation (flare) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on these tokens, and retrieves relevant documents to continue generating the answer.
[generative_agents_interactive_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb) | Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a langchain retriever.
[gymnasium_agent_simulation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/gymnasium_agent_simulation.ipynb) | Create a simple agent-environment interaction loop in simulated environments like text-based games with gymnasium.
```
@@ -102,9 +102,9 @@
"metadata": {},
"outputs": [],
"source": [
"from lxml import html\n",
"from typing import Any\n",
"\n",
"from pydantic import BaseModel\n",
"from typing import Any, Optional\n",
"from unstructured.partition.pdf import partition_pdf\n",
"\n",
"# Get elements\n",
@@ -317,11 +317,12 @@
"outputs": [],
"source": [
"import uuid\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.schema.document import Document\n",
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.schema.document import Document\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(collection_name=\"summaries\", embedding_function=OpenAIEmbeddings())\n",
@@ -373,7 +374,6 @@
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"# Prompt template\n",
```
```
@@ -92,9 +92,9 @@
"metadata": {},
"outputs": [],
"source": [
"from lxml import html\n",
"from typing import Any\n",
"\n",
"from pydantic import BaseModel\n",
"from typing import Any, Optional\n",
"from unstructured.partition.pdf import partition_pdf\n",
"\n",
"# Get elements\n",
@@ -224,7 +224,7 @@
"outputs": [],
"source": [
"# Prompt\n",
"prompt_text = \"\"\"You are an assistant tasked with summarizing tables and text. \\ \n",
"prompt_text = \"\"\"You are an assistant tasked with summarizing tables and text. \\\n",
"Give a concise summary of the table or text. Table or text chunk: {element} \"\"\"\n",
"prompt = ChatPromptTemplate.from_template(prompt_text)\n",
"\n",
@@ -313,7 +313,7 @@
"  # Execute the command and save the output to the defined output file\n",
"  /Users/rlm/Desktop/Code/llama.cpp/bin/llava -m ../models/llava-7b/ggml-model-q5_k.gguf --mmproj ../models/llava-7b/mmproj-model-f16.gguf --temp 0.1 -p \"Describe the image in detail. Be specific about graphs, such as bar plots.\" --image \"$img\" > \"$output_file\"\n",
"\n",
"done"
"done\n"
]
},
{
@@ -337,7 +337,8 @@
"metadata": {},
"outputs": [],
"source": [
"import os, glob\n",
"import glob\n",
"import os\n",
"\n",
"# Get all .txt file summaries\n",
"file_paths = glob.glob(os.path.expanduser(os.path.join(path, \"*.txt\")))\n",
@@ -371,11 +372,12 @@
"outputs": [],
"source": [
"import uuid\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.schema.document import Document\n",
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.schema.document import Document\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(collection_name=\"summaries\", embedding_function=OpenAIEmbeddings())\n",
@@ -644,7 +646,6 @@
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"# Prompt template\n",

@@ -82,10 +82,9 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from lxml import html\n",
"from typing import Any\n",
"\n",
"from pydantic import BaseModel\n",
"from typing import Any, Optional\n",
"from unstructured.partition.pdf import partition_pdf\n",
"\n",
"# Path to save images\n",
@@ -223,7 +222,7 @@
"outputs": [],
"source": [
"# Prompt\n",
"prompt_text = \"\"\"You are an assistant tasked with summarizing tables and text. \\ \n",
"prompt_text = \"\"\"You are an assistant tasked with summarizing tables and text. \\\n",
"Give a concise summary of the table or text. Table or text chunk: {element} \"\"\"\n",
"prompt = ChatPromptTemplate.from_template(prompt_text)\n",
"\n",
@@ -312,7 +311,7 @@
"  # Execute the command and save the output to the defined output file\n",
"  /Users/rlm/Desktop/Code/llama.cpp/bin/llava -m ../models/llava-7b/ggml-model-q5_k.gguf --mmproj ../models/llava-7b/mmproj-model-f16.gguf --temp 0.1 -p \"Describe the image in detail. Be specific about graphs, such as bar plots.\" --image \"$img\" > \"$output_file\"\n",
"\n",
"done"
"done\n"
]
},
{
@@ -322,7 +321,8 @@
"metadata": {},
"outputs": [],
"source": [
"import os, glob\n",
"import glob\n",
"import os\n",
"\n",
"# Get all .txt files in the directory\n",
"file_paths = glob.glob(os.path.expanduser(os.path.join(path, \"*.txt\")))\n",
@@ -375,11 +375,12 @@
],
"source": [
"import uuid\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.schema.document import Document\n",
"\n",
"from langchain.embeddings import GPT4AllEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.schema.document import Document\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
@@ -531,7 +532,6 @@
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"# Prompt template\n",
```
**cookbook/advanced_rag_eval.ipynb** (new file, 833 lines)

```
@@ -27,10 +27,10 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.utilities import SerpAPIWrapper\n",
"from langchain.agents import Tool\n",
"from langchain.tools.file_management.write import WriteFileTool\n",
"from langchain.tools.file_management.read import ReadFileTool\n",
"from langchain.tools.file_management.write import WriteFileTool\n",
"from langchain.utilities import SerpAPIWrapper\n",
"\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
@@ -61,9 +61,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.vectorstores import FAISS\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain.embeddings import OpenAIEmbeddings"
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.vectorstores import FAISS"
]
},
{
@@ -100,8 +100,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"from langchain.chat_models import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_experimental.autonomous_agents import AutoGPT"
]
},
{

@@ -34,16 +34,15 @@
"outputs": [],
"source": [
"# General\n",
"import os\n",
"import pandas as pd\n",
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"from langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent\n",
"from langchain.docstore.document import Document\n",
"import asyncio\n",
"import nest_asyncio\n",
"import os\n",
"\n",
"import nest_asyncio\n",
"import pandas as pd\n",
"from langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.docstore.document import Document\n",
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"\n",
"# Needed since jupyter runs an async eventloop\n",
"nest_asyncio.apply()"
@@ -92,6 +91,7 @@
"import os\n",
"from contextlib import contextmanager\n",
"from typing import Optional\n",
"\n",
"from langchain.agents import tool\n",
"from langchain.tools.file_management.read import ReadFileTool\n",
"from langchain.tools.file_management.write import WriteFileTool\n",
@@ -223,14 +223,13 @@
},
"outputs": [],
"source": [
"from langchain.tools import BaseTool, DuckDuckGoSearchRun\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"from pydantic import Field\n",
"from langchain.chains.qa_with_sources.loading import (\n",
"    load_qa_with_sources_chain,\n",
"    BaseCombineDocumentsChain,\n",
"    load_qa_with_sources_chain,\n",
")\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain.tools import BaseTool, DuckDuckGoSearchRun\n",
"from pydantic import Field\n",
"\n",
"\n",
"def _get_text_splitter():\n",
@@ -311,10 +310,9 @@
"source": [
"# Memory\n",
"import faiss\n",
"from langchain.vectorstores import FAISS\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.tools.human.tool import HumanInputRun\n",
"from langchain.vectorstores import FAISS\n",
"\n",
"embeddings_model = OpenAIEmbeddings()\n",
"embedding_size = 1536\n",
```
```
@@ -29,16 +29,10 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from collections import deque\n",
"from typing import Dict, List, Optional, Any\n",
"from typing import Optional\n",
"\n",
"from langchain.chains import LLMChain\nfrom langchain.llms import OpenAI\nfrom langchain.prompts import PromptTemplate\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.llms import BaseLLM\n",
"from langchain.schema.vectorstore import VectorStore\n",
"from pydantic import BaseModel, Field\n",
"from langchain.chains.base import Chain\n",
"from langchain.llms import OpenAI\n",
"from langchain_experimental.autonomous_agents import BabyAGI"
]
},
@@ -59,8 +53,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.vectorstores import FAISS\n",
"from langchain.docstore import InMemoryDocstore"
"from langchain.docstore import InMemoryDocstore\n",
"from langchain.vectorstores import FAISS"
]
},
{

@@ -25,16 +25,12 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from collections import deque\n",
"from typing import Dict, List, Optional, Any\n",
"from typing import Optional\n",
"\n",
"from langchain.chains import LLMChain\nfrom langchain.llms import OpenAI\nfrom langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.llms import BaseLLM\n",
"from langchain.schema.vectorstore import VectorStore\n",
"from pydantic import BaseModel, Field\n",
"from langchain.chains.base import Chain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_experimental.autonomous_agents import BabyAGI"
]
},
@@ -66,8 +62,8 @@
"source": [
"%pip install faiss-cpu > /dev/null\n",
"%pip install google-search-results > /dev/null\n",
"from langchain.vectorstores import FAISS\n",
"from langchain.docstore import InMemoryDocstore"
"from langchain.docstore import InMemoryDocstore\n",
"from langchain.vectorstores import FAISS"
]
},
{
@@ -110,8 +106,10 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import ZeroShotAgent, Tool, AgentExecutor\n",
"from langchain.llms import OpenAI\nfrom langchain.utilities import SerpAPIWrapper\nfrom langchain.chains import LLMChain\n",
"from langchain.agents import AgentExecutor, Tool, ZeroShotAgent\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.utilities import SerpAPIWrapper\n",
"\n",
"todo_prompt = PromptTemplate.from_template(\n",
"    \"You are a planner who is an expert at coming up with a todo list for a given objective. Come up with a todo list for this objective: {objective}\"\n",
```
```
@@ -35,16 +35,17 @@
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import (\n",
"    SystemMessagePromptTemplate,\n",
"    HumanMessagePromptTemplate,\n",
"    SystemMessagePromptTemplate,\n",
")\n",
"from langchain.schema import (\n",
"    AIMessage,\n",
"    BaseMessage,\n",
"    HumanMessage,\n",
"    SystemMessage,\n",
"    BaseMessage,\n",
")"
]
},
```
```
@@ -47,10 +47,9 @@
"outputs": [],
"source": [
"from IPython.display import SVG\n",
"\n",
"from langchain.llms import OpenAI\n",
"from langchain_experimental.cpal.base import CPALChain\n",
"from langchain_experimental.pal_chain import PALChain\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0, max_tokens=512)\n",
"cpal_chain = CPALChain.from_univariate_prompt(llm=llm, verbose=True)\n",
```
```
@@ -177,7 +177,7 @@
"    try:\n",
"        loader = TextLoader(os.path.join(dirpath, file), encoding=\"utf-8\")\n",
"        docs.extend(loader.load_and_split())\n",
"    except Exception as e:\n",
"    except Exception:\n",
"        pass\n",
"print(f\"{len(docs)}\")"
]
@@ -717,7 +717,6 @@
"source": [
"from langchain.vectorstores import DeepLake\n",
"\n",
"\n",
"username = \"<USERNAME_OR_ORG>\"\n",
"\n",
"\n",
@@ -834,8 +833,8 @@
},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(\n",
"    model_name=\"gpt-3.5-turbo-0613\"\n",
```
```
@@ -32,19 +32,20 @@
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"from typing import Union\n",
"\n",
"from langchain.agents import (\n",
"    Tool,\n",
"    AgentExecutor,\n",
"    LLMSingleActionAgent,\n",
"    AgentOutputParser,\n",
"    LLMSingleActionAgent,\n",
")\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.llms import OpenAI\nfrom langchain.utilities import SerpAPIWrapper\nfrom langchain.chains import LLMChain\n",
"from typing import List, Union\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.agents.agent_toolkits import NLAToolkit\n",
"from langchain.tools.plugin import AIPlugin\n",
"import re"
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.tools.plugin import AIPlugin"
]
},
{
@@ -113,9 +114,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.vectorstores import FAISS\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema import Document"
"from langchain.schema import Document\n",
"from langchain.vectorstores import FAISS"
]
},
{

@@ -56,20 +56,21 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import (\n",
"    Tool,\n",
"    AgentExecutor,\n",
"    LLMSingleActionAgent,\n",
"    AgentOutputParser,\n",
")\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.llms import OpenAI\nfrom langchain.utilities import SerpAPIWrapper\nfrom langchain.chains import LLMChain\n",
"from typing import List, Union\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.agents.agent_toolkits import NLAToolkit\n",
"from langchain.tools.plugin import AIPlugin\n",
"import re\n",
"import plugnplai"
"from typing import Union\n",
"\n",
"import plugnplai\n",
"from langchain.agents import (\n",
"    AgentExecutor,\n",
"    AgentOutputParser,\n",
"    LLMSingleActionAgent,\n",
")\n",
"from langchain.agents.agent_toolkits import NLAToolkit\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.tools.plugin import AIPlugin"
]
},
{
@@ -137,9 +138,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.vectorstores import FAISS\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema import Document"
"from langchain.schema import Document\n",
"from langchain.vectorstores import FAISS"
]
},
{
```
```
@@ -48,18 +48,17 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"from langchain.document_loaders import PyPDFLoader, TextLoader\n",
"import os\n",
"\n",
"from langchain.chains import RetrievalQA\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.llms import OpenAI\n",
"from langchain.text_splitter import (\n",
"    RecursiveCharacterTextSplitter,\n",
"    CharacterTextSplitter,\n",
"    RecursiveCharacterTextSplitter,\n",
")\n",
"from langchain.vectorstores import DeepLake\n",
"from langchain.chains import ConversationalRetrievalChain, RetrievalQA\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
"activeloop_token = getpass.getpass(\"Activeloop Token:\")\n",
```
```
@@ -38,9 +38,8 @@
"outputs": [],
"source": [
"from elasticsearch import Elasticsearch\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain"
"from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain\n",
"from langchain.chat_models import ChatOpenAI"
]
},
{
@@ -112,7 +111,6 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.elasticsearch_database.prompts import DEFAULT_DSL_TEMPLATE\n",
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"PROMPT_TEMPLATE = \"\"\"Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n",
```
214
cookbook/extraction_openai_tools.ipynb
Normal file
@@ -0,0 +1,214 @@
|
||||
{
"cells": [
{
"cell_type": "markdown",
"id": "2def22ea",
"metadata": {},
"source": [
"# Extraction with OpenAI Tools\n",
"\n",
"Performing extraction has never been easier! OpenAI's tool calling is a natural fit for extraction, since it allows multiple elements of different types to be extracted from the same text. \n",
"\n",
"Models from the 1106 releases onward use tools and support \"parallel function calling\", which makes this straightforward."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5c628496",
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"from langchain.chains.openai_tools import create_extraction_chain_pydantic\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.pydantic_v1 import BaseModel"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "afe9657b",
"metadata": {},
"outputs": [],
"source": [
"# Make sure to use a recent model that supports tools\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "bc0ca3b6",
"metadata": {},
"outputs": [],
"source": [
"# Pydantic is an easy way to define a schema\n",
"class Person(BaseModel):\n",
"    \"\"\"Information about people to extract.\"\"\"\n",
"\n",
"    name: str\n",
"    age: Optional[int] = None"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "2036af68",
"metadata": {},
"outputs": [],
"source": [
"chain = create_extraction_chain_pydantic(Person, model)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "1748ad21",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Person(name='jane', age=2), Person(name='bob', age=3)]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"input\": \"jane is 2 and bob is 3\"})"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "c8262ce5",
"metadata": {},
"outputs": [],
"source": [
"# Let's define another element\n",
"class Class(BaseModel):\n",
"    \"\"\"Information about classes to extract.\"\"\"\n",
"\n",
"    teacher: str\n",
"    students: List[str]"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "4973c104",
"metadata": {},
"outputs": [],
"source": [
"chain = create_extraction_chain_pydantic([Person, Class], model)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "e976a15e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Person(name='jane', age=2),\n",
" Person(name='bob', age=3),\n",
" Class(teacher='Mrs Sampson', students=['jane', 'bob'])]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"input\": \"jane is 2 and bob is 3 and they are in Mrs Sampson's class\"})"
]
},
{
"cell_type": "markdown",
"id": "6575a7d6",
"metadata": {},
"source": [
"## Under the hood\n",
"\n",
"Under the hood, this is a simple chain:"
]
},
{
"cell_type": "markdown",
"id": "b8ba83e5",
"metadata": {},
"source": [
"```python\n",
"from typing import Union, List, Type, Optional\n",
"\n",
"from langchain.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain.utils.openai_functions import convert_pydantic_to_openai_tool\n",
"from langchain.schema.runnable import Runnable\n",
"from langchain.pydantic_v1 import BaseModel\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.messages import SystemMessage\n",
"from langchain.schema.language_model import BaseLanguageModel\n",
"\n",
"_EXTRACTION_TEMPLATE = \"\"\"Extract and save the relevant entities mentioned \\\n",
"in the following passage together with their properties.\n",
"\n",
"If a property is not present and is not required in the function parameters, do not include it in the output.\"\"\" # noqa: E501\n",
"\n",
"\n",
"def create_extraction_chain_pydantic(\n",
"    pydantic_schemas: Union[List[Type[BaseModel]], Type[BaseModel]],\n",
"    llm: BaseLanguageModel,\n",
"    system_message: str = _EXTRACTION_TEMPLATE,\n",
") -> Runnable:\n",
"    if not isinstance(pydantic_schemas, list):\n",
"        pydantic_schemas = [pydantic_schemas]\n",
"    prompt = ChatPromptTemplate.from_messages([\n",
"        (\"system\", system_message),\n",
"        (\"user\", \"{input}\")\n",
"    ])\n",
"    tools = [convert_pydantic_to_openai_tool(p) for p in pydantic_schemas]\n",
"    model = llm.bind(tools=tools)\n",
"    chain = prompt | model | PydanticToolsParser(tools=pydantic_schemas)\n",
"    return chain\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2eac6b68",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
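A side note on the chain this new notebook builds: a quick way to see exactly what the model is bound to is to convert one of the Pydantic classes yourself. The sketch below is not part of the notebook, and the printed shape is indicative only and may vary across versions.

```python
# Minimal sketch: inspect the OpenAI tool definition generated from `Person`
# (the class defined in the notebook above).
from langchain.utils.openai_functions import convert_pydantic_to_openai_tool

print(convert_pydantic_to_openai_tool(Person))
# Roughly: {"type": "function", "function": {"name": "Person", "description": ...,
#           "parameters": {...JSON schema with name/age...}}}
```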
@@ -30,9 +30,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
"from langchain.agents import initialize_agent\n",
"from langchain.agents import AgentType"
"from langchain.agents import AgentType, initialize_agent, load_tools"
]
},
{
@@ -56,7 +56,8 @@
"source": [
"import os\n",
"\n",
"os.environ[\"SERPER_API_KEY\"] = \"\"os.environ[\"OPENAI_API_KEY\"] = \"\""
"os.environ[\"SERPER_API_KEY\"] = \"\"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
@@ -66,21 +67,16 @@
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"from typing import Any, List\n",
"\n",
"import numpy as np\n",
"\n",
"from langchain.schema import BaseRetriever\n",
"from langchain.callbacks.manager import (\n",
"    AsyncCallbackManagerForRetrieverRun,\n",
"    CallbackManagerForRetrieverRun,\n",
")\n",
"from langchain.utilities import GoogleSerperAPIWrapper\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI\n",
"from langchain.schema import Document\n",
"from typing import Any, List"
"from langchain.schema import BaseRetriever, Document\n",
"from langchain.utilities import GoogleSerperAPIWrapper"
]
},
{
@@ -46,14 +46,13 @@
"source": [
"from datetime import datetime, timedelta\n",
"from typing import List\n",
"from termcolor import colored\n",
"\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers import TimeWeightedVectorStoreRetriever\n",
"from langchain.vectorstores import FAISS"
"from langchain.vectorstores import FAISS\n",
"from termcolor import colored"
]
},
{
@@ -153,6 +152,7 @@
"outputs": [],
"source": [
"import math\n",
"\n",
"import faiss\n",
"\n",
"\n",
@@ -27,18 +27,12 @@
"metadata": {},
"outputs": [],
"source": [
"import gymnasium as gym\n",
"import inspect\n",
"import tenacity\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.output_parsers import RegexParser\n",
"from langchain.schema import (\n",
"    AIMessage,\n",
"    HumanMessage,\n",
"    SystemMessage,\n",
"    BaseMessage,\n",
")\n",
"from langchain.output_parsers import RegexParser"
")"
]
},
{
@@ -131,7 +125,7 @@
"    ):\n",
"        with attempt:\n",
"            action = self._act()\n",
"    except tenacity.RetryError as e:\n",
"    except tenacity.RetryError:\n",
"        action = self.random_action()\n",
"    return action"
]
@@ -55,9 +55,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
"from langchain.agents import initialize_agent\n",
"from langchain.agents import AgentType"
"from langchain.agents import AgentType, initialize_agent, load_tools"
]
},
{
@@ -28,9 +28,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
"from langchain.agents import initialize_agent\n",
"from langchain.agents import AgentType"
"from langchain.agents import AgentType, initialize_agent, load_tools"
]
},
{
@@ -20,9 +20,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.chains import HypotheticalDocumentEmbedder, LLMChain\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.chains import LLMChain, HypotheticalDocumentEmbedder\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate"
]
},
@@ -790,8 +790,8 @@
}
],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"from langchain.globals import set_debug\n",
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"set_debug(True)\n",
"\n",
@@ -43,8 +43,8 @@
}
],
"source": [
"from langchain_experimental.llm_bash.base import LLMBashChain\n",
"from langchain.llms import OpenAI\n",
"from langchain_experimental.llm_bash.base import LLMBashChain\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",
@@ -69,8 +69,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"from langchain.chains.llm_bash.prompt import BashOutputParser\n",
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"_PROMPT_TEMPLATE = \"\"\"If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put \"#!/bin/bash\" in your answer. Make sure to reason step by step, using this format:\n",
"Question: \"copy the files in the directory named 'target' into a new directory at the same level as target called 'myNewDirectory'\"\n",
@@ -185,7 +185,6 @@
"source": [
"from langchain_experimental.llm_bash.bash import BashProcess\n",
"\n",
"\n",
"persistent_process = BashProcess(persistent=True)\n",
"bash_chain = LLMBashChain.from_llm(llm, bash_process=persistent_process, verbose=True)\n",
"\n",
@@ -45,7 +45,8 @@
}
],
"source": [
"from langchain.llms import OpenAI\nfrom langchain.chains import LLMMathChain\n",
"from langchain.chains import LLMMathChain\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_math = LLMMathChain.from_llm(llm, verbose=True)\n",
@@ -56,8 +56,10 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\nfrom langchain.chains import LLMChain\nfrom langchain.prompts import PromptTemplate\n",
"from langchain.memory import ConversationBufferWindowMemory"
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.memory import ConversationBufferWindowMemory\n",
"from langchain.prompts import PromptTemplate"
]
},
{
@@ -152,13 +154,13 @@
"    for j in range(max_iters):\n",
"        print(f\"(Step {j+1}/{max_iters})\")\n",
"        print(f\"Assistant: {output}\")\n",
"        print(f\"Human: \")\n",
"        print(\"Human: \")\n",
"        human_input = input()\n",
"        if any(phrase in human_input.lower() for phrase in key_phrases):\n",
"            break\n",
"        output = chain.predict(human_input=human_input)\n",
"    if success_phrase in human_input.lower():\n",
"        print(f\"You succeeded! Thanks for playing!\")\n",
"        print(\"You succeeded! Thanks for playing!\")\n",
"        return\n",
"    meta_chain = initialize_meta_chain()\n",
"    meta_output = meta_chain.predict(chat_history=get_chat_history(chain.memory))\n",
@@ -166,7 +168,7 @@
"    instructions = get_new_instructions(meta_output)\n",
"    print(f\"New Instructions: {instructions}\")\n",
"    print(\"\\n\" + \"#\" * 80 + \"\\n\")\n",
"    print(f\"You failed! Thanks for playing!\")"
"    print(\"You failed! Thanks for playing!\")"
]
},
{
@@ -7,7 +7,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install \"openai>=1\" \"langchain>=0.0.331rc2\" matplotlib pillow "
"! pip install \"openai>=1\" \"langchain>=0.0.331rc2\" matplotlib pillow"
]
},
{
@@ -40,29 +40,32 @@
}
],
"source": [
"import os\n",
"import io\n",
"import base64\n",
"import io\n",
"import os\n",
"\n",
"import numpy as np\n",
"from IPython.display import HTML, display\n",
"from PIL import Image\n",
"from IPython.display import display, HTML\n",
"\n",
"\n",
"def encode_image(image_path):\n",
"    ''' Getting the base64 string '''\n",
"    \n",
"    \"\"\"Getting the base64 string\"\"\"\n",
"\n",
"    with open(image_path, \"rb\") as image_file:\n",
"        return base64.b64encode(image_file.read()).decode('utf-8')\n",
"        return base64.b64encode(image_file.read()).decode(\"utf-8\")\n",
"\n",
"\n",
"def plt_img_base64(img_base64):\n",
"    ''' Display the base64 image '''\n",
"    \"\"\"Display the base64 image\"\"\"\n",
"\n",
"    # Create an HTML img tag with the base64 string as the source\n",
"    image_html = f'<img src=\"data:image/jpeg;base64,{img_base64}\" />'\n",
"    \n",
"\n",
"    # Display the image by rendering the HTML\n",
"    display(HTML(image_html))\n",
"\n",
"\n",
"# Image for QA\n",
"path = \"/Users/rlm/Desktop/Multimodal_Eval/qa/llm_strategies.jpeg\"\n",
"img_base64 = encode_image(path)\n",
@@ -99,19 +102,19 @@
"metadata": {},
"outputs": [],
"source": [
"chat = ChatOpenAI(model=\"gpt-4-vision-preview\",\n",
"                  max_tokens=1024)\n",
"chat = ChatOpenAI(model=\"gpt-4-vision-preview\", max_tokens=1024)\n",
"\n",
"msg = chat.invoke(\n",
"    [\n",
"        HumanMessage(\n",
"            content=[\n",
"                {\"type\": \"text\", \"text\":\"Based on the image, what is the difference in training strategy between a small and a large base model?\"},\n",
"                {\n",
"                    \"type\": \"text\",\n",
"                    \"text\": \"Based on the image, what is the difference in training strategy between a small and a large base model?\",\n",
"                },\n",
"                {\n",
"                    \"type\": \"image_url\",\n",
"                    \"image_url\": {\n",
"                        \"url\": f\"data:image/jpeg;base64,{img_base64}\"\n",
"                    },\n",
"                    \"image_url\": {\"url\": f\"data:image/jpeg;base64,{img_base64}\"},\n",
"                },\n",
"            ]\n",
"        )\n",
cookbook/multi_modal_RAG_chroma.ipynb (new file, 499 lines)
@@ -29,9 +29,10 @@
"metadata": {},
"outputs": [],
"source": [
"from steamship import Block, Steamship\n",
"import re\n",
"from IPython.display import Image"
"\n",
"from IPython.display import Image\n",
"from steamship import Block, Steamship"
]
},
{
@@ -41,9 +42,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentType, initialize_agent\n",
"from langchain.llms import OpenAI\n",
"from langchain.agents import initialize_agent\n",
"from langchain.agents import AgentType\n",
"from langchain.tools import SteamshipImageGenerationTool"
]
},
@@ -26,13 +26,12 @@
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Dict, Callable\n",
"from typing import Callable, List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema import (\n",
"    AIMessage,\n",
"    HumanMessage,\n",
"    SystemMessage,\n",
"    BaseMessage,\n",
")"
]
},
@@ -27,26 +27,20 @@
"metadata": {},
"outputs": [],
"source": [
"from collections import OrderedDict\n",
"import functools\n",
"import random\n",
"import re\n",
"import tenacity\n",
"from typing import List, Dict, Callable\n",
"from collections import OrderedDict\n",
"from typing import Callable, List\n",
"\n",
"from langchain.prompts import (\n",
"    ChatPromptTemplate,\n",
"    HumanMessagePromptTemplate,\n",
"    PromptTemplate,\n",
")\n",
"from langchain.chains import LLMChain\n",
"import tenacity\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.output_parsers import RegexParser\n",
"from langchain.prompts import (\n",
"    PromptTemplate,\n",
")\n",
"from langchain.schema import (\n",
"    AIMessage,\n",
"    HumanMessage,\n",
"    SystemMessage,\n",
"    BaseMessage,\n",
")"
]
},
@@ -24,17 +24,15 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"import re\n",
"from typing import Callable, List\n",
"\n",
"import tenacity\n",
"from typing import List, Dict, Callable\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.output_parsers import RegexParser\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema import (\n",
"    AIMessage,\n",
"    HumanMessage,\n",
"    SystemMessage,\n",
"    BaseMessage,\n",
")"
]
},
@@ -27,18 +27,15 @@
"metadata": {},
"outputs": [],
"source": [
"from os import environ\n",
"import getpass\n",
"from typing import Dict, Any\n",
"from langchain.llms import OpenAI\n",
"from langchain.utilities import SQLDatabase\n",
"from os import environ\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain_experimental.sql.vector_sql import VectorSQLDatabaseChain\n",
"from sqlalchemy import create_engine, Column, MetaData\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"\n",
"\n",
"from sqlalchemy import create_engine\n",
"from langchain.utilities import SQLDatabase\n",
"from langchain_experimental.sql.vector_sql import VectorSQLDatabaseChain\n",
"from sqlalchemy import MetaData, create_engine\n",
"\n",
"MYSCALE_HOST = \"msc-4a9e710a.us-east-1.aws.staging.myscale.cloud\"\n",
"MYSCALE_PORT = 443\n",
@@ -77,9 +74,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.callbacks import StdOutCallbackHandler\n",
"\n",
"from langchain.llms import OpenAI\n",
"from langchain.utilities.sql_database import SQLDatabase\n",
"from langchain_experimental.sql.prompt import MYSCALE_PROMPT\n",
"from langchain_experimental.sql.vector_sql import VectorSQLDatabaseChain\n",
@@ -120,15 +116,16 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains.qa_with_sources.retrieval import RetrievalQAWithSourcesChain\n",
"\n",
"from langchain_experimental.sql.vector_sql import VectorSQLDatabaseChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_experimental.retrievers.vector_sql_database import (\n",
"    VectorSQLDatabaseChainRetriever,\n",
")\n",
"from langchain_experimental.sql.prompt import MYSCALE_PROMPT\n",
"from langchain_experimental.sql.vector_sql import VectorSQLRetrieveAllOutputParser\n",
"from langchain_experimental.sql.vector_sql import (\n",
"    VectorSQLDatabaseChain,\n",
"    VectorSQLRetrieveAllOutputParser,\n",
")\n",
"\n",
"output_parser_retrieve_all = VectorSQLRetrieveAllOutputParser.from_embeddings(\n",
"    output_parser.model\n",
@@ -50,10 +50,10 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import create_qa_with_sources_chain\n",
"from langchain.chains.combine_documents.stuff import StuffDocumentsChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains import create_qa_with_sources_chain"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import PromptTemplate"
]
},
{
@@ -230,9 +230,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain.chains import ConversationalRetrievalChain, LLMChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.chains import LLMChain\n",
"\n",
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)\n",
"_template = \"\"\"Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\\\n",
@@ -357,12 +356,10 @@
"source": [
"from typing import List\n",
"\n",
"from pydantic import BaseModel, Field\n",
"\n",
"from langchain.chains.openai_functions import create_qa_with_structure_chain\n",
"\n",
"from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate\n",
"from langchain.schema import SystemMessage, HumanMessage"
"from langchain.schema import HumanMessage, SystemMessage\n",
"from pydantic import BaseModel, Field"
]
},
{
@@ -17,7 +17,8 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install -U openai \"langchain>=0.0.331rc2\" langchain-experimental"
"# need openai>=1.1.0, langchain>=0.0.335, langchain-experimental>=0.0.39\n",
"!pip install -U openai langchain langchain-experimental"
]
},
{
@@ -108,7 +109,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.openai_assistant import OpenAIAssistantRunnable"
"from langchain.agents.openai_assistant import OpenAIAssistantRunnable"
]
},
{
@@ -133,7 +134,7 @@
"    name=\"langchain assistant\",\n",
"    instructions=\"You are a personal math tutor. Write and run code to answer math questions.\",\n",
"    tools=[{\"type\": \"code_interpreter\"}],\n",
"    model=\"gpt-4-1106-preview\"\n",
"    model=\"gpt-4-1106-preview\",\n",
")\n",
"output = interpreter_assistant.invoke({\"content\": \"What's 10 - 4 raised to the 2.7\"})\n",
"output"
@@ -166,7 +167,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools import E2BDataAnalysisTool, DuckDuckGoSearchRun\n",
"from langchain.tools import DuckDuckGoSearchRun, E2BDataAnalysisTool\n",
"\n",
"tools = [E2BDataAnalysisTool(api_key=\"...\"), DuckDuckGoSearchRun()]"
]
@@ -183,7 +184,7 @@
"    instructions=\"You are a personal math tutor. Write and run code to answer math questions. You can also search the internet.\",\n",
"    tools=tools,\n",
"    model=\"gpt-4-1106-preview\",\n",
"    as_agent=True\n",
"    as_agent=True,\n",
")"
]
},
@@ -240,7 +241,7 @@
"    instructions=\"You are a personal math tutor. Write and run code to answer math questions.\",\n",
"    tools=tools,\n",
"    model=\"gpt-4-1106-preview\",\n",
"    as_agent=True\n",
"    as_agent=True,\n",
")"
]
},
@@ -253,6 +254,7 @@
"source": [
"from langchain.schema.agent import AgentFinish\n",
"\n",
"\n",
"def execute_agent(agent, tools, input):\n",
"    tool_map = {tool.name: tool for tool in tools}\n",
"    response = agent.invoke(input)\n",
@@ -261,9 +263,17 @@
"    for action in response:\n",
"        tool_output = tool_map[action.tool].invoke(action.tool_input)\n",
"        print(action.tool, action.tool_input, tool_output, end=\"\\n\\n\")\n",
"        tool_outputs.append({\"output\": tool_output, \"tool_call_id\": action.tool_call_id})\n",
"    response = agent.invoke({\"tool_outputs\": tool_outputs, \"run_id\": action.run_id, \"thread_id\": action.thread_id})\n",
"    \n",
"        tool_outputs.append(\n",
"            {\"output\": tool_output, \"tool_call_id\": action.tool_call_id}\n",
"        )\n",
"    response = agent.invoke(\n",
"        {\n",
"            \"tool_outputs\": tool_outputs,\n",
"            \"run_id\": action.run_id,\n",
"            \"thread_id\": action.thread_id,\n",
"        }\n",
"    )\n",
"\n",
"    return response"
]
},
@@ -305,7 +315,9 @@
}
],
"source": [
"next_response = execute_agent(agent, tools, {\"content\": \"now add 17.241\", \"thread_id\": response.thread_id})\n",
"next_response = execute_agent(\n",
"    agent, tools, {\"content\": \"now add 17.241\", \"thread_id\": response.thread_id}\n",
")\n",
"print(next_response.return_values[\"output\"])"
]
},
@@ -389,6 +401,85 @@
")\n",
"print(output.llm_output)"
]
},
{
"cell_type": "markdown",
"id": "aa6565be-985d-4127-848e-c3bca9d7b434",
"metadata": {},
"source": [
"## Breaking changes to Azure classes\n",
"\n",
"OpenAI V1 rewrote their clients and separated the Azure and OpenAI clients. This has led to some changes in LangChain interfaces when using OpenAI V1.\n",
"\n",
"BREAKING CHANGES:\n",
"- To use Azure embeddings with OpenAI V1, you'll need to use the new `AzureOpenAIEmbeddings` instead of the existing `OpenAIEmbeddings`. `OpenAIEmbeddings` continues to work when using Azure with `openai<1`.\n",
"```python\n",
"from langchain.embeddings import AzureOpenAIEmbeddings\n",
"```\n",
"\n",
"\n",
"RECOMMENDED CHANGES:\n",
"- When using `AzureChatOpenAI` or `AzureOpenAI`, if passing in an Azure endpoint (e.g. https://example-resource.azure.openai.com/), this should be specified via the `azure_endpoint` parameter or the `AZURE_OPENAI_ENDPOINT` environment variable. We're maintaining backwards compatibility for now with specifying this via `openai_api_base`/`base_url` or the env var `OPENAI_API_BASE`, but this shouldn't be relied upon.\n",
"- When using Azure chat or embedding models, pass in API keys either via the `openai_api_key` parameter or the `AZURE_OPENAI_API_KEY` environment variable. We're maintaining backwards compatibility for now with specifying this via `OPENAI_API_KEY`, but this shouldn't be relied upon."
]
},
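The sketch below pulls the recommendations above into one place. It is not part of the notebook: the `azure_deployment` and `openai_api_version` parameter names and both deployment names are illustrative assumptions, so check your installed version for the exact signature.

```python
import os

from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings import AzureOpenAIEmbeddings

# Preferred configuration per the notes above: the Azure-specific env vars,
# not OPENAI_API_BASE / OPENAI_API_KEY.
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://example-resource.azure.openai.com/"
os.environ["AZURE_OPENAI_API_KEY"] = "..."

llm = AzureChatOpenAI(
    azure_deployment="my-chat-deployment",  # hypothetical deployment name
    openai_api_version="2023-07-01-preview",  # assumption: use a version your resource supports
)
embeddings = AzureOpenAIEmbeddings(
    azure_deployment="my-embedding-deployment"  # hypothetical deployment name
)
```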
{
"cell_type": "markdown",
"id": "49944887-3972-497e-8da2-6d32d44345a9",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"Use tools for parallel function calling."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "916292d8-0f89-40a6-af1c-5a1122327de8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[GetCurrentWeather(location='New York, NY', unit='fahrenheit'),\n",
" GetCurrentWeather(location='Los Angeles, CA', unit='fahrenheit'),\n",
" GetCurrentWeather(location='San Francisco, CA', unit='fahrenheit')]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Literal\n",
"\n",
"from langchain.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.pydantic_v1 import BaseModel, Field\n",
"from langchain.utils.openai_functions import convert_pydantic_to_openai_tool\n",
"\n",
"\n",
"class GetCurrentWeather(BaseModel):\n",
"    \"\"\"Get the current weather in a location.\"\"\"\n",
"\n",
"    location: str = Field(description=\"The city and state, e.g. San Francisco, CA\")\n",
"    unit: Literal[\"celsius\", \"fahrenheit\"] = Field(\n",
"        default=\"fahrenheit\", description=\"The temperature unit, default to fahrenheit\"\n",
"    )\n",
"\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [(\"system\", \"You are a helpful assistant\"), (\"user\", \"{input}\")]\n",
")\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo-1106\").bind(\n",
"    tools=[convert_pydantic_to_openai_tool(GetCurrentWeather)]\n",
")\n",
"chain = prompt | model | PydanticToolsParser(tools=[GetCurrentWeather])\n",
"\n",
"chain.invoke({\"input\": \"what's the weather in NYC, LA, and SF\"})"
]
}
],
"metadata": {
@@ -45,14 +45,14 @@
"source": [
"import collections\n",
"import inspect\n",
"import tenacity\n",
"\n",
"import tenacity\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.output_parsers import RegexParser\n",
"from langchain.schema import (\n",
"    HumanMessage,\n",
"    SystemMessage,\n",
")\n",
"from langchain.output_parsers import RegexParser"
")"
]
},
{
@@ -146,7 +146,7 @@
"    ):\n",
"        with attempt:\n",
"            action = self._act()\n",
"    except tenacity.RetryError as e:\n",
"    except tenacity.RetryError:\n",
"        action = self.random_action()\n",
"    return action"
]
@@ -17,8 +17,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.pal_chain import PALChain\n",
"from langchain.llms import OpenAI"
"from langchain.llms import OpenAI\n",
"from langchain_experimental.pal_chain import PALChain"
]
},
{
cookbook/qianfan_baidu_elasticesearch_RAG.ipynb (new file, 181 lines)
@@ -0,0 +1,181 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# RAG based on Qianfan and BES"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook implements retrieval-augmented generation (RAG) using the Baidu Qianfan Platform combined with Baidu ElasticSearch, where the original data is stored on BOS.\n",
"## Baidu Qianfan\n",
"Baidu AI Cloud Qianfan Platform is a one-stop large-model development and service operation platform for enterprise developers. Qianfan not only provides the Wenxin Yiyan (ERNIE-Bot) model and third-party open-source models, but also offers a range of AI development tools and a complete development environment, making it easy for customers to use and develop large-model applications.\n",
"\n",
"## Baidu ElasticSearch\n",
"[Baidu Cloud VectorSearch](https://cloud.baidu.com/doc/BES/index.html?from=productToDoc) is a fully managed, enterprise-level distributed search and analysis service that is 100% compatible with open source. Baidu Cloud VectorSearch provides a low-cost, high-performance, and reliable retrieval and analysis platform for structured and unstructured data. As a vector database, it supports multiple index types and similarity distance methods."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation and Setup\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#!pip install qianfan\n",
"#!pip install bce-python-sdk\n",
"#!pip install elasticsearch==7.11.0"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sentence_transformers  # used below to set embeddings.client\n",
"\n",
"from baidubce.auth.bce_credentials import BceCredentials\n",
"from baidubce.bce_client_configuration import BceClientConfiguration\n",
"from langchain.chains import RetrievalQA  # used in the QA Retriever section\n",
"from langchain.document_loaders.baiducloud_bos_directory import BaiduBOSDirectoryLoader\n",
"from langchain.embeddings.huggingface import HuggingFaceEmbeddings\n",
"from langchain.llms.baidu_qianfan_endpoint import QianfanLLMEndpoint\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain.vectorstores import BESVectorStore"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Document loading"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bos_host = \"your bos endpoint\"\n",
"access_key_id = \"your bos access ak\"\n",
"secret_access_key = \"your bos access sk\"\n",
"\n",
"# create BceClientConfiguration\n",
"config = BceClientConfiguration(\n",
"    credentials=BceCredentials(access_key_id, secret_access_key), endpoint=bos_host\n",
")\n",
"\n",
"loader = BaiduBOSDirectoryLoader(conf=config, bucket=\"llm-test\", prefix=\"llm/\")\n",
"documents = loader.load()\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=0)\n",
"split_docs = text_splitter.split_documents(documents)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Embedding and VectorStore"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"embeddings = HuggingFaceEmbeddings(model_name=\"shibing624/text2vec-base-chinese\")\n",
"embeddings.client = sentence_transformers.SentenceTransformer(embeddings.model_name)\n",
"\n",
"db = BESVectorStore.from_documents(\n",
"    documents=split_docs,\n",
"    embedding=embeddings,\n",
"    bes_url=\"your bes url\",\n",
"    index_name=\"test-index\",\n",
"    vector_query_field=\"vector\",\n",
")\n",
"\n",
"db.client.indices.refresh(index=\"test-index\")\n",
"retriever = db.as_retriever()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## QA Retriever"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = QianfanLLMEndpoint(\n",
"    model=\"ERNIE-Bot\",\n",
"    qianfan_ak=\"your qianfan ak\",\n",
"    qianfan_sk=\"your qianfan sk\",\n",
"    streaming=True,\n",
")\n",
"qa = RetrievalQA.from_chain_type(\n",
"    llm=llm, chain_type=\"refine\", retriever=retriever, return_source_documents=True\n",
")\n",
"\n",
"query = \"什么是张量?\"\n",
"print(qa.run(query))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"> A tensor (张量) is a mathematical concept used to represent multi-dimensional data. It is an array that can hold multiple numeric values and may be a scalar, a vector, a matrix, and so on. In deep learning and artificial intelligence, tensors are commonly used to represent the inputs, outputs, and weights of neural networks."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.9.17"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
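When debugging the notebook above, it can help to query the retriever on its own before wiring it into `RetrievalQA`. A minimal sketch, assuming the `retriever` built in the Embedding and VectorStore section:

```python
# Minimal sketch: inspect what the BES retriever returns for the sample query.
docs = retriever.get_relevant_documents("什么是张量?")
for doc in docs:
    print(doc.metadata, doc.page_content[:80])
```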
@@ -30,8 +30,8 @@
"outputs": [],
"source": [
"import pinecone\n",
"from langchain.vectorstores import Pinecone\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.vectorstores import Pinecone\n",
"\n",
"pinecone.init(api_key=\"...\", environment=\"...\")"
]
@@ -87,7 +87,6 @@
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser"
]
},
@@ -28,8 +28,8 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = os.environ.get(\"OPENAI_API_KEY\") or getpass.getpass(\n",
"    \"OpenAI API Key:\"\n",
@@ -42,8 +42,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.sql_database import SQLDatabase\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.sql_database import SQLDatabase\n",
"\n",
"CONNECTION_STRING = \"postgresql+psycopg2://postgres:test@localhost:5432/vectordb\" # Replace with your own\n",
"db = SQLDatabase.from_uri(CONNECTION_STRING)"
@@ -323,6 +323,7 @@
"outputs": [],
"source": [
"import re\n",
"\n",
"from langchain.schema.runnable import RunnableLambda\n",
"\n",
"\n",
@@ -31,12 +31,10 @@
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough, RunnableLambda\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"from langchain.utilities import DuckDuckGoSearchAPIWrapper"
]
},
@@ -42,22 +42,22 @@
"OPENAI_API_KEY = \"sk-xx\"\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\n",
"\n",
"from typing import Dict, List, Any, Union, Callable\n",
"from pydantic import BaseModel, Field\n",
"from langchain.chains import LLMChain\nfrom langchain.prompts import PromptTemplate\n",
"from langchain.llms import BaseLLM\n",
"from langchain.chains.base import Chain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.agents import Tool, LLMSingleActionAgent, AgentExecutor\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.chains import RetrievalQA\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts.base import StringPromptTemplate\n",
"from typing import Any, Callable, Dict, List, Union\n",
"\n",
"from langchain.agents import AgentExecutor, LLMSingleActionAgent, Tool\n",
"from langchain.agents.agent import AgentOutputParser\n",
"from langchain.agents.conversational.prompt import FORMAT_INSTRUCTIONS\n",
"from langchain.schema import AgentAction, AgentFinish"
"from langchain.chains import LLMChain, RetrievalQA\n",
"from langchain.chains.base import Chain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.llms import BaseLLM, OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.prompts.base import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Chroma\n",
"from pydantic import BaseModel, Field"
]
},
{
@@ -17,12 +17,10 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema.prompt import PromptValue\n",
"from langchain.schema.messages import BaseMessage\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from typing import Union, Sequence"
"from langchain.schema.prompt import PromptValue"
]
},
{
@@ -1084,7 +1084,6 @@
"outputs": [],
"source": [
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema import Document\n",
"from langchain.vectorstores import ElasticsearchStore\n",
"\n",
"embeddings = OpenAIEmbeddings()"
@@ -51,8 +51,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_experimental.smart_llm import SmartLLMChain"
]
},
@@ -131,7 +131,6 @@
"source": [
"from langchain.utilities import DuckDuckGoSearchAPIWrapper\n",
"\n",
"\n",
"search = DuckDuckGoSearchAPIWrapper(max_results=4)\n",
"\n",
"\n",
@@ -84,10 +84,11 @@
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"from typing import Tuple\n",
"\n",
"from langchain_experimental.tot.checker import ToTChecker\n",
"from langchain_experimental.tot.thought import ThoughtValidity\n",
"import re\n",
"\n",
"\n",
"class MyChecker(ToTChecker):\n",
@@ -34,8 +34,8 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"import os\n",
"\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import DeepLake\n",
@@ -109,6 +109,7 @@
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain.document_loaders import TextLoader\n",
"\n",
"root_dir = \"./the-algorithm\"\n",
@@ -118,7 +119,7 @@
"        try:\n",
"            loader = TextLoader(os.path.join(dirpath, file), encoding=\"utf-8\")\n",
"            docs.extend(loader.load_and_split())\n",
"        except Exception as e:\n",
"        except Exception:\n",
"            pass"
]
},
@@ -3807,8 +3808,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model_name=\"gpt-3.5-turbo-0613\")  # switch to 'gpt-4'\n",
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
@@ -22,17 +22,14 @@
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Dict, Callable\n",
"from langchain.chains import ConversationChain\n",
"from typing import Callable, List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.prompts.prompt import PromptTemplate\n",
"from langchain.schema import (\n",
"    AIMessage,\n",
"    HumanMessage,\n",
"    SystemMessage,\n",
"    BaseMessage,\n",
")"
]
},
@@ -49,10 +46,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool\n",
"from langchain.agents import initialize_agent\n",
"from langchain.agents import AgentType\n",
"from langchain.agents import load_tools"
"from langchain.agents import AgentType, initialize_agent, load_tools"
]
},
{
@@ -22,7 +22,8 @@
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Dict, Callable\n",
"from typing import Callable, List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema import (\n",
"    HumanMessage,\n",
@@ -192,10 +192,10 @@
"    return current\n",
"\n",
"\n",
"import requests\n",
"\n",
"from typing import Optional\n",
"\n",
"import requests\n",
"\n",
"\n",
"def vocab_lookup(\n",
"    search: str,\n",
@@ -319,9 +319,10 @@
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"from typing import List, Dict, Any\n",
"import json\n",
"from typing import Any, Dict, List\n",
"\n",
"import requests\n",
"\n",
"\n",
"def run_sparql(\n",
@@ -389,17 +390,18 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import (\n",
"    Tool,\n",
"    AgentExecutor,\n",
"    LLMSingleActionAgent,\n",
"    AgentOutputParser,\n",
")\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.llms import OpenAI\nfrom langchain.chains import LLMChain\n",
"import re\n",
"from typing import List, Union\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"import re"
"\n",
"from langchain.agents import (\n",
"    AgentExecutor,\n",
"    AgentOutputParser,\n",
"    LLMSingleActionAgent,\n",
"    Tool,\n",
")\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish"
]
},
{
BIN docs/docs/_static/ApifyActors.png (vendored): deleted binary, 559 KiB
BIN docs/docs/_static/ChaindeskDashboard.png (vendored): deleted binary, 157 KiB
BIN docs/docs/_static/HeliconeDashboard.png (vendored): deleted binary, 235 KiB
BIN docs/docs/_static/HeliconeKeys.png (vendored): deleted binary, 148 KiB
BIN docs/docs/_static/MetalDash.png (vendored): deleted binary, 3.5 MiB
BIN docs/docs/_static/android-chrome-192x192.png (vendored): deleted binary, 18 KiB
BIN docs/docs/_static/android-chrome-512x512.png (vendored): deleted binary, 85 KiB
BIN docs/docs/_static/apple-touch-icon.png (vendored): deleted binary, 16 KiB
docs/docs/_static/css/custom.css (vendored): deleted, 21 lines
@@ -1,21 +0,0 @@
pre {
  white-space: break-spaces;
}

@media (min-width: 1200px) {
  .container,
  .container-lg,
  .container-md,
  .container-sm,
  .container-xl {
    max-width: 2560px !important;
  }
}

#my-component-root *, #headlessui-portal-root * {
  z-index: 10000;
}

.content-container p {
  margin: revert;
}
BIN docs/docs/_static/favicon-16x16.png (vendored): deleted binary, 542 B
BIN docs/docs/_static/favicon-32x32.png (vendored): deleted binary, 1.2 KiB
BIN docs/docs/_static/favicon.ico (vendored): deleted binary, 15 KiB
docs/docs/_static/js/mendablesearch.js (vendored): deleted, 56 lines
@@ -1,56 +0,0 @@
document.addEventListener('DOMContentLoaded', () => {
  // Load the external dependencies
  function loadScript(src, onLoadCallback) {
    const script = document.createElement('script');
    script.src = src;
    script.onload = onLoadCallback;
    document.head.appendChild(script);
  }

  function createRootElement() {
    const rootElement = document.createElement('div');
    rootElement.id = 'my-component-root';
    document.body.appendChild(rootElement);
    return rootElement;
  }

  function initializeMendable() {
    const rootElement = createRootElement();
    const { MendableFloatingButton } = Mendable;

    const iconSpan1 = React.createElement('span', {
    }, '🦜');

    const iconSpan2 = React.createElement('span', {
    }, '🔗');

    const icon = React.createElement('p', {
      style: { color: '#ffffff', fontSize: '22px', width: '48px', height: '48px', margin: '0px', padding: '0px', display: 'flex', alignItems: 'center', justifyContent: 'center', textAlign: 'center' },
    }, [iconSpan1, iconSpan2]);

    const mendableFloatingButton = React.createElement(
      MendableFloatingButton,
      {
        style: { darkMode: false, accentColor: '#010810' },
        floatingButtonStyle: { color: '#ffffff', backgroundColor: '#010810' },
        anon_key: '82842b36-3ea6-49b2-9fb8-52cfc4bde6bf', // Mendable Search Public ANON key, ok to be public
        messageSettings: {
          openSourcesInNewTab: false,
          prettySources: true // Prettify the sources displayed now
        },
        icon: icon,
      }
    );

    ReactDOM.render(mendableFloatingButton, rootElement);
  }

  loadScript('https://unpkg.com/react@17/umd/react.production.min.js', () => {
    loadScript('https://unpkg.com/react-dom@17/umd/react-dom.production.min.js', () => {
      loadScript('https://unpkg.com/@mendable/search@0.0.102/dist/umd/mendable.min.js', initializeMendable);
    });
  });
});
BIN docs/docs/_static/lc_modules.jpg (vendored): deleted binary, 103 KiB
BIN docs/docs/_static/parrot-chainlink-icon.png (vendored): deleted binary, 136 KiB
BIN docs/docs/_static/parrot-icon.png (vendored): deleted binary, 34 KiB
@@ -1,15 +1,18 @@
# Tutorials

Below are links to tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases/qa_structured/sql).
Below are links to tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases).

⛓ icon marks a new addition [last update 2023-09-21]

---------------------

### [LangChain on Wikipedia](https://en.wikipedia.org/wiki/LangChain)

### DeepLearning.AI courses
by [Harrison Chase](https://github.com/hwchase17) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng)
by [Harrison Chase](https://en.wikipedia.org/wiki/LangChain) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng)
- [LangChain for LLM Application Development](https://learn.deeplearning.ai/langchain)
- [LangChain Chat with Your Data](https://learn.deeplearning.ai/langchain-chat-with-your-data)
- ⛓ [Functions, Tools and Agents with LangChain](https://learn.deeplearning.ai/functions-tools-agents-langchain)

### Handbook
[LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**
@@ -17,7 +17,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import XMLAgent, tool, AgentExecutor\n",
"from langchain.agents import AgentExecutor, XMLAgent, tool\n",
"from langchain.chat_models import ChatAnthropic"
]
},
@@ -20,8 +20,6 @@
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import (\n",
"    ChatPromptTemplate,\n",
"    SystemMessagePromptTemplate,\n",
"    HumanMessagePromptTemplate,\n",
")\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain_experimental.utilities import PythonREPL"
@@ -26,7 +26,6 @@
"from langchain.schema.runnable import RunnableLambda, RunnablePassthrough\n",
"from langchain.utils.math import cosine_similarity\n",
"\n",
"\n",
"physics_template = \"\"\"You are a very smart physics professor. \\\n",
"You are great at answering questions about physics in a concise and easy to understand manner. \\\n",
"When you don't know the answer to a question you admit that you don't know.\n",
@@ -18,10 +18,11 @@
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.schema.runnable import RunnablePassthrough, RunnableLambda\n",
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain.schema.runnable import RunnableLambda, RunnablePassthrough\n",
"\n",
"model = ChatOpenAI()\n",
"prompt = ChatPromptTemplate.from_messages(\n",
@@ -69,7 +69,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.runnable import RunnableMap, RunnablePassthrough\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"prompt1 = ChatPromptTemplate.from_template(\n",
"    \"generate a {attribute} color. Return the name of the color and nothing else:\"\n",
@@ -42,8 +42,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"tell me a joke about {foo}\")\n",
"model = ChatOpenAI()\n",
@@ -38,11 +38,11 @@
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough, RunnableLambda\n",
"from langchain.schema.runnable import RunnableLambda, RunnablePassthrough\n",
"from langchain.vectorstores import FAISS"
]
},
@@ -170,8 +170,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.runnable import RunnableMap\n",
"from langchain.schema import format_document"
"from langchain.schema import format_document\n",
"from langchain.schema.runnable import RunnableMap"
]
},
{
@@ -231,7 +231,7 @@
"metadata": {},
"outputs": [],
"source": [
"from typing import Tuple, List\n",
"from typing import List, Tuple\n",
"\n",
"\n",
"def _format_chat_history(chat_history: List[Tuple]) -> str:\n",
@@ -335,6 +335,7 @@
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.memory import ConversationBufferMemory"
]
},
@@ -12,6 +12,19 @@
|
||||
"Suppose we have a simple prompt + model sequence:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "950297ed-2d67-4091-8ea7-1d412d259d04",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"from langchain.schema import StrOutputParser\n",
|
||||
"from langchain.schema.runnable import RunnablePassthrough"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
@@ -37,11 +50,6 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"from langchain.schema import StrOutputParser\n",
|
||||
"from langchain.schema.runnable import RunnablePassthrough\n",
|
||||
"\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [\n",
|
||||
" (\n",
|
||||
@@ -105,31 +113,29 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"execution_count": 3,
|
||||
"id": "f66a0fe4-fde0-4706-8863-d60253f211c7",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"functions = [\n",
|
||||
" {\n",
|
||||
" \"name\": \"solver\",\n",
|
||||
" \"description\": \"Formulates and solves an equation\",\n",
|
||||
" \"parameters\": {\n",
|
||||
" \"type\": \"object\",\n",
|
||||
" \"properties\": {\n",
|
||||
" \"equation\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"description\": \"The algebraic expression of the equation\",\n",
|
||||
" },\n",
|
||||
" \"solution\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"description\": \"The solution to the equation\",\n",
|
||||
" },\n",
|
||||
"function = {\n",
|
||||
" \"name\": \"solver\",\n",
|
||||
" \"description\": \"Formulates and solves an equation\",\n",
|
||||
" \"parameters\": {\n",
|
||||
" \"type\": \"object\",\n",
|
||||
" \"properties\": {\n",
|
||||
" \"equation\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"description\": \"The algebraic expression of the equation\",\n",
|
||||
" },\n",
|
||||
" \"solution\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"description\": \"The solution to the equation\",\n",
|
||||
" },\n",
|
||||
" \"required\": [\"equation\", \"solution\"],\n",
|
||||
" },\n",
|
||||
" }\n",
|
||||
"]"
|
||||
" \"required\": [\"equation\", \"solution\"],\n",
|
||||
" },\n",
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -161,19 +167,70 @@
|
||||
" ]\n",
|
||||
")\n",
|
||||
"model = ChatOpenAI(model=\"gpt-4\", temperature=0).bind(\n",
|
||||
" function_call={\"name\": \"solver\"}, functions=functions\n",
|
||||
" function_call={\"name\": \"solver\"}, functions=[function]\n",
|
||||
")\n",
|
||||
"runnable = {\"equation_statement\": RunnablePassthrough()} | prompt | model\n",
|
||||
"runnable.invoke(\"x raised to the third plus seven equals 12\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f07d7528-9269-4d6f-b12e-3669592a9e03",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Attaching OpenAI tools"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 5,
|
||||
"id": "2cdeeb4c-0c1f-43da-bd58-4f591d9e0671",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
"source": [
|
||||
"tools = [\n",
|
||||
" {\n",
|
||||
" \"type\": \"function\",\n",
|
||||
" \"function\": {\n",
|
||||
" \"name\": \"get_current_weather\",\n",
|
||||
" \"description\": \"Get the current weather in a given location\",\n",
|
||||
" \"parameters\": {\n",
|
||||
" \"type\": \"object\",\n",
|
||||
" \"properties\": {\n",
|
||||
" \"location\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
|
||||
" },\n",
|
||||
" \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]},\n",
|
||||
" },\n",
|
||||
" \"required\": [\"location\"],\n",
|
||||
" },\n",
|
||||
" },\n",
|
||||
" }\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "2b65beab-48bb-46ff-a5a4-ef8ac95a513c",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_zHN0ZHwrxM7nZDdqTp6dkPko', 'function': {'arguments': '{\"location\": \"San Francisco, CA\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}, {'id': 'call_aqdMm9HBSlFW9c9rqxTa7eQv', 'function': {'arguments': '{\"location\": \"New York, NY\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}, {'id': 'call_cx8E567zcLzYV2WSWVgO63f1', 'function': {'arguments': '{\"location\": \"Los Angeles, CA\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}]})"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"model = ChatOpenAI(model=\"gpt-3.5-turbo-1106\").bind(tools=tools)\n",
|
||||
"model.invoke(\"What's the weather in SF, NYC and LA?\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -5,7 +5,7 @@
|
||||
"id": "39eaf61b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Configuration\n",
|
||||
"# Configure chain internals at runtime\n",
|
||||
"\n",
|
||||
"Oftentimes you may want to experiment with, or even expose to the end user, multiple different ways of doing things.\n",
|
||||
"In order to make this experience as easy as possible, we have defined two methods.\n",
|
||||
@@ -262,9 +262,9 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI, ChatAnthropic\n",
|
||||
"from langchain.schema.runnable import ConfigurableField\n",
|
||||
"from langchain.prompts import PromptTemplate"
|
||||
"from langchain.chat_models import ChatAnthropic, ChatOpenAI\n",
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"from langchain.schema.runnable import ConfigurableField"
|
||||
]
|
||||
},
|
||||
{
|
||||
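A minimal sketch of the configuration pattern these imports support — `configurable_fields` marks a model parameter as overridable per call (the field id and prompts here are illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema.runnable import ConfigurableField

# Expose temperature as a runtime-configurable field.
model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="The temperature of the LLM",
    )
)

model.invoke("pick a random number")  # runs with temperature=0
model.with_config(configurable={"llm_temperature": 0.9}).invoke(
    "pick a random number"
)  # overridden for this call only
```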
@@ -594,7 +594,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -31,7 +31,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI, ChatAnthropic"
"from langchain.chat_models import ChatAnthropic, ChatOpenAI"
]
},
{
@@ -50,6 +50,7 @@
"outputs": [],
"source": [
"from unittest.mock import patch\n",
"\n",
"from openai.error import RateLimitError"
]
},
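A rough sketch of the fallback pattern these imports are exercising — the mocked `RateLimitError` forces the OpenAI call to fail so the Anthropic fallback fires (the model choices and prompt are illustrative):

```python
from unittest.mock import patch

from langchain.chat_models import ChatAnthropic, ChatOpenAI
from openai.error import RateLimitError

openai_llm = ChatOpenAI(max_retries=0)  # fail fast so the fallback fires
anthropic_llm = ChatAnthropic()
llm = openai_llm.with_fallbacks([anthropic_llm])

# Simulate an OpenAI outage to exercise the fallback path.
with patch("openai.ChatCompletion.create", side_effect=RateLimitError()):
    print(llm.invoke("Why did the chicken cross the road?"))
```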
@@ -5,7 +5,7 @@
"id": "fbc4bf6e",
"metadata": {},
"source": [
"# Run arbitrary functions\n",
"# Run custom functions\n",
"\n",
"You can use arbitrary functions in the pipeline\n",
"\n",
@@ -19,11 +19,12 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.runnable import RunnableLambda\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.chat_models import ChatOpenAI\n",
"from operator import itemgetter\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.runnable import RunnableLambda\n",
"\n",
"\n",
"def length_function(text):\n",
"    return len(text)\n",
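Completing the picture, a sketch of how `RunnableLambda` wraps plain functions as chain steps (the prompt and input keys are illustrative):

```python
from operator import itemgetter

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableLambda


def length_function(text):
    return len(text)


prompt = ChatPromptTemplate.from_template("what is {a} + {b}")
model = ChatOpenAI()

# Plain functions become chain steps when wrapped in RunnableLambda.
chain = (
    {
        "a": itemgetter("foo") | RunnableLambda(length_function),
        "b": itemgetter("bar") | RunnableLambda(length_function),
    }
    | prompt
    | model
)
chain.invoke({"foo": "bar", "bar": "gah"})
```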
@@ -91,8 +92,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.runnable import RunnableConfig\n",
"from langchain.schema.output_parser import StrOutputParser"
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnableConfig"
]
},
{
@@ -175,7 +176,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Custom generator functions\n",
"# Stream custom generator functions\n",
"\n",
"You can use generator functions (i.e. functions that use the `yield` keyword, and behave like iterators) in an LCEL pipeline.\n",
"\n",
@@ -21,15 +21,7 @@
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"lion, tiger, wolf, gorilla, panda\n"
]
}
],
"outputs": [],
"source": [
"from typing import Iterator, List\n",
"\n",
@@ -37,22 +29,56 @@
"from langchain.prompts.chat import ChatPromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
"    \"Write a comma-separated list of 5 animals similar to: {animal}\"\n",
")\n",
"model = ChatOpenAI(temperature=0.0)\n",
"\n",
"\n",
"str_chain = prompt | model | StrOutputParser()\n",
"\n",
"print(str_chain.invoke({\"animal\": \"bear\"}))"
"str_chain = prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"lion, tiger, wolf, gorilla, panda"
]
}
],
"source": [
"for chunk in str_chain.stream({\"animal\": \"bear\"}):\n",
"    print(chunk, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'lion, tiger, wolf, gorilla, panda'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"str_chain.invoke({\"animal\": \"bear\"})"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# This is a custom parser that splits an iterator of llm tokens\n",
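The body of that parser is elided in the hunk; a sketch of what such a generator parser looks like — buffer incoming tokens and yield each completed comma-separated item as soon as it is available:

```python
from typing import Iterator, List


def split_into_list(input: Iterator[str]) -> Iterator[List[str]]:
    # Hold partial input until there is a comma-delimited item to yield.
    buffer = ""
    for chunk in input:
        buffer += chunk
        while "," in buffer:
            comma_index = buffer.index(",")
            yield [buffer[:comma_index].strip()]
            buffer = buffer[comma_index + 1 :]
    # Yield whatever is left after the final comma.
    yield [buffer.strip()]
```

Because each item is yielded as soon as its trailing comma arrives, the downstream `.stream()` output below emits `['lion']`, `['tiger']`, and so on, one at a time.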
@@ -77,22 +103,61 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"list_chain = str_chain | split_into_list"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['lion', 'tiger', 'wolf', 'gorilla', 'panda']\n"
"['lion']\n",
"['tiger']\n",
"['wolf']\n",
"['gorilla']\n",
"['panda']\n"
]
}
],
"source": [
"list_chain = str_chain | split_into_list\n",
"\n",
"print(list_chain.invoke({\"animal\": \"bear\"}))"
"for chunk in list_chain.stream({\"animal\": \"bear\"}):\n",
"    print(chunk, flush=True)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['lion', 'tiger', 'wolf', 'gorilla', 'panda']"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"list_chain.invoke({\"animal\": \"bear\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -111,9 +176,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

@@ -5,7 +5,7 @@
"id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2",
"metadata": {},
"source": [
"# Use RunnableParallel/RunnableMap\n",
"# Parallelize steps\n",
"\n",
"RunnableParallel (aka. RunnableMap) makes it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map."
]
@@ -33,7 +33,6 @@
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.runnable import RunnableParallel\n",
"\n",
"\n",
"model = ChatOpenAI()\n",
"joke_chain = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n",
"poem_chain = (\n",
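For reference, a sketch of the full cell this hunk touches — two chains over the same input, executed in parallel and returned as a map (the poem prompt wording is illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableParallel

model = ChatOpenAI()
joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = (
    ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model
)

# Both chains run concurrently on the same input; the output is a dict.
map_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)
map_chain.invoke({"topic": "bear"})
```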
@@ -195,7 +194,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -5,7 +5,7 @@
"id": "4b47436a",
"metadata": {},
"source": [
"# Route between multiple Runnables\n",
"# Dynamically route logic based on input\n",
"\n",
"This notebook covers how to do routing in the LangChain Expression Language.\n",
"\n",
@@ -40,8 +40,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.chat_models import ChatAnthropic\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema.output_parser import StrOutputParser"
]
},
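A condensed sketch of the routing pattern this notebook builds with those imports — classify the input first, then pick a downstream chain with a plain function (the prompts, topics, and chain names here are illustrative):

```python
from langchain.chat_models import ChatAnthropic
from langchain.prompts import PromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableLambda

# Step 1: classify the question (one-word answer).
classifier = (
    PromptTemplate.from_template(
        "Classify the question as `LangChain` or `Other`. "
        "Respond with one word.\n\nQuestion: {question}"
    )
    | ChatAnthropic()
    | StrOutputParser()
)

langchain_chain = (
    PromptTemplate.from_template(
        "You are an expert in LangChain. Answer:\n{question}"
    )
    | ChatAnthropic()
)
general_chain = PromptTemplate.from_template("Answer:\n{question}") | ChatAnthropic()


def route(info):
    # info carries the classifier output plus the original question.
    if "langchain" in info["topic"].lower():
        return langchain_chain
    return general_chain


full_chain = {
    "topic": classifier,
    "question": lambda x: x["question"],
} | RunnableLambda(route)

full_chain.invoke({"question": "how do I use LangChain?"})
```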
@@ -4,33 +4,30 @@ sidebar_class_name: hidden

# LangChain Expression Language (LCEL)

LangChain Expression Language or LCEL is a declarative way to easily compose chains together.
There are several benefits to writing chains in this manner (as opposed to writing normal code):
LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:

**Async, Batch, and Streaming Support**
Any chain constructed this way will automatically have full sync, async, batch, and streaming support.
This makes it easy to prototype a chain in a Jupyter notebook using the sync interface, and then expose it as an async streaming interface.
**Streaming support**
When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means e.g. we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens.

**Fallbacks**
The non-determinism of LLMs makes it important to be able to handle errors gracefully.
With LCEL you can easily attach fallbacks to any chain.
**Async support**
Any chain built with LCEL can be called both with the synchronous API (e.g. in your Jupyter notebook while prototyping) as well as with the asynchronous API (e.g. in a [LangServe](/docs/langserve) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.

**Parallelism**
Since LLM applications involve (sometimes long) API calls, it often becomes important to run things in parallel.
With LCEL syntax, any components that can be run in parallel automatically are.
**Optimized parallel execution**
Whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.

**Seamless LangSmith Tracing Integration**
**Retries and fallbacks**
Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We’re currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.

**Access intermediate results**
For more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it’s available on every [LangServe](/docs/langserve) server.

**Input and output schemas**
Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.

**Seamless LangSmith tracing integration**
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
With LCEL, **all** steps are automatically logged to [LangSmith](https://smith.langchain.com) for maximal observability and debuggability.
With LCEL, **all** steps are automatically logged to [LangSmith](/docs/langsmith/) for maximum observability and debuggability.

#### [Interface](/docs/expression_language/interface)
The base interface shared by all LCEL objects

#### [How to](/docs/expression_language/how_to)
How to use core features of LCEL

#### [Cookbook](/docs/expression_language/cookbook)
Examples of common LCEL usage patterns

#### [Why use LCEL](/docs/expression_language/why)
A deeper dive into the benefits of LCEL
**Seamless LangServe deployment integration**
Any chain created with LCEL can be easily deployed using LangServe.
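A minimal chain illustrating the claims above — the same LCEL object supports invoke, stream, batch, and their async counterparts without code changes (the joke prompt is illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

# One declaration, every interface: sync, streaming, and batch calls
# (plus async twins ainvoke/astream/abatch) come for free.
chain = (
    ChatPromptTemplate.from_template("tell me a joke about {topic}")
    | ChatOpenAI()
    | StrOutputParser()
)

chain.invoke({"topic": "bears"})
for chunk in chain.stream({"topic": "bears"}):
    print(chunk, end="", flush=True)
chain.batch([{"topic": "bears"}, {"topic": "cats"}])
```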
@@ -8,7 +8,7 @@
"---\n",
"sidebar_position: 0\n",
"title: Interface\n",
"---\n"
"---"
]
},
{
@@ -31,26 +31,17 @@
"- [`abatch`](#async-batch): call the chain on a list of inputs async\n",
"- [`astream_log`](#async-stream-intermediate-steps): stream back intermediate steps as they happen, in addition to the final response\n",
"\n",
"The **input type** varies by component:\n",
"The **input type** and **output type** vary by component:\n",
"\n",
"| Component | Input Type |\n",
"| --- | --- |\n",
"|Prompt|Dictionary|\n",
"|Retriever|Single string|\n",
"|LLM, ChatModel| Single string, list of chat messages or a PromptValue|\n",
"|Tool|Single string, or dictionary, depending on the tool|\n",
"|OutputParser|The output of an LLM or ChatModel|\n",
"| Component | Input Type | Output Type |\n",
"| --- | --- | --- |\n",
"| Prompt | Dictionary | PromptValue |\n",
"| ChatModel | Single string, list of chat messages or a PromptValue | ChatMessage |\n",
"| LLM | Single string, list of chat messages or a PromptValue | String |\n",
"| OutputParser | The output of an LLM or ChatModel | Depends on the parser |\n",
"| Retriever | Single string | List of Documents |\n",
"| Tool | Single string or dictionary, depending on the tool | Depends on the tool |\n",
"\n",
"The **output type** also varies by component:\n",
"\n",
"| Component | Output Type |\n",
"| --- | --- |\n",
"| LLM | String |\n",
"| ChatModel | ChatMessage |\n",
"| Prompt | PromptValue |\n",
"| Retriever | List of documents |\n",
"| Tool | Depends on the tool |\n",
"| OutputParser | Depends on the parser |\n",
"\n",
"All runnables expose input and output **schemas** to inspect the inputs and outputs:\n",
"- [`input_schema`](#input-schema): an input Pydantic model auto-generated from the structure of the Runnable\n",
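A short sketch of what those schema accessors expose (the printed shapes are indicative, not exact):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | ChatOpenAI()

# Pydantic models inferred from the chain's structure (pydantic v1 API):
chain.input_schema.schema()   # e.g. {'title': 'PromptInput', 'type': 'object', ...}
chain.output_schema.schema()  # JSONSchema describing the ChatMessage output
```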
@@ -66,8 +57,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"\n",
"model = ChatOpenAI()\n",
"prompt = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\")\n",
@@ -1161,7 +1152,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -1,11 +0,0 @@
# Why use LCEL?

The LangChain Expression Language was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully running in production LCEL chains with 100s of steps). To highlight a few of the reasons you might want to use LCEL:

- first-class support for streaming: when you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means e.g. we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens. We’re constantly improving streaming support, recently we added a [streaming JSON parser](https://twitter.com/LangChainAI/status/1709690468030914584), and more is in the works.
- first-class async support: any chain built with LCEL can be called both with the synchronous API (e.g. in your Jupyter notebook while prototyping) as well as with the asynchronous API (e.g. in a [LangServe](https://github.com/langchain-ai/langserve) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
- optimised parallel execution: whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.
- support for retries and fallbacks: more recently we’ve added support for configuring retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We’re currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
- accessing intermediate results: for more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. We’ve added support for [streaming intermediate results](https://x.com/LangChainAI/status/1711806009097044193?s=20), and it’s available on every LangServe server.
- [input and output schemas](https://x.com/LangChainAI/status/1711805322195861934?s=20): input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
- tracing with LangSmith: all chains built with LCEL have first-class tracing support, which can be used to debug your chains, or to understand what’s happening in production. To enable this all you have to do is add your [LangSmith](https://www.langchain.com/langsmith) API key as an environment variable.

@@ -28,3 +28,37 @@ If you want to install from source, you can do so by cloning the repo and be sure

```bash
pip install -e .
```

## LangChain experimental
The `langchain-experimental` package holds experimental LangChain code, intended for research and experimental uses.
Install with:

```bash
pip install langchain-experimental
```

## LangChain CLI
The LangChain CLI is useful for working with LangChain templates and other LangServe projects.
Install with:

```bash
pip install langchain-cli
```

## LangServe
LangServe helps developers deploy LangChain runnables and chains as a REST API.
LangServe is automatically installed by LangChain CLI.
If not using LangChain CLI, install with:

```bash
pip install "langserve[all]"
```
for both client and server dependencies. Or `pip install "langserve[client]"` for client code, and `pip install "langserve[server]"` for server code.

## LangSmith SDK
The LangSmith SDK is automatically installed by LangChain.
If not using LangChain, install with:

```bash
pip install langsmith
```

@@ -9,23 +9,19 @@ sidebar_position: 0
- **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)

This framework consists of several parts.
You can see how the parts interact with each other below:
- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- **[LangChain Templates](/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](/docs/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.

![LangChain Diagram](/img/langchain_stack.png)

These parts include:

- **[LangChain Packages]**: The Python and JavaScript packages. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- **[LangChain Templates](https://github.com/langchain-ai/langchain/tree/master/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](https://smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.

Together, these products simplify the entire application lifecycle:
- **Develop**: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference.
- **Productionize**: Use LangSmith to inspect, test and monitor your chains, so that you can constantly improve and deploy with confidence.
- **Deploy**: Turn any chain into an API with LangServe.

## LangChain Packages
## LangChain Libraries

The main value props of the LangChain packages are:
1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
@@ -33,31 +29,46 @@ The main value props of the LangChain packages are:

Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.

## Get started

[Here’s](/docs/get_started/installation) how to install LangChain, set up your environment, and start building.

We recommend following our [Quickstart](/docs/get_started/quickstart) guide to familiarize yourself with the framework by building your first LangChain application.

Read up on our [Security](/docs/security) best practices to make sure you're developing safely with LangChain.

:::note

These docs focus on the Python LangChain library. [Head here](https://js.langchain.com) for docs on the JavaScript LangChain library.

:::

## LangChain Expression Language (LCEL)

LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.

- **[Overview](/docs/expression_language/)**: LCEL and its benefits
- **[Interface](/docs/expression_language/interface)**: The standard interface for LCEL objects
- **[How-to](/docs/expression_language/how_to)**: Key features of LCEL
- **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks

## Modules

LangChain provides standard, extendable interfaces and integrations for the following modules, listed from least to most complex:
LangChain provides standard, extendable interfaces and integrations for the following modules:

#### [Model I/O](/docs/modules/model_io/)
Interface with language models

#### [Retrieval](/docs/modules/data_connection/)
Interface with application-specific data
#### [Chains](/docs/modules/chains/)
Construct sequences of calls

#### [Agents](/docs/modules/agents/)
Let chains choose which tools to use given high-level directives
#### [Memory](/docs/modules/memory/)
Persist application state between runs of a chain
#### [Callbacks](/docs/modules/callbacks/)
Log and stream intermediate steps of any chain
Let models choose which tools to use given high-level directives

## Examples, ecosystem, and resources

### [Use cases](/docs/use_cases/question_answering/)
Walkthroughs and techniques for common end-to-end use cases, like:
- [Document question answering](/docs/use_cases/question_answering/)
@@ -65,15 +76,18 @@ Walkthroughs and techniques for common end-to-end use cases, like:
- [Analyzing structured data](/docs/use_cases/qa_structured/sql/)
- and much more...

### [Integrations](/docs/integrations/providers/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/).

### [Guides](/docs/guides/adapters/openai)
Best practices for developing with LangChain.

### [Ecosystem](/docs/integrations/providers/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/) and [dependent repos](/docs/additional_resources/dependents).
### [API reference](https://api.python.langchain.com)
Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental Python packages.

### [Developer's guide](/docs/contributing)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.

### [Community](/docs/community)
Head to the [Community navigator](/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.

## API reference

Head to the [reference](https://api.python.langchain.com) section for full documentation of all classes and methods in the LangChain Python package.

@@ -1,6 +1,17 @@
# Quickstart

## Installation
In this quickstart we'll show you how to:
- Get set up with LangChain, LangSmith and LangServe
- Use the most basic and common components of LangChain: prompt templates, models, and output parsers
- Use LangChain Expression Language, the protocol that LangChain is built on and which facilitates component chaining
- Build a simple application with LangChain
- Trace your application with LangSmith
- Serve your application with LangServe

That's a fair amount to cover! Let's dive in.

## Setup
### Installation

To install LangChain run:

@@ -20,7 +31,7 @@ import CodeBlock from "@theme/CodeBlock";

For more details, see our [Installation guide](/docs/get_started/installation).

## Environment setup
### Environment

Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.

@@ -39,12 +50,12 @@ export OPENAI_API_KEY="..."
If you'd prefer not to set an environment variable you can pass the key in directly via the `openai_api_key` named parameter when initiating the OpenAI LLM class:

```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

llm = OpenAI(openai_api_key="...")
llm = ChatOpenAI(openai_api_key="...")
```

## LangSmith Setup
### LangSmith

Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.
As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.
@@ -58,50 +69,60 @@ export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY=...
```

## Building an application
### LangServe

Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications.
Modules can be used as standalones in simple applications and they can be combined for more complex use cases.
LangServe helps developers deploy LangChain chains as a REST API. You do not need to use LangServe to use LangChain, but in this guide we'll show how you can deploy your app with LangServe.

The most common and most important chain that LangChain helps create contains three things:
- LLM: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them.
- Prompt Templates: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial.
- Output Parsers: These translate the raw response from the LLM to a more workable format, making it easy to use the output downstream.
Install with:
```bash
pip install "langserve[all]"
```

In this getting started guide we will cover those three components by themselves, and then go over how to combine all of them.
## Building with LangChain

LangChain provides many modules that can be used to build language model applications.
Modules can be used as standalones in simple applications and they can be composed for more complex use cases.
Composition is powered by **LangChain Expression Language** (LCEL), which defines a unified `Runnable` interface that many modules implement, making it possible to seamlessly chain components.

The simplest and most common chain contains three things:
- LLM/Chat Model: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them.
- Prompt Template: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial.
- Output Parser: These translate the raw response from the language model to a more workable format, making it easy to use the output downstream.

In this guide we'll cover those three components individually, and then go over how to combine them.
Understanding these concepts will set you up well for being able to use and customize LangChain applications.
Most LangChain applications allow you to configure the LLM and/or the prompt used, so knowing how to take advantage of this will be a big enabler.
Most LangChain applications allow you to configure the model and/or the prompt, so knowing how to take advantage of this will be a big enabler.

## LLMs
### LLM / Chat Model

There are two types of language models, which in LangChain are called:
There are two types of language models:

- LLMs: this is a language model which takes a string as input and returns a string
- ChatModels: this is a language model which takes a list of messages as input and returns a message
- `LLM`: underlying model takes a string as input and returns a string
- `ChatModel`: underlying model takes a list of messages as input and returns a message

The input/output for LLMs is simple and easy to understand - a string.
But what about ChatModels? The input there is a list of `ChatMessages`, and the output is a single `ChatMessage`.
A `ChatMessage` has two required components:
Strings are simple, but what exactly are messages? The base message interface is defined by `BaseMessage`, which has two required attributes:

- `content`: This is the content of the message.
- `role`: This is the role of the entity from which the `ChatMessage` is coming from.
- `content`: The content of the message. Usually a string.
- `role`: The entity from which the `BaseMessage` is coming.

LangChain provides several objects to easily distinguish between different roles:

- `HumanMessage`: A `ChatMessage` coming from a human/user.
- `AIMessage`: A `ChatMessage` coming from an AI/assistant.
- `SystemMessage`: A `ChatMessage` coming from the system.
- `FunctionMessage`: A `ChatMessage` coming from a function call.
- `HumanMessage`: A `BaseMessage` coming from a human/user.
- `AIMessage`: A `BaseMessage` coming from an AI/assistant.
- `SystemMessage`: A `BaseMessage` coming from the system.
- `FunctionMessage` / `ToolMessage`: A `BaseMessage` containing the output of a function or tool call.

If none of those roles sound right, there is also a `ChatMessage` class where you can specify the role manually.
For more information on how to use these different messages most effectively, see our prompting guide.
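A quick illustration of constructing messages with these classes (the content strings are illustrative):

```python
from langchain.schema import AIMessage, HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="hi!"),
]
# A ChatModel invoked on `messages` would return an AIMessage such as:
AIMessage(content="Hello! How can I help you today?")
```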
Langchain provides a common interface that's shared by both LLMs and ChatModels.
However it's useful to understand this difference in order to construct prompts for a given language model.
LangChain provides a common interface that's shared by both `LLM`s and `ChatModel`s.
However it's useful to understand the difference in order to most effectively construct prompts for a given language model.

The standard interface that LangChain provides has two methods:
- `predict`: Takes in a string, returns a string
- `predict_messages`: Takes in a list of messages, returns a message.
The simplest way to call an `LLM` or `ChatModel` is using `.invoke()`, the universal synchronous call method for all LangChain Expression Language (LCEL) objects:
- `LLM.invoke`: Takes in a string, returns a string.
- `ChatModel.invoke`: Takes in a list of `BaseMessage`, returns a `BaseMessage`.

The input types for these methods are actually more general than this, but for simplicity here we can assume LLMs only take strings and Chat models only take lists of messages.
Check out the "Go deeper" section below to learn more about model invocation.

Let's see how to work with these different types of models and these different types of inputs.
First, let's import an LLM and a ChatModel.
@@ -112,50 +133,36 @@ from langchain.chat_models import ChatOpenAI

llm = OpenAI()
chat_model = ChatOpenAI()

llm.predict("hi!")
>>> "Hi"

chat_model.predict("hi!")
>>> "Hi"
```

The `OpenAI` and `ChatOpenAI` objects are basically just configuration objects.
`LLM` and `ChatModel` objects are effectively configuration objects.
You can initialize them with parameters like `temperature` and others, and pass them around.

Next, let's use the `predict` method to run over a string input.

```python
text = "What would be a good company name for a company that makes colorful socks?"

llm.predict(text)
# >> Feetful of Fun

chat_model.predict(text)
# >> Socks O'Color
```

Finally, let's use the `predict_messages` method to run over a list of messages.

```python
from langchain.schema import HumanMessage

text = "What would be a good company name for a company that makes colorful socks?"
messages = [HumanMessage(content=text)]

llm.predict_messages(messages)
llm.invoke(text)
# >> Feetful of Fun

chat_model.predict_messages(messages)
# >> Socks O'Color
chat_model.invoke(messages)
# >> AIMessage(content="Socks O'Color")
```

For both these methods, you can also pass in parameters as keyword arguments.
For example, you could pass in `temperature=0` to adjust the temperature that is used from what the object was configured with.
Whatever values are passed in during run time will always override what the object was configured with.
<details> <summary>Go deeper</summary>

`LLM.invoke` and `ChatModel.invoke` actually both support as input any of `Union[str, List[BaseMessage], PromptValue]`.
`PromptValue` is an object that defines its own custom logic for returning its inputs either as a string or as messages.
`LLM`s have logic for coercing any of these into a string, and `ChatModel`s have logic for coercing any of these to messages.
The fact that `LLM` and `ChatModel` accept the same inputs means that you can directly swap them for one another in most chains without breaking anything,
though it's of course important to think about how inputs are being coerced and how that may affect model performance.
To dive deeper on models head to the [Language models](/docs/modules/model_io/models) section.

## Prompt templates
</details>

### Prompt templates

Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.
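A minimal `PromptTemplate` illustration of this idea (the product example is illustrative):

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
prompt.format(product="colorful socks")
# -> 'What is a good name for a company that makes colorful socks?'
```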
@@ -181,10 +188,10 @@ You can "partial" out variables - e.g. you can format only some of the variables
You can compose them together, easily combining different templates into a single prompt.
For explanations of these functionalities, see the [section on prompts](/docs/modules/model_io/prompts) for more detail.

PromptTemplates can also be used to produce a list of messages.
`PromptTemplate`s can also be used to produce a list of messages.
In this case, the prompt not only contains information about the content, but also each message (its role, its position in the list, etc.).
Here, what happens most often is a ChatPromptTemplate is a list of ChatMessageTemplates.
Each ChatMessageTemplate contains instructions for how to format that ChatMessage - its role, and then also its content.
Here, what happens most often is a `ChatPromptTemplate` is a list of `ChatMessageTemplates`.
Each `ChatMessageTemplate` contains instructions for how to format that `ChatMessage` - its role, and then also its content.
Let's take a look at this below:

```python
@@ -211,13 +218,13 @@ chat_prompt.format_messages(input_language="English", output_language="French",

ChatPromptTemplates can also be constructed in other ways - see the [section on prompts](/docs/modules/model_io/prompts) for more detail.

## Output parsers
### Output parsers

OutputParsers convert the raw output of an LLM into a format that can be used downstream.
There are a few main types of OutputParsers, including:
`OutputParsers` convert the raw output of a language model into a format that can be used downstream.
There are a few main types of `OutputParser`s, including:

- Convert text from LLM into structured information (e.g. JSON)
- Convert a ChatMessage into just a string
- Convert text from `LLM` into structured information (e.g. JSON)
- Convert a `ChatMessage` into just a string
- Convert the extra information returned from a call besides the message (like OpenAI function invocation) into a string.

For full information on this, see the [section on output parsers](/docs/modules/model_io/output_parsers).
@@ -239,7 +246,7 @@ CommaSeparatedListOutputParser().parse("hi, bye")
# >> ['hi', 'bye']
```

## PromptTemplate + LLM + OutputParser
### Composing with LCEL

We can now combine all these into one chain.
This chain will take input variables, pass those to a prompt template to create a prompt, pass the prompt to a language model, and then pass the output through an (optional) output parser.
@@ -247,15 +254,17 @@ This is a convenient way to bundle up a modular piece of logic.
Let's see it in action!

```python
from typing import List

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate
from langchain.prompts import ChatPromptTemplate
from langchain.schema import BaseOutputParser

class CommaSeparatedListOutputParser(BaseOutputParser):
class CommaSeparatedListOutputParser(BaseOutputParser[List[str]]):
    """Parse the output of an LLM call to a comma-separated list."""


    def parse(self, text: str):
    def parse(self, text: str) -> List[str]:
        """Parse the output of an LLM call."""
        return text.strip().split(", ")

@@ -273,20 +282,118 @@ chain.invoke({"text": "colors"})
# >> ['red', 'blue', 'green', 'yellow', 'orange']
```

Note that we are using the `|` syntax to join these components together.
This `|` syntax is called the LangChain Expression Language.
To learn more about this syntax, read the documentation [here](/docs/expression_language).
This `|` syntax is powered by the LangChain Expression Language (LCEL) and relies on the universal `Runnable` interface that all of these objects implement.
To learn more about LCEL, read the documentation [here](/docs/expression_language).

## Tracing with LangSmith

Assuming we've set our environment variables as shown in the beginning, all of the model and chain calls we've been making will have been automatically logged to LangSmith.
Once there, we can use LangSmith to debug and annotate our application traces, then turn them into datasets for evaluating future iterations of the application.

Check out what the trace for the above chain would look like:
https://smith.langchain.com/public/09370280-4330-4eb4-a7e8-c91817f6aa13/r

For more on LangSmith [head here](/docs/langsmith/).

## Serving with LangServe

Now that we've built an application, we need to serve it. That's where LangServe comes in.
LangServe helps developers deploy LCEL chains as a REST API.
The library is integrated with FastAPI and uses pydantic for data validation.

### Server

To create a server for our application we'll make a `serve.py` file with three things:
1. The definition of our chain (same as above)
2. Our FastAPI app
3. A definition of a route from which to serve the chain, which is done with `langserve.add_routes`

```python
#!/usr/bin/env python
from typing import List

from fastapi import FastAPI
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.schema import BaseOutputParser
from langserve import add_routes

# 1. Chain definition

class CommaSeparatedListOutputParser(BaseOutputParser[List[str]]):
    """Parse the output of an LLM call to a comma-separated list."""


    def parse(self, text: str) -> List[str]:
        """Parse the output of an LLM call."""
        return text.strip().split(", ")

template = """You are a helpful assistant who generates comma separated lists.
A user will pass in a category, and you should generate 5 objects in that category in a comma separated list.
ONLY return a comma separated list, and nothing more."""
human_template = "{text}"

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", template),
    ("human", human_template),
])
category_chain = chat_prompt | ChatOpenAI() | CommaSeparatedListOutputParser()

# 2. App definition
app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="A simple api server using Langchain's Runnable interfaces",
)

# 3. Adding chain route
add_routes(
    app,
    category_chain,
    path="/category_chain",
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
```

And that's it! If we execute this file:
```bash
python serve.py
```
we should see our chain being served at localhost:8000.

### Playground

Every LangServe service comes with a simple built-in UI for configuring and invoking the application with streaming output and visibility into intermediate steps.
Head to http://localhost:8000/category_chain/playground/ to try it out!

### Client

Now let's set up a client for programmatically interacting with our service. We can easily do this with the `langserve.RemoteRunnable`.
Using this, we can interact with the served chain as if it were running client-side.

```python
from langserve import RemoteRunnable

remote_chain = RemoteRunnable("http://localhost:8000/category_chain/")
remote_chain.invoke({"text": "colors"})
# >> ['red', 'blue', 'green', 'yellow', 'orange']
```

To learn more about the many other features of LangServe [head here](/docs/langserve).

## Next steps

This is it!
We've now gone over how to create the core building block of LangChain applications.
There is a lot more nuance in all these components (LLMs, prompts, output parsers) and a lot more different components to learn about as well.
We've touched on how to build an application with LangChain, how to trace it with LangSmith, and how to serve it with LangServe.
There are a lot more features in all three of these than we can cover here.
To continue on your journey:

- [Dive deeper](/docs/modules/model_io) into LLMs, prompts, and output parsers
- Learn the other [key components](/docs/modules)
- Read up on [LangChain Expression Language](/docs/expression_language) to learn how to chain these components together
- Check out our [helpful guides](/docs/guides) for detailed walkthroughs on particular topics
- Explore [end-to-end use cases](/docs/use_cases/qa_structured/sql)
- Read up on [LangChain Expression Language (LCEL)](/docs/expression_language) to learn how to chain these components together
- [Dive deeper](/docs/modules/model_io) into LLMs, prompts, and output parsers and learn the other [key components](/docs/modules)
- Explore common [end-to-end use cases](/docs/use_cases/qa_structured/sql) and [template applications](/docs/templates)
- [Read up on LangSmith](/docs/langsmith/), the platform for debugging, testing, monitoring and more
- Learn more about serving your applications with [LangServe](/docs/langserve)